After reading Shaun's excellent post about Top Down TDD, I'd like to share my views on top down and bottom up development.

Now, I've never enjoyed the database-first design processes so commonly followed in waterfall projects. In my experience, projects taking a broadly bottom up approach have generally suffered from overly complex data models designed to satisfy edge cases that may or may not exist in the real world. These over-engineered schemas then make the consuming components needlessly complex too, often right up to the UI (user interface) tier.

When I discovered agile software development, I particularly latched onto top down development - I love the narrowly scoped nature of implementing Just Enough business logic, data access code, and data schema definitions to support one feature at a time. It compensates for my inability to think at every abstraction layer of a system at the same time...

When I test drive "top down" on a web application, I typically start by writing a failing acceptance test that proves the system does not yet satisfy the desired feature. In order to satisfy this feature, I then test drive the controllers to ensure they work appropriately and co-operate with their dependencies accordingly. I typically use mocks and stubs to isolate these tests so I don't have to unduly consider business or persistence concerns at this point, and can focus on controller-level concerns - HTTP, routing, possibly session or cookies, and user flow. Only when the controller is complete do I shift context to the next level down, and clear out much of my mental state about the controller. I find this a relaxing and more productive way to develop due to the controlled context shifting between these concerns.
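To make that concrete, here's a minimal sketch of the kind of isolated controller test I have in mind, assuming a Rails controller tested with RSpec - the OrdersController, OrderPlacer and route helper names are purely illustrative:

```ruby
# A sketch only: OrdersController, OrderPlacer and the routes are hypothetical.
require "rails_helper"

RSpec.describe OrdersController, type: :controller do
  describe "POST #create" do
    it "delegates order placement and redirects to the confirmation page" do
      # Stub the collaborator so no business or persistence code runs.
      order_placer = instance_double("OrderPlacer")
      allow(OrderPlacer).to receive(:new).and_return(order_placer)
      expect(order_placer).to receive(:place_order)

      post :create, params: { order: { sku: "ABC-123" } }

      # Only controller-level concerns are asserted: the redirect after success.
      expect(response).to redirect_to(order_confirmation_path)
    end
  end
end
```

Because the collaborator is stubbed, the test says nothing about how orders are actually placed or persisted - that context comes later, one level down.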

Up until a couple of years ago, I would nearly always test drive top down, only choosing other approaches for defect resolution, or due to particular constraints. However, it was probably reading Matt Wynne and Aslak Hellesoy's The Cucumber Book that prompted me to reconsider bottom up design - though not in the traditional, database-centric way...

Rather than implementing a significant data or domain model, and then validating it against a set of user stories / use cases / acceptance criteria, they switched it around - just a little.

They would iterate, story by story, specifying acceptance criteria as Cucumber / Gherkin scenarios, and wiring them up through step definitions to the system under test. However, somewhat surprisingly, they would not couple these step definitions to the UI using Capybara, page objects or other UI-centric patterns, as is the norm. Rather, they would couple the step definitions directly to the domain model, thus enabling the domain model to be developed in isolation - in their approach the model would depend on repositories and DAOs through loosely coupled interfaces.
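A step definition written that way might look something like the following sketch - Basket, Product, CheckoutService and the in-memory repository are hypothetical names for illustration, not anything lifted from the book:

```ruby
# features/step_definitions/checkout_steps.rb - all class names are illustrative.

Given(/^my basket contains a "(.+)" costing (\d+)$/) do |name, price|
  @basket = Basket.new
  @basket.add(Product.new(name: name, price: price.to_i))
end

When(/^I check out$/) do
  # The step talks straight to the domain model - no browser, no HTTP.
  # Persistence is satisfied by an in-memory repository behind a small interface.
  @order = CheckoutService.new(InMemoryOrderRepository.new).place_order(@basket)
end

Then(/^an order is created totalling (\d+)$/) do |total|
  expect(@order.total).to eq(total.to_i)
end
```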

Why did they take this approach, shunning the UI and testing against the domain model instead? Well, as you'll learn later, they do eventually refactor the step definitions to couple them to the UI. However, this is deliberately deferred until the domain model is developed sufficiently for the current user story. This early focus on testing the domain model has several intended benefits.

Firstly, it allows earlier validation of the domain model, which can only be a good thing. Typically, domain problems are the most complex and costly to resolve; the earlier these can be identified, the more easily they can be resolved. The testing and implementation of the typically less complex UI layer is deferred until the most important core concepts are Done.

Secondly, this approach mandates that the entire user story be satisfied fully by the domain model. The UI no longer needs to implement any logic to satisfy these requirements - it must simply present the domain model in the most appropriate way. The really important takeaway here is that any business logic leaking into the UI would have to be replicated in all other external interfaces (e.g. web services, mobile clients) to ensure consistency. Keeping the UI, alongside the other external interfaces, as thin as possible DRYs the solution and reduces the surface area for defects.

Finally, this accelerated validation of, and learning about, the domain model can help us make more informed decisions concerning the UI, the database (schema, technology, approach) and integration with other systems. This helps strike a balance between the purist 'defer implementation decisions' stance and the practical need to deploy, or even license, databases and similar dependencies.

Now, back to this middle up / middle down process...

To recap, we now have Cucumber / Gherkin formatted acceptance criteria with step definitions coupling directly to the domain model. The domain model should have unit tests alongside these acceptance tests, and those unit tests commonly isolate each unit through mocks and stubs. Once all units have been unit test driven to completion, the acceptance test(s) should pass. The final step in this approach is to refactor the step definitions to depend upon the UI, rather than directly on the domain model. Modern lightweight techniques and tools like Capybara and FactoryGirl allow these refactorings to be conducted quickly - thus reducing the cost incurred by this seemingly wasteful refactoring step. With this refactoring complete, we have the same artifacts as a more traditional top down TDD approach - acceptance tests that exercise the entire stack, alongside unit-tested UI, domain and persistence layers.
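For comparison with the earlier sketch, the refactored, UI-coupled version of those hypothetical step definitions might end up looking something like this - the path helpers, button labels and factory names are again illustrative:

```ruby
# The same scenario text, now exercised through the UI with Capybara and FactoryGirl.

Given(/^my basket contains a "(.+)" costing (\d+)$/) do |name, price|
  product = FactoryGirl.create(:product, name: name, price: price.to_i)
  visit product_path(product)
  click_button "Add to basket"
end

When(/^I check out$/) do
  visit checkout_path
  click_button "Place order"
end

Then(/^an order is created totalling (\d+)$/) do |total|
  # The assertion now goes via the rendered page rather than the domain object.
  expect(page).to have_content("Order total: #{total}")
end
```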

Phew. Now, after that presentation of middle up / middle down development, what approach do I advocate nowadays? Simple - whatever makes sense. For simpler, less innovative systems, or those with most of the innovation in the UI rather than in the core domain, I think top down development is the best choice. However, if the core of the system is complex, perhaps built around intricate algorithms or processes, then the domain-first approach may de-risk and validate the most important concerns before the relatively simpler UI has to be implemented.

So what do you think?

