TDD Test Drive

I admit to being a bit of a skeptic when it comes to agile programming methodologies. Back before XP was an operating system, I read Kent Beck's book with a highlighter and felt-tip pen in hand. Most of what I found there ran counter to my own experience.

Still, I found some good ideas that I have since incorporated into my daily routine. I do the simplest thing that works (though I have a high standard for what "works"). I check in unit tests and make them part of the build process. I refine my code through refactoring. And I have even engaged in pair programming with some success.

So when I saw Jean Paul Boodhoo on DNR.TV demonstrating Test Driven Development, I thought I would give it a try. His presentation did nothing to convince me that TDD was a good thing. In fact, if I had stopped there, I would have said it was an excuse to do sloppy work. However, my own experience with it has shown that it can be useful.

TDD as presented by JPB is a cycle of "red, green, refactor". First you write a failing test. Then you make it pass by whatever means necessary. Then you refactor your code to get rid of the ugliness that you had to add to make the test pass. I experimented with this cycle in my own work, and found that it quickly ground to a halt. But then, I added another step to the cycle and found myself in a better place.
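The cycle is easiest to see on a toy example (nothing to do with the message pump; the class and values here are purely illustrative):

```python
# Red: write a failing test first -- Adder doesn't exist yet.
# Green: the simplest thing that passes add(1, 2) == 3 is `return 3`.
# Refactor: a second test, add(2, 2) == 4, forces the general form.

class Adder:
    def add(self, a, b):
        # After "green" this body was the hard-coded `return 3`;
        # the second test drove the refactor to the real implementation.
        return a + b

assert Adder().add(1, 2) == 3
assert Adder().add(2, 2) == 4
```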

My experiment was to build the message pump for our current project using TDD. The message pump is the background thread that pulls messages from a queue, sends them to a web service, receives messages from the web service, and dispatches those messages to business logic. It has to handle RAS/WAN failover scenarios, and use the phone line judiciously. The challenge here was to ensure that the code balanced these disparate concerns under various configurations.
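The original pump was .NET code; as a rough sketch of the shape of the thing in Python (all names here are my own illustration, not the project's API), the core is a background thread draining an outbound queue and handing replies to business logic:

```python
import queue
import threading

class MessagePump:
    """Illustrative sketch: a background thread that drains an outbound
    queue, "sends" each message to a service, and dispatches any reply
    to business logic."""

    def __init__(self, outbound, send, dispatch):
        self.outbound = outbound   # queue of messages to send
        self.send = send           # callable: message -> reply (or None)
        self.dispatch = dispatch   # callable: reply -> None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        while not self._stop.is_set():
            try:
                message = self.outbound.get(timeout=0.1)
            except queue.Empty:
                continue  # wake periodically to check the stop flag
            reply = self.send(message)
            if reply is not None:
                self.dispatch(reply)
            self.outbound.task_done()
```

The real pump layered RAS/WAN failover and phone-line policy on top of this loop, which is where the complexity came from.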

I thought this would be a better test for TDD than JPB's Model/View/Presenter example. JPB tested only the presenter, which in his case was simply a pass-through from the model to the view. Not a very challenging piece of code. The message pump, however, has to communicate with four different peers: the queue, the phone, the web service, and the business logic. By necessity of its requirements, it's a much more complex piece of code.

To isolate the message pump from the four peers that it touches (some of which were being developed in parallel), I used NMock2 to mock their interfaces. I used dependency injection to provide these interfaces to my production code. Some of the interfaces were at least partially defined, while others were initially empty.
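NMock2 is a .NET library, so I can't show it here verbatim, but `unittest.mock` plays the same role in a Python sketch. The peer names and methods below are hypothetical; the point is the pattern — the code under test receives its peers as constructor arguments rather than creating them:

```python
from unittest.mock import Mock

# Mocked peer "interfaces" -- some were only partially defined at the
# time, which a mock tolerates just fine.
web_service = Mock()
business_logic = Mock()
web_service.send.return_value = "ack"

class Sender:
    # Dependency injection: peers come in through the constructor,
    # so tests can substitute mocks for code still under development.
    def __init__(self, web_service, business_logic):
        self.web_service = web_service
        self.business_logic = business_logic

    def forward(self, message):
        reply = self.web_service.send(message)
        self.business_logic.dispatch(reply)

sender = Sender(web_service, business_logic)
sender.forward("hello")

# The mocks record the interactions, so the test can verify them.
web_service.send.assert_called_once_with("hello")
business_logic.dispatch.assert_called_once_with("ack")
```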

I created the first unit test to do the simplest thing possible: start and stop the thread. It failed, I made it pass, and I found I did not need to refactor. I was off to a good start.

On the second test, I configured the pump for a WAN environment (no phone concerns) and queued a synchronous message. It failed, I made it pass, then I refactored. So far, TDD was working as advertised.

I added asynchronous messages next, and discovered that I should have done them first. After all, they are simpler than synchronous messages. In addition, I found that the interaction between these two concerns was causing my code to smell. I refactored for about half a day to correct this, but felt that I should have foreseen the problem.

When I finished the WAN test suite, I started the RAS test suite. I wrote the first test, saw it fail, and then set to work on making it pass. Here is where I hit a wall. The code that I had written for WAN was not well organized to support RAS. According to the TDD philosophy, I needed to refactor it to make it ready. Unfortunately, I didn't know what I needed to refactor toward. I spent the day chasing that wild goose.

Here's my solution
I solved the problem by going back to the whiteboard. This is how I work naturally, so I figured out how to work it into TDD. I drew the structure of the code I had written so far, then I added the new code. I applied the strategy pattern and defined a ConnectionStrategy base class, with WAN and Dialup concrete classes. Then I drew a flowchart (yes, I still use them) for the algorithm that used the strategy. Once I had planned all of my changes, I coded them.
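The article names only the classes — `ConnectionStrategy` with WAN and Dialup variants — so the method names and behavior below are my own sketch of how such a strategy might look, again in Python rather than the project's C#:

```python
from abc import ABC, abstractmethod

class ConnectionStrategy(ABC):
    """Base class for the connection policy (method names illustrative)."""

    @abstractmethod
    def acquire(self) -> str: ...

    @abstractmethod
    def release(self) -> str: ...

class WanStrategy(ConnectionStrategy):
    # A WAN link is always up: acquiring it is effectively a no-op.
    def acquire(self):
        return "wan: already connected"

    def release(self):
        return "wan: nothing to hang up"

class DialupStrategy(ConnectionStrategy):
    # Dialup must dial before use and hang up promptly afterwards,
    # to use the phone line judiciously.
    def acquire(self):
        return "dialup: dialing"

    def release(self):
        return "dialup: hanging up"

def exchange(strategy: ConnectionStrategy, message: str) -> list:
    # The flowcharted algorithm: one flow, parameterized by strategy.
    log = [strategy.acquire()]
    log.append(f"sent {message}")
    log.append(strategy.release())
    return log
```

The payoff of the pattern is that the surrounding algorithm stays identical; only the injected strategy changes between the WAN and RAS configurations.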

I found that the whiteboard allowed me to explore my ideas from the top down, as I have always done. However, by refining my design for the specific purpose of RAS vs. WAN, my whiteboard design remained focused and grounded. I wasn't designing the whole system ahead of time. I was designing just enough to pass the next test. I used the whiteboard as a guide to writing code, and then I reflected minor changes made in the code back to the whiteboard.

So I added a step to the TDD cycle: Red, Redraw, Green, Refactor. With this approach, I get the best of both worlds: top-down design and agility.

One Response to “TDD Test Drive”

  1. Adventures in Software » Blog Archive » TDD test drive number 3 Says:

    [...] the first attempt way back in 2006, I added a step to the Red Green Refactor cadence. I Redrew the design diagram [...]
