Archive for the ‘Unit testing’ Category

Linq to SQL and the Repository Pattern

Monday, September 27th, 2010

Earlier this month, I posted Entity Framework and the Repository Pattern. Mickey posted that code on GitHub, along with his own Linq to SQL implementation.

For those new to Git, you can clone Mickey's repository from GitHub with git clone.

Mickey took the time to complete the story for me. I posted only enough code to make my point (which was, BTW, that the Repository Pattern takes more work than you might expect). Mickey reverse engineered the database that I used for my example, and even populated it with data from this blog. Wow!

Now let’s review his work.

Tests

Mickey not only includes unit tests (the motivation for my original post), but also integration tests. Just open the solution and hit Ctrl+R, A to run them all. If you find that the tests don’t deploy the database file, double-click the Local.testsettings file and add the dbf to the Deployment section. The tests passed for me before I turned on code coverage, but then I had to make this change.

I enabled code coverage and found that he achieves 92%-100% in all modules except the generated code. It falls short of 100% mostly because my original tests never added anything to the repository.

Two sets of data types

The goal of the original post was not persistence ignorance, it was testing. But persistence ignorance is often a reason for using the Repository Pattern. Mickey’s solution is not persistence ignorant, either.

If you look in the Data project, you will see NoteworthyEntitiesL2SModel.dbml and NoteworthyEntitiesEFModel.edmx. That is, he used both EF and L2S to generate models from the database. This created two separate sets of data types.

This division necessarily permeates the solution, since those generated data types are exposed from the repository. As a result, he has a separate memory provider for each data access technology. The EF memory provider expects ObjectContext and ObjectQuery. The L2S memory provider replaces those with DataContext and Table.

Both memory providers have unit tests, distinguished by namespace. EF and L2S use different strategies for naming associative tables, but otherwise the code looks the same.

In practice, you will typically choose one data access technology, so this lack of persistence ignorance is not a problem. But if it bothers you, please look into POCO support for EF and L2S to see if there’s a way to remove the dependency upon the data access technology.
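To sketch what persistence ignorance could look like: the entities become plain classes with no EntityObject or L2S base-class requirement, and the repository interface drops its framework constraint. This is my own illustration, not code from Mickey's solution — the class names mirror the example, and the IPocoRepository name is invented to avoid clashing with the real IRepository.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// A persistence-ignorant entity: a plain class with no EntityObject base
// class and no framework attributes.
public class Article
{
    public Article()
    {
        Topics = new List<Topic>();
    }

    public int ArticleId { get; set; }
    public string Title { get; set; }
    public ICollection<Topic> Topics { get; private set; }
}

public class Topic
{
    public int TopicId { get; set; }
    public string TopicName { get; set; }
}

// The repository interface then needs only a class constraint, so the same
// abstraction can sit over EF POCO entities, L2S POCOs, or in-memory lists.
public interface IPocoRepository<TEntity> where TEntity : class
{
    IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification);
    void Add(TEntity entity);
}
```

With this shape, neither generated data type set leaks out of the repository, at the cost of writing the mapping yourself.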

Thank you, Mickey

I am quite impressed with the effort that Mickey put into packaging this example and making it available to us. He took code that only worked on my machine and filled in the missing components so that we can all run the tests. And he took what was working for Entity Framework and ported it to Linq to SQL so that you have the choice. Now, either way you go, you can test your data services.

Thanks, man.

TDD test drive number 3

Thursday, September 23rd, 2010

On two prior occasions, I have given TDD a test drive. Neither attempt convinced me that it was a useful way to design a system. To me, TDD has always stood for Top Down Design.

On the first attempt way back in 2006, I added a step to the Red Green Refactor cadence. I Redrew the design diagram before Green. My observation was that tests are no substitute for visualizing the solution.

On the second attempt in January 2009, I used TDD to design and construct a web navigation framework. It had three intersecting axes: URL mapping, security, and menu hierarchy. It ended up being extremely well tested, but incredibly complex. It worked, and continues to work, but it defies understanding.

Last month I began attending Dallas Geek Night at ThoughtWorks. This is a weekly meetup where open-source developers pair up and work on each other’s projects. I’ve used this opportunity to improve the tests for both Update Controls and Correspondence.

All of the work is done in pairs, and all of it is test driven.

Question and answer

On that first day, I paired with Richard Jensen, one of the organizers of the event. He asked questions about Update Controls. We answered those questions with tests.

For example, does Update Controls notify when something doesn’t actually change?

[TestMethod]
public void WhenSortOrderIsChangedButNotDifferentWeShouldNotGetACollectionChanged()
{
    // Realize the collection, then change the sort order to a new value.
    ContactViewModel defaultOrder = _viewModel.Contacts.First();
    _viewModel.SortOrder = ContactListSortOrder.FirstName;

    // Realize it again, then set the sort order to the same value.
    ContactViewModel firstByFirstName = _viewModel.Contacts.First();
    _viewModel.SortOrder = ContactListSortOrder.FirstName;

    // Only the first set should have raised CollectionChanged.
    Assert.AreEqual(1, _collectionChangedCount);
}

At first this test failed. Then we added this code to make it pass:

public ContactListSortOrder SortOrder
{
    get
    {
        _indSortOrder.OnGet();
        return _sortOrder;
    }
    set
    {
        if (_sortOrder != value) _indSortOrder.OnSet();
        _sortOrder = value;
    }
}

I could have just answered the question, but this kept the discussion measurable. A simple answer would have been based on faith, and could even have been wrong.

Finding bugs

On the second day, I paired with Dhruv Chandna. We were testing Correspondence synchronizing between two machines. He asked some good questions, too.

For example, what would happen if I added a fact to the opposite community:

[TestMethod]
public void WhenLogOnToOtherMachine_ShouldThrow()
{
    Machine playerOneMachine = _playerOneCommunity.AddFact(new Machine());
    User playerOne = _playerOneCommunity.AddFact(new User("one"));

    try
    {
        _playerTwoCommunity.AddFact(new LogOn(playerOne, playerOneMachine));
        Assert.Fail("AddFact did not throw.");
    }
    catch (CorrespondenceException ex)
    {
        Assert.AreEqual("A fact cannot be added to a different community than its predecessors.", ex.Message);
    }
}

This error should have been caught. It was not. So after writing this failing test, Dhruv dug into the code base and fixed it.

foreach (RoleMemento role in prototype.PredecessorRoles)
{
    PredecessorBase predecessor = prototype.GetPredecessor(role);
    if (predecessor.Community != null && predecessor.Community != _community)
        throw new CorrespondenceException("A fact cannot be added to a different community than its predecessors.");
}

There were a few other changes required to make this work, but they did not significantly impact the design.

TDD as communication

Dan North says TDD is not about testing, it’s all about design. My experience tells me otherwise. Whenever I’ve let my design emerge through tests, I’ve ended up with an overly complex mess. It’s like pouring concrete into a mold. It takes the right shape because it is forced to. But the results are not as pretty as a sculpture.

Instead, I’m beginning to see TDD as a form of communication. It helps a pair discuss a problem in tangible terms. It gives them something to point to. And it gives them a specific goal so that the conversation doesn’t get derailed. You are done when the test passes.

So I will continue to construct top-down designs. I will continue to use unit tests to verify my implementation of those designs. But when I pair, we will communicate through TDD.

A fluent interface gone wrong, so wrong

Friday, September 17th, 2010

In January I created a unit test helper called Predassert. The goal of this library was to make assertions more readable, both in code and in test results. I wanted to turn this:

Assert.AreEqual(3, theResult);

Into this:

Pred.Assert(theResult, Is.EqualTo(3));

And to turn this:

System.InvalidOperationException: Sequence contains no matching element

Into this:

Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.Fail failed. The collection contains no matching element.

  The property Name is not correct. Expected ThisOne, actual ThatOne.

It started out harmless enough. I created classes with names like “Is”, “Contains”, and “Has”. These had methods like “SameAs”, “That”, and “Property”. I was able to string together grammatically correct sentences that were at once predicates and assertions (hence the name). Things were fine, it’s the dress that makes you look fat, and I can quit anytime I want.

Then I paired with Paul Hammant at ThoughtWorks. I had to show him this:

Pred.Assert(result.Facts, Contains<Fact>.That(
    Has<Fact>.Property(fact => fact.Members, Contains<FactMember>.That(
        KindOf<FactMember, DataMember>.That(
            Has<DataMember>.Property(field => field.Name, Is.EqualTo("players")) &
            Has<DataMember>.Property(field => field.Type,
                Has<DataType>.Property(type => type.Cardinality, Is.EqualTo(Cardinality.Many)) &
                KindOf<DataType, DataTypeFact>.That(
                    Has<DataTypeFact>.Property(type => type.FactName, Is.EqualTo("User")) &
                    Has<DataTypeFact>.Property(type => type.IsPivot, Is.EqualTo(true))
                )
            ) &
            Has<DataMember>.Property(field => field.LineNumber, Is.EqualTo(4))
        )
    ))
));

When this test fails, this is the result.

Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.Fail failed. The collection contains no matching element.

  The property Members is not correct. The collection contains no matching element.

  The property Type is not correct. The property IsPivot is not correct. Expected True, actual False.

How did it ever come to this?

After some gentle suggestions from Paul, the test looks like this.

string code =
    "namespace Reversi.GameModel; " +
    "                             " +
    "fact Game {                  " +
    "  publish User *players;     " +
    "}                            ";
Namespace result = ParseToNamespace(code);
Field players = result.WithFactNamed("Game").WithFieldNamed("players");
Assert.IsInstanceOfType(players.Type, typeof(DataTypeFact));
Assert.IsTrue(((DataTypeFact)players.Type).IsPivot,    "The players field is not a pivot.");

And the test failure reads:

Assert.IsTrue failed. The players field is not a pivot.

If you ever want to see the abomination that was Predassert, you will have to dig through the revision history of Correspondence.

Entity Framework and the Repository Pattern

Thursday, September 9th, 2010

When I’ve asked how to unit test Entity Framework, the best answer was “use the Repository pattern to encapsulate your EF code” (thanks Andrew Peters). I recently asked the same thing about RIA Services. Mike Brown responded with the same advice, even going so far as to spend a couple of hours with me on Live Meeting.

Since all you gotta do is implement the Repository Pattern, it should be easy, right? Let’s take a look at a minimalist implementation.

Inject the implementation

To support unit testing, we need to be able to swap out our implementation. At the top level, the consumer of a repository begins a unit of work. So we’ll inject a unit of work factory.

public interface IUnitOfWorkFactory
{
    IUnitOfWork Begin();
}
public class NoteworthyService
{
    private IUnitOfWorkFactory _unitOfWorkFactory;

    public NoteworthyService(IUnitOfWorkFactory unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }
}

Identify the repository

In DDD we create one repository per aggregate root, which is the entity from which you begin the query. In EF, entities exist in a container. So we’ll take two steps to get the repository via the container.

public interface IUnitOfWork : IDisposable
{
    IContainer<TContainer> UsingContainer<TContainer>()
        where TContainer : ObjectContext, new();
}

public interface IContainer<TContainer> : IDisposable
{
    IRepository<TEntity> GetRepository<TEntity>(Func<TContainer, ObjectQuery<TEntity>> repositorySelector)
        where TEntity : EntityObject;
}

Provide a specification

The second half of the Repository Pattern is the Specification. A specification identifies which entities to pull from the repository. In Linq, we use lambdas for that. So a repository should be able to give you back some entities when given a lambda.

public interface IRepository<TEntity>
{
    IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification);
    void Add(TEntity entity);
}

Query the repository

With those interfaces in place, here’s what a query looks like.

public List<Article> GetArticlesByTopic(string topicName)
{
    using (var unitOfWork = _unitOfWorkFactory.Begin())
    {
        return unitOfWork.UsingContainer<NoteworthyEntities>().GetRepository(container => container.Articles)
            .GetSatisfying(article => article.Topics.Any(topic => topic.TopicName == topicName))
            .ToList();
    }
}

Implement in memory

For unit testing, we implement the repository interfaces using in-memory lists.

public class MemoryUnitOfWorkFactory : IUnitOfWorkFactory
{
    private MemoryUnitOfWork _unitOfWork = new MemoryUnitOfWork();

    public IUnitOfWork Begin()
    {
        return _unitOfWork;
    }
}

public class MemoryUnitOfWork : IUnitOfWork
{
    private Dictionary<Type, object> _containerByType = new Dictionary<Type, object>();

    public IContainer<TContainer> UsingContainer<TContainer>()
        where TContainer : ObjectContext, new()
    {
        object container;
        if (!_containerByType.TryGetValue(typeof(TContainer), out container))
        {
            container = new MemoryContainer<TContainer>();
            _containerByType.Add(typeof(TContainer), container);
        }
        return (IContainer<TContainer>)container;
    }

    public void Dispose()
    {
    }
}

public class MemoryContainer<TContainer> : IContainer<TContainer>
    where TContainer : ObjectContext, new()
{
    private Dictionary<Type, object> _repositoryByType = new Dictionary<Type, object>();

    public IRepository<TEntity> GetRepository<TEntity>(Func<TContainer, ObjectQuery<TEntity>> repositorySelector)
        where TEntity : EntityObject
    {
        object repository;
        if (!_repositoryByType.TryGetValue(typeof(TEntity), out repository))
        {
            repository = new MemoryRepository<TEntity>();
            _repositoryByType.Add(typeof(TEntity), repository);
        }
        return (IRepository<TEntity>)repository;
    }

    public void Dispose()
    {
    }
}

public class MemoryRepository<TEntity> : IRepository<TEntity>
    where TEntity : EntityObject
{
    private List<TEntity> _entities = new List<TEntity>();

    public IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification)
    {
        return _entities.Where(specification.Compile()).AsQueryable();
    }

    public void Add(TEntity entity)
    {
        _entities.Add(entity);
    }
}

And here’s what a unit test looks like.

[TestClass]
public class NoteworthyServiceTest
{
    private NoteworthyService _noteworthyService;

    [TestInitialize]
    public void Initialize()
    {
        IUnitOfWorkFactory memory = new MemoryUnitOfWorkFactory();

        IRepository<Article> articlesRepository = memory.Begin()
            .UsingContainer<NoteworthyEntities>()
            .GetRepository(container => container.Articles);
        Topic ddd = new Topic()
        {
            TopicName = "ddd"
        };
        Topic correspondence = new Topic()
        {
            TopicName = "correspondence"
        };
        Article efRepository = new Article()
        {
            Title = "Entity Framework and the Repository Pattern"
        };
        efRepository.Topics.Add(ddd);
        Article correspondenceLaunch = new Article()
        {
            Title = "Correspondence Launch"
        };
        correspondenceLaunch.Topics.Add(correspondence);
        Article correspondenceDDD = new Article()
        {
            Title = "Correspondence and DDD"
        };
        correspondenceDDD.Topics.Add(ddd);
        correspondenceDDD.Topics.Add(correspondence);

        articlesRepository.Add(efRepository);
        articlesRepository.Add(correspondenceLaunch);
        articlesRepository.Add(correspondenceDDD);

        _noteworthyService = new NoteworthyService(memory);
    }

    [TestMethod]
    public void QueryReturnsArticles()
    {
        var articles = _noteworthyService.GetArticlesByTopic("ddd").ToArray();

        Assert.AreEqual("Entity Framework and the Repository Pattern", articles[0].Title);
        Assert.AreEqual("Correspondence and DDD", articles[1].Title);
    }
}

Implement with Entity Framework

Finally, we implement the real thing using Entity Framework.

public class EntityFrameworkUnitOfWorkFactory : IUnitOfWorkFactory
{
    public IUnitOfWork Begin()
    {
        return new EntityFrameworkUnitOfWork();
    }
}

public class EntityFrameworkUnitOfWork : IUnitOfWork
{
    private Dictionary<Type, IDisposable> _containerByType = new Dictionary<Type, IDisposable>();

    public IContainer<TContainer> UsingContainer<TContainer>()
        where TContainer : ObjectContext, new()
    {
        IDisposable container;
        if (!_containerByType.TryGetValue(typeof(TContainer), out container))
        {
            container = new EntityFrameworkContainer<TContainer>();
            _containerByType.Add(typeof(TContainer), container);
        }
        return (IContainer<TContainer>)container;
    }

    public void Dispose()
    {
        foreach (var container in _containerByType.Values)
            container.Dispose();
    }
}

public class EntityFrameworkContainer<TContainer> : IContainer<TContainer>
    where TContainer : ObjectContext, new()
{
    private TContainer _container;

    public EntityFrameworkContainer()
    {
        _container = new TContainer();
    }

    public IRepository<TEntity> GetRepository<TEntity>(Func<TContainer, ObjectQuery<TEntity>> repositorySelector)
        where TEntity : EntityObject
    {
        return new EntityFrameworkRepository<TContainer, TEntity>(_container, repositorySelector(_container));
    }

    public void Dispose()
    {
        _container.Dispose();
    }
}

public class EntityFrameworkRepository<TContainer, TEntity> : IRepository<TEntity>
    where TContainer : ObjectContext, new()
    where TEntity : EntityObject
{
    private TContainer _container;
    private ObjectQuery<TEntity> _objectQuery;

    public EntityFrameworkRepository(TContainer container, ObjectQuery<TEntity> objectQuery)
    {
        _container = container;
        _objectQuery = objectQuery;
    }

    public IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification)
    {
        return _objectQuery.Where(specification);
    }

    public void Add(TEntity entity)
    {
        _container.AddObject(_objectQuery.Name, entity);
    }
}

Analysis

This is the smallest implementation of the Repository pattern that I could come up with. It is obviously not feature complete. For example, it does not implement Delete, nor does it allow you to specify eager loading with Include. There are other implementations that are bigger, but I doubt that there could be one smaller. Even at this size, this doesn’t qualify as “all you need to do is”.
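For instance, adding delete support would mean growing the interface and every implementation. Here is a minimal sketch of the in-memory side; the Remove method and the WithRemove names are my own additions, not part of the implementation above, and the EF side would call ObjectContext.DeleteObject in the corresponding spot.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The repository interface grown with a Remove operation (hypothetical
// extension; the EntityObject constraint is dropped to keep this sketch
// self-contained).
public interface IRepositoryWithRemove<TEntity>
{
    IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification);
    void Add(TEntity entity);
    void Remove(TEntity entity);
}

// In-memory implementation: Remove simply drops the entity from the list.
public class MemoryRepositoryWithRemove<TEntity> : IRepositoryWithRemove<TEntity>
{
    private List<TEntity> _entities = new List<TEntity>();

    public IQueryable<TEntity> GetSatisfying(Expression<Func<TEntity, bool>> specification)
    {
        return _entities.Where(specification.Compile()).AsQueryable();
    }

    public void Add(TEntity entity)
    {
        _entities.Add(entity);
    }

    public void Remove(TEntity entity)
    {
        _entities.Remove(entity);
    }
}
```

Each feature like this multiplies across the interface, the memory implementation, and the EF implementation, which is why "just use the Repository Pattern" understates the work.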

One benefit of the Repository pattern is supposed to be that it abstracts the persistence mechanism. But this implementation returns EntityObjects, which are a distinctly Entity Framework-ish data type. I could try for a POCO-compliant implementation to solve that problem.

Because this implementation requires EntityObjects, it could never work with RIA Services. I would have to write a different Repository implementation to unit test my client code.

And finally, Entity Framework does some things that the in-memory repository does not. My unit tests can’t verify that I’m using EF correctly. For example, Entity Framework does eager loading if I explicitly request it. The in-memory implementation always has all navigation properties populated. Because of this difference, I can’t verify with a unit test that I have included all of the required navigations.

Conclusion

All of this leads me to conclude that the Repository pattern is not the best way to add testability to an untestable framework. The framework needs to be testable from the beginning. If Entity Framework had provided an in-memory implementation for testing, then I could test my use of EF, including eager loading.

By the way, Correspondence does in fact provide an in-memory implementation for unit testing. Please go through the lessons to see what I think a testable framework should look like.

Don’t unit test the glove

Sunday, February 22nd, 2009

There was an email chain letter that my Mom forwarded to me a while back. It was a math magic trick that could predict your age. She asked me how it worked.

It had you start with the year you were born, multiply by one number, add another, repeat the digits, and go through all sorts of numeric hocus-pocus. If you work through it using basic algebra, it's easy to see that you were just subtracting the year of your birth from the current year. The intermediate steps added no value. How could this thing predict your age? How could it not?
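To see why the trick cannot fail, here's a made-up version of the same shape (the chain letter's actual numbers are lost, so these steps are invented for illustration); the algebra collapses every intermediate step away:

```csharp
using System;

// An invented age "trick": quadruple your birth year, add 8, divide by 4,
// subtract 2. Algebraically, ((4b + 8) / 4) - 2 = b, so the hocus-pocus
// hands back the birth year untouched.
int birthYear = 1980;
int currentYear = 2010;

int x = birthYear;
x = x * 4;      // 4b
x = x + 8;      // 4b + 8
x = x / 4;      // b + 2
x = x - 2;      // b

// The grand finale is just currentYear - birthYear.
int predictedAge = currentYear - x;
Console.WriteLine(predictedAge);   // prints 30
```

Whatever numbers the chain letter used, the same cancellation was hiding inside it.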

I'm looking at a unit test for a data access service, and I'm reminded of that email that filled Mom with wonder. The unit test mocks an order repository using Rhino Mocks:

int expectedOrderIdentifier = 5;

Order expectedOrder = new Order();
expectedOrder.OrderId = expectedOrderIdentifier;
expectedOrder.WebOrderNumber = 123;
IList<Order> orderList = new List<Order>();
orderList.Add(expectedOrder);

MockRepository mocks = new MockRepository();
IAggregateRootRepository<Order> orderRepositoryMock =
    mocks.CreateMock<IAggregateRootRepository<Order>>();

using (mocks.Record())
{
    Expect.Call(
        orderRepositoryMock.GetSatisfying(dummyOrderSpecification))
        .IgnoreArguments()
        .Return(orderList.AsQueryable<Order>());
}
using (mocks.Playback())
{
    OrderService target = new OrderService(orderRepositoryMock);
    Order returnedOrder = target.GetOrderByWebOrderNumber(expectedOrderIdentifier);
    Assert.IsTrue(
        returnedOrder == expectedOrder,
        "The order provided by the mocked repository was not returned; changes will not be tracked by the unit of work.");
    Assert.IsTrue(
        returnedOrder.WebOrderNumber == 123,
        "The order provided by the mocked repository was not returned; changes will not be tracked by the unit of work.");
}
// Tests to make sure the order repository was enlisted in the provided context.
mocks.VerifyAll();

Lo and behold the test passes! Amazing! Let's take a look at the code under test:

public Order GetOrderByWebOrderNumber(int webOrderNumber)
{
    // Create the specification.
    // This specification gets just the order entity without any dependent entities.
    ReferencedOrderByWebOrderNumberSpecification specification =
        new ReferencedOrderByWebOrderNumberSpecification(webOrderNumber);

    // Assumption: we get only one order from the database for the provided web order number.
    Order order = _orderRepository.GetSatisfying(specification).FirstOrDefault();

    if (order == null)
    {
        throw new ArgumentException();
    }

    return order;
}

A perfectly reasonable bit of code. It pulls an object from the repository according to a specified condition. In fact, the only interesting thing about this code is that condition.

But look closer. The unit test passes in the wrong parameter! The method takes a web order number, and the unit test passes in an order ID. The code under test creates a specification based on this incorrect information, but the mock repository ignores this parameter. No matter what the condition is, it returns the object we tell it to. Since we mocked the repository to return the object we expected, that's exactly what was returned. How could the test ever fail?

This unit test is completely meaningless. It goes through all sorts of gyrations and mock-object hocus-pocus in order to return the object that we expect. The one interesting feature of the code under test is never tested. In fact, this test passes even though the object that we fed the mock repository didn't satisfy the condition!

Like my Mom with the email, the developer who wrote this test really thought something interesting was happening. In striving for greater test coverage, he was just fooling himself into thinking that he was improving the quality of the code. Useless unit tests are worse than no unit tests, because they give you false confidence in the code.

A glove is a thin wrapper over a hand. The glove is going to do what the hand does. How could it do any different? Don't waste your time unit testing the glove.

Necessary and Sufficient

Thursday, September 14th, 2006

A good interface contains only the methods that it needs. The interface has enough methods to accomplish its task: the set of methods is sufficient. In addition, the interface does not have any methods that it doesn't need: the set of methods is necessary. If you find that an interface has unnecessary methods, you should remove them. And if you find that it is insufficient, you must add to it.

A good contract is also both necessary and sufficient. A contract is more than the set of methods (the interface) that a type supports. It is also the constraints: preconditions, postconditions, and invariants. If it has unnecessary constraints, then there are otherwise valid situations that violate the contract. If it has insufficient constraints, then the contract allows some invalid situations.

A good unit test suite is also both necessary and sufficient. If it has unnecessary tests, then it is harder to maintain than it could be. If it has insufficient tests, then it won't detect all defects.

The balance of necessity and sufficiency is found all throughout mathematics. Software, being a form of applied mathematics, inherits this trait. Something that achieves this balance is beautiful to behold.

Two examples of this balance in literature can be found in my recommended book links. Euclid's Elements is the foundation for modern mathematics. He provides a small number of axioms, and from those he derives a large number of theorems. The axioms are necessary and sufficient to describe all of geometry. Bertrand Meyer's Object-Oriented Software Construction contains example after example of necessity and sufficiency in software contracts. Pay close attention to his Stack type.

Examples of software that do not achieve this balance are abundant. Many parts of the JDK have "convenience methods", which by definition are unnecessary. The SOAP specification is an insufficient contract, which leads to many interoperability problems.

Achieving this balance takes time. Only with experience can we hope to get there. It's something I strive for every day.

Order-sensitive unit testing

Saturday, August 12th, 2006

I've spoken briefly in the post TDD Test Drive about using NMock2 to test a transaction pump. For this test, I mocked the transaction queue, the web service, and the dialup connection manager. I did this so that I could make sure, for example, that the transaction pump dialed the phone before it invoked the web service.

I set up expectations in my code as follows:

    [TestFixture]
    public class FirstTest
    {
        private Mockery _mockery;
        private DialupConnectionManager _dialupConnectionManager;
        private TransactionService _transactionService;
        private TransactionPump _transactionPump;

        [SetUp]
        public void SetUp()
        {
            _mockery = new Mockery();
            _dialupConnectionManager =
                _mockery.NewMock<DialupConnectionManager>();
            _transactionService = _mockery.NewMock<TransactionService>();
            _transactionPump = new TransactionPump(_dialupConnectionManager,
                _transactionService);
        }

        [TearDown]
        public void TearDown()
        {
            _mockery.VerifyAllExpectationsHaveBeenMet();
        }

        [Test]
        public void TestPump()
        {
            Expect.Once.On(_dialupConnectionManager).Method("Dial");
            Expect.Once.On(_transactionService).Method("SendTransaction").
                With("Hello");
            Expect.Once.On(_dialupConnectionManager).Method("HangUp");
            _transactionPump.Run();
        }
    }

The tests failed, I made them pass, then I moved on to the next. Unfortunately, they passed for the wrong reasons.

My transaction pump had a bug in it that caused the dialup connection to dial at the wrong times (it was much more complex than the example given above). By default, NMock2 validates that all expectations are met, but not that they occur in the order you specified. You have to take one additional step to get there:

        public void TestPump()
        {
            using (_mockery.Ordered)
            {
                Expect.Once.On(_dialupConnectionManager).Method("Dial");
                Expect.Once.On(_transactionService).Method("SendTransaction").
                    With("Hello");
                Expect.Once.On(_dialupConnectionManager).Method("HangUp");
            }
            _transactionPump.Run();
        }

I had already written 40 tests with the assumption that NMock2 was validating order, so I didn't want to add this using statement to every test. Instead, I added it to the SetUp and TearDown methods:

    [TestFixture]
    public class FirstTest
    {
        private Mockery _mockery;
        private DialupConnectionManager _dialupConnectionManager;
        private TransactionService _transactionService;
        private TransactionPump _transactionPump;

        private IDisposable _ordered;

        [SetUp]
        public void SetUp()
        {
            _mockery = new Mockery();
            _dialupConnectionManager =
                _mockery.NewMock<DialupConnectionManager>();
            _transactionService = _mockery.NewMock<TransactionService>();
            _transactionPump = new TransactionPump(_dialupConnectionManager,
                _transactionService);

            _ordered = _mockery.Ordered;
        }

        [TearDown]
        public void TearDown()
        {
            _ordered.Dispose();

            _mockery.VerifyAllExpectationsHaveBeenMet();
        }

        [Test]
        public void TestPump()
        {
            Expect.Once.On(_dialupConnectionManager).Method("Dial");
            Expect.Once.On(_transactionService).Method("SendTransaction").
                With("Hello");
            Expect.Once.On(_dialupConnectionManager).Method("HangUp");
            _transactionPump.Run();
        }
    }

This caused all unit tests to become order-sensitive. Not surprisingly, this revealed some errors in other tests that I thought were working. Unit testing with NMock2 is incredibly effective, but you must be aware of this caveat: if order matters (and it usually does), tell NMock2 to pay attention to it.

TDD Test Drive

Monday, July 3rd, 2006

I admit to being a bit of a skeptic when it comes to agile programming methodologies. Back before XP was an operating system, I read Kent Beck's book with a highlighter and felt-tip pen in hand. Most of what I found there ran counter to my own experience.

Still, I found some good ideas that I have since incorporated into my daily routine. I do the simplest thing that works (though I have a high standard for what "works"). I check in unit tests and make them part of the build process. I refine my code through refactoring. And I have even engaged in pair programming with some success.

So when I saw Jean Paul Boodhoo on DNR.TV demonstrating Test Driven Development, I thought I would give it a try. His presentation did nothing to convince me that TDD was a good thing. In fact if I had stopped there, I would have said it was an excuse to do sloppy work. However, my own experience with it has shown that it can be useful.

TDD as presented by JPB is a cycle of "red green refactor". First you write a failing test. Then you make it pass by whatever means necessary. Then you refactor your code to get rid of the ugliness that you had to add to make the test pass. I experimented with this cycle in my own work, and found that it quickly ground to a halt. But then, I added another step to the cycle and found myself in a better place.

My experiment was to build the message pump for our current project using TDD. The message pump is the background thread that pulls messages from a queue, sends them to a web service, receives messages from the web service, and dispatches those messages to business logic. It has to handle RAS/WAN failover scenarios, and use the phone line judiciously. The challenge here was to ensure that the code balanced these disparate concerns under various configurations.

I thought this would be a better test for TDD than JPB's Model/View/Presenter example. JPB tested only the presenter, which in his case was simply a pass-through from the model to the view. Not a very challenging piece of code. The message pump, however, has to communicate with four different peers: the queue, the phone, the web service, and the business logic. By necessity of its requirements, it's a much more complex piece of code.

To isolate the message pump from the four peers that it touches (some of which were being developed in parallel), I used NMock2 to mock their interfaces. I used dependency injection to provide these interfaces to my production code. Some of the interfaces were at least partially defined, while others were initially empty.
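The injection shape might look something like this (a sketch only: the interface and member names here are hypothetical stand-ins for the four peers described above):

```csharp
// Sketch: interface names are illustrative guesses, not the
// actual project code.
public class MessagePump
{
    private readonly IMessageQueue _queue;
    private readonly IDialupConnectionManager _phone;
    private readonly ITransactionService _webService;
    private readonly IMessageDispatcher _dispatcher;

    // Constructor injection: tests hand in NMock2 mocks,
    // production hands in the real peers.
    public MessagePump(
        IMessageQueue queue,
        IDialupConnectionManager phone,
        ITransactionService webService,
        IMessageDispatcher dispatcher)
    {
        _queue = queue;
        _phone = phone;
        _webService = webService;
        _dispatcher = dispatcher;
    }

    public void Run() { /* pump loop elided */ }
}
```

Because the pump sees only interfaces, the parallel teams could fill in the real implementations later without touching the pump or its tests.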

I created the first unit test to do the simplest thing possible: start and stop the thread. It failed, I made it pass, and I found I did not need to refactor. I was off to a good start.

On the second test, I configured the pump for a WAN environment (no phone concerns) and queued a synchronous message. It failed, I made it pass, then I refactored. So far, TDD was working as advertised.

I added asynchronous messages next, and discovered that I should have done them first. After all, they are simpler than synchronous messages. In addition, I found that the interaction between these two concerns was causing my code to smell. I refactored for about half a day to correct this, but felt that I should have foreseen the problem.

When I finished the WAN test suite, I started the RAS test suite. I wrote the first test, saw it fail, and then set to work on making it pass. Here is where I hit a wall. The code that I had written for WAN was not well organized to support RAS. According to the TDD philosophy, I needed to refactor it to make it ready. Unfortunately, I didn't know what I needed to refactor toward. I spent the day chasing that wild goose.

Here's my solution.
I solved the problem by going back to the whiteboard. This is how I work naturally, so I figured out how to work it into TDD. I drew the structure of the code I had written so far, then I added the new code. I applied the strategy pattern and defined a ConnectionStrategy base class, with WAN and Dialup concrete classes. Then I drew a flowchart (yes, I still use them) for the algorithm that used the strategy. Once I had planned all of my changes, I coded them.
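The strategy shape from the whiteboard might be sketched like this (only ConnectionStrategy and the WAN and Dialup concrete classes are named in the post; the members are my illustrative guesses):

```csharp
// Sketch: class names come from the post; members are assumptions.
public abstract class ConnectionStrategy
{
    public abstract void EnsureConnected();   // dial if necessary
    public abstract void ReleaseConnection(); // hang up if we dialed
}

public class WanConnectionStrategy : ConnectionStrategy
{
    // A WAN link is always up, so both operations are no-ops.
    public override void EnsureConnected() { }
    public override void ReleaseConnection() { }
}

public class DialupConnectionStrategy : ConnectionStrategy
{
    private readonly IDialupConnectionManager _phone;

    public DialupConnectionStrategy(IDialupConnectionManager phone)
    {
        _phone = phone;
    }

    public override void EnsureConnected() { _phone.Dial(); }
    public override void ReleaseConnection() { _phone.HangUp(); }
}
```

The pump's algorithm then calls EnsureConnected and ReleaseConnection without knowing which environment it is in, which is what lets the WAN and RAS test suites share one code path.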

I found that the whiteboard allowed me to explore my ideas from the top down, as I have always done. However, by refining my design for the specific purpose of RAS vs. WAN, my whiteboard design remained focused and grounded. I wasn't designing the whole system ahead of time. I was designing just enough to pass the next test. I used the whiteboard as a guide to writing code, and then I reflected minor changes made in the code back to the whiteboard.

So I added a step to the TDD cycle: Red, Redraw, Green, Refactor. With this approach, I get the best of both worlds. Top-down design and agility.

Exceptions in unit testing

Tuesday, June 27th, 2006

Unit testing is a good way to exercise code in isolation and discover insidious bugs prior to integration. I unit test often, especially at the beginning of the coding phase.

But there is, I believe, a fundamental flaw in JUnit, NUnit, and CPPUnit. These frameworks throw exceptions to indicate test failure. At first glance, this seems to be the best solution to the problem, but experience shows that it is not.

Test failure needs to both terminate the test and report the reason for the failure. At first glance, exceptions seem like the perfect fit. They terminate a method and carry information from the point of termination. However, I have often found myself testing code that includes cleanup constructs. Exceptions from within these constructs cause extra code to be executed between the throw and the catch. When this code itself throws an exception, the original exception is lost.

In Java, the only cleanup construct we have is try {} finally {}. In C#, we have both try {} finally {} and using() {}. In C++, we have destructors. But no matter what the language, these constructs allow the developer to add bookends to their code. So I find myself using them often.
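The masking problem is easy to reproduce. In this self-contained C# illustration, a Dispose that throws (simulating a mock harness asserting during cleanup) replaces the original test failure:

```csharp
using System;

class MaskingDemo
{
    class NoisyCleanup : IDisposable
    {
        // Simulates a mock harness that asserts during cleanup.
        public void Dispose() =>
            throw new InvalidOperationException("unexpected HangUp");
    }

    static void Main()
    {
        try
        {
            using (new NoisyCleanup())
            {
                // The real test failure...
                throw new Exception("assertion: wrong message sent");
            }
        }
        catch (Exception x)
        {
            // ...never arrives here: only the cleanup exception
            // propagates out of the implicit finally block.
            Console.WriteLine(x.Message); // "unexpected HangUp"
        }
    }
}
```

The "wrong message sent" failure — the one you actually need to diagnose — is discarded when Dispose throws inside the using statement's implicit finally.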

Ideally, cleanup code should never throw exceptions. This would avoid the problem altogether. However, in practice it sometimes must. I find that this happens more often when using a mock object framework, such as NMock. I put cleanup expectations into my mock objects, because discovering leaks is an important part of the test. So the framework throws when my cleanup code is called unexpectedly.

The problem occurs when a unit test assertion fails in the body of a using or try block. The failed assertion throws, and then my cleanup code is executed. Since the unit test aborted prematurely, my mocks weren't expecting the cleanup at this point, so they assert a different failure. The second failure masks the first, and I find myself chasing a wild goose down a blind alley after a red herring.

So here's my solution.
I instrument my production code with logs. I capture these logs in unit tests so that I have additional information. I add extra catches to the production code to log and rethrow. For example:

// AcquireResource, RunTestBody, and Log are placeholders.
using (var resource = AcquireResource())
{
    try { RunTestBody(resource); }
    catch (Exception x)
    {
        Log(x);  // record the original failure before cleanup can mask it
        throw;
    }
}

This solves the problem, but it contaminates production code with extra instrumentation. It also contaminates logs with duplicate exception lines. I could make the extra logging conditional so that it only comes into play during unit testing, but that is even more intrusive. I would prefer that unit test frameworks used some other mechanism to record the reasons for test failure.