I just spent the last six hours in the debugger stepping through legacy code because of problems in one of our production systems. Okay, maybe the code is only a few years old, but it has no tests whatsoever, and that's what makes it legacy code.
The most common argument against TDD (and testing in general) is that it takes too much time. That's a half-truth, because it does take time. But considering the ripple effect that two edge conditions had in our production code today, and the hours spent isolating, understanding, and fixing the bugs, I'd consider a few extra minutes of upfront TDD investment well worth the price. That was time I could have spent writing more code or expanding business capabilities instead of chasing down bugs.
I think the biggest reasons so many are hesitant to write any kind of automated tests are threefold:
- We get our heads down, start programming away, and don't even want to think about writing tests and killing our momentum.
- Writing tests is hard because it forces us to think more. If anything, that should be an incentive rather than a deterrent: we end up looking at the problem from different angles, and our comprehension of it increases.
- We don't know the difference between state-based testing and interaction-based testing, let alone how and when to use each.
On #3, I try as hard as I can to make everything a state-based test. One technique is to use a fully encapsulated domain model. My technique is a little different from what Udi outlines on his blog, but the principle remains the same.
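To make the distinction concrete, here's a minimal sketch (in Java with JUnit 5, using a hypothetical Order class of my own invention, not anything from Udi's post). A fully encapsulated domain object changes state only through its behavior, never through public setters, so a state-based test just exercises that behavior and asserts on the observable outcome:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical encapsulated domain object: state changes only
// through behavior, never through public setters.
class Order {
    private boolean shipped;
    private int itemCount;

    void addItem() {
        if (shipped) {
            throw new IllegalStateException("Cannot add items to a shipped order");
        }
        itemCount++;
    }

    void ship() {
        if (itemCount == 0) {
            throw new IllegalStateException("Cannot ship an empty order");
        }
        shipped = true;
    }

    boolean isShipped() { return shipped; }
    int getItemCount() { return itemCount; }
}

class OrderTest {
    @Test
    void shippingAnOrderWithItemsMarksItShipped() {
        Order order = new Order();
        order.addItem();

        order.ship();

        // State-based assertion: we check the observable outcome,
        // not which collaborators were called.
        assertTrue(order.isShipped());
    }

    @Test
    void cannotShipAnEmptyOrder() {
        Order order = new Order();

        // The edge condition is captured as visible behavior, so the
        // test doesn't need a mock to prove it.
        assertThrows(IllegalStateException.class, order::ship);
    }
}
```

An interaction-based version of the same tests would hand Order a mocked collaborator and verify which calls were made on it, which couples the test to the implementation. That coupling is exactly why I lean on state wherever I can.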
In the end, as professionals and craftsmen who care about our code, we should be writing tests not only for ourselves, but for those who follow us.