C. Keith Ray

C. Keith Ray is developing software for Sizeography and Upstart Technology. He writes code for multiple platforms and languages, including iOS® and Macintosh®.
Go to Sizeography and Upstart Technology to join our mailing lists and see more about our products. Keith's Résumé (pdf)

Thursday, April 25, 2013

Test-Driven Design/Development

(Repeat from my old blog 2003/11/25.)

I'm about two-thirds done reading Unit Testing In Java: How tests drive the code by Johannes Link. It's a very good book. Go buy it now, even if you're not programming in Java, and then finish reading my blog. :-)

The summary form of test-driven development is:
  1. Think about what we want the program to do.
  2. Write a test that shows some aspect is done.
  3. Run the test, see it fail because we haven't written the code to get the thing done yet (this tests our test, giving us some confidence that our test is written correctly).
  4. Write the smallest amount of code to make that test pass.
  5. Run the test, hopefully see it pass. (If it doesn't, fix the problem.)
  6. Refactor to clean up the code.
  7. Run the test again, to make sure refactoring didn't break anything.
  8. Repeat [sometimes skipping 6 and 7] until you can't think of any new tests, or until the code stops changing as a result of adding tests.
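The cycle above can be sketched in a few lines of Java. The `Money` class and its test are hypothetical, invented just for illustration; in real Java work you'd use JUnit, but a hand-rolled check keeps this sketch self-contained.

```java
// A minimal TDD sketch. "Money" is an illustrative example, not from the post.
public class MoneyTddSketch {
    // Step 4: the smallest amount of production code that makes the test pass.
    static class Money {
        private final int cents;
        Money(int cents) { this.cents = cents; }
        Money plus(Money other) { return new Money(this.cents + other.cents); }
        int cents() { return cents; }
    }

    // Step 2: a test that shows one aspect is done -- adding two amounts.
    static void testPlusAddsAmounts() {
        Money sum = new Money(150).plus(new Money(250));
        if (sum.cents() != 400) {
            throw new AssertionError("expected 400, got " + sum.cents());
        }
    }

    public static void main(String[] args) {
        // Steps 3 and 5: run the test. Before Money.plus existed this would
        // not even compile (the "red" step); now it passes ("green").
        testPlusAddsAmounts();
        System.out.println("all tests pass");
    }
}
```

Step 6 (refactor) would then clean up `Money` while `main` keeps proving nothing broke.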
So what if we don't know what we want the program to do?

Writing a test can help us figure that out. Since the code hasn't been done yet, the test is "black-box" and does not necessarily commit us to any particular way of implementing the solution. We can write "Exploratory Tests" to investigate existing code - does calling "X" do "Y"? Write a test and find out.
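An exploratory test can be this small. Here the question "does calling X do Y?" is asked of the Java standard library itself: does `String.substring(begin, end)` include the character at `end`? The test records the answer.

```java
// An exploratory ("learning") test against existing code we didn't write.
public class SubstringExploration {
    public static void main(String[] args) {
        String s = "abcdef";
        String result = s.substring(1, 4);
        // The passing test documents the answer: the end index is exclusive.
        if (!result.equals("bcd")) {
            throw new AssertionError("expected \"bcd\", got \"" + result + "\"");
        }
        System.out.println("substring(1, 4) of \"abcdef\" is \"" + result + "\"");
    }
}
```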

I don't have time to write lots of tests.

Well, you only have to test code that you want to work. If you don't test it before you write it, you'll be testing it in some fashion after you write it. Or someone else will test it, file a bug report because you didn't test it, and then you have to not only interrupt whatever you're doing to fix it, but also test it to confirm that you fixed it. And the time-delays between your writing the code, the tester finding the bug, and your trying to fix the bug mean that you will be less familiar with the code, so fixing it will be that much harder and slower.

There is, of course, LOTS of code out there that was not written this way (some of it is mine). Just browse some of the code on the internet or in your company's source-code-control system - probably most of it wasn't written using test-driven techniques.

You'll probably find these things in non-test-driven code (particularly in projects that don't do code-reviews or pair-programming and refactoring):
  • Dead code -- code that is never invoked.
  • Duplicate code.
  • Unreferenced parameters and variables.
  • Variables that never vary: 'constant' variables.
  • Tightly-coupled code. You can't use a class in isolation because it depends on lots of other concrete classes.
  • Non-cohesive classes and methods. (Classes and methods that do "too much")
  • BUGS
There is, in fact, a whole list of "code smells" (possible design problems) at http://c2.com/cgi/wiki?CodeSmell. TDD helps prevent some of them, and refactoring with the safety net of tests can allow you to fix the others. With TDD and Refactoring, you don't have to live with smelly code.
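The tight-coupling smell above has a standard cure: depend on a narrow interface instead of a concrete class. The names here (`Stamper`, `Clock`) are illustrative, not from the original post. A class hard-wired to `System.currentTimeMillis()` can't be tested repeatably; given an interface, a test can substitute a fixed fake.

```java
// A sketch of loosening tight coupling so a class can be tested in isolation.
public class DecouplingSketch {
    // The narrow interface the class depends on, instead of the system clock.
    interface Clock {
        long millis();
    }

    static class Stamper {
        private final Clock clock;
        Stamper(Clock clock) { this.clock = clock; }
        String stamp(String message) {
            return "[" + clock.millis() + "] " + message;
        }
    }

    public static void main(String[] args) {
        // In production: new Stamper(() -> System.currentTimeMillis()).
        // In a test: a fixed fake clock, so the output is predictable.
        Stamper stamper = new Stamper(() -> 1000L);
        String line = stamper.stamp("hello");
        if (!line.equals("[1000] hello")) {
            throw new AssertionError("unexpected: " + line);
        }
        System.out.println(line);
    }
}
```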
I find that once my tests are passing, I almost never have to go back and fix any bugs later. This saves me a lot of time. At worst, it takes about the same time as coding + debugging, but it's less stressful. If a bug arises during TDD, it is most likely in code that I wrote less than five minutes ago -- easily found and fixed.

For those people who say they don't have time to write tests, I ask: where do you find the time to write dead code and duplicated code, and where does the time to fix bugs come from?

In traditional development and testing, you create a lot of defect-ridden code, and then test the defects out. The underlying assumption is that this is the only way it can be done. Test-Driven-Design works with a different assumption: You can put the quality in first, and not have to spend time getting defects out.

Traditional (maintenance) development, because it usually isn't supported by suites of tests, eventually grinds to a halt: the code becomes so fragile that it can't be modified without breaking something. I don't know if the IRS has depreciation tables for source-code assets, but the reality is that code eventually loses value unless you take steps to keep it from losing value. Sometimes code becomes unmaintainable while the company is still trying to make money off of it; the result is that the company loses money, or loses the opportunity to modify the code to make more money.

Test-Driven-Design provides a suite of tests that allows you to refactor - keeping the code maintainable so it is easy to enhance, re-use, etc., so you can make money.


  1. "Repeat until you can't think of any new tests." Did he really write that? You should be doing exactly the reverse: writing the fewest tests that enable you to accomplish your goal, whether it's writing new code or testing existing code. Why would you ever want to write more than the minimum tests needed to reach your goal?

  2. Bad phrasing on my part.

    Repeat until you can't think of NEW tests. Emphasis on "NEW".

    Not useless variations of existing tests. Not tests that pass when you first run them because the code already exists. Not tests that don't actually test your code.

    Tests that are relevant to the feature you are testing; tests that exercise some logic you haven't written yet. If you can't think of a _new_, _relevant_, test, then you are finished.

    Combine this with other XP practices, such as "StoryTest Driven Development" aka "Acceptance-Test Driven Development" aka "Behavior Driven Development," to ensure that you only write the minimum code and tests for a new feature to work, and combine it with refactoring to maintain a good design.