C. Keith Ray
Keith's Résumé (pdf)
Tuesday, November 23, 2010
High technical debt = slum
Some books I recommend to those new to Agile
The following are some books I recommend to developers new to Agile software development.
- This book is new, fairly complete, and an easy read on adopting agile software development: The Agile Samurai
- This book covers the same ground as the Agile Samurai book. It's a little more in-depth. Most of it is now free on-line, but you can buy the book from Amazon: The Art of Agile Development.
- This book is about test-driven development in C: Test Driven Development for Embedded C
- This is the original TDD book in Java: Test-Driven Development
- The original book on Refactoring.
- If you have legacy code, you also need to read this book: Working Effectively with Legacy Code.
Industrial Logic has eLearning and in-person workshops on TDD, Refactoring, Design Patterns, and other topics. This web-based eLearning contains videos, exercises, and quizzes that are, for most people, more effective than reading a book. I would recommend both.
Friday, November 19, 2010
Going meta sometimes helps.
Thursday, November 4, 2010
Beautiful tools
Wednesday, November 3, 2010
Virtual Functions in C++ make TDD easier, but at what cost?
Saturday, October 23, 2010
Summoning the Demon: Testing
Saturday, October 2, 2010
My Response on TDD list on the "excessive cost" of unit testing
And also, an excessive focus on unit testing inhibits refactoring.
Not in my experience.
What if I split C into two or more classes?
So C was too large and now you're fixing it, splitting it into C and CPrime. Great! Good unit tests for C ensure this refactoring doesn't break desired behaviors. If a test fails, it's doing its job, warning you that you did part of the refactoring wrong. Because unit tests are closer to the code they test than integration tests are, it will be easier to locate the problem and fix it.
Do I have to rework and split all its tests, so that they'll be one-class unit tests on the new classes?
No. If you do refactorings in small steps as described in Martin Fowler's book, the tests don't change (except for name and signature changes, which I hope you are using a refactoring tool to make). CPrime is indirectly tested by the tests for C.
AFTER I've split the class, and the tests are still passing, I may move some tests to directly test CPrime.
Further refactorings might remove some of the forwarding-functions from C that you would have created if you were doing the Extract Class/Move Method refactorings as per Fowler.
What if I decide to combine C and D into a single class. What will that do to my tests?
You change all users of C and D appropriately, whether those users are tests or production code. Refactoring tools make that easy.
Avoiding the code smell "Duplicated Code" in both your test code and your production code ensures that the changes required for a Merge Class refactoring are minimal.
And how will my tests help me determine whether the new combined class is correct? If I have only unit tests for C and D, won't I be left without any way to determine whether the new combined "CD" class is working correctly?
There are three activities in TDD: (1) writing a test (usually one that fails first), (2) writing code to pass a test, (3) refactoring to clean up code smells.
By separating #3 from #2, you are much less likely to have problems. You do need new skills: refactoring is not the same as "rewriting"; you can't let code smells go unfixed for very long; and you need to know how (and when) to use fakes and mocks to test how objects collaborate and also break dependencies to test in isolation.
Because tests in TDD _do_ allow collaboration between multiple objects, they might not fit your definition of a "unit test". That's one reason we at Industrial Logic call them "microtests". Each microtest only tests a single behavior, setting up the class under test with real or fake collaborators as needed.
You might find IL's courses at http://elearning.industriallogic.com/gh helpful for learning Refactoring, Code Smells, TDD, etc.
Hope this helps,
C Keith Ray
Tuesday, September 7, 2010
What's the most important thing?
Friday, April 23, 2010
Mind Heart Body 2
- Mind: More time creating value!
- Heart: No more debugging!
- Body: Test-drive new features.
Mind Heart Body 1
- Mind: More time creating value!
- Heart: No more debugging!
- Body: Write microtests whenever you find bugs.
Friday, April 16, 2010
Test-Driving C++
Reprinted from my blog, 2007.Jan.31 (Wed).
In most languages, when I'm using Test Driven Development to create a class, I only put into that class those methods or fields that I need to pass a test. C++ has some exceptions to that, given how the compiler will generate aspects of a "canonical C++ class" for you.
I should explain the idea of a "Canonical C++" class. Imagine that I have this code:
class Buddy
{
public:
Icon* myIcon;
std::string myName;
};
Now, I didn't write a constructor, destructor, or assignment operator, but the compiler created those for me. It's as if I really wrote the following code:
class Buddy
{
public:
Icon* myIcon;
std::string myName;
Buddy() // default constructor
: myName() // invokes std::string's default constructor
{ // myIcon is not initialized, it probably has a garbage value here.
}
Buddy(const Buddy& other) // copy constructor
: myIcon( other.myIcon ) // copy the variable's value
, myName( other.myName ) // invokes std::string's copy constructor
{
}
~Buddy() // destructor
{
} // invokes std::string's destructor for myName.
Buddy& operator=(const Buddy& other)
{ // assignment operator
myIcon = other.myIcon; // copy the variable's value
myName = other.myName; // call std::string's assignment operator
return *this; // the compiler-generated version returns *this
}
};
A "canonical" C++ class has a default constructor (and/or other constructors), a copy constructor, a destructor, and an assignment operator. These may be defined by the programmer or created by the compiler.
And this invisible compiler-generated code can be wrong, particularly if ownership of pointers or other resources is involved. Let's say that I test-drive a default constructor that sets up myIcon to point to a newly-created Icon object, and write the corresponding destructor code to delete the Icon object. It's hard to verify the "state" of an object after its destructor is called ('cuz it's GONE), but there are a few tricks to verify the behavior of a destructor that I won't get into here.
class Buddy
{
public:
Icon* myIcon;
std::string myName;
Buddy()
: myIcon( NULL )
, myName( "no name" )
{
myIcon = new Icon(Icon::DEFAULT_ICON);
}
~Buddy()
{
delete myIcon;
}
};
SPECIFY_(Context,BuddyHasDefaultNameAndIcon)
{
Buddy* aBud = new Buddy;
VALUE( aBud->myName ).SHOULD_EQUAL( "no name" );
VALUE( aBud->myIcon ).SHOULD_NOT_EQUAL( NULL );
delete aBud;
}
This test will pass. (By the way, I'm using "ckr_spec" here, a Behavior-Driven Development framework I've written in C++ in my spare time. I'll publish more about ckr_spec one of these days.) However, this test doesn't exercise the compiler-created copy constructor and assignment operator. AND THOSE ARE WRONG. Nothing (besides self-discipline) prevents anyone from writing the following (crashing) code:
void crashingCode()
{
Buddy keith;
Buddy keithClone(keith); // calls compiler-created copy constructor
Buddy anotherKeith;
anotherKeith = keith; // calls compiler-created assignment operator
// destructors are called invisibly here - crash deleting the same Icon
// object 3 times. (Also leaks an Icon object, too.)
}
The compiler-created copy constructor and assignment operator just copy the pointer to the Icon object; they don't create a NEW Icon object. So in "crashingCode" above, the Icon object created in the constructor of "keith" gets deleted three times, when the destructors for the "keith", "keithClone", and "anotherKeith" objects get called at the end of the function.
Therefore, when I'm test-driving a C++ class, very early on, I make a decision. Is this a "value" class that is going to support the copy constructor and assignment operator, or an "entity" class that should never be copied, because each instance represents something with a persistent identity? (These are some over-simplified ideas from Domain-Driven Design.) I can change my mind later, of course.
If my class is going to allow "value" semantics, then I'll need to write some tests to ensure that the copy constructor and assignment operator function correctly, whether I've written them or the compiler has generated them.
If I'm not going to allow "value" semantics, then I need to signal to the compiler and to my fellow programmers not to generate or use the copy constructor and assignment operator. Declaring them private and unimplemented is how to do that.
class Buddy
{
public:
Icon* myIcon;
std::string myName;
Buddy()
: myIcon( NULL )
, myName( "no name" )
{
myIcon = new Icon(Icon::DEFAULT_ICON);
}
~Buddy()
{
delete myIcon;
}
private:
Buddy(const Buddy& other);
// don't implement copy constructor
Buddy& operator=(const Buddy& other);
// don't implement assignment operator
};
// the crashingCode example will not compile now.
For entity objects, quite often I don't want to allow the default constructor either, so I would declare that private and unimplemented as well.
The moral of the story is that in C++, sometimes you have to write code to prevent the compiler from writing the code for you. Just add that to your TDD/BDD development process.