C. Keith Ray

C. Keith Ray is developing software for Sizeography and Upstart Technology. He writes code for multiple platforms and languages, including iOS® and Macintosh®.
Go to Sizeography and Upstart Technology to join our mailing lists and see more about our products. Keith's Résumé (pdf)

Tuesday, November 23, 2010

High technical debt = slum

Sometimes I think the "debt" analogy is too clean. Brian Foote and Joseph Yoder used many analogies in their "Big Ball of Mud" paper, including "Shanty Town", but I think the best analogy for the products I've seen with large amounts of technical debt is "Slum".

Products with large technical debt are money-makers, in spite of their problems (until the problems cost too much).

Slums have slum-lords, who profit from lack of maintenance, who ignore the pain of the inhabitants (developers and testers), and who also ignore the benefits of a clean environment... (of course, you can only take an analogy so far).

Still, it takes an empowered group of inhabitants to transform a slum into a desirable place to live and work. That requires management support.

Some books I recommend to those new to Agile

The following are some books I recommend to developers new to Agile software development.

  • This book is new, pretty complete, and easy-to-read about adopting agile software development: The Agile Samurai

  • This book covers the same ground as the Agile Samurai book. It's a little more in-depth. Most of it is now free on-line, but you can buy the book from Amazon: The Art of Agile Development.

Industrial Logic has eLearning and in-person workshops on TDD, Refactoring, Design Patterns, and other topics. This web-based eLearning contains videos, exercises, and quizzes that are, for most people, more effective than reading a book. I would recommend both.

Friday, November 19, 2010

Going meta sometimes helps.

One of the sessions I participated in at AYE 2010 was Steve Smith's session on "Power, Authority, and Teams". We did an exercise where we individually tried to solve a problem (on paper, a problem whose answer can be scored as to correctness) and then we were divided into four teams, with one observer each, to try to solve the same problem.

The team that I was on started with some direct actions by individuals: deciding we would use a flip-chart on an easel, and moving the flip-chart away from another team (which, funnily enough, put us next to another flip-chart on an easel, so we could have avoided carrying that easel around). One participant decided to act as scribe, and off we went, problem-solving. I'm sure it looked like pure chaos. And it was. (But not necessarily in a bad way.)

As we worked through the problem, sometimes we'd ask for a show of hands in favor of a particular part of the solution. We never actually formalized how voting would work, and certainly never had more than one third of the group raising their hands. I think a lot of us in the group were trying to gauge agreement by audio and visual cues, but we didn't go "meta" long enough to formalize how we would signal agreement. We didn't appoint roles, or vote on a leader.

You have to remember that many of us were coaches. "Agile coaches," even. We know about dot-voting (I almost suggested it once during this chaos) and consensus/thumb-voting. We know about team formation, the Satir change model, self-organizing teams. Some of us had been through Problem-Solving Leadership training. We didn't use any of those tools. I know I could have, but I didn't, and I expect some others were thinking the same thing.

After struggling with one problem solving technique for a while, one of the Europeans in the group complained that our usage of American-style measurements was making it hard for him to contribute. I wrote "1 meter ≈ 3 feet" on the flip-chart to help him out, but another member of the team proposed working in relative sizes instead of absolute sizes. We transitioned to that problem-solving style and, after a little confusion, the team seemed to jell.

We had come to an agreement on process, though we never explicitly agreed on it through any formal means.

Later, we could compare our individual scores on the problem with our team scores. Our team scored higher than any of our individual scores, though we were in third place compared to the other three teams. The team with the highest score had chosen to use computers to solve the problem. Imagine! A bunch of software people using computers to solve the problem! :-)

OK, our team had a moment of conversation, early on, where some of us agreed not to use computers for the problem-solving, as that seemed against the spirit of the simulation. Again, not a formally-ratified agreement, but no one in our team went against that decision during the simulation. We were supposed to be learning about power: how people interact with each other, so we didn't bring computers into the simulation.

The team in fourth place had a score that was lower than any of their individual scores. I really wish I could have observed what was going on in that team. Somehow their collective problem-solving skill was the complete opposite of their individual skills.

I think that even if we had formalized how our team made its decisions, we probably would have come to the same solution that we did in our informal way. We might have done so faster. We might have included insights from some of the quieter members of the team, and might have done a little better.

It's also possible that if we had been too formal, we could have done worse, perhaps much worse.

Two points I want to mention.

In a team-building exercise at AYE 2005, my team happened to be all PSL graduates, and no one on the other team was a PSL graduate. My team quickly jelled. We decided to enjoy ourselves and not stress out. We could see the other team, across the room, going through the "forming, storming, norming, performing" stages. We ended up coming in second place (of two teams)... but each team was working on a different LEGO model, so the difference in performance could be the difference in complexity of the models, or differences in our LEGO-building skills. As much as we would like simple causes and effects, sometimes effectiveness in one area doesn't mean success in another.

The other point I want to mention, I just read in The Gift Of Time, in the chapter written by Willem van den Ende: "Solving the Groupthink Problem". He quotes a diagram of effects by Jerry Weinberg (Quality Software Management vol. 3). Rather than re-print the image, I'll try to describe it.

"Effectiveness of meta-planning" directly affects "Effectiveness of strategic planning".

"Effectiveness of strategic planning" directly affects "Effectiveness of action planning".

"Effectiveness of action planning" directly affects "Quality of Action".

"Other Factors" directly affects "Effectiveness of action planning" and also directly affects "Quality of Action".

"Change Artistry" directly affects "Quality of Action".

That may or may not speak for itself. (Probably not, but it's getting late.) Willem has a paper here that shows other diagrams of effects. Diagramming systems can be useful.

And one last thing... In a discussion with the "power" team, I listed a bunch of different ways we can have power (or influence): speaking louder, speaking more persuasively, deciding on a process for the team to follow, holding the marker and thus acting as scribe...

That last idea had some impact on the participant who had been acting as scribe. She had been thinking that by acting as scribe, she would be avoiding taking power in this simulation. But the scribe can influence what gets recorded, who gets listened to. It's a position of power. We coaches who sometimes facilitate meetings by acting as scribe, should keep that in mind.

There are all sorts of power.

Thursday, November 4, 2010

Beautiful tools

Go to this page and scroll down to Figure 4. Apple's IDE graphically shows you the results of static analysis by drawing arrows over the code.

Beautiful.

Wednesday, November 3, 2010

Virtual Functions in C++ make TDD easier, but at what cost?

Virtual functions in C++ allow us to do mocking, stubbing, and faking, which helps us test code in isolation in TDD or just microtesting in general. A frequent objection to virtual functions is that they "cost more" than simple function calls.

The cost of a virtual function is hardly ever important, unless the function is being called in a very tight loop or at interrupt time, and even then compiler/linker optimization might overcome the "extra" table lookup that a virtual function requires.

In C, calling a function through a function pointer is equivalent to calling a virtual function in C++. Implementations of file-system APIs in Linux and BSD Unix use function pointers internally to do what C++ does with virtual functions.

The cost of cache misses (in on-chip and off-chip caches) is a LOT bigger than the cost of virtual function dispatch. Incorrect branch prediction is another CPU-level cost that people often fail to account for.

The best way to know whether making a function virtual has any user-apparent cost is to measure the application with a tool that does non-invasive timing. Often there will be no visible effect if you make some functions virtual that were non-virtual before. (I'm talking about REAL applications, not benchmarks.)

Saturday, October 23, 2010

Summoning the Demon: Testing

James Bach is an advocate of exploratory testing and one of the founders of the context-driven testing movement. He gave a keynote in Sweden. Watch it here:


He says at one point, "You got to get good at what you're doing, and you got to get credit for what you're doing." But many testers don't blog about their work, don't talk about it on the internet. They don't establish a reputation. Then they don't have a fall-back position if they disagree with some stupid metrics that their managers want to impose on the testers.

There are lots of programmers out there blogging about their craft. Percentage-wise it's probably a small number, but it seems like even fewer testers do so.

James also says "I have a bunch of Google alerts. If you use certain code words in your blog I will read them. [...] If you say the name of the demon, he shall appear: and read your blog."

:-)

Hi James!

Saturday, October 2, 2010

My Response on TDD list on the "excessive cost" of unit testing


And also, an excessive focus on unit testing inhibits refactoring.

Not in my experience.

What if I split C into two or more classes?

So C was too large and now you're fixing it, splitting it into C and CPrime. Great! Good unit tests for C ensure this refactoring doesn't break desired behaviors. If a test fails, it's doing its job, warning you that you did part of the refactoring wrong. Because unit tests are closer to the code they test than integration tests are, it will be easier to locate the problem and fix it.

Do I have to rework and split all its tests, so that they'll be one-class unit tests on the new classes?

No. If you do refactorings in small steps as described in Martin Fowler's book, the tests don't change (except for name and signature changes that I hope you are using a refactoring tool to do.) CPrime is indirectly tested by the tests for C.

AFTER I've split the class, and the tests are still passing, I may move some tests to directly test CPrime.

Further refactorings might remove some of the forwarding-functions from C that you would have created if you were doing the Extract Class/Move Method refactorings as per Fowler.

What if I decide to combine C and D into a single class. What will that do to my tests?

You change all users of C and D appropriately, whether those users are tests or production code. Refactoring tools make that easy.

Avoiding the code smell "Duplicated Code" in both your test code and your production code ensures that the changes required for a Merge Class refactoring are minimal.

And how will my tests help me determine if the new combined class is correct? If I have only unit tests for C and D, then really I won't have any way to determine if the new combined "CD" class is working correctly?

There are three activities in TDD: (1) writing a test (usually one that fails first), (2) writing code to pass a test, (3) refactoring to clean up code smells.

By separating #3 from #2, you are much less likely to have problems. You do need new skills: refactoring is not the same as "rewriting"; you can't let code smells go unfixed for very long; and you need to know how (and when) to use fakes and mocks to test how objects collaborate and also break dependencies to test in isolation.

Because tests in TDD _do_ allow collaboration between multiple objects, they might not fit your definition of a "unit test". That's one reason we at Industrial Logic call them "microtests". Each microtest only tests a single behavior, setting up the class under test with real or fake collaborators as needed.

You might find IL's courses at http://elearning.industriallogic.com/gh helpful for learning Refactoring, Code Smells, TDD, etc.

Hope this helps,
C Keith Ray

Tuesday, September 7, 2010

What's the most important thing?

For Agile software development, what is the one thing that's so important that anything else is a distraction?

short answer: Working software

slightly longer answer: Being able to reliably build working software.

Thoughts?

Friday, April 23, 2010

Mind Heart Body 2

  1. Mind: More time creating value!
  2. Heart: No more debugging!
  3. Body: Test-drive new features.

Mind Heart Body 1

  1. Mind: More time creating value!
  2. Heart: No more debugging!
  3. Body: Write microtests whenever you find bugs.

Friday, April 16, 2010

Test-Driving C++

Reprinted from my blog in 2007.Jan.31 Wed

In most languages, when I'm using Test-Driven Development to create a class, I only put into that class those methods or fields that I need to pass a test. C++ has some exceptions to that, given how the compiler will generate aspects of a "canonical C++ class" for you.

I should explain the idea of a "Canonical C++" class. Imagine that I have this code:

class Buddy
{
public:
    Icon* myIcon;
    std::string myName;
};

Now, I didn't write a constructor, copy constructor, destructor, or assignment operator, but the compiler created those for me. It's as if I really wrote the following code:

class Buddy
{
public:
    Icon* myIcon;
    std::string myName;

    Buddy() // default constructor
    : myName() // invokes std::string's default constructor
    { // myIcon is not initialized; it probably has a garbage value here.
    }

    Buddy(const Buddy& other) // copy constructor
    : myIcon( other.myIcon ) // copy the variable's value
    , myName( other.myName ) // invokes std::string's copy constructor
    {
    }

    ~Buddy() // destructor
    {
    } // invokes std::string's destructor for myName.

    Buddy& operator=(const Buddy& other)
    { // assignment operator
        myIcon = other.myIcon; // copy the variable's value
        myName = other.myName; // call std::string's assignment operator
        return *this; // the compiler-generated version returns *this
    }
};

A "canonical" C++ class has a default constructor (and/or other constructors), a copy constructor, a destructor, and an assignment operator. These may be defined by the programmer or created by the compiler.

And this invisible compiler-generated code can be wrong, particularly if ownership of pointers or other resources is involved. Let's say that I test-drive a default constructor that sets up myIcon to point to a newly-created Icon object, and write the corresponding destructor code to delete the Icon object. It's hard to verify the "state" of an object after its destructor is called ('cuz it's GONE), but there are a few tricks to verify the behavior of a destructor that I won't get into here.

class Buddy
{
public:
    Icon* myIcon;
    std::string myName;

    Buddy()
    : myIcon( NULL )
    , myName( "no name" )
    {
        myIcon = new Icon(Icon::DEFAULT_ICON);
    }

    ~Buddy()
    {
        delete myIcon;
    }
};

SPECIFY_(Context,BuddyHasDefaultNameAndIcon)
{
    Buddy* aBud = new Buddy;
    VALUE( aBud->myName ).SHOULD_EQUAL( "no name" );
    VALUE( aBud->myIcon ).SHOULD_NOT_EQUAL( NULL );
    delete aBud;
}

This test will pass. (By the way, I'm using "ckr_spec" here, a Behavior-Driven-Design framework I've written in C++ in my spare time. I'll publish more about ckr_spec one of these days.) However, this test doesn't exercise the compiler-created copy constructor and assignment operator. AND THOSE ARE WRONG. Nothing (besides self-discipline) prevents anyone from writing the following (crashing) code:

void crashingCode()
{
    Buddy keith;
    Buddy keithClone(keith); // calls compiler-created copy constructor
    Buddy anotherKeith;
    anotherKeith = keith; // calls compiler-created assignment operator

    // destructors are called invisibly here - crash deleting the same Icon
    // object 3 times. (Also leaks an Icon object, too.)
}

The compiler-created copy constructor and assignment operator just copy the pointer to the Icon object. They don't create a NEW Icon object. So in "crashingCode" above, the Icon object created in the constructor of "keith" gets deleted three times, when the destructors for the "keith", "keithClone", and "anotherKeith" objects get called at the end of the function.

Therefore, when I'm test-driving a C++ class, very early on, I make a decision. Is this a "value" class that is going to support the copy-constructor and assignment-operator, or an "entity" class that should never be copied, because the instance represents something with a persistent identity? (These are some over-simplified ideas from Domain-Driven Design.) I can change my mind later, of course.

If my class is going to allow "value" semantics, then I'll need to write some tests to assure that the copy-constructor and assignment operator function correctly, whether I've written them, or the compiler has generated them.

If I'm not going to allow "value" semantics, then I need to signal to the compiler and to my fellow programmers not to generate or use the copy-constructor and assignment-operator. Declaring them private and unimplemented is how to do that.

class Buddy
{
public:
    Icon* myIcon;
    std::string myName;

    Buddy()
    : myIcon( NULL )
    , myName( "no name" )
    {
        myIcon = new Icon(Icon::DEFAULT_ICON);
    }

    ~Buddy()
    {
        delete myIcon;
    }

private:
    Buddy(const Buddy& other);
    // don't implement copy constructor

    Buddy& operator=(const Buddy& other);
    // don't implement assignment operator
};
// the crashingCode example will not compile now.

For entity objects, quite often I don't want to allow the default constructor either, so I would declare that private and unimplemented as well.

The moral of the story is that in C++, sometimes you have to write code to prevent the compiler from writing the code for you. Just add that to your TDD/BDD development process.