C. Keith Ray

C. Keith Ray writes about and develops software in multiple platforms and languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Sunday, September 13, 2009

Lean in a Nutshell

(originally posted 2003 Dec. 31 on my old blog-site)

On the Lean Development mailing list, John Roth summarizes the heart of Lean: (my emphasis)

Lean is the opposite of fat, and the only basic principle of lean thinking is to eliminate waste. As the saying goes, all else is commentary, or at least an elaboration on that one basic thought: find a faster, better, cheaper way to add value to the product, and do it at all scales, from the total, end to end process on down to the individual practices used to construct it.

[...] In manufacturing, the first exercise is to squeeze as much in-process inventory out of the value chain as possible. Doing that will force a huge number of other beneficial changes.

In Lean Software Development, the first exercise is to go to fixed length iterations, where you have a production quality deployable piece of software at the end of each iteration. Doing that one thing will force a huge number of other changes in the process. I can't emphasize those two criteria too much: production quality and deployable. If you need further testing, signoffs or work from some other department after the iteration ends, you don't have it.[...]


Let's look at the seven core principles of lean, and compare them to Extreme Programming practices:

1. Eliminate Waste. Waste is anything that doesn't contribute to adding value to the product. One-hour status meetings waste time, so XP has 15-minute daily standup meetings instead. Excessive documentation and planning wastes time and energy, so XP recommends creating just enough documentation and planning, at appropriate levels of detail. Debugging is a waste, so XP does test-driven-development, which reduces debugging time and reduces time spent manually testing. Slow communication to/from the Customer/Domain Expert is a waste, so XP recommends having the Customer on-site.

2. Amplify Learning. Creating software is a 'learning' process. Typically developers do this learning alone, and that knowledge doesn't spread around to other developers very quickly. XP does coding in pairs to spread knowledge around faster. (Code reviews can also do this, but with higher overhead.) The Customer/Domain Expert is learning, too, as he sees his ideas implemented and developers ask him for details needed for acceptance tests. XP teams often do Retrospectives each iteration to bring learning to the whole team. Having all the developers and Customer working in a "war-room" also helps the whole team learn together.

3. Decide as Late as Possible. In Release Planning, stories are "promises for future communication" - the Customer/Domain Expert describes each feature/story in just enough detail for the engineers to provide a rough estimate. In Iteration Planning, the Customer describes the story in more detail, so the engineers can break it down into a task-list and/or rough design. The Customer/Tester specifies acceptance tests, which test the "what" not the "how", and then the programmers do test-driven-development where "what" tests and "how" coding are interleaved in a short cycle.

4. Deliver as Fast as Possible. "Simple Code" does everything necessary for the current requirements, but not more, so the team can ship it quickly. Keeping the quality high via test-driven-development avoids coding on an unstable foundation, wasting time debugging. Having the Customer/Domain Expert within range of spoken questions allows getting requirements into code in a minimum of time.

5. Empower the Team. XP encourages all team members to design, test, and code, instead of restricting those activities to a few. It encourages people to sign up for tasks, instead of assigning them to those who may not be as willing or as able. Anyone can improve code at any time, as long as the tests continue to run - allowing new learnings to be reflected in the code/design.

6. Build Integrity In. With short iterations where something isn't "done" until it passes (automated and manual) acceptance testing, and test-driven-development ensuring that every line of code is tested, the XP team always builds on top of working software. In every iteration, all automated tests are repeated frequently, detecting any "breakage" of code as soon as possible. Any bugs found by manual testing, or by automated tests failing, are not just fixed, but also force the creation of new automated tests.

7. See the Whole. Before beginning the first iteration, the XP team has a release plan. The Customer/Domain Expert has created that plan with the development team, and together they maintain and update the plan throughout the project, measuring and charting progress. Code is implemented only according to the current and previous requirements, so no work is wasted on a requirement that may be dropped... however, engineers are not going to write code that will be difficult to adapt to a future requirement if that future comes around. To use the (not very good) bridge-building analogy: the draw-bridge may not have motors in this iteration, but the builders are not going to build the bridge out of stone in this iteration, ignoring the need for mobility in the future.

By the way, I hate the "bridge-building" analogy, because writing software isn't building, it is designing. Running the compiler and linker, the associated "build" scripts, etc., is the "building". If building a bridge were as quick and easy as running a compiler and linker, we'd build bridges multiple times in order to test them. Any comparison of software development to bridges should be to designing bridges - an iterative, learning activity where multiple solutions are designed and tested (via mathematical/computer models or physical models) until all the requirements are met.

Wednesday, June 10, 2009

Measured Throughput - Where are your bottlenecks?

Agile Hiring: see Esther Derby's article on Hiring for a Collaborative Team.

Johanna Rothman writes about an organization where the number of requirements implemented is always significantly less than the number of requirements requested. They've been blaming the testers (!) and not seeing that their throughput through the whole system is limited. Given the numbers that she posts, I compute that the throughput is about 3.4 completed requirements per week for project 1, 3.7 per week for project 2, and 2.3 per week for project 3.

If that company accepted that they're only going to get about 3 requirements per week implemented, no matter how long the project is, they could avoid waste in the "requirements/design" phase by not specifying more requirements than their measured throughput allows.
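The arithmetic behind those figures is just completed requirements divided by elapsed weeks. A minimal sketch (the counts below are made up for illustration; the real numbers are in Johanna's post):

```cpp
#include <cassert>

// Throughput = completed requirements / elapsed weeks.
// Feed in a project's totals to get its average weekly rate.
double throughputPerWeek(int completedRequirements, double elapsedWeeks) {
    return completedRequirements / elapsedWeeks;
}
```

For example, a project completing 34 requirements over 10 weeks has a throughput of about 3.4 per week - and planning more than that rate times the remaining weeks is speculative work.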

They could also use that figure to move to incremental development, and maybe learn practices that would improve throughput. If they used XP or another appropriate set of agile practices, they could maximize information-flow and minimize delays between requirements, implementation, and testing.

To find the bottleneck, we need to know who's waiting on whom in this organization. Is there a queue of finished-but-not-tested work piling up in front of the testers? Then it could be that the testers are the bottleneck. One way to deal with a bottleneck is to change to a "pull system". In this case, a programmer only does implementation when a tester is about to be ready to test their work. That leaves the programmers idle, but they could use that "idle" time to improve their development process (code reviews, unit testing, etc.)

What if, instead, the bottleneck is the programmers? The "pull system" can extend to the requirements: don't specify a requirement until a programmer is about to be ready to implement it and a tester is about ready to test it. To move out of a phasist approach, work on testing (creating automated tests) and implementation can overlap -- or (the XP way) the programming can actually follow the creation of tests and use the tests as an executable version of the specification.

(Reposted: original was posted 2004.Apr.05 Mon)

Wednesday, May 27, 2009

Test Driven Dialog

repost of my blog entry of 2003, April 08:

This is a dialog I created in July 2002, from messages on the Test Driven Development mailing list -- an email conversation between Robert Blum and Roger Lipscombe. After my editing, they didn't recognize their words, so perhaps I did a little more writing than editing. Any similarity to my favorite comics characters is purely coincidence.

Calvin: I don't do test driven programming. I write some code, and test it manually, walking through it in the debugger, checking that it works.

Hobbes: How do you know that it works? Could you express that as a test before you run your code?

Calvin: I can't think about the test until I've thought about the code.

Hobbes: Changing the habit of writing code before the test can be a bother. Try this: Write the code on paper. Then write a test.

Calvin: Paper?!

Hobbes: What if you just outlined the code on a whiteboard and didn't write any real code? Could you still write a test first?

Calvin: I'd feel silly doing that alone.

Hobbes: What if you explained it to your pair partner? Could you still write a test first, or maybe have your pair partner write the test?

Calvin: That could work.

Hobbes: I notice you're refactoring without tests. What's that like?

Calvin: After I refactor, I have to test it again manually, in case I make a mistake doing the refactoring.

Hobbes: Wouldn't it be easier to run some quick automated tests before and after you refactor?

Calvin: Well duh! If I had the tests.

Hobbes: If you do test-driven development, your tests drive the design of your code, and as a side-benefit, you get a nice small set of automated tests that make refactoring safe.

Calvin: Perhaps if I start by thinking about the interfaces that my class or module would need to support, I would be able to write the unit test first? Does this sound like the kind of thing I should be trying more of?

Hobbes: I think so. If I were given just the task of writing a class or module, my first test would describe a very simple case of what I wanted from that code. The interface of this not-yet-written code is evolving, of course, during my test-driven development process. Since I'm calling the functions that I need at each moment, without a care how (or if) they are implemented, it tends to create a pretty useable interface.

Calvin: So you find out quickly if your interface functions are no good when you're forced to use them as client, in the tests, before investing a lot of work at writing them!

Hobbes: Now you're getting it.

Calvin: Tell me, when you think of a feature, is your next thought about testing or implementation?

Hobbes: I just think about using that feature. Which leads me to an example of how I would use it. Now I just write down the results I expect from that, and voila, there's the test. I'm not even actively thinking 'I have to write a test now. How can I test it?'. My main focus is how I can use it.

J. B. Rainsberger added a few more lines:

Hobbes: Have you written tests for that code yet?

Calvin: Tests?! TESTS?! I don't need tests! What do I need tests for? Don't you think I can write something this simple without using tests as a crutch?! I know it's right!

Hobbes: (rolling his eyes) Of course you do.

Calvin: Right. Now help me fix these last four bugs.

And Laurent Bossavit capped it off:

Hobbes: I thought I was doing that.
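The flow Hobbes describes - imagine using the feature, write down the results you expect, and let the interface fall out of that - can be sketched in code. The Stack class here is a made-up example, not from the dialog:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// A hypothetical integer Stack. The test function below was imagined first;
// push/pop/isEmpty are simply the calls that the test wanted to make.
class Stack {
public:
    bool isEmpty() const { return items.empty(); }
    void push(int value) { items.push_back(value); }
    int pop() {
        if (items.empty()) throw std::logic_error("pop on empty stack");
        int top = items.back();
        items.pop_back();
        return top;
    }
private:
    std::vector<int> items;
};

// The "test" that drove the interface: use the feature, write down what
// you expect, and the assertions fall out.
void TestPushThenPopReturnsValue() {
    Stack stack;                // Arrange
    stack.push(42);             // Act
    assert(stack.pop() == 42);  // Assert
    assert(stack.isEmpty());
}
```

Notice the test never cares how the values are stored; it only pins down the behavior the client needs, which is exactly why the resulting interface tends to be usable.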

Tuesday, March 10, 2009

New Industrial Blogic

Industrial Logic, Inc., my employer, now has a blog, built using technology from our eLearning software. (Which means it doesn't have an RSS feed yet, but I'm sure it will eventually.)

The first entry, "Do What You Love In A Down Economy" by Joshua Kerievsky, is here. I'll probably contribute blog entries there, as well as continuing to post, perhaps, more personal blog entries here.

Josh has also finally made visible to the public the page in our eLearning that graphically demonstrates technical debt. It's the movie where a printout of a single function, 28 pages long, is draped out in a hallway.

Students of our training courses and eLearning have been asking Josh to make this video public for years. Now it is.

Thursday, February 5, 2009

Who Tests the Tests?

Q: Who tests the tests?
A: The whole team.

1. Story Tests or Customer Tests test the fully-integrated application. Those tests are developed in close collaboration with the Customer and his/her helpers (for example, Business Analysts, Professional Testers).

2. Programmer Tests, which come from doing Test Driven Development, are tested by seeing them fail (for the right reasons) before making them pass. When pair programming, two sets of eyes see those tests and the code that makes them pass.

The Programmer and Story tests are also run again many times during development. And incremental/iterative development (à la XP, Scrum, or any of the other Agile methods) requires refactoring as the team adds new features. Refactoring will change the code, but the tests should continue to pass.

The tests continue to pass despite refactoring because the behavior of the tested code should stay the same until both the tests and code-under-test are deliberately changed. If the tests fail during refactoring, either the tests are too close to the code, or the refactoring wasn't done entirely correctly. In either case, at least one or two programmers are going to look at those tests and the associated code again. So the tests are getting another set of eyes reviewing them.

Sometimes I have written a programmer test, which I expect to fail, and it passes unexpectedly. I take that seriously. Is the test not good enough? Is the code that I planned to change already doing what I want? Good thing I have a pair to help me answer those questions.

Saturday, January 3, 2009

Peer Discussion Improves Student Performance

Science Daily reports that peer discussion using "clickers" helps students learn in a way that simple lecturing does not.

Clickers are simple audience response devices, similar to a TV remote control, that allow students to record their answers to thought-provoking, multiple-choice questions in class. After students answer a question individually, the instructor often asks them to discuss the question and then vote again before revealing the answer. After discussion, they usually do better on the question - but why?

"The important point is that none of the students were told what the right answer was," said Su. "Even when students in a discussion group all got the initial answer wrong, after talking to each other they were able to figure out the correct response, to learn. That was unexpected, and I think that's dramatic."


Industrial Logic's Agile eLearning allows students to add comments or questions on every page, and even comment or question specific parts of a quiz. You can see a sample of this in the new videos in the Welcome Album. (See "Community and Feedback" in the track "Find Your Way: Start Here")

Monday, December 29, 2008

William Wake described the essence of an automated test as Arrange, Act, Assert. I've added "Erase", to account for the clean-up that some tests have to do. You might ask why "Erase" is added to "Arrange, Act, Assert" and not some word starting with "A". I think starting with "eh?" is close enough. :-)

"The thing about elves is they've got no ... begins with m," Granny snapped her fingers irritably."

"Manners?"

"Hah! Right, but no."

"Muscle? Mucus? Mystery?"

"No. No. No. Means like ... seein' the other person's point of view."

Verence tried to see the world from a Granny Weatherwax perspective, and suspicion dawned.

"Empathy?"

"Right. None at all. Even a hunter, a good hunter, can feel for the quarry. That's what makes 'em a good hunter. Elves aren't like that. They're cruel for fun [...]"

—Terry Pratchett, Lords and Ladies


In C++, with certain test frameworks, a test might be specified in a manner something like the following.



TEST(TestBlurImageFilter)
{
    // Arrange
    string outfileName = NewTempFileName("TestImageFilter");
    Image* sourceImage = new Image("lena.png");

    // Act
    ImageFilter* filter = new BlurImageFilter();
    filter->ProcessToFile(sourceImage, outfileName);

    // Assert
    AssertImagesEqual("expected_lena_blurred.png", outfileName);

    // Erase
    DeleteTempFile(outfileName);
    delete filter;
    delete sourceImage;
}


Note: generally you don't want to deal with files in unit tests; working in-memory would be much faster. Also, if this is one of those frameworks that throws an exception, or otherwise aborts the test if an assertion fails, then the "Erase" portion of the test won't get executed if AssertImagesEqual failed. Let's assume that's not a problem for the moment.

Let's imagine that you then write another test like so:



TEST(TestUnblurImageFilter)
{
    // Arrange
    string outfileName = NewTempFileName("TestImageFilter");
    Image* sourceImage = new Image("lena.png");

    // Act
    ImageFilter* filter = new UnblurImageFilter();
    filter->ProcessToFile(sourceImage, outfileName);

    // Assert
    AssertImagesEqual("expected_lena_unblurred.png", outfileName);

    // Erase
    DeleteTempFile(outfileName);
    delete filter;
    delete sourceImage;
}


Now you've got duplicated "Arrange" and "Erase" sections. And duplicated logic in tests can be just as bad as it would be in production code. Fortunately, most test frameworks already have support for extracting "Arrange" and "Erase" to methods in a "test fixture". The above code could be refactored to something like the following:


class ImageFilterTests : public TestFixture
{
public:
    ImageFilterTests()
    : sourceImage(NULL), filter(NULL)
    {
    }

    string outfileName;
    Image* sourceImage;
    ImageFilter* filter;

    virtual void SetUp()
    {
        // Arrange
        outfileName = NewTempFileName("TestImageFilter");
        sourceImage = new Image("lena.png");
    }

    virtual void TearDown()
    {
        // Erase
        DeleteTempFile(outfileName);
        delete filter;
        delete sourceImage;
    }
};

TEST_F(ImageFilterTests, TestBlurImageFilter)
{
    // Act
    filter = new BlurImageFilter();
    filter->ProcessToFile(sourceImage, outfileName);

    // Assert
    AssertImagesEqual("expected_lena_blurred.png", outfileName);
}

TEST_F(ImageFilterTests, TestUnblurImageFilter)
{
    // Act
    filter = new UnblurImageFilter();
    filter->ProcessToFile(sourceImage, outfileName);

    // Assert
    AssertImagesEqual("expected_lena_unblurred.png", outfileName);
}


Not only has this eliminated the duplicated logic; most unit test frameworks will also guarantee running the TearDown method even if the test fails, so you don't have to write your own try/catch blocks or other contortions for exception-safe "erase".

You'll see that I also added a constructor to ensure that the pointer variables have valid NULL values, so we don't delete garbage pointers if the Image or Filter objects were not allocated successfully. (You should consider using boost::shared_ptr and/or boost::scoped_ptr if you're dealing with object pointers in C++ code and tests, by the way.)

In those C++ test frameworks where the test-fixture creation and deletion is done just before and after executing the test, the SetUp and TearDown methods can (almost always) be replaced with a constructor and destructor instead. Using that and boost::scoped_ptr to ensure exception-safe object deletion would allow us to write the following code:



class ImageFilterTests : public TestFixture
{
public:
    ImageFilterTests()
    : outfileName(NewTempFileName("TestImageFilter")),
      sourceImage(new Image("lena.png"))
    {
        // Arrange
    }

    virtual ~ImageFilterTests()
    {
        // Erase
        DeleteTempFile(outfileName);
    }

    string outfileName;
    boost::scoped_ptr<Image> sourceImage;
    boost::scoped_ptr<ImageFilter> filter;
};

TEST_F(ImageFilterTests, TestBlurImageFilter)
{
    // Act
    filter.reset(new BlurImageFilter());
    filter->ProcessToFile(sourceImage.get(), outfileName);

    // Assert
    AssertImagesEqual("expected_lena_blurred.png", outfileName);
}

TEST_F(ImageFilterTests, TestUnblurImageFilter)
{
    // Act
    filter.reset(new UnblurImageFilter());
    filter->ProcessToFile(sourceImage.get(), outfileName);

    // Assert
    AssertImagesEqual("expected_lena_unblurred.png", outfileName);
}

Monday, December 15, 2008

C++ Mocking Framework

Google released Google Test for C++ earlier this year, and just recently released Google C++ Mocking Framework.

These are extensively documented... check them out!

Tuesday, December 9, 2008

What a Tangled Web We Weave, When We Don't Have Effective Tracking

This is a re-post of a blog entry I wrote in 2003...

Steve Norrie points to a recent online article by Jerry Weinberg published on CrossTalk, the Journal of Defense Software Engineering here: Destroying Communication and Control in Software Development.

I'd like to mention a few of the Extreme Programming (XP) solutions to some of the problems mentioned in this article. Be aware that XP is a light-weight process, relying on people rather than technology to do the right thing. (A savvy person can undermine technology anyway.)

Requirements. One of the first areas that communication can be destroyed is by not doing requirements well... not involving the customer, thinking that requirements are a waste of time, and so on. Extreme Programming recommends involving the customer, or a qualified representative of the customer, throughout the entire project. And in addition to talking about the requirements often and in detail, XP requires writing the requirements down in an executable form - automated acceptance tests.

The configuration management system (CMS). The CMS tracks requirements, design, code, test data, test results, user documentation, etc. Jerry notes that information in the CMS can be undermined by failing to keep it up to date, restricting read-access from people who should have access, removing data, or failing to put data into it. Extreme Programming doesn't require any specific CMS (software or otherwise) for this tracking, but does suggest using the simplest thing that actually works. For code, tests, and test data, I recommend using CVS or some other source-code-management system. For acceptance test results, many XP teams record those on a white board or poster board, updating them weekly or daily, charting them over the course of the project. XP does strongly recommend using index cards (story cards) for tracking requirements during initial and weekly planning, and also strongly recommends documenting the relationship between the automated acceptance tests and the story cards. The person playing the Customer role, as well as the whole team, is responsible for keeping track of the stories. Many XP teams also record the stories in web-accessible ways, such as a wiki.

In regards to a bug-tracking database, some XP teams track bugs the same way as stories - index cards and acceptance tests. You might think that this won't work, but consider that several XP teams have reported that their bug rate after implementing XP dropped from hundreds per six months to around one bug per month. It helps that bugs are not usually recorded until after a story is finished, and in XP, a story is not finished until it is passing its acceptance tests - this requires conversation between the tester and coder as soon as problems are noticed during the implementation and testing of the story. When a bug is recorded, it probably indicates that an acceptance test that was passing has started failing.

Weinberg recommends that you "set and enforce a policy of complete and open information at all times." Agile processes like XP need accurate information daily. Many projects keep tracking information on poster boards and white boards, visible not only to all team members, but anyone in management who walks by. This is the "Project Progress Poster" concept that Weinberg recommends in Quality Software Management: Anticipating Change. Since in my company, few people in management walk by, we keep tracking information in our wiki web pages.

Quality Assurance. Weinberg recommends "Prevent these abuses by having quality assurance report to the highest levels of management, and not to project management." In XP, QA testers are delegates of the person playing the Customer role -- the same person who defines the requirements. QA testers should not only be implementing and running the automated acceptance tests, but also running stress tests and manual testing of the product's user interface.

Weinberg reports that testing often comes too late in the project to be useful in effective risk management. (There's a phrase in XP circles: "Doctor, it hurts when I do this..."). Don't wait until it is too late. XP requires testing to start in the first iteration of the project -- the first week. This and other XP practices enables effective risk management.

The XP solutions noted here require that project management be willing to face reality at all times. One of the quickest routes to failing with XP is to not do XP. If project management destroys information, hides it, degrades it, or inserts misleading information, intentionally or not, it is going to be very difficult to have a successful project, no matter what methodology is used.

Tuesday, November 25, 2008

Science Daily Snips

Hopes for cleaner, safer energy:

Quicker, Easier Way To Make Coal Cleaner "Construction of new coal-fired power plants in the United States is in danger of coming to a standstill, partly due to the high cost of the requirement — whether existing or anticipated — to capture all emissions of carbon dioxide [...] Instead of capturing all of its CO2 emissions, plants could capture a significant fraction of those emissions with less costly changes in plant design and operation [...]"

Making Gases More Transportable "Chemists at the University of Liverpool have developed a way of converting methane gas into a powder form [...] made out of a mixture of silica and water which can soak up large quantities of methane molecules [...] might be easily transported or used as a vehicle fuel."

Tuesday, November 18, 2008

Agile Methods Help Programmers Shine

Now and then someone says something like "Agile methods won't turn a mediocre programmer into a star." That irritates me, because bad non-agile methods can make a very good programmer produce mediocre results. Without taking their environment into account, no one will know how good that "mediocre" programmer can actually be.

A star NASCAR driver probably won't win the race, if his car has a flat tire and his support team can't or won't replace it. It doesn't matter how good the driver is, if the car is bad.

One simple idea - that the "traditional" way of developing an application is by implementing one complete layer at a time (whether bottom-up or top-down) - dooms a project to deliver no working features until late in the development cycle. When feedback finally arrives about the features (wrong requirements and/or wrong implementations), there is often not enough time to fix all the problems, and so the developers look bad. The testers also look bad if they don't have enough time to test, or enough time for fixes to be tested.

Turn that idea around - develop one vertical slice of an application at a time, so each feature can be tested as soon as possible - and customers and testers can provide feedback early in the development cycle. That can make a difference in how good the final product can be. Instead of projects running late or shipping with bugs, the project has known good features, can ship when enough features have been done, and the developers look so much better. People might say the project has star programmers.

Many programmers and testers have been trained in that "traditional" style. It's assumed in many books, schools, and corporate cultures. I call it "Unconscious Waterfall" when the style is just assumed. It takes a very remarkable developer to do good work in this environment.

In some shops where they do something they call "Waterfall" and they make it work, it turns out that they verify that many features are working correctly early in the project - there are many places where requirements, tests, and code are examined, verified, validated. Those kinds of waterfall projects are pretty rare, from what I see.

Wednesday, November 5, 2008

Adopt pets from the Humane Society

I adopted two cats last weekend: a five-month-old female and a three-year-old female. Beautiful, curious, energetic, affectionate. I'll post pictures later.

Check the web pages of your local humane society. They probably have cats, dogs, rabbits and birds. If you can, give some of them a loving home.