C. Keith Ray

C. Keith Ray writes about and develops software on multiple platforms and in multiple languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Sunday, September 13, 2009

Lean in a Nutshell

(originally posted 2003 Dec. 31 on my old blog-site)

On the Lean Development mailing list, John Roth summarizes the heart of Lean: (my emphasis)

Lean is the opposite of fat, and the only basic principle of lean thinking is to eliminate waste. As the saying goes, all else is commentary, or at least an elaboration on that one basic thought: find a faster, better, cheaper way to add value to the product, and do it at all scales, from the total, end to end process on down to the individual practices used to construct it.

[...] In manufacturing, the first exercise is to squeeze as much in-process inventory out of the value chain as possible. Doing that will force a huge number of other beneficial changes.

In Lean Software Development, the first exercise is to go to fixed length iterations, where you have a production quality deployable piece of software at the end of each iteration. Doing that one thing will force a huge number of other changes in the process. I can't emphasize those two criteria too much: production quality and deployable. If you need further testing, signoffs or work from some other department after the iteration ends, you don't have it.[...]


Let's look at the seven core principles of lean, and compare them to Extreme Programming practices:

1. Eliminate Waste. Waste is anything that doesn't contribute to adding value to the product. One-hour status meetings waste time, so XP has 15-minute daily standup meetings instead. Excessive documentation and planning wastes time and energy, so XP recommends creating just enough documentation and planning, at appropriate levels of detail. Debugging is a waste, so XP does test-driven-development, which reduces debugging time and reduces time spent manually testing. Slow communication to/from the Customer/Domain Expert is a waste, so XP recommends having the Customer on-site.

2. Amplify Learning. Creating software is a 'learning' process. Typically developers do this learning alone, and that knowledge doesn't spread around to other developers very quickly. XP does coding in pairs to spread knowledge around faster. (Code reviews can also do this, but with higher overhead.) The Customer/Domain Expert is learning, too, as he sees his ideas implemented and developers ask him for details needed for acceptance tests. XP teams often do Retrospectives each iteration to bring learning to the whole team. Having all the developers and Customer working in a "war-room" also helps the whole team learn together.

3. Decide as Late as Possible. In Release Planning, stories are "promises for future communication" - the Customer/Domain Expert describes each feature/story in just enough detail for the engineers to provide a rough estimate. In Iteration Planning, the Customer describes the story in more detail, so the engineers can break it down into a task-list and/or rough design. The Customer/Tester specifies acceptance tests, which test the "what" not the "how", and then the programmers do test-driven-development where "what" tests and "how" coding are interleaved in a short cycle.

4. Deliver as Fast as Possible. "Simple Code" does everything necessary for the current requirements, but not more, so the team can ship it quickly. Keeping the quality high via test-driven-development avoids coding on an unstable foundation, wasting time debugging. Having the Customer/Domain Expert within range of spoken questions allows getting requirements into code in a minimum of time.

5. Empower the Team. XP encourages all team members to design, test, and code, instead of restricting those activities to a few. It encourages people to sign up for tasks, instead of assigning them to those who may not be as willing or as able. Anyone can improve code at any time, as long as the tests continue to pass - allowing new learnings to be reflected in the code/design.

6. Build Integrity In. With short iterations where something isn't "done" until it passes (automated and manual) acceptance testing, and test-driven-development ensuring that every line of code is tested, the XP team always builds on top of working software. In every iteration, all automated testing is repeated frequently, detecting any "breakage" of code as soon as possible. Any bugs found by manual testing, or by failing automated tests, are not just fixed but also prompt the creation of new automated tests.

7. See the Whole. Before beginning the first iteration, the XP team has a release plan. The Customer/Domain Expert has created that plan with the development team, and together they maintain and update the plan throughout the project, measuring and charting progress. Code is only implemented according to the current and previous requirements, so there is no work wasted on a requirement that may be dropped... however, engineers are not going to write code that will be difficult to adapt to a future requirement if that future comes around. To use the (not very good) bridge-building analogy: the drawbridge may not have motors in this iteration, but the builders are not going to build the bridge out of stone in this iteration, ignoring the need for mobility in the future.

By the way, I hate the "bridge-building" analogy, because writing software isn't building, it is designing. Running the compiler and linker, the associated "build" scripts, etc., is the "building". If building a bridge were as quick and easy as running a compiler and linker, we'd build bridges multiple times in order to test them. Any comparison of software development to bridges should be to designing bridges - an iterative, learning activity where multiple solutions are designed and tested (via mathematical/computer models or physical models) until all the requirements are met.

Wednesday, June 10, 2009

Measured Throughput - Where are your bottlenecks?

Agile Hiring: see Esther Derby's article on Hiring for a Collaborative Team.

Johanna Rothman writes about an organization where the number of requirements implemented is always significantly less than the number of requirements requested. They've been blaming the testers (!) and not seeing that their throughput through the whole system is limited. Given the numbers that she posts, I compute that the throughput is about 3.4 completed requirements per week for project 1, 3.7 per week for project 2, and 2.3 per week for project 3.
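The arithmetic is just completed requirements divided by elapsed weeks. Here's a minimal sketch of it; the totals and durations below are hypothetical placeholders chosen only to reproduce the rates above, not the actual figures from Johanna's post:

# Throughput = completed requirements / elapsed weeks.
# These totals and durations are hypothetical placeholders, not the
# actual figures from Johanna Rothman's post.
projects = {
    "project 1": {"completed": 85, "weeks": 25},
    "project 2": {"completed": 74, "weeks": 20},
    "project 3": {"completed": 46, "weeks": 20},
}

for name, figures in projects.items():
    throughput = figures["completed"] / figures["weeks"]
    print(f"{name}: {throughput:.1f} completed requirements per week")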

If that company accepted that they're only going to get about 3 requirements per week implemented, no matter how long the project is, they could avoid waste in the "requirements/design" phase by not specifying more requirements than their measured throughput allows.

They could also use that figure to move to incremental development, and perhaps learn practices that would improve throughput. If they used XP or another appropriate set of agile practices, they could maximize information flow and minimize delays between requirements, implementation, and testing.

To find the bottleneck, we need to know who's waiting on whom in this organization. Is there a queue of finished-but-not-tested work piling up in front of the testers? Then it could be that the testers are the bottleneck. One way to deal with a bottleneck is to change to a "pull system". In this case, a programmer only does implementation when a tester is about to be ready to test their work. That leaves the programmers idle, but they could use that "idle" time to improve their development process (code reviews, unit testing, etc.).

If instead the bottleneck is the programmers, the "pull system" can extend to the requirements: don't specify a requirement until a programmer is about to be ready to implement it and a tester is about to be ready to test it. To move out of a phasist approach, work on testing (creating automated tests) and implementation can overlap -- or (the XP way) the programming can actually follow the creation of tests and use the tests as an executable version of the specification.
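A minimal sketch of those two pull rules, written as simple predicates; the function names, capacities, and queue lengths are illustrative inventions, not anything from the original posts:

# Pull rules sketched as predicates. Capacities and queue lengths here
# are made-up illustrative numbers.

def ready_to_implement(untested_queue, tester_capacity):
    """A programmer pulls new implementation work only when the pile of
    finished-but-not-tested work is below what the testers can absorb."""
    return untested_queue < tester_capacity

def ready_to_specify(unimplemented_queue, programmer_capacity):
    """A requirement is specified only when a programmer is about to have
    capacity to implement it."""
    return unimplemented_queue < programmer_capacity

# Three features already waiting for test, two testers free: programmers
# should spend the slack on code reviews or unit tests instead of adding
# to the pile.
print(ready_to_implement(untested_queue=3, tester_capacity=2))         # False
print(ready_to_specify(unimplemented_queue=1, programmer_capacity=4))  # True

Keeping the rules as predicates (rather than a full scheduling simulation) keeps the point visible: each stage only starts work the next stage downstream can actually absorb.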

(Reposted: original was posted 2004.Apr.05 Mon)

Wednesday, May 27, 2009

Test Driven Dialog

repost of my blog entry of 2003, April 08:

This is a dialog I created in July 2002, from messages on the Test Driven Development mailing list -- an email conversation between Robert Blum and Roger Lipscombe. After my editing, they didn't recognize their words, so perhaps I did a little more writing than editing. Any similarity to my favorite comics characters is purely coincidental.

Calvin: I don't do test driven programming. I write some code, and test it manually, walking through it in the debugger, checking that it works.

Hobbes: How do you know that it works? Could you express that as a test before you run your code?

Calvin: I can't think about the test until I've thought about the code.

Hobbes: Changing the habit of writing code before the test can be a bother. Try this: Write the code on paper. Then write a test.

Calvin: Paper?!

Hobbes: What if you just outlined the code on a whiteboard and didn't write any real code? Could you still write a test first?

Calvin: I'd feel silly doing that alone.

Hobbes: What if you explained it to your pair partner? Could you still write a test first, or maybe have your pair partner write the test?

Calvin: That could work.

Hobbes: I notice you're refactoring without tests. What's that like?

Calvin: After I refactor, I have to test it again manually, in case I make a mistake doing the refactoring.

Hobbes: Wouldn't it be easier to run some quick automated tests before and after you refactor?

Calvin: Well duh! If I had the tests.

Hobbes: If you do test-driven development, your tests drive the design of your code, and as a side-benefit, you get a nice small set of automated tests that make refactoring safe.

Calvin: Perhaps if I start by thinking about the interfaces that my class or module would need to support, I would be able to write the unit test first? Does this sound like the kind of thing I should be trying more of?

Hobbes: I think so. If I were given just the task of writing a class or module, my first test would describe a very simple case of what I wanted from that code. The interface of this not-yet-written code is evolving, of course, during my test-driven development process. Since I'm calling the functions that I need at each moment, without a care how (or if) they are implemented, it tends to create a pretty useable interface.

Calvin: So you find out quickly if your interface functions are no good when you're forced to use them as a client, in the tests, before investing a lot of work in writing them!

Hobbes: Now you're getting it.

Calvin: Tell me, when you think of a feature, is your next thought about testing or implementation?

Hobbes: I just think about using that feature. Which leads me to an example of how I would use it. Now I just write down the results I expect from that, and voila, there's the test. I'm not even actively thinking 'I have to write a test now. How can I test it?'. My main focus is how I can use it.

J. B. Rainsberger added a few more lines:

Hobbes: Have you written tests for that code yet?

Calvin: Tests?! TESTS?! I don't need tests! What do I need tests for? Don't you think I can write something this simple without using tests as a crutch?! I know it's right!

Hobbes: (rolling his eyes) Of course you do.

Calvin: Right. Now help me fix these last four bugs.

And Laurent Bossavit capped it off:

Hobbes: I thought I was doing that.
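To make the dialog's "use it before you write it" idea concrete, here is a minimal test-first sketch; the Stack class and its interface are invented for illustration and aren't from the mailing-list conversation:

import unittest

# The test is written first and describes how the not-yet-written code will
# be used; that usage is what drives the interface.
class TestStack(unittest.TestCase):
    def test_pop_returns_the_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual(stack.pop(), "second")

# The simplest implementation that makes the test pass. Before this class
# existed, the test failed -- which is the "red" step of the cycle.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

if __name__ == "__main__":
    unittest.main()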

Tuesday, March 10, 2009

New Industrial Blogic

Industrial Logic, Inc., my employer, now has a blog, built using technology from our eLearning software. (Which means it doesn't have an RSS feed yet, but I'm sure it will eventually.)

The first entry, "Do What You Love In A Down Economy" by Joshua Kerievsky, is here. I'll probably contribute blog entries there, as well as continuing to post more personal blog entries here.

Josh has also finally made public the page in our eLearning that graphically demonstrates technical debt. It's the movie in which a 28-page printout of a single function is draped down a hallway.

Students of our training courses and eLearning have been asking Josh to make this video public for years. Now it is.

Thursday, February 5, 2009

Who Tests the Tests?

Q: Who tests the tests?
A: The whole team.

1. Story Tests or Customer Tests test the fully-integrated application. Those tests are developed in close collaboration with the Customer and his/her helpers (for example, Business Analysts, Professional Testers).

2. Programmer Tests, which come from doing Test Driven Development, are tested by seeing them fail (for the right reasons) before making them pass. When pair programming, two sets of eyes see those tests and the code that makes them pass.

The Programmer and Story tests are also run again many times during development. And incremental/iterative development (a la XP, or Scrum, or any of the other Agile methods) requires refactoring as the team adds new features. Refactoring will change the code, but the tests should continue to pass.

The tests continue to pass despite refactoring because the behavior of the tested code should stay the same until both the tests and code-under-test are deliberately changed. If the tests fail during refactoring, either the tests are too close to the code, or the refactoring wasn't done entirely correctly. In either case, at least one or two programmers are going to look at those tests and the associated code again. So the tests are getting another set of eyes reviewing them.

Sometimes I have written a programmer test that I expect to fail, and it passes unexpectedly. I take that seriously. Is the test not good enough? Is the code that I planned to change already doing what I want? Good thing I have a pair to help me answer those questions.
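Here's a small sketch of what "seeing a test fail for the right reason" can look like in practice; the discount function and its rule are invented examples, not anything from this post:

import unittest

# Step 1: write the test first and run it against a stub that does nothing
# useful. It should fail with an assertion about the behavior -- not with an
# import error or a typo. That first failure is what "tests the test".
#
# Step 2: write the code below until the same test passes, then keep running
# it during every later refactoring.

def bulk_discount(quantity, unit_price):
    # Illustrative rule: 10% off when buying ten or more items.
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9
    return total

class TestBulkDiscount(unittest.TestCase):
    def test_ten_or_more_items_get_ten_percent_off(self):
        self.assertAlmostEqual(bulk_discount(quantity=10, unit_price=5.0), 45.0)

    def test_fewer_than_ten_items_pay_full_price(self):
        self.assertAlmostEqual(bulk_discount(quantity=2, unit_price=5.0), 10.0)

if __name__ == "__main__":
    unittest.main()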

Saturday, January 3, 2009

Peer Discussion Improves Student Performance

Science Daily reports that peer discussion using "clickers" helps students learn in a way that simple lecturing does not.

Clickers are simple audience response devices, similar to a TV remote control, that allow students to record their answers to thought-provoking, multiple-choice questions in class. After students answer a question individually, the instructor often asks them to discuss the question and then vote again before revealing the answer. After discussion, they usually do better on the question - but why?

"The important point is that none of the students were told what the right answer was," said Su. "Even when students in a discussion group all got the initial answer wrong, after talking to each other they were able to figure out the correct response, to learn. That was unexpected, and I think that's dramatic."


Industrial Logic's Agile eLearning allows students to add comments or questions on every page, and even comment or question specific parts of a quiz. You can see a sample of this in the new videos in the Welcome Album. (See "Community and Feedback" in the track "Find Your Way: Start Here")