On the XP mailing list today, Ron Jeffries responds to a question about code reviews:
Frankly, code reviews are so much worse than pair programming that a dose of them would make me fly to pair. Let's see if we can replicate my experience.
Here's one path through a network of a million decision points:
To do code reviews, everyone has to read the code beforehand, unless you're
doing a walkthrough (see below). I'd ask everyone to come together
physically to the review. Then I'd ask them to report truthfully how much
time they spent reviewing the code. Early on, I would report truthfully
that I had spent zero or very little time, in hopes of getting others to
admit the truth. When they admit the truth, I'd dismiss the meeting and
reschedule it. Dismiss and reschedule is the appropriate response to lack of preparation, according to Freedman and Weinberg.
Then, after a while, the only alternative is a walkthrough, since no one is
preparing effectively for the review. So we do walkthroughs for a while.
They are intensely boring, and few people stay involved. Note in your mind
the people who are present but not involved. At the end of the session,
say, holding your hand up, "Who else had a real problem staying engaged
with this walkthrough?" If there's honesty in the room, hands will go up.
Prompting may be necessary. Then: "Any ideas?"
Surely someone will think of "doing this in smaller groups or one on one".
Try it. Ask the team whether "we should empower the one-on-one folks to
change the code, and under what circumstances." Don't mention that this is
pair programming in all but name.
Try an experiment. You're "interested in collaborative programming".
Interested parties should come to the room to help. On the screen, start
writing a program. Ask for help with it, get the room to pair with you. Get
stuck (no need to fake this if you are me). Someone will start telling you
what to do. Don't get it (no need to fake this either). Get them to come up
and do it ... grabbing the chair that is accidentally beside you, while you watch.
Note that reviews often find things. Observe how many of them are resisted
by the original programmer, or are "too much trouble to fix now".
Build a few BVCs [Big Visible Charts] relating to time spent prepping, in the meetings, number of useful suggestions (by person if you can do it without problems), number of changes made in response to suggestions, ...
Code reviews are intensely painful, in my experience, and we were trained
by Freedman himself. There will be no need to set them up to be perceived
that way, though it will take honesty among the group to express it. After
doing enough code reviews, which, by the way, take far more than half the
group's time, a team that has heard of pair programming should be begging to
pair. About all you have to do is make sure that no one treats the review
session as nap time, and that you are early in recognizing the people who think it's a waste of time. Because they're right.
I have done reviews on a project that was building software for a medical device. We prepared for the code review (and also reviewed unit tests). We found issues (like not enough unit tests for specific parts of the code). The programmer corrected the issues after the review, and there was a second review (a walkthrough with the coder and one reviewer) to confirm that the issues were corrected. We signed off on pieces of paper to have the paper trail that the FDA would expect. It wasn't the most exciting work, but it was one of the best teams, people-wise, that I've ever been privileged to work with: we truly respected each other and worked together very well.
Coding and reviewing this way was very slow. For eight person-hours of programming, expect at least half an hour of preparation per person (coder and three reviewers, so two person-hours of preparation), one hour of review (times four people, so four person-hours of review), one person-hour correcting issues, and another half-hour of walkthrough (times two people, so one person-hour). So for 8 person-hours of programming, there was another 8 person-hours of overhead: preparation, review, correcting, and walkthrough.
As is often said, "basic XP" isn't intended for safety-critical applications. But I could see the following working in that environment: pair-programming and test-driven development for four hours (eight person-hours), and a one-hour walk-through by two people who didn't do the pair-programming (two person-hours). I would expect very few issues to be found in the walkthrough that hadn't already been found (or avoided) and corrected in the pair programming, and therefore there would be much less need for correcting issues. So 8 person-hours of (pair) programming would have an overhead of 2 person-hours of walkthroughs.
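The overhead arithmetic in the two scenarios above can be sketched as a quick back-of-the-envelope calculation (the hour and head-count figures are simply the estimates from the preceding paragraphs, not measurements):

```python
# Back-of-the-envelope comparison of review overhead, using the
# estimates above: person-hours = clock hours * number of people.

def person_hours(steps):
    """Sum (hours, people) pairs into total person-hours."""
    return sum(hours * people for hours, people in steps)

# Formal review process: 8 person-hours of solo programming, plus...
review_overhead = person_hours([
    (0.5, 4),  # preparation: coder + three reviewers
    (1.0, 4),  # review meeting: four people
    (1.0, 1),  # coder correcting issues
    (0.5, 2),  # follow-up walkthrough: coder + one reviewer
])

# XP-style alternative: 8 person-hours of pair programming, plus...
xp_overhead = person_hours([
    (1.0, 2),  # walkthrough by two people who didn't pair on the code
])

print(review_overhead)  # 8.0 -- overhead equals the programming time itself
print(xp_overhead)      # 2.0 -- a quarter of the formal-review overhead
```

The point of the comparison: under these estimates, formal reviews double the total effort, while the XP-style walkthrough adds only 25% on top of the pairing.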
While I was working on the medical device project, we didn't have acceptance testers or automated acceptance tests. If we had been doing XP, we would have had better testing: better unit tests, and real acceptance tests. I didn't stay on that project through completion. When our (well-liked but not very political) manager was removed from the project, and our team leader quit, almost everyone else on the team quit too. I was one of the last ones to leave, accepting an offer from Apple Computer, Inc. and moving to California. I heard that the project eventually finished, with half the manpower and twice the planned schedule. Very likely the acceptance testing was done at the end of that project.