C. Keith Ray

C. Keith Ray writes about and develops software for multiple platforms and languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Tuesday, April 21, 2015

Stages of Failure

(Originally posted 2003.Jun.07 Sat; links may have expired.)

From Rob Thomsett's page "Project Pathology: Causes, patterns and symptoms of project failure"... "We observed this pattern of failure in every one of the 20 major projects we reviewed. Fifteen of the 20 projects we reviewed had degraded to Step 4 and the remaining 5 were at Step 3."

The four stages of failure:

  1. Development team unilaterally de-scopes the project.
  2. Project manager requests additional people.
  3. Unpaid overtime work becomes the norm.
  4. Team has lost control of the project, but the team is in denial.

For an example of stage 4: "This denial was well evidenced in a project where all team members and the project manager felt that the project could meet a deadline 3 months out and that our review was a waste of time. In one day, we identified that the project was over 12 months behind schedule and would deliver [at best] in 15 months not 3! Our findings were wrong - the project finally delivered 18 months later."

Sunday, April 19, 2015

Shopping and Getting Better

(Originally posted 2003.Jun.05 Thu; links may have expired.)


Andy Hunt blogs about shopping, not being able to find a shampoo that doesn't have fruit juice or herbal essences in it. He calls this "herd marketing" and writes: "It should be [...] the things I want. Isn't that supposed to be the essence of marketing? Give the people what they want -- not what everyone else wants to give them, or even what you want to give them."

Well, my understanding of typical marketing is that it's all about getting people to want what you have to offer, not offering what people want. Harry Beckwith's book Selling The Invisible was refreshing in that he goes against the grain, and says that selling the steak is more important than selling the sizzle.

Getting Better

Laurent Bossavit quoted Beck in his blog, referring to http://c2.com/cgi/wiki?GettingBetter. Thanks, Laurent.

"How good the design is doesn't matter near as much as whether the design is getting better or worse. If it is getting better, day by day, I can live with it forever. If it is getting worse, I will die."

Friday, April 17, 2015

Open, Collaborative, Project Planning

(Originally posted 2003.May.09 Fri; links may have expired.)

Mary Poppendieck on the Lean Development mailing list writes:
The difference between a planned and a market economy is rooted in two different management philosophies:
  • Management-as-planning/adhering focuses on creating a plan that becomes a blueprint for action, then managing implementation by measuring adherence to the plan.
  • Management-as-organizing/learning focuses on organizing work so that intelligent agents know what to do by looking at the work itself, and improve upon their implementation through a learning process.

and links to Lauri Koskela and Greg Howell's article at: http://www.cpgec.ufrgs.br/norie/iglc10/papers/47-Koskela&Howell.pdf.

I've recently read two books on project management: one by David Schmaltz, and Eliyahu Goldratt's The Goal.

David Schmaltz makes a point in his book that a plan or organization done by one person or group will not be self-evident to another person or group. With that in mind, The Goal was somewhat frustrating, because the main character, who is changing the way his factory works, isn't involving the people on the factory floor in designing his process changes (he is involving other managers). In fact, of the few factory-floor union workers depicted in the novel, all but one of them are depicted as stupidly getting in the way of the process changes. I suppose one man against the world makes for some drama in a novel, but doesn't work that well in real life.

Goldratt writes in the afterword of The Goal that the major obstacles to process improvement reported by his readers were:
  • Lack of ability to propagate the message throughout the company.
  • Lack of ability to translate what they learned from the book into workable procedures for their plant.
  • Lack of ability to persuade decision makers to allow the change of some of the measurements.
which certainly shows that involving others is necessary for plans to work.

I also found The Goal frustrating because he made no mention of Deming, whose collaborative techniques for process improvement were proven "long ago" in Japan, and predated Goldratt's techniques.

Wednesday, April 15, 2015

Swift: Playing with HalfOpenInterval and Range

// HalfOpenInterval and Range

import Cocoa

func brackets<T: ForwardIndexType>(x: Range<T>, i: T) -> T {
    return x[i] // Just forward to subscript
}

var h1 = HalfOpenInterval<Int>(0, 4)
h1                    // 0..<4

h1.start              // 0
h1.end                // 4
// h1.startIndex      // error
// h1.endIndex        // error
h1.description        // "0..<4"
h1.debugDescription   // "HalfOpenInterval(0..<4)"
h1.contains(-1)       // false
h1.contains(0)        // true
h1.contains(1)        // true
h1.contains(3)        // true
h1.contains(4)        // false
h1.contains(5)        // false

var b = (h1 == h1)    // true

var h2 = 0..<4
h2                    // 0..<4

// h2.start           // error
// h2.end             // error
h2.startIndex         // 0
h2.endIndex           // 4
h2.description        // "0..<4"
h2.debugDescription   // "Range(0..<4)"
// h2[-1]             // error
brackets(h2, -1)      // -1
brackets(h2, 0)       // 0
brackets(h2, 1)       // 1
brackets(h2, 3)       // 3
// brackets(h2, 4)    // error
// brackets(h2, 5)    // error

b = (h2 == h2)        // true

// b = (h1 == h2)     // error

var h3 = 0..<4
h3                    // 0..<4

b = (h2 == h3)        // true

h1.isEmpty            // false
h2.isEmpty            // false

h3 = 0..<0            // 0..<0
h3.isEmpty            // true
h3.debugDescription   // "Range(0..<0)"

h1 = HalfOpenInterval<Int>(0, 0)
h1                    // 0..<0
h1.isEmpty            // true
h1.debugDescription   // "HalfOpenInterval(0..<0)"

More on Refactoring / Fear of Refactoring

(Originally posted 2003.Jun.04 Wed; links may have expired.)

Ken Arnold points out Martin Fowler commenting on Cringely's post on Refactoring.

Somebody, possibly Kent Beck, said something very much like "If the code quality is going down, you're doomed. If the quality, however bad to start with, is always going up, you have a chance at success." (I can't find the quote. I need google to Find What I mean, not what I ask it to find.)

Every company that has invested in software has invested in an asset that they can choose to improve or let decline in value. If those software assets never need to change - no bug-fixes, no enhancements, no adapting to changes in the software ecology (new versions of libraries, new OSes) - then don't refactor.

Fear of changing legacy code is healthy if the legacy code is not fully covered by automated tests. You need something to rapidly tell you whether you've broken something when you're doing a code improvement. Michael Feathers is working on a book about making legacy code testable here: http://groups.yahoo.com/group/welc/.

Probably a good motivator for improving code is measuring the cost of changing code for bug-fixes and feature additions (and measuring the Fault Feedback Ratio). The older and cruftier the code is, the higher the cost of changing it. The cost of refactoring (which will reduce maintenance costs in the long run) has to be weighed against the increasing costs of maintenance when no refactorings are being done.
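Tracking the Fault Feedback Ratio (new bugs traced back to fixes, per fix shipped) is simple arithmetic; here is a minimal sketch in Python. The warning threshold in the comment is a common rule of thumb, not a figure from this post:

```python
def fault_feedback_ratio(new_faults: int, fixes: int) -> float:
    """New faults traced back to fixes, divided by fixes shipped.

    A ratio creeping above roughly 0.3 is often read as a sign that
    the code has become too crufty to change safely (rule of thumb).
    """
    if fixes == 0:
        return 0.0
    return new_faults / fixes

# Example: 30 fixes shipped last month; bug reports were later
# traced back to 12 of them.
ratio = fault_feedback_ratio(12, 30)
print(ratio)  # 0.4
```

Plotting this ratio over time, alongside the cost per change, gives the kind of evidence that makes the case for (or against) investing in refactoring.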

Perhaps the best way of addressing fear of refactoring is by practicing it on toy projects. William C. Wake has a book called The Refactoring Workbook due in July. Preview chapters are on-line: http://www.xp123.com/rwb/.

The other key thing about refactoring is to improve the design by taking small steps. For example, most code has functions and methods that are too long, containing multiple 'bundles' of lines, sometimes with comments describing the intent of each bundle. Use the extract method refactoring to move each little bundle of lines into a new function, named after that comment. (See my example below.) When big methods have been broken down into small methods, duplicate code becomes more obvious, and can be removed, reducing the amount of code to be maintained and raising the level of abstraction in the code.

Big refactorings are much easier if lots of small refactorings have already been done. It is important that the team members agree on the value of the small refactorings - you don't want individuals to 'undo' each other's code improvements.

Example of "extract method" refactorings - pseudocode


 procedure testVerylongmethod()
    var result = Verylongmethod()
    var expectedResult = 2398
    assertEqual( expectedResult, result )
 end procedure

 function Verylongmethod()
   // set up foos
   var foos = array [1..3] of foobar
   foos[1] = new foobar(2,2,3,4)
   foos[2] = new foobar(3,4,6,3)
   foos[3] = new foobar(4,67,83,3)

   // calculate muckiness
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for

   return muckiness
 end function


(the test doesn't change, so I don't reproduce it here.)

 function setupfoos()
   var foos = array [1..3] of foobar
   foos[1] = new foobar(2,2,3,4)
   foos[2] = new foobar(3,4,6,3)
   foos[3] = new foobar(4,67,83,3)
  return foos
 end function

 function calculatemuckiness( foos )
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for
   return muckiness
 end function

 function Verylongmethod()
   var foos = setupfoos()
   var muckiness = calculatemuckiness( foos )
   return muckiness
 end function

Now imagine that there is Anotherlongmethod similar to the original function Verylongmethod, where the only difference is in how it sets up the foos array. It can be refactored to call the same calculatemuckiness method that had been extracted from Verylongmethod.


 procedure testAnotherlongmethod()
  var result = Anotherlongmethod()
  var expectedResult = 9487
  assertEqual( expectedResult, result )
 end procedure

 function Anotherlongmethod()

   // set up foos
   var foos = array [1..4] of foobar
   foos[1] = new foobar(2,9,3,4)
   foos[2] = new foobar(3,9,6,3)
   foos[3] = new foobar(4,9,83,3)
   foos[4] = new foobar(9,9,9,9)

   // calculate muckiness
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for

   return muckiness
 end function


(the test doesn't change, so I don't reproduce it here.)

 function setupfourfoos()
   var foos = array [1..4] of foobar
   foos[1] = new foobar(2,9,3,4)
   foos[2] = new foobar(3,9,6,3)
   foos[3] = new foobar(4,9,83,3)
   foos[4] = new foobar(9,9,9,9)
   return foos
 end function

 function Anotherlongmethod()
   var foos = setupfourfoos()
   var muckiness = calculatemuckiness( foos )
   return muckiness
 end function

Please forgive all the magic numbers; I wanted to keep the example self-contained. While the safety of these refactorings can often be determined by code inspection, they are best determined by tests - the tests don't change during (most) refactorings, and they should be passing before and after the refactoring.
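The pseudocode above can be made concrete. Here is a runnable Python version of the same extract-method refactoring; the Foobar class and the muckiness_quotient metric are stand-ins carried over from the pseudocode, not real code from any project:

```python
class Foobar:
    def __init__(self, a, b, c, d):
        self.values = (a, b, c, d)

    def muckiness_quotient(self, index):
        # Made-up metric; stands in for any per-element calculation.
        return sum(self.values) * index

# Before: one long method doing two things, marked by comments.
def very_long_method():
    # set up foos
    foos = [Foobar(2, 2, 3, 4), Foobar(3, 4, 6, 3), Foobar(4, 67, 83, 3)]
    # calculate muckiness
    muckiness = 0
    for index, foo in enumerate(foos, start=1):
        muckiness += foo.muckiness_quotient(index)
    return muckiness

# After: each commented 'bundle' extracted into its own function,
# named after the comment that described it.
def setup_foos():
    return [Foobar(2, 2, 3, 4), Foobar(3, 4, 6, 3), Foobar(4, 67, 83, 3)]

def calculate_muckiness(foos):
    return sum(foo.muckiness_quotient(i) for i, foo in enumerate(foos, start=1))

def very_long_method_refactored():
    return calculate_muckiness(setup_foos())

# The behavior is pinned down before and after the refactoring.
assert very_long_method() == very_long_method_refactored()
```

As in the pseudocode, a second long method that only sets up its foos differently could now reuse calculate_muckiness directly.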

Tuesday, April 14, 2015

Playing with Swift Array

// Playing with Swift Array

var a = Array<Int>([1,2,3])
a              //  [1,2,3]
a[0]           // 1
a[1]           // 2
a[2]           // 3

var b = Array<Int>(count:5, repeatedValue:42)
b   // [42, 42, 42, 42, 42]

a.startIndex   // 0
b.startIndex   // 0

a.endIndex     // 3
b.endIndex     // 5

a.count        // 3
b.count        // 5

a.capacity     // 4
b.capacity     // 6

var c = Array<Int>()
c   // 0 elements

a.isEmpty      // false
c.isEmpty      // true

a.first        // {Some 1}
b.first        // {Some 42}
c.first        // nil

a.last         // {Some 3}
b.last         // {Some 42}
c.last         // nil

// "[1, 2 3]"

// "[42, 42, 42, 42, 42]"

// "[]"

// "1, 2, 3]"

// "[]"

var f = a.generate()
// {{1, 2, 3}, _position 0}
f.next()         // {Some 1}
f.next()         // {Some 2}
f.next()         // {Some 3}
f.next()         // nil

a.append(-1)
a.capacity       // 12

a                // [1, 2, 3, -1]

b.removeLast()   // 42
b.extend(a)
b                // [42, 42, 42, 42, 1, 2, 3, -1]
b.removeLast()   // -1
b                // [42, 42, 42, 42, 1, 2, 3]

a.insert(31, atIndex: 0)
a                // [31, 1, 2, 3, -1]

a.removeAtIndex(2)   // 2
a                // [31, 1, 3, -1]

a.insert(13, atIndex: 2)
a                // [31, 1, 13, 3, -1]

a.removeAll()
a                // 0 elements

var aa = [10]
var bb = [[-1],[-2],[-3]]

var cc = aa.join(bb)
cc               // [-1, 10, -2, 10, -3]

-1 + 10 + -2 + 10 + -3    // 14

var dd = cc.reduce(0, combine: {(u: Int, t: Int) -> Int in u + t})
dd    // 14

dd = cc.reduce(0, combine: {$0 + $1})
dd    // 14

-1 * 10 * -2 * 10 * -3    // -600

dd = cc.reduce(1, combine: {(u: Int, t: Int) -> Int in u * t})
dd    // -600

dd = cc.reduce(1, combine: {$0 * $1})
dd    // -600

cc.sort(>)         // [10, 10, -1, -2, -3]
cc                 // [10, 10, -1, -2, -3]
cc.sort(<)         // [-3, -2, -1, 10, 10]
cc                 // [-3, -2, -1, 10, 10]

cc = [5, 2, 4, 8]
cc.sorted(<)       // [2, 4, 5, 8]
cc                 // unchanged = [5, 2, 4, 8]
cc.sorted(>)       // [8, 5, 4, 2]
cc                 // unchanged = [5, 2, 4, 8]

cc = [5, 2, 4, 8, 1, 3, 6, 7]

let ee = cc.sorted( {(v: Int, w: Int) -> Bool in
    if (v % 2) == 0 && (w % 2) != 0 { // even, odd
        return true
    } else if (v % 2) != 0 && (w % 2) == 0 { // odd, even
        return false
    } else if (v % 2) == 0 && (w % 2) == 0 { // even, even
        return v < w
    } else { // if (v % 2) != 0 && (w % 2) != 0 // odd, odd
        return w > v
    }
})
ee       // [2, 4, 6, 8, 1, 3, 5, 7]

let ff = ee.map( {100 + $0} )
ff               // [102, 104, 106, 108, 101, 103, 105, 107]

cc = [1, 2, 3, 4]
cc.reverse()     // [4, 3, 2, 1]
cc               // unchanged = [1, 2, 3, 4]

let gg = cc.filter({$0 % 2 == 0})
gg               // [2, 4]
let hh = cc.filter({$0 % 2 != 0})
hh               // [1, 3]

cc = [1, 2, 3, 4]
cc.replaceRange( Range(start: 1, end: 3), with: [5,6])
cc               // [1, 5, 6, 4]

cc.splice([13, 14], atIndex: 2)
cc               // [1, 5, 13, 14, 6, 4]

cc.removeRange(Range(start: 2, end: 4))
cc               // [1, 5, 6, 4]

// a.getMirror() // {{1, 2, 3}}


(Originally posted 2003.May.08 Thu; links may have expired.)

What's the difference between a bug in a new feature and a bug that suddenly appears in an old feature? With a new feature, one is getting added value imperfectly. With an old feature, one that used to work bug-free but is now broken, someone on your team is subtracting value.

For a very long time, experts like Jerry Weinberg have been recommending careful code reviews of maintenance changes, because a bug in a deployed system (like payroll or billing) can result in multiple millions of dollars in losses.

Someone inserting defects into working code is creating waste: not just the risk of printing bad checks or bills, but creating work for themselves and others to identify and fix the problem they created.

The danger in maintenance or incremental development is that an innocuous change breaks an existing feature. Extreme Programming deals with this danger in three ways: pair programming (continuous code review), extensive automated unit tests, and automated acceptance tests. Plus, most if not all XP teams have manual testing as well.

Many teams doing test-driven-development (a core practice of XP) report that their unit-test code coverage is nearly 100% statement/branch coverage without taking extra effort. This gives them the freedom to make an innocuous change (as part of a refactoring, for example), and see within minutes if it breaks anything. If it does, they can Undo their change, or evaluate the change more carefully.
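As a sketch of what "see within minutes if it breaks anything" looks like in practice, here is a hypothetical function with regression tests pinning down its behavior (the discount rule and all names are illustrative, not from the original post):

```python
def discounted_price(price: float, quantity: int) -> float:
    """10% off for orders of 10 or more items (hypothetical rule)."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return total

# Regression tests that pin down existing behavior. An "innocuous"
# edit during a refactoring -- say, changing >= to > -- fails these
# within seconds, which is the fast feedback the post describes.
def test_no_discount_below_threshold():
    assert discounted_price(5.0, 9) == 45.0

def test_discount_at_threshold():
    assert abs(discounted_price(5.0, 10) - 45.0) < 1e-9

test_no_discount_below_threshold()
test_discount_at_threshold()
```

With tests like these passing before a change, a red bar after the change means Undo (or a closer look), not a bug report from a user.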

Monday, April 13, 2015

So-Called Software Engineering Body of Knowledge (SWEBOK)

(Originally posted 2003.Jun.01 Sun; links may have expired.)

Grady Booch has this to say about the SWEBOK: "The SWEBOK I reviewed was well-intentioned but misguided, naive, incoherent, and just flat wrong in so many dimensions."

Cem Kaner writes: "I think SWEBOK promotes a set of practices that are sometimes appropriate, but in many contexts they are as outrageously expensive as they are remarkably ineffective. The idea of adopting these as a standard of care for our profession is, at least to me, abhorrent."

It seems like the SWEBOK group ignored Grady Booch and many other reviewers three years ago. It attempts to be an authoritative reference of "generally accepted" practices, but the SWEBOK does not reflect principles and practices of agile software development, and otherwise misrepresents itself when claiming to be "generally accepted" by our diverse "industry". Do game developers, web-app developers, AI researchers, and shrink-wrap software developers need to use the same practices? SWEBOK may be representative of Department of Defense contracting practices (which is far from "general"), but even the DoD is trying to develop software systems in more agile ways these days.

The SWEBOK is again open to review. Cem Kaner recommends that we become reviewers to highlight that the practices the SWEBOK preaches are not accepted by all. I recommend that you not only review the SWEBOK and provide your comments to that group, but you also post your comments on the web, so that everyone using a search engine can see the controversy around the SWEBOK.

One danger of the SWEBOK is that it would be used to license software developers. Software development is too broad a field, too immature, and too rapidly changing a profession to have meaningful tests for licensing developers. Cem Kaner was part of the ACM task force that considered and rejected the SWEBOK and licensing based on it. Check out what the ACM task force had to say about it at http://www.acm.org/serving/se_policy/. The ACM task force writes:
  • Licensing as Professional Engineers would be impractical for software engineers, because it would require examinations over subjects most software engineers neither study in their formal education nor need in order to practice competent software engineering.
  • Licensing software engineers as Professional Engineers would have no or little effect on the safety of the software produced.
  • The SWEBOK effort, which specifically excludes from the body of knowledge the special knowledge required for most safety-critical systems (such as real-time software engineering techniques), will have little relevance for safety-critical systems, and it dangerously excludes the most important knowledge required to build these systems.
  • Each industry and software engineering domain will need to determine an appropriate mix of approaches that work together to solve their particular problems and fit within the cultural context of the particular industry. There are no simple and universal fixes to solve the problem of ensuring public safety. Effective approaches will involve establishing accountability, competency within specific application domains and job responsibilities, liability, regulation where appropriate, standards, voluntary product certification and warranties, and industry-specific requirements. Licensing as Professional Engineers would not be an effective way to accomplish any of these goals.

The task force recommended that:
  • ACM withdraw from efforts to license software engineers as Professional Engineers.
  • ACM take a stand against government efforts to require the licensing of software engineers as impractical, ineffective with respect to protecting public safety, and potentially detrimental with respect to economic and other societal and technological factors.
  • ACM not support the SWEBOK activities, but consider supporting other efforts to validate and codify basic knowledge in various aspects of software engineering.[...]
  • The professional societies, including ACM, must pursue every possible means towards improving the current state of affairs. At the same time, they must refrain from pursuing activities like SWEBOK that have a significant chance of reducing the public's understanding of, confidence in, and assurances about key properties of software.

When reviewing, please include positive comments as well as negative comments. If it's all negative, they will feel tempted to dismiss all that you write. George Dinwiddie had a good example of both positive and negative in the xp mailing list: They say, "One of the fundamental tenets of good software engineering is that there is good communication between system users and system developers." That's very good, but the next sentence says, "It is the requirements engineer who is the conduit for this communication." I don't see any justification for the assumption that there must be an intermediary.

Not only could a reviewer say that good communication is a good thing to emphasize, but the reviewer could then follow that by commenting that the assumption of a "conduit" is not generally accepted, perhaps referencing materials that document the "telephone game" effect of having someone between the system developers and the system users. Sure, a facilitator of some kind ("business analyst" or "human interface designer") is very helpful, but as an aid to the communication, not as a substitute for it. Extreme Programming, for example, says the "customer" should "speak with one voice", but not that the "customer" be a single person -- and if you call him a "requirements engineer", does that mean that person will have to be a licensed engineer as well?

How much of a priesthood do we need?


(Originally posted 2003.Jun.23 Mon; links may have expired.)

Cem Kaner's Blog has as its first entry a call to arms on the SWEBOK. Here are a few snippets:

In retrospect, I think that keeping away from SWEBOK was a mistake. I think it has the potential to do substantial harm. I urge you to get involved in the SWEBOK review, make your criticisms clear and explicit, and urge them in writing to abandon this project. Even though this will have little influence with the SWEBOK promoters, it will create a public record of controversy and protest. Because SWEBOK is being effectively pushed as a basis for licensing software engineers and evaluating / accrediting software engineering degree programs, a public record of controversy may play an important role.

I am most familiar with SWEBOK's treatments of software testing, software quality and metrics. It endorses practices that I consider wastefully bureaucratic, document-intensive, tedious, and in commercial software development, not likely to succeed. These are the practices that some software process enthusiasts have tried and tried and tried and tried and tried to ram down the throats of the software development community, with less success than they would like.

Only 500 people participated in the development of SWEBOK and many of them voiced deep criticisms of it. The balloted draft was supported by just over 300 people (of a mere 340 voting). Within this group were professional trainers who stand to make substantial income from pre-licensing-exam and pre-certification-exam review courses, consulting/contracting firms who make big profits from contracts (such as government contracts) that specify gold-plated software development processes (of course you need all this process documentation - the IEEE standards say you need it!), and academics who have never worked on a serious development project. There were also experienced, honest people with no conflicts of interest, but when there are only a few hundred voices, the voices of vested interests can exert a substantial influence on the result.

Sunday, April 12, 2015

Appreciative Inquiry

(Originally posted 2003.May.07 Wed; links may have expired.)

Brad Appleton, on the XP mailing list, recommends "appreciative inquiry" http://www.appreciative-inquiry.org/ and http://www.thinbook.com/chap11fromle.html. This technique lets people feel they and their ideas are appreciated, and focus on what's right rather than on what's wrong.

On Hal Macomber's blog Reforming Project Management, he wrote on Tuesday, April 29, 2003:

My favorite management author is Ken Blanchard author of The One Minute Manager and dozens of other books. [...] Blanchard implores people to focus on positive feedback. In his book Whale Done! he goes into how trainers never use negative feedback when working with dangerous animals. If killer whales can be trained to do the spectacular things that they do with only positive feedback, then why would we want to use negative feedback with the even more dangerous human beings?

Apparently, "Whale Done!" is also on video, dvd, etc... http://www.rctm.com/app/Product/71727.html

Saturday, April 11, 2015

Feedback, the Secret of Agile Development

(Originally posted 2003.May.10 Sat; links may have expired.)

A paper of mine, about feedback in Agile / XP projects, has been published on the AYE Conference website: http://www.ayeconference.com/Articles/Secretofagile.html. That paper concentrated on testing and customer involvement, but there is more feedback possible: there is feedback from members of the developer team, management, QA, and other stake-holders associated with the project.

The Standish group, in their Chaos Report, reports that the top three project success factors, in order from most important to least important, are:

  • User Involvement
  • Executive Management Support
  • Clear Statement of Requirements

Other success factors that also support my point are:

  • Realistic Expectations
  • Smaller Project Milestones
  • Competent Staff
  • Ownership
  • Clear Vision and Objectives

In typical waterfall projects, user involvement is only at the first and last stages: requirements gathering and final testing (and sometimes users are not involved even in those two stages!). This obviously isn't enough, since IT projects, according to the Standish group, are only successful an average of 12% of the time.

Agile methods aim for success by requiring more user involvement through the life of the project: the practices of On-Site Customer and Frequent Releases, among others. User involvement also helps with maintaining realistic expectations and getting those requirements clearly understood.

Even with user involvement, there are project failures from lack of management support. Most of the stories of unsuccessful projects I've heard at BayXP are due to lack of executive management support - failure to get buy-in from the rest of the company. In some cases, the project was on-time and on-budget, but the rest of the company rejected the process and/or the people involved.

Successfully adopting XP requires a company culture compatible with its values, and many companies just don't have that culture. To get to XP, the company would need a culture change, which requires skilled, dedicated, change artists and a company-wide perceived need for change. Thus, Industrial XP does a Readiness Assessment to see if there is a culture fit.

Agile methods have smaller project milestones, a proven success factor, as well as frequent releases. Getting the product out in front of real users not only enables more feedback, but can be financially rewarding as well, by reducing time-to-market or opening a new market. For internal projects, "small releases" to some portion of the end-users is a proof of concept, demonstrating to management the viability of the project.

Competent staff is a sticky point. No method can work well if the people don't know how to do it or don't understand it. Many XP project failures result from staff that had no training on the practices and didn't practice all of the recommended practices. I've already written that XP's test-driven-development requires testing skills and refactoring skills that both inexperienced and experienced programmers may be lacking. Training and practice, whether by intensive, disciplined, self-training, or training from good teachers, is necessary for those new to XP.

Lack of competence in test-driven-development shows up in unexpectedly-failing acceptance tests, bugs slipping through to users, and decreasing velocity (features done per iteration). Developers puzzling over the bug-reports, unhappy users and managers, and increasingly-unmaintainable code probably realize that something is going wrong, but without a coach or someone experienced in software development, may not know how to fix it. In the worst cases, they ignore this feedback and deny that anything is going wrong.

Industrial XP's additional practices - Project Chartering, Project Community, Test-Driven Management, and so on - address the factors of ownership and clear vision. If the project starts to drift off course, members of the project community can point to the Charter and Manager-Tests and ask "why are we moving in this direction?" IXP's practice of doing Retrospectives regularly (mini-retrospective each iteration, as well as larger ones at longer intervals) will also help with clear vision of the objectives and ownership of the process, providing feedback to the participating members of the project community.

The hardest kind of feedback to specify in a "process" is personal feedback from people in the team, whether in pair programming, retrospectives, or otherwise.

Naomi Karten's book on Communication Gaps can help.

Many people I respect recommend What Did You Say?: The Art of Giving and Receiving Feedback by Charles N. Seashore, Edith Whitfield Seashore and Gerald M. Weinberg.

Norman L. Kerth wrote the definitive book on Project Retrospectives, but you shouldn't try to do a large retrospective without a skilled facilitator. Read the book to see why.

The larger community is also a valuable source of feedback and learning. Go to your local Agile or XP group meetings, join or at least read the XP, IXP, and/or TDD mailing lists. Read the books. Read the web. Go to conferences if you can.

Friday, April 10, 2015

Wright Interview

(Originally posted 2003.Jul.11 Fri)

I picked up a tape of the Mike Wallace Interviews of Frank Lloyd Wright the other day. It consists of two 25-minute interviews that were recorded around 1957, when Wright was around 88 years old. The first interview says a LOT about 1950's art of "the celebrity TV interview" and not much about Wright. The second interview was better, but neither one lets you hear Wright say anything specific about any of his buildings.

"My name is Mike Wallace; the cigarette is Philip Morris". In the first interview, Wallace is apparently trying to get to know "Wright, the man" by asking controversial questions like "what is your opinion on organized Christianity?" and "What do you think about mercy killing?" and questions about the "Mobocracy". Wright answers the questions very carefully and reasonably, well aware that he's on nation-wide TV. Unlike modern-day interviews, the interviewee isn't prompted or allowed to tell stories about his life.

Sometimes the first interview degenerates into asking about other celebrities: "What do you think about Salvador Dali?" and Picasso? Wallace tries to get Wright to say something about Charlie Chaplin's "anti-Americanism" (Chaplin was forced to leave the US because of accusations of Communistic tendencies), but changes the subject when Wright mentions McCarthyism. The last question of the first interview is: what do you think of Marilyn Monroe's architecture?

Quote from the first interview:
"I don't think Architecture is for the Mob, it certainly isn't for education. Education certainly knows nothing of it. And very few architects in the world know anything about it. I've been accused of saying I was the greatest architect in the world, and if I had said so, I don't think it would be very arrogant because I don't believe there are many, if any. For 500 years architecture has been phony... in the sense that it was not innate, it was not organic, it didn't have the character of Nature."

In the second interview, (several months later, back by popular demand) Wright is more comfortable, and can express his philosophy at more length. Still almost nothing "concrete" about architecture, though.

Quotes from FLW in the second interview:
"I would like to make architecture appropriate to the Declaration of Independence, to the center-line of our freedom; I would like to have a free architecture. I would like to have architecture that belong to where you see it standing, an architecture that was a grace to the landscape rather than a disgrace."

On the New York skyline:
"[It] never was planned; it's all a race for rent, and it is a great monument I think to the power of money and greed, trying to substitute money for ideas. I don't see an idea in the whole thing anywhere, do you?"

Answering "can you give me something to live by?" (the question young students ask of Wright):
"The answer is within yourself. In the nature of thing that you represent as yourself. Jesus said it, I think, when he said the kingdom of God is within you. That's where Architecture lies, that's where Humanity lies, that's where the future we're going to have lies. If we're ever going to amount to anything, it's there, now, and all we have to do is develop it."

Thursday, April 9, 2015

Waste Not

(Originally posted 2003.May.13 Tue; links may have expired.)

The Standish Group, in their Chaos Report, reports that 85% of the features implemented in the average IT project are NOT used by the customer.

If you knew which features the customers were really going to use, you could avoid wasting 85% of your budget, and you could ship 85% earlier, getting revenue sooner. Sounds like a win-win for both customer and supplier.

This applies to shrink-wrap software as well as in-house projects. id Software (if I recall correctly) released limited-feature, free early versions of their breakthrough game Doom to get feedback from their users and get them addicted to their software.

How do you find out which features the customers really need? Ask them. Get them involved in the product with early releases. If possible, get a representative customer involved in the planning and feature definition.

Eliminating waste (like that unnecessary 85%) is one part of Lean Manufacturing. Like myself, Kent Beck did not view "manufacturing" as an appropriate metaphor for software development until he started reading about Taiichi Ohno's Toyota Production System. Here's a small write-up by R. Balakrishnan.

Lean Manufacturing, like XP, often encounters what seems to be irrational rejection (even after demonstrating success). On the XP mailing list Kent Beck writes:
I'm reading Lean Transformation, which is full of stories of dramatic improvements in manufacturing quality and productivity that are immediately dismantled as soon as the consultants leave the building, even at the eventual cost of the jobs of the people doing the dismantling. Which is to say, you are not alone.
Harvard Business Review has an article this month about Bill Bratton, who has turned around 5 police organizations, including the whole of NYPD. You should read the article, but the sequence I took away was:
  • Put the managers in direct contact with the problem

  • Focus your resources on the single worst problem

  • Enlist powerful allies

  • Marginalize or eliminate naysayers

Beck also points to a paper [a zip file containing a pdf] on adopting Lean Manufacturing, titled "First, Do No Harm!" by Michel Baudin. The paper recommends converting to Lean with pilot projects that show rapid improvement and pay for themselves. At no time should converting to Lean cause a drop in revenue or delays in delivering product. Baudin also points out that you can't convert to Lean instantly, because it requires learning new skills. Not unlike the new skills needed for XP.