C. Keith Ray

C. Keith Ray is developing software for Sizeography and Upstart Technology. He writes code for multiple platforms and languages, including iOS® and Macintosh®.
Go to Sizeography and Upstart Technology to join our mailing lists and see more about our products. Keith's Résumé (pdf)

Thursday, August 27, 2015

DDMathParser

Today’s blog entry (and hopefully a few more to come) is about Dave DeLong’s DDMathParser, which is hosted here: https://github.com/davedelong/DDMathParser.

I found DDMathParser when I was looking to improve the equation parsing of a calculator app I had written, an app intended to provide very precise math.

I’m also a little obsessed with numeric precision, but only a smidgen.

Computers and programmers usually represent “real number” types using float or double in C/C++ (and similarly-named types in many other languages). These can store a certain number of digits and an exponent. If you exceed that number of digits, you are no longer doing math precisely, and that can be VERY important when you are dealing with money.

Never use float or double to calculate quantities of money that are large or which need to be precise. (Who likes sloppy accounting?)
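You can see the problem outside of DDMathParser, too. Here's a quick sketch in Python (whose float is a 64-bit double, the same type we'll be exercising below):

```python
from decimal import Decimal

# A double carries roughly 15-16 significant decimal digits. At a
# magnitude around 4.8e34, the gap between adjacent doubles is much
# bigger than 1, so adding 1 is lost entirely:
x = 300.1 ** 14
print(x == x + 1)        # True

# Doubles also can't represent most decimal fractions exactly:
print(0.10 + 0.20 == 0.30)        # False

# A decimal type keeps cents exact, which is what money needs:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```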

To illustrate the lack of precision that double gets you, we can run the command-line demo of DDMathParser:

Math Evaluator!
    Type a mathematical expression to evaluate it.
    Type "functions" to show available functions
    Type "operators" to show available operators
    Type "exit" to quit
> 300.1 ** 14
    Parsed: pow(300.1,14)
    Rewritten as: 4.805337947671661e+34
    Evaluated: 4.805337947671661e+34
> 300.1 ** 14 + 1
    Parsed: add(pow(300.1,14),1)
    Rewritten as: 4.805337947671661e+34
    Evaluated: 4.805337947671661e+34

As you can see here, (300.1 ** 14) prints out as 4.805337947671661e+34,
and (300.1 ** 14 + 1) prints out as              4.805337947671661e+34.

(I’ve made sure the two numbers line up so you can see that they look the same.)

They not only look the same, but when I use == to compare them, it says they are the same!

> 300.1 ** 14 == 300.1 ** 14 + 1
    Parsed: l_eq(pow(300.1,14),add(pow(300.1,14),1))
    Rewritten as: 1
    Evaluated: 1

However, Dave provides a “high precision” option. So let’s turn that on.

In main.m we have:

int main (int argc, const char * argv[]) {
    ...
    DDMathEvaluator *evaluator = [[DDMathEvaluator alloc] init];

and when we look at the header file DDMathEvaluator.h we see:

    @interface DDMathEvaluator : NSObject

    @property (nonatomic) BOOL usesHighPrecisionEvaluation; // default is NO

so let’s add a line to main.m:

int main (int argc, const char * argv[]) {
    ...
    DDMathEvaluator *evaluator = [[DDMathEvaluator alloc] init];
    [evaluator setUsesHighPrecisionEvaluation: YES];

and run the command-line demo:

> 300.1 ** 14
    Parsed: pow(300.1,14)
    Rewritten as: 48053379476716554740761909736735670.908
    Evaluated: 48053379476716554740761909736735670.908
> 300.1 ** 14 + 1
    Parsed: add(pow(300.1,14),1)
    Rewritten as: 48053379476716554740761909736735671.908
    Evaluated: 48053379476716554740761909736735671.908
> 300.1 ** 14 == 1 + 300.1 ** 14
    Parsed: l_eq(pow(300.1,14),add(1,pow(300.1,14)))
    Rewritten as: 0
    Evaluated: 0

This time, instead of (300.1 ** 14) printing out as 4.805337947671661e+34,
it prints out as 48053379476716554740761909736735670.908; and, (300.1 ** 14 + 1)
prints as        48053379476716554740761909736735671.908.

In default precision mode, DDMathParser uses NSNumber, which uses float in 32-bit applications, and double in 64-bit apps. (The Swift version currently uses Double instead of NSNumber and doesn’t implement a high precision mode.)

In high precision mode, DDMathParser uses NSDecimal. Apple has provided NSDecimal as a money-safe numeric type in Foundation/NSDecimal.h.

NSDecimal can represent some big numbers very precisely, but it has a limit on the magnitude and number of digits it allows. According to Apple’s documentation for NSDecimalNumber (which seems to use NSDecimal internally), NSDecimal can represent any number that can be expressed as (mantissa * 10 ** exponent) where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.

We can test these limits using factorial:

> 103!
    Parsed: factorial(103)
    Rewritten as: 9902 9007164861 8040754671 5254581773 3488000000 
                       0000000000 0000000000 0000000000 0000000000 
                       0000000000 0000000000 0000000000 0000000000 
                       0000000000 0000000000 0000000000 0000000000
    Evaluated: ...
> 104!
    Parsed: factorial(104)
    Rewritten as: NaN
    Evaluated: NaN

(I modified the output to group digits by tens.)
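We can approximate those documented NSDecimal limits with Python's decimal module (my own back-of-the-envelope model, not anything from DDMathParser): 38 significant digits with a decimal exponent up to 127 gives a maximum value just under 10**165, so 103! fits but 104! overflows, matching the NaN above.

```python
from decimal import Context, Decimal

# Approximate NSDecimal: 38 significant digits; the largest value is
# roughly (38 nines) * 10**127, i.e. just under 10**165, so Emax=164.
# traps=[] means overflow yields Infinity instead of raising.
ctx = Context(prec=38, Emax=164, traps=[])

def dec_factorial(n):
    result = Decimal(1)
    for i in range(2, n + 1):
        result = ctx.multiply(result, Decimal(i))
    return result

print(dec_factorial(103).is_finite())   # True: fits, like DDMathParser's 103!
print(dec_factorial(104).is_finite())   # False: overflows, like the NaN above
```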

I picked factorial because DDMathParser’s code to compute factorial with NSDecimal explicitly looks for the error NSCalculationOverflow. I found only four other places checking for underflow/overflow/etc., so I conjecture that most math in DDMathParser isn’t checking for those kinds of errors. (I could be wrong.)

In comparison, Python has arbitrary-precision integers, making it very good for financial calculations. Check it out:

$ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
>>> math.factorial(103)

   9902 9007164861 8040754671 5254581773 3490901658 
        2211449248 3005280554 6998766658 4162228321 
        4144107388 3538492653 5163859772 9209322288 
        2134415149 8915840000 0000000000 0000000000L

>>> math.factorial(104)

1029901 6745145627 6238485838 6476504428 3053772454 
        9990721823 2549177688 7871732475 2871745427 
        0987168388 8003235965 7041416383 7769517974 
        1979175588 7247360000 0000000000 0000000000L

(I also modified this output to group digits by tens.) Notice that the DDMathParser/NSDecimal version of 103! isn’t the same as Python’s 103!.
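That disagreement in the low-order digits is just what 38 significant digits predicts: each intermediate multiplication gets rounded, and the error accumulates. Sketching that in Python (again, my own approximation of NSDecimal's behavior, not DDMathParser's actual code):

```python
import math
from decimal import Context, Decimal

ctx = Context(prec=38)   # NSDecimal-style: 38 significant digits

approx = Decimal(1)
for i in range(2, 104):
    # Each product is rounded back to 38 digits before the next step.
    approx = ctx.multiply(approx, Decimal(i))

exact = Decimal(math.factorial(103))
print(approx == exact)   # False: rounding accumulated in the low digits
# The leading digits still agree, though:
print(approx.as_tuple().digits[:20] == exact.as_tuple().digits[:20])  # True
```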

There’s a lot to explore with DDMathParser. For one, how does it actually parse out equations? For two, can we convert it to Swift? For three, what might be different if we developed this code in a test-driven style?

Dave has already converted some (most?) of the DDMathParser code to Swift. I’d like to get some practice with Swift, so in the next blog entry of this series, I’ll try to write some unit tests for some DDMathParser classes.

Other ideas: If we add overloaded math operators for NSDecimalNumber, we can simplify how that class can be used. If we wrap NSDecimal or NSDecimalNumber in a Swift struct, that might be even better. Swiftifying NSDecimal or NSDecimalNumber could be a separate project, but since it’s a good practice to actually use the code that you write, I’d like to do that in the context of DDMathParser.

Tuesday, April 21, 2015

Stages of Failure

(Originally posted 2003.Jun.07 Sat; links may have expired.)

From Rob Thomsett's page "Project Pathology: Causes, patterns and symptoms of project failure"... "We observed this pattern of failure in every one of the 20 major projects we reviewed. Fifteen of the 20 projects we reviewed had degraded to Step 4 and the remaining 5 were at Step 3."

The four stages of failure:

  1. Development team unilaterally de-scopes the project.
  2. Project manager requests additional people.
  3. Unpaid overtime work becomes the norm.
  4. Team has lost control of the project, but the team is in denial.


For an example of stage 4: "This denial was well evidenced in a project where all team members and the project manager felt that the project could meet a deadline 3 months out and that our review was a waste of time. In one day, we identified that the project was over 12 months behind schedule and would deliver [at best] in 15 months not 3! Our findings were wrong - the project finally delivered 18 months later."

Sunday, April 19, 2015

Shopping and Getting Better

(Originally posted 2003.Jun.05 Thu; links may have expired.)

Shopping

Andy Hunt blogs about shopping, not being able to find a shampoo that doesn't have fruit juice or herbal essences in it. He calls this "herd marketing" and writes: "It should be [...] the things I want. Isn't that supposed to be the essence of marketing? Give the people what they want -- not what everyone else wants to give them, or even what you want to give them."

Well, my understanding of typical marketing is that it's all about getting people to want what you have to offer, not offering what people want. Harry Beckwith's book Selling The Invisible was refreshing in that he goes against the grain, and says that selling the steak is more important than selling the sizzle.

Getting Better

Laurent Bossavit quoted Beck in his blog, referring to http://c2.com/cgi/wiki?GettingBetter. Thanks, Laurent.
How good the design is doesn't matter near as much as whether the design is getting better or worse. If it is getting better, day by day, I can live with it forever. If it is getting worse, I will die.

Friday, April 17, 2015

Open, Collaborative, Project Planning

(Originally posted 2003.May.09 Fri; links may have expired.)

Mary Poppendieck on the Lean Development mailing list writes:
The difference between a planned and a market economy is rooted in two different management philosophies:
  • Management-as-planning/adhering focuses on creating a plan that becomes a blueprint for action, then managing implementation by measuring adherence to the plan.
  • Management-as-organizing/learning focuses on organizing work so that intelligent agents know what to do by looking at the work itself, and improve upon their implementation through a learning process.

and links to Lauri Koskela and Greg Howell's article at: http://www.cpgec.ufrgs.br/norie/iglc10/papers/47-Koskela&Howell.pdf.

I've recently read two books on project management: one by David Schmaltz, and Eliyahu Goldratt's The Goal.

David Schmaltz makes a point in his book that a plan or organization done by one person or group will not be self-evident to another person or group. With that in mind, The Goal was somewhat frustrating, because the main character, who is changing the way his factory works, isn't involving the people on the factory floor in designing his process changes (he is involving other managers). In fact, of the few factory-floor union workers depicted in the novel, all but one of them are depicted as stupidly getting in the way of the process changes. I suppose one man against the world makes for some drama in a novel, but doesn't work that well in real life.

Goldratt writes in the afterword of The Goal that the major obstacles to process improvement discovered by his readers were:
  • Lack of ability to propagate the message throughout the company.
  • Lack of ability to translate what they learned from the book into workable procedures for their plant.
  • Lack of ability to persuade decision makers to allow the change of some of the measurements.
which certainly shows that involving others is necessary for plans to work.

I also found The Goal frustrating because he made no mention of Deming, whose collaborative techniques for process improvement were proven "long ago" in Japan, and predated Goldratt's techniques.

Wednesday, April 15, 2015

Swift: Playing with HalfOpenInterval and Range

// HalfOpenInterval and Range

import Cocoa

func brackets<T: ForwardIndexType>(x: Range<T>, i: T) -> T {
    return x[i] // Just forward to subscript
}

var h1 = HalfOpenInterval<Int>(0, 4)
h1                    // 0..<4

h1.start              // 0
h1.end                // 4
// h1.startIndex      // error
// h1.endIndex        // error
h1.description        // "0..<4"
h1.debugDescription   // "HalfOpenInterval(0..<4)"
h1.contains(-1)       // false
h1.contains(0)        // true
h1.contains(1)        // true
h1.contains(3)        // true
h1.contains(4)        // false
h1.contains(5)        // false

var b = (h1 == h1)    // true

var h2 = 0..<4
h2                    // 0..<4

// h2.start           // error
// h2.end             // error
h2.startIndex         // 0
h2.endIndex           // 4
h2.description        // "0..<4"
h2.debugDescription   // "Range(0..<4)"
// h2[-1]       // error
brackets(h2, -1)    // -1
brackets(h2, 0)    // 0
brackets(h2, 1)    // 1
brackets(h2, 3)    // 3
// brackets(h2, 4)    // error
// brackets(h2, 5)    // error

b = (h2 == h2)        // true

// b = (h1 == h2)     // error

var h3 = 0..<4
h3                    // 0..<4

b = (h2 == h3)        // true

h1.isEmpty            // false
h2.isEmpty            // false

h3 = 0..<0            // 0..<0
h3.isEmpty            // true
h3.debugDescription   // "Range(0..<0)"

h1 = HalfOpenInterval<Int>(0, 0)
h1                    // 0..<0
h1.isEmpty            // true
h1.debugDescription   // "HalfOpenInterval(0..<0)"


More on Refactoring / Fear of Refactoring

(Originally posted 2003.Jun.04 Wed; links may have expired.)

Ken Arnold points out Martin Fowler commenting on Cringely's post on Refactoring.

Somebody, possibly Kent Beck, said something very much like "If the code quality is going down, you're doomed. If the quality, however bad to start with, is always going up, you have a chance at success." (I can't find the quote. I need google to Find What I mean, not what I ask it to find.)

Every company that has invested in software has invested in an asset that they can choose to improve or let decline in value. If those software assets never need to change - no bug-fixes, no enhancements, no adapting to changes in the software ecology (new versions of libraries, new OSes) - then don't refactor.

Fear of changing legacy code is healthy, if the legacy code is not fully covered by automated tests. You need something to rapidly tell you if you've broken something when you're doing a code improvement. Michael Feathers is working on a book about making legacy code testable here: http://groups.yahoo.com/group/welc/.

Probably a good motivator for improving code is measuring the cost of changing code for bug-fixes and feature additions (and measuring the Fault Feedback Ratio). The older and cruftier the code is, the higher the cost of changing it. The cost of refactoring (which will reduce maintenance costs in the long run) has to be weighed against the increasing costs of maintenance when no refactorings are being done.

Perhaps the best way of addressing fear of refactoring is by practicing it on toy projects. William C. Wake has a book called The Refactoring Workbook due in July. Preview chapters are on-line: http://www.xp123.com/rwb/.

The other key thing about refactoring is to improve the design by taking small steps. For example, most code has functions and methods that are too long, containing multiple 'bundles' of lines, sometimes with comments describing the intent of each bundle. Use the extract method refactoring to move each little bundle of lines into a new function, named after that comment. (See my example below.) When big methods have been broken down into small methods, duplicate code becomes more obvious, and can be removed, reducing the amount of code to be maintained and raising the level of abstraction in the code.

Big refactorings are much easier if lots of small refactorings have already been done. It is important that the team members agree on the value of the small refactorings - you don't want individuals to 'undo' each other's code improvements.

Example of "extract method" refactorings - pseudocode

"before"

 procedure testVerylongmethod()
    var result = Verylongmethod()
    var expectedResult = 2398
    assertEqual( expectedResult, result )
 end procedure

 function Verylongmethod()
   // set up foos
   var foos = array [1..3] of foobar
   foos[1] = new foobar(2,2,3,4)
   foos[2] = new foobar(3,4,6,3)
   foos[3] = new foobar(4,67,83,3)

   // calculate muckiness
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for

   return muckiness
 end function

"after"

(the test doesn't change, so I don't reproduce it here.)

 function setupfoos()
   var foos = array [1..3] of foobar
   foos[1] = new foobar(2,2,3,4)
   foos[2] = new foobar(3,4,6,3)
   foos[3] = new foobar(4,67,83,3)
  return foos
 end function

 function calculatemuckiness( foos )
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for
   return muckiness
 end function

 function Verylongmethod()
   var foos = setupfoos()
   var muckiness = calculatemuckiness( foos )
   return muckiness
 end function

Now imagine that there is Anotherlongmethod similar to the original function Verylongmethod, where the only difference is in how it sets up the foos array. It can be refactored to call the same calculatemuckiness method that had been extracted from Verylongmethod.

"before"

 procedure testAnotherlongmethod()
  var result = Anotherlongmethod()
  var expectedResult = 9487
  assertEqual( expectedResult, result )
 end procedure

 function Anotherlongmethod()

   // set up foos
   var foos = array [1..4] of foobar
   foos[1] = new foobar(2,9,3,4)
   foos[2] = new foobar(3,9,6,3)
   foos[3] = new foobar(4,9,83,3)
   foos[4] = new foobar(9,9,9,9)

   // calculate muckiness
   var muckiness = 0
   for index = 1 to foos.length
      muckiness = foos[ index ] . muckinessQuotient( index ) + muckiness
   end for

   return muckiness
 end function

"after"

(the test doesn't change, so I don't reproduce it here.)

 function setupfourfoos()
   var foos = array [1..4] of foobar
   foos[1] = new foobar(2,9,3,4)
   foos[2] = new foobar(3,9,6,3)
   foos[3] = new foobar(4,9,83,3)
   foos[4] = new foobar(9,9,9,9)
   return foos
 end function

 function Anotherlongmethod()
   var foos = setupfourfoos()
   var muckiness = calculatemuckiness( foos )
   return muckiness
 end function

Please forgive all the magic numbers; I wanted to keep the example self-contained. While the safety of these refactorings can often be determined by code inspection, it is best determined by tests - the tests don't change during (most) refactorings, and they should be passing before and after the refactoring.
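If you'd rather run the example than read pseudocode, here is the "after" state rendered in Python. The muckinessQuotient formula was never specified above, so the one below is invented purely to make the code executable:

```python
# The pseudocode's foobar, with a made-up muckinessQuotient so the
# example actually runs; the real formula was never specified.
class Foobar:
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def muckiness_quotient(self, index):
        return (self.a + self.b + self.c + self.d) * index  # assumption

def setup_foos():
    return [Foobar(2, 2, 3, 4), Foobar(3, 4, 6, 3), Foobar(4, 67, 83, 3)]

def calculate_muckiness(foos):
    # The extracted loop: shared by every "long method" variant.
    return sum(foo.muckiness_quotient(i)
               for i, foo in enumerate(foos, start=1))

def very_long_method():
    foos = setup_foos()
    return calculate_muckiness(foos)

def setup_four_foos():
    return [Foobar(2, 9, 3, 4), Foobar(3, 9, 6, 3),
            Foobar(4, 9, 83, 3), Foobar(9, 9, 9, 9)]

def another_long_method():
    # Reuses calculate_muckiness instead of duplicating the loop.
    return calculate_muckiness(setup_four_foos())
```

Note how another_long_method shrinks to two steps: one call to build its data, one call to the shared calculation.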

Tuesday, April 14, 2015

Playing with Swift Array

// Playing with Swift Array

var a = Array<Int>([1,2,3])
a              //  [1,2,3]
a[0]           // 1
a[1]           // 2
a[2]           // 3

var b = Array<Int>(count:5, repeatedValue:42)
b   // [42, 42, 42, 42, 42]

a.startIndex   // 0
b.startIndex   // 0

a.endIndex     // 3
b.endIndex     // 5

a.count        // 3
b.count        // 5

a.capacity     // 4
b.capacity     // 6

var c = Array<Int>()
c   // 0 elements

a.isEmpty      // false
c.isEmpty      // true

a.first        // {Some 1}
b.first        // {Some 42}
c.first        // nil

a.last         // {Some 3}
b.last         // {Some 42}
c.last         // nil

a.description
// "[1, 2, 3]"

b.description
// "[42, 42, 42, 42, 42]"

c.description
// "[]"

a.debugDescription
// "[1, 2, 3]"

c.debugDescription
// "[]"

var f = a.generate()
// {{1, 2, 3}, _position 0}
f.next()         // {Some 1}
f.next()         // {Some 2}
f.next()         // {Some 3}
f.next()         // nil

a.reserveCapacity(12)
a.capacity       // 12

a.append(-1)
a                // [1, 2, 3, -1]
b.extend(a)
b                // [42, 42, 42, 42, 1, 2, 3, -1]
b.removeLast()   // -1
b                // [42, 42, 42, 42, 1, 2, 3]

a.insert(31, atIndex: 0)
a                // [31, 1, 2, 3, -1]

a.removeAtIndex(2)
a                // [31, 1, 3, -1]

a.insert(13, atIndex: 2)
a                // [31, 1, 13, 3, -1]

a.removeAll()
a                // 0 elements

var aa = [10]
var bb = [[-1],[-2],[-3]]

var cc = aa.join(bb)
cc               // [-1, 10, -2, 10, -3]

-1 + 10 + -2 + 10 + -3    // 14

var dd = cc.reduce(0, combine: {(u: Int, t: Int) -> Int in u + t})
dd    // 14

dd = cc.reduce(0, combine: {$0 + $1})
dd    // 14

-1 * 10 * -2 * 10 * -3    // -600

dd = cc.reduce(1, combine: {(u: Int, t: Int) -> Int in u * t})
dd    // -600

dd = cc.reduce(1, combine: {$0 * $1})
dd    // -600

cc.sort(>)         // [10, 10, -1, -2, -3]
cc                 // [10, 10, -1, -2, -3]
cc.sort(<)         // [-3, -2, -1, 10, 10]
cc                 // [-3, -2, -1, 10, 10]

cc = [5, 2, 4, 8]
cc.sorted(<)       // [2, 4, 5, 8]
cc                 // unchanged = [5, 2, 4, 8]
cc.sorted(>)       // [8, 5, 4, 2]
cc                 // unchanged = [5, 2, 4, 8]

cc = [5, 2, 4, 8, 1, 3, 6, 7]

let ee = cc.sorted( {(v: Int, w: Int) -> Bool in
    if (v % 2) == 0 && (w % 2) != 0 { // even, odd
        return true
    }
    else if (v % 2) != 0 && (w % 2) == 0 { // odd, even
        return false
    }
    else if (v % 2) == 0 && (w % 2) == 0 { // even, even
        return v < w
    }
    else { // if (v % 2) != 0 && (w % 2) != 0 // odd, odd
        return w > v
    }
})
ee       // [2, 4, 6, 8, 1, 3, 5, 7]

let ff = ee.map( {100 + $0} )
ff               // [102, 104, 106, 108, 101, 103, 105, 107]

cc = [1, 2, 3, 4]
cc.reverse()     // [4, 3, 2, 1]
cc               // unchanged = [1, 2, 3, 4]

let gg = cc.filter({$0 % 2 == 0})
gg               // [2, 4]
let hh = cc.filter({$0 % 2 != 0})
hh               // [1, 3]

cc = [1, 2, 3, 4]
cc.replaceRange( Range(start: 1, end: 3), with: [5,6])
cc               // [1, 5, 6, 4]

cc.splice([13, 14], atIndex: 2)
cc               // [1, 5, 13, 14, 6, 4]

cc.removeRange(Range(start: 2, end: 4))
cc               // [1, 5, 6, 4]

// a.getMirror() // {{1, 2, 3}}

Regressions

(Originally posted 2003.May.08 Thu; links may have expired.)

What's the difference between a bug in a new feature and a bug that suddenly appears in an old feature? In a new feature, one is getting added value imperfectly. In an old feature, one that used to work bug-free but is now broken, someone on your team is subtracting value.

For a very long time, experts like Jerry Weinberg have been recommending careful code reviews of maintenance changes, because a bug in a deployed system (like payroll or billing) can result in multiple millions of dollars in losses.

Someone inserting defects into working code is creating waste: not just the risk of printing bad checks or bills, but creating work for themselves and others to identify and fix the problem they created.

The danger in maintenance or incremental development is that an innocuous change breaks an existing feature. Extreme Programming deals with this danger in three ways: pair programming (continuous code review), extensive automated unit tests, and automated acceptance tests. Plus, most if not all XP teams have manual testing as well.

Many teams doing test-driven-development (a core practice of XP) report that their unit-test code coverage is nearly 100% statement/branch coverage without taking extra effort. This gives them the freedom to make an innocuous change (as part of a refactoring, for example), and see within minutes if it breaks anything. If it does, they can Undo their change, or evaluate the change more carefully.

Monday, April 13, 2015

So-Called Software Engineering Body of Knowledge (SWEBOK)

(Originally posted 2003.Jun.01 Sun; links may have expired.)

Grady Booch has this to say about the SWEBOK: "The SWEBOK I reviewed was well-intentioned but misguided, naive, incoherent, and just flat wrong in so many dimensions."

Cem Kaner writes: "I think SWEBOK promotes a set of practices that are sometimes appropriate, but in many contexts they are as outrageously expensive as they are remarkably ineffective. The idea of adopting these as a standard of care for our profession, is, at least to me, abhorrent."

It seems like the SWEBOK group ignored Grady Booch and many other reviewers three years ago. It attempts to be an authoritative reference of "generally accepted" practices, but the SWEBOK does not reflect principles and practices of agile software development, and otherwise misrepresents itself when claiming to be "generally accepted" by our diverse "industry". Do game developers, web-app developers, AI researchers, and shrink-wrap software developers need to use the same practices? SWEBOK may be representative of Department of Defense contracting practices (which is far from "general"), but even the DoD is trying to develop software systems in more agile ways these days.

The SWEBOK is again open to review. Cem Kaner recommends that we become reviewers to highlight that the practices the SWEBOK preaches are not accepted by all. I recommend that you not only review the SWEBOK and provide your comments to that group, but you also post your comments on the web, so that everyone using a search engine can see the controversy around the SWEBOK.

One danger of the SWEBOK is that it would be used to license software developers. Software development is too broad a field, too immature, and too rapidly changing a profession to have meaningful tests for licensing developers. Cem Kaner was part of the ACM task force that considered and rejected the SWEBOK and licensing based on it. Check out what the ACM task force had to say about it at http://www.acm.org/serving/se_policy/. The ACM task force writes:
  • Licensing as Professional Engineers would be impractical for software engineers, because it would require examinations over subjects most software engineers neither study in their formal education nor need in order to practice competent software engineering.
  • Licensing software engineers as Professional Engineers would have no or little effect on the safety of the software produced.
  • The SWEBOK effort, which specifically excludes from the body of knowledge the special knowledge required for most safety-critical systems (such as real-time software engineering techniques), will have little relevance for safety-critical systems, and it dangerously excludes the most important knowledge required to build these systems.
  • Each industry and software engineering domain will need to determine an appropriate mix of approaches that work together to solve their particular problems and fit within the cultural context of the particular industry. There are no simple and universal fixes to solve the problem of ensuring public safety. Effective approaches will involve establishing accountability, competency within specific application domains and job responsibilities, liability, regulation where appropriate, standards, voluntary product certification and warranties, and industry-specific requirements. Licensing as Professional Engineers would not be an effective way to accomplish any of these goals.

The task force recommended that:
  • ACM withdraw from efforts to license software engineers as Professional Engineers.
  • ACM take a stand against government efforts to require the licensing of software engineers as impractical, ineffective with respect to protecting public safety, and potentially detrimental with respect to economic and other societal and technological factors.
  • ACM not support the SWEBOK activities, but consider supporting other efforts to validate and codify basic knowledge in various aspects of software engineering.[...]
  • The professional societies, including ACM, must pursue every possible means towards improving the current state of affairs. At the same time, they must refrain from pursuing activities like SWEBOK that have a significant chance of reducing the public's understanding of, confidence in, and assurances about key properties of software.

When reviewing, please include positive comments as well as negative comments. If it's all negative, they will feel tempted to dismiss all that you write. George Dinwiddie had a good example of both positive and negative in the xp mailing list: They say, "One of the fundamental tenets of good software engineering is that there is good communication between system users and system developers." That's very good, but the next sentence says, "It is the requirements engineer who is the conduit for this communication." I don't see any justification for the assumption that there must be an intermediary.

Not only could a reviewer say that good communication is a good thing to emphasize, but the reviewer could then follow that by commenting that the assumption of a "conduit" is not generally accepted, perhaps referencing materials that document the "telephone game" effect of having someone between the system developers and the system users. Sure, a facilitator of some kind ("business analyst" or "human interface designer") is very helpful, but as an aid to the communication, not as a substitute for it. Extreme Programming, for example, says the "customer" should "speak with one voice", but not that the "customer" be a single person -- and if you call him a "requirements engineer", does that mean that person will have to be a licensed engineer as well?

How much of a priesthood do we need?


UPDATE


(Originally posted 2003.Jun.23 Mon; links may have expired.)

Cem Kaner's Blog has as its first entry a call to arms on the SWEBOK. Here are a few snippets:

In retrospect, I think that keeping away from SWEBOK was a mistake. I think it has the potential to do substantial harm. I urge you to get involved in the SWEBOK review, make your criticisms clear and explicit, and urge them in writing to abandon this project. Even though this will have little influence with the SWEBOK promoters, it will create a public record of controversy and protest. Because SWEBOK is being effectively pushed as a basis for licensing software engineers and evaluating / accrediting software engineering degree programs, a public record of controversy may play an important role.

I am most familiar with SWEBOK's treatments of software testing, software quality and metrics. It endorses practices that I consider wastefully bureaucratic, document-intensive, tedious, and in commercial software development, not likely to succeed. These are the practices that some software process enthusiasts have tried and tried and tried and tried and tried to ram down the throats of the software development community, with less success than they would like.

Only 500 people participated in the development of SWEBOK and many of them voiced deep criticisms of it. The balloted draft was supported by just over 300 people (of a mere 340 voting). Within this group were professional trainers who stand to make substantial income from pre-licensing-exam and pre-certification-exam review courses, consulting/contracting firms who make big profits from contracts (such as government contracts) that specify gold-plated software development processes (of course you need all this process documentation - the IEEE standards say you need it!), and academics who have never worked on a serious development project. There were also experienced, honest people with no conflicts of interest, but when there are only a few hundred voices, the voices of vested interests can exert a substantial influence on the result.

Sunday, April 12, 2015

Appreciative Inquiry

(Originally posted 2003.May.07 Wed; links may have expired.)

Brad Appleton, on the XP mailing list, recommends "appreciative inquiry" http://www.appreciative-inquiry.org/ and http://www.thinbook.com/chap11fromle.html. This technique lets people feel they and their ideas are appreciated, and focus on what's right rather than on what's wrong.

On Hal Macomber's blog Reforming Project Management, he wrote on Tuesday, April 29, 2003:

My favorite management author is Ken Blanchard author of The One Minute Manager and dozens of other books. [...] Blanchard implores people to focus on positive feedback. In his book Whale Done! he goes into how trainers never use negative feedback when working with dangerous animals. If killer whales can be trained to do the spectacular things that they do with only positive feedback, then why would we want to use negative feedback with the even more dangerous human beings?

Apparently, "Whale Done!" is also on video, dvd, etc... http://www.rctm.com/app/Product/71727.html


Saturday, April 11, 2015

Feedback, the Secret of Agile Development

(Originally posted 2003.May.10 Sat; links may have expired.)

A paper of mine, about feedback in Agile / XP projects, has been published on the AYE Conference website: http://www.ayeconference.com/Articles/Secretofagile.html. That paper concentrated on testing and customer involvement, but more feedback is possible: feedback from members of the development team, management, QA, and other stakeholders associated with the project.

The Standish Group, in their Chaos Report, reports that the top three project success factors, from most important to least important, are:

  • User Involvement
  • Executive Management Support
  • Clear Statement of Requirements

Other success factors that also support my point are:

  • Realistic Expectations
  • Smaller Project Milestones
  • Competent Staff
  • Ownership
  • Clear Vision and Objectives

In typical waterfall projects, user involvement comes only at the first and last stages: requirements gathering and final testing (and sometimes users are not involved even in those two stages!). This obviously isn't enough, since IT projects, according to the Standish Group, are successful only 12% of the time, on average.

Agile methods aim for success by requiring more user involvement through the life of the project: the practices of On-Site Customer and Frequent Releases, among others. User involvement also helps with maintaining realistic expectations and getting those requirements clearly understood.

Even with user involvement, there are project failures from lack of management support. Most of the stories of unsuccessful projects I've heard at BayXP are due to lack of executive management support - failure to get buy-in from the rest of the company. In some cases, the project was on-time and on-budget, but the rest of the company rejected the process and/or the people involved.

Successfully adopting XP requires a company culture compatible with its values, and many companies just don't have that culture. To get to XP, the company would need a culture change, which requires skilled, dedicated, change artists and a company-wide perceived need for change. Thus, Industrial XP does a Readiness Assessment to see if there is a culture fit.

Agile methods have smaller project milestones, a proven success factor, as well as frequent releases. Getting the product out in front of real users not only enables more feedback, but can be financially rewarding as well, by reducing time-to-market or opening a new market. For internal projects, "small releases" to some portion of the end-users is a proof of concept, demonstrating to management the viability of the project.

Competent staff is a sticky point. No method can work well if the people don't know how to do it or don't understand it. Many XP project failures result from staff who had no training on the practices and didn't practice all of the recommended practices. I've already written that XP's test-driven development requires testing skills and refactoring skills that both inexperienced and experienced programmers may be lacking. Training and practice, whether through intensive, disciplined self-training or instruction from good teachers, is necessary for those new to XP.

Lack of competence in test-driven development shows up in unexpectedly-failing acceptance tests, bugs slipping through to users, and decreasing velocity (features done per iteration). Developers puzzling over the bug reports, unhappy users and managers, and increasingly-unmaintainable code probably realize that something is going wrong, but without a coach or someone experienced in software development, they may not know how to fix it. In the worst cases, they ignore this feedback and deny that anything is going wrong.

Industrial XP's additional practices - Project Chartering, Project Community, Test-Driven Management, and so on - address the factors of ownership and clear vision. If the project starts to drift off course, members of the project community can point to the Charter and Manager-Tests and ask "why are we moving in this direction?" IXP's practice of doing Retrospectives regularly (mini-retrospective each iteration, as well as larger ones at longer intervals) will also help with clear vision of the objectives and ownership of the process, providing feedback to the participating members of the project community.

The hardest kind of feedback to specify in a "process" is personal feedback from people in the team, whether in pair programming, retrospectives, or otherwise.

Naomi Karten's book on Communication Gaps can help.

Many people I respect recommend What Did You Say?: The Art of Giving and Receiving Feedback by Charles N. Seashore, Edith Whitfield Seashore and Gerald M. Weinberg.

Norman L. Kerth wrote the definitive book on Project Retrospectives, but you shouldn't try to do a large retrospective without a skilled facilitator. Read the book to see why.

The larger community is also a valuable source of feedback and learning. Go to your local Agile or XP group meetings, join or at least read the XP, IXP, and/or TDD mailing lists. Read the books. Read the web. Go to conferences if you can.