C. Keith Ray

C. Keith Ray writes about and develops software on multiple platforms and in multiple languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Wednesday, March 5, 2014

Microblogged on Twitter

By the way, I recently micro-blogged a series of "straw-man" arguments against unit-testing and TDD, along with rebuttals. Here they are, somewhat expanded. A rebuttal follows each straw-man.

I am NOT making up these straw-man arguments. I’ve even heard them from people who should, 14 years after Extreme Programming Explained was published, have some understanding of incremental design and development, who should know both the differences and similarities between TDD and unit testing, and how refactoring fits into both small and big design activities.

1. straw-man: OO means you can't know what modified something's state.

That sounds like a global-variable bug. Objects can guard their state. Most people could learn a bit more about designing immutable classes, but OO doesn’t prevent you from knowing an object’s state.

2. straw-man: Always-passing tests are useless. 

Yes, tautological tests are useless. Don't do that. You learn something when a rarely-failing unit test fails: something unexpected happened! Either some code (or data) has changed and it now violates a test’s description of how that code (or data) is supposed to work, or the programmers forgot to update their tests. If a test that has been passing for 10 months suddenly fails, you know something is wrong. The worse situation is that no problem is detected until 20 months have passed, and you have to trace changes back through 10-month-old code to find out which code change caused the problem.

Tests that fail frequently or seemingly randomly indicate a “people” problem: members of the team don’t understand the requirements, requirements keep changing due to activity outside the team, or coding or testing is not being done well and better training is needed.

3. straw-man: without testing-hooks in code, you can't do white-box tests.

Try mock objects.
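
For example, here's a minimal sketch in C (all names made up) of the mock-object idea: pass a function pointer in place of the real collaborator, and the test can observe exactly how the code-under-test uses it, with no testing-hooks in the production code.

#include <assert.h>

/* The code-under-test sends bytes through whatever SendFunc it is given. */
typedef int (*SendFunc)(const char *bytes, int len);

int transmit_greeting(SendFunc send)  /* returns 0 on success, -1 on failure */
{
    return send("hello", 5) == 5 ? 0 : -1;
}

/* The mock records the call instead of doing real I/O. */
static int mock_send_calls;
static int mock_send(const char *bytes, int len)
{
    (void)bytes;
    mock_send_calls++;
    return len;  /* pretend every byte was sent */
}

int main(void)
{
    assert(transmit_greeting(mock_send) == 0);
    assert(mock_send_calls == 1);  /* white-box: exactly one send happened */
    return 0;
}

The "hook" is just the seam where the collaborator is passed in.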

4. straw-man: to test a for-loop, you need as many tests as there are iterations in the loop. 

Heard of boundary testing? See my Testing on the Toilet entry.
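
To make that concrete, here's a sketch in C (made-up function) of boundary-testing a loop: test zero iterations, one iteration, and several, not every possible count.

#include <assert.h>
#include <stddef.h>

/* Code-under-test: sums an array with a for-loop. */
static int sum(const int *a, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void)
{
    int one[] = {7};
    int many[] = {1, 2, 3};
    assert(sum(NULL, 0) == 0);  /* boundary: the loop body never runs */
    assert(sum(one, 1) == 7);   /* boundary: exactly one iteration */
    assert(sum(many, 3) == 6);  /* typical: several iterations */
    return 0;
}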

5. straw-man: changing code requires changing tests. 

This isn't a big burden if DRY (“Don’t Repeat Yourself”) is applied to both code and tests. (See also “Once and Only Once”.)
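
Here's a sketch in C (made-up names) of DRY applied to tests: one helper captures the repeated pattern, so when the code-under-test changes, the tests change in one place.

#include <assert.h>

/* Code-under-test. */
static int count_char(const char *s, char c)
{
    int n = 0;
    for (; *s != '\0'; s++)
        if (*s == c)
            n++;
    return n;
}

/* The helper holds the repeated arrange/assert pattern. */
static void check_count(const char *s, char c, int expected)
{
    assert(count_char(s, c) == expected);
}

int main(void)
{
    check_count("", 'a', 0);
    check_count("banana", 'a', 3);
    check_count("banana", 'z', 0);
    return 0;
}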

6. straw-man: you can't have nice OO because TDD tests are "procedural". 

So having no tests is required for nice OO?

7. straw-man: TDDing classes = bad design, because incrementally designing a class or classes while writing tests can’t be done well. And refactoring the structure of the code (to improve the design) can’t be done, either.

Really? Refactoring is supported by TDD tests. Do well-designed classes pop out of your head fully formed?

8. straw-man: when implementing features one at a time, the user experience is a mess.

Before BDD, did you design the entire UX in one second? It was incremental then, too.

9. straw-man: the object's state-space is a big number (like 2**32), so we can't write enough unit tests, so don't even try.

Heard of boundary testing? See my Testing on the Toilet entry.

OTHER issues with Signed SSLVerifySignedServerKeyExchange ("goto fail" bug)

Mike Bland, with whom I worked at Google teaching and spreading the word on how to write fast unit tests, find code smells and refactor them away, do continuous integration, and do TDD, has commented on the "goto fail" bug, and wants to direct your attention to other aspects of that code: the copy-paste code duplication that probably created the problem, which the lack of unit tests didn't catch:

Mike wrote:
I still haven’t found any other articles that suggest, as mine did, that the same block of code cut-and-pasted six times in the same file was a problem, that it was ripe for factoring out, and that the resulting function was straightforward to test in isolation. That’s curious to me; it’s like people got stuck on the one stupid goto fail; line and started drawing conclusions without looking at the surrounding code, seeing the same algorithm copied immediately above it, and suspecting, as I did, that there was a classic code duplication problem which fixing would’ve likely avoided the bug to begin with, test or no.
(Go read his whole blog entry, it is worth it, and lengthy. So much blogging is too short these days.)

He also wrote:

What’s more, if memory serves, Keith even wrote the Testing on the Toilet article that advocated for breaking larger chunks of logic into smaller, more testable functions to avoid having a combinatorial explosion of test inputs—the very concern that Bellovin had mentioned as rendering exhaustive system-level testing infeasible.


My response to Mike is:

Hi Mike. Yes, I did write a Testing on the Toilet article titled "Too Many Tests", which was posted inside Google as well as on the public Google Testing blog. One of the commenters said "Good example of equivalence class partitioning."
In the movie Amadeus, the Austrian Emperor criticizes Mozart's music as having “too many notes.” How many tests are “too many” to test one function?

My personal blogging on the "goto fail" issue stuck to the testable aspect of the code, because many people were saying it could not be tested at the unit level.

I could have also pointed out the need for code review and/or pair programming and the need for refactoring, based on a foundation of well-tested code, but I kept that blog entry focused only on the 'testable' topic.

[PS: Check out the "Real Programmers Write Tests" merchandise on my blog's home page.]

Tuesday, March 4, 2014

Next Product For Sizeography

As CTO and chief programmer of Upstart Technology and the Sizeography division, I'd like to announce our next product, already in progress.

[Image: hands holding a tape measure to find the size of an unknown object]


This new app will allow you to place a common object on, or near, another object, and find the approximate size of the unknown object. For example, if you place a quarter on a small table and snap a picture in our app, the app will compute the size of the table.

While the app won’t be ready right away, we want everyone to have a say, so after we get suggestions for the name, we’ll hold a vote to see which of the names wins.

Nominate a name for the app in the comments on this blog (or our main company blog), and look for the poll to be put up after we’ve gotten some good suggestions for names. (G-rated names only, please.)

How to enter: Leave a comment with your app name suggestion[s] and make sure we have your email so we can contact you if you win. We’ll take the best names and put up a poll, and the winner of the poll will get the prize.

The prize will probably be an Easter-egg hidden in the app, but if you’ve got a prize suggestion, be sure to let us know in the comments (G-rated only, please).


Sunday, February 23, 2014

TDD and Signed SSLVerifySignedServerKeyExchange

Test-driven development (TDD) done consistently throughout a project results in near-100% logic coverage. That's better than just statement coverage. It also results in testable code, better modularity, better design (if you pay attention to code smells), and faster software development.

The heart of TDD is:


  1. Write a test. Running that test should fail. (Tests are often simpler than the code they test, so usually this step is pretty easy.)
  2. Make the test pass by writing just enough code. (All other tests should pass as well.)
  3. Refactor. (Not done every time through this loop; remove duplicate logic and look for other code smells.)
  4. Repeat until the desired feature is implemented.
  NOTE: Checking into source-code control can be done whenever all tests are passing. (A Continuous Integration (CI) server can also run tests: the "fast tests" created by TDD and slower system tests.)


Badly-designed code is hard to test. Most developers who try to use TDD in a badly-designed, not-unit-tested project will find TDD is hard to do in this environment, and will give up. If they try to do "test-after" (the opposite of TDD's test-first practice), they will also find it hard to do in this environment and give up. And this creates a vicious cycle: untested bad code encourages more untested bad code.

If code is only written to make a test pass (as in TDD), you can't write an if statement until you have a test that requires it. You should end up with a test for the if statement being true, and a test for the if statement being false, possibly more than one test for each branch.
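
A minimal sketch in C of what that looks like (the function is made up): one test forces the if into existence, another pins down the other branch.

#include <assert.h>

/* Code-under-test, written only to make the tests below pass. */
static int clamp_to_positive(int x)
{
    if (x < 0)
        return 0;
    return x;
}

int main(void)
{
    assert(clamp_to_positive(-5) == 0);  /* the test that required the if */
    assert(clamp_to_positive(3) == 3);   /* the if-statement-false branch */
    assert(clamp_to_positive(0) == 0);   /* boundary between the branches */
    return 0;
}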

TDD in C is not terribly hard. I've taught people how to TDD in C and other languages. And the suites of tests created by TDD make it possible to refactor with more confidence.

If one or more TDD tests (and other tests, like system tests) fail, someone broke something. The fine-grained testing that is part of TDD will usually pinpoint where the new bug is. Undo the changes that broke the test, or fix things before doing anything else.

(Tests that fail unpredictably indicate something is non-deterministic either in the code-under-test or the test itself. That also needs to be fixed.)

landonf demonstrated that SSLVerifySignedServerKeyExchange() is unit-testable in isolation. See his code on github. I copied his unit test for a bad signature here:

@interface TestableSecurityTests : XCTestCase @end

@implementation TestableSecurityTests {
    SSLContext _ctx;
}

- (void) setUp {
    [super setUp];
    memset(_ctx.clientRandom, 'A', sizeof(_ctx.clientRandom));
    memset(_ctx.serverRandom, 'B', sizeof(_ctx.serverRandom));
}

- (void) tearDown {
    [super tearDown];
}


/* Verify that a bogus signature does not validate */
- (void) testVerifyRSASignature {
    SSLBuffer signedParams;
    SSLAllocBuffer(&signedParams, 32);
    uint8_t badSignature[128];
   
    memset(badSignature, 0, sizeof(badSignature));
    OSStatus err;
    err = SSLVerifySignedServerKeyExchange(&_ctx, true, signedParams, badSignature, sizeof(badSignature));
    XCTAssertNotEqual(err, 0, @"SSLVerifySignedServerKeyExchange() returned success on a completely bogus signature");
}

@end

Friday, February 14, 2014

A Key Aspect of Successful Projects

Rapid responses to questions are often a key to successful projects. Extreme Programming and other agile methods like Scrum consider this to be so important as to require the domain expert / product manager / business analyst / "product owner" / "Customer" to be co-located with the developers/testers.

Obviously some projects can be successful without co-location, but odds against success rise whenever developers can't quickly and easily communicate with the Customer.

Tuesday, January 28, 2014

myGraph Initial View

Let this picture serve as a thousand words. More to come...

[Image: myGraph initial view]


Thanks to a lot of hard work

Thanks to a lot of hard work, Sizeography's first iOS app, myGraph, is now live in the app store. This app allows you to print completely customized grid or graph paper from your iPhone or iPad to any compatible AirPrint printer. For a 99¢ one-time purchase (no ads, ever), you can print as many or as few sheets of graph paper as you like, to your exact specifications, at home or away.

You can also use myGraph to send a pdf of your custom or predefined grid or graph paper via:
  • AirDrop
  • DropBox
  • Evernote
  • Message
  • Mail
You can also save an image to your photo library, or copy an image to paste into another app. And myGraph can send pdf files to apps that accept pdf files, and png files to apps that accept png files.


We're working on a list of 101 uses for this app. Here are a few we have thought up so far:
  1. Planning and designing woodworking projects.
  2. Print a grid to help with sewing patterns.
  3. Tailoring.
  4. Scrap-booking.
  5. Science class: plotting lab results.
  6. Math class: graphing equations.
  7. Geometry class.
  8. Trigonometry class.
  9. Practicing writing Chinese, Japanese, Korean, and other languages.
Have another use for custom graph paper in mind? Comment here or on our sizeography.com site.


Monday, January 27, 2014

Some thoughts on C, OO, ObjC, C++

C allows structs to be copied "by value", but C doesn't allow array types to be copied directly. The following code will not compile:

#include <stdio.h>

typedef int FiveIntsArray[5];

FiveIntsArray function5ints( FiveIntsArray arrIn ) // error: functions cannot return array types
{
    FiveIntsArray ret;
    ret = arrIn;   // error: array types are not assignable
    return ret;
}

int main(int argc, char *argv[])
{
    FiveIntsArray arr = {1,2,3,4,5};
    FiveIntsArray bar;
    bar = function5ints(arr);   // error: array types are not assignable
    printf("%d, %d, %d, %d, %d\n", bar[0], bar[1], bar[2], bar[3], bar[4]);
}

Yes, it does not compile:

[Image: compiler error messages]

But put all this inside a struct, and it all works!

#include <stdio.h>

typedef struct FiveIntsStruct {
    int arr[5];
} FiveIntsStruct;

FiveIntsStruct function5ints( FiveIntsStruct arrIn )
{
    FiveIntsStruct ret;
    ret = arrIn;    // struct assignment copies all the members
    return ret;
}

int main(int argc, char *argv[])
{
    FiveIntsStruct arr = {{1,2,3,4,5}};
    FiveIntsStruct bar;
    bar = function5ints(arr);   // struct copy in, struct copy out
    printf("%d, %d, %d, %d, %d\n", bar.arr[0], bar.arr[1], bar.arr[2], bar.arr[3], bar.arr[4]);
}

And I get this output:

1, 2, 3, 4, 5
Program ended with exit code: 0

Ta-da!

However, struct-copying, at least in the early days of C, was thought to be inefficient, and was often avoided in C code. This tradition is so ingrained in C programmers that I have met some programmers who didn't know that struct-copying is allowed in C.

For small structs, and under modern compilers, struct-copying can be just as efficient as passing a single value, and more efficient than passing a pointer to a struct, which requires dereferencing the pointer to access its members. A modern 64-bit calling convention can pass a small struct entirely in registers, and even a 20-byte FiveIntsStruct copies in just a few instructions.

If you want to do object-oriented programming in C, one way to do that is to start with structs. We can associate functions and structs, and we already have an example of something object-like in the C standard library: the file io functions.

typedef struct FILE { 
    // we don't care what's in here--consider it private.
} FILE;

FILE * fopen(const char *restrict filename, const char *restrict mode);
// fopen allocates & initializes, kind of like a C++ constructor.

int fprintf(FILE * restrict stream, const char * restrict format, ...);
// functions taking a FILE* argument are methods of the FILE "class".

int fclose(FILE *stream);
// cleans up and deallocates, kind of like a C++ destructor.

But a big part of OO is polymorphism. How can we accomplish that? Let's say we want a file-writer, and a network-writer, but the majority of the code should work with both kinds of writer. We can do it a bit like this:

struct Writer;

typedef int (* PrintFunc)(struct Writer * w, char const * format, ...);
// pointer to function that returns int
// and has Writer* as the type of its first argument.

typedef void (* CloseWriterFunc)(struct Writer * w);
// close and de-allocate

typedef struct Writer {
    PrintFunc Print;
    // ...other function pointers
    CloseWriterFunc Close;
} Writer;

Writer * NewFileWriter(const char *restrict filename);
Writer * NewNetworkWriter(const char *restrict serverName);

int main(int argc, char*argv[])
{
    Writer * w;
    int useFile = 1;
    
    if ( useFile )
        w = NewFileWriter("someFile.txt");
    else
        w = NewNetworkWriter("someServer");

    w->Print(w, "etc.");
    w->Close(w);
    w = NULL;
}

Implementing this in one or more .c files:

typedef struct {
    Writer wpart;
    // file-writer specific data members go here.
} FileWriter;

typedef struct {
    Writer wpart;
    // network-writer specific data members go here.
} NetworkWriter;

// must match function-pointers...

static int FileWriterPrint(Writer * w, char const * format, ...)
{
    FileWriter * fw = (FileWriter *) w;
    // real code goes here.
    return 0;
}

static void FileWriterClose(struct Writer * w)
{
    FileWriter * fw = (FileWriter *) w;
    // real code goes here.
    free(fw);
}

static int NetworkWriterPrint(Writer * w, char const * format, ...)
{
    NetworkWriter * fw = (NetworkWriter *) w;
    // real code goes here.
    return 0;
}

// other NetworkWriter functions...

static void NetworkWriterClose(struct Writer * w)
{
    NetworkWriter * fw = (NetworkWriter *) w;
    // real code goes here.
    free(fw);
}

Writer * NewFileWriter(const char *restrict filename)
{
    FileWriter * fw = malloc( sizeof( FileWriter) );
    fw->wpart.Print = FileWriterPrint;
    fw->wpart.Close = FileWriterClose;
    // real code goes here.
    return (Writer *) fw;
}

Writer * NewNetworkWriter(const char *restrict serverName)
{
    NetworkWriter * nw = malloc( sizeof(NetworkWriter) );
    nw->wpart.Print = NetworkWriterPrint;
    nw->wpart.Close = NetworkWriterClose;
    // real code goes here.
    return (Writer *) nw;
}

The syntax isn't that nice, but a preprocessor could automate much of this.

And, in fact, both Objective-C and C++ started out as preprocessors generating C code.

There is one problem with this implementation besides its awkward syntax. Each instance of this struct contains copies of the function-pointers. This overhead can be eliminated by another level of indirection.

// there will be one instance of this struct per "class"
typedef struct WriterFunctions {
    PrintFunc Print;
    // ...other function pointers
    CloseWriterFunc Close; // close and de-allocate
} WriterFunctions;

typedef struct Writer {
    // writer data
    WriterFunctions* methods;
} Writer;

static WriterFunctions FileWriterFunctions = {
    FileWriterPrint, FileWriterClose
};

static WriterFunctions NetworkWriterFunctions = {
    NetworkWriterPrint, NetworkWriterClose
};

Writer * NewFileWriter(const char *restrict filename)
{
    FileWriter * fw = malloc( sizeof(FileWriter) );
    fw->wpart.methods = &FileWriterFunctions;
    // real code goes here.
    return (Writer *) fw;
}

Writer * NewNetworkWriter(const char *restrict serverName)
{
    NetworkWriter * nw = malloc( sizeof(NetworkWriter) );
    nw->wpart.methods = &NetworkWriterFunctions;
    // real code goes here.
    return (Writer *) nw;
}

This is probably not that far from the implementation that C++ uses, where polymorphic ("virtual") function-pointers are kept in "vtables".

C++ contorts the C language into pretending classes are just "special" structs. But a C++ class or struct may have many hidden things: a vtable pointer, an implicitly-defined default constructor calling constructors on its member variables, an implicitly-defined copy constructor calling copy constructors of its member variables, an implicitly-defined assignment operator calling assignment operators on its member variables, and an implicitly-defined non-virtual destructor calling the destructors of its member variables.

And C++ has some gotchas: if the base class in a class hierarchy has an implicitly-defined non-virtual destructor, then calling delete on a base-class pointer that actually points to a derived-class object will not call the derived-class destructor, which may destroy the correctness of the system.

Objective-C adds classes and objects as something different from structs, adding some pretty self-contained Smalltalk-like syntax to C. Without operator overloading and other contortions of syntax, what you see in Objective-C code is generally what you get. Not much is invisible or implicit in the language itself (though the implementation of method-lookup isn't normally visible). With ARC, retain, release, and autorelease are mostly invisible/implicit, and I'm pretty sure Apple has done some very interesting things at run-time for certain technologies like Core Data.

If you know where the sharp edges are, C++ can be pretty cool. And you can blend C++ and Objective-C if you want to. C++ constructors and destructors will be invoked (invisibly/implicitly) for you in Objective-C's equivalents of constructors and destructors. 

Most of the functionality of ARC could have been implemented with appropriate C++ smart pointer template classes, but I'm sure most Objective-C programmers would not want to sweat the details of the required syntax.

So there it is.

Tuesday, January 21, 2014

FRACTIONS and computers

Fractions and computers: and when I say "computers," I am including the iPad, iPhone, iPod Touch as well as laptop and desktop computers.

If you work in traditional English units—inch, foot, pound, and so on—you are used to working with fractions. Instead of 3.25 inches, you would say 3¼ inches. Computers usually force you to work in decimals, like 3.25 inches, but there are some issues with decimals, binary, and fractions you should know about.

You should know that computers can handle some fractions very well; these are the powers-of-two fractions like ½, ¼, ⅛, and so on. This is because the computer represents numbers in a power-of-two fashion we call "binary". In binary there are only two digits, 0 and 1, and we arrange them in columns to make larger numbers.

(binary) 101 = (1 × 2²) + (0 × 2¹) + (1 × 2⁰) = 4 + 0 + 1 = 5.

In decimal, we have digits 0-9 and arrange them in columns to make larger numbers. The number 345 has a 3 in the 100's column, a 4 in the 10's column, and a 5 in the 1's column. Each column represents a power of 10.

(decimal) 345 = (3 × 100) + (4 × 10) + (5 × 1)
= (3 × 10²) + (4 × 10¹) + (5 × 10⁰)

On the other side of the decimal point, the columns are also powers of ten, but the exponents are negative. The number 0.728 has a 7 in the tenths column, a 2 in the hundredths column, and an 8 in the thousandths column.

(decimal) 0.728  = ⁷/₁₀ + ²/₁₀₀ + ⁸/₁₀₀₀
= (7 × 10⁻¹) + (2 × 10⁻²) + (8 × 10⁻³)
= (7 ÷ 10) + (2 ÷ 100) + (8 ÷ 1000)

Here's the scary part. Computers do not handle decimal fractions whose denominators are powers of ten very well, like ¹/₁₀ (which is 10⁻¹), ¹/₁₀₀ (which is 10⁻²), or ¹/₁₀₀₀ (which is 10⁻³). Because computers think in binary, the fraction one-tenth does not translate well into binary numbers. This is similar to how some numbers are represented in decimal. For example, 1/7 is represented as a series of endlessly repeating digits in decimal: 0.14285714285714…

Similarly, decimal 1/10 in binary is an endlessly repeating sequence of binary digits to the right of the "binary point": 0.000110011….

decimal 1/10 = binary 0.000110011… =
= (0 × 2⁻¹) + (0 × 2⁻²) + (0 × 2⁻³) + (1 × 2⁻⁴) + (1 × 2⁻⁵) 
+ (0 × 2⁻⁶) + (0 × 2⁻⁷) + (1 × 2⁻⁸) + (1 × 2⁻⁹)…
= 0/2 + 0/4 + 0/8 + 1/16 + 1/32 + 0/64 + 0/128 
+ 1/256 + 1/512…
= 0 + 0 + 0 + 0.0625 + 0.03125 + 0 + 0 + 0.00390625 + 0.001953125…
= 0.099609375…

No matter how many binary digits we use to the right of the binary point, we can't get a value that exactly matches the decimal fraction 1/10, though it can get pretty close.


Why should this bother you?

It has to do with adding and multiplying. In many situations while using a computer, if you add the binary equivalent of one-tenth ten times, you won't get the 1.0 you expect, unless the system has taken precautions beyond the scope of this essay.

In some systems, 0.1 translated to binary and back to decimal results in this number: 0.100000001490116119384765625. Add that number to itself ten times and you get approximately 1.0000000149011612. 

A fractional error, added and multiplied many times, can result in numbers being "off" enough to throw off what you want to measure.
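
You can see this for yourself with a few lines of C (the exact digits vary with the floating-point type and platform, but the idea holds):

#include <stdio.h>

int main(void)
{
    float sum = 0.0f;
    for (int i = 0; i < 10; i++)
        sum += 0.1f;  /* 0.1 has no exact binary representation */
    printf("sum = %.17g\n", sum);  /* near 1, but not exactly 1 */
    printf("sum == 1.0f is %s\n", (sum == 1.0f) ? "true" : "false");
    return 0;
}

On most machines this prints a sum slightly different from 1, and the comparison prints false.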

What am I going to do about this?

When feasible, I want to represent numbers as fractions when you enter them into an app that specifically uses fractions in its input and/or output. When you enter a number like 3¼ into some of my apps, I will actually keep it as three numbers: 3, 1, and 4, and keep those values straight when we add, multiply, or perform other operations on them.

If I represent 0.1 as the fraction ¹/₁₀ and add that number to itself ten times, I should get ¹⁰/₁₀, and that should compare exactly equal to the number 1.
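
A bare-bones sketch in C of the idea (my apps' real code is more involved): keep the numerator and denominator as integers, reduce with a greatest-common-divisor, and the comparison is exact.

#include <assert.h>

typedef struct { long num, den; } Fraction;

static long gcd(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static Fraction fraction_add(Fraction x, Fraction y)
{
    Fraction r = { x.num * y.den + y.num * x.den, x.den * y.den };
    long g = gcd(r.num, r.den);
    r.num /= g;   /* reduce to lowest terms */
    r.den /= g;
    return r;
}

int main(void)
{
    Fraction tenth = {1, 10};
    Fraction total = {0, 1};
    for (int i = 0; i < 10; i++)
        total = fraction_add(total, tenth);
    assert(total.num == 1 && total.den == 1);  /* exactly 1, no rounding error */
    return 0;
}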


I am using this technique in Sizeography's new iOS app, myGraph. Not just for additional precision, but also to allow the user interface to clearly reflect how the user is thinking. If a user wants graph paper with cells that are 1/8 of an inch wide, or 8 cells per inch, that's easier for the user to think about than requiring the user to type in 0.125.



[Image: myGraph showing Grid Spacing as 1/8]


myGraph represents traditional measures as inches, either in fractions like ¼ or decimals like 0.25. myGraph also works in the metric system using centimeters. Centimeters are only represented as decimals.

When you switch between inches and centimeters, myGraph will convert to the nearest approximately equal value. If you are using inches and switch between fractions and decimals, myGraph will convert to the nearest approximately equal value.
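
One plausible way to do that conversion (a sketch, not myGraph's actual code): scale by the target denominator and round to the nearest integer numerator.

#include <math.h>
#include <stdio.h>

/* Snap a decimal value to the nearest fraction with the given denominator. */
static void nearest_fraction(double value, long denominator,
                             long *num_out, long *den_out)
{
    *num_out = lround(value * (double)denominator);
    *den_out = denominator;
}

int main(void)
{
    long num, den;
    /* 0.3 cm is about 0.118 inches; snap it to the nearest 64th. */
    nearest_fraction(0.3 / 2.54, 64, &num, &den);
    printf("nearest 64th: %ld/%ld\n", num, den);  /* prints 8/64 */
    return 0;
}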


Monday, December 30, 2013

Proposed Assembly Language Instructions (mid-1980's humor)

Imagine the computer-room of old: non-removable disk drives the size and shape of a dishwasher. Tape drives spinning like in those old black-and-white movies. Punch card input. Paper-tape for input/output. Form-feed printers with paper 17 inches wide. And the computer operator in a white lab coat who controls access to the room-sized computer and loads tapes for you. In that world, these assembly-language instructions are funny.

I'm not quite old enough to have experienced that world directly. I've never actually seen a paper-tape IO device, and I never actually wrote code that used those big magnetic tapes for input/output. (I did use cassette-tape storage with a ZX-81. That's very different.)

BH Branch and Hang
TDB Transfer and Drop Bits
DO Divide and Overflow
IIB Ignore Inquiry and Branch
SRZ Subtract and Reset to Zero
PI Punch Invalid
FSRA Forms Skip and Run Away
SRSD Seek Record and Scar Disk
BST Backspace and Stretch Tape
RIRG Read Inter-Record Gap
UER Update and Erase Record
SPSW Scramble Program Status Word
EIOC Execute Invalid OpCode
EROS Erase Read-Only Storage
PBC Print and Break Chain
MLR Move and Lose Record
DMPK Destroy Memory-Protect Key
DC Divide and Conquer
EPI Execute Programmer Immediate
LCC Loud and Clean Core
HCF Halt and Catch Fire
BBI Break on Blinking Indicator
BPO Branch on Power Off
AI Add Improper
ARZ Add and Reset to Zero
RSD Read and Scramble Data
RI Read Invalid
RP Read Printer
BSP Backspace Printer
MPB Move and Pitch Bits
RNR Read Noise Record
WWLR Write Wrong Length Record
RBT Rewind and Break Tape
ED Eject Disk
RW Rewind Disk
RDS Reverse Disk Spin
BD Backspace Disk
RTM Read Tape Mark
DTA Disconnect Telecommunication Adapter
STR Store Random
BKO Branch and Kill Operator
CRN Convert to Roman Numerals
FS Fire Supervisor
BRI Branch to Random Instruction
PDR Play Disk Record
POS Purge Operating System
USO Unwind Spooled Output
EPSW Erase Program Status Word
PMT Punch Magnetic Tape
AAIE Accept Apology and Ignore Errors

Laws of Computing (circa mid-1980's)

First Law of the Computer: I am a computer. I am dumber than a human, and smarter than a programmer.

Lloyde's First Law: every program contains [at least] one bug.

Eggleston's Extension Principle: Programming errors which would normally take one day to find will take five days to find if the programmer is in a hurry.

Gumperson's Lemma: The probability of a given event happening is inversely proportional to its desirability.

Weirstack's Well-Ordering principle: the data needed for yesterday's debug shot must be requested no later than noon tomorrow.

Proudfoot's Law of the Good Bet: if someone claims that you can assume the input data to be correct, ask them to promise you a dollar for every input error.

Fenster's Law of Frustration: if you write a program with no error-stops or diagnostics, you will get random numbers for your output. (This can, incidentally, be used to an advantage.) However, if you write a program with 500 error-stops or diagnostic messages, they will all occur.

The Law of the Solid Goof: In any program, the part that is most obviously beyond all need of changing is the part that is totally wrong.
Corollary A: No one you ask will see it either.
Corollary B: Anyone who stops by with unsought advice will see it immediately.

Wyllie's Law: Let n be the number of the last category-1 job run at the computer center; the number of your job is either n+1 or n+900.

O'hane's Rule: The number of cards in your deck is inversely proportional to the amount of output your deck produces. [FYI: it was one line of code per card in ye olde days of programming.]

Mashey's First Law: if you lie to the assembler, it will get you.

Mashey's Second Law: if you have debugging statements in your program, the bugs will be scared away and it will work fine, but as soon as you take away the debugging statements, the bugs will come back.

The Law of Dependent Independence: It is foolhardy to assume that jiggling k will not diddle y, however unlikely.

The Law of Logical Incompatibility: all assumptions are false. This is especially true of obvious assumptions.

Velonis's First Law: the question is always more important than the answer.

Velonis's Second Law: when everything possible has gone wrong, things will probably get worse.

Velonis's Third Law: the necessity for providing an answer varies inversely with the amount of time the question can be evaded.


Tuesday, December 3, 2013

Diana Larsen on Changing Your Organization


(Originally posted 2003.Apr.30 Wed; links may have expired.)

Diana Larsen's article on change and learning (and XP) http://www.cutter.com/itjournal/change.html 12 pages (PDF).

She quotes Beckhard's formula for change (my paraphrasing): If dissatisfaction with status quo, plus desirability of change, plus clear definition of what/how to do, are greater than the resistance to change, then you can achieve the desired change.

She says to encourage change, market it by increasing awareness of the problems with the status quo (I see a risk of being called names like "negative" or "not a team player") and by communicating the desirability of getting to a better situation. "When you are implementing change, there is no such thing as too much communication."

Some of this runs counter to Jerry Weinberg and another book (name forgotten). Jerry says "don't promise more than a 10% improvement." A manager doesn't want to admit that more improvement is possible, because then they would have to admit that they were not doing a "good job" before. The other book pointed out that too clear a picture of the future can be paralyzing, because people see the perceived drawbacks of that situation too vividly while not appreciating the benefits.

She writes "XP has the advantage over many change efforts in that fast iterations build in the feedback loop for short-term success. While floundering through the chaos, nothing bolsters the participants in a change effort like the sense of progress from a quick 'win.'"

Larsen recommends Chartering to start a project, and agrees with Lindstrom and Beck on "Hold two-day professionally facilitated retrospectives each quarter." (And at project end.)

Change takes time. "Putnam points out the need for patience with change efforts as he maps out six months' worth of defect tracking and shows its consistency with Satir's [change] model. He notes that if you had an evaluation of success or failure after three months, you might have come to an erroneous conclusion."

Also check out Rick Brenner's "Fifteen Tips for Change Leaders" here: http://www.chacocanyon.com/essays/tipsforchange.shtml



Keith Ray is developing new applications for iOS® and Macintosh®. Go to Sizeography and Upstart Technology to join our mailing lists and see more about our products.