C. Keith Ray

C. Keith Ray writes about and develops software on multiple platforms and in multiple languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Thursday, September 26, 2013

Technical Reviews, More on Test Driven Design

(Originally posted 2003.Mar.26 Wed; links may have expired.)


Scott's essay on TDD is up here and Ron Jeffries's critique is there. I'll have more to say about it after reading it a couple of times.

That recent issue of STQE Magazine also has a great short essay by Jerry Weinberg on technical reviews being a learning accelerator. One thing I want to point out is that junior programmers should be reviewing the work of master programmers, not necessarily to find errors, but to learn from the master - and of course, master programmers can make mistakes too, which are often visible to junior programmers as well as other master programmers. If the master programmer is humble, he/she can learn from a junior programmer, too.

I've been reading Weinberg and Freedman's book on technical reviews, Handbook of Walkthroughs, Inspections and Technical Reviews (which was written in FAQ style - question/answer), and was a bit surprised by their recommendation that when people are being trained in how to conduct technical (code) reviews, they should have some practice at conducting a review in the presence of hidden agendas.

Some examples of hidden agendas in code reviews: person A wants to impress person B. Person B wants to make person C look bad. Person C needs to go to the restroom, but doesn't want to say so. Person D is distracted by illness of his/her spouse.

Only Weinberg would write about hidden agendas in code reviews - too many writers and books on software development practices seem to assume that people act like machines.

On Weinberg's SHAPE forum, Charlie Adams wrote: "When people are getting tense about their software being reviewed, use Jerry's phrase, 'Yes, I trust your honesty, but I don't trust your infallibility. I don't trust anyone's infallibility.' (QSM 4: page 220) In my experience this has always calmed the atmosphere and allowed us to examine the code rather than the developer."

While I have done code reviews, both informal and formal, I prefer pair programming. It combines reviews with collaborative design, testing, and coding. Rather than go into all the reasons why pair programming is good, I'll point you to www.pairprogramming.com and Pair Programming Illuminated.


Tuesday, September 24, 2013

Test Driven Development is about Designing, Not Testing

(Originally posted 2003.Mar.25 Tue; links may have expired.)


In a recent issue of STQE Magazine, Joel Spolsky wrote that Test Driven Development (TDD) doesn't substitute for "normal" testing. It seems like he doesn't understand that test driven development is about low-level design, not testing. Programmer Tests are a happy (and intentional) side-effect of the design and refactoring process. It is to avoid this misunderstanding that I prefer to call TDD "Test Driven Design".

Ron Jeffries and Scott Ambler had a little spat on the Agile Modeling Mailing List about TDD, not about whether it constitutes "design", but on how much design "up-front" it entails. Scott started it by writing here "An important observation is that both TDD and AMDD [Agile Model-Driven Development] are based on the idea that you should think through your design before you code. With TDD you do so by writing tests whereas with AMDD you do so by creating diagrams or other types of models such as Class Responsibility Collaborator (CRC) cards."

Ron replied "Does TDD suggest that you "think through your design before you code"? I see no such thing in TDD. In TDD we write ONE test, then make it work, then write another." [He's leaving out the refactoring step here, which is another area of design in TDD.]


Maybe Ron doesn't think writing each test is "thinking" or "designing", but I do. At the risk of being snide, I assert that each test represents more thinking than a lot of programmers do when they write code without tests. Perhaps Ron's extensive experience has made his designing unconscious.

When writing the test, you think about the API, the goal of the API, and how to verify the goal is met. That's design. Before you start writing the tests, you think about whether to extend an existing class (and its tests) or to start a new class and new tests. That's higher level design. After you write a test and make it pass, then you look to see if there is duplication or other design smells to be refactored away. Still more design. Perhaps Ron thinks this refactoring step is the only design step in TDD.

Check out Kent Beck's book: Test-Driven Development: By Example for an introduction to TDD. Unfortunately, only a very experienced (zen-master-level) programmer like Kent Beck can take the refactoring step of TDD (remove duplication) and derive all the other good design principles from that. So read Robert Martin's book Agile Software Development: Principles, Patterns, and Practices, which not only uses TDD extensively in its copious examples, but also documents design principles that every programmer should know.


Thursday, September 19, 2013

Writer Ferrets by Richard Bach

Writer Ferrets: Chasing The Muse, by Richard Bach. Quite a delightful little book about writing, publishing and writer's block. Satisfying while reading, but less satisfying in retrospect. I'll be reading the other Ferrets books as I find them. It makes me wonder how much of the book is autobiographical... but not enough to ask the author and possibly get a disappointing answer. Does anyone have a translation of the runes before and after the title page?

Tuesday, September 17, 2013

Fun in the Workplace

(Originally posted 2003.Mar.22 Sat; links may have expired.)


I read some years ago in the book A Great Place to Work that a good company allows people to not "be a part of the family" -- it's ok if the workplace is "just a job" -- though I hope it's a job done well.

I don't play pool or foosball at my office; I'm not on the company softball team; and while I wanted to get together with the radio-control-car guys, I was too busy and didn't want my rc-car to get muddy. My wife is still recovering from playing in a softball game for her office a week or more ago.

My preference for "fun" at the office would be people getting together to learn about software methods, design, and so on (topics I plan to write about in this blog). Getting a group of coworkers together to ride go-carts is not my idea of fun.

It appears that some people in Germany are blaming their dot-com collapse on "fun" in the workplace. Noted in Laurent Bossavit's blog, which points to an article on Mair's End The Fun.

Mair's office rules seem to have gone too far in the other direction - uniforms required, no calendars or pictures on the walls, half-hour lunches. And this is in an advertising agency?!

I fail to see how uniforms get creative work done better or faster. Doesn't anyone blame the dot-com crash on bad business plans, profit-mongering brokers, and credulous investors?

A Great Place to Work says that the core truth of what makes a company successful is mutual trust between employees and management. Quoting the preface:
[a great place to work] requires the direct involvement of senior management who must insist that fostering an exceptional work environment is an explicit goal of the organization - one that is on a par with other explicit goals like making a profit or providing high-quality products or services. Being a great place to work cannot be a fad of the month or it will rightfully appear to employees as another version of management by manipulation.




Thursday, August 1, 2013

C May Be Better Than C++ For Your Embedded Application

I have done some coaching of programmers writing C and C++ for embedded systems. The C programmers' target environment didn't have a heap, so C++ new/delete and associated smart pointers were out (and thus, most of C++'s standard libraries were out). But C was an improvement on assembly language for their productivity and unit testing.

The C++ programmers, though they called their target environments "embedded," really had lots of memory and full-fledged Linux or Linux-like operating systems. So they could use C++ instead of C for programming. But my experience helping them has shown me that very few programmers know how to write code that uses C++ safely (as in exception-safe code). Often their use of C++ was a hindrance to productivity, testability, and execution-efficiency.

If I could go back in time to advise those C++ embedded teams on the choice of their programming language, I'd recommend C instead of C++. It seems harder to write untestable, inefficient code in C, compared to the nightmares I've seen in C++. I've noticed that C programmers seem more pragmatic than many C++ programmers. The C++ programmers often have rules in their heads about the "right" way to code that they have to unlearn if they want to write testable, efficient code.

If average-skilled C++ programmers could be trusted to use C++ as a "Better C" (which is how C++ used to be marketed), I would recommend that.

I'm currently programming in Objective-C with ARC. I could be blending Objective-C with C++ (just change the source file suffix to ".mm") to take advantage of C++'s "Better C" advances in type-checking, but so far, I'm OK with the C part of Objective-C. Unit tests have found some of the issues, stemming from C's less-powerful type-checking, that the C++ compiler could have caught.

ARC takes Cocoa's semi-manual memory management and automates it, so I no longer have to write retain, release, or autorelease. If I didn't have ARC, I would seriously consider using C++ smart pointers to manage Objective-C memory usage. Some would view smart-pointers plus Objective-C as a worse solution than explicitly using retain and release, but manual or semi-manual memory management is very error-prone.

Thursday, June 13, 2013

Book Titles that may be More Interesting than the Books they Came From


A list of titles I found interesting, when looking at a mish-mosh of old books, some time ago.

  • Labors of Love
  • Proud Destiny
  • Just In Case
  • Natural Acts
  • Education and Ecstasy
  • The Cannibal Galaxy
  • A Year in Upper Felicity
  • The Land of Milk and Omelets
  • Fly Boys
  • Indian Killer
  • The Formula
  • Moon Deluxe
  • Four Doctors
  • The Outsider
  • The Fourth Dimension is Death
  • Dropping Your Guard
  • Dark History
  • The Crystal Towers of the Moon

and a title I could have used when writing about refactoring:

  • From Jungle to Garden: cutting down long methods and large classes.

Wednesday, May 1, 2013

The Purpose of Unit Tests Is Not Recreating the Entire Running Environment


One of the "Aha!" experiences/epiphanies of a programmer I was working with was the realization that a unit test only needs to test a "unit". He had been setting up real-life data when a little "fake" data would verify the desired behavior.

Unit testing also makes the code more robust, because side-effects that might not be seen in a system test can be found by unit tests exercising all the edge-cases for each function or behavior being tested—particularly testing situations that are hard to replicate in the real system.

Saturday, April 27, 2013

Command Pattern

(Reblogged from 11/23/2003)


Martin Fowler writes of the CommandOrientedInterface and command objects and executors:
Command oriented interfaces have a number of benefits. One of the primary ones is the ability to easily add common behavior to commands by decorating the command executor. This is very handy for handling transactions, logging, and the like. Commands can be queued for later execution and [...] be passed across a network. Command results can be cached [...]
And, of course, it is relatively easy to add "Undo/Redo" support to command objects and store stacks of command objects to be undone/redone.

In a dynamic language that supports reflection, one could create generic command objects by creating a class that retains a reference to an arbitrary object [call it a 'target'], a reference to an arbitrary method of that object [call that an 'action'], and a list of arguments to pass into that method [or perhaps only one argument, the 'sender'].

Instances of this class could be created by reading a configuration file or deserializing instances that had been created in an Interface Builder. I'm not saying that Apple's Interface Builder does exactly this, but it is doing something kind of like this. See this page for more.

Cocoa and Interface Builder allow creating powerful programs without writing (or generating) a lot of code. Interface Builder has been one of the most advanced development environments for at least a decade (it used to be part of NeXTSTEP), and it still is.


Smalltalk, Objective-C, and no doubt Python and Ruby could implement generic command-object classes like this very succinctly. Java would be more difficult, but it can be done in Java, as well.


Perhaps if methods and method-calls (and classes) were more easily thought of as first-class objects [which is very far from the case in C++], tools for Aspect-Oriented programming would be easier or unnecessary.


Thursday, April 25, 2013

Test-Driven Design/Development

(Repeat from my old blog 2003/11/25.)

I'm about two-thirds done reading Unit Testing In Java: How tests drive the code by Johannes Link. It's a very good book. Go buy it now, even if you're not programming in Java, and then finish reading my blog. :-)

The summary form of test-driven development is:
  1. Think about what we want the program to do.
  2. Write a test that shows some aspect is done.
  3. Run the test, see it fail because we haven't written the code to get the thing done yet (this tests our test, giving us some confidence that our test is written correctly).
  4. Write the smallest amount of code to make that test pass.
  5. Run the test, hopefully see it pass. (If it doesn't, fix the problem.)
  6. Refactor to clean up the code.
  7. Run the test again, to make sure refactoring didn't break anything.
  8. Repeat [sometimes skipping 6 and 7] until you can't think of any new tests, or until the code stops changing as a result of adding tests.
So what if we don't know what we want the program to do?

Writing a test can help us figure that out. Since the code hasn't been done yet, the test is "black-box" and does not necessarily commit us to any particular way of implementing the solution. We can write "Exploratory Tests" to investigate existing code - does calling "X" do "Y"? Write a test and find out.

I don't have time to write lots of tests.

Well, you only have to test code that you want to work. If you don't test it before you write, you'll be testing it in some fashion after you write it. Or someone else will test it, file a bug report because you didn't test it, and then you have to not only interrupt whatever you're doing and fix it, but also test it to confirm that you fixed it. And the time-delays between your writing the code, the tester finding the bug, and your trying to fix the bug means that you will be less familiar with the code, and thus fixing it will be that much harder and slower.

There is, of course LOTS of code out there that was not written this way (some of it is mine). Just browse some of the code on the internet or in your company's source-code-control system - probably most of it isn't written using test-driven techniques.

You'll probably find these things in non-test-driven code (particularly in projects that don't do code-reviews or pair-programming and refactoring):
  • Dead code -- code that is never invoked.
  • Duplicate code.
  • Unreferenced parameters and variables.
  • Variables that never vary: 'constant' variables.
  • Tightly-coupled code. You can't use a class in isolation because it depends on lots of other concrete classes.
  • Non-cohesive classes and methods. (Classes and methods that do "too much")
  • BUGS
There is in fact a whole list of "code smells" (possible design problems) at http://c2.com/cgi/wiki?CodeSmell. TDD helps prevent some of them, and refactoring with the safety net of tests can allow you to fix the others. With TDD and Refactoring, you don't have to live with smelly code.
I find that once my tests are passing, I almost never have to go back and fix any bugs later. This saves me a lot of time. At worst, it takes about the same time as coding + debugging, but it's less stressful. If a bug arises during TDD, it is most likely in code that I wrote less than five minutes ago -- easily found and fixed.

For those people who say they don't have time to write tests, I ask: why do you have time to write dead code, duplicated code, and where does the time to fix bugs come from?

In traditional development and testing, you create a lot of defect-ridden code, and then test the defects out. The underlying assumption is that this is the only way it can be done. Test-Driven-Design works with a different assumption: You can put the quality in first, and not have to spend time getting defects out.

Traditional (maintenance) development, because it usually isn't supported by suites of tests, eventually grinds to a halt: the code becomes so fragile that it can't be modified without breaking something. I don't know if the IRS has depreciation tables for source-code assets, but the reality is that code eventually loses value unless you take steps to keep it from losing value. Sometimes, code becomes unmaintainable while the company is still trying to make money off of it - the result is the company loses money or doesn't get the opportunity to modify the code to make more money.

Test-Driven-Design provides a suite of tests that allows you to refactor - keeping the code maintainable so it is easy to enhance, re-use, etc., so you can make money.

If I Were Advising Someone Forming a Start-up


If I were advising someone forming a start-up, or forming a start-up myself, I'd want to learn a few things first, and then be able to consult certain experts as I continued.

I would learn from this list of books here, in a page created by Steve Blank. Note Kent Beck's Extreme Programming Explained is listed near the top of this list of books, along with The Lean Startup by Eric Ries, and The Startup Owner's Manual by Steve Blank. The list of books is quite long, so I'd get summaries of these books and skim through the few I'd buy, digging deeper as needed. Reading them all would take a long time.

I would consult with Esther Derby for team-work and process improvement. And her books:


I would consult with Johanna Rothman for management and hiring. And her books:


There are a variety of legal issues when starting a business. Nolo Press has a lot of advice. I'd get something like The Small Business Start-Up Kit: A Step-by-Step Legal Guide.

Wednesday, January 9, 2013

Basic Unit Testing in Objective-C


When you create an iOS or Mac OS X project in Xcode, you have the option of creating unit tests, which you can use to verify your code without manual intervention.

Here's an example...


//  IntroductionToObjectiveCTests.m
//
//  Created by C. Keith Ray on 1/9/13.
//  Copyright (c) 2013 C. Keith Ray. All rights reserved.
//

#import <SenTestingKit/SenTestingKit.h> // Testing Framework

// Note 1
@interface IntroductionToObjectiveCTests : SenTestCase
@end

// Note 2
@implementation IntroductionToObjectiveCTests

// Note 3
- (void)testExampleTrue
{
    STAssertTrue(TRUE, @"TRUE should be true");
}

// Note 4
- (void)testExampleFalse
{
    STAssertTrue(FALSE, @"This assert should fail");
}

@end



If we build and run the tests, Xcode will compile and execute them.

Note 1: Declares a class that has SenTestCase as its parent class. Nothing else needs to go here.

Note 2: Defines the class we declared in Note 1.

Note 3: This is a method definition, or a "member function" if you use C++ terminology, that the test framework will execute. It has to return nothing ("void"), take no parameters, and its name has to start with the word "test" for the test framework to find and execute this method. I call this "a test", and this file has two tests in it, so far. This test contains an assertion that will pass, because TRUE is true.

Note 4: This test will fail, because FALSE is not true and the assertion STAssertTrue expects the first parameter to be evaluated to true.

When the tests are executed, Xcode will highlight the lines of code where the failing assertions are. We have just one failure; it looks like this:


The failure output tells us the expression FALSE is expected to be true (in this test), but is actually not true. The output also includes the string supplied as the second argument to STAssertTrue.

When I had Xcode generate the project and create the initial unit test code, it created two files: IntroductionToObjectiveCTests.m and IntroductionToObjectiveCTests.h. A header file (.h) to declare the interface of an Objective-C test case, and a source file (.m) implementing the test case. I moved the class interface declaration to the .m file and deleted the .h file, because we don't need it. The .h would only be needed if I had written code in multiple .m files that needed to see the class interface declaration.

Why doesn't the test framework need to see the class interface? Magic. When the test code is executed, the test framework looks for classes and methods at runtime. The framework says something like "Give me a list of all your classes that have SenTestCase as their parent class, and give me the list of methods that start with 'test'. Let's execute those test methods."

SenTestingKit also outputs information to the Console. This is what it looks like for these tests:


Test Suite 'All tests' started at 2013-01-09 20:21:02 +0000
Test Suite '/Users/keithray/Library/Developer/Xcode/DerivedData/intro_objc-gfshipzbnghwbygbtraflyelmail/Build/Products/Debug-iphonesimulator/intro_objcTests.octest(Tests)' started at 2013-01-09 20:21:02 +0000
Test Suite 'IntroductionToObjectiveCTests' started at 2013-01-09 20:21:02 +0000
Test Case '-[IntroductionToObjectiveCTests testExampleFalse]' started.
/Users/keithray/projects/NSScreenCast/01_objective_c_language/intro_objc/intro_objcTests/IntroductionToObjectiveCTests.m:25: error: -[IntroductionToObjectiveCTests testExampleFalse] : "FALSE" should be true. This assert should fail
Test Case '-[IntroductionToObjectiveCTests testExampleFalse]' failed (0.000 seconds).
Test Case '-[IntroductionToObjectiveCTests testExampleTrue]' started.
Test Case '-[IntroductionToObjectiveCTests testExampleTrue]' passed (0.000 seconds).
Test Suite 'IntroductionToObjectiveCTests' finished at 2013-01-09 20:21:02 +0000.
Executed 2 tests, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
Test Suite '/Users/keithray/Library/Developer/Xcode/DerivedData/intro_objc-gfshipzbnghwbygbtraflyelmail/Build/Products/Debug-iphonesimulator/intro_objcTests.octest(Tests)' finished at 2013-01-09 20:21:02 +0000.
Executed 2 tests, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
Test Suite 'All tests' finished at 2013-01-09 20:21:02 +0000.
Executed 2 tests, with 1 failure (0 unexpected) in 0.000 (0.002) seconds

If you had a lot of test failures, one way to find them would be to search the console for "failed", "error:" and so on.


Saturday, October 27, 2012

Agile Limerick 1

(not to be taken too literally.)

I got certified as a Scrum Master,
To make my projects go faster,
But the code was a mess,
Never refactored, I guess,
So Agile, for us, was disaster.