C. Keith Ray

C. Keith Ray writes about and develops software on multiple platforms and in multiple languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Tuesday, November 23, 2010

High technical debt = slum

Sometimes I think the "debt" analogy is too clean. Brian Foote and his coauthor used many analogies in their "Big Ball of Mud" paper, including "Shanty Town", but I think the best analogy for the products I've seen with large amounts of technical debt is "Slum".

Products with large technical debt are money-makers, in spite of their problems (until the problems cost too much).

Slums have slum-lords, who profit from lack of maintenance, who ignore the pain of the inhabitants (developers and testers), and who also ignore the benefits of a clean environment... (of course, you can only take an analogy so far).

Still, it takes an empowered group of inhabitants to transform a slum into a desirable place to live and work. That requires management support.

Some books I recommend to those new to Agile

The following are some books I recommend to developers new to Agile software development.

  • This book is new, fairly complete, and an easy read on adopting agile software development: The Agile Samurai

  • This book covers the same ground as the Agile Samurai book. It's a little more in-depth. Most of it is now free on-line, but you can buy the book from Amazon: The Art of Agile Development.

Industrial Logic has eLearning and in-person workshops on TDD, Refactoring, Design Patterns, and other topics. This web-based eLearning contains videos, exercises, and quizzes that are, for most people, more effective than reading a book. I would recommend both.

Friday, November 19, 2010

Going meta sometimes helps.

One of the sessions I participated in at AYE 2010 was Steve Smith's session on "Power, Authority, and Teams". We did an exercise where we individually tried to solve a paper-and-pencil problem whose answer could be scored for correctness, and then we were divided into four teams, with one observer each, to try to solve the same problem.

The team that I was on started with some direct actions by individuals: deciding we would use a flip-chart on an easel, and moving the flip-chart away from another team (which, funnily enough, put us next to another flip-chart on an easel, so we could have avoided carrying that easel around). One participant decided to act as scribe, and off we went, problem-solving. I'm sure it looked like pure chaos. And it was. (But not necessarily in a bad way.)

As we worked through the problem, sometimes we'd ask for a show of hands in favor of a particular part of the solution. We never actually formalized how voting would work, and certainly never had more than one third of the group raising their hands. I think a lot of us in the group were trying to gauge agreement by audio and visual cues, but we didn't go "meta" long enough to formalize how we would signal agreement. We didn't appoint roles, or vote on a leader.

You have to remember that many of us were coaches. "Agile coaches," even. We know about dot-voting (I almost suggested it once during this chaos) and consensus/thumb-voting. We know about team formation, the Satir change model, self-organizing teams. Some of us had been through Problem-Solving Leadership training. We didn't use any of those tools. I know I could have, but I didn't, and I expect some others were thinking the same thing.

After struggling with one problem solving technique for a while, one of the Europeans in the group complained that our usage of American-style measurements was making it hard for him to contribute. I wrote "1 meter ≈ 3 feet" on the flip-chart to help him out, but another member of the team proposed working in relative sizes instead of absolute sizes. We transitioned to that problem-solving style and, after a little confusion, the team seemed to jell.

We had come to an agreement on process, though we never explicitly agreed on it through any formal means.

Later, we could compare our individual scores on the problem with our team scores. Our team scored higher than any of our individual scores, though we were in third place among the four teams. The team with the highest score had chosen to use computers to solve the problem. Imagine! A bunch of software people using computers to solve the problem! :-)

OK, our team had a moment of conversation, early on, where some of us agreed not to use computers for the problem-solving, as that seemed against the spirit of the simulation. Again, not a formally-ratified agreement, but no one in our team went against that decision during the simulation. We were supposed to be learning about power: how people interact with each other, so we didn't bring computers into the simulation.

The team in fourth place had a score that was lower than any of their individual scores. I really wish I could have observed what was going on in that team. Somehow their collective problem-solving skill was the complete opposite of their individual skills.

I think that even if we had formalized how our team made its decisions, we probably would have come to the same solution that we did in our informal way. We might have done so faster. We might have included insights from some of the quieter members of the team, and might have done a little better.

It's also possible that if we had been too formal, we could have done worse, perhaps much worse.

Two points I want to mention.

In a team-building exercise at AYE 2005, my team happened to be all PSL graduates, and no one on the other team was a PSL graduate. My team quickly jelled. We decided to enjoy ourselves and not stress out. The other team, across the room, we could see going through the "forming, storming, norming, performing" stages. We ended up coming in second place (of two teams)... but each team was working on a different LEGO model, so the difference in performance could be the difference in complexity of the models, or differences in our LEGO-building skills. As much as we would like to have simple causes and effects, sometimes effectiveness in one area doesn't mean success in another.

The other point I want to mention, I just read in The Gift Of Time, in the chapter written by Willem van den Ende: "Solving the Groupthink Problem". He quotes a diagram of effects by Jerry Weinberg (Quality Software Management, vol. 3). Rather than reprint the image, I'll try to describe it.

  • "Effectiveness of meta-planning" directly affects "Effectiveness of strategic planning".

  • "Effectiveness of strategic planning" directly affects "Effectiveness of action planning".

  • "Effectiveness of action planning" directly affects "Quality of Action".

  • "Other Factors" directly affects "Effectiveness of action planning" and also directly affects "Quality of Action".

  • "Change Artistry" directly affects "Quality of Action".

That may or may not speak for itself. (Probably not, but it's getting late.) Willem has a paper here that shows other diagrams of effects. Diagramming systems can be useful.

And one last thing... In a discussion with the "power" team, I listed a bunch of different ways we can have power (or influence): speaking louder, speaking more persuasively, deciding on a process for the team to follow, holding the marker and thus acting as scribe...

That last idea had some impact on the participant who had been acting as scribe. She had been thinking that by acting as scribe, she would be avoiding taking power in this simulation. But the scribe can influence what gets recorded and who gets listened to. It's a position of power. We coaches who sometimes facilitate meetings by acting as scribe should keep that in mind.

There are all sorts of power.

Thursday, November 4, 2010

Beautiful tools

Go to this page and scroll down to Figure 4. Apple's IDE graphically shows you the results of static analysis by drawing arrows over the code.


Wednesday, November 3, 2010

Virtual Functions in C++ make TDD easier, but at what cost?

Virtual functions in C++ allow us to do mocking, stubbing, and faking, which helps us test code in isolation in TDD, or just in microtesting in general. A frequent objection to virtual functions is that they "cost more" than simple function calls.
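As a minimal sketch of what I mean (the class and function names here are illustrative, not from any real codebase), a virtual function gives a test a "seam" where it can substitute a fake:

```cpp
#include <cassert>
#include <string>

// Production interface: the virtual function is the seam that
// lets a test substitute a fake implementation.
class MailSender {
public:
    virtual ~MailSender() {}
    virtual bool send(const std::string& to) {
        // Imagine real SMTP work here.
        return true;
    }
};

// Code under test depends on the interface, not on a concrete sender.
int notifyAll(MailSender& sender, int userCount) {
    int sent = 0;
    for (int i = 0; i < userCount; ++i)
        if (sender.send("user"))
            ++sent;
    return sent;
}

// Test double: overrides the virtual function, records calls,
// and never touches the network.
class FakeSender : public MailSender {
public:
    int calls = 0;
    bool send(const std::string&) override { ++calls; return true; }
};
```

In a microtest, you pass a FakeSender to notifyAll and assert on what was recorded, so the test runs fast and in isolation.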

The cost of a virtual function is hardly ever important, unless the function is being called in a very tight loop, or it is being called at interrupt time, and even then compiler/linker optimization might overcome the "extra" table lookup that makes a function virtual.

In C, calling a function through a function pointer is equivalent to calling a virtual function in C++. Implementations of file-system APIs in Linux and BSD Unix use function pointers internally to do what C++ does with virtual functions.
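To sketch that equivalence (this is an illustrative toy, not the actual kernel structures), a struct of function pointers plays the role of a vtable, and the caller dispatches through it just like a virtual call:

```cpp
#include <cassert>

// A C-style "vtable": a struct of function pointers. Real kernels
// use the same idea (e.g. tables of file operations), though the
// actual structures are more elaborate.
struct FileOps {
    int (*open_fn)(const char* path);
    int (*close_fn)(int fd);
};

// Fake implementations that a test, or an alternate file system,
// can plug in without changing the caller.
static int fake_open(const char*) { return 42; }  // pretend fd
static int fake_close(int)        { return 0;  }  // pretend success

// The caller dispatches through the pointers; swapping the struct
// swaps the behavior, exactly like overriding a virtual function.
int open_and_close(const FileOps* ops, const char* path) {
    int fd = ops->open_fn(path);
    return ops->close_fn(fd);
}
```

Both this and a C++ virtual call cost one indirect call through a pointer fetched from a table; the mechanism, and therefore the cost, is essentially the same.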

The cost of cache misses (on-chip cache and off-chip cache) is a LOT bigger than the cost of virtual function dispatch. Incorrect branch prediction is also a CPU-level cost that people often overlook.

The best way to know whether making a function virtual has any user-apparent cost is to measure the application with a tool that does non-invasive timing. Often there will be no visible effect if you make some functions virtual that were non-virtual before. (I'm talking about REAL applications, not benchmarks.)