Ken Arnold points to Martin Fowler's comments on Cringely's post on Refactoring.
Somebody, possibly Kent Beck, said something very much like "If the code quality is going down, you're doomed. If the quality, however bad to start with, is always going up, you have a chance at success." (I can't find the quote. I need Google to find what I mean, not what I ask it to find.)
Every company that has invested in software has invested in an asset that they can choose to improve or let decline in value. If those software assets never need to change - no bug-fixes, no enhancements, no adapting to changes in the software ecology (new versions of libraries, new OSes) - then don't refactor.
Fear of changing legacy code is healthy if the legacy code is not fully covered by automated tests. You need something that rapidly tells you whether you've broken anything while you're making a code improvement. Michael Feathers is working on a book about making legacy code testable; see http://groups.yahoo.com/group/welc/.
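One cheap way to get that rapid feedback is a characterization test: pin down what the code does today, then refactor against that. A minimal sketch in Python, where the module, the function name and the expected value are all made up for illustration:

import unittest

from billing import calculate_invoice_total  # hypothetical legacy module


class CharacterizationTest(unittest.TestCase):
    # Pin down what the legacy code does *now*, so a refactoring can't silently change it.

    def test_known_invoice_total(self):
        # The expected value is whatever the legacy code currently returns,
        # captured once and then treated as the contract.
        self.assertEqual(calculate_invoice_total(order_id=42), 1317.50)


if __name__ == "__main__":
    unittest.main()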
A good motivator for improving code is to measure the cost of changing it for bug-fixes and feature additions (and to measure the Fault Feedback Ratio). The older and cruftier the code, the higher the cost of changing it. The cost of refactoring (which will reduce maintenance costs in the long run) has to be weighed against the ever-increasing cost of maintenance when no refactoring is done.
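As a rough sketch of that kind of measurement (this takes the Fault Feedback Ratio to be the fraction of fixes that spawn new defect reports; check the definition your own team uses):

def fault_feedback_ratio(fixes_shipped, fixes_causing_new_defects):
    # What fraction of our "fixes" come back to bite us?
    if fixes_shipped == 0:
        return 0.0
    return fixes_causing_new_defects / fixes_shipped

# Example: 40 fixes shipped last iteration, 10 of which caused new defect reports.
# fault_feedback_ratio(40, 10) == 0.25 - a quarter of the fixes created more work.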
Perhaps the best way of addressing fear of refactoring is by practicing it on toy projects. William C. Wake has a book called The Refactoring Workbook due in July. Preview chapters are on-line: http://www.xp123.com/rwb/.
The other key thing about refactoring is that it improves the design by taking small steps. For example, most code has functions and methods that are too long, containing multiple 'bundles' of lines, sometimes with comments describing the intent of each bundle. Use the extract method refactoring to move each little bundle of lines into a new function, named after its comment. (See my example below.) When big methods have been broken down into small methods, duplicate code becomes more obvious and can be removed, reducing the amount of code to be maintained and raising the level of abstraction in the code.
Big refactorings are much easier if lots of small refactorings have already been done. It is important that the team members agree on the value of the small refactorings - you don't want individuals to 'undo' each other's code improvements.
Example of "extract method" refactorings - pseudocode
"before"
procedure testVerylongmethod()
    var result = Verylongmethod()
    var expectedResult = 2398
    assertEqual( expectedResult, result )
end procedure

function Verylongmethod()
    // set up foos
    var foos = array [1..3] of foobar
    foos[1] = new foobar(2,2,3,4)
    foos[2] = new foobar(3,4,6,3)
    foos[3] = new foobar(4,67,83,3)
    // calculate muckiness
    var muckiness = 0
    for index = 1 to foos.length
        muckiness = foos[ index ].muckinessQuotient( index ) + muckiness
    end for
    return muckiness
end function
"after"
(the test doesn't change, so I don't reproduce it here.)
function setupfoos()
    var foos = array [1..3] of foobar
    foos[1] = new foobar(2,2,3,4)
    foos[2] = new foobar(3,4,6,3)
    foos[3] = new foobar(4,67,83,3)
    return foos
end function

function calculatemuckiness( foos )
    var muckiness = 0
    for index = 1 to foos.length
        muckiness = foos[ index ].muckinessQuotient( index ) + muckiness
    end for
    return muckiness
end function

function Verylongmethod()
    var foos = setupfoos()
    var muckiness = calculatemuckiness( foos )
    return muckiness
end function
Now imagine that there is Anotherlongmethod, similar to the original Verylongmethod, where the only difference is in how it sets up the foos array. It can be refactored to call the same calculatemuckiness function that was extracted from Verylongmethod.
"before"
procedure testAnotherlongmethod()
    var result = Anotherlongmethod()
    var expectedResult = 9487
    assertEqual( expectedResult, result )
end procedure

function Anotherlongmethod()
    // set up foos
    var foos = array [1..4] of foobar
    foos[1] = new foobar(2,9,3,4)
    foos[2] = new foobar(3,9,6,3)
    foos[3] = new foobar(4,9,83,3)
    foos[4] = new foobar(9,9,9,9)
    // calculate muckiness
    var muckiness = 0
    for index = 1 to foos.length
        muckiness = foos[ index ].muckinessQuotient( index ) + muckiness
    end for
    return muckiness
end function
"after"
(the test doesn't change, so I don't reproduce it here.)
function setupfourfoos()
    var foos = array [1..4] of foobar
    foos[1] = new foobar(2,9,3,4)
    foos[2] = new foobar(3,9,6,3)
    foos[3] = new foobar(4,9,83,3)
    foos[4] = new foobar(9,9,9,9)
    return foos
end function

function Anotherlongmethod()
    var foos = setupfourfoos()
    var muckiness = calculatemuckiness( foos )
    return muckiness
end function
Please forgive all the magic numbers; I wanted to keep the example self-contained. While the safety of these refactorings can often be determined by code inspection, it is best determined by tests - the tests don't change during (most) refactorings, and they should be passing before and after the refactoring.
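For anyone who would rather run the example than read pseudocode, here is a rough Python sketch of the first extract-method refactoring. The Foobar class and its muckiness_quotient are stand-ins invented just so the code executes; the test simply checks that the before and after versions agree.

import unittest


class Foobar:
    def __init__(self, a, b, c, d):
        self.values = (a, b, c, d)

    def muckiness_quotient(self, index):
        # Stand-in calculation so the example is self-contained.
        return sum(self.values) * index


# Before: one long method that both sets up the foos and calculates muckiness.
def very_long_method_before():
    foos = [Foobar(2, 2, 3, 4), Foobar(3, 4, 6, 3), Foobar(4, 67, 83, 3)]
    muckiness = 0
    for index, foo in enumerate(foos, start=1):
        muckiness += foo.muckiness_quotient(index)
    return muckiness


# After: each 'bundle' of lines extracted into a small function named after its comment.
def setup_foos():
    return [Foobar(2, 2, 3, 4), Foobar(3, 4, 6, 3), Foobar(4, 67, 83, 3)]


def calculate_muckiness(foos):
    muckiness = 0
    for index, foo in enumerate(foos, start=1):
        muckiness += foo.muckiness_quotient(index)
    return muckiness


def very_long_method_after():
    return calculate_muckiness(setup_foos())


class TestVeryLongMethod(unittest.TestCase):
    def test_refactoring_preserves_behaviour(self):
        # The point of the test: the extracted version returns the same answer.
        self.assertEqual(very_long_method_before(), very_long_method_after())


if __name__ == "__main__":
    unittest.main()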