Glen Alleman at Herding Cats has a post on the measure of source lines of code (SLOC) in project management. He expresses the opinion that SLOC is an important measure for determining cost and schedule (a critical success factor) in what is narrowly defined as Software Intensive Systems. Such systems are described as those that are development intensive and consist largely of embedded code. The Wikipedia definition of an embedded system is as follows: “An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today.” In what can only be described as a strawman argument, he asserts of those who criticize the effectiveness of SLOC: “It’s one of those irrationally held truths that has been passed down from on high by those NOT working in the domains where SLOC is a critical measure of project and system performance.”
Hmmm. I…don’t…think…so. I must respectfully disagree with my colleague’s generalization.
What are we measuring when we measure SLOC? And which SLOC measure are we using?
Oftentimes we are measuring an estimate of what we think, usually arrived at through the systematic use of the Wild-Assed Guess (“WAG”) in real life: the number of effective and executable lines of code needed to achieve the desired functionality, given the language in which we are developing. No doubt there are parametric data sets, usually based on a static code environment, that will tell us the range of SLOC that should yield a release, given certain assumptions. But this is systems estimation, not project management and execution. Estimates are useful in the systems engineering process for sizing and anticipating effort through methods such as COCOMO, SEER-SEM, and other estimating models, but only in a very specific subset of projects where the technology is well defined and the code set mature and static, and then only with very specific limitations.
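To make concrete how these parametric methods consume a SLOC figure, here is a minimal sketch of the basic COCOMO effort equation (Boehm, 1981) using the published organic-mode coefficients. The 32 KLOC input is purely illustrative, a stand-in for the WAG described above:

```python
# Basic COCOMO (Boehm, 1981): effort in person-months as a function of
# estimated thousands of SLOC. Coefficients a=2.4, b=1.05 are the
# published "organic mode" constants; the KLOC input is the guess.
def cocomo_basic(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Return estimated effort (person-months) for an organic-mode project."""
    return a * (kloc ** b)

# A hypothetical system guessed at 32 KLOC.
effort = cocomo_basic(32)
print(round(effort, 1))  # person-months implied by the SLOC guess
```

Note that the model's output is only as good as the KLOC guess fed into it, which is the heart of the objection: the precision of the formula lends false confidence to an imprecise input.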
SLOC will not, by itself, provide an indication of a working product. It is, instead, part of the data stream in the code development production process, which means the data must be further refined to determine effectiveness before it can become a true critical success factor. Robert Park of the Software Engineering Institute (SEI) at Carnegie Mellon effectively summarizes the history of, and difficulties in, defining and applying SLOC. Even among supporters of the metric, a number of papers, such as one by Nguyen, Deeds-Rubin, Tan, and Boehm of the Center for Systems and Software Engineering at the University of Southern California, articulate the difficulty of specifying a counting standard.
The Software Technology Support Center at Hill Air Force Base’s GSAM 3.0 has this to say about SLOC:
Source lines-of-code are easy to count and most existing software estimating models use SLOCs as the key input. However, it is virtually impossible to estimate SLOC from initial requirements statements. Their use in estimation requires a level of detail that is hard to achieve (i.e., the planner must often estimate the SLOC to be produced before sufficient detail is available to accurately do so.)
Because SLOCs are language-specific, the definition of how SLOCs are counted has been troublesome to standardize. This makes comparisons of size estimates between applications written in different programming languages difficult even though conversion factors are available.
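The counting problem the GSAM passage describes is easy to demonstrate. The sketch below (a naive illustration, not any standard counting tool; the function names are my own) shows that the same three-line fragment yields different "sizes" depending on whether one counts physical lines or logical statements:

```python
# Naive illustration of why SLOC counting conventions diverge:
# the same fragment has a different "size" under each definition.

def physical_sloc(source: str) -> int:
    """Count non-blank, non-comment physical lines (one common convention)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

def logical_sloc(source: str) -> int:
    """Rough logical-statement count: treat semicolons as statement breaks."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            # A line like "a = 1; b = 2" holds two logical statements.
            count += len([s for s in stripped.split(";") if s.strip()])
    return count

fragment = """
# initialize and swap
a = 1; b = 2
a, b = b, a
"""

print(physical_sloc(fragment))  # 2 physical lines of code
print(logical_sloc(fragment))   # 3 logical statements
```

Multiply this ambiguity across languages with different statement densities and the cross-project comparisons the quote warns about become even shakier.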
What I have learned through actual experience (coming from the software domain first as a programmer and then as a program manager) is that there is a great deal of variation in the elegance of produced code. When we use the term “elegance” we are not using a woo-woo term to obscure meaning; it is a useful term that connotes both simplicity and effectiveness. For example, in C programming environments (and their successors), the difference in SLOC between a good developer and a run-of-the-mill hack who recycles code by cut-and-paste can be 20% or more. We find evidence of this variation in the details underlying the high rate of software project failure noted in my previous posts and in my article on Black Swans at AITS.org. A 20% difference in executable code translates not only into cost and schedule performance; the manner in which the code is written also translates into qualitative differences in the final product, such as its ability to scale and to be sustained.
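A hypothetical (and deliberately simple) illustration of the elegance point: both functions below compute the same per-category totals, but the cut-and-paste version carries several times the SLOC of the concise one, and adding a new category means pasting yet another block. By a SLOC-based pay or progress measure, the worse code scores higher:

```python
# Two functionally equivalent implementations with very different SLOC.

def totals_copy_paste(records):
    """Cut-and-paste style: one near-identical block per category."""
    food = 0
    for r in records:
        if r["category"] == "food":
            food += r["amount"]
    travel = 0
    for r in records:
        if r["category"] == "travel":
            travel += r["amount"]
    office = 0
    for r in records:
        if r["category"] == "office":
            office += r["amount"]
    return {"food": food, "travel": travel, "office": office}

def totals_concise(records):
    """Same result in a fraction of the lines; scales to new categories."""
    totals = {}
    for r in records:
        totals[r["category"]] = totals.get(r["category"], 0) + r["amount"]
    return totals

records = [
    {"category": "food", "amount": 10},
    {"category": "travel", "amount": 25},
    {"category": "food", "amount": 5},
    {"category": "office", "amount": 7},
]
print(totals_copy_paste(records))  # {'food': 15, 'travel': 25, 'office': 7}
print(totals_concise(records))    # {'food': 15, 'travel': 25, 'office': 7}
```

The concise version is not merely shorter; it is easier to test, extend, and sustain, which is exactly the qualitative difference that a raw line count cannot capture.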
But more to the point, our systems engineering practices seem to contribute to suboptimization. An example of this was articulated by Steve Ballmer in the documentary Triumph of the Nerds, where he voiced the very practical financial impact of the SLOC measure:
In IBM there’s a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it’s sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2 how much they did. How many K-LOCs did you do? And we kept trying to convince them – hey, if we have – a developer’s got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he’s made something smaller and faster, less K-LOC. K-LOCs, K-LOCs, that’s the methodology. Ugh! Anyway, that always makes my back just crinkle up at the thought of the whole thing.
Thus, it is not that SLOC should not be collected as a metric; it is that, given developments in software technology, and especially the introduction of fourth-generation programming languages, SLOC’s place is becoming less and less significant. Furthermore, the institutionalization of SLOC may represent a significant barrier to technological innovation, preventing us from leveraging the advantages provided by Moore’s Law. In technology, such bureaucratization is the last thing that is needed.