Friday Hot Washup: Daddy Stovepipe sings the Blues, and Net Neutrality brought to you by Burger King

Daddy Stovepipe sings the Blues — Line and Staff Organizations (and how they undermine organizational effectiveness)

In my daily readings across the web I came upon this very well-written blog post by Glen Alleman at his Herding Cats blog. The eternal debate in project management surrounds two questions: when is done actually done, and what is the best measurement of progress toward completion of the end item application?

Glen rightly points to the specialization among SMEs in the PM discipline, and the differences between their methods of assessment. These centers of expertise are still aligned along traditional line and staff organizations that separate scheduling, earned value, system engineering, financial management, product engineering, and other specializations.

I’ve written about this issue before: information also follows these stove-piped pathways, producing multiple data streams with overlapping information that nonetheless resist effective optimization and synergy because of the barriers between them. These barriers may be social or perceptual, and they then impose themselves upon the information systems that are constructed to support them.

The manner in which we face and interpret the world is the core basis of epistemology. When we develop information systems and analytical methodologies, whether we are consciously aware of it or not, we delve into the difference between justified belief and knowledge. I see the confusion of these positions in daily life and in almost all professions and disciplines. In fact, most of us find ourselves jumping from belief to knowledge effortlessly without being aware of this internal contradiction–and the corresponding reduction in our ability to accurately perceive reality.

The ability to overcome our self-imposed constraints is the key, but I think our PM organizational structures must be adjusted to allow for the establishment of a learning environment in relation to data. The first step in this evolution must be the mentoring and education of a discipline that combines these domains. This does not mean that any one individual must know everything about EVM, scheduling, systems engineering, and financial management. But the business environment today is such that, if the business or organization wishes to be prepared for the world ahead, it must train and transition personnel toward a multi-disciplinary project management competency.

I would posit, contrary to Glen’s recommendation, that no one discipline should claim to be the basis for cross-functional integration, if only because such a claim may be self-defeating. As David Easley and Jon Kleinberg of Cornell show in their book Networks, Crowds, and Markets: Reasoning about a Highly Connected World, our social systems are composed of complex networks in which negative perceptions develop when the network is no longer considered in balance. This subtle and complex interplay of perceptions drives our ability to work together.

It also affects whether we will stay safely in the comfort zone of having our information systems tell us what we need to analyze, or whether we will apply a more expansive view, leveraging new information systems that are able to integrate ever-expanding sets of relevant data to give us a more complete picture of what constitutes “done.”

Hold the Pickle, Hold the Lettuce, Special Orders Don’t Upset Us: Burger King explains Net Neutrality

The original purpose of the internet was the free exchange of ideas and knowledge. Initially, under ARPANET and the leadership of Lawrence Roberts and later Bob Kahn, the focus was on linking academic and research institutions so that knowledge could be shared, resulting in collaboration that would overcome geographical barriers. Later the Department of Defense, NASA, and other government organizations highly dependent on R&D were brought into the new internet community.

To some extent there still are pathways within what is now broadly called the web to find and share such relevant information among these organizations. With the introduction of commercialization in the early 1990s, however, it has become increasingly hard to perform serious research.

For with the expansion of the internet to the larger world, the larger world’s dysfunctions and destructive influences also entered. Thus, the internet has transitioned from a robust First Amendment free speech machine to a place that also harbors state-sponsored psy-ops and propaganda. It has gone from a safe space for academic freedom and research to a place of organized sabotage, intrusion, theft, and espionage. It has transitioned from a highly organized professional community that hewed to ethical and civil discourse to one that harbors trolls, prejudice, hostility, bullying, and other forms of human dysfunction. Finally and most significantly, it has become dominated by commercial activity: by high-tech giants that stifle innovation, and by social networking sites that, applying an extreme laissez-faire attitude, allow the more dysfunctional activities found on the web as a whole to be magnified and spread.

At least, for those who still looked to the very positive effects of the internet, there was net neutrality: the assurance that blogs like this one and the many others that I read on a regular basis, along with mainstream news and scientific journals, were still available without being “dollarized,” in the words of the naturalist John Muir.

Unfortunately this is no longer the case, or will no longer be the case, perhaps, when the legal dust settles. Burger King has placed its marker down, and it is a relevant and funny one. Please enjoy and have a great weekend.

 

Points of View — Source Lines of Code as a Measure of Performance

Glen Alleman at Herding Cats has a post on the measure of source lines of code (SLOC) in project management.  He expresses the opinion that SLOC is an important measure in determining cost and schedule–a critical success factor–in what is narrowly defined as Software Intensive Systems.  Such systems are described as those that are development-intensive and involve embedded code.  The Wikipedia definition of an embedded system is as follows:  “An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today.”  In critiquing what can only be described as a strawman argument, he asserts of such criticism of the effectiveness of SLOC: “It’s one of those irrationally held truths that has been passed down from on high by those NOT working in the domains where SLOC is a critical measure of project and system performance.”

Hmmm.  I…don’t…think…so.  I must respectfully disagree with my colleague in his generalization.

What are we measuring when we measure SLOC?  And which SLOC measure are we using?

Oftentimes we are measuring an estimate of what we think (usually based on the systemic use of the Wild-Assed Guess or “WAG” in real life), given the language in which we are developing, will be the number of effective and executable code lines needed to achieve the desired functionality.  No doubt there are parametric models, usually based on a static code environment, that will tell us the range of SLOC that should yield a release, given certain assumptions.  But this is systems estimation, not project management and execution.  Estimates are useful in the systems engineering process for sizing and anticipating effort through COCOMO, SEER-SEM, and other estimating methods, in a very specific subset of projects where the technology is usually well defined and the code set mature and static–and with very specific limitations.
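
To make the parametric point concrete, here is a minimal sketch of the Basic COCOMO relationship, in which an estimate of size in thousands of SLOC (KSLOC) drives effort and duration through published coefficients.  The coefficients below are the textbook Basic COCOMO values; any real estimate would require calibration to the organization and domain.

```python
# Minimal sketch of Basic COCOMO (Boehm, 1981): effort and duration from
# estimated size in thousands of source lines of code (KSLOC). Coefficients
# are the published Basic COCOMO values; real estimates require calibration.

COEFFICIENTS = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(ksloc: float, mode: str = "embedded") -> tuple[float, float]:
    """Return (effort in person-months, schedule in calendar months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * ksloc ** b      # person-months
    duration = c * effort ** d   # calendar months
    return effort, duration

if __name__ == "__main__":
    for size in (10, 50, 100):   # notional KSLOC estimates
        effort, months = basic_cocomo(size, "embedded")
        print(f"{size:>4} KSLOC -> {effort:7.1f} person-months, {months:5.1f} months")
```

Note that the model’s input is itself an estimated SLOC figure, which is the point being made above: the arithmetic is only as good as the sizing assumption fed into it.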

SLOC will not, by itself, provide an indication of a working product.  It is, instead, a part of the data stream in the production process of code development.  What this means is that the data must be further refined to determine effectiveness before it can become a true critical success factor.  Robert Park at the Software Engineering Institute (SEI) of Carnegie Mellon University effectively summarizes the history of, and the difficulties in, defining and applying SLOC.  Even among supporters of the metric, a number of papers–such as the one by Nguyen, Deeds-Rubin, Tan, and Boehm of the Center for Systems and Software Engineering at the University of Southern California–articulate the difficulty in specifying a counting standard.

The Software Technology Support Center at Hill Air Force Base’s GSAM 3.0 has this to say about SLOC:

Source lines-of-code are easy to count and most existing software estimating models use SLOCs as the key input.  However, it is virtually impossible to estimate SLOC from initial requirements statements.  Their use in estimation requires a level of detail that is hard to achieve (i.e., the planner must often estimate the SLOC to be produced before sufficient detail is available to accurately do so.)
Because SLOCs are language-specific, the definition of how SLOCs are counted has been troublesome to standardize.  This makes comparisons of size estimates between applications written in different programming languages difficult even though conversion factors are available.
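
A small illustration of why the counting standard matters: the same handful of source lines yields three different “sizes” depending on whether we count physical lines, non-blank non-comment lines, or logical statements.  This is a simplified sketch; real counting frameworks, such as the SEI’s, specify many more language-specific rules.

```python
# Sketch: three different "SLOC" answers for the same C-like source text.
# Real counting standards specify many more rules than shown here.

SOURCE = """\
/* update the running total */
int total = 0;
for (int i = 0; i < n; i++) {
    total +=
        values[i];  // one statement split across two physical lines
}
return total;
"""

# 1) Physical lines: every line in the file.
physical = len(SOURCE.splitlines())

# 2) Non-blank, non-comment lines: a common "physical SLOC" convention.
non_blank_non_comment = sum(
    1 for line in SOURCE.splitlines()
    if line.strip() and not line.strip().startswith(("//", "/*", "*"))
)

# 3) Crude "logical" count: statements terminated by a semicolon.
logical = SOURCE.count(";")

print(f"physical={physical}, non-blank/non-comment={non_blank_non_comment}, logical~={logical}")
```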

What I have learned (through actual experience in coming from the software domain, first as a programmer and then as a program manager) is that there is a lot of variation in the elegance of produced code.  When we use the term “elegance” we are not using a woo-woo term to obscure meaning.  It is a useful term that connotes both simplicity and effectiveness.  For example, in C programming language environments (and their successors), the difference in SLOC between a good developer and a run-of-the-mill hack who uses cut-and-paste to recycle code can be 20% or more.  We find evidence of this variation in the details underlying the high rate of software project failure noted in my previous posts and in my article on Black Swans at AITS.org.  A 20% difference in executable code translates not only into cost and schedule performance; the manner in which the code is written also translates into qualitative differences in the final product, such as its ability to scale and to be sustained.
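
As an illustration of the elegance point (the example and figures are notional, not measurements), the two functions below deliver identical functionality, yet the cut-and-paste version carries roughly twice the executable lines of the table-driven one, and every future rate change must be made in several places.

```python
# Notional example: same functionality, very different SLOC and maintainability.

def shipping_cost_copy_paste(weight_kg: float, destination: str) -> float:
    # Recycled, duplicated branches: more lines, more places to patch later.
    if destination == "domestic":
        base = 5.00
        per_kg = 1.10
        return base + per_kg * weight_kg
    if destination == "regional":
        base = 8.00
        per_kg = 1.60
        return base + per_kg * weight_kg
    base = 12.00
    per_kg = 2.40
    return base + per_kg * weight_kg

RATES = {"domestic": (5.00, 1.10), "regional": (8.00, 1.60), "international": (12.00, 2.40)}

def shipping_cost_refactored(weight_kg: float, destination: str) -> float:
    # Same behavior, table-driven: fewer executable lines, one place to change.
    base, per_kg = RATES.get(destination, RATES["international"])
    return base + per_kg * weight_kg
```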

But more to the point, our systems engineering practices seem to contribute to suboptimization.  An example of this was articulated by Steve Ballmer in the movie Triumph of the Nerds where he voiced the very practical financial impact of the SLOC measure:

In IBM there’s a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code.  How big a project is it?  Oh, it’s sort of a 10K-LOC project.  This is a 20K-LOCer.  And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid.  How much money we made off OS/2 how much they did.  How many K-LOCs did you do?  And we kept trying to convince them – hey, if we have – a developer’s got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money?  Because he’s made something smaller and faster, less K-LOC. K-LOCs, K-LOCs, that’s the methodology.  Ugh!  Anyway, that always makes my back just crinkle up at the thought of the whole thing.

Thus, it is not that SLOC is not a metric to be collected; it is just that, given developments in software technology and especially the introduction of fourth-generation programming languages, SLOC has a place, and that place is becoming less and less significant.  Furthermore, the institutionalization of SLOC may represent a significant barrier to technological innovation, preventing organizations from leveraging the advantages provided by Moore’s Law.  In technology such bureaucratization is the last thing that is needed.

Days of Future Passed — Legacy Data and Project Parametrics

I’ve had a lot of discussions lately on data normalization, including being asked what constitutes normalization when dealing with legacy data, specifically in the field of project management.  A good primer can be found at About.com, but there are also very good older papers out on the web from various university IS departments.  The basic principles of data normalization today consist of finding a common location in the database for each value, reducing redundancy, properly establishing relationships among the data elements, and providing flexibility so that the data can be properly retrieved and further processed into intelligence in such a way that the objects produced possess significance.
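
As a minimal sketch of what normalization looks like in practice (the field names here are hypothetical), the snippet below takes a flat, redundant legacy extract and restructures it so that each fact lives in one place and relationships are carried by keys rather than by repeated text.

```python
# Sketch: normalizing a flat legacy extract (hypothetical field names).
# Each contractor and WBS element is stored once; observations reference them by key.

legacy_rows = [
    {"wbs": "1.2.1", "wbs_name": "Avionics SW", "contractor": "Acme Corp", "period": "2015-01", "bcws": 120.0, "bcwp": 110.0},
    {"wbs": "1.2.1", "wbs_name": "Avionics SW", "contractor": "Acme Corp", "period": "2015-02", "bcws": 130.0, "bcwp": 125.0},
    {"wbs": "1.2.2", "wbs_name": "Displays",    "contractor": "Acme Corp", "period": "2015-01", "bcws":  80.0, "bcwp":  70.0},
]

contractors: dict[str, int] = {}    # contractor name -> surrogate key
wbs_elements: dict[str, dict] = {}  # WBS id -> single authoritative record
observations: list[dict] = []       # periodic facts referencing the keys above

for row in legacy_rows:
    contractor_id = contractors.setdefault(row["contractor"], len(contractors) + 1)
    wbs_elements.setdefault(row["wbs"], {"name": row["wbs_name"], "contractor_id": contractor_id})
    observations.append({"wbs": row["wbs"], "period": row["period"],
                         "bcws": row["bcws"], "bcwp": row["bcwp"]})

# The redundant name strings now appear exactly once, and queries join on keys.
print(contractors, wbs_elements["1.2.1"], len(observations))
```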

The reason why answering this question is so important is that our legacy data is of such a size and complexity that it falls into the broad category of Big Data.  The condition of the data itself varies widely in terms of quality and completeness.  Without understanding the context, interrelationships, and significance of the elements of the data, the empirical approach to project management is threatened, since our ability to use this data for establishing trends and performing parametric analysis is limited.

A good paper that deals with this issue was authored by Alleman and Coonce, though it was limited to Earned Value Management (EVM).  I would argue that EVM, especially in the types of industries in which the discipline is used, is pretty well structured already.  The challenge is in the other areas that are probably of more significance in getting a fuller understanding of what is happening in the project: schedule, risk, and technical performance measures.

In looking at the Big Data that has been normalized to date–and I have participated with others in putting a significant dent in this area–it is apparent that processes in these other areas lack discipline, consistency, completeness, and veracity.  When data is normalized in sub-specialties that have experienced an erosion in standards of quality and consistency, technology becomes a driver for process improvement.

A greybeard in IT project management once said to me (and I am not long in joining that category): “Data is like water; the more it flows downstream, the cleaner it becomes.”  What he meant is that the more data is exposed in the organizational stream, the more it is questioned and becomes part of our closed feedback loop: constantly being queried, verified, utilized in decision making, and validated against reality.  Over time, more sophisticated and reliable statistical methods can be applied to the data–especially performance data of one sort or another–methods that take periodic volatility into account in trending and provide us with a means of ensuring credibility in using the data.
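
Here is a minimal sketch of the kind of statistical treatment I have in mind: exponential smoothing applied to a monthly cost performance index (CPI) series so that the trend can be read through period-to-period volatility.  The observations are invented for illustration, and the smoothing constant is a choice, not a standard.

```python
# Sketch: smoothing a volatile monthly CPI series to expose the trend.
# The observations below are invented; alpha controls how fast the trend reacts.

def exponential_smoothing(series: list[float], alpha: float = 0.3) -> list[float]:
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

monthly_cpi = [1.02, 0.91, 1.08, 0.91, 0.91, 0.88, 0.93, 0.86, 0.90, 0.84]
trend = exponential_smoothing(monthly_cpi)

for month, (raw, smooth) in enumerate(zip(monthly_cpi, trend), start=1):
    print(f"month {month:2}: CPI {raw:.2f}  smoothed {smooth:.2f}")
```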

In my last post on Four Trends in Project Management, I posited that the question wasn’t more data or less data, but utilization of data in a more effective manner, and identification of what is significant and therefore “better” data.  I recently heard this line repeated back to me as an argument against providing data.  That conclusion is a misreading of what I was proposing.  Reporting data at one level of a project hierarchy in today’s environment is no more work than reporting at any other level.  So cost is no longer a valid basis for objecting to data submission (unless, of course, the one taking that position is prepared to admit to deficiencies in their IT systems or the unreliability of their data).

Our projects must be measured against the framing assumptions in which they were first formed, as well as the established measures of effectiveness, measures of performance, and measures of technical achievement.  In order to view these factors one must have access to data originating from a variety of artifacts: the Integrated Master Schedule, the Schedule and Cost Risk Analysis, and the systems engineering/technical performance plan.  I would propose that project financial execution metrics are also essential in getting a complete, integrated, view of our projects.

There may be other supplemental data that is necessary as well.  For example, the NDIA Integrated Program Management Division has a proposed revision to what is known as the Integrated Baseline Review (IBR).  For the uninitiated, this is a process in which both the supplier and government customer project teams can come together, review the essential project artifacts that underlie project planning and execution, and gain a full understanding of the project baseline.  The reporting systems that identify the data that is to be reported against the baseline are identified and verified at this review.  But there are also artifacts submitted here that contain data that is relevant to the project and worthy of continuing assessment, precluding manual assessments and reviews down the line.

We don’t yet know the answer to these data issues and won’t until all of the data is normalized and analyzed.  Then the wheat can be separated from the chaff, and a more precise set of data can be identified for submittal, normalized, and placed in an analytical framework that gives us timely, more precise information–so that project stakeholders can make decisions on handling any risks that manifest themselves while there is still a window in which to handle them (or can determine that they cannot be handled).  As the farmer says in the Chinese proverb:  “We shall see.”

No Bucks, No Buck Rogers — Project Work Authorizations, Change Control, and Cash Flow

As I’ve written here most recently, the most significant proposal coming out of the Integrated Program Management Conference (IPMC) this year was the comprehensive manner of integrating all essential elements of a project presented by Glen Alleman et al.  In their presentation, Alleman, Coonce, and Price lay out a process flow (which, in my estimation, should be mirrored in data and information flow) in which program artifacts are imbued with measures of effectiveness, measures of performance, and measures of progress, to achieve an organic integration of all parts of the project that allows the project team to make a valid assessment of achievement against the plan, informed by risk and opportunity.  (Emphasis my own.)  The three-legged stool of cost, schedule, and technical performance is thereby integrated properly at the appropriate level of the project structure, and in such a way as to overcome the rigidity and fallacy of the single point estimate.

But, as is always the case with elegant models, while they replicate a sufficient portion of reality to allow us to make our assessments using statistical methods, there are other elements that we have purposely left out because our present models do not incorporate them into the normal and normative process.  They are considered situational, and so lie just outside of the process flow, though they insert themselves when necessary–and much more frequently than desired.  I am referring to the availability of money and resources, and the manner in which they affect the project: through work authorization documents (WADs) and baseline change requests (BCRs).

I have seen situations where fully 90% of the effort in project management is devoted to managing and adjusting the plan based on baseline changes.  This is particularly the case where estimates are poorly developed under the excuse of uncertainty.  Of course there is uncertainty–that’s the purpose of developing a plan.  The issue isn’t the presence of risk (and opportunity) but whether our risks are educated ones, that is, informed by familiarity with similar efforts, engineering assessment, core competency, and other empirical factors.  This is where the most radical elements of the Agile Cult get it wrong–in focusing on risk and assuming that the only way to realize opportunity is to forgo the empirical process.  This is not only a misreading of risk and opportunity assessment in project management, it is a sort of neo-Luddite position regarding scientific management.

The environment in which a project operates undergoes change.  The framing assumptions of the project determine the expectations of scope, cost, and what defines success.  The concept of framing assumptions was fully developed in a RAND study that I covered in a previous blog post.  Most often, but not always, a change in framing assumptions is reflected in the WAD and BCR process, usually in the latter.  Thus, we have a means of determining and taking account of changes in framing assumptions within the normal process of project management, as opposed to the more obvious examples of a complete replan or an over target baseline (OTB).

So where do we track WADs and BCRs in our processes so that they provide sufficient indicators–within our measures of effectiveness, performance, and progress–that our resources (in both size and type) may not be sufficient, or that these changes are significant enough that our framing assumptions have changed?  I would argue that the linkage for resources must also be made through the Integrated Master Plan (IMP) and be reflected in the IMS, cross-referenced to the PMB.  Technology can provide the remainder of the ability to integrate these elements and provide the process flow necessary for early warning.  This integration goes beyond the traditional focus on cost and schedule (and the newly reintroduced emphasis on technical achievement).  It involves integration with resource management systems (personnel, skillset assignments, etc.) as well as financial management systems to determine the availability of money (both its sufficiency and “color”*) being applied to the right place at the right time.
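
As a minimal sketch of the linkage being proposed (the class and field names are hypothetical, not an existing standard), work authorizations and baseline changes can carry explicit references to IMP events, IMS tasks, and PMB control accounts, so that a funding shortfall surfaces automatically rather than at the next manual review.

```python
# Sketch of a cross-referenced data model (hypothetical names, not a standard).
from dataclasses import dataclass

@dataclass
class ControlAccount:            # element of the PMB
    id: str
    budget: float                # budget at completion for the account
    authorized: float = 0.0      # funding released via WADs to date

@dataclass
class WorkAuthorization:         # WAD: releases budget against a control account
    id: str
    control_account_id: str
    imp_event_id: str            # link to the Integrated Master Plan event
    ims_task_ids: list[str]      # link to Integrated Master Schedule tasks
    amount: float

@dataclass
class BaselineChangeRequest:     # BCR: changes scope/budget on a control account
    id: str
    control_account_id: str
    delta_budget: float
    rationale: str

def flag_funding_risk(account: ControlAccount, wads: list[WorkAuthorization],
                      bcrs: list[BaselineChangeRequest]) -> bool:
    """Early-warning check: does authorized work exceed the (revised) budget?"""
    revised_budget = account.budget + sum(
        b.delta_budget for b in bcrs if b.control_account_id == account.id)
    authorized = sum(w.amount for w in wads if w.control_account_id == account.id)
    return authorized > revised_budget
```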

Integrating these elements then allows for more sophisticated methods of determining project success through the introduction of metrics that provide correlations between the elements.  It also answers, absent politics, the question of the optimum level of both analysis and reporting.

*The “color” of money applies mostly to public investments, in which monies appropriated are designated by their purpose:  operations, maintenance, engineering, R&D, etc.

Note: This post was modified to add a point of clarification in applying WADs and BCRs to the PMB.

I’ve Got Your Number — Types of Project Measurement and Services Contracts

Glen Alleman reminds us at his blog that we measure things for a reason and that they include three general types: measures of effectiveness, measures of performance, and key performance parameters.

Understanding the difference between these types of measurement is key, I think, to defining what we mean by such terms as integrated project management and in understanding the significance of differing project and contract management approaches based on industry and contract type.

For example, project management focused on commodities, with their price volatility, emphasizes schedule and resource management. Cost performance (earned value), where it exists, is measured by time in lieu of volume- or value-based performance. I have often been engaged in testy conversations where those involved in commodity-based PM insist that they have been using Earned Value Management (EVM) for as long as the U.S.-based aerospace and defense industry (though the methodology was born in the latter). But when one scratches the surface, the details of how value and performance are determined are markedly different–and so they should be, given the different business environments in which enterprises in each of these industries operate.
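
To make the distinction concrete, here is a minimal sketch contrasting the conventional value-based schedule index with a time-based (earned schedule) index computed from the same cumulative data; the planned-value curve and earned value figures are invented for illustration.

```python
# Sketch: value-based SPI vs. time-based SPI (earned schedule) from the same
# cumulative data. The planned-value curve and earned value are invented.

def earned_schedule(pv_cumulative: list[float], ev: float) -> float:
    """Return earned schedule in periods: when the plan called for EV to be earned."""
    for t, pv in enumerate(pv_cumulative, start=1):
        if pv >= ev:
            prev = pv_cumulative[t - 2] if t > 1 else 0.0
            return (t - 1) + (ev - prev) / (pv - prev)   # linear interpolation
    return float(len(pv_cumulative))

planned_value = [100.0, 250.0, 450.0, 700.0, 1000.0]  # cumulative PV by month
actual_time = 4                                        # months elapsed
ev = 520.0                                             # cumulative earned value
pv_now = planned_value[actual_time - 1]

spi_value = ev / pv_now                                      # value-based: EV / PV
spi_time = earned_schedule(planned_value, ev) / actual_time  # time-based: ES / AT

print(f"SPI($) = {spi_value:.2f}, SPI(t) = {spi_time:.2f}")
```

In this invented case the two indices tell noticeably different stories about the same project, which is exactly why the method of measurement matters.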

So what is the difference in these measures? In borrowing from Glen’s categories, I would like to posit a simple definitional model as follows:

Measures of Effectiveness – qualitative measures of achievement against the goals of the project;

Measures of Performance – quantitative measures of execution against a plan or baseline;

Key Performance Parameters – the minimally acceptable thresholds of achievement in the project or effort.

As you may guess, there is sometimes overlap and confusion regarding which category a particular measurement falls into. This confusion has been exacerbated by efforts to define key performance indicators (KPIs) by industry, giving the impression that measures are exclusive to a particular activity. While this is sometimes the case, it is not always the case.

So when we talk of integrated project management we are not accepting that any particular method of measurement has primacy over the others, nor subsumes them. Earned Value Management (EVM) and schedule performance are clearly performance measures. Qualitative measures oftentimes gauge achievement of the technical aspects of the end item application being produced. This is not the same as technical performance measurement (TPM), which measures technical achievement against a plan–a performance measure. Technical achievement may inform our performance measurement systems–and it is best if it does. It may also inform our Key Performance Parameters, since exceeding a minimally acceptable threshold obviously helps us to determine success or failure in the end. The difference is the method of measurement. In a truly integrated system the measurement of one element informs the others. For the moment these systems tend to be stove-piped.

It becomes clear, then, that the variation in approaches differs by industry, as in the example on EVM above, and–in an example that I have seen most recently–by contract type. This insight is particularly important because all too often EVM is viewed as being synonymous with performance measurement, which it is not. Services contracts require structure in measurement as much as R&D-focused production contracts, particularly because they increasingly take up a large part of an enterprise’s resources. But EVM may not be appropriate.

So for our notional example, let us say that we are responsible for managing an entity’s IT support organization. There are types of equipment (PCs, tablet computers, smartphones, etc.) that must be kept operational based on the importance of the end user. These items of hardware use firmware and software that must be updated and managed. Our contract establishes minimal operational parameters that allow us to determine if we are at least meeting the basic requirements and will not be terminated for cause. The contract also provides incentives to encourage us to exceed the minimums.

The sites we support are geographically dispersed. We have to maintain a help desk but must also have people who can come onsite and provide direct labor to set up new systems or fix existing ones–and the sites and personnel must be supported within particular time-frames: one hour, two hours, twenty-four hours, etc.

In setting up our measurement systems the standard practice is to start with the key performance parameters. Typically we will also measure response times by site and personnel level, record our help desk calls, and track qualitative aspects of the work: How helpful is the help desk? Do calls get answered at the first contact? Are our personnel friendly and courteous? What kinds of hardware and software problems do we encounter? We collect our data from a variety of one-off and specialized sources and then generate reports from these systems. Many times we will focus on those that will allow us to determine if the incentive will be paid.
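
A minimal sketch of the kind of key performance parameter check described here (the thresholds, ticket data, and incentive rule are all notional): response-time compliance is computed per priority tier and compared against both the contractual minimum and the incentive target.

```python
# Sketch: KPP compliance check for a services contract (all values notional).
# Each priority tier has a contractual response-time threshold in hours.

THRESHOLD_HOURS = {"critical": 1, "high": 2, "routine": 24}
MINIMUM_COMPLIANCE = 0.90     # below this, we risk termination for cause
INCENTIVE_COMPLIANCE = 0.98   # at or above this, the incentive is earned

tickets = [   # (priority, response time in hours) -- invented sample data
    ("critical", 0.8), ("critical", 1.4), ("high", 1.5),
    ("high", 2.5), ("routine", 6.0), ("routine", 30.0),
]

def compliance_by_tier(tickets):
    results = {}
    for tier, limit in THRESHOLD_HOURS.items():
        tier_times = [hours for prio, hours in tickets if prio == tier]
        if tier_times:
            met = sum(1 for hours in tier_times if hours <= limit)
            results[tier] = met / len(tier_times)
    return results

for tier, rate in compliance_by_tier(tickets).items():
    status = ("incentive" if rate >= INCENTIVE_COMPLIANCE
              else "compliant" if rate >= MINIMUM_COMPLIANCE else "AT RISK")
    print(f"{tier:>8}: {rate:.0%} within {THRESHOLD_HOURS[tier]}h -> {status}")
```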

Among all of this data we may be able to discern certain things: if the contract is costing more or less than we anticipated, if we are fulfilling our contractual obligations, if our personnel pools are growing or shrinking, if we are good at what we do on a day-to-day basis, and if it looks as if our margin will be met. But what these systems do not do is allow us to operate the organization as a project, nor do they allow us to make adjustments in a timely manner.

Only through integration and aggregation can we know, for example, how the demand for certain services is affecting our resource demands by geographical location and level of service; where, on a real-time basis, we need to make adjustments in personnel and training; whether we are losing or achieving our margin by location, labor type, equipment type, and hardware vs. software; what our balance sheets look like (by location, by equipment type, by software type, etc.); whether there is a learning curve; and whether we can make intermediate adjustments to achieve the incentive thresholds before the result is written in stone. Having this information also allows us to manage expectations, factually inform perceptions, and improve customer relations.
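
As a minimal sketch of the aggregation being described (field names and figures are hypothetical), service transactions can be rolled up by location and labor type so that margin can be watched while there is still time to adjust.

```python
# Sketch: aggregating service transactions to watch margin by location and
# labor type (hypothetical fields and figures).
from collections import defaultdict

transactions = [
    {"site": "Norfolk",   "labor": "field tech", "revenue": 1200.0, "cost":  950.0},
    {"site": "Norfolk",   "labor": "help desk",  "revenue":  400.0, "cost":  420.0},
    {"site": "San Diego", "labor": "field tech", "revenue":  900.0, "cost":  610.0},
    {"site": "San Diego", "labor": "help desk",  "revenue":  500.0, "cost":  380.0},
]

totals = defaultdict(lambda: {"revenue": 0.0, "cost": 0.0})
for t in transactions:
    key = (t["site"], t["labor"])
    totals[key]["revenue"] += t["revenue"]
    totals[key]["cost"] += t["cost"]

for (site, labor), v in sorted(totals.items()):
    margin = (v["revenue"] - v["cost"]) / v["revenue"]
    print(f"{site:>9} / {labor:<10} margin {margin:6.1%}")
```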

What is clear by this example is that “not doing EVM” does not make measurement easy, nor does it imply simplification, nor the absence of measurement. Instead, understanding the nature of the work allows us to identify those measures within their proper category that need to be applied by contract type and/or industry. So while EVM may not apply to services contracts, we know that certain new aggregations do apply.

For many years we have intuitively known that construction and maintenance efforts are more schedule-focused, that oil and gas exploration is more resource- and risk-focused, and that aircraft, satellites, and ships are more performance-focused. I would posit that now is the time for us to quantify and formalize the commonalities and differences. This also makes an integrated approach not simply a “nice to have” capability, but an essential one in managing our enterprises and the projects within them.

Note: This post was updated to correct grammatical errors.

Livin’ on a Prayer — The Importance of Plan B

Glen Alleman over at Herding Cats has a great presentation up on the importance of risk handling by having a Plan B based on the Shackleton Expedition.  This is an important point and one that goes against the oft-heard assertion, particularly in software development, that we are “exploring,” that our systems are evolutionary, that we are delivering value one increment at a time.

Murphy’s Law of combat operations states that “No OPLAN ever survives initial contact.”  This experience is in line with Eisenhower’s comment that “When preparing for battle, I have always found that plans are useless but planning is indispensable.”  What these observations mean is that we know that, once tested by reality–the reality of combat in the case of the examples given–almost nothing will go according to plan.

As part of operational planning, the staff identifies risks, alternatives, and contingencies.  Everyone in the planning process is made familiar with these alternatives–Plan B, Plan C, etc.  We may “think” that the plan we have chosen is the best one, based as it is on certain assumptions and the 80% solution.  But in the midst of an operation no one can anticipate everything.  Knowing, however, that what one is seeing is very much in line with one of the alternative scenarios identified during planning allows us to initiate that alternative plan.  Rather than having to throw everything out, including the progress that has been made, or having to improvise from zero, we have a basis for making well-informed decisions based on the alternatives.  To do otherwise is folly and may lead to defeat in the case of military operations.  For more workaday situations, like project management, to do otherwise is folly and may lead to project failure.

This is why our community should be following the ACA roll-out, regardless of the surrounding politics, as I stated in a previous blog.  The ACA program is a fascinating real-life and highly visible experiment in program and project management.  Much of the publicity in the press focused on the federal government’s website roll-out.  But that fails to distinguish two important concepts: that a program is not the same as a project, and that the website was not the entire project for the initial enrollment period for the ACA.  It is, to quote Macbeth, “…a tale told by an idiot, full of sound and fury, signifying nothing.”

The reason for my unsympathetic assessment of the critics of the roll-out is that they were using the wrong measures of success; the relevant measure was the number of people ultimately enrolled (now projected to be about 7.8 million for the website alone, and between 14 and 20 million overall).  There was a Plan B and a Plan C.  The facilitators turned out to be very important in the early stages and represented an effective Plan B.  Adjustments to the enrollment period also allowed for some flexibility, giving the digital systems time to recover, and provided a Plan C.

There will be more detailed postmortems as the players begin to publish once the dust has settled.  Early controversy within the IT community has focused on whether the failure rested with Agile or Waterfall.  I think this is a false debate, since no software methodology can credibly claim to inherently handle risk or to rise to the level of a project management method.  I think the real issue of interest to IT professionals will focus on the areas of testing and recovery: the former because early reports were that testing was insufficient, and the latter because the recovery was remarkably fast, which undermines the credibility of the critics of testing.