I’ve had a number of conversations over the last four months on this issue, mostly from the technical solution side rather than the issue side. But intent does eventually translate into action and so it is an important issue to tackle. Usually this issue comes up within the context of the purpose of audit. But then, we have to understand the purpose of audit itself within the context of project management.
For example, it is assumed in project management that we are dealing with an oversight and regulatory regime. But there are dichotomies within this assumption.
The hardcore auditor states that their job is to ensure that the project organization is compliant with a specific set of criteria. These criteria will have attributes associated with them that clarify what an auditor should accept as proof of compliance. Sounds cut and dried enough–and from the standpoint of the auditor it is. But what of the rationale behind the criteria and attributes?
No doubt, especially in highly technical fields, these criteria have been developed and published based on the long experience of practitioners of the discipline. Where there are competing interests–which pretty much defines all economic relationships–the criteria and attributes will be informed by a type of in-group politics. The end result is to prescribe (hence prescriptiveness) how something is to be done–approaching standardization–in order to establish some common basis of what constitutes minimally acceptable business practices. The guidance associated with these disciplines can vary. For example, we’re all at least vaguely familiar with GAAP in accounting (though there are many variants of GAAP depending on the type and size of business); there are food sanitation standards; and in project management, particularly in government contracting, there are standards for business systems, including criteria for the establishment of an earned value management system.
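As a concrete illustration of the kind of measures an earned value management system is expected to produce, the standard variance and performance-index formulas can be sketched as follows (the project figures below are hypothetical, chosen only for illustration):

```python
def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    """Compute the basic earned value measures from planned value (PV),
    earned value (EV), and actual cost (AC)."""
    return {
        "cost_variance": ev - ac,       # CV = EV - AC (negative means over cost)
        "schedule_variance": ev - pv,   # SV = EV - PV (negative means behind schedule)
        "cpi": ev / ac,                 # Cost Performance Index
        "spi": ev / pv,                 # Schedule Performance Index
    }

# Hypothetical status: $100k of work planned, $90k earned, $120k spent.
metrics = earned_value_metrics(pv=100_000, ev=90_000, ac=120_000)
# CV = -30000, SV = -10000, CPI = 0.75, SPI = 0.9
```

A CPI or SPI below 1.0 flags cost or schedule trouble–precisely the kind of signal the compliance criteria exist to make visible and trustworthy.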
There are some areas that require more prescriptiveness than others. For example, a look around the examination room when visiting one’s doctor gives us some clue of this effect. Examination rooms must contain a minimum number of items, and among these items are the ubiquitous red hazardous waste bins. These bins must be a certain color (red) and a certain size, must bear particular markings identifying their purpose, what they may contain, and the hazardous nature of their possible contents, and must have a special bag within them so that hazardous fluids and waste do not permanently contaminate the bin and prohibit its reuse. Each of these attributes must be met. It is assumed that these attributes have been established for good reason–associated not only with the health and safety of the personnel immediately engaged in the medical profession, but also with the health and safety of the community in general.
Thus, areas that are deemed hazardous, potentially hazardous, or that carry risk of harm far beyond the entity immediately involved tend to be the most prescriptive. Naval aviation is another good example, because it is so closely tied to research and development within the aviation project management community. Project managers in Navy aviation, for example, are considered to be the life-cycle managers of their systems. Thus, though a project may be established for a specific purpose, the long-term view is not automatically eschewed in favor of immediate concerns. This promotes a more balanced view.
The guidance Navy aviation follows is the Naval Air Training and Operating Procedures Standardization (NATOPS) manual. NATOPS consists of very specific guidance and checklists on the do’s and don’ts of flying, personnel readiness, and aircraft maintenance, based on the experience of previous operations. It has been said, and often repeated, that NATOPS was written with the blood of every aviator that came before. Because operating a high-performance aircraft in military operations is so hazardous, the guidance in this case is very specific. There is very little, if any, flexibility in its application–and with good reason.
But not every action in life has the same effect as every other. An administrative error on a form, for example, may or may not have an effect beyond requiring a simple correction. An error in decision making, especially under stress or duress, requires an understanding of mitigating factors. Human action, by definition, is filled with error. This is a given, and one who does not accept this very basic fact is neither mature enough to engage effectively with the world nor qualified to assume any kind of leadership position or role. Some lessons can simply be “learned,” provided that action to acknowledge and correct the error is taken immediately. Others that have greater impact may require more calibrated levels of response. This is where materiality comes into play.
Materiality is the application not only of experience but of judgment reasonably applied, and a number of elements go into that judgment. Once again, the military services, probably more than any civilian organization, undergo regular and intensive rounds of testing, audit, and oversight of all of their financial, operational, and administrative systems. Given that public monies and the public interest are involved, the results of these audits sometimes find their way into the news. When such events occur, a great deal of time, money, and resources is often diverted from other activities–including those dedicated to correcting the identified error–to addressing inquiries. The news media, of course, are simply serving the purpose of the press within the idealized framework of a democratic society. (Journalistic standards are a different topic.) It would be nice, from the military’s perspective, if these errors (and in some cases misbehavior by individuals) didn’t happen, but they are a part of life in an extremely large and complex organization that also manages a large quantity of resources.
Thus, the definition of materiality in these cases also involves considerations of perception, as well as minimizing both the frequency and the impact of simple or compound error, incompetence, or misbehavior.
More discussion to follow.