Move It On Over — Third and Fourth Generation Software: A Primer

While presenting to organizations regarding business intelligence and project management solutions I often find myself explaining the current state of programming and what current technology brings to the table. Among these discussions is the difference between third and fourth generation software, not just from the perspective of programming–or the Wikipedia definition (which is quite good, see the links below)–but from a practical perspective.

Recently I ran into someone who asserted that their third-generation software solution was advantageous over a fourth generation one because it was “purpose built.” My response was that a fourth generation application provides multiple “purpose built” solutions from one common platform in a more agile and customer-responsive environment. For those unfamiliar with the differences, however, this simply sounded like a war of words rather than the substantive debate that it was.

Most people who use a software application are unaware of the three basic logical layers that make up the solution. These are the business logic layer, the application layer, and the database structure. The user interface delivers the result of the interaction of these three layers to the user: what is seen on the screen.

During the early years of widespread PC use and of distributed computing on centralized systems, a group of powerful languages emerged that allowed machine operations to be handled by an operating system, freeing software developers to write code focused on “purpose built” solutions.

Initially these efforts concentrated on automating highly labor-intensive activities to achieve maximum productivity gains in an organization, and on leveraging those existing systems to distribute information that previously would require many hours of manual effort in mathematical and statistical calculation and visualization. The solutions written were based on what were referred to as third generation languages, and they are familiar even to non-technical people: Fortran, COBOL, C, C++, C#, and Java, among others. These languages are highly structured and require a good bit of expertise to program correctly.

In third generation environments, the coder specifies operations that the software must perform based on data structure, application logic, and pre-coded business logic. These three levels are highly integrated, and any change in one of them requires that the programmer trace the impact of that change to ensure that the operations in the other two layers are not affected. Oftentimes the change has a butterfly effect, requiring detailed adjustments to take into account the subtleties in processing. It is this highly structured, interdependent, “purpose built” construction that causes unanticipated software bugs to pop up in most applications. It is also the reason why software development and upgrade configuration control is highly structured and time-consuming, requiring long lead times to deliver what most users view as relatively mundane changes and upgrades, like a new chart or graph.

In contrast, fourth generation applications separate the three levels and control the underlying behavior of the operating environment by leveraging a standard framework, such as .NET. The .NET operating environment, for example, provides both a library that supports interoperability across programming languages (known as the Framework Class Library, or FCL) and a virtual machine that handles exception handling, memory management, and other common functions (known as the Common Language Runtime, or CLR).

With the three layers separated, and with many of the more mundane background tasks handled by the .NET framework, the software developer gains a great deal of freedom, which translates into real benefits for customers and users.

For example, the database layer is freed from hard-coded dependencies on the application layer, since the operating environment allows libraries of industry standard APIs to be leveraged, making the solution agnostic to data. Furthermore, the business logic/UI layer allows for table-driven and object-oriented configuration that creates a low code environment, which not only allows for rapid roll-out of new features and functionality (since hard-coding across all three layers is eschewed), but also allows for more precise targeting of functionality based on the needs of user groups (or any particular user).
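To make this concrete, here is a minimal sketch (in Python, purely for illustration) of what a table-driven, data-agnostic report might look like: the report is a configuration record interpreted by a generic engine over a standard database driver. The table name, columns, and toy data are all invented for the example.

```python
import sqlite3  # stands in for any standard DB-API driver (ODBC, PostgreSQL, etc.)

# A hypothetical, table-driven report definition: configuration data rather than code.
REPORT_CONFIG = {
    "source_table": "cost_performance",                      # illustrative names
    "columns": ["control_account", "bcws", "bcwp", "acwp"],
    "filters": {"period": "2018-01"},
    "sort_by": "control_account",
}

def run_report(conn, cfg):
    """Build and execute a query purely from the configuration record."""
    cols = ", ".join(cfg["columns"])
    where = " AND ".join(f"{key} = ?" for key in cfg["filters"])
    sql = (f"SELECT {cols} FROM {cfg['source_table']} "
           f"WHERE {where} ORDER BY {cfg['sort_by']}")
    return conn.execute(sql, list(cfg["filters"].values())).fetchall()

# Toy data so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cost_performance "
             "(control_account TEXT, period TEXT, bcws REAL, bcwp REAL, acwp REAL)")
conn.executemany("INSERT INTO cost_performance VALUES (?, ?, ?, ?, ?)",
                 [("1.1.1", "2018-01", 100.0, 90.0, 95.0),
                  ("1.1.2", "2018-01", 200.0, 210.0, 205.0)])

for row in run_report(conn, REPORT_CONFIG):
    print(row)
```

Because the engine only speaks standard SQL through a standard driver, swapping the underlying data source does not require touching the application code, which is the essence of the data-agnostic claim.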

This is what is meant in previous posts by new technology putting the SME back in the driver’s seat, since pre-defined reports and objects (GUIs) at the application layer allow for immediate delivery of functionality. Oftentimes data from disparate data sources can be bound together through simple queries and SQL, particularly if the application layer’s table and object functionality is built well enough.
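As a hedged illustration of that binding, the following sketch joins two invented data extracts (a scheduling export and an earned value export) on a shared control account key; the field names and values are hypothetical.

```python
import pandas as pd

# Source 1: schedule milestones, as if pulled from a scheduling tool's export.
schedule = pd.DataFrame([
    {"control_account": "1.1.1", "milestone": "CDR", "forecast_finish": "2018-03-15"},
    {"control_account": "1.1.2", "milestone": "TRR", "forecast_finish": "2018-05-01"},
])

# Source 2: cost performance, as if pulled from an earned value system.
cost = pd.DataFrame([
    {"control_account": "1.1.1", "bcwp": 90.0, "acwp": 95.0},
    {"control_account": "1.1.2", "bcwp": 210.0, "acwp": 205.0},
])

# A single join (the SQL equivalent of SELECT ... JOIN ... ON control_account)
# binds the two streams without any hard-coded integration layer.
combined = schedule.merge(cost, on="control_account", how="inner")
print(combined)
```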

When domain knowledge is incorporated into the business logic layer, the distinction between generic BI and COTS is obliterated. Instead, what we have is a hybrid approach that provides the domain specificity of COTS (“purpose built”) with the power of BI, which reduces the response time between solution design and delivery. More and better data can also be accessed, establishing an environment of discovery-driven management.

Needless to say, properly designed Fourth Generation applications are perfectly suited to rapid application development and deployment approaches such as Agile. They also provide integration potential, given their agnosticism to data, that Third Generation “purpose built” applications can only approximate through data transfer and reconciliation across separate applications that never truly achieve integration. Instead, Fourth Generation applications can subsume the specific “purpose built” functionality found in stand-alone applications and deliver it via a single platform that provides one source of truth, while still allowing for different interpretations of the data through the application of differing analytical approaches.

So move it on over nice (third generation) dog, a big fat (fourth generation) dog is moving in.

Friday Hot Washup: Daddy Stovepipe sings the Blues, and Net Neutrality brought to you by Burger King

Daddy Stovepipe sings the Blues — Line and Staff Organizations (and how they undermine organizational effectiveness)

In my daily readings across the web I came upon this very well written blog post by Glen Alleman at his Herding Cats blog. The eternal debate in project management surrounds when done is actually done, and what the best measurement of progress toward completion of the end item is.

Glen rightly points to the specialization among SMEs in the PM discipline, and the differences between their methods of assessment. These centers of expertise are still aligned along traditional line and staff organizations that separate scheduling, earned value, system engineering, financial management, product engineering, and other specializations.

I’ve written about this issue before: information also follows these stove-piped pathways, producing multiple data streams with overlapping information that resist effective optimization and synergy because of the barriers between them. These barriers may be social or perceptual, and they then impose themselves upon the information systems that are constructed to support them.

The manner in which we face and interpret the world is the core basis of epistemology. When we develop information systems and analytical methodologies, whether we are consciously aware of it or not, we delve into the difference between justified belief and knowledge. I see the confusion of these positions in daily life and in almost all professions and disciplines. In fact, most of us find ourselves jumping from belief to knowledge effortlessly without being aware of this internal contradiction–and the corresponding reduction in our ability to accurately perceive reality.

The ability to overcome our self-imposed constraints is the key but, I think, our PM organizational structures must be adjusted to allow for the establishment of a learning environment in relation to data. The first step in this evolution must be the mentoring and education of a discipline that combines these domains. This does not mean that any one individual need know everything about EVM, scheduling, systems engineering, and financial management. But the business environment today is such that, if the business or organization wishes to be prepared for the world ahead, it must train and transition personnel toward a multi-disciplinary project management competency.

I would posit, contrary to Glen’s recommendation, that no one discipline should claim to be the basis for cross-functional integration, if only because such a claim may be self-defeating. In the book Networks, Crowds, and Markets: Reasoning about a Highly Connected World, David Easley and Jon Kleinberg of Cornell show that our social systems are composed of complex networks in which negative perceptions develop when the network is no longer considered in balance. This subtle and complex interplay of perceptions drives our ability to work together.

It also affects whether we stay in the comfort zone of having our information systems tell us what we need to analyze, or whether we apply a more expansive view, leveraging new information systems that are able to integrate ever expanding sets of relevant data to give us a more complete picture of what constitutes “done.”

Hold the Pickle, Hold the Lettuce, Special Orders Don’t Upset Us: Burger King explains Net Neutrality

The original purpose of the internet was the free exchange of ideas and knowledge. Initially, under ARPANET and the leadership of Lawrence Roberts and later Bob Kahn, the focus was on linking academic and research institutions so that knowledge could be shared, resulting in collaboration that would overcome geographical barriers. Later the Department of Defense, NASA, and other government organizations highly dependent on R&D were brought into the new internet community.

To some extent there still are pathways within what is now broadly called the Web to find and share such relevant information with these organizations. With the introduction of commercialization in the early 1990s, however, it has become increasingly hard to perform serious research.

For with the expansion of the internet to the larger world, the larger world’s dysfunctions and destructive influences also entered. Thus, the internet has transitioned from a robust First Amendment free speech machine to a place that also harbors state-sponsored psy-ops and propaganda. It has gone from a safe space for academic freedom and research to a place of organized sabotage, intrusion, theft, and espionage. It has transitioned from a highly organized professional community that hewed to ethical and civil discourse to one that harbors trolls, prejudice, hostility, bullying, and other forms of human dysfunction. Finally, and most significantly, it has become dominated by commercial activity: by high tech giants that stifle innovation, and by social networking sites that, applying an extreme laissez-faire attitude, allow, magnify, and spread the more dysfunctional activities found on the web as a whole.

At least, for those who still looked to the very positive effects of the internet, there was net neutrality: the assurance that blogs like this one and the many others that I read on a regular basis, along with mainstream news and scientific journals, were still available without being “dollarized,” in the words of the naturalist John Muir.

Unfortunately this is no longer the case, or will no longer be the case, perhaps, when the legal dust settles. Burger King has placed its marker down, and it is a relevant and funny one. Please enjoy and have a great weekend.

 

Learning the (Data) — Data-Driven Management, HBR Edition

The months of December and January are usually full of reviews of significant events and achievements during the previous twelve months. Harvard Business Review makes the search for some of the best writing on the subject of data-driven transformation easier by occasionally collecting, in its magazine OnPoint, the best writing on a critical subject of interest to professionals in one volume. It is worth making part of your permanent data management library.

The volume begins with a very concise article by Thomas C. Redman with the provocative title “Does Your Company Know What to Do with All Its Data?” He goes on to list seven takeaways for optimizing the use of existing data that include many of the themes that I have written about in this blog: better decision-making, innovation, what he calls “informationalize products,” and other significant effects. Most importantly, he refers to the situation of information asymmetry and how this provides companies and organizations with a strategic advantage that directly affects the bottom line, whether that be in negotiations with peers, contractual relationships, or market advantages. Aside from the OnPoint article, he also has some important things to say about corporate data quality. Highly recommended, and a good reason to implement systems that assure the fidelity of internal information systems.

Edd Wilder-James also covers a theme that I have hammered home in a number of blog posts in the article “Breaking Down Data Silos.” The issue here is access to data and the manner in which it is captured and transformed into usable analytics. His recommended approach to a task that is often daunting is to follow the path of least resistance: find the opportunities to break down silos and maximize data in order to apply advanced analytics. The article provides a necessary balm that counteracts the hype that often accompanies this topic.

Both of these articles are good entrées to the subject and perfectly positioned to prompt both thought and reflection on similar experiences. In my own day job I provide products that specifically address these business needs. Yet executives and management in all too many cases continue to be unaware of the economic advantages of data optimization, or of the manner in which continuing to support data silos is limiting their ability to effectively manage their organizations. There is no doubt that things are changing, and each day offers a new set of clients who are feeling their way in this new data-driven world, knowing that the promises of almost effort-free goodness and light made by highly publicized data gurus are not the reality of practitioners, who apply the detail work of data normalization and rationalization. At the end it looks like magic, but there is effort that needs to be expended up front to get to that state. In this physical universe, under the Second Law of Thermodynamics there are no free lunches: energy must be borrowed from elsewhere in order to perform work. We can minimize these efforts through learning and the application of new technology, but managers cannot pretend that they do not need to understand the data that they intend to use to make business decisions.

All of the longer form articles are excellent, but I am particularly impressed with the Leandro DalleMule and Thomas H. Davenport article entitled “What’s Your Data Strategy?” from the May-June 2017 issue of HBR. When addressing big data at professional conferences and in visiting businesses, the topic often runs to the manner of handling the bulk of non-structured data. But as the article notes, less than half of an organization’s relevant structured data is actually used in decision-making. The most useful artifact that I have permanently plastered at my workplace is the graphic “The Elements of Data Strategy,” and I strongly recommend that any manager concerned with leveraging new technology to optimize data do the same. The graphic illuminates the defensive and offensive positions inherent in a cohesive data strategy, leading an organization to the state the authors describe: “In our experience, a more flexible and realistic approach to data and information architectures involves both a single source of truth (SSOT) and multiple versions of the truth (MVOTs). The SSOT works at the data level; MVOTs support the management of information.” Elimination of proprietary data silos, elimination of redundant data streams, and warehousing of data that is accessed using a number of analytical methods together achieve the SSOT that provides the basis for an environment supporting MVOTs.
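To illustrate the SSOT/MVOT idea with a toy example of my own (not the authors’), a single normalized table can feed several equally valid management views; all names and figures below are invented.

```python
import pandas as pd

# A toy single source of truth (SSOT): one normalized table of project costs.
ssot = pd.DataFrame([
    {"wbs": "1.1", "org": "Engineering",   "period": "2017-Q1", "cost": 120.0},
    {"wbs": "1.1", "org": "Engineering",   "period": "2017-Q2", "cost": 140.0},
    {"wbs": "1.2", "org": "Manufacturing", "period": "2017-Q1", "cost": 200.0},
    {"wbs": "1.2", "org": "Manufacturing", "period": "2017-Q2", "cost": 180.0},
])

# MVOTs: different, equally valid views of the same underlying data,
# shaped for the information needs of different consumers.
program_view = ssot.groupby("wbs")["cost"].sum()        # program office cut
finance_view = ssot.groupby("period")["cost"].sum()     # finance cut
org_view = ssot.pivot_table(index="org", columns="period",
                            values="cost", aggfunc="sum")  # line management cut

print(program_view, finance_view, org_view, sep="\n\n")
```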

The article “Why IT Fumbles Analytics” by Donald A. Marchand and Joe Peppard, from 2013, still rings true today. As with the article cited above by Wilder-James, the emphasis here is on the work necessary to ensure that new data and analytical capabilities succeed, but the emphasis shifts to “figuring out how to use the information (the new system) generates to make better decisions or gain deeper…insights into key aspects of the business.” The heart of managing the effort in providing this capability is to put into place a project organization, as well as systems and procedures, that will support the organizational transformation that will occur as a result of the explosion of new analytical capability.

The days of simply buying an off-the-shelf, silo-ed “tool” and automating a specific manual function are over, especially for organizations that wish to be effective and competitive, and more profitable, in today’s data and analytical environment. A more comprehensive and collaborative approach is necessary. As with the DalleMule and Davenport article, there is a very useful graphic that contrasts traditional IT project approaches against Analytics and Big Data (or perhaps “Bigger” Data) projects. Though the prescriptions in the article assume an earlier concept of Big Data optimization focused on non-structured data, thereby making some of them overkill, an implementation plan is essential in supporting the kind of transformation that will occur, and managers act at their own risk if they fail to take this effect into account.

All of the other articles in this OnPoint issue are of value. The bottom line, as I have written in the past, is to keep the focus on solving business challenges rather than on buying the new bright shiny object. Put another way, in today’s business environment the days when business decision-makers could afford to stay within their silo-ed comfort zone are phasing out very quickly, so they need to shift their attention to those solutions that address these new realities.

So why do this apart from the fancy term “data optimization”? Well, because there is a direct return-on-investment in transforming organizations and systems to data-driven ones. At the end of the day the economics win out. Thus, our organizations must be prepared to support and have a plan in place to address the core effects of new data-analytics and Big Data technology:

a. The management and organizational transformation that takes place when deploying the new technology, requiring proactive socialization of the changing environment, the teaching of new skill sets, new ways of working, and of doing business.

b. Supporting transformation from a sub-optimized silo-ed “tell me what I need to know” work environment to a learning environment, driven by what the data indicates, supporting the skills cited above that include intellectual curiosity, engaging domain expertise, and building cross-domain competencies.

c. A practical plan that teaches the organization how best to use the new capability through a practical, hands-on approach that focuses on addressing specific business challenges.

Post-Workshop Talking Blues — No Bucks, No Buck Rogers: Cashflow Analysis in Projects (Somewhat Wonkish)

When I used this analogy the week before last, during the last Integrated Project Management Workshop in the D.C. area, I was accused of dating myself, and perhaps it is true. For those wondering, the quote was popularized by the 1983 movie The Right Stuff, which was based on the 1979 book of the same title by Tom Wolfe. The book and movie were about the beginnings of the U.S. space program, culminating in the creation of NASA and the Project Mercury program.

A clip from the movie follows:

It goes without saying that while I was familiar as a boy with Project Mercury and, like the rest of the country, followed the seven astronauts, transfixed by the prospect of space exploration during the days of the New Frontier, Buck Rogers was from the childhood of my father’s generation, first through its radio program and then through the serials that were released to movie theaters during the 1930s.

The point of the quote, of course, is that Project Mercury’s success was based on its ability to obtain funding and, no doubt, the Mercury 7 astronauts so inspired the imagination of the nation that even the most parsimonious Member of Congress could not help but provide the program with sufficient funding for success. This was also the era of the “space race” with the Soviet Union, which further helped to spur funding.

The lesson of “No Bucks, No Buck Rogers” also applies to project management, but not just in the use of imagery and marketing to gain funding. Instead, the principle applies through a more mundane part of the discipline: financial management and the relationship between cash flow and project performance.

What I am referring to as cash flow is not the burn rate of expenditures against an end point, but the intersection of sufficient money, at the right time, programmed in accordance with the project plan (in alignment with both the integrated master schedule (IMS) and the performance measurement baseline (PMB)), and informed by project performance.

To those unfamiliar with this method it sounds similar to earned value management, but it is not. EVM informs our decision, but the analysis is not the same.

First, in using this analysis the cumulative actual cost of work performed (ACWP in earned value) should be compared to accrued expenditures for the project. These figures will not be exact, but will provide an indication whether accruals to date have been in line with what was forecasted. In government contracting and project management, these figures will also be somewhat off because earned value figures do not include fee or profit, while financial management figures will include fee or profit. Understanding the profit center from which the financial expenditures are being accrued will allow for a reconciliation of these differences.
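A minimal sketch of that reconciliation follows, assuming a single flat fee rate applied to accruals; real contracts carry more complex fee arrangements, and the figures are invented.

```python
# Illustrative reconciliation of earned value actuals against financial accruals.
acwp_cum = 1_250_000.0      # cumulative ACWP from the EVM system (cost only, no fee)
accruals_cum = 1_372_000.0  # cumulative accrued expenditures from financial management
fee_rate = 0.08             # assumed flat fee/profit rate applied on cost

# Back the fee out of the accruals so the two figures are on a comparable basis.
accruals_ex_fee = accruals_cum / (1.0 + fee_rate)
variance = accruals_ex_fee - acwp_cum
variance_pct = variance / acwp_cum * 100.0

print(f"Accruals excluding fee: {accruals_ex_fee:,.0f}")
print(f"Variance vs. ACWP:      {variance:,.0f} ({variance_pct:+.1f}%)")
# A persistent or growing variance here is the early indicator discussed above:
# accruals drifting away from what the performance data says should have been spent.
```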

Second, if projected accruals against the project plan begin to deviate, it is an early indication of programmatic risk being manifested in the physical expenditures of the project. For example, if management anticipates that there will be a delay in project execution in some area, they may decide to defer acquisition of spare parts used in the construction of a component, or they may delay the award of a subcontract that was meant to augment staff in an area requiring specialized expertise.

Third, and conversely, deviations of expenditures for needed materials or manpower may adversely affect project execution, and provide an early warning that such shortages or misalignments will move project accomplishment to the right. For example, a company may have underestimated the combined Procurement Action Lead Time (PALT) and delivery of critical materials, which will now arrive much later than anticipated. This misalignment will cascade through the schedule and future planned work.

For both of these previous conditions, the proper determination of cause-and-effect is essential, since either may appear to suggest the opposite cause.

Fourth, variances in performance either in earned value achievement or schedule performance may require an adjustment to the type of money being provided. For example, when a project fails to execute and risk is manifested in terms of cost and/or schedule, financial management and budgeting personnel, always under pressure to apply excess funds to more immediate needs, may mistakenly believe that a budget mark (a decrease) is appropriate since the allocated money will not be executed in the current time-frame.

But this is not necessarily the case. Performance management data tracks the performance measurement baseline (PMB) for the life of the project, but funding has a finite period in which it can be executed. In government contracting it is not uncommon for there to be different “colors” of money: Research, Development, Test & Evaluation (RDT&E), Procurement, Operations and Maintenance (O&M), and others. Furthermore, these types of appropriations have different expiration dates: two years for RDT&E, three years for procurement, and one year for O&M. The financial management plan takes into account the life of the money allocated to the project, as well as the costs of activities necessary to project execution. The time frame for financial execution is shorter and, therefore, more sensitive to risks or variances than project plans that are projected across a longer period of time.
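The following toy sketch shows the kind of check a financial management plan performs against these windows, using the appropriation lives cited above; actual obligation and expiration rules vary by appropriation and authority, and the funding lines and planned expenditures are invented.

```python
# Appropriation "colors" and the obligation lives cited above (illustrative only).
APPROPRIATION_LIFE_YEARS = {"RDT&E": 2, "Procurement": 3, "O&M": 1}

# Hypothetical funding lines: (type, first fiscal year available, amount).
funding = [
    ("RDT&E", 2018, 5_000_000),
    ("Procurement", 2018, 3_000_000),
    ("O&M", 2018, 1_000_000),
]

# Hypothetical planned expenditures by fiscal year and appropriation type.
planned = [
    ("RDT&E", 2020, 750_000),        # falls outside the two-year RDT&E window
    ("Procurement", 2020, 500_000),  # still inside the three-year window
    ("O&M", 2019, 200_000),          # outside the one-year O&M window
]

for approp, fy, amount in planned:
    first_fy = next(f for t, f, _ in funding if t == approp)
    last_fy = first_fy + APPROPRIATION_LIFE_YEARS[approp] - 1
    status = "OK" if fy <= last_fy else "MISALIGNED: funds expire before planned execution"
    print(f"{approp} FY{fy} {amount:>10,} -> {status}")
```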

For an R&D program experiencing risk during a particular portion of its PMB, for example, a variance this year may require not only a steady funding profile, but a larger expenditure to handle risk. Marking two-year RDT&E money in its first year in this case would be a mistake, of course, but failing to properly anticipate the level of risk-adjusted expenditures needed to handle that risk may undermine the ability of the project to recover and execute, causing it to fall into a spiral of compounding misalignments and variances from which it may never recover.

Thus, what we can see is that, oftentimes, the availability of cash–and the right kind of cash at the right time–will have as much impact on project execution as the factors of technical and engineering risk. Furthermore, tracking and reconciling the financial plan against actual accomplishment will provide a very detailed early indicator into project performance since it is sensitive to deviations in the fiscal plan.

Postscript.

For those not savvy about the cultural reference to Buck Rogers, what follows is a sampling of the first installment of what became a movie serial in the 1930s, which originated as a radio “space opera.” It later became a TV series in 1950 as well. For the record, I was not around yet when these were popular, though I did watch the reruns on Saturday mornings in the 1960s and early 1970s.


Money for Nothing — Project Performance Data and Efficiencies in Timeliness

I operate in a well regulated industry focused on project management. What this means practically is that there are data streams that flow from the R&D activities, recording planning and progress, via control and analytical systems to both management and customer. The contract type in most cases is Cost Plus, with cost and schedule risk often flowing to the customer in the form of cost overruns and schedule slippages.

Among the methodologies used to determine progress and project eventual outcomes is earned value management (EVM). Of course, this is not the only type of data that flows in performance management streams, but oftentimes EVM is used as shorthand to describe all of the data captured and submitted to customers in performance management. Other planning and performance management data includes time-phased scheduling of tasks and activities, cost and schedule risk assessments, and technical performance.

Previously, in my critique regarding the differences between project monitoring and project management (before Hurricane Irma created some minor rearranging of my priorities), I pointed out that “looking in the rear view mirror” was often used as an excuse for by-passing unwelcome business intelligence. I followed this up with an intro to the synergistic economics of properly integrated data. In the first case I answered the critique by demonstrating that it is based on an old concept that no longer applies. In the second case I surveyed the economics of data that drives efficiencies. In both cases, new technology is key to understanding the art of the possible.

As I have visited sites in both government and private industry, I find that old ways of doing things still persist. The reasons for this are varied. First, technology is developing so quickly that there is fear that one’s job will be eliminated with its introduction. Second, the methodology of change agents in introducing new technology often lacks proper socialization across the various centers of power that inevitably exist in any organization. Third, the proper foundation to clearly articulate the need for change is not laid. This last is particularly important when stakeholders perform a non-rational assessment of cost-benefit in their minds. They see many downsides and cannot accept the benefits, even when they are obvious. For more on this and insight into other socioeconomic phenomena I strongly recommend Daniel Kahneman’s Thinking, Fast and Slow. There are other reasons as well, but these are the ones that are most obvious when I speak with individuals in the field.

The Past is Prologue

For now I will restrict myself to the one benefit of new technology that addresses the “looking in the rear view mirror” critique. It is important to do so because the critique is correct in application (for purposes that I will outline) even if incorrect in its cause-and-effect. It is also important to focus on it because the critique is so ubiquitous.

As I indicated above, there are many sources of data in project management. They derive from the following systems (in brief):

a. The planning and scheduling applications, which measure performance through time in the form of discrete activities and events. In the most sophisticated implementations, these applications will include the assignment of resources, which requires the integration of these systems with resource management. Sometimes simple costs are also assigned and tracked through time as well.

b. The cost performance (earned value) applications, which ideally are aligned with the planning and scheduling applications, providing cross-integration with WBS and OBS structures, but focused on work accomplishment defined by the value of work completed against a baseline plan. These performance figures are tied to work accomplishment through expended effort collected by and, ideally, integrated with the financial management system. This involves the proper application of labor rates and resource expenditures in the accomplishment of the work, providing not only a statistical assessment of performance to date, but also a projection of likely cost performance outcomes at completion of the effort.

c. Risk assessment applications which, depending on their sophistication and ease of use, provide analysis of possible cost and schedule outcomes, identify the sensitivity of particular activities and tasks, provide an assessment of alternative driving and critical paths, and apply different models of baseline performance to predict future outcomes (a deliberately simplified sketch of such a simulation follows below).

d. Systems engineering applications that provide an assessment of technical performance to date and the likely achievement of technical parameters within the scope of the effort.

e. The financial management applications that provide an accounting of funds allocation, cash-flow, and expenditure, including planning information regarding expenditures under contract and planned expenditures in the future.

These are the core systems of record from which performance information is derived. There are others as well, depending on the maturity of the project, such as ERP and MRP systems. But for purposes of this post, we will bound the discussion to these standard sources of data.
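Referring back to the risk assessment applications in item c, here is a deliberately simplified sketch of a Monte Carlo schedule simulation. It assumes a serial chain of activities with triangular duration distributions; commercial risk tools extend this to full schedule networks, correlations, and branching.

```python
import random

# A serial chain of activities, each with (optimistic, most likely, pessimistic)
# durations in days. Names and values are invented for illustration.
activities = {
    "Design":      (20, 30, 50),
    "Fabrication": (40, 55, 90),
    "Test":        (15, 25, 45),
}

TRIALS = 10_000
totals = []
for _ in range(TRIALS):
    total = sum(random.triangular(lo, hi, likely)  # note: mode is the third argument
                for lo, likely, hi in activities.values())
    totals.append(total)

totals.sort()
p50 = totals[int(0.50 * TRIALS)]
p80 = totals[int(0.80 * TRIALS)]
deterministic = sum(likely for _, likely, _ in activities.values())

print(f"Deterministic (most likely) duration: {deterministic} days")
print(f"P50 from simulation: {p50:.0f} days, P80: {p80:.0f} days")
```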

In the near past, our ability to understand the significance of the data derived from these systems required manual processing. I am not referring to the sophistication of the human computers of the 1960s and before, dramatized to great effect in the uplifting movie Hidden Figures. Since we are dealing with business systems, these methodologies were based on simple business metrics and other statistical methods, including those that extended the concept of earned value management.

With the introduction of PCs in the workplace in the 1980s, desktop spreadsheet applications allowed this data to be entered, usually from printed reports. Each analyst not only used standard methods common in the discipline, but also developed their own methods to process and derive importance from the data, transforming it into information and useful intelligence.

Shortly after this development, simple analytical applications were introduced to the market that allowed for paring back the amount of data derived from some of these systems and for performing basic standard calculations, rendering redundant calculations unnecessary. Thus, for example, instead of a person having to calculate multiple estimates to complete, the application could perform those calculations as part of its functionality and deliver them to the analyst for use in, hopefully, their own more extensive assessments.
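For example, the standard earned value projections such an application can pre-compute look something like the following sketch, leaving the interpretation to the analyst; the figures are invented.

```python
# Standard earned value projections that an analytical application can pre-compute.
bac = 10_000_000.0      # budget at completion
bcws = 4_000_000.0      # planned value to date
bcwp = 3_600_000.0      # earned value to date
acwp = 4_200_000.0      # actual cost to date

cpi = bcwp / acwp       # cost performance index
spi = bcwp / bcws       # schedule performance index

# A few of the common independent estimates at completion (IEAC).
ieac_cpi = acwp + (bac - bcwp) / cpi                 # performance continues at current CPI
ieac_composite = acwp + (bac - bcwp) / (cpi * spi)   # cost and schedule pressure combined
etc = ieac_cpi - acwp                                # estimate to complete under the CPI method

print(f"CPI {cpi:.2f}  SPI {spi:.2f}")
print(f"IEAC (CPI method):     {ieac_cpi:,.0f}")
print(f"IEAC (CPI*SPI method): {ieac_composite:,.0f}")
print(f"ETC (CPI method):      {etc:,.0f}")
```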

But even in this case, the data flow was limited to the EVM silo. The data streams relating to schedule, risk, SE, and FM were left to their own devices, oftentimes requiring manual methods or, in the best of cases, cut-and-paste, to incorporate data from reports derived from these systems. In the most extreme cases, for project oversight organizations, this caused analysts to acquire a multiplicity of individual applications (with the concomitant overhead and complexity of understanding differing lexicons and software application idiosyncrasies) in order to read proprietary data types from the various sources just to perform simple assessments of the data before even considering integrating it properly into the context of all of the other project performance data that was being collected.

The bottom line of outlining these processes is to note that, given a combination of manual and basic automated tools, putting together and reporting on this data takes time, and time, as Mr. Benjamin Franklin noted, is money.

By itself, the critique that this is merely “looking in the rear view mirror,” and the attribution of that fault to one particular type of information (EVM), is specious. After all, one must know where one has been and where one presently is before one can figure out where one needs to go and how to get there, and EVM is just one dimension of a multidimensional space.

But there is a utility value associated with the timing and locality of intelligence and that is the issue.

Contributors to time

Time, when expended to produce something, is a form of entropy. For purposes of this discussion, I am using entropy loosely to refer to the energy in a system that is no longer available to do work. The work in this case is the processing and transformation of data into information, and the further transformation of information into usable intelligence.

There are different levels and sub-levels when evaluating the data stream related to project management. These are:

a. Within the supplier/developer/manufacturer

(1) First tier personnel, such as Control Account Managers, Schedulers (if separate), Systems Engineers, Financial Managers, and Procurement personnel, among others, actually recording and verifying the work accomplishment;

(2) Second tier personnel that includes various levels of management, either across teams or in typical line-and-staff organizations.

b. Within customer and oversight organizations

(1) Reporting and oversight personnel tasked with evaluating the fidelity of specific business systems;

(2) Counterpart project or program officer personnel tasked with evaluating progress, risk, and any factors related to scope execution;

(3) Staff organizations designed to supplement and organize the individual project teams, providing a portfolio perspective to project management issues that may be affected by other factors outside of the individual project ecosystem;

(4) Senior management at various levels of the organization.

Given the multiplicity of data streams, the issue of economies appears vast until it is understood that the data underlying the information consumed is highly structured and specific to each of the domains and sub-domains. Thus there are several opportunities for economies.

For example, cost performance and scheduling data have a direct correlation and are closely tied. Thus, these separate streams in the A&D industry were combined under a common schema, first using the UN/CEFACT XML, and now transitioning to a more streamlined JSON schema. Financial management has gone through a similar transition. Risk and SE data are only partially incorporated into project performance schemas, but that data is also highly structured and possesses enough commonality to be directly accessed using technologies that effectively leverage APIs.
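A small sketch of what that normalization looks like in practice follows: two hypothetical proprietary exports are mapped into one common structure. The field names are invented and do not reflect the actual UN/CEFACT or A&D JSON schemas; the point is the mapping, not the schema itself.

```python
import json

# Two hypothetical proprietary exports describing the same control account.
tool_a_record = {"CA_ID": "1.1.1", "PV": 100.0, "EV": 90.0, "AC": 95.0}
tool_b_record = {"acct": "1.1.1", "planned": 100.0, "earned": 90.0, "actuals": 95.0}

# A mapping table per source drives normalization into one common structure.
FIELD_MAPS = {
    "tool_a": {"CA_ID": "control_account", "PV": "bcws", "EV": "bcwp", "AC": "acwp"},
    "tool_b": {"acct": "control_account", "planned": "bcws",
               "earned": "bcwp", "actuals": "acwp"},
}

def normalize(record, source):
    """Translate a source-specific record into the common schema."""
    return {common: record[native] for native, common in FIELD_MAPS[source].items()}

common_a = normalize(tool_a_record, "tool_a")
common_b = normalize(tool_b_record, "tool_b")
print(json.dumps([common_a, common_b], indent=2))
```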

Back to the Future

The current state, despite advances in data formats that allow for easy rationalization and normalization of data and that break through proprietary barriers, is still largely based on a slightly modified model that combines manual processing with domain-specific analytical tools (actually sub-domain analytical tools that support sub-optimization of data, and that are a barrier to the cross-domain integration necessary to create credible project intelligence).

Thus, it is not unusual at the customer level to see project teams still accepting a combination of proprietary files, hard copy reports, and standard schema reports. Usually the data in these sources is manually entered into Excel spreadsheets or a combination of Excel and some domain-specific analytical tool (and oftentimes several sub-specialty analytical tools). After processing, the data is oftentimes exported to, or rebuilt in, PowerPoint in the form of graphs or standard reporting formats. This is information management by Excel and PowerPoint.

In sum, in all too many cases the project management domain, in terms of data and business intelligence, continues to party like it is 1995. This condition also fosters and reinforces insular organizational domains, as if the project team were disconnected from the larger organization and could possess goals antithetical to, or in opposition to, its efficient operation.

A typical timeline goes like this:

a. Supplier provides project performance data 15-30 days after the close of a period. (Some contract clauses give more time). Let’s say the period closed at the end of July. We are now effectively in late August or early September.

b. Analysts incorporate stove-piped domain data into their Excel spreadsheets and other systems another week or so after submittal.

c. Analysts complete processing and analyzing data and submit in standard reporting formats (Excel and PowerPoint) for program review four to six weeks after incorporation of the data.

Items a through c now put a typical project office at project review for July information at the end of September or beginning of October. Furthermore, this information is focused on individual domains, and given the lack of cross-domain knowledge, can be contradictory.

This system is broken.

Even suppliers who have direct access to systems of record all too often rely on domain-specific solutions to be able to derive significance from the processing of project management data. The larger suppliers seem to have recognized this problem and have been moving to address it, requiring greater integration across solutions. But the existence of a 15-30 day reconciliation period after the end of a period, and formalized in contract clauses, is indicative of an opportunity for greater efficiency in that process as well.

The Way Forward

But there is another way.

The opportunities for economy in the form of improvements in time and effort are in the following areas, given the application of the right technology:

  1. In the submission of data, especially by finding data commonalities and combining previously separate domain data streams to satisfy multiple customers;
  2. In retrieving all data so that it is easily accessible to the organization at the level of detail required by the task at hand;
  3. In processing this data so that it can be converted by the analyst into usable intelligence;
  4. In accessing, displaying, and reporting properly integrated data across domains, as appropriate, to each level of the organization regardless of originating data stream.

Furthermore, there are opportunities to realize business value by improving these processes:

  1. By extending expertise beyond a limited number of people who tend to monopolize innovations;
  2. By improving organizational knowledge by incorporating innovation into the common system;
  3. By gaining greater insight into more reliable predictors of project performance across domains instead of the “traditional” domain-specific indices that have marginal utility;
  4. By developing a project focused organization that breaks down domain-centric thinking;
  5. By developing a culture that ties cross-domain project knowledge to larger picture metrics that will determine the health of the overarching organization.

It is interesting, when I visit the field, how often it is asserted that “the technology doesn’t matter, it’s the process that matters.”

Wrong. Technology defines the art of the possible. There is no doubt that in an ideal world we would optimize our systems prior to the introduction of new technology. But that assumes that the most effective organization (MEO) is achievable without technological improvements to drive the change. If one cannot integrate all submitted cross-domain information effectively and efficiently using Excel in any scenario (after all, it’s a lot of data), then the key is the introduction of new technology that can do that very thing.

So what technologies will achieve efficiency in the use of this data? Let’s go through the usual suspects:

a. Will more effective use of PowerPoint reduce these timelines? No.

b. Will a more robust set of Excel workbooks reduce these timelines? No.

c. Will an updated form of a domain-specific analytical tool reduce these timelines? No.

d. Will a NoSQL solution reduce these timelines? Yes, given that we can afford the customization.

e. Will a COTS BI application that accepts a combination of common schemas and APIs reduce these timelines? Yes.

The technological solution must be fitted to its purpose and time. Technology matters because we cannot avoid the expenditure of time or energy (entropy) in the processing of information. We can perform these operations using a large amount of energy in the form of time and effort, or we can conserve time and effort by substituting the power of computing and information processing. While we will never get to the point where we completely eliminate entropy, our application of appropriate technology makes it seem as if effort in the form of time is significantly reduced. It’s not quite money for nothing, but it’s as close as we can come and is an obvious area of improvement that can be made for a relatively small investment.

Synergy — The Economics of Integrated Project Management

The hot topic lately in meetings and the odd conference on Integrated Project Management (IPM) often focuses on the mechanics of achieving that state, bound by the implied definition of current regulation, which has also become, not surprisingly, practice. I think this is a laudable goal, particularly given both the casual resistance to change (which is always there, by definition, to some extent) and, in the most extreme cases, a kind of apathy.

I addressed the latter condition in my last post by an appeal to professionalism, particularly on the part of those in public administration. But there is a more elemental issue here than the concerns of project analysts, systems engineers, and the associated information managers. While this level of expertise is essential in the development of innovation, relying too heavily on this level of the organization creates an internal conflict and the risk that the innovation is transient and rests on a slender thread. Association with any one manager also leaves innovation vulnerable to the “not invented here” tack taken by many new managers in viewing the initiatives of a predecessor. In business this (usually self-defeating) approach becomes more extreme the higher one goes in the chain of command (the recent Sears business model, anyone?).

The key, of course, is to engage senior managers and project/program managers in participating in the development of this important part of business intelligence. A few suggestions on how to do this follow, but the bottom line is this: money and economics make the implementation of IPM an essential component of business intelligence.

Data, Information, and Intelligence – Analysis vs. Reporting

Many years ago using manual techniques, I was employed in activities that required that I seek and document data from disparate sources, seemingly unconnected, and find the appropriate connections. The initial connection was made with a key. It could be a key word, topic, individual, technology, or government. The key, however, wasn’t the end of the process. The validity of the relationship needed to be verified as more than mere coincidence. This is a process well known in the community specializing in such processes, and two good sources to understand how this was done can be found here and here.

It is a well-trod path to distinguish between the elements that eventually make up intelligence, so I will not abuse the reader by going over it. Needless to say, a bit of data is the smallest element of the process, with information following. For project management, what is often (mis)tagged as predictive analytics and analysis is really merely information. Thus, when project managers and decision makers look at the various charts and graphs employed by their analysts they are usually greeted with a collective yawn. Raw projections of cost variance, cost to complete, schedule variance, schedule slippage, baseline execution, Monte Carlo risk, etc. are all building blocks to employing business intelligence. But in and of themselves they are not intelligence, because these indicators require analysis, weighting, logic testing, and, in the end, an assessment that is directly tied to the purpose of the organization.

The role and application of digitization is to make what was labor intensive less so. In most cases this allows us to apply digital technology to its strength–calculation and processing of large amounts of data to create information. Furthermore, digitization now allows for effective lateral integration among datasets given a common key, even if there are multiple keys that act in a chain from dataset to dataset.
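A minimal sketch of that lateral integration follows, chaining keys across three invented datasets (task to control account to contract); the tables, keys, and values are illustrative only.

```python
import pandas as pd

# Lateral integration through a chain of keys: schedule -> control account -> contract.
schedule = pd.DataFrame([{"task_id": "T-100", "control_account": "1.1.1",
                          "finish_variance_days": 12}])
cost = pd.DataFrame([{"control_account": "1.1.1", "contract_id": "C-55",
                      "cost_variance": -50_000.0}])
contracts = pd.DataFrame([{"contract_id": "C-55", "type": "CPFF",
                           "customer": "Program Office X"}])

# Each merge uses the key shared by the adjacent datasets, forming the chain.
integrated = (schedule
              .merge(cost, on="control_account")
              .merge(contracts, on="contract_id"))
print(integrated)
```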

At the end of the line what we are left with is a strong correlation of data integrated across a number of domains that contribute to a picture of how an effort is performing. Still, even given the most powerful heuristics, a person, the consumer, must validate the data to determine if the results possess validity and fidelity. For project management this process is not as challenging as, say, someone using raw social networking data. Project management data, since it is derived from underlying systems that in their processing mimic highly structured processes and procedures, tends to be “small,” even when it can be considered Big Data from the sheer perspective of size. It is small Big Data.

Once data has been accumulated, however, it must be assessed so as to ensure that the parts cohere. This is done by assessing the significance and materiality of those parts. Once this is accomplished, the overall assessment must then be constructed so that it follows logically from the data. That is what constitutes “actionable intelligence”: analysis of present condition, projected probable outcomes, and recommended actions with alternatives. The elements of this analysis (charts, graphs, etc.) are essential in reporting, but reporting these indices is not the purpose of the process. The added value of an analyst lies in the expertise one possesses. Without this dimension a machine could do the work. The takeaway from this point, however, isn’t to substitute the work with software. It is to develop analytical expertise.

What is Integrated Project Management?

In my last post I summed up what IPM is, but some elaboration and refinement is necessary.

I propose that Integrated Project Management is defined as that information necessary to derive actionable intelligence from all of the relevant cross-domain information involved in the project organization. This includes cost performance, schedule performance, financial performance and execution, contract implementation, milestone achievement, resource management, and technical performance. Actionable intelligence in this context, as indicated above, is that information that is relevant to the project decision-making authority which effectively identifies specific probable qualitative and quantitative risks, risk impact, and risk handling necessary to make project trade-offs, project re-baselining or re-scope, cost-as-an-independent variable (CAIV), or project cancellation decisions. Underlying all of this are feedback loop systems assessments to ensure that there is integrity and fidelity in our business systems–both human and digital.

The data upon which IPM is derived comes from a finite number of sources. Thus, project management data lends itself to solutions that break down proprietary syntax and terminology. This is really the key to achieving IPM, and one that has garnered some discussion with other IT professionals about the process of data normalization and rationalization. The path can be a long one: using APIs to perform data-mining directly against existing tables or against a data repository (or warehouse or lake), or pre-normalizing the data in a schema (given both the finite nature of the data and the finite, and structured, elements of the processes being documented in data).

Achieving normalization and rationalization in this case is not a notional discussion; in my vocation I provide solutions that achieve this goal. In order to do so one must expand one’s notion of the architecture of the appropriate software solution. The mindset of “tools” is at the core of what tends to hold back progress in integration; that is, the concept of a “tool” is one that is really based on an archaic approach to computing. It assumes that a particular piece of software must limit itself to performing limited operations focused on a particular domain. In business this is known as sub-optimization.

Oftentimes this view is supported by the organization itself where the project management team is widely dispersed and domains hoard information. The rice bowl mentality has long been a bane of organizational effectiveness. Organizations have long attempted to break through these barriers using various techniques: cross-domain teams, integrated product teams, and others.

No doubt some operations of a business must be firewalled in such a way. The financial management of the enterprise comes to mind. But when it comes to business operations, the tools and rice bowl mindset is a self-limiting one. This is why many in IT push the concept of a solution–and the analogue is this: a tool can perform a particular operation (turn a screw, hammer a nail, crimp a wire, etc.); a solution achieves a goal of the system that consists of a series of operations, which are often complex (build the wall, install the wiring, etc.). Software can be a tool or a solution. Software built as a solution contains the elements of many tools.

Given a solution that supports IPM, a pathway is put in place that facilitates breaking down the barriers that currently block effective communication between and within project teams.

The necessity of IPM

An oft-cited aphorism in business is that purpose drives profit. For those in public administration, purpose drives success. What this means is that in order to become successful in any endeavor the organization must define itself. It is the nature of the project, a planned set of interrelated tasks separately organized and financed from the larger enterprise, given a finite time and budget specifically to achieve a goal of research, development, production, or end state, that defines an organization’s purpose: building aircraft, dams, ships, software, roads, bridges, etc.

A small business is not so different from a project organization in a larger enterprise. Small events can have oversized effects. What this means in very real terms is that the core rules of economics will come to bear with great weight on the activities of project management. In the world in which we operate, the economics underlying both enterprises and projects punishes inefficiency. Software “tools” that support sub-optimization are inefficient and the organizations that employ them bear unnecessary risk.

The information and technology sectors have changed what is considered to be inefficient in terms of economics. At its core, information technology has changed the way we view and leverage information. Back in 1997 the economists Brad DeLong and Michael Froomkin identified the nature of information and its impact on economics. Their concepts and observations have had incredible staying power if, for no other reason, because what they predicted has come to pass. The economic elements of excludability, rivalry, and transparency have transformed how the enterprise achieves optimization.

An enterprise that is willfully ignorant of its condition is one that is at risk. Given that many projects will determine the success of the enterprise, a project that is willfully ignorant of its condition threatens the financial health and purpose of the larger organization. Businesses and public sector agencies can no longer afford not to have cohesive and actionable intelligence built on all of the elements that contribute to determining that condition. In this way IPM becomes not only essential but its deployment necessary.

In the end the reason for doing this comes down to profit on the one hand, and success on the other. Given the increasing transparency of information and the continued existence of rivalry, the trend in the economy will be to reward those that harness the potentials for information integration that have real consequences in the management of the enterprise, and to punish those who do not.

All Along the Watch Tower — Project Monitoring vs. Project Management

My two month summer blogging hiatus has come to a close. Along the way I have gathered a good bit of practical knowledge related to introducing and implementing process and technological improvements into complex project management environments. More specifically, my experience is in introducing new adaptive technologies that support the integration of essential data across the project environment–integrated project management in short–and do so by focusing on knowledge discovery in databases (KDD).

An issue that arose during these various opportunities reminded me of the commercial where a group of armed bank robbers enter a bank and have everyone lay on the floor. One of the victims whispers to a uniformed security officer, “Hey, do something!” The security officer replies, “Oh, I’m not a security guard, I’m a security monitor. I only notify people if there is a robbery.” He looks to the robbers who have a hostage and then turns back to the victim and says calmly, “There’s a robbery.”

We oftentimes face the same issues in providing project management solutions. New technologies have expanded the depth and breadth of information that is available to project management professionals. Oftentimes the implementation of these solutions gets to the heart of whether people consider themselves project managers or project monitors.

Technology, Information, and Cognitive Dissonance

This perceptual conflict oftentimes plays itself out in resistance to change in automated systems. In today’s world the question of acceptance is a bit different than when I first introduced automated solutions into organizations more than 30 years ago. At that time, the first modern wave of digitization focused on simply automating previously manual functions that supported existing line-and-staff organizations. Software solutions were constructed to fit into the architecture of the social or business systems being served, regardless of whether those systems were inefficient or sub-optimal.

The challenge is a bit different today. Oftentimes new technology is paired with process changes that will transform an organization–and quite often it is used as the leading edge in that initiative. The impact on work is transformative, shifting the way that both the job and the system itself are perceived in light of the new information.

Leon Festinger, in his work A Theory of Cognitive Dissonance (1957), stated that people seek psychological consistency in order to function in the real world. When faced with information or a situation that contradicts that consistency, individuals experience psychological discomfort. The individual can then adapt to the new condition by accepting the change, by adding rationalizations to connect their present perceptions to the change, or by challenging the change–attacking its validity, rejecting its conclusions, or avoiding it altogether.

The most problematic of these reactions to encounter in IT project management are the last two. When I have introduced a new technology paired with process change, the resistance has usually been justified by refrains such as:

a. The new solution is too hard to understand;

b. The new solution is too detailed;

c. The new solution is too different from the incumbent technology;

d. The solution is unrelated to “my job of printing out one PowerPoint chart”;

e. “Why can’t I just continue to use my own Excel workbooks/Access database/solution”;

f. “Earned value/schedule/risk management/(add PM methodology here) doesn’t tell me what I don’t already know/looks in the rear view mirror/doesn’t add enough value/is too expensive/etc.”

For someone new to this kind of process the objections often seem daunting. But some perspective always helps. To date, I have introduced and implemented three waves of technology over the course of my career and all initially encountered resistance, only to eventually be embraced. In a paradoxical twist (some would call it divine justice, karma, or universal irony), oftentimes the previous technology I championed, which sits as the incumbent, is used as a defense against the latest innovation.

A reasonable and diligent person involved in the implementation of any technology (which, after all, is itself project management) must learn to monitor conditions to determine whether there is good reason for the resistance, or whether it is a typical reaction to relatively rapid change in a traditionally static environment. The point, of course, is not only to meet organizational needs, but to achieve a high level of acceptance in software deployment–thus maximizing ROI for the organization and improving organizational effectiveness.

If process improvement is involved, an effective pairing and coordination with stakeholders is important. But such objections, while oftentimes a reaction to people receiving information they prefer not to have, are ignored at one’s own peril. This is where such change processes require both an analytical and leadership-based approach.

Technology and Cultural Change – Spock vs. Kirk

In looking at resistance one must determine whether the issue is one of technology or one of culture and management. Testing the intuitiveness of the UI, for example, is best accomplished by beta testing among SMEs. Clock speeds, latency, reliability, accuracy and fidelity in data, and other technological characteristics are easily measured and documented. This is the Mr. Spock side of the equation, where, in an ideal world, rationality and logic should lead one to success. Once these processes are successfully completed, however, the job is still not done.

Every successful deployment still contains within it pockets of resistance. This is the emotional part of technological innovation that oftentimes is either ignored or that managers hope to paper or plow over, usually to their sorrow. It is here that we need to focus our attention. This is the Captain Kirk part of the equation.

The most vulnerable portion of an IT project deployment happens within the initial period of inception. Rolling wave implementations that achieve quick success will often find that there is more resistance over time as each new portion of the organization is brought into the fold. There are many reasons for this.

New personnel may be going by what they observed from the initial embrace of the technology and may not like the results. Perhaps buy-in was not obtained from the next group prior to their inclusion, or senior management is not fully on board. Perhaps there is a perceived or real fear of job loss, or a job transformation that was not socialized in advance. It is possible that the implementation focused too heavily on the needs of the initial group of personnel brought under the new technology, which caused the technology to lag in addressing the needs of the next wave. It could also be that the technology is sufficiently different as to represent a “culture shock”, which causes an immediate defensive reaction. If there are outsourced positions, the subcontractor may feel that its interests are threatened by the introduction of the technology. Some SMEs, having created “irreplaceable asset” barriers, may feel that their position would be eroded if they had to share expertise and information with other areas of the organization. Lower-level employees may fear that management will have unfettered access to information prior to vetting. The technology may have been oversold as a panacea, rather than as a means of addressing organizational or information management deficiencies. All of these reasons, and others, are motivations to explore.

There is an extensive literature on the ways to address the concerns listed above, and others. Good examples can be found here and here.

Adaptive COTS or Business Intelligence technologies, as well as rapid response teams based on Agile, go a long way in addressing and handling barriers to acceptance on the technology side. But additional efforts at socialization and senior management buy-in are essential and will be the difference maker. No amount of argumentation will persuade people otherwise inclined to defend the status quo, even when the benefits are self-evident. Leadership by information consumers–both internal and external–as well as decision-makers will win the day.

Process and Technology – Integrated Project Management and Big(ger) Data

The first wave of automation digitized simple manual efforts (word processing, charts, graphs). This resulted in an incremental increase in productivity but, more importantly, it shifted work so that administrative overhead was eliminated. There are no secretarial pools or positions as there were when I first entered the workforce.

The second and succeeding waves tackled transactional systems based on line-and-staff organizational structures and work definitions. Thus, in project management, EVM systems were designed for cost analysts, scheduling apps for planners and schedulers, risk analysis software for systems engineers, and so on.

All of these waves had a focus on functionality of hard-coded software solutions. The software determined what data was important and what information could be processed from it.

The new paradigm shift is a focus on data. We see this through the buzz phrase “Big Data”.  But what does that mean? It means that all of the data that the organization or enterprise collects has information value. Deriving that information value, and then determining its relevance and whether it provides actionable intelligence, is of importance to the organization.

Thus, implementations of data-focused solutions represent not only a shift in the way that work is performed, but also how information is used, and how the health and performance of the organization is assessed. Horizontal information integration across domains provides insights that were not apparent in the past when data was served to satisfy the needs of specialized domains and SMEs. New vulnerabilities and risks are uncovered through integration. This is particularly clear when implementing integrated project management (IPM) solutions.
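To make the idea of horizontal integration concrete, the sketch below joins cost, schedule, and risk records held in separate domain silos on a common work breakdown structure (WBS) element. It is a minimal, hypothetical illustration: the field names, WBS keys, sample values, and the simple flagging rule are my own assumptions, not any particular product's schema.

```python
# Minimal, hypothetical sketch of horizontal data integration across project
# domains. All field names, WBS keys, sample values, and the flagging rule
# below are illustrative assumptions, not any particular product's schema.

# Each "silo" represents data traditionally served to one specialized domain.
cost_data = {
    "1.2.3": {"budgeted_cost": 120_000, "actual_cost": 150_000},
    "1.2.4": {"budgeted_cost": 80_000, "actual_cost": 78_000},
}
schedule_data = {
    "1.2.3": {"planned_finish": "2024-06-30", "forecast_finish": "2024-09-15"},
    "1.2.4": {"planned_finish": "2024-05-31", "forecast_finish": "2024-05-28"},
}
risk_register = {
    "1.2.3": {"open_risks": 4},
    "1.2.4": {"open_risks": 0},
}

def integrate(wbs_ids):
    """Join the domain silos on the WBS key into one cross-domain record."""
    for wbs in wbs_ids:
        record = {"wbs": wbs}
        record.update(cost_data.get(wbs, {}))
        record.update(schedule_data.get(wbs, {}))
        record.update(risk_register.get(wbs, {}))
        yield record

for row in integrate(cost_data):
    # ISO-formatted dates compare correctly as strings.
    overrun = row["actual_cost"] > row["budgeted_cost"]
    slipping = row["forecast_finish"] > row["planned_finish"]
    at_risk = overrun and slipping and row["open_risks"] > 0
    print(row["wbs"], "cross-domain flag" if at_risk else "ok")
```

The point is not the code itself, but that the overrun, the slip, and the open risks on the same element only become a meaningful combined signal once the silos are joined.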

A pause to provide a definition is in order, especially since IPM is gaining traction, and large, lazy, entrenched incumbents are adjusting their marketing in the hope of muddying the waters enough to fit their square-peg, hard-coded solutions into the round hole of flexible IPM solutions.

Integrated Project Management comprises the processes and the integration of information necessary to derive actionable intelligence from all of the relevant cross-domain information involved in the project organization. This includes cost performance, schedule performance, financial performance and execution, contract implementation, milestone achievement, resource management, and technical performance. Actionable intelligence is information relevant to the project decision-making authority that effectively identifies specific probable qualitative and quantitative risks, risk impact, and the risk handling necessary to make project trade-off, project re-baselining or re-scope, cost-as-an-independent-variable (CAIV), or project cancellation decisions. Underlying all of this are feedback-loop systems assessments to ensure that there is integrity and fidelity in our business systems–both human and digital.
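As a small, hedged illustration of turning integrated data into actionable intelligence, the sketch below computes the standard earned value indices (CPI = EV/AC and SPI = EV/PV) and translates them into a coarse decision signal. The 0.90 threshold, the function names, and the sample figures are illustrative assumptions only; they are not a prescribed IPM rule.

```python
# Illustrative sketch only: deriving a coarse decision signal from earned
# value data. The 0.90 threshold and the sample figures are assumptions for
# illustration, not a prescribed IPM rule.

def evm_indices(pv: float, ev: float, ac: float) -> dict:
    """Standard earned value indices: CPI = EV / AC, SPI = EV / PV."""
    return {"CPI": ev / ac, "SPI": ev / pv}

def assess(pv: float, ev: float, ac: float, threshold: float = 0.90) -> str:
    """Translate the indices into a signal a decision-making authority can act on."""
    indices = evm_indices(pv, ev, ac)
    breaches = [f"{name}={value:.2f}" for name, value in indices.items() if value < threshold]
    if breaches:
        return "Review trade-offs / consider re-baseline: " + ", ".join(breaches)
    return "Within thresholds"

# Example: $1.0M of work planned, $0.8M earned, $1.1M actually spent to date.
print(evm_indices(1_000_000, 800_000, 1_100_000))  # CPI ~0.73, SPI 0.80
print(assess(1_000_000, 800_000, 1_100_000))
```

In a real deployment the thresholds, the underlying data feeds, and the resulting trade-off decisions would be set by the project decision-making authority rather than hard-coded.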

No doubt, we have a ways to go to get to this condition, but organizations are getting there. What it will take is a change in the way leadership views its role, in rewriting traditional project management job descriptions, in cross-domain training and mentoring, and in enforcing, both for ourselves and in others, the dedication to the ethics necessary to do the job.

Practice and Ethics in Project Management within Public Administration

The final aspect of implementations of project management systems that is often overlooked, and which oftentimes frames the environment that we are attempting to transform, concerns ethical behavior in project management. It is an aspect of project success as necessary as any performance metric, and it is one for which leadership within an organization sets the tone.

My own expertise has in most cases concerned project management in the field of public administration, though as a businessman I also have experience in the commercial world. Let’s take public administration first since, I think, it is the most straightforward.

When I wore a uniform as a commissioned Naval officer I realized that, in my position and duties, I was merely an instrument of the U.S. Navy and its constitutional and legal underpinnings. My own interests were separate from, and needed to be firewalled from, the execution of my official duties. When I have observed deficiencies in the behavior of others in similar positions, this is the dichotomy that often fails to be inculcated in the individual.

When enlisted personnel salute a commissioned officer they are not saluting the person, they are saluting and showing respect to the rank and position. The officer must earn respect as an individual. Having risen from the enlisted ranks, these were the aspects of leadership that were driven home to me in observing this dynamic: in order to become a good leader, one must first have been a good follower; you must demonstrate trust and respect to earn trust and respect. One must act ethically.

Officials in other governmental entities–elected officials (especially), judges, and law enforcement–often fail to understand this point and hence fail this very basic rule of public behavior. The law and their position deserve respect. The behavior and actions of the individuals in office will determine whether they personally should be shown respect. If individuals abuse their position or its exercise of discretion, they are not worthy of respect, and the danger is that they will delegitimize and bring discredit to the office or position.

But earning respect is only one aspect of ethical behavior in public administration. It also means that one will make decisions based on the law, ethical principles, and public policy, regardless of whether one personally agrees or disagrees with the resulting conclusion of those criteria. Part of the weight of ethical behavior is that an individual will apply the same criteria whether or not the decision adversely impacts their own personal interests or those of associates, friends, or family.

Finally, in applying this ethical test, one must also accept responsibility and accountability in executing one’s duties. This means being diligent, constantly striving for excellence and improvement, leading by example, and always representing the public interest. Note that ego, personal preference, opinion, bias, self-interest, and other such concerns have no place in the ethical exercise of public administration.

So what does that mean for project management? The answer goes to the heart of whether one views himself or herself as a project manager or project monitor. In public administration the program manager has a unique set of responsibilities tied to the acquisition of technologies that is rarely replicated in private industry. Oftentimes this involves shepherding a complex effort via contractual agreements that involve large specialized businesses–and often a number of subcontractors–across several years of research and development before a final product is ready for production and deployment.

The primary role in this case is to ensure that the effort is making progress toward the goal; to ensure accountability for the funds being expended, which were appropriated for the specific effort by Congress; to ensure that the work intended by those expenditures remains in compliance with the contractual agreements; and to identify and handle risks in order to bring the effort into line with the cost, schedule, and technical baselines, all the while staying within the program’s framing assumptions. In addition, the program manager must coordinate with operational managers who are anticipating the deployment of the end item being developed, manage expectations, and determine how best to plan for sustainability once the effort goes to production and deployment. This is, of course, only a brief summary of the extensive duties involved.

Meeting these responsibilities requires diligence, information that provides actionable intelligence, and a great deal of subject matter expertise. Finding and handling risks, determining if the baseline is executable, maintaining the integrity of the effort–all require leadership and skill. This is known as project management.

Project monitoring, by contrast, is the acceptance of information provided by self-interested parties without verification, the limiting of the consumption and processing of essential project performance information, the demurring to any information of a negative nature regarding project performance or risk, the settling for less than an optimal management environment, and the use of these tactics to, euphemistically, kick the ball down the court to the next project manager in the hope that the impact of the negligence falls on someone else’s watch. Project monitoring is unethical behavior in public administration.

Practice and Ethics in Project Management within Private Industry

The focus in private industry is a bit different, since self-interest abounds and is rewarded. But there are ethical rules that apply here as well, and a business person in project management would be well-served to follow them.

The responsibility of the executives or officers in a business is to uphold the interests of the enterprise’s customers, its employees, and its shareholders. Oftentimes business owners will place unequal weight on these interests, but the best businesses view these responsibilities as being in fine balance.

For example, aside from the legal issues, ethics demands that a commitment to provide supplies and services carries with it a host of obligations–honest representation, warranty, and a commitment to deliver what was promised. For employees, it demands honoring the commitments made regarding the conditions of employment and rewarding them appropriately for their contribution to the enterprise. For stockholders, it demands conducting the business in such a way as to avoid placing its fiduciary position and its ability to act as a going concern in avoidable danger.

For project managers, the responsibility within these ethical constraints is to honestly assess and communicate to the enterprise’s officers the project’s performance and whether the effort will achieve the desired qualitative results within budgetary and time constraints, and, from a private industry perspective, to handle most of the issues articulated for the project manager in the section on public administration above. The customer is different in this scenario, oftentimes internal, especially if one sets aside companies that serve the project management verticals in public administration. Oftentimes the issues and supporting systems are less complex because the scale is, on the whole, smaller.

There are exceptions, of course, to the issue of scale. Some construction, shipbuilding, and energy projects approach the complexity of public sector programs. SpaceX and similar efforts are other examples. But the focus there is financial from the perspective of the profit motive–not from the perspective of meeting the goals of some public interest involving health, safety, or welfare–and so the measures will be different, though the need for accountability and diligence is no less urgent. In many ways such behavior is more urgent, given that failure may result in the failure of the entire enterprise.

Yet, the basic issue is the same: are you a project manager or a project monitor? Diligence, leadership, and ethical behavior (which is essential to leadership) are the keys. Project monitoring most often results in failure, and with good reason. It is a failure of both practice and ethics.