Don’t Stop Thinking About Tomorrow–Post-Workshop Blogging…and some Low Comedy

It’s been a while since I posted to my blog due to meetings and–well–my day job, but some things occurred during the latest meeting of the Integrated Program Management Division (IPMD) of the National Defense Industrial Association (NDIA) that I think are of interest. (You have to love acronyms to be part of this community.)

Program Management and Integrated Program Management

First off is the initiative by the Program Management Working Group to gain greater participation by program managers, with an eye toward more clearly defining what constitutes integrated program management. As readers of this blog know, this is a topic that I’ve recently written about.

The Systems Engineering Division is holding its 21st Annual Systems Engineering Conference in Tampa this year, from October 22nd through the 25th. The IPMD will collaborate, contributing a track dedicated to program management. The organizations have issued a call for papers and topics of interest. (Full disclosure: I volunteered this past week to participate as a member of the PM Working Group.)

My interest in this topic is grounded in years of wide-ranging experience (as a warranted government contracting officer, program manager, business manager, CIO, staff officer, and logistics officer), which convinced me that there is much more to defining IPM than viewing it through the prism of any one discipline. Getting there will require collaboration and cooperation among a number of project management disciplines.

This is a big topic where, I believe, no one group or individual has all of the answers. I’m excited to see where this work goes.

Integrated Digital Environment

Another area of interest that I’ve written about in the past involves two different, but related, initiatives on the part of the Department of Defense to collect information from its suppliers. That information is necessary to the Department's oversight role: to ensure accountability for public expenditures, to assist in project cost and schedule control and risk management, and to support cost estimation, particularly on risk-sharing, cost-type R&D contract efforts.

Two major staffs in the Offices of the Undersecretary of Defense have decided to go with JSON-type schemas: on the one hand for cost estimating data, and on the other for integrated cost performance, schedule, and risk data. Each initiative seeks to replace the existing schemas now in place.

Both have been wrapped around the axle in getting industry to move from form-based reporting and data sharing to a data-agnostic solution that meets the goals of reducing redundancy in data transmission, reducing the number of submissions and data streams, and moving toward one version of the truth, allowing SMEs on both sides of the table to concentrate on data analysis and interpretation while jointly working toward successful project completion and end-item deployment.
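To make the idea of a single, data-agnostic submission concrete, here is a minimal sketch in Python. The field names and structure are entirely invented for illustration and do not represent any actual DoD schema; the point is simply that one JSON document can carry cost, schedule, and risk data together, with both sides of the table consuming the same serialized record.

```python
import json

# A hypothetical, simplified submission record: one document carrying
# cost, schedule, and risk fields together, rather than separate
# form-based reports. All field names here are invented.
submission = {
    "contract_id": "HYPO-0001",
    "reporting_period": "2018-08",
    "cost": {"budgeted": 1200000.0, "actual": 1150000.0},
    "schedule": {"planned_pct_complete": 0.40, "actual_pct_complete": 0.38},
    "risks": [
        {"id": "R-01", "description": "supplier delay", "probability": 0.2},
    ],
}

# Both government and industry analysts consume the same serialized
# document: one transmission, one version of the truth.
payload = json.dumps(submission, indent=2)
record = json.loads(payload)
```

The design point is that the meaning lives in the (agreed) schema rather than in the layout of a form, so the same record can feed any number of downstream analyses without re-keying.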

As with the first item, I am not a disinterested individual in this topic. Back when I wore a uniform I helped to construct DoD policy to create an integrated digital environment. I’ve written about this experience previously in this blog, so I won’t bore with details, but the need for data sharing on cost-type efforts acknowledges the reality of the linkage between our defense economic and industrial base and the art of the possible in deploying defense-related end items. The same relationship exists for civilian federal agencies with the non-defense portion of the U.S. economy. Needless to say, a good many commercial firms unrelated to defense are going the same way.

The issue here is two-fold, I think, based on my conversations with the individuals working these initiatives.

The first is that too much deference is being given to solution providers and to some industry stakeholders, influenced by those providers, who are “working the refs” through the data. Doing so not only slows down the train and protects entrenched interests; it also gets in the way of innovation, allowing the slowest in the group to hold up everyone else in favor of, to put it bluntly, learning their jobs on the job at the expense of efficiency and effectiveness. As I expressed in a side conversation with an industry leader, all too often companies (who, after all, are the customer) have allowed their view of the possible to be bounded by the limitations and inflexibility of their solution providers. At some point that dysfunctional relationship must end, and comments clearly identifiable as working the refs should be ignored. Put your stake in the ground and let innovation and market competition sort it out.

Secondly, cost estimating, which is closely tied to accounting and financial management, is a newer entrant and is considered tangential to other, more mature performance management systems. My own firm is involved in producing a solution in support of this process, collecting the data behind these reports (known collectively in DoD as the 1921 reports) and placing it in a common data lake; even so, we are still exploring with organizations what that data tells us, since we are only now learning its possibilities. This is classic KDD (Knowledge Discovery in Databases) and a worthwhile exercise.

I’ve also advocated going one step further in favor of collecting financial performance data (known as the Contract Funds Status Report, or CFSR), which is an essential reporting requirement, but am frustrated to find no one willing to take ownership of the guidance regarding its data collection. The tragedy here is that cost performance measurement, known broadly as Earned Value Management, is a technique that relates the value of work performed to other financial and project planning measures (a baseline and actuals). But in a business (or any enterprise), the fuel that drives the engine is financial, and two essential measures are margin and cash flow. The CFSR is a report of program cash flow and financial execution. It is an early measure of whether a program will execute its work in any given time frame, and it provides a reality check on the statistical measures of performance against baseline. It is also a necessary logic check for comptrollers and other budget decision-makers.
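For readers unfamiliar with the statistical measures mentioned here, a short worked sketch may help. It uses the standard textbook earned value formulas; the numbers are invented purely for illustration.

```python
# Standard earned value formulas (textbook definitions), applied to
# invented sample figures for illustration only.
BAC = 1000.0   # budget at completion
PV = 400.0     # planned value (baseline) to date
EV = 350.0     # earned value: value of work actually performed to date
AC = 420.0     # actual cost incurred to date

CPI = EV / AC       # cost performance index: < 1.0 means over cost
SPI = EV / PV       # schedule performance index: < 1.0 means behind plan
EAC = BAC / CPI     # a common statistical estimate at completion

# Here CPI is about 0.83 and the statistical EAC is about 1200, i.e. a
# 20% overrun forecast. A cash-flow report like the CFSR is the reality
# check: does forecast funding execution actually support that figure?
```

The point of the passage above is that these statistical measures are necessary but not sufficient; the cash-flow and financial-execution picture is what validates (or contradicts) them.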

Thus, as it relates to data, there has been some push-back against a settled schema, with the government accepting flat files and converting the data to the appropriate format. I see this as an acceptable interim solution, but not an ultimate one. It is essential to collect both cost estimating and contract funds status information to perform any number of operations that relate to “actionable” intelligence: having the right executable money at the right time, a reality check against statistical and predictive measures, value analysis, and measures of ROI in development, just to name a few.

I look forward to continuing this conversation.

To Be or Not to Be Agile

The Section 809 Panel, which is the latest iteration of acquisition reform panels, has recommended that performance management using earned value not be mandated for efforts using Agile. It goes on, however, to assert that the program executive “should approve appropriate project monitoring and control methods, which may include EVM, that provide faith in the quality of data and, at a minimum, track schedule, cost, and estimate at completion.”

Okay…the panel is then mute on what those monitoring and control measures will be. Significantly, if only subtly, the #NoEstimates crowd took a hit, since the panel recommends and specifies data quality, schedule, cost, and EAC. Sounds a lot like a form of EVM to me.

I must admit to being a skeptic when it comes to swallowing the Agile doctrine whole. Its micro-economic foundations are weak, and much of it sounds like ideology: bad ideology at best and disproved ideology at worst (specifically the woo-woo about self-organization; think of the last speculative bubble and the resulting financial crisis and depression along these lines).

When it comes to named methodologies I am somewhat from Missouri. I apply (and have applied, in previous efforts in the Dark Ages back when I wore a uniform) Kanban, teaming, adaptive development (enhanced greatly today by modern low-code technology), and short sprints that result in releasable modules. But keep in mind that these things were out there long before they were grouped under a common heading.

Perhaps Agile is now a convenient catch-all for best practices. But if that is the case, then software development projects using this redefined version of Agile deserve no special dispensation. That said, I was schooled a bit by an Agile program manager during a side conversation, and I am always open to understanding things better and revising my perspectives. It’s just that there was never a Waterfall/Agile dichotomy, just as there never really was a Spiral/Waterfall dichotomy. These were simply convenient development models describing processes geared to the technology of the moment.

There are very good people exploring these issues on the Agile Working Group of the IPMD, and I look forward to seeing what they continue to come up with.

Rip Van Winkle Speaks!

The only disappointing presentation occurred on the second and last day of the meeting. It seemed we were treated to a voice from somewhere around the year 2003 that, in what can only be described as performance art involving free association, talked about wandering the desert; about achieving certification for a piece of software (which virtually all of the software providers in the room have successfully navigated at one time or another); about discovering that cost and schedule performance data can be integrated (ignoring the work of the last ten years by, well, a good many people in the room); about discovering that there is a process known as the Integrated Baseline Review (which, again, a good many people in the room had collaborated to both define and make workable); and, lo and behold, that the software industry uses schemas and APIs to capture data (known in Software Development 101 as ETL). He then topped off his meander with an unethical excursion into product endorsement, the product selected through an opaque process.

For this last, the speaker was either unaware or didn’t care (usually called tone-deafness) that the event’s expenses were sponsored by a software solution provider (not mine). It is also as if the speaker were completely unaware of the work behind the many topics I’ve listed above this subsection, ignoring and undermining the hard work of the other stakeholders who make up our community.

On the whole an entertaining bit of poppycock, which leads me to…

A Word about the Role of Professional Organizations (Somewhat Inside Baseball)

In this blog, and in my interactions with other professionals at–well–professional conferences, I check my self-interest at the door and publicly take a non-commercial stance. It is a position that is expected and, I think, appreciated. For those who follow me on social networks like LinkedIn, posts from my WordPress blog originate from a different source than the commercial announcements from my company that are linked to my page.

If there are exhibitor areas, as some conferences and workshops do have, that is one thing. That’s where we compete and play; and in private side conversations customers and strategic partners will sometimes use the opportunity as a convenience to discuss future plans and specific issues that are clearly business-related. But these are the exceptions to the general rule, and there are a couple of reasons for this, especially at this venue.

One is that, while this is a large market, it is a small community, and virtually everyone at the regular meetings and conferences I attend already knows that I am the CEO and owner of a small software company. But the IPMD is neutral ground. It is a place where government and industry stakeholders, who in other roles and circumstances are in contractual or competing relationships, come together to hash out the processes and procedures that will, we hope, improve the discipline of program and project management. It is also a place of discovery, where policies, new ideas, and technologies can be vetted in an environment of collaboration.

Another reason for taking a neutral stance is simply that it is both the most ethical and the most productive one. Twenty years ago–and even in some of the intervening years–self-serving behavior was acceptable at IPMD meetings, where both leadership and membership used the venue to advance personal agendas or those of their friends, often involving backbiting and character assassination. Some of those people, few in number, still attend these meetings.

I am not unfamiliar with the last, having been a target at one point of a couple of them. At the end of the day, such assertions turned out to be without merit, undermining the credibility of the individuals involved and rightfully calling into question the quality of their character. Such actions cannot help but undermine the credibility, and pollute the atmosphere, of the organization with which they associate as well.

Finally, the companies and organizations that sponsor these meetings–which are not cheap to organize, which I know from having done so in the past–deserve to have the benefit of acknowledgment. It’s just good manners to play nice when someone else is footing the bill–you gotta dance with those that brung you. I know my competitors and respect them (with perhaps one or two exceptions). We even occasionally socialize with each other and continue long-term friendships and friendly associations. Burning bridges is just not my thing.

On the whole, however, the NDIA IPMD meetings–and this one in particular–have been productive and positive, focused on the future and on professional development. That’s where, I think, we as a community need to be and need to stay. I always learn something new and get my dose of reality from a broad-based perspective. For getting us here, the leadership of the organization (and the vast majority of the membership) is to be commended, as are the recent past and current members of the Department of Defense, especially since the formation of the Performance Assessments and Root Cause Analyses (PARCA) office.

In closing, there were other items of note discussed, along with what can only be described as the best pair of keynote addresses that I’ve heard in one meeting. I’ll have more to say about some of the concepts and ideas that were presented there in future posts.

Here It Is–Integrated Project Management and Its Definition

I was recently at a customer site and, while discussing the topic of this post, I noticed a book displayed prominently on the bookshelf behind my colleague entitled “Project Management Using Earned Value” by Gary Humphreys. It is a book that I have on my shelf as well and is required reading by personnel in my company.

I told my colleague: “One of the problems with our ability to define IPM is the conceit embedded in the title of that book behind you.”

My colleague expressed some surprise at my intentionally provocative comment, but he too felt that EVM had taken on a role that was beyond its intent, and so asked for more clarification. Thus, this post is meant to flesh out some of these ideas.

But before I continue, here was my point: while the awkward wording of the title unintentionally creates a syllogism that can be read as suggesting that applying earned value will result in project management–an invalid conclusion based on a specious assumption–there are practitioners who would lend credence to that idea.

Some History in Full Disclosure

Before I begin, some full disclosure is in order. When I was on active duty in the United States Navy I followed my last mentor to the Pentagon; he felt that my perspective on acquisition and program management would be best served on the staff of the Undersecretary of Defense for Acquisition and Technology, a portfolio that subsequently also came to include logistics.

The presence of a uniformed member of the Armed Forces was unusual for that staff at the time (1996). My boss, Dan Czelusniak (a senior SES who was a highly respected leader, program manager, engineer, thought leader, and, for me, mentor), had first brought me onto his staff at the U.S. Navy Naval Air Systems Command in PEO(A) and gave me free rein to largely define my job.

For that assignment I combined previously separate duties: Program Manager of an initiative to develop a methodology for assessing technical performance measurement; Business Manager of the PEO, which led me to earned value management and its integration with other program indicators and systems; developer of a risk assessment system for the programs in the PEO, used to establish a DoD financial management reserve; supporter of the program managers and their financial managers in the budget hearing process; and CIO for the programs, identifying and introducing new information technologies in their support.

While there, I had decided to retire from the service after more than 22 years on active duty, but my superiors felt that I had a few more ideas to contribute to the DoD and did what they could to convince me to stay on a while longer. Having made commitments in my transition, I set my retirement date in the future but agreed to do the obligatory Pentagon tour of duty to cap my career. Dan had moved over to the office of the Undersecretary of Defense for Acquisition and Technology (USD(A&T)) and decided that he wanted me on the OUSD(A&T) staff.

As in PEO(A), Mr. Czelusniak gave me the freedom to define my position with the approval of my immediate superior, Mr. Gary Christle. I chose the title Lead Action Officer, Integrated Program Management. Mr. Christle, also a brilliant public servant and thought leader, widely heralded in the EVM community, asked me with a bemused expression, “What is integrated program management?” I responded: “I don’t know yet, sir, but I intend to find out.” Though I did not have a complete definition, I had the seed of an idea.

My initiatives on the staff began with an exploration of data and information. My thinking along these lines early in my career was influenced by a book entitled “Logistics in the National Defense” by retired Admiral Henry E. Eccles, written in 1959. It is a work that still resonates today and established the important concept that “logistics serves as the bridge between a nation’s economy and its forces and defines the operational reach of the joint force commander.” The U.S. Army site referenced for this quote calls him the Clausewitz of logistics.

Furthermore, my work as Program Manager of the Technical Performance Management project, and earlier assignments as program manager of IT and IM projects, provided me with insights into the interrelationships among essential data being collected as a matter of course in R&D efforts, data that would provide the basis for a definition of IPM.

In concluding my career on the OSD staff, I produced two main products, among others: a methodology for integrating technical performance risk into project management performance, and the groundwork for what became the DoD-wide policy for an Integrated Digital Environment (IDE). This last initiative was produced with significant contributions from the staff of the Deputy Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I), as well as additional work by my colleague on the A&T staff, Reed White.

Products from IDE included the adoption of the ANSI X12 839 transaction set. Its successors, such as the DCARC UN/CEFACT XML and other similar initiatives, are based on that same concept and policy, though, removed by many years, the individuals involved may be only vaguely aware of that early policy–or of the controversy that had to be overcome in its publication, given how commonsense it appears from today’s perspective.

The Present State

Currently there are at least four professional organizations that have attempted to tackle the issue of integrated program and project management. These are the Project Management Institute, NDIA’s Integrated Program Management Division, the College of Performance Management, and AACE International (formerly the American Association of Cost Engineers). There are also other groups focused on systems engineering, contracting, and cost estimating that contribute to the literature.

PMI is an expansive organization and, oftentimes, its focus is on the aspirational goals of those who wish to obtain a credential in the discipline. The other groups tend to emphasize their roots in earned value management or cost engineering as the basis for a definition of IPM. The frustration of many professionals in the A&D and DoD world is that the essential input and participation of the program manager, needed to define the data that constitutes IPM and to bridge the seas of separation between islands of data and expertise, is missing.

Things didn’t used to be this way.

When I served at NAVAIR and the Pentagon, the jointly sponsored fall Integrated Program Management Conference held in Tysons Corner, Virginia, would draw more than 600 attendees. Entire contingents from the military systems commands and program offices–as well as from U.S. allied countries–would attend, lending the conference a synergy and forward-looking environment not found in other venues. Industries outside of aerospace and defense would also send representatives and contribute to the literature.

As anyone engaged in a scientific or engineering effort can attest, sharing expertise and perspectives among other like professionals from both industry and government is essential to developing a vital, professional, and up-to-date community of knowledge.

During the intervening years, public overreaction and the resulting political reaction to a few isolated embarrassing incidents at other professional conferences, along with constraints on travel and training budgets, have contributed to a noticeable drop in attendance at these essential venues. But I think there is also an internal contributing factor within the organizations themselves: each views itself and its discipline as the nexus of IPM. Thus, to PMI, a collection of KPIs is the definition of IPM; to CPM and the NDIA IPMD, earned value management is the link to IPM; and to AACEI, Total Cost Management is the basis for IPM.

They cannot all be correct, and none possesses an overwhelming claim.

The present state finds members of each of these groups–all valuable subject matter experts, leaders, and managers in their areas of concentration–essentially talking to themselves and each other, insulated in a bubble. There is little challenge in convincing another EVM SME that EVM is the basis for the integration of other disciplines. What is not being done is making a convincing case to program managers based on the merits.

A Modest Recommendation

Subsequent to the customer meeting that sent me, once again, to contemplate IPM, Gordon Kranz, President of Enlightened Integrated Program Management LLC, posted the following question to LinkedIn:

Integrated Program Management – What is it?  Systems Engineering? Earned Value Management? Agile Development? Lean? Quality? Logistics? Building Information Modeling? …

He then goes on to list some basic approaches that may lead to answering that question. Still, the question remains.

Mr. Kranz was the Deputy Director for Earned Value Management policy at the Office of the Secretary of Defense from 2011 to 2015. During his term in that position I witnessed more innovation and improved relations between government and industry, resulting in process improvements in accountability and transparency, than I had seen come out of that office over the previous ten years. He brings with him a wealth of knowledge concerning program management from both government and private industry. Now a private consultant, Gordon has posed a question that goes to the heart of the debate, addressing the competing claims of those who would be the nexus of IPM.

So what is Integrated Project or Program Management? Am I any closer to answering that question than when Gary Christle first asked it of me over twenty years ago?

I think so, but I abstain from answering it, because in the end it is the community of program management, in its respective verticals, that must ultimately answer it. Only the participation and perspectives of practicing program managers and corporate management will determine the definition of IPM and the elements that underlie it. Self-interested software publishers, of which I am one, cannot be allowed to define and frame the definition, as tempting as that is.

These elements must be specific and must address the most recent misunderstandings that have arisen in the PM discipline, such as the notion that there is a dichotomy between EVM and Agile–a subject fit for a different blog post.

So here is my modest recommendation: that the leaders of the program management community from the acquisition organizations in both industry and government–where the real power to make decisions resides, and where the discussions that sparked this blog post began–find a sponsor for an IPM workshop that addresses this topic, with the goal of answering the core question. Make no mistake: despite my deference in this post, I intend to be part of the conversation in defining the term. But, in my opinion, no one individual or small group of specialized SMEs is qualified to do so alone.

Furthermore, doing so, I believe, is essential to the very survival of these areas of expertise, particularly given our ability to deploy more powerful information systems that process larger sets of data. The paradox is that more powerful processing of bigger data yields a level of precision that reveals the need for fewer, not more, predictive indicators and for less isolated, line-and-staff specialized expertise. Discovery-driven project management is here today, bridging islands of data and providing intelligence in new and better ways that allow for a more systemic approach to project management.

Thus, in this context, a robust definition of Integrated Project Management is an essential undertaking for the discipline.

(Data) Transformation–Fear and Loathing over ETL in Project Management

ETL stands for extract, transform, and load. This essential step is the basis for all of the new capabilities that we wish to acquire during the next wave of information technology: business analytics, big(ger) data, and interdisciplinary insight into processes that points the way to improved productivity and efficiency.
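As a minimal sketch of what those three steps look like in practice, consider the following Python fragment. The source layout, field names, and target table are all invented for illustration; real pipelines differ mainly in scale, not in shape.

```python
import csv
import io
import sqlite3

# Extract: read rows from a source system's export. Here a small CSV
# is simulated in memory; the layout is invented for illustration.
raw = io.StringIO("TaskID,PctComplete\nT-1,40\nT-2,85\n")
rows = list(csv.DictReader(raw))

# Transform: normalize field names and convert types into the common
# shape the target database expects (fractions rather than percents).
records = [(r["TaskID"], float(r["PctComplete"]) / 100.0) for r in rows]

# Load: store the normalized records in a database for consumption.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (task_id TEXT, pct_complete REAL)")
db.executemany("INSERT INTO tasks VALUES (?, ?)", records)

loaded = db.execute(
    "SELECT task_id, pct_complete FROM tasks ORDER BY task_id"
).fetchall()
```

Nothing here is exotic; the value, as discussed below, lies in doing this systematically so the rediscovery of each source's quirks happens once rather than with every submission.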

I’ve been dealing with a good deal of fear and loathing regarding the introduction of this concept, even though in my day job my organization is a leading practitioner in the field in its vertical. Some of this is due to disinformation by competitors playing upon the fears of the non-technically minded–the expected reaction of those who can’t do, in the last throes of avoiding irrelevance. Better to baffle them with bullshit than with brilliance, I guess.

But, more importantly, part of this is due to the state of ETL and how it is communicated to the project management and business community at large. There is a great deal to be gained here by muddying the waters even by those who know better and have the technology. So let’s begin by clearing things up and making this entire field a bit more coherent.

Let’s start with the basics. Any organization built on the interaction of people is a system. For the purposes of a project management team, a business enterprise, or a governmental body, we deal with a special class of system known as a Complex Adaptive System: CAS for short. A CAS is a non-linear learning system that reacts and adapts to its environment. It is complex because of the interrelationships and interactions of more than two agents in any particular portion of the system.

I was first introduced to the concept of CAS through readings published out of the Santa Fe Institute in New Mexico. Most noteworthy is the work The Quark and the Jaguar by the physicist Murray Gell-Mann. Gell-Mann received the Nobel Prize in Physics in 1969 for his work on elementary particles, such as the quark, and is a co-founder of the Institute. He was also part of the team that first developed simulated Monte Carlo analysis during a period he spent at RAND Corporation. Anyone interested in the basic science of quanta and how the universe works, and in the insights that science yields into subjects such as day-to-day probability and risk, should read this book. It is a good popular scientific work written by a brilliant mind, and very relevant to the subjects we deal with in project management and information science.

Understanding that our organizations are CAS allows us to apply all sorts of tools to better understand them and their relationship to the world at large. From a more practical perspective: what are the risks involved in the enterprise in which we are engaged, and what are the probabilities associated with the range of outcomes that we can label success? For my purposes, the science of information theory is at the forefront of these tools. In this field an engineer by the name of Claude Shannon, working at Bell Labs, essentially invented the mathematical basis for everything that followed in the world of telecommunications–generating, transmitting, receiving, and interpreting intelligence in communication, and the methods of processing information. Needless to say, computing is the main beneficiary of this theory.

Thus, all CAS process and react to information. The challenge for any entity that needs to survive and adapt in a continually changing universe is to ensure that the information being received is of high and relevant quality, so that the appropriate adaptation can occur. There will be noise in the signals that we receive. What we are looking for, from a practical perspective in information science, are the regularities in the data, so that we can bridge the mathematical sense in which a message is received (the transmitted message arrives intact) and the definition of information quality that we find in the humanities. I believe that we will find that mathematical link eventually, but there is still a void there. A good discussion of this difference can be found here in the on-line publication Double Dialogues.
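The notion of regularity versus noise can be made concrete with Shannon's entropy measure, which quantifies the average information content of a message in bits per symbol. A small sketch (the example strings are, of course, invented):

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Shannon's H: average information content in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A perfectly regular signal carries no information per symbol;
# a maximally varied one of eight distinct symbols carries 3 bits each.
low = shannon_entropy("aaaaaaaa")
high = shannon_entropy("abcdefgh")
```

This is the mathematical side of the bridge the passage above describes: entropy tells us how much information a signal carries, but not whether that information is relevant to the organization's purpose.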

Regardless of this gap, those of us who engage in the business of ETL must bring to the table the ability not only to ensure that the regularities in the information are identified and transmitted to the intended (or necessary) users, but also to judge the quality of the message in terms of the purpose of the organization. Shannon’s equation is where we start, not where we end. Given this background, there are really two basic types of data that we encounter when we look at a data set: structured and unstructured data.

Structured data are those whose qualitative information content is predefined, either by their nature or by a tag of some sort. For example, schedule planning and performance data, regardless of the idiosyncratic/proprietary syntax used by a software publisher, describe the same phenomena regardless of the software application. There are only so many ways to identify snow–and, no, the Inuit people do not have 100 words to describe it. Qualifiers apply in the humanities, but usually our business processes align more closely with statistical and arithmetic measures. As a result, structured data is oftentimes defined by its position in a hierarchical, time-phased, or interrelated system that contains a series of markers, indexes, and tables allowing it to be interpreted easily once a Rosetta stone is identified, even when the system, at first blush, appears to be opaque. When you go to a book, its title describes what it is. If its content has a table of contents and/or an index, it is easy to find the information needed to perform the task at hand.

Unstructured data consists of the content of things like letters, e-mails, presentations, and other forms of data disconnected from its source systems and collected together in a flat repository. In this case the data must be mined to recreate what is not there: the title that describes the type of data, a table of contents, and an index.

All data requires initial scrubbing and pre-processing. The difference here is the means used to perform this operation. Let’s take the easy path first.

For project management–and most business systems–we most often encounter structured data. This means that, by understanding and interpreting standard industry terminology, schemas, and APIs, the process of aligning data to be transformed and stored in a database for consumption can be reduced to a systemic and repeatable process, without the redundancy of rediscovery applied in every instance. Our business intelligence and business analytics systems can be further developed to anticipate a probable question from a user so that the query is pre-structured to allow for near immediate response. Further, structuring the user interface in such a way as to make the response to the query meaningful, especially when integrated with and juxtaposed against other types of data, requires subject matter expertise to be incorporated into the solution.
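As a sketch of what this alignment looks like in practice (the vendor field names, canonical schema, and sample values below are invented for illustration): each publisher’s idiosyncratic syntax is mapped once to a common schema, after which loading and querying become repeatable, and a probable user question can be pre-structured as a ready-made query.

```python
import sqlite3

# Hypothetical field mappings: two publishers name the same phenomena
# differently; the map is defined once and reused for every transfer.
VENDOR_MAPS = {
    "vendor_a": {"TaskID": "task_id", "BLStart": "baseline_start", "BLFinish": "baseline_finish"},
    "vendor_b": {"ID": "task_id", "Start_BL": "baseline_start", "Fin_BL": "baseline_finish"},
}

def normalize(row, vendor):
    """Align a vendor-specific record to the canonical schema."""
    return {canon: row[native] for native, canon in VENDOR_MAPS[vendor].items()}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (task_id TEXT, baseline_start TEXT, baseline_finish TEXT)")

rows = [
    ({"TaskID": "A10", "BLStart": "2018-01-02", "BLFinish": "2018-02-01"}, "vendor_a"),
    ({"ID": "B20", "Start_BL": "2018-01-15", "Fin_BL": "2018-03-01"}, "vendor_b"),
]
for row, vendor in rows:
    conn.execute("INSERT INTO tasks VALUES (:task_id, :baseline_start, :baseline_finish)",
                 normalize(row, vendor))

# A pre-structured query anticipating a probable user question:
# "Which tasks have a baseline finish after mid-February?"
count = conn.execute(
    "SELECT COUNT(*) FROM tasks WHERE baseline_finish > '2018-02-15'").fetchone()[0]
print(count)
```

The point is not the toy data but the shape of the process: the mapping and the query are written once, and every subsequent data transfer inherits them.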

Structured ETL is the place that I most often inhabit as a provider of software solutions. These processes are both economical and relatively fast, particularly in those cases where they are applied to an otherwise inefficient system of best-of-breed applications that require data transfers and cross-validation prior to official reporting. Time, money, and effort are all saved by automating this process, improving not only processing time but also data accuracy and transparency.

In the case of unstructured data, however, the process can be a bit more complicated and there are many ways to skin this cat. The key here is that oftentimes what seems to be unstructured data is only so because of the lack of domain knowledge by the software publisher in its target vertical.

For example, I recently read a white paper published by a large BI/BA publisher regarding their approach to financial and accounting systems. My own experience as a business manager and Navy Supply Corps Officer provides me with the understanding that these systems are highly structured and regulated. Yet this business intelligence publisher treated the data with an unstructured approach to mining it–one blatantly advertised and apparently sold as state of the art.

This approach, which was first developed back in the 1980s when we first encountered the challenge of data that exceeded our expertise at the time, requires a team of data scientists and coders to go through the labor- and time-consuming process of pre-processing and building specialized processes. The most basic form of this approach involves techniques such as frequency analysis, summarization, correlation, and data scrubbing. This last portion also involves labor-intensive techniques at the microeconomic level such as binning and other forms of manipulation.
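A minimal sketch of two of these basic techniques, using an invented toy corpus (the documents, tokens, and bin thresholds are illustrative assumptions, not a production method):

```python
from collections import Counter
import re

# Toy corpus standing in for unstructured text pulled from a flat repository.
docs = [
    "Invoice 1001 overdue; contact vendor about invoice terms",
    "Schedule slip on task 1010; invoice hold until baseline review",
    "Baseline review complete; schedule recovered",
]

# Frequency analysis: surface the regularities in the signal.
tokens = re.findall(r"[a-z]+", " ".join(docs).lower())
freq = Counter(tokens)

# Binning: collapse raw numeric values into coarse categories as part
# of scrubbing (the dollar thresholds here are arbitrary).
def bin_amount(amount):
    if amount < 1_000:
        return "small"
    if amount < 100_000:
        return "medium"
    return "large"

amounts = [250, 42_000, 1_750_000]
print(freq.most_common(3))
print([bin_amount(a) for a in amounts])
```

Even this trivial version hints at why the approach is labor-intensive at scale: every corpus needs its own tokenization, scrubbing, and binning decisions, each of which must be justified and maintained.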

This is where the fear and loathing comes into play. It is not that information systems fail to perform these functions in some manner; it is that with structured data all of this work has already been done and, oftentimes, is handled by the database system. But even here there is a better way.

My colleague, Dave Gordon, who has his own blog, will emphasize that the identification of probable questions and configuration of queries in advance combined with the application of standard APIs will garner good results in most cases. Yet, one must be prepared to receive a certain amount of irrelevant information. For example, the query on Google of “Fun Things To Do” that you may use if you are planning for a weekend will yield all sorts of results, such as “50 Fun Things to Do in an Elevator.”  This result includes making farting sounds. The link provides some others, some of which are pretty funny. In writing this blog post, a simple search on Google for “Google query fails” yields what can only be described as a large number of query fails. Furthermore, this approach relies on the data originator to have marked the data with pointers and tags.

Given these different approaches to unstructured data and the complexity involved, there is a decision process to apply:

1. Determine if the data is truly unstructured. If the data is derived from a structured database from an existing application or set of applications, then it is structured and will require domain expertise to inherit the values and information content without expending unnecessary resources and time. A structured, systemic, and repeatable process can then be applied. Oftentimes an industry schema or standard can be leveraged to ensure consistency and fidelity.

2. Determine whether only a portion of the unstructured data is relevant to your business processes and use it to append and enrich the existing structured data that has been used to integrate and expand your capabilities. In most cases the identification of a Rosetta stone and standard APIs can be used to achieve this result.

3. For the remainder, determine the value of mining the targeted category of unstructured data and perform a business case analysis.
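The three steps above can be sketched as a simple triage function. The parameter names, thresholds, and return labels are illustrative assumptions, not fixed rules:

```python
def triage(source_is_database: bool, relevant_fraction: float,
           mining_value: float, mining_cost: float) -> str:
    """Triage a candidate data set per the three-step decision process."""
    # 1. Data derived from a structured application database is structured:
    #    apply domain expertise and a schema-driven, repeatable process.
    if source_is_database:
        return "structured"
    # 2. A relevant portion should be used to append and enrich existing
    #    structured data via a Rosetta stone and standard APIs.
    if relevant_fraction > 0:
        return "enrich"
    # 3. For the remainder, a business case analysis decides whether
    #    mining the targeted category is worth the effort.
    return "mine" if mining_value > mining_cost else "skip"

print(triage(True, 0.0, 0, 0))     # data came from an application database
print(triage(False, 0.5, 0, 0))    # half the data maps to business processes
print(triage(False, 0.0, 10, 50))  # mining costs more than it returns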

Given the rapidly expanding size of data that we can access using the advancing power of new technology, we must be able to distinguish between doing what is necessary and doing what is merely impressive. The definition of Big Data has evolved over time because our hardware, storage, and database systems allow us to access increasingly larger datasets that ten years ago would have been unimaginable. What this means is that, initially, as we work through this process of discovery, we will be bombarded with a plethora of irrelevant statistical measures and so-called predictive analytics that will eventually fail the “so-what” test. This process places users in a state of information overload, and we often see this condition today. It also means that what took an army of data scientists and developers ten years ago takes a technologist with a laptop and some domain knowledge today. This last can be taught.

The next necessary step, aside from applying the decision process above, is to force our information systems to advance their processing to provide more relevant intelligence that is visualized and configured to the domain expertise required. In this way we will eventually discover the paradox that effectively accessing larger sets of data yields fewer, but more relevant, pieces of intelligence that can be translated into action.

At the end of the day the manager and user must understand the data. There is no magic in data transformation or data processing. Even with AI and machine learning it is still incumbent upon the people within the organization to be able to apply expertise, perspective, knowledge, and wisdom in the use of information and intelligence.

Move It On Over — Third and Fourth Generation Software: A Primer

While presenting to organizations regarding business intelligence and project management solutions I often find myself explaining the current state of programming and what current technology brings to the table. Among these discussions is the difference between third and fourth generation software, not just from the perspective of programming–or the Wikipedia definition (which is quite good, see the links below)–but from a practical perspective.

Recently I ran into someone who asserted that their third-generation software solution was advantageous over a fourth generation one because it was “purpose built.” My response was that a fourth generation application provides multiple “purpose built” solutions from one common platform in a more agile and customer-responsive environment. For those unfamiliar with the differences, however, this simply sounded like a war of words rather than the substantive debate that it was.

Most people who use a software application are unaware of the three basic logical layers that make up the solution: the business logic layer, the application layer, and the database structure. The user interface delivers the result of the interaction of these three layers to the user–what is seen on the screen.

Back during the early advent of the widespread use of PCs and distributed computing on centralized systems, a group of powerful languages were produced that allowed the machine operations to be handled by an operating system and for software developers to write code to focus on “purpose built” solutions.

Initially these efforts concentrated on automating highly labor-intensive activities to achieve maximum productivity gains in an organization, and on leveraging those existing systems to distribute information that previously would have required many hours of manual effort in mathematical and statistical calculation and visualization. The solutions written were based on what are referred to as third generation languages, and they are familiar even to non-technical people: FORTRAN, COBOL, C, C++, C#, and Java, among others. These languages are highly structured and require a good bit of expertise to program correctly.

In third generation environments, the coder specifies operations that the software must perform based on data structure, application logic, and pre-coded business logic. These three layers are highly integrated, and any change in one of them requires that the programmer trace the impact of that change to ensure that the operations in the other two layers are not affected. Oftentimes the change has a butterfly effect, requiring detailed adjustments to take into account the subtleties in processing. It is this highly structured, interdependent, “purpose built” structure that causes unanticipated software bugs to pop up in most applications. It is also the reason why software development and upgrade configuration control is highly structured and time-consuming–requiring long lead-times to deliver even what most users view as relatively mundane changes and upgrades, like a new chart or graph.

In contrast, fourth generation applications separate the three layers and control the underlying behavior of the operating environment by leveraging a standard framework, such as .NET. The .NET operating environment, for example, provides both a library of interoperability across programming languages (known as the Framework Class Library, or FCL) and a virtual machine that handles exception handling, memory management, and other common functions (known as the Common Language Runtime, or CLR).

With the three layers separated, and with many of the more mundane background tasks controlled by the .NET framework, the software developer gains a great deal of freedom that translates into real benefits for customers and users.

For example, the database layer is freed from specific coding from the application layer, since the operating environment allows libraries of industry standard APIs to be leveraged, making the solution agnostic to data. Furthermore, the business logic/UI layer allows for table-driven and object-oriented configuration that creates a low code environment, which not only allows for rapid roll-out of new features and functionality (since hard-coding across all three layers is eschewed), but also allows for more precise targeting of functionality based on the needs of user groups (or any particular user).
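A minimal, language-agnostic sketch of the table-driven idea described above (the report names, fields, and sample rows are invented for illustration): behavior lives in a configuration table rather than in hard-wired code paths, so adding a new report means adding a row, not tracing a change through three tightly coupled layers.

```python
# Table-driven configuration: each entry declares a report's source,
# filter, and columns. No report-specific code path exists.
REPORT_TABLE = {
    "late_tasks": {"source": "tasks",
                   "filter": lambda r: r["finish_var"] < 0,
                   "columns": ["task_id", "finish_var"]},
    "cost_over":  {"source": "tasks",
                   "filter": lambda r: r["cost_var"] < 0,
                   "columns": ["task_id", "cost_var"]},
}

# Stand-in for the data layer, already normalized to a common schema.
DATA = {"tasks": [
    {"task_id": "A10", "finish_var": -5, "cost_var": 2_000},
    {"task_id": "B20", "finish_var": 3, "cost_var": -7_500},
]}

def run_report(name):
    """Interpret a table entry: one generic engine serves every report."""
    spec = REPORT_TABLE[name]
    rows = filter(spec["filter"], DATA[spec["source"]])
    return [{c: r[c] for c in spec["columns"]} for r in rows]

print(run_report("late_tasks"))
```

A new “purpose built” view for a particular user group is one more dictionary entry, which is the low-code, rapid roll-out property being claimed for fourth generation environments.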

This is what is meant in previous posts by new technology putting the SME back in the driver’s seat, since pre-defined reports and objects (GUIs) at the application layer allow for immediate delivery of functionality. Oftentimes data from disparate data sources can be bound together through simple query languages and SQL, particularly if the application layer table and object functionality is built well enough.

When domain knowledge is incorporated into the business logic layer, the distinction between generic BI and COTS is obliterated. Instead, what we have is a hybrid approach that provides the domain specificity of COTS (“purpose built”) with the power of BI, reducing the response time between solution design and delivery. More and better data can also be accessed, establishing an environment of discovery-driven management.

Needless to say, properly designed Fourth Generation applications are perfectly suited to rapid application development and deployment approaches such as Agile. They also provide the integration potential, given the agnosticism to data, that Third Generation “purpose built” applications can only achieve through data transfer and reconciliation across separate applications that never truly achieve integration. Instead, Fourth Generation applications can subsume the specific “purpose built” functionality found in stand-alone applications and deliver it via a single platform that provides one source of truth, still allowing for different interpretations of the data through the application of differing analytical approaches.

So move it on over nice (third generation) dog, a big fat (fourth generation) dog is moving in.

Fat Tuesday Interlude — New Orleans and Mardi Gras

If New York is the cultural capital of the United States, and San Francisco its heart, then New Orleans must be its soul. For many visitors, the city of New Orleans is represented by the bars and bohemian nightlife of Bourbon Street and, if they venture out just a bit further, it is the French Quarter.

But New Orleans is a place unique in American culture. It is the city that gave birth to jazz–America’s classical music. It has been the incubator of artists, musicians, writers, and entrepreneurs who have introduced a unique multicultural perspective and flavor to American society. Its Mississippi River has served to introduce new immigrants to American society and to introduce America’s heartland to a melding of cultures and ethnicities.

The city has its roots in the French culture and legal system that founded it, yet it has been transformed through the years by each new flag and influence under which it has existed: Cajuns, the Spaniards, African slaves, American settlers, indigenous people, Caribbean immigrants, Creoles, Italians, Mexican, Central and South American immigrants, and today, people from all lands. Each group celebrated and celebrates its heritage and, in the process, all are thrown together in a gumbo of ethnic, cultural, and economic admixture that is uniquely American.

For me, New Orleans is like the beautiful woman who has been abused by those who would dominate her, but who picks herself up and overcomes the challenges thrown in her way. The city’s positioning was problematic from the start, sitting between Lake Pontchartrain and the Mississippi, with the Gulf of Mexico looming close by. As the city grew and became the financial and trading capital of the southern states, more and more swampland was drained and built on. Hurricanes and tornadoes have leveled many of her buildings and broken her levees, flooding her streets, costing lives and livelihoods.

But New Orleans has also faced human challenges, especially from those who have resented or devalued her multicultural and other contributions. After the overthrow of Reconstruction, Jim Crow imposed itself on her people. It was met by continuous resistance and eventually overthrown, but not before it had its effects: underfunded public schools, great urban wastelands from urban renewal and highway construction, and crumbling neighborhoods.

The Storyville neighborhood–lest the troops and sailors be corrupted by miscegenation–was closed by order of the War Department in 1917 and later leveled–its rich history living on in our music and in our culture, though its physical embodiment long gone.

Its strong roots in Catholicism and its variations, the melding of Native American and African slave culture, and the introduction of other religious traditions from far flung places across the world often made it suspect to the more staid and closed sections of American society.

The reduction of the financial sector, the automation and containerization of its port that reduced high-paying jobs, highway construction and the resulting suburbanization, redlining, white flight, and Federal neglect in the wake of Hurricane Katrina have all represented existential threats to the city.

And yet it goes on. The people of New Orleans–new arrivals and those who returned home after exile in places like Houston–celebrate their heritage and their culture in the New World in New Orleans.

I am not a resident or native of New Orleans, but I have had a lifetime romance with it. I have spent a good deal of time there and I have seen and lived with its changes over the years. When I walk down the sidewalks in the neighborhoods of New Orleans it is almost as if I am greeted from every corner. People smile and wave, though I am a stranger. People share their unique perspectives on things, and trustingly expose their vulnerabilities, wearing fewer masks than I encounter anywhere else–and when they do wear masks it is to celebrate life and living, and even our shared mortality.

As an old Navy hand I am not so deluded as to believe that the city does not have its downsides or its dangers, as most urban–and rural–areas do. I have walked through cities across the world, through many rough seaport and other neighborhoods. Still, we must keep in mind that we live, especially in our own country, during a relatively safe period. Poverty is a disease, not a moral failing.

New Orleans today remains genuine. It has not experienced the billionaire sanitizing that New York underwent during the Bloomberg years. It is not being gentrified and its character smoothed out by high tech as we are seeing in San Francisco. At least, not yet. Its neighborhoods are rebuilding and its people are proud and optimistic.

So to those who read this blog: Happy Mardi Gras!


Friday Hot Washup: Daddy Stovepipe sings the Blues, and Net Neutrality brought to you by Burger King

Daddy Stovepipe sings the Blues — Line and Staff Organizations (and how they undermine organizational effectiveness)

In my daily readings across the web I came upon this very well written blog post by Glen Alleman at his Herding Cats blog. The eternal debate in project management surrounds when done is actually done–and what is the best measurement of progress toward the completion of the end item application?

Glen rightly points to the specialization among SMEs in the PM discipline, and the differences between their methods of assessment. These centers of expertise are still aligned along traditional line and staff organizations that separate scheduling, earned value, system engineering, financial management, product engineering, and other specializations.

I’ve written about this issue where information also follows these stove-piped pathways–multiple data streams with overlapping information, but which resists effective optimization and synergy because of the barriers between them. These barriers may be social or perceptual, which then impose themselves upon the information systems that are constructed to support them.

The manner in which we face and interpret the world is the core basis of epistemology. When we develop information systems and analytical methodologies, whether we are consciously aware of it or not, we delve into the difference between justified belief and knowledge. I see the confusion of these positions in daily life and in almost all professions and disciplines. In fact, most of us find ourselves jumping from belief to knowledge effortlessly without being aware of this internal contradiction–and the corresponding reduction in our ability to accurately perceive reality.

The ability to overcome our self-imposed constraints is the key but, I think, our PM organizational structures must be adjusted to allow for the establishment of a learning environment in relation to data. The first step in this evolution must be the mentoring and education of a discipline that combines these domains. What this proposes is that no one individual need know everything about EVM, scheduling, systems engineering, and financial management. But the business environment today is such that, if a business or organization wishes to be prepared for the world ahead, it must train and transition personnel toward a multi-disciplinary project management competency.

I would posit, contrary to Glen’s recommendation, that no one discipline should claim to be the basis for cross-functional integration, if only because such a claim may be self-defeating. In the book Networks, Crowds, and Markets: Reasoning about a Highly Connected World by David Easley and Jon Kleinberg of Cornell, our social systems are composed of complex networks in which negative perceptions develop when the network is no longer considered in balance. This subtle and complex interplay of perceptions drives our ability to work together.

It also affects whether we will stay safely within the comfort zone of having our information systems tell us what we need to analyze, or whether we will apply a more expansive view, leveraging new information systems that are able to integrate ever-expanding sets of relevant data to give us a more complete picture of what constitutes “done.”

Hold the Pickle, Hold the Lettuce, Special Orders Don’t Upset Us: Burger King explains Net Neutrality

The original purpose of the internet was the free exchange of ideas and knowledge. Initially, under ARPANET–led by Lawrence Roberts and later Bob Kahn–the focus was on linking academic and research institutions so that knowledge could be shared, resulting in collaboration that would overcome geographical barriers. Later the Department of Defense, NASA, and other government organizations highly dependent on R&D were brought into the new internet community.

To some extent there still are pathways within what is now broadly called the Web, to find and share such relevant information with these organizations. With the introduction of commercialization in the early 1990s, however, it has been increasingly hard to perform serious research.

For with the expansion of the internet to the larger world, the larger world’s dysfunctions and destructive influences also entered. Thus, the internet has transitioned from a robust First Amendment free speech machine to a place that also harbors state-sponsored psy-ops and propaganda. It has gone from a safe space for academic freedom and research to a place of organized sabotage, intrusion, theft, and espionage. It has transitioned from a highly organized professional community that hewed to ethical and civil discourse to one that harbors trolls, prejudice, hostility, bullying, and other forms of human dysfunction. Finally and most significantly, it has become dominated by commercial activity: by high tech giants that stifle innovation, and by social networking sites that, applying an extreme laissez-faire attitude, magnify and spread the more dysfunctional activities found on the web as a whole.

At least, for those who still looked to the very positive effects of the internet, there was net neutrality: the assurance that blogs like this one and the many others that I read on a regular basis, including mainstream news and scientific journals, were still available without being “dollarized,” in the words of the naturalist John Muir.

Unfortunately this is no longer the case–or will no longer be the case, perhaps, when the legal dust settles. Burger King has placed its marker down, and it is a relevant and funny one. Please enjoy and have a great weekend.


Saturday Midnight Special Government Shutdown Blues — Samantha Fish, Nikki Hill, Mike Zito, Joe Louis Walker, and Popa Chubby

I have a number of colleagues, friends, and family serving the public interest and I am sure that serial dysfunctional governance by Continuing Resolution, critical positions at senior levels being unfilled, and now a shutdown that will affect their ability to make ends meet are weighing on them at this moment. Thus, a bit of blues music for our times seems to be apropos.

For those not familiar with the history or the form of the music: rather than leading one to hopelessness and resignation, the blues catalogue, through the human folk tradition, the day-to-day worries and challenges of everyday people.

The blues were born from the work songs of African-American slaves–a brutal environment that summarily punished any sign of protest or rebellion. Thus two outlets were allowed to them: music during heavy labor and mundane work, which the slave owners encouraged as a way of ensuring quiescence and productivity, and religious worship, which was thought of as a means of pacification through the acceptance of sanctified music. The slave owners and slavery’s supporters did not fully understand or recognize the subversive message in the lyrics of these two musical roots, which communicated human dignity, perseverance, and–yes–hope, in the face of oppression, rape, murder, and brutality. The rhythm of the music is organic, borrowed in part from the African rhythms of the different tribes from which the slaves originated, but also derived from whatever was at hand in the New World, borrowed from the ruling white society and from indigenous American tribes, many of whom accepted runaway slaves and, later, freedmen among their number. Thus, forged from the fire of oppression, came a music that embodied the aspirations inspired by the promise of such ideas as freedom, democracy, and equality in a uniquely American way. The blues are the musical soil and soul of the American ideal.

The blues carried itself into jazz, which elevated the simple folk forms and has become an elegant, groundbreaking, and uniquely American classical music that continues to push the limits of musical improvisation and expression. It also carried itself into and influenced American popular music, where its deceptively simple forms were imitated and evolved into other musical styles, merging and developing over time. With the great African-American migration to northern cities to escape Jim Crow, the music evolved and incorporated urban influences, with the added dynamics of electrified instruments and a new defiant message that included elements of black pride, black power, and northern attitude.

Other countries adopted and unabashedly mimicked the music, reviving interest in it during times when it was undervalued and ignored in this, its country of origin. The British Invasion of the 1960s reintroduced the music to the U.S. through the hybrid of blues-rock; thus anyone familiar with the Rolling Stones, Led Zeppelin, early Fleetwood Mac, The Yardbirds, Cream, and others is listening to the blues adapted and recycled for a new generation. It continues today with a broader mix and diversity of musicians who have taken the music of those first generations of African-American blues musicians–Robert Johnson, Bessie Smith, Muddy Waters, Howlin’ Wolf, Son House, B.B. King, Albert King, Lightnin’ Hopkins, and others–and have broadened it, continuing the tradition and extending the music to make it an urgent and vital expression of the human experience. It reveals in its best form the interconnectedness and basic humanity that we all share, across cultures, across generations, and across time.

Unfortunately many blues performers cannot be readily found on YouTube in authorized forms for sharing. No doubt this post, as with past posts covering blues and jazz, will record below-average traffic compared to even the esoteric and specialized subjects of Big Data and project management. Fortunately, however, individuals like Don Odell, with his Legends studios in Massachusetts, record the new generation of bluesmen and blueswomen, and so I can share these artists tonight. Each of the musicians below–all amazing and talented in their own way–provides a diversity of perspectives on life and its challenges through their music.

The first artist is Samantha Fish. She hails from Kansas City, Missouri, a town that is rich in blues and jazz history. She lists her influences as the visiting blues musicians who performed at Knuckleheads Saloon, a popular musical venue. She began performing in 2009 and her rise has been meteoric. Her blues album Wild Heart charted as the top blues album in 2015. Her latest album is Belle of the West.


Nikki Hill is from North Carolina and, if you aren’t familiar with her, the clip that follows should send you running to the store to find her music. She combines intelligent lyrics, strong-woman attitude, and powerful vocals–all hallmarks of a great blues vocalist. Her first album is Heavy Hearts, Hard Fists.


Mike Zito, like Samantha Fish, is also from the Midwest–in his case, St. Louis. Born in 1970, he began singing at the age of 5 and performed locally in the St. Louis area for many years. In 2008 he got his big break and was signed by the Eclecto Groove label. The title song from his 2009 release Pearl River won Song of the Year at the 2010 Blues Music Awards. His 2013 album Gone to Texas also garnered critical praise and was nominated for best album at the 2013 Blues Music Awards. His latest album is Make Blues, Not War. Here he is covering “Fortunate Son.”


Joe Louis Walker is, of course, a living blues legend and a national treasure. He took up guitar growing up in the San Francisco Bay Area. He hooked up with Mike Bloomfield, Jimi Hendrix, the Grateful Dead, and other musical pioneers who pushed rock and psychedelic music down new pathways. Burned out on the blues after 1975, he turned to sanctified music. However, after attending the New Orleans Jazz & Heritage Festival in 1985 he returned to his blues roots. Here he is performing “One Time Around.”


Last, but not least, is Popa Chubby. The name is actually the nom de plume of Ted Horowitz, who grew up in the Bronx, New York. After working the woodshed for a number of years (he was born in 1960), he was finally “discovered” in 1992 by the public radio station in Long Beach, California, which sponsored a national blues talent search. Since that time his album production has been prolific, spanning and incorporating other musical genres within a blues structure. Idiosyncratic and eclectic, Popa Chubby combines showmanship, independence, and amazing musical chops to keep the music vital and interesting. In the clip below Popa Chubby is the large man playing lead guitar and, like a good leader who showcases the talents of others, he has deferred to his keyboardist to take the lead vocals on the song, “Not So Nice Anymore.”