Back in the Saddle Again — Putting the SME into the UI Which Equals UX

“Any customer can have a car painted any colour that he wants so long as it is black.”  — Statement by Henry Ford in “My Life and Work”, by Henry Ford, in collaboration with Samuel Crowther, 1922, page 72

The Henry Ford quote, made half-jokingly to his sales staff in 1909, is relevant to this discussion because the information sector has developed along the lines of the auto and many other industries.  The statement was only half-joking because Ford’s cars could be had in three colors.  But in 1909 Henry Ford had found a massive market niche that would allow him to sell inexpensive cars to the masses.  His competition wasn’t so much other auto manufacturers, many of whom catered to the whims of the rich and more affluent members of society, as the main means of individualized transportation at the time–the horse and buggy.  Color mattered less to this market than simplicity and utility.

Since the widespread adoption of the automobile and the expansion of the market with multiple competitors, high-speed roadways, a more affluent society anchored by a middle class, and the impact of industrial and information systems development in shaping societal norms, the automobile consumer has, over time, become more varied and sophisticated.  Today automobiles offer a number of new features (and color choices)–from backup cameras, to blind-spot and back-up warning signals, to lane control, automatic headlight adjustment, and many other innovations.  Enhancements to industrial production that began with the introduction of robotics into the assembly line in the late 1970s and early 1980s, through to the adoption of Just-in-Time (JiT) and Lean principles in overall manufacturing, provide consumers a multitude of choices.

We are seeing a similar evolution in information systems, which leads me to the title of this post.  During the first waves of information systems development and introduction into our governing and business systems, the process was one in which software was developed to address an activity previously completed manually.  There would be a number of entries into a niche market (or, for more robustly capitalized enterprises, into an entire vertical).  The software would be fairly simplistic and the features limited, the objects (the way the information is presented and organized on the screen, the user selections, and the charts, graphs, and analytics allowed to enhance information visibility) well defined, and the UI (user interface) structured along the lines of familiar formats and views.

Including the input of the SME in this process, unless advice was specifically solicited, was considered both intrusive and disruptive.  After all, software development was largely an activity confined to a select and highly trained specialty involving sophisticated coding languages, where a good deal of talent was required for the result to be considered “elegant”.  I won’t go into a full definition of elegance here, which I’ve addressed in previous posts, but a short definition is this:  the fewest lines of code possible that both maximize computing power and provide the greatest flexibility for any given operation or set of operations.
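
To make the idea concrete, here is a minimal sketch in Python (the function names and data are hypothetical, chosen only to illustrate the point): the first version hard-codes one routine per calculation, while the second produces the same results with less code and far more flexibility.

```python
# Verbose: one hard-coded function per total; adding a new total means adding code.
def total_cost(rows):
    total = 0
    for row in rows:
        total += row["cost"]
    return total

def total_hours(rows):
    total = 0
    for row in rows:
        total += row["hours"]
    return total

# More elegant: one small, general routine covers both cases and any future field.
def total(rows, field):
    return sum(row[field] for row in rows)

rows = [{"cost": 100.0, "hours": 8}, {"cost": 250.0, "hours": 12}]
assert total(rows, "cost") == total_cost(rows)
assert total(rows, "hours") == total_hours(rows)
```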

This is no mean feat, and a great number of software applications are produced in this way.  Since the elegance of any line of code varies widely by developer and organization, the process of update and enhancement can involve a great deal of human capital and time.  Thus, the general rule has been that the more sophisticated a software application is, the more effort it takes to change and, therefore, the less flexibility it possesses.  Need a new chart?  We’ll update you next year.  Need a new set of views or user settings?  We’ll put your request on the road-map and maybe you’ll see something down the road.

It is not as if the needs and requests of users have always been ignored.  Most software companies try to satisfy the needs of their customers, balancing the demands of the market against available internal resources.  Software websites, such as UXmatters in this article, have advocated placing the SME (subject-matter expert) at the center of the design process.

The introduction of fourth-generation adaptive software environments–that is, systems that leverage underlying operating environments and objects such as .NET and WinForms, that are open to any data through OLE DB and ODBC, and that leave the UI open to simple configuration languages which place these underlying capabilities at the feet of the user–finally puts the SME at the center of the design process in practice.
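
As a rough illustration of what configuration-driven UI means in practice (a sketch only; the field names, configuration format, and rendering function are hypothetical, not any particular product’s syntax): the view is described as data that an SME could edit directly, and a generic renderer builds the screen from it.

```python
# A view defined as configuration data rather than a hard-coded form.
# An SME could add a column or change a filter here without touching application code.
view_config = {
    "title": "Program Cost Summary",
    "columns": ["wbs", "budget", "actuals", "variance"],
    "filters": {"fiscal_year": 2015},
}

def render_view(config, records):
    """Generic renderer: applies the configured filters and columns to any dataset."""
    rows = [r for r in records
            if all(r.get(k) == v for k, v in config["filters"].items())]
    return [{col: r.get(col) for col in config["columns"]} for r in rows]

records = [
    {"wbs": "1.1", "budget": 500, "actuals": 520, "variance": -20, "fiscal_year": 2015},
    {"wbs": "1.2", "budget": 300, "actuals": 280, "variance": 20, "fiscal_year": 2014},
]
print(render_view(view_config, records))  # only the 2015 row, in the configured columns
```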

This is a development in software as significant as the introduction of JiT and Lean in manufacturing, since it removes much of the labor and time involved in rolling out software solutions and enhancements.  Furthermore, it goes one step beyond these processes by allowing the SME to roll out multiple software solutions from one common platform that is limited only by access to data.  It is as if each organization and SME has a digital printer for software applications.

Under this new model, software application manufacturers have a flexible environment to pre-configure the 90% solution to target any niche or market, allowing their customers to fill in any gaps or adapt the software as they see fit.  There is still IP involved in the design and construction of the “canned” portion of the solution, but the SME can be placed into the middle of the design process for how the software interacts with the user–and to do so at the localized and granular level.

This is where we transform UI into UX, that is, the total user experience.  So what is the difference?  In the words of Dain Miller in a Web Designer Depot article from 2011:

UI is the saddle, the stirrups, and the reins.

UX is the feeling you get being able to ride the horse, and rope your cattle.

As we adapt software applications to meet the needs of the users, the SME can answer many of the questions that have vexed software implementations for years: user perceptions of and reactions to the software, real and perceived barriers to acceptance, and variations in levels of training among users, among others.  Flexible adaptation of the UI will allow software applications to be more successfully localized, not only to meet the business needs of the organization and the user, but to socialize the solution in ways that are still being discovered.

In closing this post a bit of full disclosure is in order.  I am directly involved in such efforts through my day job and the effects that I am noting are not simply notional or aspirational.  This is happening today and, as it expands throughout industry, will disrupt the way in which software is designed, developed, sold and implemented.

New York Times Says Research and Development Is Hard…but maybe not

At least that is what a reader is led to believe by this article that appeared over the weekend.  For those of you who didn’t catch it, Alphabet has an R&D shop, known under the old Google moniker as Google X, that does pure R&D.  According to the reporter, one Conor Dougherty, the problem, you see, is that R&D doesn’t always translate into a direct short-term profit.  He then makes this absurd statement:  “Building a research division is an old and often unsuccessful concept.”  He knows this because some professor at Arizona State University–that world-leading hotbed of innovation and high tech–told him so.  (Yes, there is sarcasm in that sentence).

Had Mr. Dougherty understood new technology, he would know that all technology companies are, at core, research organizations that sometimes make money in the form of net profits, just as someone once accurately described to me that Tesla is a battery company that also makes cars (and lately it’s showing).  But let’s return to the howler of a statement about research divisions being unsuccessful, apply some, you know, facts and empiricist thought, and go from there.

The most obvious example of a research division is Bell Labs.  From the article one would think that Bell Labs is a dinosaur of the past, but no, it still exists as Nokia Bell Labs.  Bell Labs was created in 1925 and has its antecedents in both Western Electric and AT&T, but its true roots go back to 1880, when Alexander Graham Bell, after being awarded the Volta Prize for the invention of the telephone, opened Volta Labs in Washington, D.C.  It was in the 1920s, though, that Bell Labs, “the Idea Factory,” really hit its stride.  Its researchers improved telephone switching and sound transmission, and invented radio astronomy, the transistor, the laser, information theory (which I’ve written about extensively and which directly bears on computing and software), Unix, and the languages C and C++.  Bell established the precedent that researchers kept, and were compensated for the use of, their inventions and IP.  This goes well beyond the assertion in the article that Bell Labs largely made “contributions to basic, university-style research.”  I guess New York Times reporters, fact checkers, and editors don’t have access to the Google search engine or Wikipedia.

Between 1937 and 2014, seventeen of its researchers were awarded the Nobel Prize or the Turing Award.  Even those who never garnered such an award, like Claude Shannon of the aforementioned information theory, are among a Who’s Who of researchers in high tech.  What they didn’t invent directly they augmented and brought to practical use, with a good deal of their input going into public R&D through consulting and other contracts with the Department of Defense and the federal government.

The reason why Bell Labs didn’t continue as a research division of AT&T wasn’t due to some dictate of the market or investor dissatisfaction.  On the contrary, AT&T (Ma Bell) dominated its market, and Bell Labs ensured that it stayed far ahead of any possible entry.  This is why in 1984 the U.S. Justice Department reached a divestiture agreement with AT&T under antitrust law that split the local carriers off from AT&T (and with it Bell Labs) in order to promote competition.  Whether the divestiture agreement was a good deal for the American people and had positive economic effects is still a cause for debate, but it is likely that the plethora of choices in cell phone and other technologies that have emerged since that time would not have gone to market without that antitrust action.

After 1984, Bell Labs continued its significant contributions to the high tech industry through AT&T Technologies, which was spun off in 1996 as Lucent Technologies–which is probably why Mr. Dougherty didn’t recognize it.  A merger with Alcatel and then acquisition by Nokia has provided it with its current moniker.  Bell Labs over that period continued to innovate and has contributed significantly to pushing the boundaries of broadband speed and the use of imaging technology in the medical field.

So what this shows is that, while not every bit of R&D leads directly to profit, especially in the short term, a mix of types of R&D does yield practical results.  Anyone who has worked in project management understands that R&D, by definition, represents the handling of risk.  Furthermore, the lessons learned and spin-offs are hard to estimate in advance, though they may result in practical technologies in the short and medium term.

When one reads past the lede and the “research division is an old and often unsuccessful concept” gaffe, among others, what you find is that Google specifically wants this portion of the research division to come up with a series of what it calls “moon shots”.  In techie lingo this is often called a unicorn, and from personal experience I am part of a company that recently was characterized as delivering a unicorn.  This is simply a shorthand term for producing a solution that is practical, groundbreaking, and shifts the dialogue of what is possible.  (Note that I’m avoiding the tech hipster term “disruption”).

Another significant fact that we find out about Google X is the following:

X employees avoid talking about money, but it is not a subject they can ignore. They face financial barriers that can shut down a project if it does not pan out as quickly as planned. And they have to meet various milestones before they can hire more people for their teams.

This sounds a lot like project and risk management.  But Google X goes a bit further.

Failure bonuses are also an example of how X, which was set up independent of Google from the outset, is a leading indicator of sorts for how the autonomous Alphabet could work. In Alphabet, employees who do not work for Mother Google are supposed to have their financial futures tied to their own company instead of Google’s search ads. At X, that means killing things before they become too expensive.

Note that the incentive here, given in the form of a real financial incentive to team members, is to manage risk.  No doubt there are no #NoEstimates cultists at Google.  Psychologically, providing an incentive to find failure no doubt defeats groupthink and optimism selection bias.  Much of this, particularly the expectation that failure is not existential, sounds amazingly like an article recently published on AITS.org by yours truly.

The delayed profitability of software and technology companies is commonplace.  The reason for this is that, at least to my thinking, any technology type worth their salt will continue to push the technology once they have their first version brought to market.  If you’re resting on your laurels then you’re no longer in the software technology business, you’re in the retail business and might as well be selling candy bars or any other consumer product.  What you’re not doing is being engaged in providing a solution that is essential to the target domain.  Practically, what this means is that, in garnering value, net profitability is not necessarily the measure of success, especially in the first years.

For example, such market leaders as Box, Workday, and Salesforce have gone years without a net profit, though revenues and market share are significant.  Facebook did not turn a profit for five years; Amazon took six, and even those figures were questionable.  The competing needs facing any executive running a company are between value (the intrinsic value of IP, existing customer base, and potential customer base) and profit.  The job of the CEO is not just to the stockholders, yet the article’s lede is clearly biased in that way.  The fiduciary and legal responsibility of the CEO is to the customers, the employees, the entity, and the stockholders–and not necessarily in that order.  There is thus a natural conflict in balancing these competing interests.

Overall, if one ignores the contributions of the reporter, the case of Google X is a fascinating one for its expectations and handling of risk in R&D-focused project management.  It takes value where it can and cuts its losses through incentives to find risk that can’t be handled.  An investor that lives in the real world should find this reassuring.  Perhaps these lessons on incentives can be applied elsewhere.


Rise of the Machines — Drivers of Change in Business and Project Management

Last week I found myself in business development mode, as I often am, explaining to a prospective client our future plans in terms of software development.  The point I was making was that our goal is not simply to reproduce the functionality that every other software solution provider offers, but to improve how the industry does business by making the case for change so compelling, through efficiencies, elimination of redundancy, and improved productivity, that not making the change would be deemed foolish.  In sum, we are out to take a process and improve on it through the application of disruptive technology.  I highlighted my point by stating:  “It is not our goal to simply reproduce functionality so we can party like it’s 1998; it’s been eight software generations since that time and technology has provided us smarter and better ways of doing things.”

I received the usual laughter and acknowledgement from some of the individuals to whom I was making this point, but one individual rejoined: “well, I don’t mind doing things like we did them in 1998,” or words to that effect.  I acknowledged the comment, but then reiterated that our goal was somewhat more proactive.  We ended the conversation in a friendly manner and I was invited to come back and show our new solution upon its release to market.

Still, the rejoinder of being satisfied with things the way they are has stuck with me.  No doubt being a nerd (and years as a U.S. Navy officer) has inculcated in me a drive for constant process improvement.  My default position going into a discussion is that the individuals I am addressing share that attitude with me.  But that is not always the case.

The kneejerk reaction of us geeks when confronted by resistance to change is often derision.  But not every critic or skeptic is a Luddite, and it is important to understand the basis for both criticism and skepticism.  For many of our colleagues in the project management world, software technology is just a software application, something that only “looks in the rear-view mirror.”  This meme is pervasive out there, but it is wrong.  Understanding why it is wrong is important in addressing the concerns behind it in an appropriate manner.

This view is wrong because the first generations of software that served this market simply replicated the line-and-staff, specialization, and business process and analysis regime that existed prior to digitization.  Integration of data that could provide greater insight was not possible at the level of detail needed to establish confidence.  The datasets from which we derived our information were not flexible, nor did they allow for widespread distribution of more advanced corporate and institutional knowledge.  In fact, the first software generation in project management often supported and sustained the subject matter expert (SME) framework, in which only a few individuals possessed advanced knowledge of methods and analytics, and the organization had to rely on them.

We still see this structure in place in much of industry and government–and it is self-sustaining, since it involves not only individuals within the organization who possess this attribute, but also a plethora of support contractors and consultants who have built their businesses around supporting it.

Additional resistance comes from individuals who have dealt with new entries in the past that turned out to be only incremental or marginal improvements over what was already in place, not to mention the few bad actors that come along.  Established firms in the market take this approach in order to defend market share and, like the SME structure, it is self-sustaining, since it attempts to establish a barrier to new entrants into the market.  At the same time these firms establish an environment of stability and security that buyers are hesitant to leave; thus the prospective customer is content to “party like it’s 1998.”

A value proposition alone will not change the minds of those who are content.  You sell what a prospective customer needs, not usually solely what they want.  For those introducing disruptive innovation, the key is to be at the forefront in shifting what defines market need.

For example, in business and project systems, the focus has always been on “tools.”  Given the engineering domain that is dominant in many project management organizations, such terminology provides a comfortable and familiar way of addressing technology.  Getting the “right set of tools” and “using the right tool for the work” are the implicit assumptions behind such simplistic metaphors.  This has caused many companies and organizations to issue laundry lists of features and functionality in order to compare solutions when doing market surveys.  Such lists are self-limiting, supporting the self-reinforcing systems mentioned above.  Businesses that rely on this approach to the technology market are not open to leveraging the latest capabilities in improving their systems.  The metaphor of the “tool” is an out-of-date one.

The shift, which is accelerating in the commercial world, is toward software technology focused on the capabilities inherent in the effective use of data.  In today’s world data is king, and the core issue is who owns the data.  I referred to some of the new metaphors for data in my last post and, no doubt, new ones will arise.  What is important to know about the shift to an emphasis on data and its use is that it is driving organizational change that not only breaks down the “tool”-based approach to the market, but also undermines the software market’s emphasis on tool functionality and the organizational structure and support market built around the SME.

There is always fear surrounding such rapid change, and I will not argue against the fact that some of it needs to be addressed.  For example, the rapid displacement through digitization of human-centered manual work that required expertise and paid well will soon become one of the most important challenges of our time.  I am optimistic that the role of the SME simply needs to follow the shift, but I have no doubt that the shift will require fewer SMEs.  This highlights, however, that the underlying economics of the shift will make it both compelling and necessary.

Very soon, it will be impossible to “party like it’s 1998” and still be in business.

Over at AITS.org — Black Swans: Conquering IT Project Failure & Acquisition Management

It’s been out for a few days but I failed to mention the latest article at AITS.org.

In my last post on the Blogging Alliance I discussed information theory, the physics behind software development, the economics of new technology, and the intrinsic obsolescence that exists as a result. Dave Gordon in his regular blog described this work as laying “the groundwork for a generalized theory of managing software development and acquisition.” Dave has a habit of inspiring further thought, and his observation has helped me focus on where my inquiries are headed…

To read more please click here.

Super Doodle Dandy (Software) — Decorator Crabs and Wirth’s Law

[Photo: a decorator crab]

The song (absent the “software” part) in the title is borrowed from the soundtrack of the movie The Incredible Mr. Limpet.  Made in the days before Pixar and other recent animation technologies, it remains a largely unappreciated classic, combining photography and animation in a time of more limited tools, with Don Knotts creating another unforgettable character beyond Barney Fife.  Somewhat related to what I am about to write, Mr. Limpet taught the creatures of the sea new ways of doing things, helping them overcome their mistaken assumptions about the world.

The photo that opens this post is courtesy of the Monterey Bay Aquarium and looks to be the crab Oregonia gracilis, commonly referred to as the Graceful Decorator Crab.  There are all kinds of Decorator Crabs, most of which belong to the superfamily Majoidea.  The one I most often came across and raised in aquaria was Libinia dubia, an east coast cousin.  You see, back in a previous lifetime I had aspirations to be a marine biologist.  My early schooling was based in the sciences and mathematics.  Only later did I gradually gravitate to history, political science, and the liberal arts–finally landing in acquisition and high tech project management, which tends to borrow something from all of these disciplines.  I believe that my former concentration of studies has kept me grounded in reality–in viewing life the way it is and the mysteries that are yet to be solved in the universe without resort to metaphysics or irrationality–while the latter concentrations have connected me to the human perspective in experiencing and recording existence.

But there is more to my analogy than self-explanation.  You see, software development exhibits much the same behavior as the Decorator Crab.

In my previous post I talked about Moore’s Law and the compounding (doubling) of processor power in computing every 12 to 24 months.  (It seems to be not so much a physical law as an observation, and we can only guess how long the trend will continue.)  We also see a corresponding reduction in cost vis-à-vis this greater capability.  Yet, despite these improvements, we find that software often lags behind and fails to leverage this capability.

The observation that records this phenomenon is Wirth’s Law, which posits that software is getting slower at a faster rate than computer hardware is getting faster.  There are two variants of this law, one ironic and the other only slightly less so: May’s and Gates’ variants.  Basically, these posit that software speed halves every 18 months, thereby negating Moore’s Law.  But why is this?
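
To see how the two observations cancel each other out, here is a bit of back-of-the-envelope arithmetic (illustrative only, using the stylized rates above and assuming, for symmetry, an 18-month doubling period for hardware):

```python
# Stylized compounding: hardware doubles every 18 months (Moore's Law),
# while software halves in speed every 18 months (the May/Gates variant of Wirth's Law).
periods = 4  # four 18-month periods, i.e. six years

hardware_speedup = 2 ** periods          # hardware is 16x faster
software_slowdown = (1 / 2) ** periods   # software is 1/16th as efficient
net_effective_speed = hardware_speedup * software_slowdown

print(hardware_speedup, software_slowdown, net_effective_speed)  # 16 0.0625 1.0
```

Under these stylized assumptions the user sees no net improvement at all, which is the irony the law is meant to capture.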

For first causes one need only look to the Decorator Crab.  You see, the crab, all by itself, is a typical crab: an arthropod invertebrate with a hard carapace, spikes on its exoskeleton, a segmented body with jointed limbs, and five pairs of legs, the first pair usually bearing chelae (the familiar pincers and claws).  There are all kinds of crabs in salt, fresh, and brackish water.  They tend to be well adapted to their environment.  But they are also tasty and high in protein value, and thus have a number of predators.  So the Decorator Crab has determined that what evolution has provided is not enough–it borrows features and items from its environment to enhance its capabilities as a defense mechanism.  There is a price to being a Decorator Crab, though.  Encrustations also become encumbrances.  Where these crabs have learned to enhance their protections, for example by attaching toxic sponges and anemones, the enhancements may also have made them complacent: unlike most crabs, Decorator Crabs don’t tend to scurry from crevice to crevice, but walk awkwardly and more slowly than many of their cousins in the typical sideways crab gait.  This behavior makes them interesting, popular, and comical subjects in both public and private aquaria.

In a way, we see an analogy in the case of software.  In earlier generations of software design, applications were generally built to solve a particular challenge and mimicked the line-and-staff structure of the organizations involved–designed to fit their environmental niche.  But over time, of course, people decide that they want enhancements and additional features.  The user interface, when hardcoded, must be adjusted every time a new function or feature is added.

Rather than rewriting the core code from scratch–which would take time- and resource-consuming reengineering and redesign of the overall application–modules, subroutines, scripts, and the like are added to the software to adapt it to the new environment.  Over time, the software takes on the characteristics of the Decorator Crab.  The new functions are not organic to the core structure of the software, just as the attached anemones, sponges, and algae are not organic features of the crab.  While they may provide the features desired, they are not optimized, tending to use brute-force computing power to make up for the lack of elegance.  Thus, the more power each new generation of hardware tends to provide, the less efficient each enhancement release of software tends to be.
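
A toy sketch of the pattern (the names and data are hypothetical; the point is the shape of the code, not any real application): each new feature is bolted onto the outside of the core routine rather than designed into it, and each layer adds its own pass over the data.

```python
# Core routine, designed for the original niche.
def compute_totals(records):
    return sum(r["cost"] for r in records)

# Feature bolted on later: currency conversion, added as a wrapper, not integrated.
def compute_totals_with_currency(records, rate):
    converted = [{**r, "cost": r["cost"] * rate} for r in records]  # extra full pass
    return compute_totals(converted)

# Another accretion: filtering, wrapped around the wrapper.
def compute_totals_with_currency_and_filter(records, rate, year):
    filtered = [r for r in records if r["year"] == year]            # another full pass
    return compute_totals_with_currency(filtered, rate)

# Each layer works, but the accreted structure leans on raw computing power
# to make up for the fact that none of it was designed as one organic routine.
records = [{"cost": 100.0, "year": 2015}, {"cost": 200.0, "year": 2014}]
print(compute_totals_with_currency_and_filter(records, 0.9, 2015))  # 90.0
```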

Furthermore, just as it requires more effort and intelligence to identify a crab the less it looks like a crab, so too with software.  The greater the encrustation of features that attach themselves to an application, the greater the effort required to use those new features.  Learning the idiosyncrasies of the software is an unnecessary barrier to the core purposes of software–to increase efficiency, improve productivity, and improve speed.  It serves only one purpose: to increase the “stickiness” of the application within the organization so that it is harder for competitors to displace.

It is apparent that this condition is not sustainable–or acceptable–especially where the business environment is changing.  New software generations, especially Fourth Generation software, provide opportunities to overcome this condition.

Thus, for project management and acquisition professionals, the primary considerations that must be taken into account are the optimization of computing power and the related consideration of sustainability.  This approach militates against complacency because it pushes the software environment toward optimization.  Such an approach will also allow organizations to more fully realize the benefits of Moore’s Law.

Over at AITS.org — Maxwell’s Demon: Planning for Obsolescence in Acquisitions

I’ve posted another article at AITS.org’s Blogging Alliance, this one dealing with the issue of software obsolescence and the acquisition strategy that applies given what we know about the nature of software.  I also throw in a little background on information theory and the physical limitations of software as we now know it (virtually none).  As a result, we require a great deal of agility inserted into our acquisition systems for new technologies.  I’ll have a follow up article over there that provides specifics on acquisition planning and strategies.  Random thoughts on various related topics will also appear here.  Blogging has been sporadic of late due to op-tempo but I’ll try to keep things interesting and more frequent.

Forget Domani — The Inevitability of Software Transitioning and How to Facilitate the Transition

The old Perry Como* chestnut takes its title from the Italian word for “tomorrow” and is the Italian way of repeating–in a more romantic manner–Keynes’s dictum that in the “long run we’ll all be dead.”  Whenever I hear polemicists talk about the long run, or invoke the interests of their grandchildren as trumping immediate concerns and decisions, I brace myself for the Paleolithic nonsense that is to follow.  Such appeals give these opinions a gloss of plausibility, but at worst they are simply fabrications to hide self-interest, a form of tribalism, or ideology; at best they are based on fallacious reasoning, fear, or the effects of cognitive dissonance.

While not as important as the larger issues affecting society, we see this same type of thinking when people and industries are faced with rapid change in software.  I was reminded of this when I sat down to lunch with a colleague who was being forced to drop an established software system being used in project management.  “We spent so much time and money to get it to finally work the way we want it, and now we are going to scrap it,” he complained.  Being a good friend–and knowing the individual to be thoughtful when expressing opinions–I pressed him a bit.  “But was your established system doing what it needed to do to meet your needs?”  He thought a moment.  “Well, it served our needs up to now, but it was getting very expensive to maintain and took a lot of workarounds.  Plus the regulatory requirements in our industry are changing and it can’t make the jump.”  When I pointed out that it sounded as if the decision to transition was the right one, he ended with:  “Yes, but I’m within a couple of years of retirement and I don’t need another one of these.”

Thus, within the space of one conversation were the reasons that we usually hear as excuses for not transitioning to new software.  In markets that are dominated by a few players with aging and soon-to-be-obsolete software this is a common refrain.  Any one of these rationales, put in the mouth of a senior decision-maker, will kill a deal.  Other rationales are based in a Sharks vs. Jets mentality, in which the established software user community rallies around the soon-to-be-obsolete application.  This is particularly prevalent in enterprise software environments.  It is usually combined with uninformed attacks, sometimes initiated by the established market holder directly or through proxies, on the reliability, scale, and functionality of the new entries.  The typical defensive maneuver is to declare that at some undetermined date in the future–domani–an update is on the way that will match or exceed what the present applications possess.  Hidden from the non-tech-savvy is the reality that the established software is written in old technology and language, oftentimes requiring an entire rewrite that will take years.  Though possessing the same brand name, the “upgrade” will, in effect, be new, untested software written in haste to defend market share.

As a result of many years marketing and selling various software products, certain of which were and are game-changing in their respective markets, I have compiled a list of typical objections to software transitioning and the means of addressing these concerns.  One should not take this as an easy “how-to” guide.  There is no substitute for understanding your market, understanding the needs of the customer, having the requisite technical knowledge of the regulatory and systemic requirements of the market, and possessing a concern for the livelihood of your customers that is then translated into a trusting and mutually respectful relationship.  If software is just a euphemism for making money–and there are some very successful companies that take this approach–this is not the blog post for you: you might as well be selling burgers and tacos.

1.  Sunk vs. Opportunity Costs.  This is an old one and I find it interesting that this thinking persists.  The classic comparison for understanding the sunk cost fallacy was first brought up in a class I attended at Pepperdine University many years ago.  A friend of the professor couldn’t decide whether he should abandon the expensive TV antenna he had purchased just a year before in favor of the new-fangled cable television hookup that had just been introduced into his neighborhood.  The professor explained to his friend that the money he spent on the antenna was irrelevant to his decision.  That money was gone–it was “sunk” into the old technology.  The relevant question was the cost of not taking the best alternative now, that is, the cost of not putting a resource to its best use.  When we persist in using old technologies to address new challenges, there comes a point where the costs associated with the old technology no longer represent the most effective use of resources.  That is the point at which the change must occur.  In practical terms, if the overhead associated with the old technology is too high given the payoff, or there are gaps and workarounds in using the old technology that sub-optimize and waste resources, then it is time to make a change.  The economics dictate it, and this can be both articulated and demonstrated using a business case (a short numerical sketch follows this list).

2.  Need vs. Want.  Being a techie, I often fall into the same trap as most techies: some esoteric operation or functionality is achieved and I marvel at it.  Then, when I show it to a non-techie, I am puzzled when the intended market responds with a big yawn.  Within this same category are people on the customer side of the equation who are looking at the latest technologies but do not have an immediate necessity that propels the need for a transition.  This is often just “checking in” and, on the sales side, the equivalent of kicking the tires.  These opposing examples outline one of the core elements that will support a transition:  in most cases businesses will buy when they have a need, as opposed to a want.  Understanding the customer’s needs–and what propels a change based on necessity, whether it be a shift in the regulatory or technological environment that changes the baseline condition–is the key to understanding how to support a transition.  This assumes, of course, that the solution one is offering meets the baseline condition to support the shift.  Value and pricing also enter into this equation.  I remember dealing with a software company a few years ago where I noted that their pricing was much too high for the intended market.  “But we offer so much more than our competition” came the refrain.  The problem, however, was that the market did not view the additional functionality as essential.  Any price multiplied by zero equals zero, regardless of how we view the value of an offering.

3.  Acts of Omission and Acts of Commission.  The need for technological transition is, once again, dictated by a need of the organization due to either internal or external factors.  In my career as a U.S. Navy officer, we were trained to make decisions and take vigorous action whenever presented with a challenge.  The dictum in this case is that an act of commission, that is, having taken diligent and timely action in response to a perceived threat, is defensible, even if someone second-guesses those decisions and is critical of them down the line; but an act of omission, ignoring a threat or allowing events to unfold on their own, is always unforgivable.  Despite the plethora of books, courses, and formal education regarding leadership, there is still a large segment of business and government that prefers to avoid risk by avoiding making decisions.  Businesses operating at optimum effectiveness perform under a sense of urgency.  Software providers, however, must remember that their sense of urgency in making a sale does not mean that the prospective customer’s sense of urgency is in alignment.  This is a variation of the need vs. want factor; in this case, understanding the business and then effectively communicating to the customer those events that are likely to occur due to non-action is the key component in overcoming this roadblock.  Once again, this assumes that the proposed solution actually addresses the risk associated with an act of omission.

4.  Not Invented Here.  I have dealt with this challenge in a previous blog post.  Establishing a learning organization is essential under the new paradigm of project management, in which there is more emphasis on a broader sense of integration across what were previously identified as the divisions of labor in the community.  Hand-in-hand with this challenge is the perception, often based on a lack of information, that the requirements of the organization are so unique that only a ground-up, customized solution will do, usually militating against commercial-off-the-shelf (COTS) technologies.  This often takes the form of internal IT shops building business cases to develop the system in-house directly to code, or of supporting environments in which the gaps in existing systems have been filled with Excel spreadsheets that various users have constructed.  In one case the objection to the proposed COTS solution was based on the rationale that the users “really liked” their pivot tables.  (Repeat after me:  Excel is not a project management system, Excel is not a project management system, Excel is not a project management system).  As we drive toward integration of more data involving millions of records, such rationales are easily engaged.  This assumes, however, that the software provider possesses a solution that is both powerful and flexible, that is, one that can both handle Big Data and integrate data, not just through data conversion, normalization, and rationalization, but also through the precise use of APIs.  In this last case, we are not talking about glorified query engines against SQL tables, but systems that have built-in smarts, inherited from the expertise of the developers, to properly identify and associate data so that it is transformed into information that establishes an effective project management and control environment.

5.  I Heard it Through the Grapevine.  Nothing is harder to overcome than a whisper campaign generated by competitors or their proxies.  I know of cases in which enterprise systems involving billions of dollars of project value were successfully implemented, only to have the success questioned in a meeting through the spread of disinformation, or acknowledged in a backhanded manner.  The response to this kind of challenge is to put the decision makers in direct touch with your customers.  In addition, live demos using releasable data, or notional data equivalent to the customer’s work, are essential in demonstrating functionality.  Finally, the basics of software economics dictate that for an organization to understand whether a solution is appropriate for its needs, some effort in terms of time and resources must be expended in evaluating the product.  For those offering solutions, the key to effectively communicating the value of your product, and not falling into a trap of your competitors’ making, is to ensure that the pilot does not devolve into a trained-monkey test in which potentially unqualified individuals attempt to operate the software on their own with little or no supervision or training, and without the effective communication that would support the pilot in the same way an implementation would normally be handled.  Propose a pilot that is structured, has a time limit and a limit to scope, and in which direct labor and travel, if necessary, are reimbursed.  If everyone is professional and serious, this will be a reasonable approach that ensures a transparent process for both parties.

6.  The Familiar and the Unknown.  Given the high failure rate associated with IT projects, one can understand the hesitancy of decision makers to take that step.  A bad decision in selecting a system can, and has, brought organizations to their knees.  Furthermore, studies in human behavior demonstrate that people tend to favor those things that are familiar, even in cases where a possible alternative is better, but unknown.  This is known as the mere-exposure effect.  Daniel Kahneman, in the groundbreaking book Thinking, Fast and Slow, outlines other cognitive fallacies built into our wiring.  New media and technology only magnify these effects.  The challenge, then, is for the new technological solution provider to address the issue of familiarity directly.  Toward this end, software providers must establish trust and rapport with their market, prove their expertise not just in technical matters of software and computing but also regarding the business processes and needs of the market, and establish their competency in issues affecting the market.  A proven track record of honesty, open communication, and fair dealing is also essential to overcoming this last challenge.
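
Returning to the first item on the list, here is the numerical sketch promised there (the figures are hypothetical, chosen only to show the shape of the comparison): what was already paid for the old system never enters the calculation; only the forward-looking costs and payoffs do.

```python
# Hypothetical forward-looking comparison for the next year.
# Whatever was spent on the old system in the past is sunk and is deliberately ignored.
old_system = {"maintenance": 120_000, "workaround_labor": 80_000, "payoff": 150_000}
new_system = {"license": 90_000, "implementation": 40_000, "payoff": 220_000}

old_net = old_system["payoff"] - old_system["maintenance"] - old_system["workaround_labor"]
new_net = new_system["payoff"] - new_system["license"] - new_system["implementation"]

print(old_net, new_net)  # -50000 vs 90000: standing pat forgoes 140,000 of net value
```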

*I can’t mention this song without also noting that the Chairman of the Board, Frank Sinatra, recorded a great version of it, as did Mario Lanza, and that Connie Francis also made it a hit in the 1960s.  It was also the song that Katyna Ranieri made famous in the Shirley MacLaine movie The Yellow Rolls Royce.

Brother Can You (Para)digm? — Four of the Latest Trends in Project Management

At the beginning of the year we are greeted with the annual list of hottest “project management trends” prognostications.  We are now three months into the year and I think it worthwhile to note the latest developments that have come up in project management meetings, conferences, and in the field.  Some of these are in alignment with what you may have seen in some earlier articles, but these are four that I find to be most significant thus far, and there may be a couple of surprises for you here.

a.  Agile and Waterfall continue to duke it out.  As the term Agile is adapted and modified to real-world situations, the cult purists become shriller in attempting to enforce the Manifesto that may not be named.  In all seriousness, it is not as if most of these methods had not been used previously–and many of the methods, like scrum, also have their roots in Waterfall and earlier methods.  A great on-line overview and book on the elements of scrum can be found at Agile Learning Labs.  But there is a wide body of knowledge out there concerning social and organizational behavior that is useful in applying what works and what doesn’t.  For example, the observational science behind span of control, team building, the structure of the team in supporting organizational effectiveness, and the use of sprints in avoiding the perpetual death-spiral of adding requirements without defining “done” identifies the best practices of successful teams (depending on how you define success–keeping in mind that a successful team that produces the product often still fails as a going concern, and thus falls into obscurity).

All that being said, if you want to structure these best practices into a cohesive methodology, call it Agile, Waterfall, or Harry, and can make money at it while helping people succeed in a healthy work environment, all power to you.  In IT, however, it is this last point that makes this particular controversy seem like we’ve been here before.  When woo-woo concepts like #NoEstimates and self-organization are thrown about, the very useful and empirical nature of the enterprise descends into magical thinking and ideology.  The mathematics of unsuccessful IT projects has not changed significantly since the shift to Agile.  From what one can discern from the so-called studies on the market, which are mostly anecdotal or based on unscientific surveys, somewhere north of 50% of IT projects fail, failure being defined as behind schedule, over cost, or failing to meet functionality requirements.

Given this, Agile seems to be the latest belle of the ball, and virtually any process improvement introducing scrum, teaming, and sprints seems to get the tag.  Still, there is much blood and thunder being expended for a result that amounts to about the same (and probably less than the) mathematical chance of success as a coin flip.  I think for the remainder of the year the more acceptable and structured portions of Agile will get the nod.

b.  Business technology is now driving process.  This trend, I think, is why process improvements like Agile, which claim to be the panacea, cannot deliver on their promises.  As best practices they can help organizations avoid a net negative, but they rarely can provide a net positive.  Applying new processes and procedures while driving blind will still run you off the road.  The big story in 2015, I think, is the ability to handle big data and to integrate that data in a manner that more clearly reveals context to business stakeholders.  For years in A&D, DoD, governance, and other verticals engaged in complex, multi-year project management, we have seen the push and pull of interests regarding the amount of data that is delivered or reported.  With new technologies this is no longer an issue.  Delivering a 20GB file has virtually the same marginal cost as delivering a 10GB file.  Sizes smaller than 1GB aren’t even worth talking about.

Recently I heard someone refer to the storage space required for all this immense data–it’s immense, I tell you!  Well, storage is cheap, and large amounts of data can be accessed through virtual repositories using APIs and smart methods of normalizing data that require integration at the level defined by the systems’ interrelationships.  There is more than one way to skin this cat, and more methods for handling bigger data are coming on-line every year.  Thus, the issue is not more or less data, but better data, regardless of the size of the underlying file or table structure or the amount of information.  The first go-round of this process will require that all of the data already available in repositories be surveyed to determine how to optimize the information they contain.  Then, once the data is transformed into intelligence, the best manner of delivery must be determined so that it provides both significance and context to the decision maker.  For many organizations, this is the question that will be answered in 2015 and into 2016.  At that point it is the data that will dictate the systems and procedures needed to take advantage of this powerful advance in business intelligence.

c.  Cross-functional teams will soon morph into cross-functional team members.  As data originating from previously stove-piped competencies is integrated into a cohesive whole, the skillsets necessary to understand the data, convert it into intelligence, and act appropriately on that intelligence will shift to require a broader, multi-disciplinary understanding.  Businesses and organizations will soon find that they can no longer afford the specialist who only understands cost, schedule, risk, or any one of the other specialties dictated by the old line-and-staff and division-of-labor practices of the 20th century.  Businesses and organizations that place short-term shareholder and equity-holder interests ahead of the business will soon find themselves out of business in this new world.  The same will apply to organizations that continue to suppress and compartmentalize data.  This is because a cross-functional individual who can maximize the use of this new information paradigm requires education and development.  Achieving this goal dictates the establishment of a learning organization, which requires investment and a long-term view.  A learning organization develops its members to become competent in each aspect of the business, with development including successive assignments of greater responsibility and complexity.  For the project management community, we will increasingly see the introduction of more Business Analysts and, I think, the introduction of the competency of Project Analyst to displace–at first–both the cost analyst and the schedule analyst.  Other competency consolidation will soon follow.

d.  The new cross-functional competencies–Business Analysts and Project Analysts–will take on an increasing role in the design and deployment of technology solutions in the business.  This takes us full circle in our feedback loop that begins with big data driving process.  Organizations that have implemented the new technologies and are taking advantage of new insights are already introducing not only new multi-disciplinary competencies, but also new technologies that adapt the user environment to the needs of the business.  Once the business and project analyst has determined how to interact with the data and the systems necessary to the decision-making process that follows, adaptable technologies that reject the hard-coded “one size fits all” user interface are finding, and will continue to find, wide acceptance.  As fewer off-line and one-off utilities are needed to fill the gaps left by inflexible, hard-coded business applications, innovative approaches to analysis will be mainstreamed into the organization.  Once again, we are already seeing this effect in 2015, and the trend will only accelerate as possessing greater technological knowledge becomes an essential element of being an analyst.

Despite dire predictions regarding innovation, it appears that we are on the cusp of another rapid shift in organizational transformation.  The new world of big data comes with both great promise and great risks.  For project management organizations, the key in taking advantage of its promise and minimizing its risks is to stay ahead of the transformation by embracing it and leading the organization into positioning itself to reap its benefits.

One-Trick Pony — Software apps and the new Project Management paradigm

Recently I have been engaged in an exploration and discussion regarding the utilization of large amounts of data and how applications derive importance from that data.  In an on-line discussion with the ever insightful Dave Gordon, I first postulated that we need to transition into a world where certain classes of data are open so that their qualitative content can be normalized.  This is what for many years was called the Integrated Digital Environment (IDE for short).  Dave responded with his own post at the AITS.org blogging alliance, countering that while such standards are necessary in very specific and limited applications, modern APIs provide most of the solution.  I then responded directly to Dave here, countering that IDE is nothing more than data neutrality.  Then, also at AITS.org, I expanded on what I proposed to be a general approach to understanding big data, noting the dichotomy between software approaches that organize the external characteristics of the data to generalize systems and note trends, and those that are focused on the qualitative content within the data.
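
To make the distinction concrete, here is a small sketch (the schema, field names, and get_task function are hypothetical, not a published standard or a real API): data neutrality means the record itself is expressed in an open, agreed structure that any tool can read, while the API approach leaves each vendor’s internal format alone and exposes an agreed set of calls over it.

```python
import json

# Data-neutral approach: the record itself follows an open, agreed schema.
neutral_record = {
    "task_id": "1.2.3",
    "description": "Install wiring harness",
    "start": "2015-06-01",
    "finish": "2015-06-12",
    "percent_complete": 40,
}
print(json.dumps(neutral_record))  # any application can read this without the source tool

# API approach: the vendor's internal format stays proprietary;
# integration happens through an agreed set of calls (hypothetical function).
def get_task(task_id):
    """Stand-in for a vendor API call that returns a task in its own format."""
    return {"ID": task_id, "DESC": "Install wiring harness", "PCT_CMP": 40}

# The consuming application maps the API result into whatever structure it needs.
task = get_task("1.2.3")
normalized = {"task_id": task["ID"], "percent_complete": task["PCT_CMP"]}
print(normalized)
```

As the rest of this post argues, the two are not mutually exclusive; the most useful applications read neutral data where it exists and consume APIs where it does not.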

It should come as no surprise then, given these differences in approaching data, that we also find similar differences in the nature of applications that are found on the market.  With the recent advent of on-line and hosted solutions, there are literally thousands of applications in some categories of software that propose to do one thing with data–focused, one-trick pony applications that can supposedly be mixed and matched to somehow provide an integrated solution.

There are several problems with this sudden explosion of applications of this nature.

The first is in the very nature of the explosion.  This is a classic tech bubble, albeit limited to a particular segment of the software market, and it will soon burst.  As soon as consumers find that all of that information traveling over the web with the most minimal of protections is compromised by the next trophy hack, or that too many software providers have entered the market prematurely–not understanding the full needs of their targeted verticals–it will burst like the last one did in 2000.  It only requires a precipitating event that triggers a tipping point.

You don’t have to take my word for it.  Just type a favorite keyword into your browser now (and I hope you’re using a VPN while doing it) for a type of application for which you have a need–let’s say “knowledge base” or “software ticket systems.”  What you will find is that there are literally hundreds if not thousands of apps built for this function.  You cannot test them all.  Basic information economics, however, dictates that you must invest some effort in understanding the capabilities and limitations of the systems on the market.  Surely there are a couple of winners out there.  But basic economics also dictates that 95% of those presently in the market will be gone in short order.  Being the “best” or the “best value” does not always win in this winnowing out.  Chance, the vagaries of your standing in the search engine results, industry contacts–virtually any number of factors–will determine who is still standing and who is gone a year from now.

Aside from this obvious problem with the bubble itself, the approach of the application makers harkens back to an earlier generation of one-off applications that attempted to achieve integration through marketing while actually achieving, at best, only old-fashioned interfacing.  In the world of project management, for example, organizations can ill afford to revert to the division of labor, which is what would be required to align with these approaches in software design.  It’s almost as if, having made their money in an earlier time, software entrepreneurs cannot extend themselves beyond their comfort zones to take advantage of the last TEN software generations, which provide new, more flexible approaches to data optimization.  All they can think to do is party like it’s 1995.

The new paradigm in project management is to get beyond the traditional division of labor.  For example, is scheduling such a highly specialized discipline, rising to the level of a profession, that it is separate from all of the other aspects of project management?  Of course not.  Scheduling is a discipline–a sub-specialty actually–that is inextricably linked to all other aspects of project management in a continuum.  The artifacts of the process of establishing project systems and controls constitute the project itself.

No doubt there are entities and companies that still ostensibly organize themselves into specialties as they did twenty years ago: cost analysts, schedule analysts, risk management specialists, among others.  But given that the information from these systems–schedule, cost management, project financial management, risk management, technical performance, and all the rest–can be integrated at the appropriate level of their interrelationships to provide us a cohesive, holistic view of the complex system that we call a project, is such division still necessary?  In practice the industry has already moved to position itself toward integration, realizing the urgency of making the shift.

For example, to utilize an application to query cost management information in 1995 was a significant achievement during the first wave of software deployment that mimicked the division of labor.  In 2015, not so much.  Introducing a one-trick pony EVM “tool” in 2015 is laziness–hoping to turn back the clock while ignoring the obsolescence of such an approach–regardless of which slick new user interface is selected.

I recently attended a project management meeting of senior government and industry representatives.  During one of my side sessions I heard a colleague propose the discipline of Project Management Analyst in lieu of the previously stove-piped specialties.  His proposal is a breath of fresh air in an industry that develops and manufactures the latest aircraft and space technology, but has hobbled itself with systems and procedures designed for an earlier era that no longer align with the needs of doing business.  I believe the timely deployment of systems has suffered as a result during this period of transition.

Software must lead, and accelerate the transition to the new integration paradigm.

Thus, in 2015 the choice is not between applications that adhere to conventions of data neutrality and those that provide data access via APIs; it is in favor of applications that do both.

It is not between different hard-coded applications that provide the old “what-you-see-is-what-you-get” approach.  It is instead between such limited hard-coded applications and those that provide flexibility, so that business managers can choose from a nearly unlimited palette of options governing how data–converted into information–is made available to users or classes of user based on their role and need to know, aggregated at the appropriate level of detail for the consumer to derive significance from the information being presented (a minimal sketch of such role-based selection follows below).

It is not between “best-of-breed” and “mix-and-match” solutions that leverage interfaces to achieve integration.  It is instead between such solution “consortiums”–which drive up implementation and sustainment costs and bring high overhead with them–and those that achieve integration by leveraging the source of the data itself, reducing the number of applications that need to be managed, allowing data to be enriched in an open and flexible environment, and achieving transformation into useful information.

Finally, the choice isn’t among applications that save their attributes in a proprietary format so that customers must commit themselves to a proprietary solution.  Instead, it is between such restrictive applications and those that open up data access, clearly establishing that it is the consumer who owns the data.
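To make the flexibility described above a bit more concrete, here is a minimal sketch, in Python, of role-based selection and aggregation of project data.  The roles, data elements, and rollup levels are invented for illustration; they are not drawn from any particular product or standard.

    # Minimal sketch of role-driven selection and aggregation of project data.
    # The roles, fields, and rollup levels below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ViewSpec:
        fields: list   # data elements the role is permitted to see
        rollup: str    # level of aggregation presented to that role

    VIEWS = {
        "control_account_manager": ViewSpec(["bcws", "bcwp", "acwp"], "work_package"),
        "program_manager": ViewSpec(["bcws", "bcwp", "acwp", "eac"], "control_account"),
        "executive": ViewSpec(["eac", "vac"], "program"),
    }

    def view_for(role, records):
        """Filter and roll up raw records according to the role's view spec.

        Each record is a dict carrying an identifier for every rollup level
        (e.g. record["control_account"]) plus the numeric data elements.
        """
        spec = VIEWS[role]
        totals = {}
        for record in records:
            bucket = totals.setdefault(record[spec.rollup],
                                       {f: 0.0 for f in spec.fields})
            for f in spec.fields:
                bucket[f] += record.get(f, 0.0)
        return totals

The point is not the particular data structure but that the view is configuration, not code: adding a new role or changing what a role sees is a matter of changing a table, not waiting on next year’s release.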

Note: I have made minor changes from the original version of this post for purposes of clarification.

Over at AITS.org Dave Gordon takes me to task on data normalization — and I respond with Data Neutrality

Dave Gordon at AITS.org takes me to task on my post recommending common schemas for certain project management data.  Dave’s alternative is to specify common APIs instead.  I am not one to dismiss alternative methods of reconciling disparate and, in their natural state, non-normalized data in search of the most elegant solution.  My initial impression, though, is: been there, done that.

Regardless of the method used to derive significance from disparate sources of data of a common type, one still must obtain the cooperation of the players involved.  The ANSI X12 standard has been in use in the transportation industry for quite some time and has worked quite well, leaving the choice of proprietary solution up to the individual shippers.  The rule has been, however, that if you are going to write solutions for that industry, you need to allow the shipping information needed by any receiver to conform to a particular format so that it can be read regardless of the software involved.
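For those who have never worked with it, X12 data is simply a stream of delimited segments.  The deliberately simplified sketch below, in Python, shows why a common format matters: any receiver can read the segments with a few lines of code, regardless of which software produced them.  Real X12 interchanges carry ISA/GS/ST envelopes and far more segments than shown here, and the sample values are invented.

    # Deliberately simplified reading of X12-style delimited segments.
    # Real interchanges carry ISA/GS/ST envelopes and many more segments.
    def parse_segments(raw, segment_terminator="~", element_separator="*"):
        """Split an X12-style string into (segment_id, [elements]) pairs."""
        records = []
        for segment in filter(None, raw.split(segment_terminator)):
            elements = segment.strip().split(element_separator)
            records.append((elements[0], elements[1:]))
        return records

    # A fragment loosely modeled on an 856 Ship Notice; the values are made up.
    sample = "ST*856*0001~BSN*00*SHIP123*20150601*1030~"
    for segment_id, elements in parse_segments(sample):
        print(segment_id, elements)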

Recently the U.S. Department of Defense, which had used certain ANSI X12 formats for particular data for quite some time, has published and required a new set of schemas for a broader set of data under the rubric of UN/CEFACT XML.  Thus, it has established the same approach as the transportation industry: taking an agnostic stand on software preferences while specifying that submitted data must conform to a common schema, so that no proprietary file type is given preference over another.
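The mechanics on the receiving end are straightforward: validate the submission against the published schema before accepting it.  The sketch below, using the lxml library in Python, illustrates the idea; the file names are placeholders, and the actual UN/CEFACT XML schemas define their own elements and namespaces.

    # Receiver-side check that a submission conforms to the published schema.
    # The file names are placeholders, not the actual UN/CEFACT artifacts.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("published_schema.xsd"))
    submission = etree.parse("contractor_submission.xml")

    if schema.validate(submission):
        print("Conforms to the common schema; any compliant tool can read it.")
    else:
        for error in schema.error_log:
            print(error.message)

Which application produced the file is irrelevant; conformance to the schema is the only thing the receiver has to check.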

A little background is useful.  In developing major systems, contractors are required to provide project performance data in order to ensure that public funds are being expended properly for the contracted effort.  This is the oversight responsibility portion of the equation.  The other side concerns project and program management.  Given the cost-plus contract type most often used, the government program management office, in cooperation with its commercial counterpart, looks to identify the manifestation of cost, schedule, and/or technical risk early enough to allow that risk to be handled as necessary.  At the end of this process–a use only now being explored–lies years of historical data across contract types, technologies, and suppliers that can benefit the public interest: demonstrating which contractors perform better, showing through parametric methods the inherent risk associated with particular technologies, and yielding a host of insights through econometric project management trending and modeling.
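Even a trivial calculation hints at the value locked up in that historical data.  The sketch below computes an average cost performance index (CPI, the ratio of budgeted cost of work performed to actual cost of work performed) by technology category; the records are invented for illustration.

    # Average cost performance index (CPI = BCWP / ACWP) by technology category.
    # The historical records below are invented for illustration.
    from collections import defaultdict

    history = [
        {"contractor": "A", "technology": "airframe", "bcwp": 950.0,  "acwp": 1000.0},
        {"contractor": "B", "technology": "airframe", "bcwp": 1040.0, "acwp": 1000.0},
        {"contractor": "A", "technology": "avionics", "bcwp": 870.0,  "acwp": 1000.0},
    ]

    cpi_by_technology = defaultdict(list)
    for record in history:
        cpi_by_technology[record["technology"]].append(record["bcwp"] / record["acwp"])

    for technology, values in sorted(cpi_by_technology.items()):
        print(technology, round(sum(values) / len(values), 2))

Multiply that by thousands of contracts and a few dozen data elements and the parametric and econometric possibilities become obvious.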

So let’s assume that we specify APIs for requesting the data, in lieu of specifying that the customer receive an application-agnostic file that can be read by any application conforming to the data standard.  What is the difference?  My immediate observation is that it reverses the relationship in who owns the data.  In the case of the API, the proprietary application becomes the gatekeeper.  In the case of an agnostic file structure, the data is open to everyone and the consumer owns it.
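The difference is easiest to see side by side.  In the Python sketch below the endpoint, token, and file path are hypothetical; the point is simply who sits between the consumer and the data.

    # Two ways to get at the same cost data; the endpoint, token, and paths
    # used here are hypothetical.
    import json
    import urllib.request

    def via_vendor_api(base_url, token):
        """The proprietary application is the gatekeeper: access depends on the
        credentials it issues, the endpoints it exposes, and the structure it
        chooses to return."""
        request = urllib.request.Request(
            base_url + "/v1/cost-data",
            headers={"Authorization": "Bearer " + token},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    def via_neutral_file(path):
        """The consumer holds a file conforming to a common schema; any
        application, or a dozen lines of code, can read it."""
        with open(path) as handle:
            return json.load(handle)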

In the API scenario large players can do what they want to limit competition and extensions to their functionality.  Since they can black-box the manner in which the data is structured, it also becomes increasingly difficult to make qualitative selections from the data.  The very example that Dave uses–the plethora of one-off mobile apps–usually exists only within its own ecosystem.

So it seems to me that the real issue isn’t that Big Brother wants to control data structure.  What it comes down to is that specifying an open data structure prevents any one solution provider, or group of providers, from controlling the market through restrictions on access to data.  This encourages maximum competition and innovation in the marketplace–Data Neutrality.

I look forward to additional information from Dave on this issue.  Neither method of achieving Data Neutrality is an end in itself.  Any method that is less structured and provides more flexibility is welcome.  I’m just not sure that we’re there yet with APIs.