Back in the Saddle Again — Putting the SME into the UI Which Equals UX

“Any customer can have a car painted any colour that he wants so long as it is black.”  — Statement by Henry Ford in “My Life and Work”, by Henry Ford, in collaboration with Samuel Crowther, 1922, page 72

The Henry Ford quote, which he made half-jokingly to his sales staff in 1909, is relevant to this discussion because the information sector has developed along the lines of the auto and many other industries.  The statement was only half-joking because Ford’s cars could be had in three colors.  But in 1909 Henry Ford had found a massive market niche that would allow him to sell inexpensive cars to the masses.  His competition wasn’t so much other auto manufacturers, many of whom catered to the whims of the rich and more affluent members of society, as the main means of individualized transportation at the time–the horse and buggy.  To this market, color mattered far less than simplicity and utility.

Since the widespread adoption of the automobile and the expansion of the market with multiple competitors, high speed roadways, a more affluent society anchored by a middle class, and the impact of industrial and information systems development in shaping societal norms, the automobile consumer has, over time, become more varied and sophisticated.  Today’s automobiles offer a number of new features (and color choices)–from backup cameras, to blind spot and back-up warning signals, to lane control, auto headlight adjustment, and many other innovations.  Enhancements to industrial production that began with the introduction of robotics into the assembly line back in the late 1970s and early 1980s, through to the adoption of Just-in-Time (JiT) and Lean principles in overall manufacturing, provide consumers a multitude of choices.

We are seeing a similar evolution in information systems, which leads me to the title of this post.  During the first waves of information systems development and introduction into our governing and business systems, the process has been one in which software is developed first to address an activity that is completed manually.  There would be a number of entries into a niche market (or for more robustly capitalized enterprises into an entire vertical).  The software would be fairly simplistic and the features limited, the objects (the way the information is presented and organized on the screen, the user selections, and the charts, graphs, and analytics allowed to enhance information visibility) well defined, and the UI (user interface) structured along the lines of familiar formats and views.

Including the input of the SME in this process, unless advice was specifically solicited, was considered both intrusive and disruptive.  After all, software development largely was an activity confined to a select and highly trained specialty involving sophisticated coding languages that required a good deal of talent to be considered “elegant”.  I won’t go into a definition of elegance here, which I’ve addressed in previous posts, but for a short definition it is this:  the fewest bits of code possible that both maximize computing power and provide the greatest flexibility for any given operation or set of operations.

This is no mean feat and a great number of software applications are produced in this way.  Since the elegance of any line of code varies widely by developer and organization, the process of update and enhancement can involve a great deal of human capital and time.  Thus, the general rule has been that the more sophisticated the software application, the more effort it demands and the less flexibility it possesses.  Need a new chart?  We’ll update you next year.  Need a new set of views or user settings?  We’ll put your request on the road-map and maybe you’ll see something down the road.

It is not as if the needs and requests of users have always been ignored.  Most software companies try to satisfy the needs of their customers, balancing the demands of the market against available internal resources.  Software websites, such as UXmatters in this article, have advocated placing the SME (subject-matter expert) at the center of the design process.

The introduction of fourth-generation adaptive software environments–that is, systems that leverage underlying operating environments and objects such as .NET and WinForms, that are open to any data through OLE DB and ODBC, and that leave the UI open to simple configuration languages placing these underlying capabilities at the feet of the user–puts the SME at the center of the design process in practice.
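A minimal sketch of what such a configuration-driven UI might look like–written here in Python rather than a .NET configuration language, with an invented schema and field names standing in for whatever a real environment provides:

```python
# Illustrative only: instead of hard-coding forms, the application reads a
# declarative view definition and renders whatever the SME specifies.
# The config schema and field names below are invented for this sketch.

view_config = {
    "title": "Program Cost Summary",
    "columns": ["WBS", "Budget", "Actuals", "Variance"],
    "chart": {"type": "bar", "x": "WBS", "y": "Variance"},  # unrendered here
}

def render_view(config, rows):
    """Render a plain-text view from the config; a real 4GL environment
    would map the same definition onto native UI objects (grids, charts)."""
    lines = [config["title"], " | ".join(config["columns"])]
    for row in rows:
        lines.append(" | ".join(str(row[c]) for c in config["columns"]))
    return "\n".join(lines)

rows = [{"WBS": "1.1", "Budget": 100, "Actuals": 90, "Variance": 10}]
print(render_view(view_config, rows))
```

The point of the sketch is that adding a column or chart changes only the configuration, not the compiled application–which is what places the SME, rather than the developer, at the center of design.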

This is a development in software as significant as the introduction of JiT and Lean in manufacturing, since it removes both the labor and time-intensiveness involved in rolling out software solutions and enhancements.  Furthermore, it goes one step beyond these processes by allowing the SME to roll out multiple software solutions from one common platform that is only limited by access to data.  It is as if each organization and SME has a digital printer for software applications.

Under this new model, software application manufacturers have a flexible environment to pre-configure the 90% solution to target any niche or market, allowing their customers to fill in any gaps or adapt the software as they see fit.  There is still IP involved in the design and construction of the “canned” portion of the solution, but the SME can be placed into the middle of the design process for how the software interacts with the user–and to do so at the localized and granular level.

This is where we transform UI into UX, that is, the total user experience.  So what is the difference?  In the words of Dain Miller in a Web Designer Depot article from 2011:

UI is the saddle, the stirrups, and the reins.

UX is the feeling you get being able to ride the horse, and rope your cattle.

As we adapt software applications to meet the needs of users, the SME can answer many of the questions that have vexed software implementations for years: user perceptions and reactions to the software, real and perceived barriers to acceptance, and variations in levels of training among users, among others.  Flexible adaptation of the UI will allow software applications to be more successfully localized, not only to meet the business needs of the organization and the user, but to socialize the solution in ways that are still being discovered.

In closing this post a bit of full disclosure is in order.  I am directly involved in such efforts through my day job and the effects that I am noting are not simply notional or aspirational.  This is happening today and, as it expands throughout industry, will disrupt the way in which software is designed, developed, sold and implemented.

Over at AITS.org — Black Swans: Conquering IT Project Failure & Acquisition Management

It’s been out for a few days but I failed to mention the latest article at AITS.org.

In my last post on the Blogging Alliance I discussed information theory, the physics behind software development, the economics of new technology, and the intrinsic obsolescence that exists as a result. Dave Gordon in his regular blog described this work as laying “the groundwork for a generalized theory of managing software development and acquisition.” Dave has a habit of inspiring further thought, and his observation has helped me focus on where my inquiries are headed…

To read more please click here.

Super Doodle Dandy (Software) — Decorator Crabs and Wirth’s Law

[Photo: decorator crab]

The song in the title (absent the “software” part) is borrowed from the soundtrack of the movie The Incredible Mr. Limpet.  Made in the days before Pixar and other recent animation technologies, it remains a largely unappreciated classic, combining photography and animation in a time of more limited tools, with Don Knotts creating another unforgettable character beyond Barney Fife.  Somewhat related to what I am about to write, Mr. Limpet taught the creatures of the sea new ways of doing things, helping them overcome their mistaken assumptions about the world.

The photo that opens this post is courtesy of the Monterey Aquarium and looks to be the crab Oregonia gracilis, commonly referred to as the Graceful Decorator Crab.  There are all kinds of Decorator Crabs, most of which belong to the superfamily Majoidea.  The one I most often came across and raised in aquaria was Libinia dubia, an east coast cousin.  You see, back in a previous lifetime I had aspirations to be a marine biologist.  My early schooling was based in the sciences and mathematics.  Only later did I gradually gravitate to history, political science, and the liberal arts–finally landing in acquisition and high tech project management, which tends to borrow something from all of these disciplines.  I believe that my former concentration of studies has kept me grounded in reality–in viewing life as it is, and the mysteries yet to be solved in the universe, without resort to metaphysics or irrationality–while the latter concentrations have connected me to the human perspective in experiencing and recording existence.

But there is more to my analogy than self-explanation.  You see, software development exhibits much of the same behavior of Decorator Crabs.

In my previous post I talked about Moore’s Law and the doubling of processor power in computing every 12 to 24 months.  (It seems to be less a physical law than an observation, and we can only guess how long the trend will continue.)  We also see a corresponding reduction in cost vis-à-vis this greater capability.  Yet, despite these improvements, we find that software often lags behind and fails to leverage it.

The observation that has recorded this phenomenon is found in Wirth’s Law, which posits that software is getting slower at a faster rate than computer hardware is getting faster.  There are two variants of this law, one ironic and the other only slightly less so.  These are May’s and Gates’ variants.  Both posit that software speed halves every 18 months, thereby negating Moore’s Law.  But why is this?
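Before turning to causes, the net effect of the two laws can be sketched with some illustrative arithmetic–the rates below are the ones the variants assert, not measured data:

```python
# Rough arithmetic behind the Gates'/May's variants of Wirth's Law:
# hardware roughly doubles in speed every ~18 months (Moore's Law, loosely),
# while software halves in speed over the same period. Illustrative figures.

def effective_speed(months, hw_doubling=18, sw_halving=18):
    """Relative end-user speed after `months`, starting from 1.0."""
    hardware_gain = 2 ** (months / hw_doubling)   # compounding improvement
    software_drag = 0.5 ** (months / sw_halving)  # compounding bloat
    return hardware_gain * software_drag

# After six years the two effects cancel exactly under these assumptions:
print(effective_speed(72))  # -> 1.0
```

Under these assumed rates the user sees no net improvement at all–every hardware generation is consumed by software drag.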

For first causes one need only look to the Decorator Crab.  You see, the crab, all by itself, is a typical crab: an arthropod invertebrate with a hard carapace, spikes on its exoskeleton, segmented body with jointed limbs, five pairs of legs, the first pair of legs usually containing chelae (the familiar pincers and claws).  There are all kinds of crabs in salt, fresh, and brackish water.  They tend to be well adapted to their environment.  But they are also tasty and high in protein value, thus having a number of predators.  So the Decorator Crab has determined that what evolution has provided is not enough–it borrows features and items from its environment to enhance its capabilities as a defense mechanism.  There is a price to being a Decorator Crab.  Encrustations also become encumbrances.  Where crabs have learned to enhance their protections, for example by attaching toxic sponges and anemones, these enhancements may also have made them complacent because, unlike most crabs, Decorator Crabs tend not to scurry from crevice to crevice, but walk awkwardly and more slowly than many of their cousins in the typical sideways crab gait.  This behavior makes them interesting, popular, and comical subjects in both public and private aquaria.

In a way, we see an analogy in the case of software.  In earlier generations of software design, applications were generally built to solve a particular challenge that mimicked the line and staff structure of the organizations involved–designed to fit its environmental niche.  But over time, of course, people decide that they want enhancements and additional features.  The user interface, when hardcoded, must be adjusted every time a new function or feature is added.

Rather than rewriting the core code from scratch–which will take time and resource-consuming reengineering and redesign of the overall application–modules, subroutines, scripts, etc. are added to software to adapt to the new environment.  Over time, software takes on the characteristics of the Decorator Crab.  The new functions are not organic to the core structure of the software, just as the attached anemone, sponges, and algae are not organic features of the crab.  While they may provide the features desired, they are not optimized, tending to use brute force computing power as the means of accounting for lack of elegance.  Thus, the more powerful each generation of hardware computing power tends to provide, the less effective each enhancement release of software tends to be.
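A toy sketch of this accretion, with invented routines standing in for real features–each bolted-on layer works, but none is organic to the core design:

```python
# Illustrative only: a core routine accretes bolt-on wrappers over time,
# mimicking the Decorator Crab. The function names are invented.

def core_report(data):
    """Release 1: the original, purpose-built routine."""
    return sum(data)

def report_with_percent(data, total):
    """Release 2: a 'percentage' feature bolted on as a wrapper."""
    return core_report(data) / total * 100

def report_with_percent_rounded(data, total):
    """Release 3: another wrapper bolted onto the previous wrapper."""
    return round(report_with_percent(data, total), 1)

# Each release calls down through every earlier layer; an integrated
# redesign would compute the result directly, without the encrustation.
print(report_with_percent_rounded([3, 2], 10))  # -> 50.0
```

Multiply this pattern across hundreds of features and releases, and the brute-force overhead described above is the result.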

Furthermore, just as a crab that looks less like a crab requires more effort and intelligence to identify, so too with software.  The greater the encrustation of features that tend to attach themselves to an application, the greater the effort that is required to use those new features.  Learning the idiosyncrasies of the software is an unnecessary barrier to the core purposes of software–to increase efficiency, improve productivity, and improve speed.  It serves only one purpose: to increase the “stickiness” of the application within the organization so that it is harder to displace by competitors.

It is apparent that this condition is not sustainable–or acceptable–especially where the business environment is changing.  New software generations, especially Fourth Generation software, provide opportunities to overcome this condition.

Thus, as project management and acquisition professionals, the primary considerations that must be taken into account are optimization of computing power and the related consideration of sustainability.  This approach militates against complacency because it influences the environment of software toward optimization.  Such an approach will also allow organizations to more fully realize the benefits of Moore’s Law.

Over at AITS.org — Maxwell’s Demon: Planning for Obsolescence in Acquisitions

I’ve posted another article at AITS.org’s Blogging Alliance, this one dealing with the issue of software obsolescence and the acquisition strategy that applies given what we know about the nature of software.  I also throw in a little background on information theory and the physical limitations of software as we now know it (virtually none).  As a result, we require a great deal of agility inserted into our acquisition systems for new technologies.  I’ll have a follow up article over there that provides specifics on acquisition planning and strategies.  Random thoughts on various related topics will also appear here.  Blogging has been sporadic of late due to op-tempo but I’ll try to keep things interesting and more frequent.

Brother Can You (Para)digm? — Four of the Latest Trends in Project Management

At the beginning of the year we are greeted with the annual list of hottest “project management trends” prognostications.  We are now three months into the year and I think it worthwhile to note the latest developments that have come up in project management meetings, conferences, and in the field.  Some of these are in alignment with what you may have seen in some earlier articles, but these are four that I find to be most significant thus far, and there may be a couple of surprises for you here.

a.  Agile and Waterfall continue to duke it out.  As the term Agile is adapted and modified to real world situations, the cult purists become shriller in attempting to enforce the Manifesto that may not be named.  In all seriousness, it is not as if most of these methods had not been used previously–and many of the methods, like scrum, also have their roots in Waterfall and earlier methods.  A great on-line overview and book on the elements of scrum can be found at Agile Learning Labs.  But there is a wide body of knowledge out there concerning social and organizational behavior that is useful in applying what works and doesn’t work.  For example, the observational science behind span of control, team building, the structure of the team in supporting organizational effectiveness, and the use of sprints in avoiding the perpetual death-spiral of adding requirements and not defining “done”, are best practices that identify successful teams (depending how you define success–keeping in mind that a successful team that produces the product often still fails as a going concern, and thus falls into obscurity).

All that being said, if you want to structure these best practices into a cohesive methodology, call it Agile, Waterfall or Harry, and can make money at it while helping people succeed in a healthy work environment, all power to you.  In IT, however, it is this last point that makes this particular controversy seem like we’ve been here before.  When woo-woo concepts like #NoEstimates and self-organization are thrown about, the very useful and empirical nature of the enterprise enters into magical thinking and ideology.  The mathematics of unsuccessful IT projects has not changed significantly since the shift to Agile.  From what one can discern from the so-called studies on the market, which are mostly anecdotal or based on unscientific surveys, somewhere north of 50% of IT projects fail, failure defined as behind schedule and over cost, or failing to meet functionality requirements.

Given this, Agile seems to be the latest belle of the ball, and virtually any process improvement introducing scrum, teaming, and sprints seems to get the tag.  Still, there is much blood and thunder being expended for a result that amounts to the same mathematical chance of success as a coin flip (and probably less).  I think for the remainder of the year the more acceptable and structured portions of Agile will get the nod.

b.  Business technology is now driving process.  This trend, I think, is why process improvements like Agile, that claim to be the panacea, cannot deliver on their promises.  As best practices they can help organizations avoid a net negative, but they rarely can provide a net positive.  Applying new processes and procedures while driving blind will still run you off the road.  The big story in 2015, I think, is the ability to handle big data and to integrate that data in a manner to more clearly reveal context to business stakeholders.  For years in A&D, DoD, governance, and other verticals engaged in complex, multi-year project management, we have seen the push and pull of interests regarding the amount of data that is delivered or reported.  With new technologies this is no longer an issue.  Delivering a 20GB file has virtually the same marginal cost as delivering a 10GB file.  Sizes smaller than 1G aren’t even worth talking about.

Recently I heard someone refer to the storage space required for all this immense data–it’s immense, I tell you!  Well, storage is cheap, and large amounts of data can be accessed through virtual repositories using APIs and smart methods of normalizing data that require integration at the level defined by the systems’ interrelationships.  There is more than one way to skin this cat, and more methods for handling bigger data are coming on-line every year.  Thus, the issue is not more or less data, but better data regardless of the size of the underlying file or table structure or the amount of information.  The first go-round of this process will require that all of the data already available in repositories be surveyed to determine how to optimize the information it contains.  Then, once transformed into intelligence, to determine the best manner of delivery so that it provides both significance and context to the decision maker.  For many organizations, this is the question that will be answered in 2015 and into 2016.  At that point it is the data that will dictate the systems and procedures needed to take advantage of this powerful advance in business intelligence.

c.  Cross-functional teams will soon morph into cross-functional team members.  As data originating from previously stove-piped competencies is integrated into a cohesive whole, the skillsets necessary to understand the data, know how to convert it into intelligence, and act appropriately on that intelligence will begin to shift to require a broader, multi-disciplinary understanding.  Businesses and organizations will soon find that they can no longer afford the specialist who understands only cost, schedule, risk, or any other single specialty dictated by the old line-and-staff and division-of-labor practices of the 20th century.  Businesses and organizations that place short term, shareholder, and equity holder interests ahead of the business will soon find themselves out of business in this new world.  The same will apply to organizations that continue to suppress and compartmentalize data.  This is because a cross-functional individual who can maximize the use of this new information paradigm requires education and development.  Achieving this goal dictates the establishment of a learning organization, which requires investment and a long term view.  A learning organization develops its members to become competent in each aspect of the business, through successive assignments of greater responsibility and complexity.  For the project management community, we will increasingly see the introduction of more Business Analysts and, I think, the introduction of the competency of Project Analyst to displace–at first–both cost analyst and schedule analyst.  Other competency consolidation will soon follow.

d.  The new cross-functional competencies–Business Analysts and Project Analysts–will take on an increasing role in design and deployment of technology solutions in the business.  This takes us full circle in our feedback loop that begins with big data driving process.  We are already seeing organizations that have implemented the new technologies and are taking advantage of new insights not only introducing new multi-disciplinary competencies, but also introducing new technologies that adapt the user environment to the needs of the business.  Once the business and project analyst has determined how to interact with the data and the systems necessary to the decision-making process that follows, adaptable technologies that reject the hard-coded “one size fits all” user interface are finding, and will continue to find, wide acceptance.  As fewer off-line and one-off utilities are needed to fill the gaps left by inflexible hard-coded business applications, innovative approaches to analysis will be mainstreamed into the organization.  Once again, we are already seeing this effect in 2015 and the trend will only accelerate as possessing greater technological knowledge becomes an essential element of being an analyst.

Despite dire predictions regarding innovation, it appears that we are on the cusp of another rapid shift in organizational transformation.  The new world of big data comes with both great promise and great risks.  For project management organizations, the key in taking advantage of its promise and minimizing its risks is to stay ahead of the transformation by embracing it and leading the organization into positioning itself to reap its benefits.

One-Trick Pony — Software apps and the new Project Management paradigm

Recently I have been engaged in an exploration and discussion regarding the utilization of large amounts of data and how applications derive importance from that data.  In an on-line discussion with the ever insightful Dave Gordon, I first postulated that we need to transition into a world where certain classes of data are open so that the qualitative content can be normalized.  This is what for many years was called the Integrated Digital Environment (IDE for short).  Dave responded with his own post at the AITS.org blogging alliance, countering that while such standards are necessary in very specific and limited applications, that modern APIs provide most of the solution.  I then responded directly to Dave here, countering that IDE is nothing more than data neutrality.  Then also at AITS.org I expanded on what I proposed to be a general approach in understanding big data, noting the dichotomy in the software approaches that organize the external characteristics of the data to generalize systems and note trends, as opposed to those that are focused on the qualitative content within the data.

It should come as no surprise then, given these differences in approaching data, that we also find similar differences in the nature of applications found on the market.  With the recent advent of on-line and hosted solutions, there are literally thousands of applications in some categories of software that propose to do one thing with data–narrowly focused, one-trick-pony applications that can supposedly be mixed and matched to provide an integrated solution.

There are several problems with this sudden explosion of applications of this nature.

The first is in the very nature of the explosion.  This is a classic tech bubble, albeit limited to a particular segment of the software market, and it will soon burst.  As soon as consumers find that all of that information traveling over the web with the most minimal of protections is compromised by the next trophy hack, or that too many software providers have entered the market prematurely–not understanding the full needs of their targeted verticals–it will hit like the last one in 2000.  It only requires a precipitating event that triggers a tipping point.

You don’t have to take my word for it.  Just type a favorite keyword into your browser now (and I hope you’re using a VPN doing it) for a type of application for which you have a need–let’s say “knowledge base” or “software ticket systems.”  What you will find is that there are literally hundreds if not thousands of apps built for this function.  You cannot test them all.  Basic information economics, however, dictates that you must invest some effort in understanding the capabilities and limitations of the systems on the market.  Surely there are a couple of winners out there.  But basic economics also dictates that 95% of those presently in the market will be gone in short order.  Being the “best” or the “best value” does not always win in this winnowing out.  Certainly chance, the vagaries of your standing in the search engine results, industry contacts–virtually any number of factors–will determine who is still standing and who is gone a year from now.

Aside from this obvious problem with the bubble itself, the approach of the application makers harkens back to an earlier generation of one-off applications that attempt to achieve integration through marketing while actually achieving, at best, only old-fashioned interfacing.  In the world of project management, for example, organizations can ill afford to revert to the division of labor, which is what would be required to align with these approaches in software design.  It’s almost as if, having made their money in an earlier time, software entrepreneurs cannot extend themselves beyond their comfort zones to take advantage of the last TEN software generations that provide new, more flexible approaches to data optimization.  All they can think to do is party like it’s 1995.

The new paradigm in project management is to get beyond the traditional division of labor.  For example, is scheduling such a highly specialized discipline, rising to the level of a profession, that it stands apart from all other aspects of project management?  Of course not.  Scheduling is a discipline–a sub-specialty, actually–that is inextricably linked to all other aspects of project management in a continuum.  The artifacts of the process of establishing project systems and controls constitute the project itself.

No doubt there are entities and companies that still ostensibly organize themselves into specialties as they did twenty years ago: cost analysts, schedule analysts, risk management specialists, among others.  But given that the information from these systems–schedule, cost management, project financial management, risk management, technical performance, and all the rest–can be integrated at the appropriate level of their interrelationships to provide a cohesive, holistic view of the complex system that we call a project, is such division still necessary?  In practice the industry has already begun positioning itself for integration, realizing the urgency of making the shift.

For example, to utilize an application to query cost management information in 1995 was a significant achievement during the first wave of software deployment that mimicked the division of labor.  In 2015, not so much.  Introducing a one-trick pony EVM “tool” in 2015 is laziness–hoping to turn back the clock in ignoring the obsolescence of such an approach–regardless of which slick new user interface is selected.

I recently attended a project management meeting of senior government and industry representatives.  During one of my side sessions I heard a colleague propose the discipline of Project Management Analyst in lieu of previously stove-piped specialties.  His proposal is a breath of fresh air in an industry that develops and manufactures the latest aircraft and space technology, but has hobbled itself with systems and procedures designed for an earlier era that no longer align with the needs of doing business.  I believe the timely deployment of systems has suffered as a result during this period of transition.

Software must lead, and accelerate the transition to the new integration paradigm.

Thus, in 2015 the choice is not between data that adheres to conventions of data neutrality, or to those that utilize data access via APIs, but in favor of applications that do both.

It is not between different hard-coded applications that provide the old “what-you-see-is-what-you-get” approach.  It is instead between such limited hard-coded applications and those that provide flexibility, so that business managers can choose from a nearly unlimited palette of options for how data–converted into information–is made available to users or classes of users based on their role and need to know, aggregated at the appropriate level of detail for the consumer to derive significance from the information being presented.

It is not between “best-of-breed” and “mix-and-match” solutions that leverage interfaces to achieve integration.  It is instead between such solution “consortiums” that drive up implementation and sustainment costs, bringing with them high overhead, against those that achieve integration by leveraging the source of the data itself, reducing the number of applications that need to be managed, allowing data to be enriched in an open and flexible environment, achieving transformation into useful information.

Finally, the choice isn’t among applications that save their attributes in a proprietary format so that the customer must commit themselves to a proprietary solution.  Instead, it is between such restrictive applications and those that open up data access, clearly establishing that it is the consumer that owns the data.
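To make the “role and need to know” idea concrete, here is a hedged sketch; the roles, field names, and aggregation rules are invented for illustration, not drawn from any particular product:

```python
# Illustrative only: the same underlying data, aggregated differently per
# user role, rather than one hard-coded view for everyone.

records = [
    {"project": "A", "task": "design", "cost": 120},
    {"project": "A", "task": "test", "cost": 80},
    {"project": "B", "task": "design", "cost": 50},
]

# Each role is configured with its own level of aggregation.
role_views = {
    "executive": {"group_by": "project"},  # summary level
    "analyst": {"group_by": "task"},       # working level
}

def view_for(role):
    """Total costs grouped at the level of detail configured for the role."""
    key = role_views[role]["group_by"]
    totals = {}
    for record in records:
        totals[record[key]] = totals.get(record[key], 0) + record["cost"]
    return totals

print(view_for("executive"))  # -> {'A': 200, 'B': 50}
```

The data itself is untouched and unowned by any single view; only the configured presentation differs by role, which is the essence of the consumer-owns-the-data position above.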

Note: I have made minor changes from the original version of this post for purposes of clarification.

What did I miss over the holiday? — Dave Gordon at AITS.org

Great post by Dave Gordon and a valid point for anyone undertaking development:  determine what “done” looks like.  Understand that people and systems are not perfect, that version 1.0 doesn’t need to do everything.  Here is Dave for the rest:

“Perfectionists are sadomasochists. They are masochists, because they rarely reach perfection, and can’t maintain it for more than an instant when they do. So they are continually frustrated, in addition to being obsessed. And they are sadists, because they drive everyone around them to pursue the same silly goals that they obsess over….” Read more.