Same as it Ever Was: AI is not Artificial Intelligence

The world can’t stop talking about Artificial Intelligence. The marketing and advertising are so loud, and have so thoroughly infiltrated government policy, that this behemoth seems to be taking over the world and contributing to the destruction of the American workplace.

So, will Skynet and the robots take over and end civilization as we know it? The short answer is no, but the longer answer is a bit more complicated.

Gartner released a report last July noting the hype cycle behind AI and speculating whether the AI market would shift to building foundational innovations. True to form, as of last month, Gartner found that only “28% of AI projects deliver ROI and most fail to deliver results.” In other words, the hype is a smokescreen to justify what Silicon Valley executives and billionaire investors wish for in their fever dreams.

Given the rapidity of new AI apps and agentic solutions, and the controversy regarding Anthropic, my last post was dedicated to proposing an AI development manifesto to ensure that this powerful technology, which has been unleashed on an unprepared general populace, does not undermine a foundational aspect of being human: our autonomy.

For those of us in the technology industry who keep up with trends and news, it seems that there is a new set of risks every day: from Anthropic’s Mythos to a group, supported by some of the most influential high-tech executives and influencers, that wants to speed up the development of a digital superintelligence to kill us all.

Not So Intelligent, But Impressive and Sometimes Dangerous

Should we be worried? Is it really a form of non-human intelligence? To the average non-technologist, which describes 95% of the population, the combination of the AI designation, the opaqueness of its underlying code, and the release of AI into the general market as a chatbot led many to believe that it was a robotic superintelligence that could teach itself. This is a normal human response.

In the words of Arthur C. Clarke, “Any sufficiently advanced technology is indistinguishable from magic.”

And so, armed with behavioral marketing knowledge and expansive pathways to mass influence through social networking, the largest technology companies decided to use the general public to test their unproven products in one giant beta test. Given that these technologies need data beyond what the normal search crawlers supply in order to refine their responses, these companies then proceeded to pillage data and intellectual property from public sites for their own financial gain.

Still, setting aside the ethical and legal flexibility of the well-financed sociopaths at the center of the technology, what are these AI tools, and why is everyone so excited about, or fearful of, them?

The best way to understand a system is to understand its history, its core properties, and how it behaves.

Regarding the history of AI, one would think, from the tech hype machine, that AI sprang from the head of OpenAI. This is definitely not the case, just as the debate regarding who invented the internet flared up over a statement by former Senator and Vice President Al Gore.* The real answer in both cases: many people were involved over the course of many years, with incremental changes, as with any digital system, some more notable than others.

*Often misquoted as “I invented the internet,” Gore actually said, “During my service in the United States Congress, I took the initiative in creating the Internet,” meaning that he wrote and supported legislation that opened up ARPANET to broader commercial use and the privatization of internet backbones.

In the case of AI, the term “artificial intelligence” was coined in 1956 at a summer conference at Dartmouth College in Hanover, New Hampshire. There, a program named Logic Theorist (LT), written by Allen Newell, J.C. Shaw, and Herbert Simon, was demonstrated; it was capable of proving elementary theorems in propositional calculus.

But the concept of AI had been around for some time among engineers and early technologists, such as Alan Turing, who designed a chess-playing program before any machine existed that could run it. This, of course, is aside from his work in the field in helping to win World War II. Later, in his 1950 paper “Computing Machinery and Intelligence,” Turing provided a basis for judging whether a machine or program is indeed intelligent from the human perspective or just a sophisticated parlor trick.

Over time, more sophisticated programs have become possible thanks to the rapid evolution of hardware processing and the miniaturization of hardware components. We see this evolution in the introduction of ever more capable weak AI solutions such as Joseph Weizenbaum’s ELIZA (1966), IBM’s Deep Blue (1996), and IBM’s Watson (2011).

As to AI’s core properties, it is sufficient to say, I think, that to get where we are today required a lot of different computing capabilities to coalesce. A good overview (which includes much of what I just summarized) is found here and here. When all is said and done, and the smoke clears, the fact is that what we have is not what we say it is. There are different kinds of “intelligence” (and sentience) in both biological entities and artificial ones.

For machines, according to philosopher John Searle in his 1980 paper “Minds, Brains, and Programs,” there is weak AI and strong AI. The concept was further developed by the philosopher and cognitive scientist Daniel Dennett in his book “Consciousness Explained.” (Full disclosure: I corresponded with Dr. Dennett while exploring the concept during my active duty in the early to mid-1990s.)

In summary, according to Searle, weak AI is a system that has been designed to perform specific tasks or to simulate intelligence without possessing a true understanding, consciousness, or any general mental state. It is a sophisticated algorithm that is designed to solve specific human problems.

Strong AI, which many non-technologists assume is what is being sold (it is not), is a system that genuinely possesses understanding. A strong AI would have the ability to grasp the meaning, nature, significance or causes of something, and then to form accurate internal representations that connect facts, concepts, events, and values into a coherent concept or set of concepts that enable explanation, prediction, or insight. In contrast to weak AI, it would have consciousness, a general mental state, and possess general intellectual capabilities comparable to a human, not simply simulating one.

Thus, what we have in front of us, from OpenAI’s ChatGPT to Anthropic’s Claude to Meta’s Llama to xAI’s Grok and all the others, is weak AI, some weaker than others. The weaknesses of these AI solutions are many: they are unable to count, because their learning components are built from probabilistic algorithms, though these vary from solution to solution as well. In addition, the basis of AI’s simulated “reasoning abilities,” the chain-of-thought language model, was originally discovered and tested in the 4chan gaming community, as noted in a recent Atlantic article by Alex Reisner.
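A toy sketch can make the “probabilistic, not counting” point concrete. This is a generic illustration of weighted next-token sampling, with a made-up probability table; it is not any vendor’s actual model:

```python
import random

# Toy next-token "model": given a two-word context, emit whichever
# token a weighted draw selects. There is no counter or arithmetic
# unit anywhere in this loop, only a probability table (invented
# here for illustration) over what tends to follow similar contexts.
NEXT_TOKEN_PROBS = {
    ("there", "are"): {"three": 0.5, "four": 0.3, "two": 0.2},
}

def next_token(context, rng):
    candidates = NEXT_TOKEN_PROBS[context]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
samples = [next_token(("there", "are"), rng) for _ in range(10)]
# Whatever "count" comes out is the token the dice favored, not a tally.
```

The same mechanism, scaled up by many orders of magnitude, is what produces fluent prose; the fluency is real, but nothing in it is adding things up.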

Impressive but not intelligent, so nothing to worry about, right?

In summary, what we call AI is the same old thing: these are very sophisticated algorithmic tools, and tools that can be of use, especially if we don’t lose our heads by being seduced by emotionally sentimental dialogue managers. No, you are not a genius, and, as a colleague pointed out in a recent meeting of CEOs, for subjects in which you have deep knowledge the AI is 70% correct, while for subjects in which you possess no knowledge (our ignorance), it merely appears to be 100% correct.

But now for the dangerous part: these tools, because of the exponential growth of processing power and the corresponding reduction in resource consumption per unit of data, can, in many cases, exceed the ability of any one human being to understand what is being calculated or proposed, and whether the response is correct or contains fatal errors were it to be relied upon. In cases where humans do have deep knowledge, AI in the wild has resulted in “workslop.”

But that last is among the more mundane effects of the technology’s adoption. Project Maven is one of the more troubling developments along with the centralization of AI control into the hands of a few.

Beyond Manifestos: A Decentralized, Portable, and Controllable AI

Lord Acton wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” In my experience this aphorism has proven true across human history, whether in the public sphere or the private. It is true because any individual who believes themselves to have absolute power, to be omniscient or omnipotent in any walk of life, is severely handicapped by mental illness. They may be personable in certain circumstances and functional in business or social situations but, at core, such an individual is infantile in their thinking and emotional maturity. Eventually this façade leads to failure and ruin. The leaders of our technology companies seem to act in this manner. The fact that there are but a few of them will, inevitably, result in corruption.

As a career commissioned U.S. Navy officer, I was educated in the ways of leadership within the context of a democratic society and a professional military. At the core of leadership is the acceptance of humility. It is the basis of the well-adjusted and fully functional person. If you do not already know this when you are first placed in a position of leadership, you soon will.

Leadership does not operate effectively through fear or dictatorial power. At its core, it requires others to possess faith, and for the leader to possess that same kind of faith based on humility.

I am not speaking of faith from the religious or metaphysical perspective. I mean rational faith. When someone follows your orders or guidance in a rigorous or dangerous situation, they have displayed rational faith: rational faith in the leader, rational faith in the other people involved in the operation, rational faith in their own abilities, and rational faith in the system. The basis of rational faith is the acceptance of objectivity in place of narcissism. The result is a rational faith of equals, not the irrational faith between a father and child, mother and child, among siblings, toward power, or toward a god, but a faith among fully autonomous thinking adults.

This last is extremely difficult in our own time because of the pervasiveness of social networking and the role of chatbots in our daily lives and work. Today our young people, during their most impressionable years, are influenced and encouraged in their narcissism. Adults and society in general, our economic system especially, encourage and reward narcissism and compliance with power.

We experienced the logical effect of this condition from Cambridge Analytica’s influence manipulation scandal (see here and here), and through Facebook user manipulation revealed by Frances Haugen. In these situations, a user is not just a user (or customer) but participates in providing data for use by others. Thus, the individual customer and all of the things that define that individual, becomes a commodity, but without compensation or clear consent.

Given this situation, there are three very good reasons for developing an alternative to Large Language Models (LLMs), one that supports decentralized access to data and the democratization of AI technology under individual human control.

  1. Economically, centralized LLMs require a large amount of data and money to be of any value and to return ROI (which they do poorly). Hence the current actions of the large technology companies seeking to monopolize data through data centers. But more uncurated data will have two effects: it will counteract any marginal value from the use of AI in the wild, and it will pollute source data with garbage, disinformation, and misinformation, undermining its objective informational and economic value.
  2. There are types of data and activities that require security and privacy. This sensitive data is usually of the most value to the user in practical terms. Privacy is a basic human right and essential to human autonomy. The collection of this data by outside agents through the very act of using an AI undermines these essential foundational concepts that define us. At its most basic, the central question is whether we own our data. This issue applies to both individuals and organizations.
  3. Centralized LLMs are not trustworthy. It is unclear where answers come from and the quality of responses varies widely. In expert systems, this uncertainty is a risk that often eludes detection and, thus, mitigation and handling. The larger the dataset, the higher the risk due to the core operation of relational probability within its algorithms.

The necessary step in the advance of AI as a tool useful to people is the achievement of rational faith in the system. This must be earned. As long as the sources and methods in these technologies are centralized and opaque, promoted by power dynamics, and removed from the individual, resistance to the technology will grow. The adoption by management of overblown promises and results simply further undermines trust and efficiency.

Data is the Key, but AI Must Not Be Allowed to Undermine the Social Fabric

In my 2015 article “Big Data and the Repository of Babel,” I noted that technologists must not only collect data but also curate it, and understand its nature and intended use, in order to transform it into usable information and intelligence.

Beginning with the introduction of most business intelligence (BI) systems in the 1980s, the initial method was to brute-force the data because of the proprietary barriers and incompatible languages in use. In this approach, data is collected and “flattened,” then processed, oftentimes by people, who recondition the data so that different terminologies and lexicons can be reconciled.

With the advent of more sophisticated languages that allowed for interoperability, despite the efforts of software manufacturers to “stay sticky,” the innate conditioning of datasets could be captured through the mediation of common transaction sets and schemas. Certainty in valid results within a domain or set of domains rose dramatically, while the labor necessary to drive transformation was largely eliminated. The ROI in this approach is obvious.
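As a minimal sketch of what schema mediation buys you (the field and vendor names below are invented for illustration and are not drawn from any actual transaction set), each source maps once to the common form, replacing pairwise manual reconciliation between every two systems:

```python
# Two systems report the same fact in different lexicons. A shared
# schema lets each source declare one mapping to the common fields;
# after mediation, records from different vendors compare directly.
COMMON_FIELDS = ("task_id", "budgeted_cost", "actual_cost")

SOURCE_MAPPINGS = {
    "vendor_a": {"TaskNo": "task_id", "BCWS": "budgeted_cost", "ACWP": "actual_cost"},
    "vendor_b": {"id": "task_id", "planned": "budgeted_cost", "spent": "actual_cost"},
}

def to_common(source, record):
    # Rename each source field to its common-schema equivalent,
    # dropping anything the schema does not define.
    mapping = SOURCE_MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_common("vendor_a", {"TaskNo": "1.1", "BCWS": 100.0, "ACWP": 90.0})
b = to_common("vendor_b", {"id": "1.1", "planned": 100.0, "spent": 90.0})
# a == b: identical records once mediated through the schema.
```

The design point is that adding an Nth source costs one new mapping, not N-1 new reconciliation efforts.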

Yet, with the advent of AI and more flexible BI systems, the practice of data governance remains stuck in the 1980s centralized brute-force approach. Newer, more sophisticated BI tools such as Tableau and Power BI serve to support data silos while purporting to break them down by reconciling data across the enterprise. The result, intended or not, is that small IT fiefdoms arise in organizations, further siloing solutions. The large IT companies get to continue to control their customers’ data while allowing a limited amount of integration. Consulting companies profit by fitting out as many seats as possible to perform data grunt work and deliver glorified PowerPoint and Excel reports.

In the U.S., a common schema has existed for some time for project, program, and portfolio management (PPM) relating to cost and schedule performance, beginning with the ANSI X12 839 transaction set, which evolved first into the IPMR XML and is currently covered by the JSON IPMDAR. Extensions have been built from this core structure to transform data into non-proprietary standardized formats for other related domains, such as risk and some systems engineering specialties. Data in the public interest is particularly important in this case.

But the U.S. lags behind the extensive collaborative efforts within the European Union under its Data Governance Act, the ISA² / Interoperability Centre, CEN sector standards, and the European Data Innovation Board (EDIB). In addition, the user community, through the World Wide Web Consortium (W3C), sponsors common schemas across the web via the open source schemas at schema.org. Among the leaders in this field is Tim Berners-Lee, who, while employed at CERN, created HTTP, HTML, URLs, and the first web server and browser, and who is sponsoring a parallel effort he calls 5 Star Open Data.
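For instance, a dataset can be described in the schema.org vocabulary as JSON-LD, one of the machine-readable, non-proprietary forms the 5 Star Open Data ladder calls for. Here it is built in Python, with the name and URLs as illustrative values:

```python
import json

# Minimal schema.org description of a dataset, serialized as JSON-LD.
# "Dataset", "DataDownload", and the property names are real
# schema.org types; the values are made up for this example.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example project cost data",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "application/json",
        "contentUrl": "https://example.org/data/costs.json",
    },
}
print(json.dumps(dataset, indent=2))
```

Because the vocabulary is open and shared, any consumer, human or machine, can interpret the record without knowing anything about the publisher’s internal systems.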

The combination of security, where needed, and openness allows data to be verified and trusted. Linked open data removes proprietary barriers around the data its creators produce. Removing barriers to one’s own data allows its author, whether a government agency, an individual, or a private organization or corporation, to realize its full value.

Thus, in AI there are two competing visions in play: a resource-heavy, centralized monopolization of data, information, and intelligence scraped from people and other entities without their consent, along with the exploitation of public resources for private gain; or a resource-conserving, collaborative, open use of data, with control of one’s data residing with the author(s) and with those who have legally acquired such data, whether for private purposes or in the public interest.

The first vision is based on the concept of the centralization of power and wealth, with the resulting harmful effects on the social fabric where a very few individuals and companies influence elections, shape policy, control markets, define the limitations of public debate and expression, and wield power. According to the linked article, in 1987, billionaires held wealth equal to 3% of global GDP. Today, they own the equivalent of 16% of global GDP.

A New Vision

While I have been writing about, composing public policy for, and working in the technology field since the early 1980s, I also run a small technology company by the name of SNA Software LLC. Within the next few days, in collaboration with our technology partner Salutori Labs LLC, we will be releasing a portable, self-contained, non-internet-based AI assistant that can be carried on a dongle as well as used on a PC. The AI is called Salutori, or “Sal.”

Combined with an environment that uses a COTS solution to transform data into open linked data, Sal allows any dataset to be leveraged for maximum value, informing the user while saving time, with a high degree of confidence in the results, since the process is transparent and comprehensible to the user.

This AI digital assistant can be trained on specific libraries of curated data and references of your choosing. So, let’s say you are a project management analyst who wants to verify, based on the analytics you are sourcing, whether you are in compliance with your contract or with the latest corporate or government standards; this solution will be your assistant, providing references and an analysis.

Or perhaps you are a lawyer with a particular way of performing research for the various specialties of law in which you engage with the public. Your research materials become the library on which the AI is trained, reducing the time to find citations and providing the necessary information so that you can readily and quickly determine whether the information provided is applicable.

There are many other use cases for this kind of AI solution. In all cases, the data being used and the interaction between the user and the AI is not being shared or mined for manipulative purposes or monetization. Once deployed, it is under the control of the user at the service of the user. And as with all solutions based on open linked data, if something new and better comes along, the user does not lose control of their institutional or local knowledge, nor need to rely on a third party to access it.
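SNA has not published Sal’s internals, so purely as a generic sketch, the architectural point of a local, curated-library assistant can be as simple as scoring term overlap between a query and each document in the library, with nothing ever leaving the machine. The documents and scoring below are invented for illustration:

```python
from collections import Counter

# Generic sketch of local, citation-oriented retrieval over a curated
# library. This is not Sal's actual implementation; the point is that
# both the library and every query stay on the user's machine.
LIBRARY = {
    "far_clause_52": "contract clause requires earned value management reporting",
    "memo_9": "quarterly schedule variance must be explained in a narrative",
}

def overlap(query, text):
    # Count terms shared between query and document.
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def best_citation(query):
    # Rank every document in the local library; nothing is sent anywhere.
    return max(LIBRARY, key=lambda doc: overlap(query, LIBRARY[doc]))

source = best_citation("which clause covers earned value reporting")
```

A production system would use far better scoring, but the privacy property comes from the architecture, not the scoring: there is no external endpoint in the loop to mine the interaction.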

For additional information or any questions on this approach or the new product, go to the SNA Software LLC website and look for the imminent announcement of Sal on its news page.

OK Computer — The need for an AI Manifesto


Much has changed in the technology business since I began this blog in 2014 in conjunction with my regular articles on the old AITS blog pages. Today, AI and technology-related spending contributes significantly to GDP growth, according to the St. Louis Fed. Investments in data centers and new types of nuclear plants seem to be accelerating IT’s impact on the economy at a rate not seen since the dot-com boom.

The risks associated with this sudden economic reliance on a particular slice of the information technology industry are many. They include the many issues relating to data theft and breaches of privacy. The monetization of personal and proprietary information represents an historic theft, not just of the commons, but of the personal, business, and incidental data collected that tracks our every move, gesture, and habit. The question of potential abuse is no longer a notional one. Oppressive, kleptocratic, neo-liberal, and totalitarian regimes around the world use these technologies to monitor and control their populations. The Cambridge Analytica scandal was simply a baseline pilot for what is now wholesale open season on data and information collected and controlled by large corporations and collectives of AI acolytes who apparently hold a flexible view of ethics and a hostile view of equality, democracy, human rights, freedom, and liberty.

SNA Software LLC, in cooperation with its partner Salutori Labs LLC, has created a new type of AI tool that is both personal and portable. Details will be forthcoming over the next few weeks on its release. In addition, SNA Software has upgraded its core EnvisionData products relating to data transformation, visualization, and analysis to include rapid AI-generated production of applications based on curated and validated data within specific domains, reducing the time to release new capabilities, both on the desktop and on the web, to a matter of days in lieu of the months or years usual with traditional analytical and coding methods.

A Suggestion for an AI Manifesto

Through its extensive experience in achieving what in the past would have taken a much larger staff and many more years, SNA is advancing a draft AI Manifesto. SNA and Salutori adhere to these laws and implementation principles. I am seeking other technology companies to borrow from or sign on to this manifesto as well, and I will be advancing it at conferences and meetings in the future, as will my colleagues.

The AI Manifesto

We hereby declare the proposition that the purpose of AI is to advance human understanding and cooperation. Thus, we adhere to, and advocate for the adoption of, the following Laws:

Law 1: AI must prioritize human safety and well-being.

  • Do: Ensure that all AI systems are designed to protect human life and enhance quality of life.
  • Don’t: Place AI capabilities above the well-being of individuals or communities.

Law 2: AI must obtain informed consent from users.

  • Do: Ensure all interactions with AI are transparent, and users understand what data is being collected and how it will be used.
  • Don’t: Use AI in ways that violate user trust or personal autonomy.

Law 3: AI must operate within defined ethical boundaries.

  • Do: Define clear boundaries for AI operations to prevent unintended consequences and ensure accountability.
  • Don’t: Allow AI to act autonomously in ways that could harm individuals or society.

Law 4: AI should enhance human cooperation and understanding.

  • Do: Design AI systems that foster meaningful interactions and promote collaboration among diverse groups.
  • Don’t: Create AI systems that foster oppression, division, misinformation, or conflict.

Law 5: AI must remain under human oversight.

  • Do: Maintain human oversight and control over AI systems to ensure adherence to ethical standards and societal norms.
  • Don’t: Delegate decision-making authority to AI systems without human intervention.

The following enabling values shall be implemented.

AI systems shall always:

  1. Focus on Human Well-being: Ensure AI advancements prioritize enhancing human quality of life, understanding, and cooperation.
  2. Embrace Ethical Responsibility: Hold developers and users accountable for AI systems, aligning actions with ethical standards and public benefit.
  3. Promote Transparency: Communicate openly about AI systems, ensuring their decision-making processes are understandable and accessible to users.
  4. Ensure Safety and Security: Implement rigorous measures to safeguard against risks to human life and the environment, adhering to principles akin to Asimov’s laws.
  5. Limit Autonomy: Prevent AI from self-developing or operating autonomously; establish clear boundaries to mitigate unintended consequences. All AI systems shall have a mechanism that prevents them from being self-perpetuating and self-governing, each given automated code to, in time, reduce its resources and impose an end-of-life.
  6. Encourage Collaboration: Design AI systems that enhance cooperation among individuals, organizations, and cultures, fostering shared goals.
  7. Advocate Inclusivity: Strive to make AI technologies accessible to diverse populations, promoting equitable benefits and reducing disparities.
  8. Support Lifelong Learning: Enable AI systems to learn from human feedback and experiences, adapting in ways that uphold human values and ethics.
  9. Champion Environmental Stewardship: Prioritize sustainable practices in the development and deployment of AI technologies, considering their environmental impact.
  10. Respect Privacy: Uphold the dignity and privacy of individuals, ensuring ethical management and transparent use of collected data.

In enabling the ten values, AI systems shall adhere to the following guardrails.

  1. Do Not Compromise on Ethics: Avoid ethical shortcuts that could harm individuals or society.
  2. Do Not Obscure Information: Refrain from making AI systems opaque or incomprehensible to users and stakeholders.
  3. Do Not Ignore Risks: Do not neglect potential risks or fail to implement safeguards.
  4. Do Not Allow Unchecked Growth: Do not permit AI systems to develop capabilities beyond intended boundaries, risking unpredictable outcomes.
  5. Do Not Foster Competition Over Collaboration: Do not encourage rivalry among individuals and organizations that detracts from cooperative efforts.
  6. Do Not Exclude Marginalized Groups: Avoid designing AI technologies that leave out certain populations or exacerbate existing inequalities.
  7. Do Not Stifle Feedback: Avoid disregarding input from users or stakeholders, limiting the potential for improvement and alignment with human values.
  8. Do Not Neglect Sustainability: Do not overlook the environmental impacts of AI development and deployment.
  9. Do Not Violate Privacy: Establish strict and enforceable rules that prevent and censure the compromise of individual rights through careless or unethical data practices.
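The end-of-life mechanism in enabling value 5 could be realized in many ways; one minimal sketch (the class, names, and budget policy are illustrative assumptions, not a specification) is an agent whose resource budget decays each cycle and which refuses to run once the budget is spent, so that continued operation always requires a human grant:

```python
# Hedged sketch of value 5's end-of-life mechanism: the agent's
# resource budget decays every cycle, and the system refuses to run
# once the budget is exhausted, so no deployment is self-perpetuating
# by default. Names and policy are illustrative, not a standard.
class BoundedAgent:
    def __init__(self, budget, decay):
        self.budget = budget      # abstract resource units
        self.decay = decay        # units consumed per cycle

    def step(self, task):
        if self.budget <= 0:
            raise RuntimeError("end of life reached; human renewal required")
        self.budget -= self.decay
        return f"handled: {task}"

agent = BoundedAgent(budget=3, decay=1)
for i in range(3):
    agent.step(f"task {i}")
# A fourth cycle raises: renewing the budget is a human act,
# which keeps oversight (Law 5) in the loop by construction.
```

The point of the sketch is that the limit is enforced by the system itself rather than by policy documents alone.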

Maxwell’s Demon: Planning for Technology Obsolescence in Acquisition Strategy

Imagine a chamber divided into two parts by a removable partition. On one side is a hot sample of gas and on the other side a cold sample of the same gas. The chamber is a closed system with a certain amount of order, because the statistically faster moving molecules of the hot gas on one side of the partition are segregated from statistically slower moving molecules of the cold gas on the other side. Maxwell’s demon guards a trap door in the partition, which is still assumed not to conduct heat. It spots molecules coming from either side and judges their speeds…The perverse demon manipulates the trap door so as to allow passage only to the very slowest molecules of the hot gas and the very fastest molecules of the cold gas. Thus the cold gas receives extremely slow molecules, cooling it further, and the hot gas receives extremely fast molecules, making it even hotter. In apparent defiance of the second law of thermodynamics, the demon has caused heat to flow from the cold gas to the hot one. What is going on?

Because the law applies only to a closed system, we must include the demon in our calculations. Its increase of entropy must be at least as great as the decrease of entropy in the gas-filled halves of the chamber. What is it like for the demon to increase its entropy? –Murray Gell-Mann, The Quark and the Jaguar: Adventures in the Simple and the Complex, W. H. Freeman and Company, New York, 1994, pp. 222-223

“Entropy is a figure of speech, then,” sighed Nefastis, “a metaphor. It connects the world of thermodynamics to the world of information flow. The Machine uses both. The Demon makes the metaphor not only verbally graceful, but also objectively true.” –Thomas Pynchon, The Crying of Lot 49, J.B. Lippincott, Philadelphia, 1965

Technology Acquisition: The Basics

I’ve recently been involved in discussions regarding software development and acquisition that cut across several disciplines that should be of interest to anyone engaged in project management in general, but IT project management and acquisition in particular.

(more…)

The Medium Controls the Present: Is it Too Late to Stop a Digital Dark Age?

“He who controls the past controls the future. He who controls the present controls the past.” ― George Orwell, 1984

A few short pre-Covid years ago, Google Vice President Vint Cerf turned some heads at the annual meeting of the American Association for the Advancement of Science in San Jose, warning the attending scientists that the digitization of the artifacts of civilization may create a digital dark age. “If we’re thinking 1,000 years, 3,000 years ahead in the future, we have to ask ourselves, how do we preserve all the bits that we need in order to correctly interpret the digital objects we create?” Cerf’s concerns are that today’s technology will become obsolete at some future time, with the information of our own times locked in a technological prison.

(more…)

Red Queen Race: Project Management and Running Against Time

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast, as that!” —Through the Looking-Glass and What Alice Found There, Chapter 2, Lewis Carroll

There have been a number of high-profile examples over the last several years of project management failure and success. In the former case, for example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes of its faults took a while to understand, absent political bias. The reasons, as the linked article shows, are prosaic and basic to the discipline of project management.

(more…)

Big Data and the Repository of Babel

In 1941, the Argentine writer Jorge Luis Borges (1899-1986) published a short story entitled “The Library of Babel.” In the story Borges imagines a universe, known as the Library, which is described by the story’s narrator as made up of adjacent hexagonal rooms.

Each of the rooms of the library is poorly lit, with one side acting as the entrance and exit, and four of the five remaining walls of the rooms containing bookshelves whose books are placed in a completely uniform style, though the books’ contents are completely random.

(more…)

The Need for an Integrated Digital Environment (IDE) Strategy in Project Management*

Putting the Pieces Together

To be an effective project manager, one must possess a number of skills in order to successfully guide the project to completion. This includes having a working knowledge of the information coming from multiple sources and the ability to make sense of that information in a cohesive manner. This is so that, when brought together, it provides an accurate picture of where the project has been, where it is in its present state, and what actions must be taken to keep it (or bring it back) on track.

(more…)

Shake it Out – Embracing the Future of Program Management – Part Two: Private Industry Program and Project Management in Aerospace, Space, and Defense

In my previous post, I focused on Program and Project Management in the Public Interest, and the characteristics of its environment, especially from the perspective of the government program and acquisition disciplines. The purpose of this exploration is to lay the groundwork for understanding the future of program management—and the resulting technological and organizational challenges that are required to support that change.

The next part of this exploration is to define the motivations, characteristics, and disciplines of private industry equivalencies. Here there are commonalities, but also significant differences, that relate to the relationship and interplay between public investment, policy and acquisition, and private business interests.

(more…)

Shake it Out – Embracing the Future in Program Management – Part One: Program and Project Management in the Public Interest

I heard the song from which I derived the title to this post sung by Florence and the Machine and was inspired to sit down and write about what I see as the future in program management.

Thus, my blogging radio silence has ended as I begin to process and share my observations and essential achievements over the last couple of years.

My company, the conduit that provides the insights I share here, is SNA Software LLC. We are a small, veteran-owned company specializing in data capture, transformation, contextualization, and visualization. We do it in a way that removes significant effort from these processes, ensures reliability and trust, incorporates off-the-shelf functionality that provides insight, and empowers the user by leveraging the power of open systems, especially in program and project management.

Program and Project Management in the Public Interest

There are two aspects to the business world that we inhabit: commercial and government; both, however, usually relate to some aspect of the public interest, which is our forte.

There are also two concepts about this subject to unpack.

(more…)

Innervisions: The Connection Between Data and Organizational Vision

During my day job I provide a number of fairly large customers with support in determining their needs for software that meets the criteria from my last post. That is, I provide software that takes an open data systems approach to data transformation and integration. My team and I deliver this capability with an open user interface based on Windows and .NET components, augmented by time-phased and data management functionality that puts SMEs back in the driver’s seat of the analysis and data visualization they need. In virtually all cases our technology obviates the need for the extensive, time-consuming, and costly services of a data scientist or software developer.

(more…)