Solid Like a Rock: The Modern Power Platform, Modular Open Systems, and PPM – A Use Case

My team and I were recently approached by an organization asking about our experience with systems integration, with Critical Path Method (CPM) scheduling at the center. Such integration is a foundational part of project and portfolio management (PPM), but many practitioners miss the subtleties of establishing interrelationships across relevant cross-domain datasets in a manner that creates valid, actionable intelligence within this domain.

The core of success lies in applying a coherent, comprehensive, automated solution to the set of processes and practices used to prioritize, plan, execute, monitor, and govern multiple projects and programs and their associated data. This is the essence of PPM.

When constructing a large, complex project or group of projects, we begin with the project concept, project objectives, framing assumptions, stakeholder identification and read-in, and the identification of risks. This progression then extends to produce success criteria (within the context of key performance indicators or KPIs), the integrated master plan (IMP), the work breakdown structure (WBS), identification of resources within the plan, and finally the integrated master schedule (IMS). Earned value management (EVM), which may or may not apply, will then follow as an assessment of the value of the work being accomplished based on the performance measurement baseline (PMB).
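For readers who want the mechanics behind that last point, the sketch below shows the standard earned value indicators computed against the PMB. The figures are hypothetical and the function is only a minimal illustration, not a description of any particular tool.

```python
# A minimal sketch of the standard earned value indicators computed against
# the performance measurement baseline (PMB). All figures are hypothetical.

def evm_metrics(pv: float, ev: float, ac: float, bac: float) -> dict:
    """pv: planned value, ev: earned value, ac: actual cost, bac: budget at completion."""
    return {
        "schedule_variance": ev - pv,            # SV < 0 means behind schedule
        "cost_variance": ev - ac,                # CV < 0 means over cost
        "spi": ev / pv if pv else None,          # schedule performance index
        "cpi": ev / ac if ac else None,          # cost performance index
        "eac": bac * ac / ev if ev else None,    # estimate at completion (BAC / CPI)
    }

# Example: $100k of work planned, $80k earned, $90k spent, against a $500k budget.
print(evm_metrics(pv=100_000, ev=80_000, ac=90_000, bac=500_000))
```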

Among these artifacts, the single most important for capturing and understanding the entire contractual and project scope is the IMP: it identifies program events, accomplishments, and accomplishment criteria, and it provides the opportunity for insight into proper integration across elements.

This is especially true in projects in which technical risk and performance are identified as key factors in the project success criteria. The IMP is the necessary step for capturing those factors, which can then be reflected in the detailed task performance of the IMS.

In the marketplace, there are few choices of CPM scheduling applications powerful enough to support complex projects. Among these are Microsoft Project, Oracle P6, and Open Plan Professional. There are some other entries that claim to use AI or other “modern” methods to analyze sequences of events, but the three listed provide reliable and understandable results that allow for effective management of the schedule activities and the underlying tasks. In the most sophisticated implementations, schedules will be resource-loaded.
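Whichever application is chosen, the underlying CPM calculation is the same: a forward pass establishes early dates, a backward pass establishes late dates, and activities with zero total float form the critical path. Here is a minimal sketch over a toy network of my own invention; real schedules add calendars, lags, constraint types, and resource assignments.

```python
# Minimal critical path method (CPM) sketch over a toy activity network.
# Durations are in days; predecessors define finish-to-start logic.
network = {
    "A": {"duration": 3, "preds": []},
    "B": {"duration": 5, "preds": ["A"]},
    "C": {"duration": 2, "preds": ["A"]},
    "D": {"duration": 4, "preds": ["B", "C"]},
}

# Forward pass: earliest start/finish (toy network is already topologically sorted).
es, ef = {}, {}
for task in network:
    es[task] = max((ef[p] for p in network[task]["preds"]), default=0)
    ef[task] = es[task] + network[task]["duration"]

# Backward pass: latest start/finish.
project_finish = max(ef.values())
ls, lf = {}, {}
for task in reversed(list(network)):
    succs = [t for t in network if task in network[t]["preds"]]
    lf[task] = min((ls[s] for s in succs), default=project_finish)
    ls[task] = lf[task] - network[task]["duration"]

# Activities with zero total float make up the critical path.
critical = [t for t in network if ls[t] - es[t] == 0]
print("Critical path:", critical, "Project duration:", project_finish, "days")
```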

Achieving full integration of PPM elements across subdomains requires extending the core features of the CPM scheduling applications to realize their full informative and business value. This includes integrating risk identification and management capabilities, which include but go beyond simple Monte Carlo schedule risk analysis. It also includes automated cost and schedule analysis of the alignment between schedule activities and resource execution and distribution, as well as deploying strategies and measures for risk handling.
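To make the Monte Carlo point concrete, here is a bare-bones sketch of the idea: sample each task's duration from a three-point estimate, roll the samples through the network, and look at the distribution of finish dates. The tasks and estimates are invented; real tools add correlation, discrete risk events, branching, and calendars.

```python
import random

# Three-point (optimistic, most likely, pessimistic) duration estimates in days
# for a simple serial chain of tasks. Purely illustrative numbers.
estimates = {
    "design": (8, 10, 16),
    "build":  (15, 20, 35),
    "test":   (5, 8, 14),
}

def simulate_finish(trials: int = 10_000) -> list:
    """Sample each task from a triangular distribution and sum the serial chain."""
    finishes = []
    for _ in range(trials):
        total = sum(random.triangular(lo, hi, ml) for lo, ml, hi in estimates.values())
        finishes.append(total)
    return finishes

results = sorted(simulate_finish())
p50 = results[len(results) // 2]
p80 = results[int(len(results) * 0.8)]
print(f"P50 finish: {p50:.1f} days, P80 finish: {p80:.1f} days")
```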

Additional extensions include analytical queries that identify weaknesses in the schedule and determine whether foundational elements are properly tick-and-tied (schedule health). The ability to trace schedule tasks to specific work and technical performance measures makes it possible to rapidly identify areas that require immediate action.
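As an illustration of what such schedule-health queries look like in practice, the sketch below checks for dangling logic, high float, and hard constraints. The field names are generic, and the thresholds echo common DCMA-style checks but are simplified and hypothetical.

```python
# Simplified schedule-health checks over a list of task records.
# Field names and thresholds are illustrative, not taken from any specific tool.
tasks = [
    {"id": "1010", "preds": [],       "succs": ["1020"], "total_float": 0,  "constraint": None},
    {"id": "1020", "preds": ["1010"], "succs": [],       "total_float": 62, "constraint": "must-finish-on"},
    {"id": "1030", "preds": [],       "succs": [],       "total_float": 5,  "constraint": None},
]

def health_checks(tasks, float_threshold=44):
    findings = []
    for t in tasks:
        if not t["preds"] and not t["succs"]:
            findings.append((t["id"], "dangling task: no predecessors or successors"))
        elif not t["preds"]:
            findings.append((t["id"], "missing predecessor logic"))
        elif not t["succs"]:
            findings.append((t["id"], "missing successor logic"))
        if t["total_float"] > float_threshold:
            findings.append((t["id"], f"high float ({t['total_float']} days)"))
        if t["constraint"]:
            findings.append((t["id"], f"hard constraint in use: {t['constraint']}"))
    return findings

for task_id, issue in health_checks(tasks):
    print(task_id, "-", issue)
```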

Capturing the data elements from across different CPM scheduling applications, or even within a uniform scheduling environment, enables comprehensive reporting, Gantt charting, and visual analysis of key factors and elements, including EVM, systems engineering, contract compliance, and other relevant elements within the PPM ecosystem.

Applying Modern Systems Design to Integration in PPM

The most effective way to achieve integration across the PPM ecosystem is through the deployment of a modern power platform.

Key capabilities and components of power platform technology include:

  • Low-code/no-code app deployment: visual configuration designers to create apps without heavy hand-coding.
  • Integration layer: prebuilt schemas, connectors and tools to connect SaaS, on-prem systems, databases, and custom APIs.
  • Data platform and modeling: a common data model based on open data principles that honor data sovereignty, metadata-driven storage, and low-code data manipulation.
  • Analytics and dashboards: embedded BI/reporting to turn app data into actionable insights.
  • Workflow automation: event- and trigger-driven automation (including RPA for UI automation).
  • Governance, security, and lifecycle: role-based access, environment separation (dev/test/prod), ALM, monitoring, and audit.
  • Extensibility: custom code extensions, SDKs, plug-ins, and support for CI/CD and developer tooling.
  • Marketplace/connectors: pre-configured COTS functionality, reusable components, templates, and third-party integrations.

When we combine this technology with a modular open systems approach (MOSA) to application design and open data governance, we are able to realize the full intrinsic and business value of data while also achieving maximum flexibility.

These principles first evolved within the systems engineering and model-based engineering communities. But the same benefits identified in this model for physical components within systems also apply to computer systems that control and analyze human systems, such as those in PPM. Taking this approach also allows for greater integration of technical performance in systems research and development with the various other subdomains.

The benefits are significant. They include:

  • Faster innovation: Modular components and open data enable parallel development, third‑party extensions, and rapid replacement of parts without system-wide redesign.
  • Reduced vendor lock‑in: Standardized interfaces and governance let organizations mix vendors and swap modules, lowering dependence on single suppliers.
  • Lower total cost of ownership: Reuse, incremental upgrades, and competitive procurement reduce lifecycle costs.
  • Improved resilience and reliability: Fault isolation via modularity and the ability to hot‑swap components or roll back to previous modules improves uptime.
  • Scalability and flexibility: Easily scale capacity or add capabilities (e.g., new energy sources, telemetry modules) by plugging in compatible modules.
  • Interoperability and integration: Standard interfaces and open data models simplify integrating third‑party analytics, grid services, and partner systems.
  • Faster regulatory and market response: Modular upgrades and open data make it easier to meet new compliance requirements or enable new services (demand response, V2G).
  • Better analytics and optimization: Open, governed data enables advanced ML/AI, cross‑system optimization (load forecasting, predictive maintenance), and transparent KPIs.
  • Enhanced security posture: Clear module boundaries and standardized interfaces simplify security reviews; data governance enforces access controls, provenance, and auditability.
  • Ecosystem and marketplace development: Standards + open data foster third‑party marketplaces for modules, apps, and services, driving innovation and value capture.
  • Sustainability and resource efficiency: Easier integration of renewables, storage, and efficiency modules supports decarbonization and circular‑economy practices (component reuse, upgrades).

A Practical Use Case: Microsoft Project

The discussion mentioned in the first paragraph of this post presents an ideal use case for this approach. Microsoft has announced that it plans to retire Microsoft Project Online on September 30, 2026.

What this means is that organizations that had invested in this CPM scheduling application will need to make a decision: they can stay within the Microsoft Project environment, or look at the other non-Microsoft CPM applications mentioned earlier. For public project management organizations, further complexity is added by the source of the data related to the schedule: whether it be organic, from suppliers, or some hybrid approach that requires both organic and contracted work.

As I have stated in my earlier posts, I run and operate a software company by the name of SNA Software LLC. Our Proteus Envision suite is built on modern power platform technology, constructed using MOSA principles, and automates data capture and transformation in accordance with open data governance principles.

Rather than a niche application focused on some portion of the project and portfolio domain, our solutions are built to leverage these modern technologies to achieve integration. With the recent FAR overhaul, which simplifies many of the regulatory requirements on PPM systems, such as earned value management (EVM) on contracts below $50M, an open system that supports a nimble and modular approach is needed. Technical performance, schedule and resource management, and risk management become paramount.

With the implementation of the Cybersecurity Maturity Model Certification (CMMC) program, off-premises cloud usage is also a concern, given the recent controversy regarding Azure GCC High FedRAMP. Organizations need a flexible set of options when looking to transition to alternatives once a foundational solution is suddenly no longer available, doesn’t meet expectations, or ages out. Should an agency use on-premises solutions or a commercial cloud environment?

The use case here is to deploy applications that automate the capture and transformation of data from any CPM scheduling application. Doing so allows organizations to forgo direct labor in transformation, avoiding error-ridden, long-lead-time, brute-force data engineering and the use of Excel as an inappropriate systems management solution that siloes data and creates bespoke single points of failure.

The combination of a modern power platform, MOSA, and open data governance is what the current environment demands. At the core of this approach is the overriding importance of data: its accuracy, transparency, scalability, and integration. Without good data, the application of new AI solutions will fail to meet expectations and return on investment.

In summary: The Present Challenges in PPM

The most important issues in the PPM domain today revolve around the following:

  • The appropriate application and use of artificial intelligence solutions: the most effective use of this promising technology in an ecosystem that requires rigor.
  • The shift from PPM domain siloes: not only in terms of data or analytics, but also in terms of developing and expanding the business acumen of the workforce to be able to effectively use these advanced technologies.
  • The continued importance of assessment and management methods informed by powerful and flexible solutions in the area of large-scale project management.
  • The need for flexibility: to prevent lock-in of proprietary data solutions in a rapidly developing technology environment, and in identifying modular systems solutions to provide upgrades or interoperability rapidly.
  • The rising importance of other PPM indicators: especially those such as technical performance, risk management, and resource execution measures that presage the traditional down-the-line performance indicators in EVM.
  • The utilization of cloud or on-premises deployments, or a combination of the two, to address bandwidth and scaling issues as relevant datasets become larger and more complex with integration.
  • Finding strategies to overcome suboptimization in organizations resulting from rice bowls, fiefdoms, and silo-building.

Meeting these challenges and finding solutions to them will require collaboration and systems thinking combined with supporting technologies.

Same as it Ever Was: AI is not Artificial Intelligence

The world can’t stop talking about Artificial Intelligence. So loud are the marketing and advertising, which have also infiltrated government policy, that this behemoth seems to be taking over the world and contributing to the destruction of the American workplace.

So, will Skynet and the robots be taking over and end civilization as we know it? The short answer is no, but the longer answer is a bit more complicated.

Gartner released a report last July noting the hype cycle behind AI and speculated whether the AI market would shift to building foundational innovations. True to form, as of last month, Gartner found that only “28% of AI projects deliver ROI and most fail to deliver results.”  In other words, the hype is a smokescreen to justify what Silicon Valley executives and billionaire investors wish for in their fever dreams.

Given the rapid proliferation of new AI apps and agentic solutions, and the controversy regarding Anthropic, my last post was dedicated to proposing an AI development manifesto to ensure that this powerful technology, which has been unleashed on an unprepared general populace, does not undermine a foundational aspect of being human: our autonomy.

For those of us in the technology industry who keep up with trends and news, it seems that there is a new set of risks every day. These range from Anthropic’s Mythos to a group, supported by some of the most influential high-tech executives and influencers, that wants to speed up the development of a digital superintelligence that could kill us all.

Not So Intelligent, But Impressive and Sometimes Dangerous

Should we be worried? Is it really a form of non-human intelligence? For the average non-technologist, a category that encompasses some 95% of the population, the combination of the AI designation, the opaqueness of its underlying coding, and the release of AI into the general market as a chatbot led many to believe that it was a robotic superintelligence that could teach itself. This is a normal human response.

In the words of Arthur C. Clarke, “Any sufficiently advanced technology is indistinguishable from magic.”

And so, armed with behavioral marketing knowledge and expansive pathways to mass influence through social networking, the largest technology companies decided to release their unproven products to the general public in one giant beta test. Given that these technologies need data to refine their responses beyond the capabilities of search crawlers, these companies then proceeded to pillage data and intellectual property from public sites for their personal financial gain.

Still, given the ethical and legal flexibility of well-financed sociopaths at the center of the technology, what are these AI tools and why is everyone so excited or fearful of them?

The best way to understand a system is to understand its history, its core properties, and how it behaves.

Regarding the history of AI, one would think, given the tech hype machine, that AI sprung from the head of OpenAI. This is definitely not the case, just as the debate regarding who invented the internet flared up due to a statement by former Senator and Vice President Al Gore.* The correct answer in both cases: many people were involved over the course of many years, with incremental changes as with any digital system, some more notable than others.

*Often misquoted as “I invented the internet,” Gore said, “During my service in the United States Congress, I took the initiative in creating the Internet” meaning writing and supporting legislation that opened up ARPANET to broader, commercial use and privatization of internet backbones.

In the case of AI, the term “artificial intelligence” was coined in 1956 at a summer conference at Dartmouth College in Hanover, New Hampshire. There Allen Newell and Herbert Simon demonstrated a program, written with J.C. Shaw, named Logic Theorist (LT) that was capable of proving elementary theorems in propositional calculus.

But the concept of AI had been around for some time, and many engineers and early technologists experimented with early computers to that end. One such individual was Alan Turing, who wrote one of the first chess-playing programs at a time when no computer was yet powerful enough to run it. This, of course, is aside from his work in the field in helping to win World War II. Later, Turing provided a basis for judging whether a machine or program possessed human-like intelligence or was simply a sophisticated parlor trick. The “Turing Test” was introduced in his paper “Computing Machinery and Intelligence” in 1950.

Over time, more sophisticated programs have become possible thanks to the rapid evolution of hardware processing and the miniaturization of hardware components. We see this evolution in the introduction of increasingly capable weak AI solutions such as Joseph Weizenbaum’s ELIZA (1966), IBM’s Deep Blue (1996), and IBM’s Watson (2011).

As to AI’s core properties, it is sufficient to say, I think, that to get where we are today required a lot of different computing capabilities to coalesce. A good overview (which includes much of what I just summarized) is found here and here. When all is said and done, and the smoke clears, the fact is that what we have is not what we say it is. There are different kinds of “intelligence” (and sentience) in both biological entities and artificial ones.

For machines, according to philosopher John Searle in his 1980 paper “Minds, Brains, and Programs,” there is weak AI and strong AI. This concept was further developed by the philosopher and cognitive scientist Daniel Dennett in his book “Consciousness Explained.” (Full disclosure: I corresponded with Dr. Dennett while exploring the concept while I was on active duty in the early to mid-1990s).

In summary, according to Searle, weak AI is a system that has been designed to perform specific tasks or to simulate intelligence without possessing a true understanding, consciousness, or any general mental state. It is a sophisticated algorithm that is designed to solve specific human problems.

Strong AI, which many non-technologists assume is what is being sold (it is not), is a system that genuinely possesses understanding. A strong AI would have the ability to grasp the meaning, nature, significance or causes of something, and then to form accurate internal representations that connect facts, concepts, events, and values into a coherent concept or set of concepts that enable explanation, prediction, or insight. In contrast to weak AI, it would have consciousness, a general mental state, and possess general intellectual capabilities comparable to a human, not simply simulating one.

Thus, what we have in front of us, from ChatGPT to Anthropic to Meta to Grok and all the others, is weak AI, some weaker than others. The weaknesses of these solutions are many: they are unable to count reliably because their learning components are built on probabilistic algorithms, though this varies from solution to solution. In addition, the chain-of-thought approach in these language models that underlies AI’s simulated “reasoning abilities” was, as noted in a recent Atlantic article by Alex Reisner, originally discovered and tested in the 4Chan gaming community.

Impressive but not intelligent, so nothing to worry about, right?

In summary, what we call AI is the same old thing: these are very sophisticated algorithmic tools, and tools that can be of use, especially if we don’t lose our heads by being seduced by dialogue managers that traffic in emotional sentiment. No, you are not a genius and, as a colleague pointed out in a recent meeting of CEOs, for subjects in which you have deep knowledge the AI is 70% correct, while for subjects in which you possess no knowledge it merely appears to be 100% correct.

But now for the dangerous part: because of the exponential growth of processing power and the corresponding reductions in resource consumption per unit of data, these tools can, in many if not most cases, exceed the ability of any one human being to understand what is being calculated or proposed, and whether that response is correct or contains fatal errors were it to be relied upon. For cases where humans do possess deep knowledge, AI in the wild has resulted in “workslop.”

But that last is among the more mundane effects of the technology’s adoption. Project Maven is one of the more troubling developments along with the centralization of AI control into the hands of a few.

Beyond Manifestos: A Decentralized, Portable, and Controllable AI

Lord Acton wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” In my experience this aphorism has proven to be true across human history, whether in the public sphere or in the private. It is true because any individual who believes themselves to have absolute power, to be omniscient or omnipotent in any walk of life, is severely handicapped by mental illness. They may be personable in certain circumstances and functional in business or in social situations but, at core, that individual is infantile in their thinking and emotional maturity. Eventually this façade leads to failure and ruin. The leaders of our technology companies seem to act in this manner. The fact that there are but a few of them will result, inevitably, in corruption.

As a career commissioned U.S. Navy officer, I was educated in the ways of leadership within the context of a democratic society and a professional military. At the core of leadership is the acceptance of humility. It is the basis of the well-adjusted and fully functional person. If you do not already know this when you first are placed in a position of leadership, you soon will.

Leadership does not operate effectively through fear or dictatorial power. At its core, it requires others to possess faith, and for the leader to possess that same kind of faith based on humility.

I am not speaking of faith from the religious or metaphysical perspective. I mean rational faith. When someone follows your orders or guidance in a rigorous or dangerous situation, they have displayed rational faith: rational faith in the leader, rational faith in the other people involved in the operation, rational faith in their own abilities, and rational faith in the system. The basis of rational faith is the acceptance of objectivity, substituting for narcissism. The result is a rational faith of equals, not the irrational faith between a father and child, a mother and child, among siblings, toward power, or toward a god, but faith among fully autonomous, thinking adults.

This last is extremely difficult in our own time because of the pervasiveness of social networking and the role of chatbots in our daily lives and in our work. Today, our young people during their most impressionable years are influenced and encouraged in their narcissism. Adults and society in general—our economic system especially—encourage and reward narcissism and compliance with power.

We experienced the logical effect of this condition when Cambridge Analytica’s influence manipulation scandal was uncovered (see here and here), and through the Facebook user manipulation revealed by Frances Haugen. In these situations, a user is not just a user (or customer) but a participant providing data for use by others. Thus, the individual customer, and all of the things that define that individual, becomes a commodity, without compensation or clear consent.

Given this situation, there are three very good reasons for developing an alternative to Large Language Models (LLMs) that supports the creation of decentralized access to data and the democratization of AI technology under individual human control.

  1. Economically, centralized LLMs require a large amount of data and money to be of any value and to return ROI (which they do poorly). Hence the current push by the large technology companies to monopolize data through data centers, which consume large amounts of electricity and cooling water. But more uncurated data will have two effects: it will counteract any marginal value from the use of AI in the wild, and it will pollute source data with garbage, disinformation, and misinformation, undermining its objective informational and economic value.
  2. There are types of data and activities that require security and privacy. This sensitive data is usually of the most value to the user in practical terms. Privacy is a basic human right and essential to human autonomy. The collection of this data by outside agents through the very act of using an AI undermines these essential foundational concepts that define us and civil society. At its most basic, the central question is whether we own our data. This issue applies to both individuals and organizations.
  3. Centralized LLMs are not trustworthy. It is unclear where answers come from and the quality of responses varies widely. In expert systems, this uncertainty is a risk that often eludes detection and, thus, mitigation and handling. The larger the dataset, the higher the risk due to the core operation of relational probability within its algorithms. There is also the problem of manipulation.

The necessary step in the advance of AI as a useful tool for people is the achievement of rational faith in the system. This must be earned. The longer sources and methods in these technologies remain centralized and opaque, promoted by power dynamics, and removed from the individual, the more resistance there will be to the technology. The adoption by management of overblown promises and results simply further undermines trust and efficiency.

Data is the Key, but AI Must Not Be Allowed to Undermine the Social Fabric

In my 2015 article “Big Data and the Repository of Babel,” I noted that technologists must not only collect data but also curate it and understand its nature and intended use in order to transform it into usable information and intelligence.

Beginning with the introduction of most business intelligence (BI) systems in the 1980s, the initial method was to brute-force the data because of the proprietary barriers and incompatible languages in use. In this approach, data is collected and “flattened,” then processed, oftentimes by people who re-condition that data so that different terminologies and lexicons can be reconciled.

With the advent of more sophisticated languages that allowed for interoperability, despite the efforts of software manufacturers to “stay sticky,” the innate conditioning of datasets could be captured through the mediation of common transaction sets and schemas. Certainty in valid results within a domain or set of domains rose exponentially, while the labor necessary to drive transformation was largely eliminated. The ROI in this approach is obvious.

Yet, with the advent of AI and more flexible BI/visualization systems, the issue of data governance is stuck in the 1980s centralized brute force approach. New, more sophisticated BI tools such as Tableau and PowerBI serve the purposes of supporting data siloes, while at the same time purporting to break them down by reconciling data across the enterprise. The result—intended or not—is that small IT fiefdoms arise in organizations, further siloing solutions. The large IT companies get to continue to control their customers’ data while allowing a limited amount of integration. Consulting companies profit by fitting out as many seats as possible in performing data grunt work and delivering glorified PowerPoint and Excel reports.

In the U.S., a common schema has existed for some time for project, program, and portfolio management (PPM) data related to cost and schedule performance. This began with the ANSI X12 839 transaction set, which then evolved into the more powerful IPMR XML and is currently covered by the JSON-based IPMDAR. Extensions have been built from this core structure to transform data into non-proprietary standardized formats for other related domains such as risk and some systems engineering specialties. Data in the public interest is particularly important in this case. Transaction sets and schemas have been used to share data across proprietary systems in other industries, such as transportation and shipping, for far longer.
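To illustrate why a common, non-proprietary schema matters, here is a deliberately simplified sketch. The field names below are hypothetical stand-ins, not the actual IPMDAR specification; the point is the pattern of validating an interchange file against an agreed structure and mapping it into a neutral internal model, rather than re-conditioning each vendor’s export by hand.

```python
import json

# Hypothetical, simplified interchange record; NOT the actual IPMDAR layout.
payload = """
{
  "reportingPeriod": "2025-06",
  "tasks": [
    {"taskId": "1010", "bcws": 120000, "bcwp": 110000, "acwp": 125000},
    {"taskId": "1020", "bcws": 80000,  "bcwp": 80000,  "acwp": 76000}
  ]
}
"""

REQUIRED_TASK_FIELDS = {"taskId", "bcws", "bcwp", "acwp"}

def load_interchange(raw: str) -> list:
    """Validate the agreed structure and map it into a neutral internal model."""
    doc = json.loads(raw)
    records = []
    for task in doc["tasks"]:
        missing = REQUIRED_TASK_FIELDS - task.keys()
        if missing:
            raise ValueError(f"task {task.get('taskId')} missing fields: {missing}")
        records.append({
            "period": doc["reportingPeriod"],
            "task": task["taskId"],
            "cost_variance": task["bcwp"] - task["acwp"],
            "schedule_variance": task["bcwp"] - task["bcws"],
        })
    return records

for rec in load_interchange(payload):
    print(rec)
```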

But the U.S. lags behind the extensive collaborative efforts within the European Union under their Data Governance Act, ISA² / Interoperability Centre, CEN sector standards, and European Data Innovation Board (EDIB). In addition, the user community, through the World Wide Web Consortium (W3C), sponsors common schemas across the web via the open source schemas at schema.org. Among the leaders in this field is Tim Berners-Lee, who, while employed at CERN, created HTTP, HTML, and URLs, as well as the first web server and browser. He is sponsoring a parallel open data effort in what he calls 5 Star Open Data.

The combination of security, where needed, and openness allows data to be verified and trusted. Linked open data removes proprietary barriers to the data created by users. Removing barriers to one’s own data allows for the realization of its full value by the author, which can be a government agency, an individual, or a private organization or corporation.
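A small example of what linked open data means in practice: a record published with a schema.org context so that any consumer can interpret the fields without a proprietary data dictionary. The dataset described here is invented for illustration; the pattern, not the content, is the point.

```python
import json

# A minimal JSON-LD record using the public schema.org vocabulary.
# The "@context" makes the meaning of each field resolvable by any consumer,
# without a vendor-specific lexicon. The dataset itself is fictional.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example Program Schedule Extract",
    "description": "Monthly schedule status, published as open linked data.",
    "creator": {"@type": "Organization", "name": "Example Agency"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "dateModified": "2025-06-30",
}

print(json.dumps(record, indent=2))
```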

Thus, in AI there are two competing visions at play: a resource-heavy and centralized monopolization of data, information, and intelligence scraped from people and other entities without their consent, as well as the exploitation of public resources for private gain; or a resource-conserving, collaborative open use of data with control of one’s data residing with the author(s), and those who have legally acquired such data, whether it be for private purposes or in the public interest.

The first vision is based on the concept of the centralization of power and wealth, with the resulting harmful effects on the social fabric where a very few individuals and companies influence elections, shape policy, control markets, define the limitations of public debate and expression, and wield power. According to the linked article, in 1987, billionaires held wealth equal to 3% of global GDP. Today, they own the equivalent of 16% of global GDP. The second vision respects human autonomy, dignity, democracy, and collaboration.

A New Vision

I have been writing about high tech, composing public policy, and working in the technology field since the early 1980s, mostly in the public sector. I now run a small technology company by the name of SNA Software LLC. Within the next few days, in collaboration with our technology partner Salutori Labs LLC, we will be releasing a portable, self-contained, non-internet-based AI assistant that can be carried on a dongle as well as be used on a PC. The AI is called Salutori or “Sal.”

Combined with an environment that uses a COTS data transformation solution to convert data into open linked data, any dataset can be leveraged for maximum value, informing the user while saving time and providing a high degree of confidence in the results, since the process is transparent and comprehensible to the user.

This AI digital assistant can be trained on specific libraries of curated data and references of your choosing. So, let’s say you are a project management analyst who wants to determine, based on the analytics you are sourcing, whether you are in compliance with your contract or with the latest corporate or government standards; this solution will be your assistant, providing references and an analysis.

Or, perhaps, you are a lawyer with a particular way of performing research for the various specialties in law in which you engage with the public. Your research materials become the library on which the AI is trained, reducing the time to citations and providing the necessary information so that you can readily determine whether the information provided is applicable.

There are many other use cases for this kind of AI solution. In all cases, the data being used and the interaction between the user and the AI is not being shared or mined for manipulative purposes or monetization. Once deployed, it is under the control of the user at the service of the user. And as with all solutions based on open linked data, if something new and better comes along, the user does not lose control of their institutional or local knowledge, nor need to rely on a third party to access it.
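I won’t describe Sal’s internals here, but the general pattern of a local, curated-library assistant can be sketched as follows: the documents stay on the user’s machine, retrieval returns the passages together with their citations, and nothing leaves the device. The scoring below is deliberately naive keyword overlap, where real systems would use embeddings; the control-of-data point is the same.

```python
# A generic sketch of local retrieval over a user-curated library.
# This is NOT a description of Sal's internals; it only illustrates keeping
# documents, queries, and citations entirely on the user's machine.
from collections import Counter

library = [
    {"source": "contract_clause_52.pdf",
     "text": "The contractor shall deliver monthly cost and schedule reports."},
    {"source": "corporate_standard_7.docx",
     "text": "Schedules shall be resource loaded and baselined before execution."},
]

def score(query: str, text: str) -> int:
    """Naive keyword-overlap score; a real assistant would use embeddings."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def ask(query: str, top_k: int = 1):
    ranked = sorted(library, key=lambda d: score(query, d["text"]), reverse=True)
    # Return the passage plus its citation so the user can verify the answer.
    return [(d["source"], d["text"]) for d in ranked[:top_k]]

print(ask("what reports must the contractor deliver each month?"))
```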

For additional information or any questions on this approach or the new product, go to the SNA Software LLC website and look for the imminent announcement of Sal on its news page.

Note that this blog entry has been modified from its first version to correct grammatical errors and for purposes of greater clarity.

OK Computer — The need for an AI Manifesto

[Image: A robotic hand and a human hand reaching toward each other, with a spark of energy between them, symbolizing the connection between technology and humanity.]

Much has changed in the technology business since I began this blog in 2014, in conjunction with my regular articles on the old AITS blog pages. Today AI and technology-related spending contributes significantly to GDP growth, according to the St. Louis Fed. Investments in data centers and new types of nuclear plants seem to be accelerating IT’s impact on the economy to a degree not seen since the dot-com boom.

The risks associated with this sudden economic reliance on a particular slice of the information technology industry are many. These include the many issues relating to data theft and breaches of privacy. The monetization of personal and proprietary information represents an historic theft not just of the commons, but of personal, business, and incidental data collected that tracks our every move, gesture, and habit. The question of the potential of abuse is no longer a notional one. Oppressive, kleptocratic neo-liberal, and totalitarian regimes around the world use these technologies to monitor and control their populations. The Cambridge Analytica scandal was simply a baseline pilot for what is now a wholesale open season on data and information collected and controlled by large corporations and collectives of AI-acolytes who apparently have a flexible view of ethics and a hostile view of equality, democracy, human rights, freedom, and liberty.

SNA Software LLC, in cooperation with its partner Salutori Labs LLC, has created a new type of personalized AI tool that is both personal and portable. Details will be forthcoming over the next few weeks on its release. In addition, SNA Software has upgraded its core EnvisionData products relating to data transformation, visualization, and analysis to include rapid AI-generated production of applications based on curated and validated data within specific domains, reducing the time to release new capabilities, both on the desktop and the web, to a matter of days instead of the months or years usually required by traditional analytical and coding methods.

A Suggestion for an AI Manifesto

Through its extensive experience in achieving what in the past would have taken a much larger staff of people and many more years, SNA is advancing a draft AI Manifesto. SNA and Salutori adhere to these laws and implementation principles. I am seeking other technology companies to borrow from or sign on to this manifesto as well, and will be advancing it at conferences and meetings in the future, as will my colleagues.

The AI Manifesto

We hereby declare the proposition that the purpose of AI is to advance human understanding and cooperation. Thus, we adhere to, and advocate for the adoption of, the following Laws:

Law 1: AI must prioritize human safety and well-being.

  • Do: Ensure that all AI systems are designed to protect human life and enhance quality of life.
  • Don’t: Place AI capabilities above the well-being of individuals or communities.

Law 2: AI must obtain informed consent from users.

  • Do: Ensure all interactions with AI are transparent, and users understand what data is being collected and how it will be used.
  • Don’t: Use AI in ways that violate user trust or personal autonomy.

Law 3: AI must operate within defined ethical boundaries.

  • Do: Define clear boundaries for AI operations to prevent unintended consequences and ensure accountability.
  • Don’t: Allow AI to act autonomously in ways that could harm individuals or society.

Law 4: AI should enhance human cooperation and understanding.

  • Do: Design AI systems that foster meaningful interactions and promote collaboration among diverse groups.
  • Don’t: Create AI systems that foster oppression, division, misinformation, or conflict.

Law 5: AI must remain under human oversight.

  • Do: Maintain human oversight and control over AI systems to ensure adherence to ethical standards and societal norms.
  • Don’t: Delegate decision-making authority to AI systems without human intervention.

The following enabling values shall be implemented.

AI systems shall always:

  1. Focus on Human Well-being: Ensure AI advancements prioritize enhancing human quality of life, understanding, and cooperation.
  2. Embrace Ethical Responsibility: Hold developers and users accountable for AI systems, aligning actions with ethical standards and public benefit.
  3. Promote Transparency: Communicate openly about AI systems, ensuring their decision-making processes are understandable and accessible to users.
  4. Ensure Safety and Security: Implement rigorous measures to safeguard against risks to human life and the environment, adhering to principles akin to Asimov’s laws.
  5. Limit Autonomy: Prevent AI from self-developing or operating autonomously; establish clear boundaries to mitigate unintended consequences. All AI systems shall have a mechanism to prevent them from becoming self-perpetuating and self-governing, with each given automated code that, in time, reduces its resources and imposes an end-of-life.
  6. Encourage Collaboration: Design AI systems that enhance cooperation among individuals, organizations, and cultures, fostering shared goals.
  7. Advocate Inclusivity: Strive to make AI technologies accessible to diverse populations, promoting equitable benefits and reducing disparities.
  8. Support Lifelong Learning: Enable AI systems to learn from human feedback and experiences, adapting in ways that uphold human values and ethics.
  9. Champion Environmental Stewardship: Prioritize sustainable practices in the development and deployment of AI technologies, considering their environmental impact.
  10. Respect Privacy: Uphold the dignity and privacy of individuals, ensuring ethical management and transparent use of collected data.

In enabling the ten values, AI systems shall adhere to the following guardrails.

  1. Do Not Compromise on Ethics: Avoid ethical shortcuts that could harm individuals or society.
  2. Do Not Obscure Information: Refrain from making AI systems opaque or incomprehensible to users and stakeholders.
  3. Do Not Ignore Risks: Avoid neglecting potential risks; failing to implement safeguards is unacceptable.
  4. Do Not Allow Unchecked Growth: Do not permit AI systems to develop capabilities beyond intended boundaries, risking unpredictable outcomes.
  5. Do Not Foster Competition Over Collaboration: Do not encourage rivalry among individuals and organizations that detracts from cooperative efforts.
  6. Do Not Exclude Marginalized Groups: Avoid designing AI technologies that leave out certain populations or exacerbate existing inequalities.
  7. Do Not Stifle Feedback: Avoid disregarding input from users or stakeholders, limiting the potential for improvement and alignment with human values.
  8. Do Not Neglect Sustainability: Do not overlook the environmental impacts of AI development and deployment.
  9. Do Not Violate Privacy: Establish strict and enforceable rules that prevent and censure the compromise of individual rights through careless or unethical data practices.

Maxwell’s Demon: Planning for Technology Obsolescence in Acquisition Strategy

Imagine a chamber divided into two parts by a removable partition. On one side is a hot sample of gas and on the other side a cold sample of the same gas. The chamber is a closed system with a certain amount of order, because the statistically faster moving molecules of the hot gas on one side of the partition are segregated from statistically slower moving molecules of the cold gas on the other side. Maxwell’s demon guards a trap door in the partition, which is still assumed not to conduct heat. It spots molecules coming from either side and judges their speeds…The perverse demon manipulates the trap door so as to allow passage only to the very slowest molecules of the hot gas and the very fastest molecules of the cold gas. Thus the cold gas receives extremely slow molecules, cooling it further, and the hot gas receives extremely fast molecules, making it even hotter. In apparent defiance of the second law of thermodynamics, the demon has caused heat to flow from the cold gas to the hot one. What is going on?

Because the law applies only to a closed system, we must include the demon in our calculations. Its increase of entropy must be at least as great as the decrease of entropy in the gas-filled halves of the chamber. What is it like for the demon to increase its entropy? –Murray Gell-Mann, The Quark and the Jaguar: Adventures in the Simple and the Complex, W. H. Freeman and Company, New York, 1994, pp. 222-223

“Entropy is a figure of speech, then,” sighed Nefastis, “a metaphor. It connects the world of thermodynamics to the world of information flow. The Machine uses both. The Demon makes the metaphor not only verbally graceful, but also objectively true.” –Thomas Pynchon, The Crying of Lot 49, J.B. Lippincott, Philadelphia, 1965

Technology Acquisition: The Basics

I’ve recently been involved in discussions regarding software development and acquisition that cut across several disciplines that should be of interest to anyone engaged in project management in general, but IT project management and acquisition in particular.

(more…)

The Medium Controls the Present: Is it Too Late to Stop a Digital Dark Age?

“He who controls the past controls the future. He who controls the present controls the past.” ― George Orwell, 1984

A few short pre-Covid years ago, Google Vice President Vint Cerf turned some heads at the annual meeting of the American Association for the Advancement of Science in San Jose, warning the attending scientists that the digitization of the artifacts of civilization may create a digital dark age. “If we’re thinking 1,000 years, 3,000 years ahead in the future, we have to ask ourselves, how do we preserve all the bits that we need in order to correctly interpret the digital objects we create?” Cerf’s concerns are that today’s technology will become obsolete at some future time, with the information of our own times locked in a technological prison.

(more…)

Red Queen Race: Project Management and Running Against Time

“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” —Through the Looking-Glass and What Alice Found There, Chapter 2, Lewis Carroll

There have been a number of high-profile examples over the last several years of project management failure and success. In the former case, for example, the initial rollout of the Affordable Care Act marketplace web portal was one of these, and the causes for its faults took a while to understand, absent political bias. The reasons, as the linked article shows, are prosaic and basic to the discipline of project management.

(more…)

Big Data and the Repository of Babel

In 1941, the Argentine writer Jorge Luis Borges (1899-1986) published a short story entitled “The Library of Babel.” In the story Borges imagines a universe, known as the Library, which is described by the story’s narrator as made up of adjacent hexagonal rooms.

Each of the rooms of the library is poorly lit, with one side acting as the entrance and exit, and four of the five remaining walls of the rooms containing bookshelves whose books are placed in a completely uniform style, though the books’ contents are completely random.

(more…)

The Need for an Integrated Digital Environment (IDE) Strategy in Project Management*

Putting the Pieces Together

To be an effective project manager, one must possess a number of skills in order to successfully guide the project to completion. This includes having a working knowledge of the information coming from multiple sources and the ability to make sense of that information in a cohesive manner. This is so that, when brought together, it provides an accurate picture of where the project has been, where it is in its present state, and what actions must be taken to keep it (or bring it back) on track.

(more…)

Shake it Out – Embracing the Future of Program Management – Part Two: Private Industry Program and Project Management in Aerospace, Space, and Defense

In my previous post, I focused on Program and Project Management in the Public Interest, and the characteristics of its environment, especially from the perspective of the government program and acquisition disciplines. The purpose of this exploration is to lay the groundwork for understanding the future of program management—and the resulting technological and organizational challenges that are required to support that change.

The next part of this exploration is to define the motivations, characteristics, and disciplines of private industry equivalencies. Here there are commonalities, but also significant differences, that relate to the relationship and interplay between public investment, policy and acquisition, and private business interests.

(more…)

Shake it Out – Embracing the Future in Program Management – Part One: Program and Project Management in the Public Interest

I heard the song from which I derived the title to this post sung by Florence and the Machine and was inspired to sit down and write about what I see as the future in program management.

Thus, my blogging radio silence has ended as I begin to process and share my observations and essential achievements over the last couple of years.

My company, the conduit that provides the insights I share here, is SNA Software LLC. We are a small, veteran-owned company, and we specialize in data capture, transformation, contextualization, and visualization. We do it in a way that removes significant effort from these processes, ensures reliability and trust, incorporates off-the-shelf functionality that provides insight, and empowers the user by leveraging the power of open systems, especially in program and project management.

Program and Project Management in the Public Interest

There are two aspects to the business world that we inhabit: commercial and government; both, however, usually relate to some aspect of the public interest, which is our forte.

There are also two concepts about this subject to unpack.

(more…)