
The world can’t stop talking about Artificial Intelligence. So loud is the marketing and advertising, which has also infiltrated government policy, that this behemoth seems to be taking over the world and contributing to the destruction of the American workplace.
So, will Skynet and the robots take over and end civilization as we know it? The short answer is no, but the longer answer is a bit more complicated.
Gartner released a report last July noting the hype cycle behind AI and speculating about whether the AI market would shift to building foundational innovations. True to form, as of last month, Gartner found that only “28% of AI projects deliver ROI and most fail to deliver results.” In other words, the hype is a smokescreen to justify what Silicon Valley executives and billionaire investors wish for in their fever dreams.
Given the rapidity of new AI apps and agentic solutions, and the controversy regarding Anthropic, my last post was dedicated to proposing an AI development manifesto to ensure that this powerful technology, which has been unleashed on an unprepared general populace, does not undermine a foundational aspect of being human: our autonomy.
For those of us in the technology industry who keep up with trends and news, it seems that there is a new set of risks every day. They range from Anthropic’s Mythos to a group, supported by some of the most influential high-tech executives and influencers, that wants to speed up the development of a digital superintelligence that could kill us all.
Not So Intelligent, But Impressive and Sometimes Dangerous
Should we be worried? Is it really a form of non-human intelligence? To the average non-technologist, a group that encompasses some 95% of the population, the combination of the AI designation, the opaqueness of the underlying code, and the release of AI into the general market as a chatbot led many to believe that it was a robotic superintelligence that could teach itself. This is a normal human response.
In the words of Arthur C. Clarke, “Any sufficiently advanced technology is indistinguishable from magic.”
And so, armed with behavioral marketing knowledge and expansive pathways to mass influence through social networking, the largest technology companies decided to use the general public to test their unproven products in one giant beta test. Given that these technologies need data to refine their responses beyond the normal search crawlers, these companies then proceeded to pillage data and intellectual property from public sites for their own financial gain.
Setting aside the ethical and legal flexibility of the well-financed sociopaths at the center of the technology, what are these AI tools, and why is everyone so excited about, or fearful of, them?
The best way to understand a system is to understand its history, its core properties, and how it behaves.
Regarding the history of AI, one would think, from the tech hype machine, that AI sprang fully formed from the head of OpenAI. This is definitely not the case, just as it was not when the debate over who invented the internet flared up because of a statement by former Senator and Vice President Al Gore.* The real answer in both cases: many people were involved over the course of many years, making incremental changes, as with any digital system, some more notable than others.
*Often misquoted as “I invented the internet,” what Gore actually said was, “During my service in the United States Congress, I took the initiative in creating the Internet,” meaning that he wrote and supported legislation that opened ARPANET to broader commercial use and the privatization of the internet backbones.
In the case of AI, the term “artificial intelligence” was coined in 1956 at a summer conference at Dartmouth College in Hanover, New Hampshire. There Allen Newell, J.C. Shaw, and Herbert Simon demonstrated a program named Logic Theorist (LT) that was capable of proving elementary theorems in propositional calculus.
But the concept of AI had been around for some time among engineers and early technologists such as Alan Turing, who designed one of the first chess-playing programs. This, of course, is aside from his codebreaking work, which helped win World War II. Later, in his 1950 paper “Computing Machinery and Intelligence,” Turing provided a basis for understanding whether a machine or program is indeed intelligent from the human perspective or just a sophisticated parlor trick.
Over time, more sophisticated programs became possible thanks to the rapid evolution of processing power and the miniaturization of hardware components. We can see this evolution in the introduction of ever more capable weak AI solutions such as Joseph Weizenbaum’s ELIZA (1966), IBM’s Deep Blue (1996), and IBM’s WATSON (2011).
As to AI’s core properties, it is sufficient to say, I think, that getting where we are today required a lot of different computing capabilities to coalesce. A good overview (which includes much of what I just summarized) is found here and here. When all is said and done, the fact is that what we have is not what we say it is. There are different kinds of “intelligence” (and sentience) in both biological entities and artificial ones.
For machines, according to the philosopher John Searle in his 1980 paper “Minds, Brains, and Programs,” there is weak AI and strong AI. This concept was further developed by the philosopher Daniel Dennett in his book “Consciousness Explained.” (Full disclosure: I corresponded with Dr. Dennett while exploring the concept while on active duty in the early to mid-1990s.)
In summary, according to Searle, weak AI is a system that has been designed to perform specific tasks or to simulate intelligence without possessing true understanding, consciousness, or any general mental state. It is a sophisticated algorithm designed to solve specific human problems.
Strong AI, which many non-technologists assume is what is being sold (it is not), is a system that genuinely possesses understanding. A strong AI would have the ability to grasp the meaning, nature, significance, or causes of something, and then to form accurate internal representations connecting facts, concepts, events, and values into a coherent concept, or set of concepts, that enables explanation, prediction, or insight. In contrast to weak AI, it would have consciousness and a general mental state, and would possess general intellectual capabilities comparable to a human’s, not simply simulate them.
Thus, what we have in front of us, from ChatGPT to Anthropic to Meta to Grok and all the others, is weak AI, some weaker than others. The weaknesses of these solutions are many: they are unreliable at tasks as basic as counting because their outputs are generated by probabilistic algorithms in their learning components, though these vary from solution to solution as well. In addition, the basis of AI’s simulated “reasoning abilities,” the chain-of-thought approach to language models, was originally discovered and tested in the 4chan gaming community, as noted in a recent Atlantic article by Alex Reisner.
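To make the counting weakness concrete, here is a minimal, self-contained sketch of my own, not drawn from any vendor’s code, contrasting deterministic counting with answering by sampling from a learned probability distribution, which is roughly how a language model produces its output. The distribution weights below are hypothetical.

```python
import random
from collections import Counter

# A toy contrast: deterministic counting versus a model that "answers" by
# sampling from a learned probability distribution over tokens. Even when
# the distribution strongly favors the right answer, sampling sometimes
# returns a wrong one; exact counting never does.

text = "the cat sat on the mat with the hat"

# Deterministic counting: correct every time.
exact = text.split().count("the")  # -> 3

# Probabilistic "counting": hypothetical learned weights over answer tokens.
learned = {"2": 0.10, "3": 0.80, "4": 0.10}
samples = random.choices(list(learned), weights=list(learned.values()), k=1000)

print(f"exact count: {exact}")
print(f"sampled answers over 1000 runs: {Counter(samples)}")
```

Run it a few times: the deterministic count never wavers, while the sampled answers always include a residue of wrong ones.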
Impressive but not intelligent, so nothing to worry about, right?
In summary, what we call AI is the same old thing: very sophisticated algorithmic tools, and tools that can be of use, especially if we don’t lose our heads by being seduced by dialogue managers tuned for emotional sentiment. No, you are not a genius, and, as a colleague pointed out in a recent meeting of CEOs, for subjects in which you have deep knowledge the AI is 70% correct, while for subjects in which you possess no knowledge (our ignorance), it appears to be 100% correct.
But now for the dangerous part: these tools, because of the exponential growth of processing power and the corresponding gains in resource consumption per unit of data, can in many cases exceed the ability of any one human being to understand what is being calculated or proposed, and whether the response is correct or contains fatal errors were it to be relied upon. Where humans do possess deep knowledge, AI in the wild has resulted in “workslop.”
But that is among the more mundane effects of the technology’s adoption. Project Maven is one of the more troubling developments, along with the centralization of AI control in the hands of a few.
Beyond Manifestos: A Decentralized, Portable, and Controllable AI
Lord Acton wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” In my experience this aphorism has proven true across human history, whether in the public sphere or the private. It is true because any individual who believes themselves to have absolute power, to be omniscient or omnipotent in any walk of life, is severely handicapped by mental illness. They may be personable in certain circumstances and functional in business or social situations but, at core, such an individual is infantile in their thinking and emotional maturity. Eventually this façade leads to failure and ruin. The leaders of our technology companies seem to act in this manner. The fact that there are but a few of them will result, inevitably, in corruption.
As a career commissioned U.S. Navy officer, I was educated in the ways of leadership within the context of a democratic society and a professional military. At the core of leadership is the acceptance of humility. It is the basis of the well-adjusted and fully functional person. If you do not already know this when you are first placed in a position of leadership, you soon will.
Leadership does not operate effectively through fear or dictatorial power. At its core, it requires others to possess faith, and the leader to possess that same kind of faith, grounded in humility.
I am not speaking of faith from the religious or metaphysical perspective. I mean rational faith. When someone follows your orders or guidance in a demanding or dangerous situation, they have displayed rational faith: rational faith in the leader, in the other people involved in the operation, in their own abilities, and in the system. The basis of rational faith is the acceptance of objectivity in place of narcissism. The result is a rational faith of equals, based not on the irrational faith between a father and child, mother and child, among siblings, toward power, or toward a god, but on the relationship among fully autonomous thinking adults.
This is extremely difficult in our own time because of the pervasiveness of social networking and the role of chatbots in our daily lives and in our work. Today our young people, during their most impressionable years, are influenced and encouraged in their narcissism. Adults and society in general, our economic system especially, encourage and reward narcissism and compliance with power.
We experienced the logical effect of this condition in Cambridge Analytica’s influence-manipulation scandal (see here and here) and in the Facebook user manipulation revealed by Frances Haugen. In these situations, a user is not just a user (or customer) but a participant providing data for use by others. Thus the individual customer, and all of the things that define that individual, becomes a commodity, but without compensation or clear consent.
Given this situation, there are three very good reasons for developing an alternative to Large Language Models (LLMs), one that supports decentralized access to data and the democratization of AI technology under individual human control.
- Economically, centralized LLMs require a large amount of data and money to be of any value and to return ROI (which they do poorly). Hence the current push by the large technology companies to monopolize data through massive data centers. But more uncurated data will have two effects: it will counteract any marginal value from the use of AI in the wild, and it will pollute source data with garbage, disinformation, and misinformation, undermining its objective informational and economic value.
- There are types of data and activities that require security and privacy. This sensitive data is usually of the most value to the user in practical terms. Privacy is a basic human right and essential to human autonomy. The collection of this data by outside agents through the very act of using an AI undermines the foundational concepts that define us. At its most basic, the central question is whether we own our data. This issue applies to both individuals and organizations.
- Centralized LLMs are not trustworthy. It is unclear where answers come from, and the quality of responses varies widely. In expert systems, this uncertainty is a risk that often eludes detection and, thus, mitigation and handling. The larger the dataset, the higher the risk, due to the core operation of relational probability within their algorithms.
The necessary next step in the advance of AI as a useful tool is the achievement of rational faith in the system. This must be earned. As long as the sources and methods in these technologies remain centralized and opaque, promoted by power dynamics, and removed from the individual, resistance to the technology will grow. Management’s adoption of overblown promises and results simply further undermines trust and efficiency.
Data Is the Key, but AI Must Not Be Allowed to Undermine the Social Fabric
In my 2015 article “Big Data and the Repository of Babel,” I noted that technologists must not only collect data but also curate it and understand its nature and intended use in order to transform it into usable information and intelligence.
Beginning with the introduction of most business intelligence (BI) systems in the 1980s, the initial method was to brute-force the data because of the proprietary barriers and incompatible languages in use. In this approach, data is collected and “flattened,” then processed, oftentimes by people who would re-condition that data so that different terminologies and lexicons could be reconciled.
With the advent of more sophisticated languages that allowed for interoperability, despite the efforts of software manufacturers to “stay sticky,” the innate conditioning of datasets could be captured through the mediation of common transaction sets and schemas. Certainty in valid results within a domain or set of domains rose dramatically, while the labor necessary to drive transformation was largely eliminated. The ROI in this approach is obvious, as the sketch below suggests.
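To illustrate schema mediation in the abstract, consider two systems that record the same cost facts under different field names. In the sketch below the field names and mappings are hypothetical, not any actual transaction set; it shows only how a shared mapping reconciles sources without the manual re-conditioning of the brute-force era.

```python
# Hypothetical schema mediation: two source systems, one common schema.
# Field names here are illustrative only, not an actual standard.

COMMON_FIELDS = {"task_id", "planned_cost", "actual_cost"}

FIELD_MAPPINGS = {
    "system_a": {"TaskID": "task_id", "BCWS": "planned_cost",
                 "ACWP": "actual_cost"},
    "system_b": {"wbs_element": "task_id", "planned_value": "planned_cost",
                 "actuals": "actual_cost"},
}

def to_common(record: dict, source: str) -> dict:
    """Translate a source record into the common schema via its mapping."""
    mapping = FIELD_MAPPINGS[source]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

# The same fact, expressed two ways, lands in one reconciled form.
print(to_common({"TaskID": "1.4.2", "BCWS": 1200.0, "ACWP": 1350.0}, "system_a"))
print(to_common({"wbs_element": "1.4.2", "planned_value": 1200.0,
                 "actuals": 1350.0}, "system_b"))
```

Once the mapping exists, reconciliation is mechanical; the human labor shifts from re-keying data to maintaining the shared schema.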
Yet, with the advent of AI and more flexible BI systems, data governance remains stuck in the centralized, brute-force approach of the 1980s. Newer, more sophisticated BI tools such as Tableau and Power BI serve to support data silos while at the same time purporting to break them down by reconciling data across the enterprise. The result, intended or not, is that small IT fiefdoms arise in organizations, further siloing solutions. The large IT companies get to continue controlling their customers’ data while allowing a limited amount of integration. Consulting companies profit by filling as many seats as possible to perform data grunt work and deliver glorified PowerPoint and Excel reports.
In the U.S., a common schema has existed for some time for project, program, and portfolio management (PPM) data relating to cost and schedule performance, beginning with the ANSI X12 839 transaction set, which evolved first into the IPMR XML and is currently covered by the JSON IPMDAR. Extensions have been built from this core structure to transform data into non-proprietary, standardized formats for related domains such as risk and some systems engineering specialties. Data in the public interest is particularly important in this case.
But the U.S. lags behind the extensive collaborative efforts within the European Union under its Data Governance Act, the ISA² / Interoperability Centre, CEN sector standards, and the European Data Innovation Board (EDIB). In addition, the user community, through the World Wide Web Consortium (W3C), sponsors common schemas across the web via the open-source schemas at schema.org. Among the leaders in this field is Tim Berners-Lee, who, while employed at CERN, created HTTP, HTML, and URLs, as well as the first web server and browser, and who sponsors a parallel effort he calls 5 Star Open Data.
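As a concrete illustration of what such open schemas look like in practice, here is a minimal dataset description using the schema.org vocabulary, rendered from Python. The organization, license, and URL values are placeholders, not a real published dataset.

```python
import json

# A minimal schema.org "Dataset" description. Because the vocabulary is
# open and machine-readable, any consumer can discover the dataset's
# creator, license, and format without a proprietary tool.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example program cost-performance dataset",  # placeholder
    "creator": {"@type": "Organization", "name": "Example Agency"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "application/json",
        "contentUrl": "https://example.org/data/cost-performance.json",
    },
}

print(json.dumps(dataset, indent=2))
```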
The combination of security, where needed, and openness allows data to be verified and trusted. Linked open data removes proprietary barriers between creators and their own data. Removing those barriers allows the author, whether a government agency, an individual, or a private organization or corporation, to realize the data’s full value.
Thus, in AI there are two competing visions in play: on the one hand, a resource-heavy, centralized monopolization of data, information, and intelligence, scraped from people and other entities without their consent, along with the exploitation of public resources for private gain; on the other, a resource-conserving, collaborative, open use of data, with control of one’s data residing with the author(s) and with those who have legally acquired such data, whether for private purposes or in the public interest.
The first vision is based on the centralization of power and wealth, with the resulting harmful effects on the social fabric: a very few individuals and companies influence elections, shape policy, control markets, define the limits of public debate and expression, and wield power. According to the linked article, in 1987 billionaires held wealth equal to 3% of global GDP. Today, they own the equivalent of 16% of global GDP.
A New Vision
While I have been writing about technology, composing public policy, and working in the field since the early 1980s, I also run a small technology company by the name of SNA Software LLC. Within the next few days, in collaboration with our technology partner Salutori Labs LLC, we will be releasing a portable, self-contained, non-internet-based AI assistant that can be carried on a dongle as well as used on a PC. The AI is called Salutori, or “Sal.”
Combined with an environment that uses a COTS solution to transform data into linked open data, any dataset can be leveraged for maximum value, informing the user while saving time, and with a high degree of confidence in the results, since the process is transparent and comprehensible to the user.
This AI digital assistant can be trained on specific libraries of curated data and references of your choosing. So, let’s say you are a project management analyst who wants to confirm, based on the analytics you are sourcing, whether you are in compliance with your contract or with the latest corporate or government standards. This solution will be your assistant, providing references and an analysis.
Or perhaps you are a lawyer with a particular way of performing research for the various legal specialties in which you engage with the public. Your research materials become the library on which the AI is trained, reducing the time to citations and providing the necessary information so that you can determine readily and quickly whether the information provided is applicable.
There are many other use cases for this kind of AI solution. In all cases, the data being used and the interactions between the user and the AI are not shared or mined for manipulative purposes or monetization. Once deployed, it is under the control of the user, at the service of the user. And as with all solutions based on linked open data, if something new and better comes along, the user neither loses control of their institutional or local knowledge nor needs to rely on a third party to access it. The generic idea, independent of any product, is sketched below.
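For readers who want the flavor of a local, user-controlled assistant without reference to any particular product (the sketch below is mine and has no connection to Salutori’s internals), here is a minimal offline retriever over a folder of documents. Nothing is transmitted anywhere, and the ranking logic is plain keyword overlap that anyone can inspect; the directory and query are hypothetical.

```python
from pathlib import Path

# A toy offline retriever over a user-controlled library of .txt files.
# No network calls: the library, the query, and the results all stay local.

def score(query: str, text: str) -> int:
    """Toy relevance score: how many query terms appear in the document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def search_library(query: str, library_dir: str, top_n: int = 3) -> list[tuple[str, int]]:
    """Rank local reference documents against a query, entirely offline."""
    docs = [(path.name, path.read_text(errors="ignore"))
            for path in Path(library_dir).glob("*.txt")]
    ranked = sorted(docs, key=lambda doc: score(query, doc[1]), reverse=True)
    return [(name, score(query, text)) for name, text in ranked[:top_n]]

# Hypothetical usage against a curated folder of contract clauses or notes:
# print(search_library("schedule variance threshold", "./my_library"))
```

A production assistant would use far better ranking and add a generation step, but the property that matters here is architectural: the library and the queries never leave the user’s machine.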
For additional information or any questions on this approach or the new product, go to the SNA Software LLC website and look for the imminent announcement of Sal on its news page.