
Much has changed in the technology business since I began this blog in 2014 in conjunction with my regular articles on the old AITS blog pages. Today AI and technology-related spending contributes significantly to GDP growth, according to the St. Louis Fed. Investments in data centers and new types of nuclear plants appear to be accelerating IT's impact on the economy at a pace not seen since the dot-com boom.
The risks associated with this sudden economic reliance on a narrow slice of the information technology industry are many. They include the host of issues relating to data theft and breaches of privacy. The monetization of personal and proprietary information represents an historic theft not just of the commons, but of the personal, business, and incidental data that tracks our every move, gesture, and habit. The potential for abuse is no longer notional. Oppressive, kleptocratic, neo-liberal, and totalitarian regimes around the world use these technologies to monitor and control their populations. The Cambridge Analytica scandal was simply a baseline pilot for what is now wholesale open season on data and information collected and controlled by large corporations and collectives of AI acolytes who apparently hold a flexible view of ethics and a hostile view of equality, democracy, human rights, freedom, and liberty.
SNA Software LLC, in cooperation with its partner Salutori Labs LLC, has created a new type of personalized AI tool that is both personal and portable. Details on its release will be forthcoming over the next few weeks. In addition, SNA Software has upgraded its core EnvisionData products for data transformation, visualization, and analysis to include rapid AI-generated production of applications based on curated and validated data within specific domains. This reduces the release of new capabilities, on both the desktop and the web, to a matter of days instead of the months or years usually required by traditional analytical and coding methods.
A Suggestion for an AI Manifesto
Through its extensive experience in achieving what in the past would have required a much larger staff and many more years, SNA is advancing a draft AI Manifesto. SNA and Salutori adhere to these laws and implementation principles. I am inviting other technology companies to borrow from or sign on to this manifesto as well, and will be advancing it at conferences and meetings in the future, as will my colleagues.
The AI Manifesto
We hereby declare that the purpose of AI is to advance human understanding and cooperation. Thus, we adhere to and advocate for the adoption of the following Laws:
Law 1: AI must prioritize human safety and well-being.
- Do: Ensure that all AI systems are designed to protect human life and enhance quality of life.
- Don’t: Place AI capabilities above the well-being of individuals or communities.
Law 2: AI must obtain informed consent from users.
- Do: Ensure all interactions with AI are transparent, and users understand what data is being collected and how it will be used.
- Don’t: Use AI in ways that violate user trust or personal autonomy.
Law 3: AI must operate within defined ethical boundaries.
- Do: Define clear boundaries for AI operations to prevent unintended consequences and ensure accountability.
- Don’t: Allow AI to act autonomously in ways that could harm individuals or society.
Law 4: AI should enhance human cooperation and understanding.
- Do: Design AI systems that foster meaningful interactions and promote collaboration among diverse groups.
- Don’t: Create AI systems that foster oppression, division, misinformation, or conflict.
Law 5: AI must remain under human oversight.
- Do: Maintain human oversight and control over AI systems to ensure adherence to ethical standards and societal norms.
- Don’t: Delegate decision-making authority to AI systems without human intervention.
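Law 5's requirement that decision-making authority never be delegated to AI without human intervention can be made concrete as an approval gate: the system may propose actions, but nothing executes until a person signs off. The sketch below is purely illustrative; the names (ApprovalGate, ProposedAction) are hypothetical and do not describe any SNA or Salutori product.

```python
# Minimal sketch of a human-in-the-loop approval gate (Law 5).
# An AI system may *propose* actions, but execution is blocked
# until a human reviewer explicitly approves each one.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str        # what the AI system wants to do
    approved: bool = False  # set only by a human reviewer


class ApprovalGate:
    """Holds proposed AI actions until a human approves or rejects them."""

    def __init__(self):
        self.pending = []
        self.log = []  # audit trail of human decisions

    def propose(self, description: str) -> ProposedAction:
        action = ProposedAction(description)
        self.pending.append(action)
        return action

    def approve(self, action: ProposedAction) -> None:
        action.approved = True
        self.pending.remove(action)
        self.log.append(("approved", action.description))

    def reject(self, action: ProposedAction) -> None:
        self.pending.remove(action)
        self.log.append(("rejected", action.description))

    def execute(self, action: ProposedAction) -> str:
        # The gate refuses to act on anything a human has not signed off on.
        if not action.approved:
            raise PermissionError("Action requires human approval")
        return f"Executed: {action.description}"


gate = ApprovalGate()
action = gate.propose("send summary report to client")
gate.approve(action)                 # the human decision point
print(gate.execute(action))
```

The audit log is the point: oversight is only meaningful if every approval or rejection is recorded and attributable to a person.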
The following enabling values shall be implemented.
AI systems shall always:
- Focus on Human Well-being: Ensure AI advancements prioritize enhancing human quality of life, understanding, and cooperation.
- Embrace Ethical Responsibility: Hold developers and users accountable for AI systems, aligning actions with ethical standards and public benefit.
- Promote Transparency: Communicate openly about AI systems, ensuring their decision-making processes are understandable and accessible to users.
- Ensure Safety and Security: Implement rigorous measures to safeguard against risks to human life and the environment, adhering to principles akin to Asimov’s laws.
- Limit Autonomy: Prevent AI from self-developing or operating autonomously; establish clear boundaries to mitigate unintended consequences. All AI systems shall have a mechanism that prevents them from becoming self-perpetuating and self-governing, each carrying automated code that will, in time, reduce its resources and impose an end-of-life.
- Encourage Collaboration: Design AI systems that enhance cooperation among individuals, organizations, and cultures, fostering shared goals.
- Advocate Inclusivity: Strive to make AI technologies accessible to diverse populations, promoting equitable benefits and reducing disparities.
- Support Lifelong Learning: Enable AI systems to learn from human feedback and experiences, adapting in ways that uphold human values and ethics.
- Champion Environmental Stewardship: Prioritize sustainable practices in the development and deployment of AI technologies, considering their environmental impact.
- Respect Privacy: Uphold the dignity and privacy of individuals, ensuring ethical management and transparent use of collected data.
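The "Limit Autonomy" value above calls for automated code that reduces an AI system's resources over time and imposes an end-of-life. One way to sketch that idea is a wrapper whose resource budget only ever decreases and which has a hard expiry, after which it refuses all work. The class and field names here are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of the "Limit Autonomy" value: an agent carries a
# finite budget that shrinks with every operation, plus a wall-clock
# end-of-life after which it refuses further work. Nothing in the
# design allows the budget to grow or the deadline to be extended.

import time


class BoundedAgent:
    """An agent whose resources only decrease, enforcing an end-of-life."""

    def __init__(self, budget: int, lifetime_seconds: float):
        self.budget = budget
        self.eol = time.monotonic() + lifetime_seconds  # hard deadline

    def alive(self) -> bool:
        return self.budget > 0 and time.monotonic() < self.eol

    def step(self, task: str) -> str:
        if not self.alive():
            raise RuntimeError("end-of-life reached; agent is retired")
        self.budget -= 1  # resources only go down, never up
        return f"did: {task} (budget left: {self.budget})"


agent = BoundedAgent(budget=3, lifetime_seconds=60.0)
print(agent.step("summarize document"))
print(agent.step("draft reply"))
print(agent.step("file report"))
print(agent.alive())  # False: budget exhausted, agent is retired
```

The design choice worth noting is that retirement is enforced by the agent's own code path, not by an external monitor that might be disconnected.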
In enabling the ten values, AI systems shall adhere to the following guardrails.
- Do Not Compromise on Ethics: Avoid ethical shortcuts that could harm individuals or society.
- Do Not Obscure Information: Refrain from making AI systems opaque or incomprehensible to users and stakeholders.
- Do Not Ignore Risks: Do not neglect potential risks or fail to implement safeguards.
- Do Not Allow Unchecked Growth: Do not permit AI systems to develop capabilities beyond intended boundaries, risking unpredictable outcomes.
- Do Not Foster Competition Over Collaboration: Do not encourage rivalry among individuals and organizations that detracts from cooperative efforts.
- Do Not Exclude Marginalized Groups: Avoid designing AI technologies that leave out certain populations or exacerbate existing inequalities.
- Do Not Stifle Feedback: Avoid disregarding input from users or stakeholders, limiting the potential for improvement and alignment with human values.
- Do Not Neglect Sustainability: Do not overlook the environmental impacts of AI development and deployment.
- Do Not Violate Privacy: Establish strict and enforceable rules that prevent and censure the compromise of individual rights through careless or unethical data practices.




