Introduction

As artificial intelligence becomes more powerful and integrated into society, a critical question arises: Who should define the ethical principles that guide AI behavior? Companies like Anthropic have taken it upon themselves to create AI “constitutions”—ethical guidelines designed to regulate AI outputs. However, this approach raises serious concerns about accountability, legitimacy, and alignment with democratic governance. AI should not operate on values created by private entities but must instead adhere to ethical and legal frameworks established by policymakers. If AI is to be truly aligned with human society, its principles must be rooted in enforceable law, not corporate preferences.

The Problem with Private AI Ethics

Currently, AI companies such as Anthropic and OpenAI are developing their own ethical frameworks for AI alignment. While these efforts may be well-intentioned, they introduce several problems:

  • Lack of Democratic Oversight – AI companies are private entities, driven by business interests, investor demands, and competition. Unlike governments, they are not accountable to the public. Allowing them to define AI values means placing ethical decision-making in the hands of corporations rather than democratically elected representatives.

  • Inconsistent and Arbitrary Standards – Each AI company defines its own set of ethical principles, leading to conflicting standards between different AI models. This lack of uniformity creates confusion and makes enforcement nearly impossible.

  • No Legal Enforcement Mechanism – AI companies can change their ethical frameworks at any time without consequence. Unlike laws passed through democratic processes, corporate ethical guidelines have no legal force, making them unreliable in ensuring long-term AI alignment.

AI Must Follow the Same Legal and Ethical Principles as Society

If AI is to be safely integrated into human civilization, it must be governed in the same way as people and organizations. The best way to achieve this is to derive AI’s ethical values from the same legal frameworks that regulate human behavior:

  • Constitutional and Legal Alignment – In the United States, AI should be aligned with the U.S. Constitution and federal laws. The same principle should apply in other countries, where AI must comply with their national legal systems.

  • State-Level Adaptability – Just as states have their own legal codes, AI should be able to adjust to local regulations while still complying with overarching national laws. This ensures AI respects both federal and state-level governance structures.

  • Regulatory Oversight and Public Accountability – Governments should establish AI oversight bodies to enforce compliance with legal and ethical principles. This would prevent private companies from dictating AI behavior based on corporate interests rather than societal needs.

A Structured Framework for AI Governance

The Self-Alignment Framework (SAF) offers a structured solution for integrating AI into legal systems while allowing adaptability to local governance. SAF provides a closed-loop self-regulation system, where AI’s values are not arbitrarily defined by corporations but instead derived from enforceable legal structures.

Under SAF, AI’s ethical framework would be structured as follows:

  • National Values: AI adheres to the fundamental legal principles of the country it operates in.

  • State & Regional Values: AI can adapt to local regulations while maintaining compliance with national laws.

  • Regulatory Verification: AI alignment is certified through an independent oversight body, ensuring compliance with existing legal and ethical norms.
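The layered structure above can be sketched as a simple compliance check: an action is permitted only if it passes every layer of rules, national first, then state or regional. This is a hypothetical illustration, not an actual SAF implementation; the rule names and data structures are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class RuleSet:
    """One layer of legal rules (e.g. national or state). Illustrative only."""
    name: str
    prohibited: set = field(default_factory=set)

def is_compliant(action: str, layers: list) -> bool:
    # An action must be allowed by every layer; national rules are
    # checked alongside state rules, so a state layer can only add
    # restrictions, never override national law.
    return all(action not in layer.prohibited for layer in layers)

# Hypothetical rule sets for illustration.
national = RuleSet("national", {"defamation"})
state = RuleSet("state-CA", {"biometric_profiling"})

print(is_compliant("weather_summary", [national, state]))      # True
print(is_compliant("biometric_profiling", [national, state]))  # False
```

In this sketch, regulatory verification would amount to an oversight body auditing the rule sets themselves, rather than each individual action.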

Conclusion

For AI to be truly aligned with human society, its ethical framework must be defined by policymakers, not corporate entities. Allowing private AI companies to dictate ethical principles risks undermining democracy, creating conflicting standards, and removing legal accountability. Instead, AI should follow the same legal frameworks that govern human behavior, ensuring alignment with constitutional rights, democratic governance, and enforceable laws.

This debate is crucial as AI continues to evolve. If we fail to address this issue now, we risk creating an AI ecosystem where private interests, not public governance, dictate the ethical principles that guide these systems.