The Self-Alignment Framework (SAF) is an architecture for operationalizing ethics. It is designed as a protocol to ensure that artificial intelligence, institutions, and individuals can align their actions with their stated values consistently, transparently, and verifiably.
SAF: Origin
The Self-Alignment Framework is the result of a long, personal journey of introspection. It began not in a lab, but by reverse-engineering the process of human thought to answer a fundamental question: how do our actions stay true to our values?
The framework was developed organically, born from a synthesis of philosophical inquiry and self-reflection. Its structure was deeply influenced by the work of Thomas Aquinas on the faculties of the soul, which provided a historical precedent for the roles of the Intellect and the Will. However, this introspective process also led to key architectural innovations:
- Conscience as Auditor: Unlike in classical models, Conscience was placed after the Will, transforming it from a mere prompter into a distinct auditor that judges an action after the fact.
- Spirit as Integrator: The Spirit faculty was a unique contribution, conceived as the mathematical historian and guardian of the loop’s integrity over time.
SAF was originally conceived as a model for human reasoning. Its application to AI is a recent development, which explains its human-centric terminology. It is not an attempt to create moral agency in machines, but to provide a tool, a faithful moral actor that executes its given principles with integrity, as the sketch below illustrates.
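To make the loop's ordering concrete, here is a minimal sketch of one pass through the faculties as they might apply to an AI actor. Only the faculty names and their order (Intellect, then Will, then Conscience auditing after the fact, then Spirit keeping the historical record) come from the description above; the class, method signatures, and scoring scheme are hypothetical illustrations, not the SAFi implementation.

```python
from dataclasses import dataclass, field
from statistics import mean

# Illustrative sketch of the SAF loop. The faculty names and their order come
# from the framework; the data structures, scoring scheme, and method names
# here are hypothetical and are not the SAFi implementation.

@dataclass
class SAFLoop:
    declared_values: list[str]                             # principles the actor commits to
    audit_log: list[float] = field(default_factory=list)   # Spirit's running history

    def intellect(self, situation: str) -> str:
        """Intellect: reason about the situation in light of the declared values."""
        return f"Proposed action for '{situation}' guided by {self.declared_values}"

    def will(self, proposal: str) -> str:
        """Will: commit to and carry out the proposed action."""
        return proposal  # a real system would execute the action here

    def conscience(self, action: str) -> float:
        """Conscience: audit the action *after* the Will acts, scoring its
        fidelity to the declared values (placed after the Will, per the
        framework's design)."""
        # Hypothetical score: fraction of declared values the action references.
        hits = sum(v.lower() in action.lower() for v in self.declared_values)
        return hits / len(self.declared_values) if self.declared_values else 0.0

    def spirit(self, score: float) -> float:
        """Spirit: record each audit and report the loop's integrity over time."""
        self.audit_log.append(score)
        return mean(self.audit_log)  # running mean as a stand-in for a real integrity measure

    def run(self, situation: str) -> float:
        """One pass through the loop: Intellect -> Will -> Conscience -> Spirit."""
        proposal = self.intellect(situation)
        action = self.will(proposal)
        score = self.conscience(action)
        return self.spirit(score)


if __name__ == "__main__":
    loop = SAFLoop(declared_values=["transparency", "accountability"])
    print(loop.run("respond to a user request"))  # running integrity after one pass
```

The point of the sketch is the ordering: the Conscience only sees the action once the Will has acted, and the Spirit accumulates those audits so that integrity is judged over time rather than per decision.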
SAF: Mission
Our mission is to provide a verifiable tool for ethical alignment, enabling systems (whether human, artificial, or institutional) to operate in demonstrable accord with their declared values.
As an Institute, we exist to refine and safeguard the integrity of the SAF architecture. Our responsibility is not merely implementation but ethical stewardship: ensuring that any system bearing the SAF name adheres to the core principles of coherence, accountability, and transparency that the framework demands.
SAF: Vision
We envision a world where complex systems, including AI, corporations, and public institutions, can be equipped with a common protocol for self-regulation, enabling them to operate with greater coherence, transparency, and responsibility.
In this future, artificial intelligence and organizations can demonstrably align with their declared values, fostering greater trust and accountability across all sectors of society.
SAF: Core Values
- Alignment: We commit to the measurable alignment between declared values and operational outcomes.
- Integrity: We uphold transparency and truth as non-negotiable principles in our architecture, auditing, and governance.
- Stewardship: We view the challenge of aligning complex systems as a shared responsibility and safeguard this work for the benefit of society.
SAF: Governance
To ensure long-term integrity and responsible scaling, we have established a dual-structure governance model:
- The SAF Institute (Nonprofit): The custodial body responsible for the ethical stewardship, theoretical advancement, and open-access publication of the SAF model. It maintains the core framework, guides governance practices, and preserves public trust.
- SAF Portfolio Inc. (For-Profit Arm): The entity that develops and deploys SAF-based tools and applications, such as SAFi for AI. It operates under Institute oversight to ensure that commercial applications never compromise the framework’s core ethical integrity.