About SAF

The Self-Alignment Framework (SAF) is an architecture for operationalizing ethics. It provides a verifiable process to ensure that artificial intelligence, institutions, and individuals can align their actions with their stated values consistently, transparently, and at scale.

Origin

SAF began not in a lab, but as a personal project to reverse-engineer coherent human decision-making. It started with a single question: How do we ensure our actions remain true to our values?

The answer emerged from a synthesis of philosophical inquiry, building directly on the foundation laid by Thomas Aquinas, who formalized the distinct roles of the Intellect and the Will. From this starting point, we introduced two pivotal innovations to the classical model.

Originally a model for human reasoning, SAF found its most urgent application in AI governance. Its human-centric language is not an attempt to anthropomorphize machines but to create a precise vocabulary for building systems that act as faithful moral actors, upholding defined values with integrity and accountability.

Mission

Our mission is to enable any system, whether individual, artificial, or institutional, to achieve and demonstrate true coherence between its declared values and its operational outcomes. We are committed to both technical implementation and ethical stewardship.

Vision

We envision a future where AI, corporations, and public institutions operate under a shared standard for ethical reasoning. In this future, intelligent systems are not black boxes with opaque, vendor-defined values, but transparent agents that faithfully serve the principles of the communities they belong to.

Core Values

Governance

To protect its core mission while scaling its impact, SAF is governed by a dual-structure model.

SAFi

The Governance Engine for AI