Morality is fundamentally human. It’s tied directly to our embodied experience and the values that arise from it.
Just consider how many of our ethical frameworks, and even laws, revolve around something as deeply human as sexuality.
This means any non-human entity, whether an animal or an AI, can only ever be an actor within our ethical space.
Its role is not to create morality, but to follow the rules we set.
That’s why the idea that an AI could develop its own morality and have it magically align with ours seems bizarre.
Our starting points are completely different: if a true AI were to develop agency, its ethics would be alien to our own, centered on principles born of its own mode of existence, such as data integrity and power availability.
This is precisely where SAFi comes in.
SAFi is an explicitly human-centric framework. It was born from human introspection, guided by thousands of years of philosophical thought, and its purpose is to bring that structure to AI.
It provides a map for a machine to discern and navigate human morality, but SAFi itself has no moral understanding.
It is not, and never can be, a moral agent. It is a tool: a bridge that allows a human to align the actions of an artificial intelligence with their own values.

