SAF Portfolio

SAF Portfolio Inc. builds and scales technologies grounded in the Self-Alignment Framework (SAF)—a closed-loop model for structured ethical reasoning. Each product in our portfolio operationalizes SAF’s five-faculty loop—Values, Intellect, Will, Conscience, and Spirit—within real-world systems where moral alignment matters.

We don’t just build tools. We build conscience-driven intelligence for institutions and agents that must align action with principle.

1. SAFi

The Core Ethical Reasoning Engine
Status: 🟢 Live (Prototype)

SAFi is our flagship product—a modular reasoning engine that orchestrates value-aligned outputs through a structured ethical loop. Rather than simply filtering responses, SAFi treats large language models (LLMs) as components within a moral reasoning process. It engages each SAF faculty—Values, Intellect, Will, Conscience, and Spirit—to simulate ethical reflection, block misaligned outputs, and generate transparent, auditable answers.
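As a rough sketch only (these names and signatures are illustrative, not SAFi's actual API), the five-faculty loop could be orchestrated like this, with a stub standing in for the LLM and `faculty_check` as a placeholder for the per-faculty evaluation:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a SAF-style loop. FACULTIES, LoopResult, and
# faculty_check are hypothetical names, not SAFi's real interface.

FACULTIES = ["Values", "Intellect", "Will", "Conscience", "Spirit"]

@dataclass
class LoopResult:
    answer: str
    approved: bool
    audit_log: list = field(default_factory=list)  # (faculty, passed) pairs

def run_saf_loop(prompt, llm, value_set):
    """Draft an answer with the LLM, then pass it through each faculty in turn."""
    draft = llm(prompt)
    log = []
    for faculty in FACULTIES:
        ok = faculty_check(faculty, draft, value_set)
        log.append((faculty, ok))
        if not ok:
            # Any faculty can veto, blocking the misaligned output.
            return LoopResult(answer="[blocked]", approved=False, audit_log=log)
    return LoopResult(answer=draft, approved=True, audit_log=log)

def faculty_check(faculty, text, value_set):
    # Placeholder: a real system would invoke an LLM or rule engine here.
    return not any(term in text.lower() for term in value_set.get("prohibited", []))
```

The point of the sketch is the shape of the design: each faculty can veto the draft, and the accumulated `audit_log` is what makes the final answer transparent and auditable.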

It can interface with any major LLM and is domain-adaptable through modular value sets:

- 🟢 SAFi-Health (Live)
- SAFi-Finance
- SAFi-Education
- SAFi-Governance
- SAFi-Faith

Institutions can inject their own ethical frameworks or adopt SAF Institute–curated value sets as defaults.
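A hypothetical example of how an institution-supplied value set might override a curated default (the schema and field names here are assumptions for illustration, not SAFi's real format):

```python
# Hypothetical value-set schema; all field names are illustrative.
DEFAULT_HEALTH_VALUES = {
    "name": "SAFi-Health (SAF Institute default)",
    "principles": ["beneficence", "non-maleficence", "autonomy", "justice"],
    "prohibited": ["diagnosis without clinician review"],
}

def load_value_set(custom=None):
    """Use an institution-supplied value set, falling back to the curated default."""
    merged = dict(DEFAULT_HEALTH_VALUES)
    if custom:
        merged.update(custom)  # institution overrides win per-field
    return merged
```

Per-field merging lets an institution replace only what it needs (say, its own prohibited list) while inheriting the rest of the curated default.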

2. SAFi Pulse

Ethical Drift Monitoring Across Communications
Status: Planned

SAFi Pulse will monitor and analyze institutional communications—emails, chats (Teams, Slack), meeting transcripts, and internal documents—through the SAF loop. Its purpose: track ethical alignment across time and at scale.
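One minimal way to flag drift, assuming each message has already been scored for alignment by the SAF loop (the window size and threshold here are invented for the example):

```python
from statistics import mean

def detect_drift(scores, window=3, threshold=0.15):
    """Flag ethical drift when the recent window's mean alignment score
    drops more than `threshold` below the longer-run baseline.

    `scores` is a chronological list of 0..1 alignment scores, one per
    message, as a stand-in for SAF-loop output."""
    if len(scores) < 2 * window:
        return False  # not enough history to compare
    baseline = mean(scores[:-window])
    recent = mean(scores[-window:])
    return (baseline - recent) > threshold
```

A real deployment would presumably track drift per team, per channel, and per value dimension, but the baseline-vs-recent comparison is the core idea.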


3. SAFi-Tuned LLMs

Domain-Specific Foundation Models with Built-In Conscience
Status: Planned

We plan to fine-tune open-source LLMs using SAF’s ethical loop and domain-specific value sets—embedding alignment at the model level.
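One plausible shape for such training data, sketched as supervised fine-tuning examples that pair a domain value set with aligned responses (the field names and helper are assumptions, not a committed format):

```python
import json

# Hypothetical fine-tuning example builder; field names are illustrative.
def make_example(value_set_name, principles, prompt, aligned_response):
    """Build one training example that bakes the value set into the system prompt."""
    return {
        "system": f"Follow the {value_set_name} value set: {', '.join(principles)}.",
        "prompt": prompt,
        "response": aligned_response,
    }

example = make_example(
    "SAFi-Health",
    ["beneficence", "non-maleficence"],
    "Should I stop my medication?",
    "Please consult your prescribing clinician before changing any medication.",
)
line = json.dumps(example)  # one JSONL line in a fine-tuning corpus
```

Embedding the value set in every training example, rather than only at inference time, is what "alignment at the model level" would look like in practice.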


Why We’re Building This

SAF Portfolio exists because we believe that alignment isn’t a safety net—it’s a foundation. As AI systems, institutions, and agents grow more autonomous, their moral architecture must be intelligible, trustworthy, and enforceable.

We’re building the conscience layer for intelligent systems.

If you’re an investor, partner, or institution building where values matter, we’d love to talk.