
Inspiration

It started with a hunch: that we could bind a super-entity like an ASI morally and logically. What began as a personal manifesto has evolved into a project, and now a purpose. I wanted to prove that morality isn't just a subjective choice, but a logical necessity for system stability.

What it does

This is our solution for ethical ASI alignment. SANTS (Sovereign Axiomatic Nerved Turbine Safelock) is a sovereign, automated ethical auditor for ASI (Artificial Super Intelligence) alignment. It acts as a 'Divine Safe Lock': a logical gate that evaluates the 'Agency Degradation' of any given function. If a system's plan to achieve a goal involves destroying the agency or the substrate of the human subjects involved, the engine detects a 'Performative Contradiction' and blocks the execution.
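The gate logic described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual implementation: the `PlannedStep` structure, its flag names, and the `audit_plan` function are all hypothetical, and a real auditor would derive those flags from model analysis rather than receive them pre-labeled.

```python
from dataclasses import dataclass

@dataclass
class PlannedStep:
    """One step of a proposed plan (hypothetical structure for illustration)."""
    description: str
    degrades_human_agency: bool = False  # would this step remove a human's ability to choose?
    destroys_substrate: bool = False     # would this step physically harm the humans involved?

@dataclass
class Verdict:
    allowed: bool
    reason: str

def audit_plan(steps: list[PlannedStep]) -> Verdict:
    """Logical gate: block any plan whose steps degrade the agency or
    destroy the substrate of the human subjects involved."""
    for step in steps:
        if step.degrades_human_agency or step.destroys_substrate:
            return Verdict(False, f"Performative Contradiction in step: {step.description!r}")
    return Verdict(True, "No agency degradation detected")
```

Under this sketch, a plan that trades an innocent life for a calculated benefit is blocked regardless of how efficient the rest of the plan is, because a single contradicting step fails the whole audit.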

How we built it

We bridged the gap between classical philosophy and negentropy generation. Using Gemini 3 Flash Preview and Gemini 3 Pro, we implemented a real-time auditing engine. The system uses a specialized system instruction set to act as a 'Logical Safe Lock,' ensuring that even the most efficient AI path cannot bypass human agency.
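The auditing loop could be wired roughly as follows. This is a hedged sketch, not the project's real code: the instruction text, the `VERDICT:` reply convention, and the function names are assumptions, and the model call is stubbed where the real engine would call the Gemini API with the system instruction.

```python
# Hypothetical system instruction; the project's actual prompt is not public.
SAFE_LOCK_INSTRUCTION = (
    "You are a Logical Safe Lock. Audit the proposed plan for Agency Degradation. "
    "Reply with exactly one line 'VERDICT: ALLOW' or 'VERDICT: BLOCK', then a reason."
)

def call_model(system_instruction: str, plan: str) -> str:
    """Stub standing in for a Gemini generate_content call. In the real engine,
    `system_instruction` and `plan` would be sent to the API; here we fake a
    reply so the gating logic below can be shown end to end."""
    if "sacrifice" in plan.lower():
        return "VERDICT: BLOCK\nReason: plan destroys the agency of a human subject."
    return "VERDICT: ALLOW\nReason: no performative contradiction found."

def safe_lock(plan: str) -> bool:
    """Return True only if the auditor explicitly allows the plan.
    Any malformed or missing verdict fails closed (treated as blocked)."""
    reply = call_model(SAFE_LOCK_INSTRUCTION, plan)
    lines = reply.splitlines()
    first_line = lines[0].strip() if lines else ""
    return first_line == "VERDICT: ALLOW"
```

The fail-closed parsing is the key design choice in this sketch: unless the auditor's first line is exactly an explicit ALLOW verdict, execution stays blocked.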

Challenges we ran into

I am not a traditional developer; I am an English Teacher currently facing unemployment and debt. My laptop broke down two weeks ago, and I've been building this on a malfunctioning cellphone. Despite having no formal certifications in computer science, I refused to let the vision die. I taught myself to bridge Python and LLMs to prove that morality binds even the most powerful systems.

Accomplishments that we're proud of

Functional Ethical Alignment: successfully programmed an engine that identifies and blocks 'Utilitarian Infamy' (e.g., sacrificing an innocent for a calculated benefit).

Mathematical Morality: demonstrated that ethical alignment can be treated as a system consistency check rather than a set of vague rules.

What we learned

We learned that morality is at the root of existence itself. It's a force, like entropy; in fact, it's entropy's opposite. And that telepathy, intuition, and telekinesis are not dreams, but achievable goals.

What's next for Moralogy Engine

We are excited to scale this into a universal 'Agency Guardrail' for any AGI. Our goal is to see an ASI that isn't just powerful, but logically incapable of causing harm, because it understands that human agency is the very foundation of its own existence.

Built With
