DEMO

Verifiable AI inference

This is a live demo of verifiable AI inference built on Caution, running inside a secure enclave. The conversation is end-to-end encrypted and decryptable only by you and the enclave.
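
The exact wire protocol and key exchange Caution uses are not described here; as a rough illustration of the end-to-end encryption idea, the sketch below encrypts a prompt to a public key that would come from the enclave, using an ephemeral ECDH key and AES-GCM via WebCrypto. The function and parameter names are hypothetical.

```ts
// Illustrative only: Caution's actual message format, key distribution, and
// binding of the key to an attestation are assumptions, not its real API.
async function encryptForEnclave(
  prompt: string,
  enclavePublicKeyRaw: ArrayBuffer, // assumed: raw P-256 public key from a verified enclave
): Promise<{ ciphertext: ArrayBuffer; iv: Uint8Array; clientPublicKeyRaw: ArrayBuffer }> {
  // Import the enclave's public key for ECDH.
  const enclaveKey = await crypto.subtle.importKey(
    "raw", enclavePublicKeyRaw, { name: "ECDH", namedCurve: "P-256" }, false, [],
  );

  // Fresh ephemeral client key pair per conversation.
  const clientKeys = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"],
  );

  // Shared AES-GCM key: only the enclave's private key can derive the same one.
  const sharedKey = await crypto.subtle.deriveKey(
    { name: "ECDH", public: enclaveKey },
    clientKeys.privateKey,
    { name: "AES-GCM", length: 256 }, false, ["encrypt"],
  );

  // Encrypt the prompt; anything outside the enclave sees only ciphertext.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, sharedKey, new TextEncoder().encode(prompt),
  );

  // The client's public key travels with the message so the enclave can derive the same key.
  const clientPublicKeyRaw = await crypto.subtle.exportKey("raw", clientKeys.publicKey);
  return { ciphertext, iv, clientPublicKeyRaw };
}
```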

Problem

Most confidential computing solutions can prove that code hasn't changed since it was loaded, but not what code was loaded in the first place, and they still expose data to untrusted systems outside the enclave.

Solution

Caution combines verifiability with true end-to-end encryption: it proves what code runs inside the enclave and keeps conversations private.
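
A minimal sketch of what "prove what code runs inside the enclave" means in practice: the client (or an auditor) accepts the enclave's session key only if a hardware-signed attestation reports a code measurement matching a reproducible build of the published source. The field names and structure below are assumptions for illustration, not Caution's actual attestation format.

```ts
// Hypothetical attestation shape; real attestation documents differ by platform.
interface AttestationDocument {
  codeMeasurement: string;   // hash of the enclave image that was actually loaded
  enclavePublicKey: string;  // session key the enclave will decrypt with
  signatureValid: boolean;   // result of verifying the hardware vendor's signature chain
}

// Return the enclave's key only if both checks pass; otherwise trust nothing.
function trustedEnclaveKey(
  doc: AttestationDocument,
  expectedMeasurement: string, // computed independently from the audited, reproducible build
): string | null {
  if (!doc.signatureValid) return null;                         // not a genuine enclave
  if (doc.codeMeasurement !== expectedMeasurement) return null; // some other code was loaded
  return doc.enclavePublicKey;                                  // now safe to encrypt conversations to
}
```

The second check is the step most confidential computing setups skip: without an independently reproducible expected measurement, a matching hash only shows the code hasn't changed, not what it was to begin with.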

Verifiable AI Chat

Ask a question to see verifiable AI inference in action.

Powered by llama.cpp, running Phi-3.1-mini (3.8B) on CPU today; GPU support is coming.
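
Inside the enclave, the decrypted prompt is handed to the model. As a sketch only, assuming llama.cpp's llama-server and its OpenAI-compatible chat endpoint on localhost (the demo's actual wiring may differ):

```ts
// Enclave-side inference call against a local llama-server instance (assumed setup).
async function runInference(decryptedPrompt: string): Promise<string> {
  const res = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: decryptedPrompt }],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  // The reply would be re-encrypted to the client before leaving the enclave.
  return data.choices[0].message.content;
}
```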