Run powerful AI locally — no cloud, no limits, complete privacy.
Features • Installation • How It Works • Development • License
This application is a core component of the Sigma Eclipse project — a privacy-focused AI ecosystem. It works seamlessly with:
- Sigma Browser — A privacy-first browser with built-in AI capabilities
- Sigma Eclipse Extension — Browser extension that connects to this local LLM server
Together, these components provide a complete solution for running AI locally while browsing the web, ensuring your data stays on your machine.
Sigma Eclipse LLM is a lightweight desktop application that lets you run large language models (LLMs) locally on your machine. No API keys, no subscriptions, no data leaving your computer — just pure, private AI at your fingertips.
Built with Tauri and powered by llama.cpp, Sigma Eclipse LLM combines native performance with a beautiful, intuitive interface.
- One-click setup — automatically downloads everything you need
- Zero configuration — smart defaults that just work
- Clean interface — no clutter, no confusion
- 100% local — your data never leaves your machine
- No accounts — no sign-ups, no tracking, no telemetry
- Offline capable — works without internet after initial setup
- GPU acceleration — automatic GPU detection and optimization
- Multiple models — switch between models easily
- Native performance — Rust backend with minimal resource usage
- Browser integration — seamless connection with Sigma browser extension
- macOS (Apple Silicon)
- Windows (x64)
Download the latest release for your platform:
| Platform | Download |
|---|---|
| macOS (ARM) | Sigma Eclipse.dmg |
| Windows | Sigma Eclipse Setup.exe |
- Open Sigma Eclipse
- Wait for automatic setup — the app downloads llama.cpp and the default model (~3-6 GB)
- Click "Start" — your local AI server is now running!
That's it. No terminal commands, no manual downloads, no config files.
┌─────────────────────────────────────────────────────────────┐
│ Sigma Eclipse │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ React UI │◄──►│ Tauri Core │◄──►│ llama.cpp │ │
│ │ (Frontend) │ │ (Rust) │ │ (Server) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────┐ │
│ │ Native Messaging │ │
│ │ (Browser API) │ │
│ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
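The Native Messaging box in the diagram follows the browsers' native-messaging protocol: each JSON message is preceded by a 32-bit length prefix in native byte order. A minimal framing sketch, for illustration only (the real implementation lives in `src-tauri/src/native_messaging.rs`):

```python
import json
import struct

def encode_message(msg: dict) -> bytes:
    """Frame a JSON message with a 32-bit native-endian length prefix,
    as the browser native-messaging protocol requires."""
    body = json.dumps(msg).encode("utf-8")
    return struct.pack("=I", len(body)) + body

def decode_message(data: bytes) -> dict:
    """Read the length prefix, then parse that many bytes as JSON."""
    (length,) = struct.unpack("=I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

# Round trip: what the extension sends is what the app reads back.
framed = encode_message({"type": "ping"})
assert decode_message(framed) == {"type": "ping"}
```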
Sigma Eclipse manages a local llama.cpp server that provides an OpenAI-compatible API. This means:
- 🌐 Local API endpoint at `http://localhost:8080`
- 🔌 Compatible with any tool that supports the OpenAI API
- 🧩 Native messaging lets browser extensions communicate directly with the app
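Because the server exposes an OpenAI-compatible API, any standard client can talk to it. A hedged sketch using only the Python standard library (the port is the default above; the `/v1/chat/completions` route is llama.cpp's OpenAI-compatible endpoint):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # default local endpoint

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a single user turn."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the server running (click "Start" in the app):
# print(chat("Summarize this page in one sentence."))
```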
Access settings via the ⚙️ gear icon:
| Setting | Description | Default |
|---|---|---|
| Context Size | Maximum conversation context (tokens) | Auto-detected |
| GPU Layers | Number of layers offloaded to GPU | Auto-detected |
| Model | Select from available models | Gemma 2B |
💡 Tip: Sigma Eclipse automatically detects your hardware and suggests optimal settings.
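The auto-detection above can be pictured as a simple heuristic: estimate how many model layers fit in free VRAM and offload that many. The constants below are illustrative assumptions, not the app's actual detection logic:

```python
def suggest_gpu_layers(free_vram_mb: int, total_layers: int,
                       layer_size_mb: int = 120) -> int:
    """Illustrative heuristic: offload as many layers as fit in free VRAM,
    keeping a safety margin for the KV cache and scratch buffers."""
    margin_mb = 1024  # reserve ~1 GB headroom (assumed figure)
    usable = max(0, free_vram_mb - margin_mb)
    return min(total_layers, usable // layer_size_mb)

# A GPU with 8 GB free can usually hold a small model entirely:
# suggest_gpu_layers(8192, total_layers=26)  # → 26
```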
- Node.js v18+
- Rust (latest stable)
- Platform-specific dependencies:
  - macOS: `xcode-select --install`
  - Windows: Visual Studio C++ Build Tools
```bash
# Clone the repository
git clone https://github.com/ai-swat/sigma-eclipse-llm.git
cd sigma-eclipse-llm

# Install dependencies
npm install

# Run in development mode
npm run tauri dev
```

To create a production build:

```bash
npm run tauri build
```

Built artifacts will be in `src-tauri/target/release/bundle/`.
sigma-eclipse-llm/
├── src/ # React frontend
│ ├── components/ # UI components
│ ├── hooks/ # React hooks
│ ├── styles/ # CSS styles
│ └── types/ # TypeScript types
├── src-tauri/ # Rust backend
│ ├── src/
│ │ ├── main.rs # Entry point
│ │ ├── server.rs # LLM server management
│ │ ├── download/ # Model & binary downloads
│ │ └── native_messaging.rs
│ └── tauri.conf.json # Tauri configuration
└── package.json
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the PolyForm Noncommercial License 1.0.0.
TL;DR: Free for personal, educational, and non-commercial use. Contact us for commercial licensing.
- llama.cpp — The amazing LLM inference engine
- Tauri — Framework for building tiny, fast desktop apps
- Hugging Face — Model hosting and community
Made with ❤️ by AI SWAT