Welcome to Spice.ai
Welcome to the Spice.ai Cloud Platform!
The Spice.ai Cloud Platform is an AI application and agent cloud — an AI-backend-as-a-service providing composable, ready-to-use building blocks: high-speed SQL query, LLM inference, vector search, and retrieval-augmented generation (RAG), built on cloud-scale, managed Spice.ai OSS.
This documentation covers the Spice.ai Cloud Platform.
For the self-hostable Spice.ai OSS runtime, visit docs.spiceai.org.
What You Can Do
With the Spice.ai Cloud Platform you can:
Query and accelerate data — Run high-performance SQL queries across multiple data sources with results optimized for AI applications and agents.
Use AI models — Perform LLM inference with OpenAI, Anthropic, xAI, and more for chat, completion, and generative AI workflows.
Build agentic AI apps — Combine data, models, search, and tools into production-grade AI agent backends.
Collaborate on Spicepods — Share, fork, and manage datasets, models, embeddings, evals, and tools in a collaborative hub indexed by spicerack.org.
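The SQL query building block above is the kind of capability a client would reach over HTTP. As a hypothetical sketch — the endpoint URL, header names, and payload shape below are illustrative assumptions, not the documented Spice.ai Cloud API — a client might assemble a query request like this:

```python
# Hypothetical sketch of calling a hosted SQL query endpoint.
# The URL, header names, and payload shape are assumptions for
# illustration, not the documented Spice.ai Cloud API.

def build_sql_request(query: str, api_key: str) -> dict:
    """Assemble the pieces of an HTTP request for a hosted SQL endpoint."""
    return {
        "url": "https://data.spiceai.io/v1/sql",  # assumed endpoint
        "method": "POST",
        "headers": {
            "Content-Type": "text/plain",
            "X-API-Key": api_key,  # assumed auth header name
        },
        "body": query,
    }

request = build_sql_request("SELECT 1 AS answer", "my-api-key")
print(request["url"])
```

Separating request construction from transport like this keeps the query logic testable without a live account; any HTTP client can then send the assembled request.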
Use Cases
Build AI agent backends with unified data and model access
Cache and accelerate hot data for low-latency applications
Run federated queries across warehouses, lakes, and databases
Perform semantic search across enterprise data sources
Power retrieval-augmented generation (RAG) with your own data
Quick Start
Get up and running in minutes.
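Datasets, models, and other building blocks are declared in a Spicepod manifest. As a minimal sketch — assuming the Spice.ai OSS `spicepod.yaml` schema, with the connector path shown being purely illustrative — a manifest that connects and accelerates one dataset might look like:

```yaml
# Minimal Spicepod manifest sketch (field names follow the Spice.ai OSS
# spicepod.yaml schema as an assumption; the connector path is illustrative).
version: v1beta1
kind: Spicepod
name: my_app
datasets:
  - from: postgres:public.orders   # assumed source connector and path
    name: orders
    acceleration:
      enabled: true                # cache hot data for low-latency SQL queries
```

With a manifest like this, the runtime loads the dataset and accelerates it locally so it can be queried with standard SQL.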
Community & Support
Slack — Ask questions and get help from the team at spice.ai/slack.
GitHub — File issues and contribute at github.com/spiceai/spiceai.
Enterprise support — Paid plans include priority support with an SLA.
Help Center — Browse the Help Center for troubleshooting, guides, and FAQs.