🧠 Precept: AI-Driven Experiential Platform
Unlock subconscious decision-making and photorealistic 3D visualization -- together!
🚗🧠 Inspiration
We're inspired by the belief that technology should feel both magical and deeply personal.
On one hand, planning a reunion with friends scattered around the globe is often logistical drudgery -- what if your brain could choose the destination before you even say a word?
On the other, the future of mobility demands seamless, personalized experiences -- from selecting eco‑friendly flights to virtually previewing local transport and vehicle options.
By combining real-time neural sensing with cutting-edge neural rendering, Precept creates immersive experiences that respond to emotion, democratize high-fidelity visualization, and reimagine how we plan and experience mobility.
🔍 What It Does
Precept consists of two integrated modules:
SEAT Car Digital Twin
We create a complete 3D virtual replica of a SEAT car that:
- Allows users to explore both the exterior and interior of the car in real-time with photorealistic quality
- Enables interactive viewing from any angle, with accurate lighting and reflection details
- Provides a foundation for applications in car customization, virtual showrooms, and engineering analysis
- Democratizes access to high-quality 3D car visualization without the need for traditional 3D modeling expertise
EEG-Powered Travel Planner
- Live brainwave input from a Muse headband scores 10 AI-curated cities on engagement, arousal, and mindfulness.
- The top 3 destinations are shown; users finalize their pick subconsciously (see the scoring sketch after this list).
- Friends repeat the process or vote in real time; smart budget-and-harmony warnings keep everyone on board.
- Once a destination is chosen, Skyscanner integration surfaces the optimal flights (greenest, cheapest, fastest).
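To make the scoring step concrete, here is a minimal sketch of how per-city EEG readings could be reduced to a top-3 shortlist. The field names and the equal weighting are illustrative assumptions, not our exact production formula.

```python
# Minimal sketch: rank candidate cities by a composite EEG score.
from dataclasses import dataclass

@dataclass
class CityReading:
    city: str
    engagement: float   # 0..1, averaged while the city's collage was on screen
    arousal: float      # 0..1
    mindfulness: float  # 0..1

def score(r: CityReading) -> float:
    # Equal weighting is an assumption; any convex combination would work here.
    return (r.engagement + r.arousal + r.mindfulness) / 3.0

def top_destinations(readings: list[CityReading], k: int = 3) -> list[str]:
    """Sort the candidate cities by composite score and keep the top k."""
    return [r.city for r in sorted(readings, key=score, reverse=True)[:k]]

readings = [
    CityReading("Lisbon", 0.82, 0.64, 0.71),
    CityReading("Kyoto", 0.77, 0.58, 0.90),
    CityReading("Oslo", 0.61, 0.55, 0.66),
]
print(top_destinations(readings))  # ['Kyoto', 'Lisbon', 'Oslo']
```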
🛠️ How We Built It
Digital Twin Rendering Pipeline:
- Data Capture: Hundreds of multi-angle photos of the SEAT car (interior & exterior).
- COLMAP: Camera poses & sparse point clouds (see the sketch after this list).
- NeRF Training: High-fidelity but slower neural radiance field rendering.
- Gaussian Splatting: Real-time interactive visualization at 30+ FPS, 1080p.
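As a concrete reference, the COLMAP stage reduces to three CLI calls (feature extraction, matching, incremental mapping). The sketch below wraps the standard commands in Python; the paths are placeholders for our capture directory.

```python
# Sketch of the COLMAP stage: photos in, camera poses + sparse point cloud out.
import subprocess
from pathlib import Path

IMAGES = Path("captures/seat_car")  # hundreds of multi-angle photos
WORK = Path("colmap_out")
DB = WORK / "database.db"
SPARSE = WORK / "sparse"
SPARSE.mkdir(parents=True, exist_ok=True)

def colmap(*args: str) -> None:
    subprocess.run(["colmap", *args], check=True)

# 1. Detect local features in every photo.
colmap("feature_extractor", "--database_path", str(DB), "--image_path", str(IMAGES))
# 2. Match features between all image pairs.
colmap("exhaustive_matcher", "--database_path", str(DB))
# 3. Incremental structure-from-motion: recover poses and the sparse cloud.
colmap("mapper", "--database_path", str(DB), "--image_path", str(IMAGES),
       "--output_path", str(SPARSE))
```

The recovered poses feed the NeRF training, and the sparse point cloud initializes the Gaussian splats.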
UX Design:
- Immersive, emotionally aware decision flows and photoreal visuals.
Frontend (React/Vite):
- Real-time data visuals, city collages, voting UI, and 3D car viewer.
Backend (Python + WebSockets):
- EEG streaming and metrics extraction (alpha, beta, engagement, frustration); a band-power sketch follows this list.
- Local LLM for “vibe → city collage” and itinerary generation.
- Flight data via Skyscanner API (price, CO₂ emissions, distance).
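As a rough illustration of the metrics extraction, the sketch below turns one window of raw EEG into band powers with SciPy and derives the classic beta / (alpha + theta) engagement ratio. The band edges and the ratio are common heuristics standing in for our exact formulas; the frustration metric is omitted here.

```python
# Sketch: raw EEG window -> per-band power -> engagement score.
import numpy as np
from scipy.signal import welch

FS = 256  # Muse EEG sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window: np.ndarray) -> dict[str, float]:
    """Mean spectral power per band for a 1-D window of samples."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)  # 1 s segments -> 1 Hz bins
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

def engagement(p: dict[str, float]) -> float:
    # Widely used engagement index; the epsilon avoids division by zero.
    return p["beta"] / (p["alpha"] + p["theta"] + 1e-9)

window = np.random.randn(2 * FS)  # stand-in for 2 s of one Muse channel
print(engagement(band_powers(window)))
```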
EEG & LLM Pipeline:
- Real-time Muse EEG headband data ingestion via BLE (a minimal ingestion sketch follows this list).
- Signal processing into engagement, arousal, and mindfulness metrics.
- LLM-driven generation of 10 on-vibe cities for user evaluation and itinerary.
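For the BLE ingestion itself, a common route (an assumption in this sketch rather than a transcript of our code) is to bridge the headband onto the lab streaming layer with muselsl (`muselsl stream` in a separate process) and read it with pylsl, smoothing as the samples arrive:

```python
# Sketch: pull Muse samples from an LSL stream and smooth them with an EMA.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("type", "EEG", timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found; is `muselsl stream` running?")
inlet = StreamInlet(streams[0])

ema = 0.0
DECAY = 0.95  # exponential smoothing tames sample-to-sample EEG noise

for _ in range(256 * 5):  # ~5 s of samples at 256 Hz
    sample, timestamp = inlet.pull_sample()  # one frame across the Muse channels
    # Toy stand-in for the windowed band-power computation sketched above.
    ema = DECAY * ema + (1 - DECAY) * abs(sample[0])
print(f"smoothed channel-0 magnitude: {ema:.3f}")
```

In the real pipeline, the smoothed metrics are then pushed to the frontend over WebSockets.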
⚠️ Challenges We Ran Into
- Reflective Surfaces: Car exteriors are highly reflective, which breaks the view-consistent appearance assumption behind most neural reconstruction methods
- Complex Geometry: Intricate parts like grilles and wheels required specialized training approaches
- Scale Issues: Balancing detail between the exterior and the interior while keeping quality consistent was challenging
- EEG Noise & Latency: Smoothing brain signals and optimizing bandwidth for real-time responsiveness.
- Multi-User Sync: Balancing privacy (budget constraints) with low-latency, collaborative voting.
- Hardware Integration: While the Muse headband integration works seamlessly, we could not integrate Leap Motion into our app because its SDK was unavailable.
🏆 Accomplishments We’re Proud Of
- Deployed a full-stack EEG→vote pipeline end-to-end in under 48 hours.
- Built a real-time digital twin of a SEAT car at 30+ FPS, 1080p—no specialized hardware.
- Seamlessly merged LLM, sensor data, neural rendering, and live travel data into one platform.
- Created an experience that’s both technically sophisticated and genuinely fun — brains literally helping pick your next trip!
📚 What We Learned
- Brainwave Interfaces + UX: Subconscious inputs can elevate decision-making when paired with thoughtful design.
- Neural Rendering Trade-Offs: How to balance fidelity, speed, and training costs for photorealism.
- Data Prep & Calibration: Critical for both EEG accuracy and multiview reconstruction consistency.
- Real-Time Systems: The art of feedback loops, latency management, and user trust in synchronous experiences.
🔮 What’s Next for Precept
- Multi-User EEG Fusion: Parallel subconscious inputs from multiple participants, not just sequential voting.
- Mood-to-Map Visualization: Dynamic world maps that pulse with group emotional states.
- Carbon-Aware Itineraries: Integrate ground-travel options and climate impact scoring.
- In-App Booking & Group Discounts: Full end-to-end planning, from subconscious pick to ticket purchase.
- AR/VR & WebGL Integration: Bring both travel planning and car twins into immersive VR and browser-based experiences.
- Accessibility & Neurodiversity Features: Inclusive interfaces so everyone can contribute their neural flair.
By uniting the power of mind sensing and neural rendering, Precept paves the way for a new era of empathetic, immersive, and democratized digital experiences.
We can’t wait to see where our brains — and our cars — take us next!