Inspiration
Sleep is something we all experience, but few of us fully understand. We wanted to build an app that not only improves how we wake up, but also helps us better understand our subconscious mind. Lumio was inspired by our curiosity about dream patterns, REM cycles, and the lack of smart tools to reflect on what our sleep says about us. We aimed to bridge technology, psychology, and wellness — and bring our dreams to light.
What it does
Lumio is a cross-device sleep companion that:
• Detects the lightest point in the user's sleep cycle within a user-defined wake window, using Apple Watch data, to minimize grogginess; if no clear low point is found in that window, it falls back to a randomized wake-up time within the range (sketched after this list)
• Detects REM phases and prompts users to record their dreams
• Uses a voice-driven AI assistant to guide the user through recalling their dream
• Generates an AI-powered summary of the dream and logs it in a personalized dream journal
• Provides real-time sleep phase visualization and daily sleep insights
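To make the smart-wake step concrete, here is a simplified sketch of the selection logic, assuming stage-tagged samples have already been pulled from the watch. The `SleepSample` type and its stage names are illustrative stand-ins, not Lumio's actual data model.

```swift
import Foundation

// Hypothetical, simplified stand-in for HealthKit's sleep samples.
struct SleepSample {
    enum Stage { case awake, rem, core, deep }
    let date: Date
    let stage: Stage
}

/// Picks a wake-up time inside the user-defined window: the first
/// moment classified as light (core) or awake sleep wins; otherwise
/// fall back to a randomized time within the window.
func wakeTime(in window: ClosedRange<Date>, samples: [SleepSample]) -> Date {
    if let light = samples
        .filter({ window.contains($0.date) })
        .first(where: { $0.stage == .core || $0.stage == .awake }) {
        return light.date
    }
    // Fallback: randomized wake-up somewhere in the window.
    let span = window.upperBound.timeIntervalSince(window.lowerBound)
    return window.lowerBound.addingTimeInterval(.random(in: 0...span))
}
```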
How we built it
We began by designing the user interface in Figma, focusing on creating a clean, intuitive experience for voice-driven sleep and dream interactions. These designs were then implemented in SwiftUI using Xcode as our primary development environment.
For iOS development, we used Swift along with HealthKit and WatchConnectivity to enable real-time sleep phase monitoring and communication between iPhone and Apple Watch. Local dream data is stored using Core Data, while a modular architecture manages sleep logic, smart wake notifications, voice input, and journaling flow.
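For a sense of the HealthKit side, a minimal sleep-stage query looks roughly like the sketch below; the eight-hour lookback is illustrative, and the real app layers authorization prompts, error handling, and WatchConnectivity sync on top.

```swift
import HealthKit

// Sketch: fetch last night's sleep-analysis samples.
// Assumes HealthKit authorization was already granted.
let store = HKHealthStore()
let sleepType = HKObjectType.categoryType(forIdentifier: .sleepAnalysis)!
let predicate = HKQuery.predicateForSamples(
    withStart: Calendar.current.date(byAdding: .hour, value: -8, to: Date()),
    end: Date()
)
let query = HKSampleQuery(sampleType: sleepType,
                          predicate: predicate,
                          limit: HKObjectQueryNoLimit,
                          sortDescriptors: nil) { _, samples, _ in
    // Map raw category values onto named sleep stages (iOS 16+).
    let stages = (samples as? [HKCategorySample])?.compactMap {
        HKCategoryValueSleepAnalysis(rawValue: $0.value)
    } ?? []
    print("Fetched \(stages.count) sleep-stage samples")
}
store.execute(query)
```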
The AI dream assistant is powered by the OpenAI GPT-4 API, which generates contextual dream summaries based on voice-to-text transcripts captured through the app.
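As a rough illustration of that call, a dream-summary request can be made with plain URLSession along these lines. The model name, system prompt, and response parsing here are illustrative, not Lumio's exact values.

```swift
import Foundation

// Sketch: send a voice-to-text transcript to the Chat Completions
// endpoint and pull out the generated dream summary.
func summarizeDream(transcript: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // Illustrative system prompt; the real one is tuned for tone.
    let payload: [String: Any] = [
        "model": "gpt-4",
        "messages": [
            ["role": "system",
             "content": "Summarize the user's dream in 2-3 gentle, reflective sentences."],
            ["role": "user", "content": transcript]
        ]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: payload)

    let (data, _) = try await URLSession.shared.data(for: request)
    // Pull the first completion out of the JSON response.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return message?["content"] as? String ?? ""
}
```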
Throughout the development process, we used Git and GitHub for version control and collaboration, working inside Xcode as the most natural IDE for iOS development. Our project is licensed under the MIT License for open and flexible distribution.
Challenges we ran into
• Git merge conflicts: four people hacking on Swift files at 3 a.m. meant we kept stepping on each other's commits. One spectacular rebase nuked half the UI layer and took an hour of cherry-picking to fix.
• One of our biggest challenges was designing an AI assistant that could carry a natural, guided conversation to help users recall their dreams. Unlike a typical chatbot, our assistant needed to prompt users with emotionally aware, reflective questions, feel like a calm companion, and adapt to unstructured voice-to-text input.
• Integrating multiple asynchronous services (HealthKit, OpenAI, mic input) into a smooth SwiftUI experience.
• Working with WatchConnectivity, especially syncing real-time sleep data between iPhone and Apple Watch (see the sketch below).
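For context on that last point, the watch-to-phone plumbing boils down to WCSession messages along the lines of this sketch. The class name, message keys, and logging are hypothetical, and real code also has to queue updates for when the counterpart is unreachable.

```swift
import WatchConnectivity

// Sketch: the watch pushes each detected stage change, and the
// phone's delegate picks it up. Error handling is trimmed.
final class SleepSyncController: NSObject, WCSessionDelegate {
    override init() {
        super.init()
        guard WCSession.isSupported() else { return }
        WCSession.default.delegate = self
        WCSession.default.activate()
    }

    // Watch side: push a stage change as soon as it is detected.
    func send(stage: String, at date: Date) {
        guard WCSession.default.isReachable else { return }
        WCSession.default.sendMessage(
            ["stage": stage, "timestamp": date.timeIntervalSince1970],
            replyHandler: nil,
            errorHandler: nil
        )
    }

    // Phone side: receive the update and hand it to the UI / wake logic.
    func session(_ session: WCSession,
                 didReceiveMessage message: [String: Any]) {
        print("Stage update:", message)
    }

    func session(_ session: WCSession,
                 activationDidCompleteWith activationState: WCSessionActivationState,
                 error: Error?) {}

    #if os(iOS)
    func sessionDidBecomeInactive(_ session: WCSession) {}
    func sessionDidDeactivate(_ session: WCSession) {}
    #endif
}
```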
Accomplishments that we're proud of
• Built a working AI voice assistant that holds a natural, dream-guided conversation and summarizes user input using GPT-4 — all within a SwiftUI app.
• Successfully integrated Apple Watch sleep data using HealthKit and WatchConnectivity to detect sleep phases in real-time.
• Designed and implemented a clean, accessible UI in Figma and brought it to life with SwiftUI.
• Created an intelligent wake optimization system that triggers alarms during light sleep phases or defaults to a smart fallback within a user-defined window.
Chi - Built the core of the AI assistant workflow, including voice-to-text transcription, managing GPT prompt flow, and handling API interaction logic for generating contextual dream summaries.
Isaac - Led the integration of Apple Watch and HealthKit, enabling real-time sleep phase detection and communication between iPhone and Apple Watch. Also helped implement the smart wake logic based on sleep stage data.
Pranav - Helped develop the app's SwiftUI frontend, managing dream journal architecture, screen transitions, and clean data flow between components. Also handled GitHub version control, build organization, and app structure in Xcode, and contributed to the dream journaling backend and to generating dream summaries using OpenAI.
Yejin - Designed the app’s user interface and user flow in Figma, ensuring a clean and intuitive UX. Contributed to frontend styling in SwiftUI and helped define the brand identity and visual direction for the app.
What we learned
Throughout this project, we gained a deeper understanding of how to bring together hardware, AI, and thoughtful UX to solve a complex problem. We learned how to work with HealthKit and WatchConnectivity to access real-time sleep data from Apple Watch, and how to build voice-based experiences using Apple’s Speech framework. We also explored the nuances of prompt engineering with GPT-4 to guide conversations that feel human, helpful, and emotionally aware. On the frontend side, we learned how to rapidly prototype in Figma and translate designs into responsive SwiftUI views. Most importantly, we learned how to collaborate under time pressure, break down responsibilities effectively, and turn a high-level concept into a working product.
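As a small taste of the Speech framework piece, transcribing a recorded clip looks roughly like this. Lumio listens to live mic input (which uses `SFSpeechAudioBufferRecognitionRequest` instead), but the file-based variant below shows the same shape with less setup.

```swift
import Speech

// Sketch: transcribe an audio file once speech-recognition
// permission has been granted.
func transcribe(audioURL: URL, completion: @escaping (String) -> Void) {
    let recognizer = SFSpeechRecognizer()
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    _ = recognizer?.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        // The final transcript is what gets summarized by GPT-4.
        completion(result.bestTranscription.formattedString)
    }
}
```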
What's next for Lumio
Looking ahead, we hope to keep improving Lumio's current features while adding the ones we couldn't get to during the hackathon. One feature we're especially interested in is a login flow, so we can associate dream journals and sleep data with specific users and retain that data whenever they log back in. For existing features, we plan to polish the UI/UX so it looks cleaner and smoother, keep refining the chatbot to make it as error-free as possible, and, as we originally intended, integrate the app with the Apple Watch interface itself for more accurate sleep monitoring.