Inspiration

We were inspired by the idea of transforming everyday smartphones into powerful autonomous navigation tools. Most robotics projects rely on dedicated sensors, but many modern phones already ship with LiDAR hardware capable of high-precision spatial mapping. We wanted to build something that could analyze environments, interpret depth data, and perform AI-driven tasks, all without extra hardware. KnightMobile was born from our desire to combine real-time LiDAR analysis with artificial intelligence, letting users issue simple commands and watch their phone understand and describe its surroundings the way a real autonomous robot would.

What it does

KnightMobile turns your phone’s LiDAR system into an intelligent spatial-awareness assistant. Users can send simple commands like “Find the red mug” through the dashboard, and the app interprets the LiDAR scan using AI. It analyzes 3D point clouds, estimates depth, identifies potential objects, and produces a detailed description of the scene. The backend processes and stores these scans while the Gemini Pro model performs the interpretation. Everything is visualized on our futuristic control dashboard, which displays mission logs, depth maps, and analysis results in real time. KnightMobile brings AI-assisted perception to the palm of your hand.
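
For a concrete picture, here is a minimal sketch of what issuing a command to the backend could look like. The endpoint path and payload fields are hypothetical, chosen just to illustrate the request/response flow described above.

```python
# Hypothetical example of sending a dashboard command to the Flask backend.
# The endpoint name ("/api/command") and payload fields are illustrative,
# not KnightMobile's actual API.
import requests

payload = {
    "command": "Find the red mug",  # natural-language objective from the user
    "scan_id": 42,                  # which stored LiDAR scan to analyze
}

resp = requests.post("http://localhost:5000/api/command", json=payload, timeout=10)
print(resp.json())  # e.g. {"status": "queued", "mission_id": 7}
```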

How we built it

We built KnightMobile using a combination of Flask, Socket.IO, and SQLAlchemy on the backend to handle LiDAR data, API endpoints, and database operations. The frontend, developed with React and Tailwind CSS, serves as an interactive mission-control dashboard for issuing commands and viewing system feedback. Google’s Gemini Pro model powers the LiDAR data interpretation, extracting semantic meaning from 3D scans. We implemented multithreading so the backend stays responsive during heavy AI tasks; a sketch of that pattern appears below. The project blends hardware data, machine learning, and real-time web development into one responsive and visually engaging platform.
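
Here is a minimal sketch of the non-blocking pattern, assuming a simple HTTP upload route and a Socket.IO push back to the dashboard; the route and event names are illustrative, and the Gemini call is stubbed out.

```python
# Sketch: accept LiDAR frames over HTTP, run the slow AI analysis off the
# request thread, and push results to the dashboard via Socket.IO.
from flask import Flask, jsonify, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def analyze_scan(scan_id, points):
    # Placeholder for the Gemini interpretation step; runs in the background
    # so the HTTP worker is never blocked by AI latency.
    description = f"Received {len(points)} depth points for scan {scan_id}"
    socketio.emit("analysis_complete", {"scan_id": scan_id, "result": description})

@app.route("/api/scan", methods=["POST"])
def receive_scan():
    data = request.get_json()
    socketio.start_background_task(analyze_scan, data["scan_id"], data["points"])
    return jsonify({"status": "processing"})

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```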

Challenges we ran into

We faced several major challenges throughout development. The biggest was getting smartphone LiDAR data into Flask in a format the backend could process efficiently: iOS sends JSON with thousands of depth points per frame, which forced us to optimize our data parsing. We also struggled with the Flask reloader freezing due to Socket.IO conflicts and AI initialization delays. Synchronizing real-time LiDAR streams with Gemini’s analysis demanded threading to prevent blocking calls. Debugging network issues and local backend connection errors while maintaining a stable data flow between the phone and the server was difficult, but each challenge sharpened our technical problem-solving under pressure.
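
As an illustration of the parsing optimization, converting each frame’s point list into a NumPy array in one shot, instead of looping over points in Python, keeps the per-frame cost low. The field layout below (a flat x, y, z list) is an assumption for the sketch.

```python
# Sketch: vectorized parsing of a dense LiDAR frame. Assumes the incoming
# JSON stores points as a flat [x0, y0, z0, x1, y1, z1, ...] float list.
import numpy as np

def parse_depth_frame(frame_json):
    pts = np.asarray(frame_json["points"], dtype=np.float32).reshape(-1, 3)
    depths = np.linalg.norm(pts, axis=1)  # Euclidean distance from the sensor
    return {
        "num_points": int(pts.shape[0]),
        "min_depth_m": float(depths.min()),
        "max_depth_m": float(depths.max()),
        "mean_depth_m": float(depths.mean()),
    }
```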

Accomplishments that we're proud of

We’re proud of building a fully functional AI-LiDAR analysis system that runs entirely on a smartphone paired with a laptop. Our Flask server successfully processes live LiDAR data, the Gemini model performs descriptive analysis, and our dashboard visualizes all of it cleanly. Achieving this without any external sensors or robotics hardware was a huge milestone. We also created a dynamic mission system where users define objectives and the AI generates contextual insights. KnightMobile proves that advanced perception and autonomy can be achieved with tools everyone already owns, making robotics more accessible to everyday innovators.
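
For flavor, a mission record in a system like this could be modeled with SQLAlchemy along these lines; the actual KnightMobile schema may differ, and every column here is an assumption.

```python
# Hypothetical SQLAlchemy model for a mission; illustrative only.
from datetime import datetime
from sqlalchemy import Column, DateTime, Integer, String, Text, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Mission(Base):
    __tablename__ = "missions"

    id = Column(Integer, primary_key=True)
    objective = Column(String(255), nullable=False)  # e.g. "Find the red mug"
    status = Column(String(32), default="pending")   # pending / running / done
    ai_insight = Column(Text)                        # Gemini's scene description
    created_at = Column(DateTime, default=datetime.utcnow)

engine = create_engine("sqlite:///knightmobile.db")
Base.metadata.create_all(engine)  # creates the table for local testing
```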

What we learned

This project taught us how crucial system design and threading are in real-time AI applications. We learned to manage concurrent AI calls, Socket.IO communication, and LiDAR data streams while preventing backend stalls. Working with Google’s Generative AI SDK showed us how to structure prompts for technical interpretation instead of natural conversation. We also gained experience balancing performance and readability across the backend architecture and the frontend UX. Most importantly, we learned that innovation doesn’t always require custom hardware: sometimes reimagining the potential of existing technology can lead to groundbreaking new capabilities.
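
The kind of prompt structuring we mean is sketched below using the google-generativeai SDK; the exact prompt wording and the stats fields are illustrative, not our production prompt.

```python
# Sketch: a constrained, technical prompt instead of open-ended chat.
# Prompt wording and input fields are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

def describe_scan(objective, stats):
    prompt = (
        "You are interpreting a LiDAR scan summary, not having a conversation.\n"
        f"Objective: {objective}\n"
        f"Scan: {stats['num_points']} points, depth range "
        f"{stats['min_depth_m']:.2f} m to {stats['max_depth_m']:.2f} m.\n"
        "Answer with: (1) likely scene layout, (2) candidate objects, "
        "(3) confidence caveats. Be concise and technical."
    )
    return model.generate_content(prompt).text
```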

What's next for KnightMobile

Our next goal is to make KnightMobile fully autonomous. We plan to integrate pathfinding and AR visualization so users can see AI-detected objects directly overlaid on their camera feed. We also want to develop a companion mobile app that streams LiDAR data seamlessly to the Flask backend without manual configuration. Expanding into multi-sensor fusion — combining LiDAR, vision, and IMU data — would make the system smarter and more accurate. Ultimately, KnightMobile will evolve into a flexible AI framework for LiDAR-based navigation, mapping, and exploration across robotics, accessibility, and augmented-reality fields.

Built With

Flask, Socket.IO, SQLAlchemy, React, Tailwind CSS, Google Gemini (Gemini Pro), Python
