🫁 LungScapeAI
🌟 Inspiration
Chronic lung diseases such as COPD, pulmonary fibrosis, and interstitial lung disease progress slowly and unevenly across the lungs. While medical imaging provides detailed scans, these are typically viewed as static 2D slices, making it difficult to understand how different lung regions function and deteriorate.
For clinicians, subtle regional changes are easy to miss. For patients, the problem is even greater — they are often shown grayscale images and numerical metrics that offer little intuition about what is actually happening inside their lungs.
We were inspired to bridge this gap by asking a simple question:
What if lung health could be seen, explored, and understood - not just measured?
LungScapeAI was created to turn complex lung imaging into an interactive, educational, and spatial experience that empowers both clinicians and patients.
🪄 What it does
LungScapeAI is an AI-powered interactive lung visualization platform that transforms 2D lung scans into an explorable 3D model with functional insights.
Key features
3D Lung Reconstruction
Converts standard CT/MRI scans into anatomically accurate 3D lung models using imaging metadata and AI-based segmentation.
Anatomical Layering
Users can toggle and explore lungs, bronchi, alveoli, blood vessels, pleura, and disease regions independently.
Functional Overlays
Visualizes lung expansion, airflow pathways, stiffness, and air trapping directly on the 3D model.
Insight-to-Anatomy Mapping
Clinical metrics are linked to specific lung regions, allowing users to understand where and why changes are occurring.
Learning-First Interface
Preserves medical terminology while teaching anatomy through interactive highlights, animations, and guided exploration.
Natural Interaction
Supports mouse-based and camera-based hand gestures for intuitive 3D navigation.
AI Explanation Layer
Uses an LLM to explain anatomy, metrics, and visual findings in context - without performing diagnosis.
⚙️ How we built it
LungScapeAI combines medical imaging AI, geometry-based reconstruction, and interactive web visualization.
Pipeline overview (illustrative code sketches for each step follow the list)
1. Input Processing
- Accepts lung CT/MRI scans (DICOM)
- Extracts spatial metadata such as slice thickness and orientation
2. AI Segmentation
- Uses U-Net–based models to segment lungs and internal structures
- Identifies regions such as airways, alveoli-dense zones, and diseased tissue
3. 3D Reconstruction
- Stacks segmented slices into volumetric data
- Generates optimized surface meshes using marching cubes
4. Functional Analysis
- Computes interpretable metrics (lung volume, expansion, stiffness proxies)
- Maps metrics back to anatomical regions
5. Visualization & Interaction
- Renders real-time 3D lungs in the browser
- Enables layer toggling, rotation, slicing, and region highlighting
6. LLM Explanation Layer
- Translates metrics and visuals into contextual explanations
- Powers guided learning and Q&A without medical inference
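To make step 1 concrete, here is a minimal sketch of loading a DICOM series with pydicom and collecting the spatial metadata used later for reconstruction. The directory layout, field handling, and spacing conventions are illustrative assumptions, not the exact project code.

```python
# Step 1 (Input Processing) sketch: load a CT series with pydicom and return
# the voxel volume plus the spatial metadata (spacing, orientation).
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str):
    """Read a CT series, sort slices by position, and return volume + spacing."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort along the scan axis using the z component of ImagePositionPatient.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])

    # Spatial metadata: in-plane pixel spacing plus slice thickness (mm).
    row_spacing, col_spacing = (float(v) for v in slices[0].PixelSpacing)
    slice_thickness = float(slices[0].SliceThickness)
    spacing = (slice_thickness, row_spacing, col_spacing)

    orientation = [float(v) for v in slices[0].ImageOrientationPatient]
    return volume, spacing, orientation
```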
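For step 2, the project uses nnU-Net; the sketch below only shows the generic slice-wise inference wrapper, assuming a trained 2D segmentation model in PyTorch and a rough HU normalization window that we picked for illustration.

```python
# Step 2 (AI Segmentation) sketch: apply a trained U-Net-style model slice by
# slice and return an integer label mask (0=background, 1=lung, 2=airway, ...).
import numpy as np
import torch


@torch.no_grad()
def segment_volume(model: torch.nn.Module, volume: np.ndarray) -> np.ndarray:
    model.eval()
    masks = []
    for ct_slice in volume:
        # Normalize HU values to roughly [0, 1] (assumed window: -1000..400 HU).
        x = np.clip((ct_slice + 1000.0) / 1400.0, 0.0, 1.0)
        x = torch.from_numpy(x).float()[None, None]   # shape (1, 1, H, W)
        logits = model(x)                             # shape (1, C, H, W)
        masks.append(logits.argmax(dim=1)[0].cpu().numpy())
    return np.stack(masks).astype(np.uint8)
```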
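Step 3 can be sketched with scikit-image's marching cubes plus PyVista for mesh cleanup; the label value, decimation factor, and smoothing iterations here are assumptions chosen for readability.

```python
# Step 3 (3D Reconstruction) sketch: binary mask -> surface mesh via marching
# cubes, then decimate/smooth so it streams and renders quickly in the browser.
import numpy as np
import pyvista as pv
from skimage import measure


def mask_to_mesh(mask: np.ndarray, spacing, label: int = 1) -> pv.PolyData:
    binary = (mask == label).astype(np.uint8)
    verts, faces, _, _ = measure.marching_cubes(binary, level=0.5, spacing=spacing)

    # PyVista expects triangle faces flattened as [3, i0, i1, i2, 3, ...].
    pv_faces = np.hstack([np.full((faces.shape[0], 1), 3), faces]).ravel()
    mesh = pv.PolyData(verts, pv_faces)

    return mesh.decimate(0.6).smooth(n_iter=30)
```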
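For step 4, the simplest interpretable metric is lung volume derived from the mask and voxel spacing; the regional expansion and stiffness proxies in the project build on the same mask-plus-spacing idea.

```python
# Step 4 (Functional Analysis) sketch: lung volume in liters from a label mask.
import numpy as np


def lung_volume_liters(mask: np.ndarray, spacing_mm, label: int = 1) -> float:
    voxel_volume_mm3 = float(np.prod(spacing_mm))   # mm^3 per voxel
    n_voxels = int(np.count_nonzero(mask == label))
    return n_voxels * voxel_volume_mm3 / 1e6        # 1 liter = 1e6 mm^3
```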
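On the backend side of step 5, a FastAPI endpoint can hand pre-exported layer meshes to the React/Three.js frontend. The glTF export paths, layer names, and route are hypothetical, shown only to illustrate the handoff.

```python
# Step 5 (Visualization & Interaction) sketch: serve a reconstructed layer mesh
# so the Three.js frontend can fetch and render it.
from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse

app = FastAPI()

# Hypothetical mapping from anatomical layer name to a pre-exported mesh file.
LAYER_MESHES = {
    "lungs": "meshes/lungs.glb",
    "airways": "meshes/airways.glb",
    "vessels": "meshes/vessels.glb",
}


@app.get("/api/mesh/{layer}")
def get_layer_mesh(layer: str):
    path = LAYER_MESHES.get(layer)
    if path is None:
        raise HTTPException(status_code=404, detail=f"Unknown layer: {layer}")
    return FileResponse(path, media_type="model/gltf-binary")
```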
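Finally, step 6 stays grounded by building the LLM prompt directly from computed metrics and the highlighted region, with explicit instructions to explain rather than diagnose. The wording and metric names below are illustrative.

```python
# Step 6 (LLM Explanation Layer) sketch: construct a grounded, non-diagnostic
# explanation prompt from the current region and its metrics.
def build_explanation_prompt(region: str, metrics: dict) -> str:
    metric_lines = "\n".join(f"- {name}: {value}" for name, value in metrics.items())
    return (
        "You are an educational assistant inside a lung visualization tool.\n"
        f"The user is looking at the region: {region}.\n"
        "Measured values for this region:\n"
        f"{metric_lines}\n"
        "Explain what these structures and values mean in plain language. "
        "Do not diagnose, predict outcomes, or recommend treatment."
    )


# Example usage with hypothetical values:
print(build_explanation_prompt("right lower lobe", {"volume_l": 1.2, "expansion_pct": 14}))
```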
Tech stack
- AI: PyTorch, nnU-Net
- Medical Imaging: pydicom, SimpleITK
- 3D & Geometry: VTK, PyVista, marching cubes
- Frontend: React, Three.js, TypeScript, TailwindCSS
- Backend: FastAPI
- Interaction: OpenCV, MediaPipe
- LLMs: Explanation and UI guidance
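As a closing example for the interaction layer, here is a minimal sketch of camera-based gesture input with OpenCV and MediaPipe Hands: it tracks the index fingertip and emits normalized coordinates that the frontend could map to model rotation. The gesture-to-rotation mapping is an assumption, not the project's exact scheme.

```python
# Gesture input sketch: track the index fingertip with MediaPipe Hands and
# print normalized (x, y) coordinates for the 3D navigation layer.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalized to 0..1.
        tip = result.multi_hand_landmarks[0].landmark[8]
        print(f"rotate target -> x={tip.x:.2f}, y={tip.y:.2f}")
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

cap.release()
hands.close()
```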