Inspiration

One of my favorite shows is House M.D., where the brilliant but curmudgeonly Dr. Gregory House solves the most baffling medical mysteries through his exceptional diagnostic prowess. What fascinates me most about the show is the intricate process of differential diagnosis – the methodical elimination of possible conditions until the true culprit is revealed. This diagnostic challenge is not just compelling television; it reflects a genuine challenge in modern medicine. That's why I built Second Opinion – to put a bit of House's genius in every medical professional's pocket, helping them navigate complex symptoms and arrive at accurate diagnoses faster than ever before. The show demonstrates how even the most talented physicians can miss crucial connections or overlook rare conditions. Second Opinion aims to reduce these diagnostic blind spots by leveraging advanced AI to analyze symptoms, suggest possibilities, and explain its reasoning – all in service of getting patients the right treatment sooner. Just as House's team collaborates to solve medical puzzles, Second Opinion acts as an additional team member, offering insights and alternative perspectives when they're needed most.

What problem does Second Opinion solve?

Medical diagnosis, particularly in the context of rare diseases, is often a complex and time-consuming process. Clinicians must evaluate a wide array of symptoms, analyze the patient's medical history, and interpret test results, all of which can complicate the path to an accurate diagnosis. This complexity can lead to delays in initiating the correct treatment and create uncertainty in clinical decision-making. Rare diseases pose an even greater challenge, as many healthcare professionals lack direct experience with these conditions due to their infrequency. With over 10,000 identified rare diseases, most of which are unfamiliar to the average clinician, the diagnostic process becomes especially daunting. In this context, the use of advanced technology to support diagnosis is no longer a luxury—it is a necessity. Having access to intelligent tools that assist in generating possible diagnoses is essential to enhance diagnostic accuracy and improve the overall quality of care.

What it does

Second Opinion is an AI-powered diagnostic assistant that helps medical professionals identify potential diagnoses based on patient symptoms and information. It collects comprehensive patient data such as age, sex, weight, height, and symptoms, and allows uploading of relevant medical images for analysis. Using advanced AI models, it generates a differential diagnosis and suggests additional symptoms to check in order to refine the diagnosis. It provides detailed reasoning behind each potential diagnosis and can connect users with nearby medical facilities when needed. The application aims to reduce diagnostic time, especially for rare or complex conditions, by giving medical professionals an AI-assisted second opinion.

How we built it

We built Second Opinion using React for frontend components and the UI, and Next.js as the application framework. Styling and UI components were developed using Tailwind CSS and Shadcn/UI. For diagnosis processing, we integrated the Perplexity AI API, which includes specialized medical models. Geolocation APIs help identify nearby medical facilities, and Perplexity's reasoning models are used to explain diagnostic decisions. The core functionality relies on structured API calls that transform patient data into potential diagnoses, supported by additional interfaces to refine and explain the results.
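As a rough sketch of how these pieces fit together (the route path, environment variable name, and prompt text below are illustrative, not necessarily the exact ones in our code), a Next.js API route that forwards patient data to the Perplexity API might look like:

```javascript
// app/api/diagnose/route.js — illustrative Next.js route handler (names are examples)
import { NextResponse } from "next/server";

export async function POST(request) {
  const { patientData } = await request.json();

  // Forward the patient summary to Perplexity's chat completions endpoint
  const response = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`, // assumed env var name
    },
    body: JSON.stringify({
      model: "sonar-pro",
      messages: [
        { role: "system", content: "You are a professional medical diagnostician..." },
        { role: "user", content: patientData },
      ],
    }),
  });

  const data = await response.json();
  return NextResponse.json(data);
}
```

Keeping the Perplexity call in a server route keeps the API key out of the browser; the frontend only posts patient data to this route and renders the structured result.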

Hackathon Submission: AI-Powered Medical Diagnostic Tool

How the Sonar API Was Used

Our AI-powered medical diagnostic tool leverages Perplexity's Sonar API in four distinct and powerful ways to create a comprehensive healthcare assistance platform:

  1. Core Differential Diagnosis Engine (sonar-pro model)

The heart of our application uses the Sonar API to generate differential diagnoses based on patient data. We implemented structured JSON responses to ensure consistent, reliable medical information:

  • Multi-modal Input Processing: Our system processes both text-based symptoms and medical images (uploaded via Supabase) by sending image URLs directly to the Sonar API
  • Structured Medical Analysis: Using JSON schema validation, we ensure each diagnosis includes:
    • Condition name and description
    • Matching vs. non-matching symptoms
    • Suggested diagnostic tests
    • Symptom correlation analysis
  • Domain-Specific Search: We restrict searches to trusted medical domains (rxlist.com, drugs.com, medlineplus.gov, cdc.gov, nih.gov) to ensure medical accuracy

```javascript
// Core diagnosis API call structure
const requestBody = {
  model: 'sonar-pro',
  messages: [
    { role: 'system', content: 'Professional medical diagnostician prompt...' },
    {
      role: 'user',
      content: [
        { type: 'text', text: patientData },
        ...imageContents // Direct image URL processing
      ]
    }
  ],
  search_domain_filter: ['rxlist.com', 'drugs.com', 'medlineplus.gov', 'cdc.gov', 'nih.gov'],
  response_format: { type: "json_schema", ... }
};
```
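The response_format above is elided; for illustration, here is a minimal sketch of the kind of JSON Schema it wraps, with field names chosen to mirror the bullets above (the actual schema in our code is more detailed):

```javascript
// Illustrative JSON Schema for the structured diagnosis response.
// Field names are examples that mirror the fields listed above.
const diagnosisSchema = {
  type: "object",
  properties: {
    diagnoses: {
      type: "array",
      items: {
        type: "object",
        properties: {
          condition: { type: "string" },             // Condition name
          description: { type: "string" },           // Short clinical description
          matchingSymptoms: { type: "array", items: { type: "string" } },
          nonMatchingSymptoms: { type: "array", items: { type: "string" } },
          suggestedTests: { type: "array", items: { type: "string" } },
          symptomCorrelation: { type: "string" }     // How strongly the symptoms correlate
        },
        required: ["condition", "description", "matchingSymptoms"]
      }
    }
  },
  required: ["diagnoses"]
};
```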

  2. Intelligent Symptom Refinement System (sonar-pro model)

To improve diagnostic accuracy, we implemented a secondary Sonar API call that generates additional relevant symptoms based on the initial patient presentation:

  • Contextual Symptom Generation: After the initial diagnosis, the system automatically identifies related symptoms that could help narrow down the differential
  • Interactive Refinement: Users can select from AI-generated additional symptoms to recalculate their diagnosis
  • Iterative Improvement: The system supports multiple refinement cycles, with each recalculation incorporating newly selected symptoms (a sketch of this follow-up call appears below)
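A minimal sketch of what this follow-up call can look like (the helper name, prompt wording, and environment variable are illustrative):

```javascript
// Illustrative secondary Sonar call that proposes additional symptoms to check.
async function suggestAdditionalSymptoms(initialSymptoms, candidateDiagnoses) {
  const response = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`, // assumed env var name
    },
    body: JSON.stringify({
      model: "sonar-pro",
      messages: [
        {
          role: "user",
          content:
            `Given these reported symptoms: ${initialSymptoms.join(", ")} ` +
            `and these candidate diagnoses: ${candidateDiagnoses.join(", ")}, ` +
            `list additional symptoms worth checking to narrow the differential.`,
        },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content; // parsed into selectable checkboxes in the UI
}
```

Each time the user ticks one of the suggested symptoms, the combined symptom list is sent back through the core diagnosis call, which is what makes the refinement iterative.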

  3. Advanced Medical Reasoning Engine (sonar-reasoning-pro model)

Our most innovative use of the Sonar API employs the reasoning-capable model to provide detailed medical explanations:

  • Deep Clinical Analysis: For each potential diagnosis, we generate detailed reasoning explaining why the condition matches the patient's presentation
  • Citation-Backed Explanations: The reasoning responses include clickable citations linking to medical sources
  • AI Thinking Process Visualization: We extract and display the AI's step-by-step reasoning process using the model's <think> tags
  • Structured Medical Logic: Each reasoning response includes:
    • Clinical reasoning summary
    • Step-by-step thought process
    • Strong diagnostic indicators

```javascript
// Reasoning API implementation with citation handling
const reasoningResponse = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  body: JSON.stringify({
    model: "sonar-reasoning-pro",
    messages: [
      { role: "user", content: `Analyze why ${condition} is diagnosed given symptoms...` }
    ],
    response_format: { type: "json_schema", ... }
  })
});
```

```javascript
// Extract and process citations for clickable links
if (data.citations && Array.isArray(data.citations)) {
  setCitations(data.citations);
}
```
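To surface the thinking process mentioned above, the reasoning text is separated from the final answer before rendering. A minimal sketch, assuming the reasoning model wraps its chain of thought in <think>...</think> tags (the function name is illustrative):

```javascript
// Illustrative extraction of the model's step-by-step reasoning from the raw response text.
function splitReasoning(rawContent) {
  const match = rawContent.match(/<think>([\s\S]*?)<\/think>/);
  return {
    thinking: match ? match[1].trim() : "",                              // step-by-step thought process
    answer: rawContent.replace(/<think>[\s\S]*?<\/think>/, "").trim(),   // final structured explanation
  };
}
```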

  4. Location-Based Medical Facility Finder (sonar-pro model)

We integrated geolocation services with the Sonar API to provide practical next steps:

  • Real-time Location Detection: Using browser geolocation combined with OpenStreetMap geocoding
  • Contextual Facility Search: Sonar API searches for nearby hospitals, urgent care centers, and clinics based on the user's location
  • Structured Contact Information: Returns formatted data with hospital names, phone numbers, and Google Maps links (a sketch of the flow follows this list)
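A rough sketch of this flow on the client side (the internal route name and response shape are illustrative; the Sonar facility search itself runs server-side, as in the earlier snippets):

```javascript
// Illustrative client-side flow: browser geolocation -> OpenStreetMap reverse geocoding
// -> our own server route (name is an example), which queries Sonar for nearby facilities.
navigator.geolocation.getCurrentPosition(async ({ coords }) => {
  const { latitude, longitude } = coords;

  // Reverse-geocode the coordinates into a human-readable place name
  const geoRes = await fetch(
    `https://nominatim.openstreetmap.org/reverse?format=json&lat=${latitude}&lon=${longitude}`
  );
  const place = await geoRes.json();

  // Hand the location to a server route that performs the Sonar facility search
  const facilityRes = await fetch("/api/facilities", { // hypothetical route name
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ location: place.display_name }),
  });

  const facilities = await facilityRes.json();
  console.log(facilities); // e.g. [{ name, phone, mapsLink }, ...]
});
```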

Challenges we ran into

Ensuring medical accuracy was a significant challenge, requiring extensive prompt design and testing of the AI models to confirm they provide clinically relevant diagnoses. We also focused on making the system transparent by showing how diagnoses are reached, which involved building features to explain AI reasoning clearly. Handling sensitive medical data raised privacy concerns, and we worked to ensure robust data protection. Managing response formats from multiple APIs and dealing with edge cases made API integration complex. Incorporating medical image processing to enhance diagnostic capabilities also posed technical challenges, as did ensuring robust error handling when APIs fail or return unexpected data.

Accomplishments that we're proud of

We are proud of creating a tool that mimics the diagnostic reasoning process showcased in House M.D. and implementing an intuitive UI that makes complex medical information accessible. We developed a system that can suggest additional symptoms to investigate, just like real doctors refining their diagnoses. The integration of reasoning models enables the system to explain diagnostic decisions transparently. We successfully connected medical knowledge with modern AI capabilities to build a tool that could genuinely assist medical professionals in real-world scenarios.

What we learned

Throughout the development of Second Opinion, we gained a deep appreciation for the complexity and nuance of medical diagnosis. We learned how to effectively prompt AI models for medical applications and the importance of transparency in medical AI tools. We developed techniques for parsing and structuring AI outputs for clinical use, and learned how to handle geolocation data for finding nearby medical resources. Additionally, we absorbed best practices for UI/UX design specifically tailored to healthcare applications.

What's next for Second Opinion

Looking ahead, we plan to partner with medical institutions to clinically validate and improve diagnostic accuracy. We aim to expand our medical knowledge base by incorporating more rare diseases and specialized conditions. Another goal is to integrate the tool with electronic health record (EHR) systems for seamless connectivity. We are also working on a dedicated mobile application to support on-the-go diagnostics. To broaden accessibility, we will introduce multi-language support. Enhancements in advanced imaging analysis are also planned, enabling direct analysis of medical images. Finally, we intend to add treatment suggestions that align with the generated diagnoses.

Built With

  • nextjs