Inspiration
Doctors often use a clinical decision support system (CDSS) to help guide their medical decisions, with AI assisting the informed choice. However, these systems usually hand down a single verdict with no accompanying reasoning.
ML-based CDSSs are at the forefront of a paradigm shift in healthcare today. With their ability to sift through masses of unstructured text and imaging data, there is increasing pressure to adopt these tools or use them to augment traditional healthcare. Despite these advances, the literature raises a number of ethical concerns about their widespread adoption, most notably the lack of explainability behind model decisions. Here, we present a simple framework for interpretable medical AI diagnostics, aimed at better understanding the decisions of any black-box classifier operating on medical scans.
What it does
Our approach is a two-stage process with a feedback loop in which doctors benefit from model insights and notes made by the appropriate lab technician. To demonstrate the approach, we consider one example: cancer detection in histological scans. In Phase 1, we take a model trained to classify seven different types of cancer and use its penultimate feature vector, rather than the prediction itself, as the model's output. We then apply a supervised dimensionality reduction algorithm to project each image's feature vector into a 3D space, and let the user visualize and annotate clusters of images based on attributes of interest. In Phase 2, when a doctor relies on our system to support a diagnosis, we retrieve the nearest neighbors of the new scan, along with their ground-truth labels and technician-provided explanations. Essentially, Phase 1 opens up the "black box" and explains why it made the decisions it did, while Phase 2 uses those explanations to give the doctor a reasoned answer for why the AI reached a particular decision.
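To make Phase 1 concrete, here is a minimal sketch of the feature-extraction and projection step. It assumes a PyTorch classifier whose final layer is `model.fc` (as in torchvision's ResNets) and a DataLoader of labeled histology images; the model, dummy data, and variable names are illustrative, not our actual training setup.

```python
import numpy as np
import torch
import torchvision
import umap  # umap-learn

# Illustrative stand-ins for the trained 7-class cancer classifier and the
# labeled histology scans used in the real project.
model = torchvision.models.resnet18(num_classes=7)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 3, 224, 224),
                                   torch.randint(0, 7, (64,))),
    batch_size=16)

# Temporarily replace the classification head with an identity so the model
# outputs its penultimate feature vector instead of class logits.
head, model.fc = model.fc, torch.nn.Identity()
model.eval()

features, labels = [], []
with torch.no_grad():
    for images, ys in loader:
        features.append(model(images).numpy())
        labels.append(ys.numpy())
features, labels = np.concatenate(features), np.concatenate(labels)
model.fc = head  # restore the original classifier

# Supervised UMAP: the ground-truth labels guide the 3D embedding that the
# technician later explores and annotates.
embedding = umap.UMAP(n_components=3).fit_transform(features, y=labels)
```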
How we built it
We built two separate systems: one for radiologists to annotate the model's predictions, and another for the doctor to use those annotations to make a more informed decision.
Phase 1 primarily uses Python (dash, umap, pytorch, sklearn, flask).
Phase 2 uses React as the front end and Python as the backend. We chose Python for the backend because it let us process our data more easily while still serving information to the front end in a simple way.
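As a rough illustration of how a Phase 2 backend can serve this information to the React front end, here is a minimal Flask sketch. It assumes the Phase 1 embeddings, labels, and technician notes have already been saved to disk; the endpoint name and file layout are hypothetical rather than taken from our repo.

```python
import numpy as np
from flask import Flask, jsonify, request
from sklearn.neighbors import NearestNeighbors

app = Flask(__name__)

# Outputs of Phase 1 (paths are illustrative): the 3D embedding of every
# annotated scan, its ground-truth cancer type, and the technician's note.
embeddings = np.load("embeddings.npy")
labels = np.load("labels.npy")
notes = open("annotations.txt").read().splitlines()

index = NearestNeighbors(n_neighbors=5).fit(embeddings)

@app.route("/neighbors", methods=["POST"])
def neighbors():
    """Return the nearest annotated scans for a new scan's embedded features,
    so the doctor sees similar cases, their labels, and the explanations."""
    query = np.asarray(request.json["embedding"], dtype=float).reshape(1, -1)
    _, idx = index.kneighbors(query)
    return jsonify([{"label": int(labels[i]), "note": notes[i]} for i in idx[0]])

if __name__ == "__main__":
    app.run(port=5000)
```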
Feel free to take a look at our GitHub to better understand our framework!
Challenges we ran into
Our primary challenges lay in developing an easy-to-use AI tool and abstracting the details of machine learning to a higher level, so that someone with minimal experience could understand how to use it. For example, we chose an intuitive interactive graph to let the radiologist explore their data in 3D space. We also struggled to find the best way to present this information to the doctor, and settled on a simple UI.
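For the interactive view, a 3D scatter of the embedding is enough for a radiologist to rotate, zoom, and inspect clusters without touching any ML code. The snippet below is an illustrative Dash/Plotly sketch using stand-in data, not our exact app layout.

```python
import numpy as np
import plotly.express as px
from dash import Dash, dcc

# Stand-in data: in practice these come from the Phase 1 embedding step.
embedding = np.random.rand(200, 3)
labels = np.random.randint(0, 7, 200).astype(str)

# Color each point by its cancer type so clusters are easy to spot and annotate.
fig = px.scatter_3d(x=embedding[:, 0], y=embedding[:, 1], z=embedding[:, 2],
                    color=labels, labels={"color": "cancer type"})

app = Dash(__name__)
app.layout = dcc.Graph(figure=fig)

if __name__ == "__main__":
    app.run(debug=True)
```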
Accomplishments that we're proud of
We are most proud of the uniqueness of our idea. Through research, we found that doctors place real value on explainable AI models. In talking to medical professionals, we realized that giving them a safer, better-justified reason to adopt AI would make its predictions easier to trust.
We are also proud of having a fully implemented idea: within the timeframe, we developed two independent but connected subsystems that interface in an impactful way.
What we learned
We learned that things often go wrong at the last minute. Accounting for a grace period is very helpful in designing a better hackathon project.
