Inspiration

While three of our teammates were taking a class called HCI for Healthcare, we had the chance to visit several healthcare facilities around the San Diego area. The visit to the geriatrics department left a deep impression on us: the physicians told us that many older adults take a large number of drugs and are often concerned about medical complications and potential interactions. However, doctors often lack the time to explain every detail to patients, and patients also have trouble managing drugs prescribed by different doctors. To ease the concerns of older adults and other patients who take multiple drugs at the same time, and to reduce the workload for physicians, we decided to design an easily accessible mobile app that helps them check for potential drug or food interactions.

What it does

EasyMed can check drug-to-drug interactions (DDI) and drug-to-food interactions (DFI), teach you facts about common interaction pairs, and help you maintain a consistent drug list. To check drug interactions, it scans the barcode of a medicine and shows a drug detail page with the drug's description, side effects, and its interactions with the medications already in the user's list. Check Food Interactions lets users input food ingredients by voice and then tells them whether those ingredients interact with any drugs in their list. Explore regularly updates users with relevant articles and videos. The drug list feature lets users maintain a coherent list that they can show their doctor at the time of a visit.
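
Below is a minimal Python sketch of the barcode-based interaction check described above. Every name here (the UPC table, the interaction table, `check_scanned_drug`) is a hypothetical placeholder for illustration, not EasyMed's actual code or data.

```python
# Toy UPC -> drug-name lookup standing in for the UPC lookup database.
UPC_TO_DRUG = {
    "0300450449108": "warfarin",
    "0363824056739": "ibuprofen",
}

# Toy pairwise interaction table standing in for the ML model's predictions.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}):
        "Ibuprofen may increase the anticoagulant activities of warfarin.",
}

def check_scanned_drug(upc, user_drug_list):
    """Return plain-language warnings for a scanned drug against the user's list."""
    drug = UPC_TO_DRUG.get(upc)
    if drug is None:
        return ["Unknown barcode - please enter the drug name manually."]
    warnings = [
        KNOWN_INTERACTIONS[frozenset({drug, other})]
        for other in user_drug_list
        if frozenset({drug, other}) in KNOWN_INTERACTIONS
    ]
    return warnings or ["No known interactions with your current drug list."]

# Example: scanning ibuprofen while warfarin is already in the list.
print(check_scanned_drug("0363824056739", ["warfarin", "metformin"]))
```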

How we built it

We first surveyed the literature to learn how to predict drug-to-drug and drug-to-food interactions. Comparing the comprehensiveness of the underlying databases and the accuracy of the predictions, we selected the work of JY Ryu et al. (2018), whose model is reported to achieve 92.4% mean accuracy. After deciding on the ML model, we used Azure Machine Learning to train our models, which saved us a lot of time. To make our product more accessible, we built an Android mobile application and used Azure Container Apps and Azure SQL Database as our backend, which stores users' data and mediates between the ML model and the Android application's requests.

Although anyone can be affected by drug-to-drug and drug-to-food interactions, older adults are our primary users, so to make the experience friendlier for them we integrated Azure Cognitive Services (text-to-speech and speech-to-text) into the application, helping them enter food names and understand the results easily. Since most people do not come from a medical background, we cannot simply show them raw results from a scientific paper, so we also integrated a medical dictionary API to explain complicated medical terms, and we use a barcode scanning API together with the Azure speech-to-text service so users can enter drugs and foods easily.
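
For a sense of the model architecture, here is a minimal Keras sketch of a DeepDDI-style classifier in the spirit of Ryu et al. (2018), which maps a feature vector for a drug pair to one of a fixed set of interaction types. The input dimension, layer sizes, and depth are illustrative assumptions, not our exact training configuration.

```python
import numpy as np
from tensorflow import keras

NUM_DDI_TYPES = 86   # DeepDDI predicts 86 interaction sentence types
FEATURE_DIM = 100    # assumed size of the drug-pair feature vector (illustrative)

# A small feed-forward classifier; the paper's DNN is deeper.
model = keras.Sequential([
    keras.Input(shape=(FEATURE_DIM,)),
    keras.layers.Dense(2048, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(2048, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(NUM_DDI_TYPES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for real drug-pair features and labels.
X = np.random.rand(32, FEATURE_DIM).astype("float32")
y = np.random.randint(0, NUM_DDI_TYPES, size=32)
model.fit(X, y, epochs=1, verbose=0)
```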

Challenges we ran into

The first challenge we met was researching drug-to-drug/drug-to-food interaction literature and prior art. We had to read a great deal of DDI/DFI literature and compare implementation methods and accuracy to find the best algorithm for our ML model. The second challenge was that we could not use the ML model from the literature directly: we needed to optimize its efficiency and reorganize its output so that ordinary users could understand it easily.
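
As an illustration of that reorganization step, here is a sketch of mapping a raw predicted interaction type to a plain-language sentence. The template table and helper name are hypothetical placeholders, though DeepDDI itself does associate each interaction type with a human-readable sentence.

```python
# Hypothetical templates keyed by the model's predicted interaction type.
DDI_TEMPLATES = {
    0: "{a} may increase the anticoagulant activities of {b}.",
    1: "{a} may decrease the metabolism of {b}.",
}

def humanize_prediction(ddi_type, drug_a, drug_b):
    """Turn a raw class index into a sentence a non-expert can read."""
    template = DDI_TEMPLATES.get(ddi_type)
    if template is None:
        return f"A possible interaction between {drug_a} and {drug_b} was detected."
    return template.format(a=drug_a.capitalize(), b=drug_b)

print(humanize_prediction(0, "ibuprofen", "warfarin"))
```
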
Also, since many of our users are older adults, another challenge was building an AI product that is friendly to them. We interviewed older adults to learn their needs and pain points and found that heavy text and medical terms are difficult for them, so we use the Azure text-to-speech service and a medical dictionary API to help them use our application.
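
For example, reading a warning aloud with the Azure Speech SDK takes only a few lines; this is a minimal sketch (the subscription key and region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders - replace with a real Azure Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="westus")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

warning = "Ibuprofen may increase the anticoagulant activities of warfarin."
result = synthesizer.speak_text_async(warning).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Warning was read aloud to the user.")
```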

Accomplishments that we're proud of

We interviewed six potential users: two older adults, one geriatric physician, one registered nurse, and two younger adults. We showed them our prototypes, and they really liked our overall idea and design; all of them thought an app that checks interactions for them would be very useful. Mary, an older adult who takes more than six drugs concurrently, said the app helps her learn about medical complications and eases her concerns. Based on the user-testing feedback and advice from our instructor, CS professor Nadir Weibel at UCSD, we iterated on our app four times in a short period. We are glad that we were able to go through the whole process, from research and design to development, in less than a month. We could not have achieved this without a hard-working team and the many people who were willing to support us along the way.

What we learned

This competition gave us great insight into the difficult yet interesting nature of incorporating AI into products. We saw how powerful and useful AI platforms like Azure can be, allowing us to develop a solid product in a short time span, although integrating all the different services was difficult for us. We also realized that AI systems come with many limitations and complications, since there are many ethical considerations to take into account. We kept the Microsoft Responsible AI principles in mind when crafting our solution to ensure that our product is fair and understandable to all. Thinking about the practicality and ethics of using AI was a great learning experience for us.

Although all of us had some prior experience with product development, it was still extremely difficult to come up with a good idea and implement it in less than a month. We initially came up with many AI-related solutions but settled on this one after interviewing several older adults. We learned that team collaboration, a clear roadmap, and pre-defined goals are extremely important. Although we set project goals with milestones for each stage, we still struggled with unforeseen risks that could have prevented us from submitting on time. Since we were all taking finals around the submission deadline, we had to balance our coursework and the project to achieve the best results. But we learned to work under constraints and time limits, which will benefit us in the long run. In conclusion, the project was a great learning experience for all of us, and we really enjoyed the process of tackling each challenge.

What's next for EasyMed

There are a few things we want to do for EasyMed, including further improving the model, iterating on the features, and conducting more user testing. Although the model we use is already close to the state of the art, we want to keep improving the reliability and accuracy of the algorithm through further iteration so that we give users more trustworthy results. From user research we learned that dosage information is important for analyzing drug interactions, so we want to train our model with this parameter as well. We also learned that doctors and patients rely heavily on medical record systems like MyChart, so we would like to integrate our app with them to improve the user experience. We also want to add features we could not implement due to time constraints, such as medicine reminders and a search function. In conclusion, the submission of this project does not mark the end of our exploration; we would like to carry it further and possibly launch it on the Google Play Store and develop an iOS version in the future so it can benefit more people.

Built With

  • android
  • azure-machine-learning
  • azure-cognitive-services
  • azure-container-apps
  • azure-sql-database
  • barcode-scanning-api
  • foodb
  • java
  • keras
  • numpy
  • python
  • scikit-learn
  • scipy
  • tensorflow
  • upc-lookup-database
  • webster-medical-dictionary