Inspiration
EyeCare is an iOS app that may revolutionize the healthcare industry with the aid of computer vision, geometric computing, and machine learning. We started from a common eye disease called corneal ulcer, which affects 2-10% of the population and requires prompt treatment. The major challenge lies in monitoring the treatment: the process is still manual, and therefore time-consuming and prone to human error. We wanted to develop an app that gives patients and doctors an easier way to track recovery progress effectively.
What it does
EyeCare is a mobile app that aims to help people suffering from corneal ulcers and epithelial defects. In addition to recording and tracking a patient's recovery progress, EyeCare provides reliable automatic detection on any picture taken by an iPhone with a 15-25x magnifying lens. A patient can ask someone to take a picture with a smartphone fitted with a 15x magnifying lens; EyeCare then detects the ulcer region on the patient's cornea and computes the defect area precisely. Patients are shown their original picture side by side with a new picture in which the segmented defect area is highlighted. All analyzed photos are stored on our server, so each user can access their records anywhere through the mobile app. EyeCare also generates a plot of the defect area over time to show the healing process. In addition, physicians can use our web app to access and analyze their patients' data, allowing them to follow and diagnose this disease more efficiently and remotely. Compared to traditional examination, our approach is significantly less expensive and easier to perform.
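As a rough illustration of the area tracking, here is a minimal sketch in Wolfram Language (the mask file name and the per-visit numbers are placeholders, not real patient data or the app's actual code):

```mathematica
(* Hedged sketch: compute the defect area from a binary segmentation mask and
   plot it over successive visits. File name and visit values are placeholders. *)
mask = Binarize[Import["segmented_ulcer.png"]];
defectPixels   = Total[ImageData[mask], 2];                       (* foreground pixel count *)
defectFraction = N[defectPixels / (Times @@ ImageDimensions[mask])];

(* Healing curve: defect-area fraction per day since treatment started *)
visits = {{1, 0.21}, {3, 0.15}, {5, 0.09}, {8, 0.03}};
ListLinePlot[visits,
  AxesLabel -> {"Day", "Defect area fraction"},
  PlotMarkers -> Automatic]
```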
Segmentation:

How we built it
Thanks to all the awesome technologies and libraries available, we were able to build this rather complex app in less than 48 hours. Here is what we used:
- The app uses a client-server architecture.
- iOS frameworks and Swift for the mobile front end used by patients.
- A web front end built with HTML, CSS, and JavaScript.
- A MySQL database to store the data and share health records between patients and physicians.
- A PHP backend to serve and process the data.
- The core of the app: an optimized and robust algorithm, implemented in Mathematica, that detects the ulcer region automatically (a minimal sketch of its building blocks follows this list). It draws on:
  - Algorithms from the Geometric Computing for Biomedicine class
    - Mathematical morphology
      - Erosion
      - Dilation
      - Opening
      - Closing
      - Extracting the largest connected component, using either 8-connectivity or 4-connectivity
  - Algorithms from the Computer Vision class
    - Image filtering
    - Edge detection
    - Convolutional kernels
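As a rough idea of what these building blocks look like in Wolfram Language, here is a minimal sketch (the file name, threshold, and structuring-element radius are illustrative placeholders, not the coursework or app code):

```mathematica
(* Hedged sketch of the primitives listed above, applied to a binarized eye photo. *)
img = Import["eye_photo.jpg"];                     (* placeholder file *)
bin = Binarize[ColorConvert[img, "Grayscale"], 0.55];

eroded  = Erosion[bin, 2];    (* shrink foreground regions *)
dilated = Dilation[bin, 2];   (* grow foreground regions *)
opened  = Opening[bin, 2];    (* erosion then dilation: removes small specks *)
closed  = Closing[bin, 2];    (* dilation then erosion: fills small holes *)

(* Largest connected component; CornerNeighbors -> True means 8-connectivity,
   False means 4-connectivity. *)
labels    = MorphologicalComponents[closed, CornerNeighbors -> True];
largestId = First[First[MaximalBy[ComponentMeasurements[labels, "Count"], Last]]];
ulcerMask = Image[Map[Boole[# == largestId] &, labels, {2}], "Bit"];

(* Computer-vision primitives: filtering, edge detection, convolution *)
edges    = EdgeDetect[ColorConvert[img, "Grayscale"]];
smoothed = ImageConvolve[img, GaussianMatrix[3]];

(* Overlay the detected region on the original photo *)
HighlightImage[img, ulcerMask]
```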
Pictures taken by patients are sent to the server and analyzed by our computer vision algorithms. The detection results are then sent back to the mobile front end to help patients self-diagnose.
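One plausible way to wire this up on the server side is to package the algorithm as a wolframscript entry point that the backend invokes. This is a hedged sketch only; the detectUlcer helper, the threshold, and the file paths are all illustrative:

```mathematica
#!/usr/bin/env wolframscript
(* Hedged sketch of a server-side entry point: read the uploaded image path
   from the command line, run a placeholder detection routine, and write out a
   highlighted image plus a JSON result for the backend to relay. *)

detectUlcer[img_Image] := Module[{bin, mask},
  bin  = Binarize[ColorConvert[img, "Grayscale"], 0.55];
  mask = SelectComponents[Closing[Opening[bin, 2], 2], "Count", 1];  (* largest component *)
  <|"mask" -> mask, "areaPixels" -> Total[ImageData[mask], 2]|>
];

inputPath = $ScriptCommandLine[[2]];
img       = Import[inputPath];
result    = detectUlcer[img];

Export[inputPath <> ".highlight.png", HighlightImage[img, result["mask"]]];
Export[inputPath <> ".json", <|"areaPixels" -> result["areaPixels"]|>, "JSON"];
```

The PHP backend could then call such a script (for example via shell_exec) and forward the JSON result to the iOS client.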

Challenges we ran into
Our goal is to help patients diagnose themselves in a few simple steps and receive detection results almost instantly. To perform detection in near real time, we spent a lot of time optimizing our algorithm. Another challenge was setting up Mathematica on an Amazon Ubuntu instance and hooking it up to our PHP backend. The setup process turned out to be non-trivial: we spent quite some time installing the required packages and troubleshooting problems we had never seen before.
Accomplishments that we're proud of
- Super fast: detection and computation of the ulcer region takes only about 2.3 seconds per image (1-1.5 MB).
- My original implementation of the core algorithm was approximately 1000-1500 lines (per the professor's requirements we could not use many built-in functions, so everything had to be written from scratch, which was a useful exercise). After optimizing it and making full use of Mathematica's built-in functions, it is now only about 100 lines (see the sketch after this list).
- I implemented iris detection within 2 hours. I eventually abandoned it because the algorithm is too computationally heavy, taking about 5 minutes per image. I used to need at least 10 hours to understand an algorithm from a paper and prototype it; the deadline pushed me to move much faster this time. I plan to re-implement it in C++.
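To give a flavor of that consolidation, here is a simplified, hypothetical comparison (not the actual coursework code): a from-scratch binary erosion next to the one-line built-in.

```mathematica
(* Hedged illustration: naive binary erosion with a 3x3 box structuring element,
   written from scratch, versus Mathematica's built-in Erosion. *)
naiveErode[m_?MatrixQ] := Module[{p = ArrayPad[m, 1, 0]},
  Table[Boole[Min[p[[i ;; i + 2, j ;; j + 2]]] == 1],
    {i, Length[m]}, {j, Length[First[m]]}]];

m = {{0, 1, 1, 1}, {0, 1, 1, 1}, {0, 1, 1, 1}, {0, 0, 0, 0}};

(* Padding -> 0 matches the zero padding used above; this should return True *)
naiveErode[m] === ImageData[Erosion[Image[m, "Bit"], 1, Padding -> 0], "Bit"]
```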
What we learned
- How to set up Mathematica and deploy our backend on an AWS Ubuntu server.
- Mathematica's built-in functions are extremely fast; they presumably call optimized compiled (C/C++) code under the hood. I would like to learn how to do that as well.
Last but definitely not least, Xcode is not fun to work with :)
What's next for EyeCare
1. Add more eye-anatomy applications (the image below shows segmentation of retinal vessels, which provides important features for screening for diabetes, eye diseases, and cardiovascular diseases). This is my final project for the Computer Vision and Geometric Computing for Biomedicine classes, and I will also try to use some of these features to classify glaucoma eyes versus normal eyes.

2. Extend the application to pet eye care.

Built With
- amazon-web-services
- apache
- bash
- biomedicine
- biometrics
- bootstrap
- computer-vision
- geometric-computing
- html5
- image-segmentation
- ios
- javascript
- machine-learning
- mathematica
- mathematical-morphology
- mysql
- php
- swift
- wolfram-technologies


