Inspiration

Medical examinations are expensive for many people, yet sometimes they are unavoidable, especially when a life-threatening illness may be present. We wanted to take the guesswork out of diagnosing one of the most pervasive and perhaps most confusing types of cancer: melanoma. Many people have nevi, or moles, that may be interpreted as cancerous but are actually benign, while many others have malignant tumors that look just like benign nevi. Using AI algorithms, specifically transfer learning on Google's Inception model, we achieved 90% test accuracy in diagnosing melanoma.

What it does

The application receives an image from the user and passes it through the neural network we trained to determine whether the mole is cancerous or non-cancerous. The network takes the image and produces a probability for each of the two classes, indicating how likely it considers the mole to be cancerous or non-cancerous.
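Conceptually, the final layer of such a network is a two-class softmax that turns raw scores into the two probabilities. A minimal pure-Python sketch (the logit values and the class ordering are illustrative assumptions, not output from our actual model):

```python
import math

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from the final layer for one image;
# assumed ordering: index 0 = non-cancerous, index 1 = cancerous.
logits = [1.2, -0.4]
p_benign, p_malignant = softmax(logits)
print(f"non-cancerous: {p_benign:.3f}, cancerous: {p_malignant:.3f}")
```

Whichever class receives the higher probability is reported to the user.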

How we built it

Our hack uses the Google Inception network and the TensorFlow API to resize and analyze our datasets and to train and calibrate the model. We compiled images of melanomas and nevi from the Internet and organized them into training datasets for the API to consume. Because of the size of the dataset and the pixel count of the images, we employed transfer learning on the pre-trained Inception network to adapt it to our needs.
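In current tf.keras terms, the transfer-learning idea looks roughly like the sketch below: load Inception without its final classification layer, freeze it, and attach a small two-class head. This is a sketch of the technique, not our original retraining script, and `weights=None` is used here only to avoid a download; in practice the pre-trained `"imagenet"` weights are what make transfer learning work.

```python
import tensorflow as tf

# Load Inception v3 without its top classification layer.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained feature extractor

# New classification head for our two classes: nevus vs. melanoma.
inputs = tf.keras.Input(shape=(299, 299, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Only the small dense head is trained on our melanoma/nevus images; the frozen Inception layers act as a fixed feature extractor.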

Challenges we ran into

Initially, we tried to train a neural network from scratch using gradient descent with random weight initialization, but even small datasets proved too demanding: the memory requirements froze one member's computer and forced a restart. We therefore switched to transfer learning, and training improved significantly even with a larger dataset.
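For reference, the gradient-descent update we were attempting at scale is just repeated application of w ← w − η·∇L(w); a toy single-parameter sketch (the loss function and numbers are illustrative, not from our training run):

```python
# Toy gradient descent on L(w) = (w - 3)^2, whose minimum is at w = 3.
# At full-network scale, storing a gradient for every weight is what
# exhausted our machines' memory when training from scratch.
def grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

w = 0.0        # random initialization would go here
lr = 0.1       # learning rate
for _ in range(100):
    w -= lr * grad(w)
print(round(w, 4))  # converges toward 3.0
```

Transfer learning sidesteps most of this cost because gradients only need to be computed and stored for the small new output layer.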

Because TensorFlow was not initially available on Windows, many online tutorials had users first install Docker, a VM-like software container, before using TensorFlow. Our team decided to install Docker for the sake of learning the API, and while we eventually got it running on our laptops, we could not install it on a workstation one of our members had access to. The problem was an operating-system incompatibility: the workstation runs Windows Server 2016. We circumvented the issue by studying the code in the tutorials and making the TensorFlow calls directly.

Our initial idea involved packaging our program as an app so that it would be accessible to a wide range of people. However, TensorFlow proved difficult to incorporate into Android Studio. We found a repository for TensorFlow on Android, but the build required a Linux OS, which none of us had access to, so we used the workstation to run an Ubuntu virtual machine. While we were able to build the Android Studio project with TensorFlow, we were ultimately unable to install the APK on our phones, because its software requirements exceed those of the devices in our possession. Once we have access to more recent devices, we will be able to install the program on mobile.

Accomplishments that we're proud of

We accomplished our primary objective: building a neural network capable of distinguishing non-cancerous from cancerous tumors. Indeed, the adage "the devil is in the details" could not have been more appropriate for our team. From insufficient RAM to incompatible software suites, we faced many issues putting our neural network to use in a practical setting. Despite these issues, we are confident that, given the flexibility of the model we have built, we will be able to bring our solution to many devices and applications in the future.

What we learned

How to use the TensorFlow API not only to train a neural network from scratch but also to adapt a pre-built network to our needs through transfer learning.

How to use virtual machines to host a Linux OS, and how to install programs on Linux.

How Convolutional Neural Networks and transfer learning work.

What's next for SkinScan

Many opportunities remain to improve the accessibility of our program. Once the model works on Android devices, it is only a matter of time before it can be offered on iOS and even as a web app. Our goal is to help people make better decisions about their health, and ubiquitous access to our solution will serve that objective.
