Inspiration

Recently, billions of desert locusts invaded East Africa in unprecedented numbers. It was so bad that Somalia declared a national emergency.

Food security remains one of the top challenges in the world. According to world hunger statistics, one out of nine people in the world does not have enough food to live a healthy, active life.

Pests and plant diseases are among the reasons why 11% of the world's population is undernourished.

Our idea was inspired by how unprecedented events like the locust invasion can further destabilize food security around the world.

Our goal is to help farmers identify and track plant diseases, and efficiently treat infected plants, with the help of machine learning.

What it does

It allows farmers to identify plant diseases that pose significant hazards to the successful cultivation of the world's most critical edible crops.

How we built it

  • Model Training

  • Model Deployment:

    • Hosting: For the purposes of the hackathon, we realized it would be expensive to host real-time inference on AWS SageMaker. Hence, we decided to host the model in Heroku Containers behind a Python Flask API; however, this introduces latency challenges. Once we see traction with the solution, deployment on Amazon SageMaker would be the next step.
    • Continuous Deployment: In addition to hosting the model on Heroku, we implemented a continuous deployment pipeline with Amazon CodeBuild, with the goal of embracing Continuous Delivery for Machine Learning.
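To make the hosting setup concrete, here is a minimal sketch of the kind of Flask inference API described above. The endpoint name, payload shape, and labels are assumptions for illustration, and the model call is stubbed out; the real service would run the trained classifier instead.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical label set; the real model covers the PlantVillage classes.
LABELS = ["healthy", "early_blight", "late_blight"]

def fake_predict(image_bytes):
    """Stand-in for the trained model: returns a (label, confidence) pair."""
    # A real implementation would decode the image and run inference here.
    return LABELS[len(image_bytes) % len(LABELS)], 0.9

@app.route("/predict", methods=["POST"])
def predict():
    # The mobile app would upload the captured photo as a multipart file field.
    if "image" not in request.files:
        return jsonify(error="missing 'image' file field"), 400
    label, confidence = fake_predict(request.files["image"].read())
    return jsonify(label=label, confidence=confidence)
```

Keeping the API a single stateless endpoint is what makes it easy to move later from a Heroku Container to a SageMaker endpoint without changing the mobile client.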

Challenges we ran into

  • We only had access to a dataset of leaf images (the PlantVillage dataset). This limits the performance of the model, as it cannot classify fruits. We hope to collect data on fruits and improve the model's performance.
  • Training on GPUs is expensive due to infrastructure costs; even cloud GPUs do not come cheap.
  • Deployment can also be expensive, so we hosted the model for free on Heroku. If the project starts generating revenue at some point, we will scale the deployment on AWS SageMaker.
  • React Native does not support capturing frames from a live camera feed, so we used the Accelerometer as a workaround. The Accelerometer provides access to the device's accelerometer sensor(s) and associated listeners that respond to changes in acceleration in 3D space, meaning any movement or vibration. Once we detect a change, we automatically take a picture for analysis.
  • Hosting the trained model in Heroku Containers brought significant latency; there is usually a considerable delay in response time when trying to make an inference.
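The accelerometer-trigger idea above can be sketched language-agnostically; the sketch below is in Python, though the app itself uses React Native's Accelerometer API. The threshold value and function names are assumptions for illustration, not the app's actual tuning.

```python
import math

# Assumed tuning value: minimum change in acceleration magnitude (in g)
# that counts as "movement" and should trigger a photo capture.
THRESHOLD = 0.15

def magnitude(x, y, z):
    """Length of the 3D acceleration vector reported by the sensor."""
    return math.sqrt(x * x + y * y + z * z)

def should_capture(prev_reading, reading, threshold=THRESHOLD):
    """Return True when the device moved enough since the last reading."""
    return abs(magnitude(*reading) - magnitude(*prev_reading)) > threshold
```

Comparing vector magnitudes (rather than individual axes) makes the trigger insensitive to device orientation, which matters when a farmer is pointing the phone at a leaf from an arbitrary angle.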

What we learned

We learned about ONNX.js, an npm package that allows you to score pre-trained models directly in the browser.

We thought we could solve the latency problem by leveraging ONNX.js to compute inference on the device.

After all, the mobile app was built with React Native.

Eventually, we couldn't get the ONNX model to work with React Native, so we created a web app instead. You can see a demo of the web app here.

The web app has a downside: the ONNX model, which is over 43 MB, must be downloaded before any inference can be made.

However, we learned from it.

What's next for DiseaseFinder
