
Inspiration/Problem:

Weather applications have historically been archaic, so we challenged ourselves to create a product that is both visually and technically appealing while remaining highly accessible to those who need it most.

What it Does:

The application uses a deep learning model to predict the most significant instances of possible forest fire disasters, working in parallel with a backend API that pulls global meteorological data from the US Global Forecast System and Germany's DWD weather forecast database.

How we built it:

Frontend: After evaluating our tech stack early in development, we set up a Next.js environment, which gave us a superior UX/UI experience and a dynamic web app that is both scalable and well organized.

Design: The website was designed using Canva and Procreate, which we used to draw custom elements such as the fire icons. We deliberately chose a bright, earthy color scheme for a more welcoming and accessible user experience, avoiding the convoluted appearance and excess jargon that usually come with statistical applications.

Machine Learning: Through statistical analysis of the data, we identified correlations between features and discovered an extreme class imbalance in the dataset. Knowing this, we applied a threshold formula so the model could categorize the target into significant and non-significant forest fires. Using the Keras functional API with a feature engineering step (thresholding and logarithmic scaling), the model was still able to predict target values despite the extremely small dataset.
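A minimal sketch of what that feature-engineering step might look like. The cutoff value and function names here are illustrative assumptions, not our exact code; the log scaling handles the heavy skew in burned-area values, and the threshold splits the target into significant vs. non-significant fires.

```python
import numpy as np

# Assumed cutoff separating "significant" fires (illustrative, not our exact value)
SIGNIFICANT_THRESHOLD = 1.0

def engineer_target(burned_area):
    """Log-scale the heavily skewed burned-area values and
    threshold them into significant (1) vs. non-significant (0)."""
    burned_area = np.asarray(burned_area, dtype=float)
    log_area = np.log1p(burned_area)  # log(1 + x) handles the many 0.0 entries
    labels = (burned_area >= SIGNIFICANT_THRESHOLD).astype(int)
    return log_area, labels

log_area, labels = engineer_target([0.0, 0.0, 0.36, 10.73, 88.49])
print(labels)  # → [0 0 0 1 1]
```

The engineered `log_area` feature and binary `labels` would then feed into the Keras functional model.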

Backend: Our backend collects, processes, and caches data from several continuously updated scientific data sources (such as the US Global Forecast System and the German DWD forecast agency) and exposes predictions based on that data to the frontend through an HTTP API.
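A minimal sketch of the fetch-and-cache idea, assuming a simple time-based expiry. The names and the one-hour TTL are illustrative, not our exact implementation; only the caching layer is shown, not the HTTP API on top of it.

```python
import time

CACHE_TTL_SECONDS = 3600   # assumed expiry window, not our exact value
_cache = {}                # source name -> (timestamp, data)

def get_forecast(source, fetch):
    """Return cached data for `source`, refetching when the entry is stale.
    `fetch` is a callable that pulls fresh data (e.g. from GFS or DWD)."""
    now = time.time()
    if source in _cache:
        ts, data = _cache[source]
        if now - ts < CACHE_TTL_SECONDS:
            return data  # cache hit: skip the expensive download
    data = fetch()
    _cache[source] = (now, data)
    return data
```

In the real backend, an HTTP endpoint would serve predictions computed from `get_forecast` results, so each upstream source is only hit once per TTL window.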

Challenges and Accomplishments:

Frontend: Most of our frontend difficulties came from implementing the 3D rotating globe: getting it to rotate, resizing it, and recoloring it to match our color scheme. Our proudest accomplishment is the hero page, where the logo and globe come together cleanly.

Machine Learning: Our goal was to find the most severe cases of possible forest fires given certain climate data. This proved difficult because the dataset provided by UC Irvine was extremely imbalanced, with many target values set to 0.0 m^2. The monthly distribution was also skewed: only 2 data points came from January, while more than 20 came from August.

Backend: The backend relies heavily on data sources that only publish in dense scientific formats like GRIB2 and NetCDF4. We were able to parse and use this data in the end, but finding those sources and the tooling required to read them burned hours of time. Python dependencies were also difficult to manage; ultimately we had to run the backend in Docker because the GRIB2 library could not be installed on Windows.
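A sketch of the kind of Dockerfile that sidesteps the Windows install problem. The package and file names are assumptions (a Debian-based Python image and the eccodes GRIB2 library that Python bindings like cfgrib link against), not our exact configuration.

```dockerfile
# Debian-based image so the GRIB2 C library installs cleanly, unlike on Windows
FROM python:3.11-slim

# System-level GRIB2 decoding library that pip packages such as cfgrib bind to
RUN apt-get update && apt-get install -y --no-install-recommends \
        libeccodes0 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # e.g. cfgrib, netCDF4, xarray

COPY . .
CMD ["python", "server.py"]
```

Installing the native library at the image level keeps the Python environment identical across every team member's machine, regardless of host OS.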

What we learned:

Planning: Our plan was always to take our time planning before jumping into anything too risky. Even after spending our first hour on a vague and uncertain project, we were able to walk away and brainstorm a better one that caught each of our interests equally. Although we got lucky in how early we abandoned that first idea, this hackathon reminded us that planning is the most important step of a successful project, and that a plan accounting for each member's strengths and weaknesses is vital to the overall success of the endeavor.

What’s next:

Machine Learning: In the future we would look for datasets with more data points, so that the sample is more representative of the underlying population. With more representative data, the model could generalize to more locations.
