Inspiration
We were inspired by the recent climate events that have destroyed billions of dollars in property and displaced so many people from their homes. From the hurricanes on the East Coast to the Palisades and Eaton fires that destroyed so much of LA, we wanted to create a “one-stop shop” containing all the information that would help people prepare for and stay informed about potential wildfires. The live interactive map lets users search for any location in the world, providing personalized, useful, and accurate updates.
What it does
- Our web app, Embr, provides real-time information on wildfires, air quality, and news.
- Once the user enters their location (or any location!), the relevant data is gathered through various API calls and generative AI models.
- Live map overlaying the latest locations of spreading fires
- Live air quality index and health advisories
- Live news feed
- Nearby shelters, food banks, and hospitals based on location search, generated by GPT-4o
- A chatbot powered by a local LLM to help guide your experience
- The website is published and working at embrfires.co
How we built it
- The app was built with React, Node.js, Express, Figma, GPT-4o, and the BlenderBot LLM, fetching data from various online APIs.
- Live map and location search: NASA live fire detection data, the OpenWeatherMap API, and the Google Maps API (see the first sketch after this list).
- Nearest hospitals, food banks, and shelters: GPT-4o prompting to find the closest resources based on the location you enter in the live map at the top of the page (sketch below).
- Live news feed: NewsAPI for fetching the most recent news from the past 24 hours (sketch below).
- Chatbot: a small local LLM, 'facebook/blenderbot-400M-distill', downloaded from Hugging Face. Due to device limitations we could not run a larger model; with more resources, such as a more powerful computer or a GPU, we could download and run much larger, more accurate models with stronger conversational skills.
- Data storage: all weather, fire, wind speed, and air quality data are stored in MongoDB (sketch below).
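Here is a minimal sketch of how the location search could feed the live map and air-quality card, assuming Node 18+ (global fetch), the Google Maps Geocoding API, the OpenWeatherMap Air Pollution API, and the NASA FIRMS area endpoint. The environment-variable names, bounding-box size, and helper names are placeholders rather than our exact implementation.

```javascript
// Minimal sketch (Node 18+, global fetch). Env var and helper names are placeholders.
const GOOGLE_KEY = process.env.GOOGLE_MAPS_API_KEY;
const OWM_KEY = process.env.OPENWEATHERMAP_API_KEY;
const FIRMS_KEY = process.env.NASA_FIRMS_MAP_KEY;

// Turn a free-text search ("Los Angeles, CA") into coordinates with the
// Google Maps Geocoding API.
async function geocode(query) {
  const url = `https://maps.googleapis.com/maps/api/geocode/json?address=${encodeURIComponent(query)}&key=${GOOGLE_KEY}`;
  const { results } = await (await fetch(url)).json();
  return results[0].geometry.location; // { lat, lng }
}

// Current air quality (AQI 1-5 plus pollutant concentrations) from the
// OpenWeatherMap Air Pollution API.
async function airQuality(lat, lon) {
  const url = `https://api.openweathermap.org/data/2.5/air_pollution?lat=${lat}&lon=${lon}&appid=${OWM_KEY}`;
  return (await fetch(url)).json();
}

// Recent fire detections around the point from the NASA FIRMS area API
// (CSV of satellite hotspots for the last day, here a rough one-degree box).
async function nearbyFires(lat, lon) {
  const bbox = [lon - 1, lat - 1, lon + 1, lat + 1].join(','); // west,south,east,north
  const url = `https://firms.modaps.eosdis.nasa.gov/api/area/csv/${FIRMS_KEY}/VIIRS_SNPP_NRT/${bbox}/1`;
  return (await fetch(url)).text();
}
```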
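The GPT-4o resource lookup boils down to a single prompt through the official openai Node SDK; the prompt wording and function name below are illustrative, not the exact ones we used.

```javascript
// Sketch of the GPT-4o resource lookup (official openai Node SDK).
// Prompt wording and function name are illustrative.
const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function findResources(location) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content: 'List shelters, food banks, and hospitals near the given location as JSON objects with name, type, and address fields.',
      },
      { role: 'user', content: `Location: ${location}` },
    ],
  });
  return completion.choices[0].message.content; // rendered as cards on the page
}
```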
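The live news feed is essentially one NewsAPI query restricted to the last 24 hours; this sketch assumes the /v2/everything endpoint, and the query terms are illustrative.

```javascript
// Sketch of the live news feed: one NewsAPI query limited to the last 24 hours.
async function recentFireNews(locationName) {
  const from = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
  const params = new URLSearchParams({
    q: `wildfire ${locationName}`,
    from,
    sortBy: 'publishedAt',
    apiKey: process.env.NEWS_API_KEY,
  });
  const { articles } = await (await fetch(`https://newsapi.org/v2/everything?${params}`)).json();
  return articles; // title, source, url, publishedAt, ...
}
```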
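And a rough sketch of how the fetched readings could be cached in MongoDB; the database, collection, and field names here are placeholders.

```javascript
// Sketch of caching the fetched readings in MongoDB (official mongodb driver).
const { MongoClient } = require('mongodb');
const client = new MongoClient(process.env.MONGODB_URI);

async function saveConditions(location, data) {
  await client.connect();
  const conditions = client.db('embr').collection('conditions');
  // Keep one document per searched location, overwritten with the latest readings.
  await conditions.updateOne(
    { location },
    { $set: { ...data, updatedAt: new Date() } },
    { upsert: true }
  );
}
```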
Challenges we ran into
- Working with APIs was difficult, particularly handling keys, the .env file, and environment variables.
- Device capacity for running local LLMs: we tried a larger LLM for the chatbot, but our computer ran out of memory and crashed, so we used a smaller model instead; with more resources we would implement a larger one.
- Working with multiple collaborators was also difficult, as merge conflicts tripped us up a couple of times.
- Integrating the frontend and backend through Express and managing the endpoints (see the sketch after this list).
- We spent a lot of time making sure environment variables stayed consistent with the most up-to-date .env file. We also had to make sure not to push access tokens online and get them revoked :)
- We have a lot of access tokens for the many sources/APIs we fetch information from, including the Hugging Face LLM and OpenAI, so managing them and keeping each feature's dependencies up to date demanded a huge effort.
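For context, this is roughly how the backend ties those pieces together: dotenv loads the keys from .env, and a single Express endpoint fans the searched location out to the data helpers sketched earlier. The route path and helper names are placeholders.

```javascript
// Sketch of the backend wiring: dotenv loads the keys, one Express endpoint
// fans the searched location out to the data helpers sketched above.
require('dotenv').config(); // reads OPENAI_API_KEY, NEWS_API_KEY, etc. from .env

const express = require('express');
const app = express();

app.get('/api/conditions', async (req, res) => {
  try {
    const { lat, lng } = await geocode(req.query.location);
    const [aqi, fires] = await Promise.all([airQuality(lat, lng), nearbyFires(lat, lng)]);
    res.json({ aqi, fires });
  } catch (err) {
    res.status(500).json({ error: 'Failed to fetch live data' });
  }
});

app.listen(process.env.PORT || 3000);
```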
Accomplishments that we're proud of
Although we had a large number of features planned from the beginning, we managed to finish all of them despite difficulties with device and resource limitations. We have a fully functioning website that can connect to the external APIs, OpenAI, and LLM resources from any device.
What we learned
- Running LLMs locally demands a lot of memory; it is best done with GPUs or external servers to avoid crashing your computer :> We learned how to use a model from Hugging Face and Meta, fetching user input and providing instant feedback.
- When environment variables are not updating, restarting and refreshing is the first thing you should try.
- We learned how to manage multiple endpoints with data fetched from various API calls, each with its own credentials and access tokens.
- Database management, gathering all data into MongoDB for easy and fast access.
- We learned app design in Figma.
- We learned how to work effectively on organized branches on GitHub and resolve merge conflicts when needed.
What's next for Embr Fires
- Implementation of the customized action plan feature.
- Creating an overall safety score using a machine learning algorithm.
- Expanding to natural disasters beyond just fires.
- Training and tuning larger LLMs with more resources to serve the chatbot and even more features across the platform.
Built With
- api
- css
- express.js
- figma
- html
- huggingface
- javascript
- llm
- node.js
- openai
- react
