Wildfires are among the toughest environmental disasters to combat. When they strike, the effects are disastrous for the economy, the environment, habitats, infrastructure, and, most notably, water supplies. Forests alone yield 40 percent of the water for the world's 100 largest cities, and the rivers, lakes, streams, and reservoirs they feed are just as important to the plants and animals that live there. When wildfires occur, ash and other contaminants settle in these water sources, making them unsuitable for drinking. Fires also destroy the vegetation that retains water and stabilizes the water cycle, and heavy sediment runoff can destroy millions of dollars' worth of water-filtering infrastructure, as it has in Australia. These are just a few of the ways wildfires wreak havoc on vital water supplies in both developing and developed countries.
What makes wildfires even worse is how spontaneous they can be: they can strike any part of the globe at any time, day or night. To curb their occurrence, we designed a software and hardware prototype called EcoSight that monitors forests 24/7 and alerts firefighters to potential outbreaks, allowing them to take preemptive measures. Unlike traditional forest-monitoring approaches, EcoSight uses artificial intelligence and computer vision to dramatically improve the efficiency and accuracy of detecting potential wildfire sources.
The functionality of EcoSight is fairly straightforward. It "scans" the environment at a specified interval--every 2 hours, every 30 minutes, or even every 10 minutes. During a scan, each thermal camera snaps an image of its surroundings. Each image is processed to locate regions of abnormal temperature, and the geo-coordinates of those regions are calculated by triangulation. Additional factors are extracted, including the number of hot blotches, their surface area, their average temperature intensity, and their overall magnitude. Temperature and humidity sensors, along with an anemometer we designed using a hall-effect sensor, record the temperature, humidity, and wind conditions for that day. All of these factors are run through a mathematical algorithm that computes a risk evaluation (see the GitHub repo for further details).
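As a rough illustration of the per-scan processing (the actual algorithm and weights live in the GitHub repo; the temperature threshold, weights, and function names below are placeholders, not the real implementation), the pipeline looks something like this:

```python
import cv2
import numpy as np

HOT_THRESHOLD_C = 60.0  # placeholder cutoff for "abnormal" temperature

def analyze_frame(thermal_c):
    """Find hot blotches in a thermal frame (values in deg C) and summarize them."""
    mask = (thermal_c > HOT_THRESHOLD_C).astype(np.uint8)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

    blotches = []
    for i in range(1, num_labels):  # label 0 is the background
        area_px = int(stats[i, cv2.CC_STAT_AREA])
        mean_temp = float(thermal_c[labels == i].mean())
        blotches.append({"area_px": area_px, "mean_temp_c": mean_temp})

    total_area = sum(b["area_px"] for b in blotches)
    avg_intensity = (sum(b["mean_temp_c"] * b["area_px"] for b in blotches) / total_area
                     if total_area else 0.0)
    return {"count": len(blotches),
            "total_area_px": total_area,
            "avg_intensity_c": avg_intensity}

def risk_score(blotch_stats, air_temp_c, rel_humidity, wind_mps):
    """Weighted combination of image and weather factors; weights are illustrative only."""
    return (0.4 * blotch_stats["avg_intensity_c"] / 100.0
            + 0.2 * min(blotch_stats["total_area_px"] / 5000.0, 1.0)
            + 0.2 * (air_temp_c / 50.0)
            + 0.1 * (1.0 - rel_humidity / 100.0)
            + 0.1 * min(wind_mps / 20.0, 1.0))
```

The idea is simply that hotter, larger blotches on a hot, dry, windy day push the score higher; the real weighting is tuned in the repo.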
If any thermal camera's risk evaluation exceeds a set threshold, a radio-frequency transmitter sends the thermal image, the weather data, and the risk evaluation to a station housing a WiFi router. From there, the information is forwarded to local firefighters using a cloud communications API called PubNub. Everything is displayed in the client's browser in an interactive view that lets them zoom into and scroll through the landscape for further detail.
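On the station side, pushing an alert to the firefighters' dashboard through PubNub might look roughly like the sketch below. The channel name, keys, and payload fields are placeholders (in practice the thermal image would be encoded or linked rather than sent raw):

```python
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

pnconfig = PNConfiguration()
pnconfig.publish_key = "pub-c-..."      # replace with real keys
pnconfig.subscribe_key = "sub-c-..."
pnconfig.uuid = "ecosight-station-1"
pubnub = PubNub(pnconfig)

alert = {
    "camera_id": "cam-07",
    "risk": 0.82,
    "lat": 37.4219, "lon": -122.0841,   # triangulated coordinates
    "weather": {"temp_c": 34.5, "humidity": 18, "wind_mps": 6.2},
    "image_url": "https://example.com/frames/cam-07-latest.png",
}

# Publish the alert; the browser client subscribes to the same channel.
envelope = pubnub.publish().channel("ecosight-alerts").message(alert).sync()
print("Published, timetoken:", envelope.result.timetoken)
```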
For prototyping and testing, we built the hardware around a Raspberry Pi 4B single-board computer. The thermal camera was a FLIR Lepton 3.5 on a PureThermal 2 module, connected to the Pi over USB 3.0, with the Pi's Linux environment configured accordingly. An Intel Movidius Neural Compute Stick was used to accelerate the vision processing. An RHT module collected temperature and relative humidity, and the hall-effect sensor served as the anemometer. The LED and speaker were used only to test whether the risk evaluations were being calculated properly and were not part of the final hardware design.
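For reference, a minimal sketch of how the hall-effect anemometer could be read on the Pi. The GPIO pin, pulses per revolution, and calibration factor below are assumptions tied to our cup design, not fixed values:

```python
import time
import RPi.GPIO as GPIO

HALL_PIN = 17              # BCM pin the hall-effect sensor output is wired to (assumed)
PULSES_PER_REV = 1         # one magnet pass per full rotation (assumed)
MPS_PER_RPS = 0.30         # rotations/second -> metres/second calibration (assumed)

pulse_count = 0

def on_pulse(channel):
    """Interrupt callback: count each magnet pass detected by the hall sensor."""
    global pulse_count
    pulse_count += 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(HALL_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(HALL_PIN, GPIO.FALLING, callback=on_pulse)

def wind_speed_mps(sample_seconds=5):
    """Count pulses over a short window and convert to an approximate wind speed."""
    global pulse_count
    pulse_count = 0
    time.sleep(sample_seconds)
    rotations_per_sec = pulse_count / PULSES_PER_REV / sample_seconds
    return rotations_per_sec * MPS_PER_RPS

if __name__ == "__main__":
    try:
        print(f"Wind speed: {wind_speed_mps():.1f} m/s")
    finally:
        GPIO.cleanup()
```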

