Inspiration
The happiest times always fly by, and beautiful moments are short-lived. For tech amateurs like seniors and children, however, it can be difficult to take out and operate a gadget in time to capture these moments. That is the purpose of this project: letting tech amateurs capture those moments without their tech barriers holding them back!
What it does
The prototype device fetches brainwave data from sensors on the head and takes a photo based on its interpretation of that data. It also lets the user view the captured images on a web-based GUI, alongside a real-time graph of the brainwave signals.
How we built it
The OpenBCI microcontroller fetches brainwave data from sensors placed on the user's head and sends it to a PC via a Python program. The data is then streamed to React in real time using Socket.IO, and the React app analyzes it and displays it on a dashboard built with Chart.js. The trigger is a transition from alpha waves to beta waves (from wakeful rest to focused attention); this signal is most reliably produced by opening the eyes, which the user does repeatedly to control the camera. Upon detecting this transition, a socket connection between the PC and the Raspberry Pi is established, a Raspberry Pi camera is activated using Python, and the resulting picture is displayed in React.
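The alpha-to-beta detection can be sketched as a simple band-power comparison on a window of samples. This is a minimal illustration, not our exact detector: the function names, the 250 Hz sample rate, and the 1.5× threshold ratio are assumptions for the example, and a real pipeline would filter and window the signal first.

```python
import numpy as np

FS = 250  # assumed sample rate (Hz); OpenBCI boards commonly stream at 250 Hz

def band_power(samples, low, high, fs=FS):
    """Average spectral power of `samples` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def beta_dominant(samples, ratio=1.5):
    """True when beta-band (13-30 Hz) power exceeds alpha-band (8-12 Hz)
    power by `ratio` -- our stand-in for the eyes-open trigger."""
    alpha = band_power(samples, 8, 12)
    beta = band_power(samples, 13, 30)
    return beta > ratio * alpha

# Synthetic one-second window dominated by a 20 Hz (beta) tone:
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
print(beta_dominant(window))
```

When `beta_dominant` fires, the PC-side program would open the socket to the Raspberry Pi and request a capture.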
Challenges we ran into
We had no experience with real-time communication between Python and React. This led to a number of threading issues, because the Python sensor interface and the server need to run in parallel while also sharing data with each other in real time. The microcontroller is also limited in accuracy, which means our threshold for detecting the brainwave transition is higher than we'd like.
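The parallelism pattern we converged on can be sketched with Python's standard library: a sensor thread pushes samples into a thread-safe queue while the server thread drains it and forwards samples to clients. The reader and emitter below are hypothetical stand-ins for the OpenBCI read call and the Socket.IO emit.

```python
import itertools
import queue
import threading

samples = queue.Queue()       # thread-safe buffer shared by both threads
stop = threading.Event()      # cooperative shutdown flag

def sensor_loop(read_sample):
    """Poll the board and push each sample into the shared queue."""
    while not stop.is_set():
        samples.put(read_sample())

def server_loop(emit, n):
    """Drain the queue in FIFO order and forward samples to clients."""
    for _ in range(n):
        emit(samples.get(timeout=1.0))

# Stand-ins for the real board reader and Socket.IO emit:
fake_reader = itertools.count().__next__
sent = []

worker = threading.Thread(target=sensor_loop, args=(fake_reader,), daemon=True)
worker.start()
server_loop(sent.append, 5)   # sent is now [0, 1, 2, 3, 4]
stop.set()
```

Because the queue handles locking internally, neither loop touches the other's state directly, which sidesteps most of the race conditions we initially hit.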
Accomplishments that we're proud of
As a team of two, we are proud of being able to successfully produce a working prototype, as well as setting up the connections between multiple different kinds of interfaces. We are also proud of successfully fetching data from the sensor hardware and getting web sockets to transfer data between python and react in real time.
What we learned
We learned more about web sockets and working with hardware. We also realized that threading and socket programming are areas where we lack knowledge and need further improvement in the future.
What's next for MindsEye
We want to reduce the complexity of our device by amalgamating all the components into one integrated device. This would greatly improve the user experience and reduce the influence of external noise on the signal.