using expert minimalist artisanal japanese construction techniques (origami), we've built an eco-friendly smart assistant from renewable materials (used coffee cup).
Inspiration
sleep deprivation
What it does
awexa (´・ω・`)™️ is your one-stop shop fow all your voice-to-text-to-voice needs. you speak, awexa (´・ω・`)™️ listens, and then awexa (´・ω・`)™️ speaks back to you. it's wike hawing your own personal wobot assistant, but without the wobot body.
How we built it
- 1 x jbl clip 3 speaker (won at a lucky draw at hack n roll 2021)
- 1 x half-washed food clique kopi c kosong cup (with coffee stains)
- 1 x sheet of paper (from a table)
- 1 x roll of double-sided tape (acquired from the residential college 4 level 6 lounge)
we created an internal RESTful API with FastAPI to handle user registration and voice transcription.
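a rough sketch of what the registration side of that API could look like, assuming a hypothetical /register route, request fields, and an in-memory user store (none of these names come from the actual code):

```python
# minimal sketch of the internal FastAPI app (route, fields, and storage are assumptions)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
users: dict[str, str] = {}  # in-memory stand-in for a real user database

class RegisterRequest(BaseModel):
    username: str
    display_name: str

@app.post("/register")
def register(req: RegisterRequest):
    # reject duplicate usernames
    if req.username in users:
        raise HTTPException(status_code=409, detail="user already exists")
    users[req.username] = req.display_name
    return {"username": req.username, "registered": True}
```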
voice input (and output) is handled by our voice_input POST endpoint, which first transcribes the audio with the OpenAI Whisper model on Replicate, then forwards the transcript to GPT-3 to generate a response. the text-to-speech audio output is generated with the Google Text-to-Speech API.
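a rough sketch of that pipeline is below. the Replicate model reference, the Whisper output key, the GPT-3 model name, the prompt, and the use of the gTTS package standing in for the Google Text-to-Speech API are all assumptions, not the actual implementation:

```python
# rough sketch of the voice_input endpoint (model identifiers and field names are assumptions)
import os
import tempfile

import openai
import replicate
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import FileResponse
from gtts import gTTS

app = FastAPI()
openai.api_key = os.environ["OPENAI_API_KEY"]

@app.post("/voice_input")
async def voice_input(audio: UploadFile = File(...)):
    # 1. save the uploaded audio so it can be passed to Replicate
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        f.write(await audio.read())
        audio_path = f.name

    # 2. transcribe with the OpenAI Whisper model hosted on Replicate
    #    (pin a specific version hash in practice; output keys assumed)
    whisper_out = replicate.run(
        "openai/whisper",
        input={"audio": open(audio_path, "rb")},
    )
    transcript = whisper_out["transcription"]

    # 3. forward the transcript to GPT-3 via the legacy completions API
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"You are awexa, a helpful voice assistant.\nUser: {transcript}\nAwexa:",
        max_tokens=150,
    )
    reply = completion.choices[0].text.strip()

    # 4. synthesise the reply to speech (gTTS as a stand-in for the Google TTS API)
    out_path = audio_path + ".mp3"
    gTTS(reply).save(out_path)
    return FileResponse(out_path, media_type="audio/mpeg")
```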
the front-end was created with reactjs.
Challenges we ran into
reactjs
Accomplishments that we're proud of
getting 1 react component to work correctly
What we learned
sam altman and his funny gang of matrix multipliers are going to make me jobless
What's next for awexa (´・ω・`)™️
straight into the dumpster fire
Built With
- fastapi
- gpt-3
- react
