Inspiration
Searching is an art, but how do we do it inside a video? I have wasted enough time skimming through lengthy videos while searching for content on YouTube. Supersearch is a step forward: it allows users to search for content inside videos.
What it does
The platform takes a user-supplied YouTube URL and extracts the audio from it. The audio is uploaded to AssemblyAI via its upload endpoint and transcribed by its transcription model. An NLP package extracts keywords from the transcribed audio, and the keywords, along with their segment clips, are indexed in Typesense. When the user looks up content, the query runs against the index and the matching video segments are returned and eventually rendered in the front end. A minimal sketch of this pipeline follows.
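Here is a rough Node.js sketch of the ingest side, under stated assumptions: the AssemblyAI `/v2/upload` and `/v2/transcript` endpoints are the real API, while `ytdl-core` and `keyword-extractor` are assumed stand-ins for the unnamed audio-extraction and NLP packages.

```js
// Sketch of the ingest pipeline: YouTube URL -> audio -> transcript -> keywords.
// ytdl-core and keyword-extractor are assumptions, not necessarily what we used.
const ytdl = require("ytdl-core");
const axios = require("axios");
const keywordExtractor = require("keyword-extractor");

const AAI = "https://api.assemblyai.com/v2";
const headers = { authorization: process.env.ASSEMBLYAI_API_KEY };

async function transcribe(youtubeUrl) {
  // 1. Pull the audio-only stream from the YouTube URL.
  const audioStream = ytdl(youtubeUrl, { filter: "audioonly" });

  // 2. Stream the raw audio to AssemblyAI's upload endpoint.
  const { data: upload } = await axios.post(`${AAI}/upload`, audioStream, { headers });

  // 3. Ask the transcription model to transcribe the uploaded audio.
  const { data: job } = await axios.post(
    `${AAI}/transcript`,
    { audio_url: upload.upload_url },
    { headers }
  );

  // 4. Poll until the transcript is ready; the result includes word-level timestamps.
  while (true) {
    const { data } = await axios.get(`${AAI}/transcript/${job.id}`, { headers });
    if (data.status === "completed") return data;
    if (data.status === "error") throw new Error(data.error);
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
}

// Extract keywords from the transcript text with an off-the-shelf NLP package.
function extractKeywords(text) {
  return keywordExtractor.extract(text, {
    language: "english",
    remove_digits: true,
    return_changed_case: true,
    remove_duplicates: true,
  });
}
```

The word-level timestamps in the transcript are what make it possible to map each keyword back to a playable clip boundary.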
How we built it
React was used for the front end and Node.js as the backend engine. The API that extracts the audio, uploads it, and transcribes it was written in Node.js and runs as a container on Google Cloud. A sketch of the indexing and search side appears below.
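Since the Typesense integration was not finished (see Challenges below), this is only a hedged sketch of what the indexing and search endpoint could look like; the `segments` collection, its schema, and the host/port values are assumptions.

```js
// Sketch of the Typesense indexing/search side. The "segments" collection name,
// its fields, and the connection details are assumptions for illustration.
const Typesense = require("typesense");
const express = require("express");

const typesense = new Typesense.Client({
  nodes: [{ host: "localhost", port: 8108, protocol: "http" }],
  apiKey: process.env.TYPESENSE_API_KEY,
});

// Index one transcript segment with its keywords and clip boundaries.
// Assumes the "segments" collection was already created with this schema.
async function indexSegment(segment) {
  return typesense.collections("segments").documents().create({
    id: segment.id,
    keywords: segment.keywords, // string[] from the NLP step
    start: segment.start,       // clip start, in ms
    end: segment.end,           // clip end, in ms
    videoUrl: segment.videoUrl,
  });
}

// Search endpoint consumed by the React front end.
const app = express();
app.get("/search", async (req, res) => {
  const result = await typesense
    .collections("segments")
    .documents()
    .search({ q: req.query.q, query_by: "keywords" });
  res.json(result.hits.map((hit) => hit.document)); // matching video segments
});

app.listen(8080);
```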
Challenges we ran into
I am a front-end novice, so developing the front end was a challenge; a few errors consumed a lot of time, and as a result the Typesense integration was not completed.
Accomplishments that we're proud of
We are particularly proud of developing an API, powered by AssemblyAI's endpoints, that facilitates searching inside video files.
What we learned
I learned a lot about front-end development, especially React.
What's next for Supersearch
Next for Supersearch is a plan to monetize the search-in-video functionality and, over time, grow it into a full search engine that searches inside videos.
Built With
- assemblyai
- google-cloud
- javascript
- nextjs
- react
- typesense