Built at ConUHacks 2026. The future of search isn't keywords; it's semantics.
Traditional lost-and-found systems are fundamentally broken: they rely on manual spreadsheets or on public lists that compromise security. Publicly listing a "Gold Rolex" invites false claims, while not listing it at all prevents recovery. findIt bridges this gap with a "blind matching" architecture that prioritizes privacy and security.
findIt automates the "handshake" between a lost report and a found item. By leveraging vector embeddings, the system matches items on their semantic meaning rather than on exact word overlap, removing the friction of manual searching in large-scale environments.
- Multimodal Ingestion: When a user uploads a photo, the BLIP image-captioning model generates a descriptive textual representation of it.
- Embedding Generation: This description is passed to the Google Gemini API to generate a high-dimensional vector embedding.
- Vector Storage: Embeddings are stored in MongoDB Atlas, which acts as our primary source of truth.
- Semantic Retrieval: To find a match, the system performs a Vector Search using Cosine Similarity between the user's inquiry vector and the inventory vectors.
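The four steps above can be sketched as three small functions. This is a minimal illustration, not the project's actual code: the model names (`Salesforce/blip-image-captioning-base`, `models/text-embedding-004`), the Atlas index name `item_index`, and the `embedding` field are illustrative assumptions.

```python
def caption_image(image_path: str) -> str:
    """Step 1: generate a descriptive caption for an uploaded photo with BLIP."""
    # Lazy imports keep the pure helper below usable without these heavy deps.
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    # Model checkpoint is an assumption; any BLIP captioning checkpoint works.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
    inputs = processor(Image.open(image_path).convert("RGB"), return_tensors="pt")
    return processor.decode(model.generate(**inputs)[0], skip_special_tokens=True)


def embed_text(description: str) -> list[float]:
    """Step 2: turn the caption into a high-dimensional vector via the Gemini API."""
    import google.generativeai as genai  # assumes GOOGLE_API_KEY is configured

    result = genai.embed_content(model="models/text-embedding-004", content=description)
    return result["embedding"]


def vector_search_stage(query_vector: list[float], limit: int = 5) -> dict:
    """Steps 3-4: build the Atlas $vectorSearch aggregation stage for retrieval.

    Stored embeddings live in the (hypothetical) "embedding" field; the stage is
    passed to collection.aggregate([...]) against the lost/found collection.
    """
    return {
        "$vectorSearch": {
            "index": "item_index",          # name of the Atlas vector index (assumed)
            "path": "embedding",            # document field holding the stored vector
            "queryVector": query_vector,
            "numCandidates": 100,           # oversample, then keep the best `limit`
            "limit": limit,
        }
    }
```

A request would then chain these: caption the found-item photo, embed the caption, and run the aggregation stage against the inventory collection to surface the closest lost reports.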
- Frontend: Angular (Reactive UI/UX)
- Backend: FastAPI (High-concurrency & Asynchronous processing)
- Database: MongoDB Atlas (Vector Search & NoSQL storage)
- AI/ML: Google Gemini API, BLIP Processor, Python
- Lifecycle Management: Managing vector embeddings through their full lifecycle, from generation to search-indexing.
- Mathematical Search: Leveraging MongoDB's search indexes to compute Cosine Similarity across thousands of items in milliseconds.
- Human-Centric Design: Designing software that solves an emotional human problem (losing valuables) with a balance of empathy and technical security.
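The Cosine Similarity that powers the matching reduces to a short formula: the dot product of two vectors divided by the product of their magnitudes. A stdlib-only sketch with toy 3-dimensional vectors (real Gemini embeddings have hundreds of dimensions, and the numbers here are invented purely for illustration):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """cos(theta) = (a . b) / (|a| * |b|), ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for embeddings of item descriptions.
lost_report = [0.9, 0.1, 0.3]   # e.g. "black leather wallet"
found_item  = [0.8, 0.2, 0.25]  # e.g. "dark billfold" -- different words, same meaning
unrelated   = [0.1, 0.9, 0.0]   # e.g. "red umbrella"

print(cosine_similarity(lost_report, found_item))  # close to 1: likely match
print(cosine_similarity(lost_report, unrelated))   # much lower: poor match
```

Because the score depends on vector direction rather than word choice, two descriptions phrased completely differently can still rank as a strong match, which is exactly what keyword search cannot do.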