Inspiration

We all wanted to test our skills and make AI more trustworthy in the way it handles private information. Many organizations struggle to know whether the files they upload contain sensitive data, and most tools don’t explain why something is flagged.

What it does

An AI system that analyzes multi-page documents, classifies them by sensitivity, and cites the exact pages or images behind each decision.

How we built it

We used Google Gemini for document understanding and language reasoning. Gemini processes the extracted text and image data, applies prompt-based logic trees, and outputs both a sensitivity classification and the supporting evidence behind it.
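A minimal sketch of that pipeline using the google-generativeai Python SDK is shown below. The model name, sensitivity labels, and prompt wording are illustrative assumptions, not the exact ones used in the project.

```python
# Sketch: classify a multi-page document with Gemini and return per-page evidence.
# Model name, labels, and prompt text are assumptions for illustration only.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")           # assumption: key supplied via config
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

PROMPT = (
    "You are a document sensitivity classifier. For the pages below, return JSON: "
    '{"label": "public" | "internal" | "confidential", '
    '"evidence": [{"page": <number>, "reason": "<short quote or description>"}]}'
)

def classify_document(pages: list[str]) -> dict:
    """pages: extracted text per page (page images could be passed alongside)."""
    numbered = "\n\n".join(f"[Page {i + 1}]\n{text}" for i, text in enumerate(pages))
    response = model.generate_content([PROMPT, numbered])
    # The prompt asks for JSON; strip a possible markdown fence before parsing.
    raw = response.text.strip().removeprefix("```json").removesuffix("```")
    return json.loads(raw)

if __name__ == "__main__":
    result = classify_document(["Quarterly revenue summary ...", "Employee records ..."])
    print(result["label"], result["evidence"])
```

In practice the JSON reply should be validated before use, since the model is not guaranteed to follow the schema exactly.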

Challenges we ran into

None of us had front-end development experience, so we had to lean on outside resources and documentation to get the interface running.

Accomplishments that we're proud of

Built our first functional AI document classifier with real evidence-based outputs. Learned to integrate Gemini APIs and connect AI reasoning to UI elements. Developed a workflow that could actually help companies handle sensitive information safely.

What we learned

We had never built a front end before, so we learned how to apply our existing skills to a UI-driven project.

What's next for Three Line Code

Build a cleaner dashboard with better visualizations and drag-and-drop uploads. Add dual-LLM verification (Gemini + GPT) to reduce review time.
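One way the planned dual-LLM check could work is to ask both models for the same label and route only disagreements to a human reviewer. The sketch below assumes that design; the model names and helper functions are hypothetical.

```python
# Sketch of the planned dual-LLM verification step: documents where the two
# models agree skip manual review. Model names are assumptions.
import google.generativeai as genai
from openai import OpenAI

QUESTION = "Classify this document as public, internal, or confidential. Reply with one word.\n\n"

def gemini_label(text: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model
    return model.generate_content(QUESTION + text).text.strip().lower()

def gpt_label(text: str, client: OpenAI) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": QUESTION + text}],
    )
    return resp.choices[0].message.content.strip().lower()

def verify(text: str) -> tuple[str, bool]:
    """Return (label, needs_human_review)."""
    a, b = gemini_label(text), gpt_label(text, OpenAI())
    return (a, False) if a == b else (a, True)  # agreement skips review
```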

Built With

Google Gemini