Inspiration

VERDIX AI was inspired by real incidents where people close to us were misled by fake news, viral rumors, fraudulent links, and edited government notices circulating on social media platforms. Many of these messages looked official and urgent, causing panic, misinformation, and even financial loss.

We realized that while misinformation spreads rapidly, verification tools are either too technical or fragmented. This motivated us to build a single platform that helps everyday users quickly verify content before trusting or sharing it.

What it does

VERDIX AI is a digital trust verification platform that helps users:

  1. Verify news and textual claims
  2. Detect fraudulent or suspicious links
  3. Analyze virality and manipulation patterns
  4. Identify tampered or AI-generated images, especially fake government notices

Instead of giving a simple yes/no answer, VERDIX AI provides confidence scores, trust indicators, and source-based verification links, helping users understand *why* content may be risky or trustworthy.
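To make the idea concrete, a result like the one described above can be modelled as a small structured object: a label plus the evidence behind it. This is only an illustrative sketch; the field names (`Verdict`, `trust_indicators`, etc.) are hypothetical, not the actual VERDIX AI schema.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    """Illustrative shape of a verification result: a label plus the
    evidence that explains it (all names here are assumptions)."""
    label: str                 # "Real", "Fake", or "Unverified"
    confidence: float          # model probability, 0.0 to 1.0
    trust_indicators: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)

v = Verdict(
    label="Unverified",
    confidence=0.62,
    trust_indicators=["cited link is missing HTTPS"],
    sources=[],
)
print(v.label, v.confidence)
```

Returning evidence alongside the label is what lets the UI show users *why* a verdict was reached instead of a bare classification.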

How we built it

  • Backend: Built using FastAPI (Python) with a machine learning model trained for fake news detection
  • ML Layer: Combines probability-based classification with heuristic manipulation analysis
  • Verification Engine: Cross-checks claims against trusted online sources and official domains
  • Link Analysis: Detects unsafe URLs, missing HTTPS, and suspicious domains
  • Image Forensics: Integrates a CNN-based image detector hosted on Hugging Face
  • Frontend: Developed using HTML, CSS, and JavaScript with an intuitive mode-based UI
  • Deployment: Backend hosted on Render, frontend deployed via GitHub Pages
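The link-analysis layer described above can be sketched with a few URL heuristics in Python. This is a minimal, assumed implementation: the function name, the red-flag rules, and the suspicious-TLD list are illustrative, not the production logic.

```python
from urllib.parse import urlparse

# Hypothetical example list; a real deployment would use curated feeds.
SUSPICIOUS_TLDS = (".xyz", ".top", ".click")

def link_risk(url: str) -> list[str]:
    """Return a list of simple red flags found in a URL:
    missing HTTPS, raw IP host, suspicious TLD, deep subdomains."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("missing HTTPS")
    host = parsed.hostname or ""
    if host.replace(".", "").isdigit():
        # e.g. http://192.168.1.5/... instead of a named domain
        flags.append("raw IP address host")
    else:
        if host.endswith(SUSPICIOUS_TLDS):
            flags.append("suspicious top-level domain")
        if host.count(".") >= 3:
            flags.append("unusually deep subdomain chain")
    return flags

print(link_risk("http://192.168.1.5/refund"))
# prints ['missing HTTPS', 'raw IP address host']
```

Each flag doubles as a human-readable trust indicator, which fits the explainability goal: the UI can surface the flags directly rather than a bare risk score.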

Challenges we ran into

  • Preventing false positives, where fake content could mistakenly appear real
  • Handling unverified claims when no trusted sources were available
  • Ensuring smooth frontend–backend communication across hosted platforms
  • Designing a UI that balances technical depth with simplicity for non-technical users

Accomplishments that we're proud of

  • Successfully combining ML predictions with real-world verification
  • Designing a system that only marks content as Real when trusted sources are found
  • Building a modular verification UI that allows users to analyze content in multiple ways
  • Creating a solution that focuses on trust, transparency, and explainability
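The conservative rule above (mark content Real only when trusted sources corroborate it) can be sketched as a small decision function. The thresholds and function name here are assumptions for illustration, not the tuned production values.

```python
def final_label(model_p_real: float, trusted_sources: list[str]) -> str:
    """Conservative verdict rule (thresholds are assumed):
    'Real' requires both high model confidence AND at least one
    trusted corroborating source; otherwise fall back to
    'Fake' or 'Unverified'."""
    if model_p_real >= 0.8 and trusted_sources:
        return "Real"
    if model_p_real <= 0.3:
        return "Fake"
    return "Unverified"

# High model confidence alone is not enough without sources.
print(final_label(0.9, []))  # prints Unverified
```

Requiring corroboration before a "Real" verdict trades some recall for precision, which reduces the most damaging failure mode: confidently endorsing fake content.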

What we learned

  • Misinformation is as much a human behavior problem as a technical one
  • Confidence without explanation can be dangerous
  • Trust increases when users see sources, reasoning, and transparency
  • Small design decisions can significantly affect user perception of credibility

What's next for VERDIX AI

  • Integration with official PIB and government portals for notice verification
  • OCR-based comparison for detecting edited government circulars
  • Browser extensions for real-time verification
  • Support for regional languages and local news
  • WhatsApp and Telegram forward verification tools
