Inspiration

In an age where timely and accurate information is crucial, our solution revolutionizes how you handle the influx of articles related to terror events. Our cutting-edge system streamlines and enhances the process of extracting critical insights from a deluge of incoming reports.

What it does

First, we perform coreference resolution to clarify contexts and references within the articles. Then, our advanced Large Language Model (LLM) extracts specific entities and their relationships, integrating them into a comprehensive knowledge graph. Articles are intelligently chunked, with entities linked to their respective extracts for precise, context-aware tracking.
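The construction step above can be sketched in a few lines. This is a minimal, illustrative sketch: `extract_entities` is a keyword-lookup stand-in for the actual LLM extraction prompt, and the chunk size and entity names are made up for the example.

```python
# Sketch of the graph-construction pipeline: chunk each article, extract
# entities per chunk, and link every entity to the extract it came from.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    chunk_id: int
    text: str
    entities: list = field(default_factory=list)

def chunk_article(text, size=200):
    """Naive fixed-size chunking (stand-in for smarter, sentence-aware chunking)."""
    return [Chunk(i, text[start:start + size])
            for i, start in enumerate(range(0, len(text), size))]

def extract_entities(chunk_text):
    # Placeholder for the LLM extraction call; here a simple keyword lookup.
    known = {"Alice", "Bob", "Paris"}
    return [e for e in known if e in chunk_text]

def build_graph(article):
    graph = {"nodes": set(), "edges": []}
    chunks = chunk_article(article, size=40)
    for chunk in chunks:
        chunk.entities = extract_entities(chunk.text)
        graph["nodes"].update(chunk.entities)
        # Link each entity to its extract for context-aware tracking.
        for ent in chunk.entities:
            graph["edges"].append((ent, "MENTIONED_IN", chunk.chunk_id))
    return graph, chunks
```

In the real system the nodes and `MENTIONED_IN` edges are written to Neo4j rather than kept in a Python dict.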

For retrieval, we deploy a Multi-agent LLM system designed to excel in the high-stakes environment of terror event reporting. This system resolves complex relationships, identifies key entities, generates detailed follow-up questions, retrieves pertinent extracts, and provides concise summaries. Additionally, it features a decisor agent to make informed, real-time decisions based on the aggregated data.
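The agent loop above can be sketched as a chain of functions. Each "agent" below is a stand-in for an LLM call; the function names, heuristics, and the ESCALATE/MONITOR decision labels are illustrative assumptions, not the system's actual prompts or outputs.

```python
# Minimal sketch of the multi-agent retrieval loop: identify entities,
# generate follow-up questions, retrieve extracts, summarise, then decide.
def entity_agent(question):
    # Identify key entities in the question (stand-in for an LLM prompt).
    return [w.strip("?.,") for w in question.split()[1:] if w[0].isupper()]

def followup_agent(entities):
    # Generate one follow-up question per entity.
    return [f"What is known about {e}?" for e in entities]

def retrieval_agent(questions, extracts):
    # Retrieve extracts mentioning any questioned entity.
    targets = [q.split()[-1].rstrip("?") for q in questions]
    return [x for x in extracts if any(t in x for t in targets)]

def summary_agent(evidence):
    # Concise summary stand-in: concatenate the retrieved extracts.
    return " ".join(evidence)

def decisor_agent(summary):
    # Decisor agent: an informed call based on the aggregated data.
    return "ESCALATE" if "attack" in summary.lower() else "MONITOR"

def answer(question, extracts):
    entities = entity_agent(question)
    evidence = retrieval_agent(followup_agent(entities), extracts)
    summary = summary_agent(evidence)
    return decisor_agent(summary), summary
```

Splitting the work across narrow agents keeps each prompt focused, which is the motivation for the multi-agent design described above.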

Our solution transforms the overwhelming influx of terror-related articles into clear, actionable intelligence, empowering security professionals and decision-makers to respond swiftly and effectively. Stay informed, stay prepared, and make critical decisions with confidence.

How we built it

We conducted our prototyping in Jupyter notebooks.

We used the spaCy en_core_web_sm model for coreference resolution. We used Neo4j Aura to store the graph data and a Neo4j vector index to store vector embeddings. We used GPT-3.5 and GPT-4 for graph construction and retrieval; a fine-tuned model such as GraphGPT would likely improve performance.
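The vector-index lookup can be approximated in memory to show what the retrieval step computes. This is an in-memory stand-in, not Neo4j's API: the toy embeddings below are made up, and in the real system the vectors come from an embedding model and the similarity search runs inside the Neo4j vector index.

```python
# Cosine-similarity top-k over stored chunk embeddings, mimicking what the
# Neo4j vector index does at scale.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, store, k=2):
    """store maps chunk_id -> embedding; returns the k most similar chunk ids."""
    return sorted(store, key=lambda cid: cosine(query_vec, store[cid]),
                  reverse=True)[:k]
```

At query time the multi-agent system embeds each follow-up question and uses this kind of lookup to fetch the most relevant extracts.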

How we answer the problem statement

We answer the problem statement by "designing and implementing an LLM that can extract entities from reports into a knowledge graph" and "answer questions based on the generated knowledge graph".
