<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Graphwise on Medium]]></title>
        <description><![CDATA[Stories by Graphwise on Medium]]></description>
        <link>https://medium.com/@graphwise?source=rss-39e3a6a41a63------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*fFo38c_fSYxrjqt3OWxAVg.png</url>
            <title>Stories by Graphwise on Medium</title>
            <link>https://medium.com/@graphwise?source=rss-39e3a6a41a63------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 13:45:56 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@graphwise/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[PoolParty 10.2: Advancing Global Semantic Reach with a Unified Identity]]></title>
            <link>https://graphwise.medium.com/poolparty-10-2-advancing-global-semantic-reach-with-a-unified-identity-e472d6dd82e9?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/e472d6dd82e9</guid>
            <category><![CDATA[semantic-search]]></category>
            <category><![CDATA[semantic-technologies]]></category>
            <category><![CDATA[taxonomy]]></category>
            <category><![CDATA[semantic-tagging]]></category>
            <category><![CDATA[semantics]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Tue, 07 Apr 2026 07:38:37 GMT</pubDate>
            <atom:updated>2026-04-07T07:38:37.108Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>This update bridges the gap between global language support and pinpoint search precision, all while aligning our core applications under the new Graphwise brand.</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_m3LxjCb0IGDKZL2kYOlgA.png" /></figure><p>The release of PoolParty 10.2 is a strategic step in our transition toward the future <strong>Graphwise Platform</strong>. While we are busy building a more integrated ecosystem, this release addresses two critical needs for the modern global enterprise: <strong>the ability to process knowledge in any language</strong> and <strong>the precision to find specific data points in an ocean of information</strong>.</p><h3>A unified identity under the Graphwise brand</h3><p>Following the merger of Ontotext and Semantic Web Company, we are officially bringing the <strong>Graphwise</strong> identity to our core products. It is more than just a fresh coat of paint; it is about providing a professional, integrated experience as you move between tools.</p><p>In this first phase of the rollout, you will notice a refreshed look for the log-in page, the Linked Data (LD) frontend, and the core <strong>Graph Modeling</strong> component (formerly known simply as the PoolParty suite). While the “PoolParty” name remains for this version, these visual updates prepare the groundwork for our evolution into the unified Graphwise Platform.</p><h3>Your knowledge graph, now fluent in over forty languages</h3><p>For years, many enterprise systems were “English-first,” requiring custom, high-maintenance workarounds for scripts like Japanese or Chinese. That ends with PoolParty 10.2. We have re-implemented our concept tagging engine using a native Lucene-based architecture.</p><ul><li><strong>Native multilingual support</strong>: Graphwise now supports over 40 languages natively, including Japanese, Chinese, Russian, and most European languages. 
No need for external “word form” files.</li><li><strong>Zero-touch indexing</strong>: Previously, users had to manually “Refresh the Extraction Model” whenever a taxonomy changed. Now, the extraction model is automatically rebuilt on the first request. It simply works in the background.</li><li><strong>Reduced complexity</strong>: By leveraging Finite State Transducers (FSTs), we have removed the dependency on external services for lookup logic. This means high-speed tagging with significantly less infrastructure overhead.</li></ul><p>Whether you are managing technical manuals in German or news snippets in Japanese, your <strong>taxonomy is now a global asset that detects and links concepts with native-level fluency</strong>.</p><h3>Precision when it matters: find the needle in the haystack</h3><p>In research-heavy sectors like pharmaceuticals, a search for a project code like “TAK-123” shouldn’t return every document containing the number “123.” Standard search engines often return broad, noisy results for such queries.</p><p>The new <strong>Exact Phrase Matching</strong> in GraphSearch makes data analysts’ lives much easier:</p><ul><li><strong>Pinpoint accuracy</strong>: You can now toggle exact matching via the API, ensuring that multi-word queries are treated as a contiguous phrase rather than a collection of individual words.</li><li><strong>Flexible ranking</strong>: Choose between a “Mandatory” filter (only show exact matches) or a “Ranking Boost” that pushes exact matches to the top while still showing related content.</li><li><strong>Alphanumeric intelligence</strong>: We’ve optimized the system to handle complex identifiers and technical terms that involve hyphens or special characters, ensuring they aren’t fragmented during analysis.</li></ul><h3>Simpler workflows for developers and admins</h3><p>We’ve also cleaned up the “plumbing” to make life easier for the teams running PoolParty:</p><ul><li><strong>Unified Tagger API</strong>: We have consolidated
several legacy endpoints into a single, high-performance /api/tag endpoint.</li><li><strong>Efficient multi-document handling</strong>: Whether you are uploading a ZIP file for aggregate analysis or sending multiple files for individual results, the process is now streamlined and more predictable.</li></ul><h3>What this means for you</h3><p>PoolParty 10.2 is about making your semantic operations more robust and easier to use on a global scale. It removes the friction of manual indexing, eliminates the “noise” in your search results, and provides a clear, unified interface that reflects our shared future as Graphwise.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*yKw74VdddLzeBI5qMT-JGA.jpeg" /><figcaption><a href="https://graphwise.ai/author/yasen-stoykov/"><strong>Yasen Stoykov</strong></a>, Product Marketing Manager at Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/poolparty-10-2-advancing-global-semantic-reach-with-a-unified-identity/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e472d6dd82e9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automating Data Privacy Compliance: Knowledge Graphs, Generative AI, and Real-Time Risk]]></title>
            <link>https://graphwise.medium.com/automating-data-privacy-compliance-knowledge-graphs-generative-ai-and-real-time-risk-20653a688cc9?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/20653a688cc9</guid>
            <category><![CDATA[risk-management]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[data-privacy]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Fri, 03 Apr 2026 04:06:01 GMT</pubDate>
            <atom:updated>2026-04-03T04:06:01.447Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>By combining knowledge graphs as the central privacy control plane with LLMs, NLP, GenAI, and machine learning, organizations can automate data privacy compliance, transform static risk management into real-time Data Privacy Intelligence, and govern emerging AI use cases effectively.</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*b80t5iPkVTBFPbatziB8_g.png" /></figure><p>Data privacy has shifted from a checkbox exercise to a board-level risk.</p><p>Global regulations such as the <strong>EU General Data Protection Regulation (</strong><a href="https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng?utm_source"><strong>GDPR</strong></a><strong>)</strong> require organizations to maintain detailed <strong>Records of Processing Activities (RoPA)</strong> (Article 30) and to perform <strong>Data Protection Impact Assessments (DPIAs)</strong> for high-risk processing (Article 35).</p><p>In parallel, frameworks such as the <a href="https://www.nist.gov/privacy-framework?utm_source"><strong>NIST Privacy Framework 1.1</strong></a> help organizations treat privacy risk at the same level as cybersecurity and enterprise risk.</p><p>Yet most privacy teams still live in spreadsheets, manual questionnaires, and disconnected tools. 
That makes it hard to answer basic questions quickly and confidently:</p><ul><li><em>Where is our most sensitive data actually flowing?</em></li><li><em>Which third parties are the riskiest — and why?</em></li><li><em>If a new law or contract clause changes, what’s impacted?</em></li></ul><p>This is exactly where <strong>Knowledge Graphs</strong>, <strong>LLMs</strong>, <strong>NLP</strong>, <strong>GenAI</strong>, and <strong>ML</strong> fit together into a modern <strong>Data Privacy Intelligence</strong> stack.</p><p>In this post, we’ll walk through the role of each technology, how they interlock, and how <a href="https://zeniagraph.ai/"><strong>Zenia Graph</strong></a>, together with <strong>Graphwise,</strong> helps organizations operationalize data privacy, manage legal and regulatory risk, and gain a competitive edge.</p><h3>Knowledge Graphs: The Privacy Control Plane</h3><p>A <a href="https://zeniagraph.ai/resources/concept/what-are-knowledge-graphs/">knowledge graph</a> is a connected map of everything that matters to privacy and AI governance: people, systems, data, vendors, regulations, risk, compliance and controls. 
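To make the “connected map” idea concrete, here is a minimal sketch using an in-memory toy triple store; the node and property names (ProcessingActivity, involves, hasDPIA) are hypothetical illustrations, not Zenia Graph’s published schema:

```python
# Toy privacy graph as (subject, predicate, object) triples.
# All names are invented for illustration only.
triples = [
    ("BadgeAccess", "type", "ProcessingActivity"),
    ("BadgeAccess", "involves", "BiometricData"),
    ("BadgeAccess", "hasDPIA", "DPIA-042"),
    ("FaceLogin", "type", "ProcessingActivity"),
    ("FaceLogin", "involves", "BiometricData"),
    # FaceLogin has no DPIA on record
]

def missing_dpia(triples):
    """Processing activities that involve biometric data but lack a DPIA."""
    def objs(subject, predicate):
        return {o for s, p, o in triples if s == subject and p == predicate}
    activities = {s for s, p, o in triples
                  if p == "type" and o == "ProcessingActivity"}
    return sorted(a for a in activities
                  if "BiometricData" in objs(a, "involves")
                  and not objs(a, "hasDPIA"))

print(missing_dpia(triples))  # ['FaceLogin']
```

In a real deployment this question would be a SPARQL query against the RDF graph, but the shape of the answer is the same: follow relationships, then filter on what is missing.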
Instead of scattered spreadsheets and rigid forms, you have a living, queryable model — <strong>a privacy control plane</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/760/1*zu4DmVBveeu0ztcz3vYpkw.png" /></figure><h4>What a privacy knowledge graph models</h4><p>Typical entities and relationships include:</p><ul><li><strong>Systems &amp; applications</strong> — CRM, HR, marketing, data warehouses, SaaS tools</li><li><strong>Data assets</strong> — tables, fields, documents, logs, events</li><li><strong>Data subjects &amp; categories</strong> — customers, employees, minors, special categories</li><li><strong>Processing activities</strong> — purposes, lawful bases, retention, jurisdictions</li><li><strong>Vendors &amp; third parties</strong> — roles, locations, data flows, contracts, sub-processors</li><li><strong>Controls &amp; obligations</strong> — DPIAs, RoPAs, DPAs, SCCs, DSAR workflows, retention rules</li><li><strong>Risk Factors </strong>— Threats (Ransomware, Phishing), Vulnerabilities (CVEs), Likelihood vs. 
Severity scores</li><li><strong>Legal Frameworks</strong> — Regulations (GDPR, AI Act), Specific Articles, Lawful Bases, Consent versions</li><li><strong>Mitigations</strong> — Encryption levels, Pseudo-anonymization, Access controls, Residual risk status</li><li><strong>Incidents</strong> — Breach events, Notification deadlines, Root cause paths, Remediation steps</li></ul><p>With this in place, you can ask questions like:</p><ul><li>“Show me all processing involving biometric data in the EU with no DPIA.”</li><li>“Which vendors receive EU personal data and lack updated SCCs?”</li><li>“For this new feature, which datasets, systems, vendors, and risks are connected?”</li><li>“If we patch the SQL vulnerability in ‘Legacy Payments’, how much does our global Risk Score drop?”</li><li>“Which data transfers to non-EU countries rely on SCCs but are missing a Transfer Impact Assessment (TIA)?”</li><li>“Show me all ‘High Severity’ risks affecting ‘Special Category Data’ that remain unmitigated for &gt; 30 days.”</li></ul><h4>Why graphs beat rigid lists and forms</h4><p>Most legacy platforms sit on <strong>relational or proprietary databases</strong> with fixed schemas. 
That works when:</p><ul><li>Data flows are simple</li><li>Schemas rarely change</li><li>Only one region or regulation matters</li></ul><p>It breaks down when:</p><ul><li>The same attribute (e.g., “email address”) flows through dozens of systems and vendors</li><li>New regulations (like the EU AI Act) or AI use cases appear faster than the schema can be updated</li><li>You must understand not just what data exists, but how it moves and why it is allowed</li></ul><p>Graphs are:</p><ul><li><strong>Flexible</strong> — adding new node types (e.g., “AI model,” “training dataset,” “data residency zone”, “new risk”) and relationships doesn’t require redesigning every form.</li><li><strong>Relationship-first</strong> — flows, dependencies, and cross-border paths are modeled naturally.</li><li><strong>Contextual</strong> — ideal as the <strong>context layer</strong> on top of existing scanners, catalogs, and logs.</li></ul><p>Scanners and relational lists tell you what you have. The graph explains <strong>how it fits together and what it means for risk and compliance</strong>.</p><h4>Data sovereignty and localization</h4><p>For global companies, <strong>data sovereignty</strong> and localization are now core requirements. 
Because regions, legal entities, and data stores are first-class graph nodes, you can:</p><ul><li>Visualize <strong>cross-border data flows</strong> between cloud regions, vendors, and subsidiaries</li><li>Answer: “Which EU data flows to US-based systems?” or “Where do we process data for Brazilian customers?”</li><li>Support localization policies, such as keeping HR data for specific countries within certain regions</li></ul><p>List-based tools show you columns filtered by “country.” A graph shows the <strong>network of flows</strong> — something executives, auditors, and regulators understand immediately.</p><h3>LLMs and Graph RAG: Conversational Access to Compliance</h3><p>Once your privacy landscape is modeled as a knowledge graph, <strong>LLMs</strong> turn it into an <strong>interactive assistant</strong>.</p><p>Instead of learning SPARQL or hunting through dashboards, privacy, legal, or business stakeholders can ask:</p><ul><li>“Which vendors are high-risk in Europe and why?”</li><li>“What personal data do we store about candidates in Germany, and where does it go?”</li></ul><h4>Graph-aware LLMs (Graph RAG)</h4><p>Standard RAG pulls text chunks from documents. 
<a href="https://graphrag.info/">Graph RAG</a> retrieves structured facts and relationships from the graph and passes them to the LLM as context.</p><p>Example prompts:</p><ul><li>“Summarize all high-risk processing involving biometric data in France and list missing DPIAs.”</li><li>“Explain why Vendor X is marked high-risk and recommend remediation steps.”</li><li>“If we add a new AI model trained on dataset Y, what privacy and transfer risks arise?”</li></ul><p>Because answers are grounded in the graph, they are:</p><ul><li><strong>Traceable</strong> — linked back to concrete systems, vendors, and documents</li><li><strong>Consistent</strong> — aligned with a single source of truth</li><li><strong>Explainable</strong> — supported by explicit graph paths and attributes</li></ul><p>LLMs become a privacy copilot that:</p><ul><li>Lets users “<strong>Talk to Your Graph</strong>” in natural language (via Graphwise)</li><li>Provides <strong>just-in-time guidance</strong> inside workflows (e.g., suggesting lawful bases or flagging high-risk use cases)</li><li>Generates <strong>short, audience-specific summaries</strong> of RoPAs, DPIAs, and audit findings</li></ul><h3>NLP: From Documents and Policies to Structured Signal</h3><p>Most privacy-relevant information starts as <strong>unstructured text</strong>:</p><ul><li>Data Processing Agreements and terms of service</li><li>Vendor security questionnaires</li><li>Privacy notices and internal policies</li><li>Incident and logging reports</li><li>DSAR/complaint tickets</li></ul><p><a href="https://zeniagraph.ai/kgservice/nlp/">NLP</a> turns this unstructured content into structured graph entries.</p><p>Core capabilities include:</p><ul><li><strong>PII &amp; data category detection</strong> — spotting personal and sensitive data in schemas, samples, or documentation</li><li><strong>Contract &amp; policy understanding</strong> — extracting roles (controller/processor), purposes, retention, locations, and 
safeguards</li><li><strong>Entity &amp; relationship extraction</strong> — mapping systems, vendors, datasets, and obligations into the graph</li><li><strong>Classification &amp; tagging</strong> — labeling documents as DPIAs, DPAs, policies, retention schedules, etc., and linking them to processing activities</li></ul><p>Scanner tools like <strong>BigID</strong> or <strong>Securiti</strong> do a great job finding sensitive data. Zenia Graph ingests those outputs and adds <strong>business context</strong>: who owns the system, which purpose the data serves, what contracts apply, and which legal regimes matter.</p><p>This continuous ingestion keeps the graph — and therefore your privacy posture — <strong>up to date</strong>, instead of relying on annual surveys and stale forms.</p><h3>Generative AI: Drafts, Explanations, and “What-If” Analysis</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/666/1*hz8giJtdclhuIh9XPYqSAQ.png" /></figure><p><strong>Generative AI</strong> sits on top of the graph and NLP layer to remove manual drafting work.</p><p>Typical workflows:</p><ul><li><strong>Drafting DPIAs &amp; risk assessments</strong> — using graph data about systems, data categories, risks, and controls to pre-fill DPIA assessments, streamlining the mandatory Human-in-the-Loop (HITL) review process.</li><li><strong>RoPA generation and updates</strong> — automatically creating and updating RoPA entries as systems, vendors, and purposes change</li><li><strong>DSAR support</strong> — assembling first-draft responses by pulling together where an individual’s data lives, why it is processed, and how long it is kept</li><li><strong>“What-if” scenarios</strong> — answering questions like: “If we move this workload from the US to the EU, or swap Vendor A for Vendor B, how does that impact risk, transfers, and required controls?”</li></ul><p>Generative AI doesn’t replace the privacy officer or in-house counsel. 
It removes repetitive copy-and-paste tasks so experts can focus on decisions, negotiation, and strategy.</p><h3>ML and GNNs: Real-Time Risk Scoring</h3><p><strong>Machine learning (ML)</strong> and <strong>graph neural networks (GNNs)</strong> turn your graph into a real-time risk engine.</p><h4>ML for context-aware risk</h4><p>With graph features available, ML can:</p><ul><li><strong>Score vendor and data-sharing risk</strong> using attributes like data sensitivity, jurisdiction, certifications, incident history, and control coverage</li><li><strong>Detect anomalies</strong> in access patterns (for example, a sudden spike of HR data moving into an unusual SaaS tool)</li><li><strong>Predict missing controls</strong> — highlighting where lack of encryption, retention limits, or DPAs is likely to cause problems</li></ul><p>Instead of flat checklists, you get <strong>risk scores that understand context</strong>.</p><h4>GNNs with PyG: learning risk from relationships</h4><p>In privacy, risk is rarely about a single node; it’s about <strong>how nodes are connected</strong>. 
That’s where GNNs and <strong>PyTorch Geometric (</strong><a href="https://www.pyg.org/"><strong>PyG</strong></a><strong>)</strong> come in:</p><p>Steps:</p><ol><li><strong>Build a learning graph</strong></li></ol><ul><li>Nodes: vendors, systems, datasets, jurisdictions, products</li><li>Edges: data flows, contracts, processing relations, regulatory links</li><li>Features: sensitivity, locations, certifications, incidents, missing controls</li></ul><ol start="2"><li><strong>Train a GNN to predict risk</strong></li></ol><ul><li>Use historical incidents, audit findings, and red flags as training labels</li><li>Let the model learn patterns like “vendors with this neighborhood and these gaps tend to be high risk”</li></ul><ol start="3"><li><strong>Score nodes and relationships</strong></li></ol><ul><li>Every vendor, system, or flow receives a <strong>graph-aware risk score</strong></li><li>Scores can be refreshed when reality changes (new vendor, new flow, expired certification, new AI model)</li></ul><ol start="4"><li><strong>Drive real-time heat maps</strong></li></ol><ul><li>Privacy, legal, and security teams see <strong>live risk dashboards</strong> instead of static annual reports</li><li>They can justify priorities to the board and regulators with clear evidence and reasoning</li></ul><h3>AI Governance and the EU AI Act</h3><p>The <strong>EU AI Act</strong> and similar initiatives mean organizations must govern not just data, but <strong>AI models</strong>:</p><ul><li>Which models exist and who owns them</li><li>Which datasets (and personal data categories) trained them</li><li>What decisions they influence</li><li>What risks and controls apply</li></ul><p>A knowledge graph is a natural backbone for <strong>AI governance</strong>:</p><ul><li>Models become first-class nodes linked to training data, business owners, and risk assessments</li><li>Data lineage captures which personal data feeds which models, under what lawful basis</li><li>Policies, monitoring controls, and impact assessments 
attach directly to AI models, just like to systems and vendors</li></ul><p>Vendors like Securiti are pivoting hard to “AI governance.” Zenia’s advantage is that <strong>the same graph-based infrastructure</strong> that powers data privacy compliance can govern <strong>data + models + obligations</strong> in one consistent view.</p><h3>A Unified Data Privacy Intelligence Stack</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/568/1*VobeDqM0SLIB66RSffE-mA.png" /></figure><p>Put together, the stack looks like this:</p><ol><li><strong>Ingest &amp; Discover — Connectors + NLP</strong></li></ol><ul><li>Bring in schemas, logs, scanner outputs, contracts, policies, and regulations</li><li>Extract entities, relationships, and privacy semantics</li></ul><ol start="2"><li><strong>Model &amp; Govern — Knowledge graph</strong></li></ol><ul><li>Maintain a shared model for RoPAs, DPIAs, vendors, AI models, and cross-border flows</li></ul><ol start="3"><li><strong>Analyze &amp; Quantify — Analytics + ML + GNNs</strong></li></ol><ul><li>Compute and update risk scores, detect anomalies, cluster similar activities and vendors</li></ul><ol start="4"><li><strong>Interact &amp; Assist — LLMs + Generative AI</strong></li></ol><ul><li>“Talk to Your Privacy Graph” via Graphwise</li><li>Draft DPIAs, RoPAs, DSAR responses, AI model registers, and remediation plans</li></ul><ol start="5"><li><strong>Orchestrate &amp; Execute — Agentic Automation</strong></li></ol><ul><li><strong>Move beyond conversation</strong>: empower AI agents to interact with API hooks, autonomously suspending vendor access or triggering security protocols when risk thresholds are breached</li></ul><ol start="6"><li><strong>Act &amp; Learn</strong> — Workflows + feedback</li></ol><ul><li>Trigger tasks, capture decisions, and feed results back into the graph for continuous improvement</li></ul><p>The result: <strong>continuous, graph-driven intelligence</strong>, not static compliance snapshots.</p><h3>How Zenia Graph and Graphwise Compete — and 
Win</h3><p><strong>Zenia Graph</strong> specializes in data privacy, AI governance, and third-party risk using knowledge graphs and AI. <strong>Graphwise</strong> delivers the RDF graph database and the conversational interface that let users talk to that graph.</p><h4>What Zenia Graph provides</h4><ul><li><strong>Privacy &amp; AI Governance Graph</strong> — a domain-specific model for processing activities, data, systems, vendors, contracts, AI models, and controls</li><li><strong>Connectors for multi-cloud and hybrid</strong> — ingesting from AWS, Azure, GCP, Snowflake, Databricks, on-prem, and existing compliance and discovery tools</li><li><strong>Graph-native risk analytics</strong> — ML and GNN-based scoring, risk heat maps by line of business, region, product, and vendor</li><li><strong>“Talk to Your Privacy Graph”</strong> assistant (via Graphwise) — natural-language Q&amp;A with traceable, graph-backed answers</li><li><strong>Generative content workflows</strong> — DPIAs, RoPAs, DSAR responses, AI model registers, and internal briefings, all grounded in the graph</li></ul><h4>Positioning vs. incumbents</h4><ul><li><strong>Vs. OneTrust / TrustArc (rigid form-based platforms)</strong></li><li>They focus on static forms and checklists.</li><li>Zenia delivers a <strong>dynamic, graph-based model</strong> that evolves as systems, vendors, AI models, and laws change.</li><li><strong>Vs. BigID / Securiti (scanner-first tools)</strong></li><li>They are excellent at <strong>discovering</strong> PII and sensitive data.</li><li>Zenia is the <strong>context layer</strong>: understanding why that data is there, who owns it, what contract governs it, and which obligations apply.</li><li><strong>Vs. 
Microsoft Purview (ecosystem-centric)</strong></li><li>Purview shines in Microsoft-heavy stacks.</li><li>Zenia is intentionally <strong>ecosystem-agnostic</strong>, unifying multi-cloud and hybrid environments where most real enterprises live.</li></ul><h3>Conclusion: From Static Compliance to Living Intelligence</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/910/1*oKW9JNAl6rUL1wz4qNa50A.png" /></figure><p>Organizations that can <strong>see, explain, and prove</strong> how they use personal data — and how they govern their AI models — will win the trust of regulators, customers, and boards.</p><p>By combining:</p><ul><li><strong>Knowledge graphs</strong> as the privacy and AI governance control plane</li><li>A <strong>graph schema</strong> aligned with international standards like <strong>W3C DPV</strong> for interoperability</li><li><strong>LLMs</strong> as an intelligent, conversational interface</li><li><strong>NLP</strong> to bring unstructured evidence into the graph</li><li><strong>Generative AI</strong> to draft and explain privacy and AI artifacts</li><li><strong>Agentic workflows</strong> to autonomously orchestrate remediation, enforce controls, and execute response protocols</li><li><strong>Machine learning</strong> and GNNs to quantify and detect risk in real time</li></ul><p>…you can transform privacy from a fragmented, manual burden into a <strong>connected, intelligent competitive advantage</strong>.</p><p><strong>Zenia Graph</strong>, together with <strong>Graphwise</strong>, helps organizations make that shift — turning rigid, form-based compliance into a living <a href="https://zeniagraph.ai/use-cases/data-privacy-compliance-and-regulatory/"><strong>Data Privacy Intelligence Platform</strong></a> that keeps pace with regulation, technology, and business change.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*OugBT1AtI0dPdijNi9g9kA.jpeg" /><figcaption><a 
href="https://graphwise.ai/author/aurelije-zovko/"><strong>Aurelije Zovko</strong></a>, Co-Founder &amp; CTO at Zenia Graph</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*mJmu74SVwv-UWh6EHjzNGg.png" /><figcaption><a href="https://graphwise.ai/author/nina-zovko/"><strong>Nina Zovko</strong></a>, Co-Founder &amp; CPO at Zenia Graph</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/automating-data-privacy-compliance-knowledge-graphs-generative-ai-and-real-time-risk/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=20653a688cc9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AstraZeneca: Enabling New Medicines Through Semantic Knowledge Graphs]]></title>
            <link>https://graphwise.medium.com/astrazeneca-enabling-new-medicines-through-semantic-knowledge-graphs-8fca7299a9f6?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/8fca7299a9f6</guid>
            <category><![CDATA[medicine]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[fair]]></category>
            <category><![CDATA[scientific-intelligence]]></category>
            <category><![CDATA[semantics]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Wed, 01 Apr 2026 09:37:10 GMT</pubDate>
            <atom:updated>2026-04-01T09:37:10.188Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>This is an abbreviated and updated version of a presentation from Graphwise Graph AI Summit 2025 by Ben Gardner, R&amp;D lead for Data Mesh and Semantic Infrastructure at AstraZeneca</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*56Qjk-4uE6rsPyZy.png" /></figure><p>Our understanding of disease evolves constantly, and with it, our ability to treat patients effectively. A century ago, cancer was considered a single disease. Then we refined our understanding of cancers of specific organs — cancer of the lung, for instance. Later, we distinguished between different cell types, like non-small cell lung cancer. Today, we examine individual mutational profiles of cancers and develop medicines targeting specific subpopulations.</p><p>This evolution creates a fundamental challenge: as our understanding grows more precise, we need to understand patients participating in clinical studies in increasingly defined ways. We need to identify the patient and the exact subpopulation they represent within a disease. This precision enables more targeted treatment and better success rates.</p><p>But there’s a problem — our data has always been captured in verticals supporting specific processes. What precision medicine requires is horizontal analysis across those silos. That’s why we turned to <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graph</a> technology.</p><h3>Building Scientific Intelligence: a use case-driven approach</h3><p>At <a href="https://www.astrazeneca.com/">AstraZeneca</a>, we built a tool called Scientific Intelligence to address this challenge. Our approach from the start has been to remain use case-driven. With the enormous volume of data available both internally and publicly, trying to “boil the ocean” simply doesn’t work. 
We recognized early that science evolves, questions evolve, and everything must be modeled on what is being asked and what the data shows us — not what we could theoretically model.</p><p>Scientists ask complex questions: “Find me subjects who participated in studies where the indication was non-small cell lung cancer, where the drug was Tagrisso, where they had adverse event X, with CT scans of their lung, but with a genetic profile of Y.” These queries span many different data modalities.</p><p>Our strategy leverages AstraZeneca’s data mesh architecture, where platforms aggregate and manage data around specific disciplines. We have clinical data in SDTM format, omics data including gene expression and RNA levels, imaging data covering both medical images like CT scans and digital pathology slides, and critically, sample information about specimens from trial subjects. In principle, we serialize all of this into what we call a “knowledge map” — we use that term with our customers because we are driving navigation of the space. We then surface this knowledge graph through a front end, enabling relatively easy exploration.</p><p>The goal is simple: help people find patients or samples matching their profile, then submit compliant requests for data access. Since we work with some of the most ethically and privacy-sensitive data the company holds, we operate in a very compliant fashion. We generally show information about what happened and observations made, rather than the actual observations themselves.</p><h3>From studies to individual observations</h3><p>The knowledge graph centers on a few major nodes: the clinical study with everything we can say about it, the subjects who participated, the samples taken from those subjects, and the observations made on those samples or subjects.</p><p>We provide summary statistics around studies by indication and drug. 
For individual studies, we offer a 360-degree view including the title, drug, indication, status, number of patients recruited, milestones completed, and links to critical documents like the clinical study protocol. Moving to the subject level, we have connections radiating outward. We can provide summary views showing the number of adverse events or total lab tests performed, but also drill into individual subject demographics, adverse events, and lab tests. This is where researchers really start mining subpopulations and specifying the exact group they want to examine.</p><p>At the sample level, we display inventory information — what’s still available to order, what type of sample was taken (plasma, blood, biopsy), where in the body it originated. Finally, at the individual observation level, we can detail the number and types of images available, tumor types observed, and stains performed. We can get remarkably granular, starting at the clinical study and drilling down through subject, sample, and omics data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3pxJ3unZT61psE0dPWKV3Q.png" /></figure><p>The user interface presents dashboards for clinical studies showing where trials run globally and facets for filtering — by therapeutic area, indication, disease, drugs used, and development phase. Clicking facets builds queries with constraints. A researcher might specify: “Show me only oncology studies where non-small cell lung cancer was the indication, which are phase three.” This continuously narrows the result set. The interface includes an abstraction of the graph where circles represent nodes connected to clinical studies, allowing navigation across the graph to other dashboards covering subjects, samples, and observations.</p><h3>Unlocking unknown knowns</h3><p>But the real value becomes clear only through examples. A data scientist recently approached us wanting to build a predictive model for recovery in chemotherapy patients. 
Some patients experience white blood cell count drops requiring them to stop chemotherapy. Predicting this allows putting them on recovery therapy so they can take additional rounds — a massive impact on cancer survival chances.</p><p>The data scientists worked with a particular drug product team and knew of two or three studies likely holding relevant subjects, but they worried about not having enough subjects in the subpopulation to train their machine learning model. Using Scientific Intelligence, we identified five additional studies they weren’t aware of that contained the right population. This gave them confidence to request access to those studies, knowing the number of subjects would be sufficient for their model.</p><blockquote>“This was hugely enabling because it [the complex search process] has previously been a very manual process and it could take weeks to do. We went from weeks to minutes.” — Ben Gardner, R&amp;D lead for Data Mesh and Semantic Infrastructure at AstraZeneca</blockquote><p>For study design, researchers often need to know variance in liver tests or blood pressure for particular subpopulations — analysis that used to take weeks now takes hours. We can provide the data, though they still complete the final statistical analysis.</p><p>Landscape reviews represent another powerful application. These enable therapeutic areas to understand what data and samples they have available based on different subpopulations. This has proven transformative in oncology, changing how teams prosecute their drug programs.</p><p>These reviews used to take months to build and immediately went out of date as new data came in — a continuous “Forth Bridge painting” exercise. Now we build queries that update in real time. Every time new data becomes available, they get automatic updates.</p><h3>Teaching people to think differently</h3><p>An interesting challenge emerged with our scientists. Most aren’t experienced querying knowledge graphs. 
When we said, “We’ve pulled these data sources together into a graph and you can ask really interesting questions,” their first response was: “I don’t know what I can ask or what I should ask, because I’ve never been able to ask this sort of question before.”</p><p>This created a chicken-and-egg scenario where because they knew they couldn’t ask these questions previously, they never bothered thinking about them. So now we are doing extensive education to help people think differently and recognize new possibilities.</p><p>Another problem is that the questions that generate efficiency savings aren’t asked daily by each group. That data scientist asking about the predictive model for white blood cell drops in chemotherapy treatments might ask that question once every six months. By the time they need it again, they’ve forgotten how to use the system. We need better ways of enabling people to interact with and query the data.</p><p>We can build beautiful, complicated data structures, but the challenge is making them navigable and consumable for complex questions. I firmly believe we need a return of librarians. When I did my PhD, librarians helped me navigate a large building full of information with a non-intuitive indexing strategy. We need the equivalent for these complicated knowledge graphs we are building.</p><h3>Scaling through pragmatism</h3><p>Managing over 20 different data sources and modalities to create that unified picture is no small task. Initially, we handled everything from data capture through cleanup and curation, serializing into data types, integrating into the graph, and exposing through Scientific Intelligence. We could do it, but we burned out. 
The team size needed to continue evolving and handling this diversity far exceeded our funding capacity.</p><p>We asked three simple questions:</p><ul><li>How can we maximize our engineering skills to focus semantic engineers on what’s important while getting others to do standardization?</li><li>How do we better manage the graph ecosystem — updating infrastructure, orchestrating the smorgasbord of tooling, evolving from virtualization to physical?</li><li>How do we democratize the graph to people, making it useful not just through the Scientific Intelligence UI but through other data forms?</li></ul><p>The smart answer was to accept that classic SQL-type data cleanup using DBT and Snowflake with SQL-trained data engineers could accomplish significant work. We push the model into that data flow upfront — minting URIs, applying controlled vocabularies, flattening data — all in SQL. This becomes a data product we can serialize into the knowledge graph.</p><p>We chose Corporate Memory by eccenca, backed by <a href="https://graphwise.ai/components/graphdb/">GraphDB</a> as the quad store, because it allowed a single vendor to manage the whole stack and suite of tooling. Now we just deploy images rather than patching and maintaining many different pieces. Our semantic engineers can focus on semantics, not infrastructure maintenance. It also provided nice integration opportunities with the established ecosystem of data consumption tools. Our Scientific Intelligence interface is built with Discover from OntoForce.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*28_nnLiv0IJiEaXi4kGopw.png" /></figure><h3>Putting the graph into the data</h3><p>We initially thought we needed to transform all data and build it into a graph — essentially creating a graph across the entire enterprise. 
What we learned from our experience with virtualization upfront, serialization into the graph, and using controlled vocabularies changed our perspective fundamentally.</p><p>We now materialize controlled vocabularies and make them available not just to our data pipeline but to everyone else’s. We drive common patterns from the reference model and URI patterns we establish. What we realized is that we have pivoted. While we still put data into our knowledge graph for Scientific Intelligence, from an enterprise perspective, we are really putting the graph into the data, through artifacts like controlled vocabularies.</p><p>This lets us scale differently. We provide controlled vocabularies — built as <a href="https://graphwise.ai/fundamentals/what-is-a-taxonomy-management/">taxonomies</a> — then flatten them into Snowflake tables for easy consumption by data pipelines. This means data products being produced already have the URIs we want to use in our graph, making them easy to consume. This represents our major learning and pivot.</p><h3>Key takeaways</h3><p>Evolving science has driven us to understand diseases with greater and greater granularity, creating new data demands and resulting in complicated data graphs. But we must help people navigate these complex spaces. Focus on generating common patterns. Look for opportunities to let others do the hard work for you, enabling you to focus limited skills on truly important process elements.</p><p>There’s no shame in using traditional approaches where they make sense. Most importantly, invest in data interoperability. That means building taxonomies, flattening them, and making them available as simple flat lists for SQL engineers to consume — it makes a huge difference. 
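The flattening step can be sketched in a few lines of Python. This is a minimal illustration only: the concept URIs, labels, and row layout below are invented for the example, not AstraZeneca's actual vocabulary or table schema.

```python
# Sketch: flatten a small SKOS-style taxonomy into flat rows that a
# SQL engineer could load straight into a warehouse table.
# All URIs and labels are invented for illustration.

taxonomy = {
    "http://example.org/concept/cancer": {
        "prefLabel": "Cancer",
        "broader": None,
    },
    "http://example.org/concept/lung-cancer": {
        "prefLabel": "Lung cancer",
        "broader": "http://example.org/concept/cancer",
    },
    "http://example.org/concept/nsclc": {
        "prefLabel": "Non-small cell lung cancer",
        "broader": "http://example.org/concept/lung-cancer",
    },
}

def flatten(tax):
    """Emit one flat row per concept: (uri, label, parent_uri, depth)."""
    def depth(uri):
        d = 0
        while tax[uri]["broader"] is not None:
            uri = tax[uri]["broader"]
            d += 1
        return d
    return [
        (uri, node["prefLabel"], node["broader"], depth(uri))
        for uri, node in tax.items()
    ]

rows = flatten(taxonomy)
```

Once in this shape, the rows carry the minted URIs with them, so downstream SQL pipelines can join on URIs without ever touching RDF tooling.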
Put editorial governance in place and hide the semantics, because most of the world doesn’t need the semantics but needs the artifacts we create to make their data FAIR.</p><p>On ROI, you can calculate time saved and multiply across multiple people and use cases. We looked at frameworks for the increasing value of data as you join together different modalities. With about 350 studies and 750,000 subjects, as we add genetic sequencing information, samples, and imaging data, you get multipliers — ten times as you bring two together, a hundred times with another, 300 times with the next. The potential value becomes enormous. It’s about use cases that enable drawing down against that potential value.</p><p>The question about hiding semantics isn’t about toxicity — it’s about avoiding confusion. It’s not a religious war of SQL versus semantics. It’s about blending tools together. At AstraZeneca, 95% of the IT organization knows nothing about graphs or semantics. There’s no point teaching 95% of the organization to work with SPARQL, SKOS, and the RDF stack. We have a specialist group working with this technology stack, and we work out how to enable the artifacts to be consumed by the rest of the organization in data forms they’re familiar with, using skills they already have.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*rZvpYxcEeLPdWd9ZZMPOOg.jpeg" /><figcaption><a href="https://graphwise.ai/author/ben-gardner/"><strong>Ben Gardner</strong></a>, R&amp;D lead for Data Mesh and Semantic Infrastructure, AstraZeneca</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/astrazeneca-enabling-new-medicines-through-semantic-knowledge-graphs/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8fca7299a9f6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GraphDB 11.3: Safer Backups, Smarter AI Integrations, and a Workbench That Feels at Home in Graphwise]]></title>
            <link>https://graphwise.medium.com/graphdb-11-3-bf8622800695?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/bf8622800695</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[mcps]]></category>
            <category><![CDATA[data-management]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Wed, 11 Mar 2026 10:05:45 GMT</pubDate>
            <atom:updated>2026-03-11T10:05:45.465Z</atom:updated>
            <content:encoded><![CDATA[<h3>GraphDB 11.3: Safer Backups, Smarter AI Integrations, and a Workbench That Feels at Home in Graphwise</h3><h4><strong>The latest GraphDB release introduces safer data management, cutting-edge Model Context Protocol (MCP) support, and a new Python client for easier administration and integration.</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ODrskfC4YFbZCoS6.png" /></figure><p>The transition from “experimental AI” to “enterprise-grade AI” requires more than just a <a href="https://graphwise.ai/fundamentals/what-is-large-language-model/">large language model;</a> it requires a data foundation that is both indestructible and highly accessible. With the release of <a href="https://graphwise.ai/components/graphdb/">GraphDB 11.3</a>, we are doubling down on these requirements by introducing smarter backup safeguards, expanded Model Context Protocol (MCP) support, a new Python client built into RDFLib, and a refreshed interface that aligns with our new identity as Graphwise.</p><p>Here is a closer look at how GraphDB 11.3 helps DevOps teams and AI developers move faster and with greater confidence.</p><h3>Bulletproof data operations: smarter backups</h3><p>For DevOps engineers managing large-scale <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graphs</a>, a backup is only as good as its ability to be restored. Traditionally, verifying the success of a backup in a distributed or cloud environment (like S3) could be a “fingers-crossed” moment.</p><p>GraphDB 11.3 introduces a <strong>success checksum</strong> system to eliminate this uncertainty.</p><ul><li><strong>Automated validation</strong>: Upon completion, GraphDB now tags backups with a checksum and generates a .success file. 
This provides an immediate, machine-readable confirmation that the backup is healthy.</li><li><strong>Corruption guardrails</strong>: The engine now includes “guard code” that prevents the system from even attempting to apply a corrupt or incomplete backup.</li></ul><p>By automating the work of the guard code, GraphDB 11.3 removes the manual overhead of validating data integrity, ensuring that your recovery point objective (RPO) is backed by reality, not just a log entry.</p><h3>Building the future of AI with enhanced MCP support</h3><p>As the industry moves toward agentic AI, <strong>MCP</strong> has emerged as the standard for connecting AI models to data sources. GraphDB 11.3 significantly expands its MCP capabilities, making it easier for developers to use GraphDB as a robust knowledge backend for AI agents.</p><ul><li><strong>Latest protocol support</strong>: This release adds support for multiple MCP versions, including the latest <strong>2025–11–25 standard</strong>.</li><li><strong>Flexible transport modes</strong>: Developers can now choose between legacy SSE and the new <strong>Streamable HTTP</strong> transport. This flexibility allows you to optimize for high-concurrency environments and gracefully handle resource exhaustion.</li><li><strong>Rich metadata</strong>: With support for prompt and tool metadata via <em>spring-ai-mcp</em>, developers gain better tooling, easier debugging, and more granular analytics for their AI-driven applications.</li></ul><p>Whether you are building a simple retrieval system or a complex autonomous agent, GraphDB 11.3 provides the stable, high-performance bridge needed to feed your models the right context.</p><h3>Faster automation with the new GraphDB Python client</h3><p>For many DevOps engineers and developers, Python is the default way to script, automate, and integrate data systems. 
With GraphDB 11.3, working with GraphDB from Python becomes easier and more robust thanks to the new <a href="https://github.com/RDFLib/rdflib/releases/tag/7.6.0"><strong>GraphDB Python client</strong></a><strong>, contributed to the RDFLib library</strong>.</p><p>Instead of writing custom wrappers around the REST API, you can now rely on an officially supported, industry-standard library to:</p><ul><li>Monitor and administer GraphDB instances and clusters</li><li>Manage repositories, access control, authentication, and security</li><li>Import data and integrate GraphDB tasks into your existing Python tooling and CI/CD pipelines</li></ul><p>Because the client is part of RDFLib, it fits naturally into existing Python data and semantic workflows. You can standardize on a single library for RDF handling and GraphDB administration, reducing custom code and making your operational scripts more portable and maintainable. Future releases will extend coverage to even more administrative endpoints, but 11.3 already delivers a fast, resilient way to get real work done with GraphDB using the skills and tools your team already has.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F0LRbvwaRCvjfGY9C9xa5w.png" /></figure><h3>A Fresh look: Graphwise Workbench</h3><p>You may have noticed things look a bit different. Following our rebranding to Graphwise, the GraphDB Workbench has been updated with our new brand colors and design guidelines.</p><p>While the powerful functionality you rely on remains the same, the updated look and feel represents our commitment to a unified, modern ecosystem. 
It’s a cleaner, more intuitive environment designed to help you focus on what matters: the data.</p><h3>Notable performance improvements</h3><p>In addition to the headline features, GraphDB 11.3 includes several requested enhancements:</p><ul><li><strong>Advanced vector search</strong>: You can now create vector fields out of nested object fields, allowing for more nuanced and complex similarity searches within your knowledge graph.</li><li><strong>GraphDB-Ontop integration</strong>: We’ve added support for all Ontop configuration keys, giving you total control over how you virtualize relational databases as RDF.</li></ul><h3>Ready to explore GraphDB?</h3><p>GraphDB 11.3 is designed to make your knowledge graph operations more resilient and your AI integrations more powerful.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*yKw74VdddLzeBI5qMT-JGA.jpeg" /><figcaption><a href="https://graphwise.ai/author/yasen-stoykov/"><strong>Yasen Stoykov</strong></a>, Product Marketing Manager at Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/graphdb-11-3-safer-backups-smarter-ai-integrations-and-a-workbench-that-feels-at-home-in-graphwise/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bf8622800695" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[PoolParty 10.1: AI-Assisted Taxonomy Building for the Graphwise Era]]></title>
            <link>https://graphwise.medium.com/poolparty-10-1-ai-assisted-taxonomy-building-for-the-graphwise-era-db38b4daa0a2?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/db38b4daa0a2</guid>
            <category><![CDATA[taxonomy]]></category>
            <category><![CDATA[knowledge-management]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Fri, 06 Mar 2026 10:27:36 GMT</pubDate>
            <atom:updated>2026-03-06T10:27:36.212Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>PoolParty 10.1 introduces an AI-powered Taxonomy Builder, stronger governance, and cleaner integrations, making it easier and cheaper to build the semantic backbone your AI relies on.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rxRSecDBHZJmxgV6_iCzqQ.png" /></figure><p>PoolParty 10 strengthened the Graphwise platform foundation for enterprise semantic AI. With PoolParty 10.1, the focus shifts from infrastructure to accelerating <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graph</a> building itself.</p><p>This aligns directly with the Graphwise strategy: <strong>activating knowledge, governing AI, and driving ROI</strong>. As enterprises push GenAI beyond pilots, one challenge keeps resurfacing: building high-quality semantic context is still perceived as complex, costly, and expert-heavy. PoolParty 10.1 is designed to lower that barrier.</p><h3>Taxonomy Builder: from blank page to working taxonomy (with AI in the loop)</h3><p><a href="https://graphwise.ai/fundamentals/how-to-build-business-taxonomy-comprehensive-guide/">Building a good taxonomy</a> has long been a significant bottleneck for knowledge graph initiatives. 
It often requires scarce specialists, long workshops, and extensive manual curation, which slows pilots and increases costs.</p><p>The Taxonomy Builder in PoolParty 10.1 is the first step in changing this.</p><p>Instead of creating concepts one by one, users can now:</p><ul><li><strong>Generate a hierarchical taxonomy skeleton</strong> from a domain description, using a top-down, LLM-powered approach to create the initial framework</li><li><strong>Generate labels and definitions</strong> with AI to improve clarity and discoverability</li></ul><p>In practice:</p><ul><li>Taxonomists can describe a domain, generate structure, and refine suggestions</li><li>Everyone stays in control through review and acceptance workflows</li></ul><p>This isn’t a “magic one-click graph creation.” It’s <strong>human-in-the-loop AI</strong> that accelerates expert work, reduces time-to-value, and helps more teams contribute to building the semantic backbone.</p><p>For Graphwise customers, this is a meaningful step toward making semantic backbone systems easier to start, faster to iterate, and stronger as a foundation for reliable AI.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F1166269621%3Fh%3D30f4c567a2%26app_id%3D122963&amp;dntp=1&amp;display_name=Vimeo&amp;url=https%3A%2F%2Fvimeo.com%2F1166269621%2F30f4c567a2&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F2123850123-a9a8d9683b09257d870dda2abd4b8151d4a810d856597ac17e7d39d74c7da5cf-d_1280%3Fregion%3Dus&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/19c7344400d5de5bf82475f4b308938c/href">https://medium.com/media/19c7344400d5de5bf82475f4b308938c/href</a></iframe><h3>Enterprise-ready improvements around the core</h3><p>PoolParty 10.1 also brings targeted enhancements that improve day-to-day experience and set the 
stage for future automation:</p><h4>Custom roles</h4><p>Many large organizations need access models that go beyond rigid default roles. PoolParty 10.1 introduces the groundwork for <strong>capability-driven, more granular role management</strong>, supporting enterprise governance without sacrificing usability.</p><h4>Improved GraphSearch ranking</h4><p>Search is only valuable when results are prioritized meaningfully. PoolParty 10.1 improves ranking logic, allowing users to surface more useful results faster.</p><h4>Extractor API cleanup</h4><p>To simplify integrations and prepare for future improvements, the Extractor API has been restructured to reduce complexity, clarify parameter behavior, and align with cleaner API design principles.</p><h3>What this means for the Graphwise platform</h3><p>PoolParty 10.1 is a clear expression of Graphwise direction:</p><ul><li>Lower the cost and complexity of building semantic backbones</li><li>Pair GenAI acceleration with governance and quality control</li><li>Prepare the path toward broader AI-assisted knowledge modeling beyond taxonomies</li></ul><p>Taxonomies are the first step. As Graphwise continues to expand automation across knowledge creation and governance, PoolParty 10.1 helps ensure that semantic context is no longer a bottleneck, but a scalable advantage for enterprise AI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*yKw74VdddLzeBI5qMT-JGA.jpeg" /><figcaption><a href="https://graphwise.ai/author/yasen-stoykov/"><strong>Yasen Stoykov</strong></a>, Product Marketing Manager at Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/poolparty-10-1-ai-assisted-taxonomy-building-for-the-graphwise-era/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=db38b4daa0a2" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Retrieval to Reasoning: Enhancing HippoRAG with Graph-Based Semantics]]></title>
            <link>https://graphwise.medium.com/from-retrieval-to-reasoning-enhancing-hipporag-with-graph-based-semantics-13df7b4d9018?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/13df7b4d9018</guid>
            <category><![CDATA[graphrag]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[knowledge-graph-rag]]></category>
            <category><![CDATA[hipporag]]></category>
            <category><![CDATA[ontology]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Fri, 27 Feb 2026 10:13:13 GMT</pubDate>
            <atom:updated>2026-02-27T10:13:13.141Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>How an ontology-based knowledge graph boosts the multi-hop Q&amp;A accuracy of one of the leading schemaless GraphRAG systems</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WxCsLAzNKsvDtz3Z.png" /></figure><p>Retrieval Augmented Generation (RAG) has become the standard for grounding <a href="https://graphwise.ai/fundamentals/what-is-large-language-model/">large language models (LLMs)</a> in proprietary data. However, as we push RAG systems into more complex domains, we also see the limitations of such solutions.</p><p>Standard RAG relies heavily on vector similarity search. While efficient, it suffers from a fundamental “tunnel vision”: it retrieves chunks based on semantic proximity to the query, but often fails to “connect the dots” across disjointed pieces of information. It struggles with multi-hop reasoning, where the answer lies in the relationship between two distant documents, not in the documents most similar to the initial question.</p><p>Enter <strong>GraphRAG</strong>, and specifically <a href="https://arxiv.org/abs/2502.14802"><strong>HippoRAG 2</strong></a> (Hippocampal Retrieval-Augmented Generation). These systems attempt to solve this challenge by structuring data as a graph, allowing retrieval based on relationships rather than just surface-level similarity. HippoRAG takes this a step further by mimicking the neurobiological processes of human long-term memory to create additional associative links. This associative mechanism is the major difference from most popular GraphRAG implementations, which derive schemaless <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graphs</a> from documents.</p><p>However, even HippoRAG has a limitation: it typically builds its graph based on simple co-occurrence or open information extraction. It lacks a true “world view”. 
It also still relies on semantic similarity to properly traverse the relations it has found in the data.</p><p>In this post, we explore how to supercharge the graph-based retrieval process by injecting a <strong>semantic backbone</strong>. We will demonstrate how replacing generic graph construction with strict <a href="https://graphwise.ai/fundamentals/what-is-ontology/"><strong>ontologies</strong></a> and structured <strong>knowledge graphs</strong> transforms HippoRAG from an associative engine into a reasoning engine.</p><h3>How does HippoRAG work?</h3><p>To understand how we improved the system, we first need to understand the unique architecture of <strong>HippoRAG</strong>.</p><p>Unlike standard GraphRAG approaches that might simply traverse a knowledge graph, HippoRAG is inspired by the <strong>Hippocampal Indexing Theory</strong> of human memory. In the human brain, the <strong>Neocortex</strong> stores actual memories (analogous to LLM parameters and a document corpus), while the <strong>Hippocampus</strong> acts as a dynamic index, storing the pointers and associative relationships between those memories.</p><p>HippoRAG replicates this duality to enable faster, multi-hop retrieval. It operates in two distinct phases:</p><h4>1. Offline indexing (the associative graph)</h4><p>In the standard implementation, HippoRAG processes the document corpus to create a schemaless graph.</p><ul><li><strong>Extraction</strong>: It uses an LLM (or OpenIE) to identify key noun phrases and entities within the documents.</li><li><strong>Graph construction</strong>: It builds a knowledge graph where nodes are these extracted entities. Edges are created based on co-occurrence (two entities appearing in the same passage) or strong semantic similarity.</li><li><strong>Result</strong>: A massive associative map where Document A is linked to Document B because they share the entity “Python,” even if the documents never explicitly reference each other.</li></ul><h4>2. 
Online retrieval (the neurobiological pattern)</h4><p>When a user asks a query, HippoRAG doesn’t just look for keywords; it simulates a neural activation process.</p><ul><li><strong>Vector search on triples</strong>: The system runs a vector search to find the triples most similar to the query.</li><li><strong>Node activation</strong>: The entities in those triples are located in the graph and selected as the starting points for Personalized PageRank.</li><li><strong>Personalized PageRank</strong>: This is the core mathematical engine. The system uses the <a href="https://snap.stanford.edu/class/cs224w-readings/Brin98Anatomy.pdf">Personalized PageRank algorithm</a> to spread this activation across the graph, in a manner similar to <a href="https://en.wikipedia.org/wiki/Priming_(psychology)">priming</a> in the human brain. By starting from the nodes with the highest vector similarity, it traverses the graph to find the most relevant documents.</li><li><strong>Ranking</strong>: The system identifies which <em>documents</em> are most strongly connected to the highly activated nodes in the graph and retrieves them.</li></ul><p>Because the activation spreads through the graph, HippoRAG can find a document that contains none of the query words, provided it is strongly linked to the query through a chain of intermediate entities. This is the essence of <strong>multi-hop retrieval</strong>.</p><p>However, the “Vanilla” HippoRAG relies on the LLM to hallucinate the connections or extract them loosely. 
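To make the Personalized PageRank step concrete, here is a toy power iteration in pure Python over an invented five-node entity graph. The node names, damping factor, and dangling-node handling are illustrative assumptions, not HippoRAG's actual implementation, which uses an optimized PageRank library over a much larger graph.

```python
# Toy Personalized PageRank by power iteration.
# Nodes stand for extracted entities; edges for co-occurrence links.
# Seed nodes (those most similar to the query) receive all of the
# teleport probability, so activation spreads outward from them.

graph = {  # adjacency list of an invented entity graph
    "query_entity": ["doc_entity_a", "doc_entity_b"],
    "doc_entity_a": ["bridge_entity"],
    "doc_entity_b": [],
    "bridge_entity": ["distant_entity"],
    "distant_entity": [],
}

def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    nodes = list(graph)
    teleport = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:  # dangling node: return its mass to the seeds
                for m in nodes:
                    nxt[m] += damping * rank[n] * teleport[m]
            else:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
        rank = nxt
    return rank

scores = personalized_pagerank(graph, seeds={"query_entity"})
```

Because activation leaks along edges, `distant_entity` ends up with a nonzero score even though it shares no surface text with the query, which is exactly the multi-hop behavior described above.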
By replacing this loose association with a rigorous <strong>ontology</strong> and a <strong>knowledge graph</strong>, we provide the PageRank algorithm with a much cleaner, noise-free highway to travel on.</p><h3>Extending HippoRAG with an ontology and a knowledge graph</h3><p>To transform HippoRAG from an associative engine into a reasoning one, we extended the pipeline by injecting a structured “spine”: an automatically generated ontology and a knowledge graph. This moves us away from loose, probability-based connections to explicit, logic-based relationships.</p><h4>1. Ontology creation</h4><p>The first step is moving from chaos to order. Instead of ingesting raw triples immediately, we iterate over the relations initially extracted by HippoRAG. We employ an LLM agent that acts as a schema architect. At each step, this agent reviews a batch of extracted relations alongside the ontology built so far. Guided by a user-defined goal, the agent infers necessary classes and relations, deciding iteratively how the world should be structured. This ensures the ontology isn’t just a list of words, but a coherent framework tailored to the specific domain.</p><h4>2. Knowledge graph creation</h4><p>Once the ontology provides the scaffolding, we populate the building. A second LLM agent iterates over the documents to extract a strict knowledge graph that conforms to our new ontology. To ensure data integrity, a dedicated “repair agent” follows behind, fixing syntax errors, and ensuring compliance.</p><p>Crucially, we also build an <strong>Inverted Index</strong>. In standard graph systems, you might know <em>that</em> “John knows Jane,” but you lose the context of <em>how</em>. 
The inverted index links every entity and triple in the knowledge graph back to the exact source text.</p><ul><li><strong>Without the index</strong>: The graph stores the triple (John, knows, Jane).</li><li><strong>With the index</strong>: The system can trace the knows relation back to the paragraph describing “John met Jane at a jazz bar in Chicago.” This turns the knowledge graph from a simple fact store into a navigational map that points back to the rich nuance of the original documents.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zftUZTK8UBJo3KK3NGEq8A.png" /><figcaption>Figure 1. Adding semantics to HippoRAG</figcaption></figure><h4>3. Inference</h4><p>We split the inference process into two synergistic retrieval steps:</p><ol><li><strong>Associative Retrieval</strong>: We utilize HippoRAG’s Personalized PageRank to identify documents based on network proximity and activation.</li><li><strong>Structured Retrieval</strong>: Simultaneously, we perform <a href="https://graphwise.ai/fundamentals/what-is-sparql/">SPARQL</a> queries against the knowledge graph. To jumpstart this, we provide the model with a list of relevant triples, helping it anchor its search within the graph.</li></ol><p>The outputs from both streams — the document context from HippoRAG and the structured facts from the knowledge graph — are fed into a final synthesis agent. This agent combines the “associative” and the “logical” to answer the user’s query. A visualization of the inference pipeline can be explored in <em>Figure 2</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dI8vfHg7p3vz6ODqG9JbGw.jpeg" /><figcaption><em>Figure 2. Visualization of the Inference pipeline</em></figcaption></figure><h3>Evaluation and results</h3><p>To test the efficacy of this semantic backbone, we turn to <a href="https://arxiv.org/abs/2108.00573">MuSiQue</a>, widely considered the most complex multi-hop reasoning dataset available. 
MuSiQue requires a system to successfully navigate 2 to 4 distinct “hops” of information to arrive at an answer. For example, a 3-hop question could look something like “In which country was the director of the film ‘The Great Gatsby’ born?”</p><p>We avoided standard “exact match” metrics, which often penalize correct but verbose answers. Instead, we employed an LLM-as-a-judge evaluation protocol that assesses whether the answer is accurate and contains the necessary information, regardless of phrasing.</p><p>The majority of questions in MuSiQue are 2-hop. For that reason, we ran two evaluations: one on 2-hop questions only, and one on equal parts 2-, 3-, and 4-hop questions. For each evaluation, we took a random subset of 100 MuSiQue questions. Manual examination of the questions, however, revealed significant issues with the questions, ranging from incorrect answers to impossible connections. To filter out these unanswerable questions, we ran a system given the exact supporting context three times on each question; if it never produced the expected response, the question was removed. The results presented below are based on the accuracy over the remaining <strong>answerable questions</strong>.</p><h4>Performance on 2-hop questions</h4><p>As seen in the results below, adding an ontology-based knowledge graph yields immediate dividends. <a href="https://graphwise.ai/graphwise-graphrag/">Semantic GraphRAG</a> manages to answer about <strong>95%</strong> of answerable questions, creating a clear separation from standard HippoRAG (86%) and a massive improvement over traditional Vector RAG (79%). The “LLM w/o RAG” category (71%) is also included to demonstrate the ability of LLMs to answer these questions with general world knowledge, something only possible because MuSiQue is a public dataset and therefore part of their training corpora.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AeM6F4ZDat720LLINKoO8w.png" /><figcaption>Figure 3. 
Results on 2-hop questions</figcaption></figure><p>In this experiment, our approach answered, on average, 69 of the 73 answerable questions correctly, giving it a consistent advantage over HippoRAG.</p><h4>Performance on multi-hop questions</h4><p>When testing across the full spectrum of available question complexity, the distinction between approaches grows clearer, and Semantic GraphRAG demonstrates superior resilience. While Vector RAG and HippoRAG plateau near 75%, Semantic GraphRAG extends its lead to <strong>86%</strong>. This suggests that as reasoning chains get longer, the structured “highway” provided by the knowledge graph becomes increasingly critical for maintaining the trail of evidence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9F8eiRP0C04G4JNfkfEqjA.png" /><figcaption><em>Figure 4. Results on equal parts 2, 3, and 4-hop questions</em></figcaption></figure><p>In practice, of the 69 answerable questions here, Vector RAG and HippoRAG answer 51–52 questions on average, while Semantic GraphRAG averages 59.</p><h3>Insights</h3><p>Our deep dive into the MuSiQue dataset revealed a significant noise floor: many questions in the benchmark itself are flawed or ambiguous. In fact, across a variety of question samples, we established that about a third of the questions are not answerable — usually the expected answers are wrong or impossible to derive. 
An example of the former is: “What is the administrative territorial entity that contains the location of Eric Marcus?”. The expected answer in the dataset is: “KUAT-TV 6”, which is a TV station, not an administrative territorial entity. An example of the latter is: “In which country was the tournament held?”, which makes no sense without specifying which tournament the question is about.</p><p>However, benchmarks like MuSiQue fail to capture the true “power” of Semantic GraphRAG implemented via natural language querying (NLQ): <strong>Aggregation</strong>. Standard RAG and GraphRAG systems are retrieval engines, not calculation engines. If you ask, “<em>How many genes are associated with ALS?</em>” or “<em>Which gene is linked to the highest number of Alzheimer’s symptoms?</em>”, standard systems fail. They attempt to retrieve text chunks and rely on the LLM to count. LLMs are notoriously poor at counting, and the required context often exceeds token limits.</p><p>Semantic GraphRAG solves this deterministically. It doesn’t ask the LLM to count. It generates a SPARQL query (for example, SELECT COUNT(?gene)...). The result is a single, perfectly accurate number returned instantly. This ability to switch between “reading” (RAG) and “calculating” (SPARQL) is a paradigm shift for complex data analysis. This capability is referred to as NLQ, where natural language queries are converted into queries against a database engine.</p><h3>Further improvements</h3><p>We have some ideas for further improvements that can go in the following directions:</p><ul><li>The Q&amp;A pairs of MuSiQue are designed to be answered via document chunks, without NLQ. We are developing a new benchmark designed specifically to test these capabilities. 
We aim to move beyond simple retrieval metrics and demonstrate what is possible when you treat your data as a true knowledge graph.</li><li>Semantic GraphRAG can be improved further, using more advanced ontology and <a href="https://graphwise.ai/fundamentals/what-is-entity-linking/">entity linking</a> techniques. One direction would be to experiment with an existing ontology, for example, <a href="http://schema.org/">Schema.org</a> or the ontology of Wikidata. This will save processing time (no need to generate an ontology). It will also make text-to-SPARQL generation faster and cheaper, because there will be no need to pass the ontology in the prompt — all major LLMs already know these popular ontologies.</li><li>Personalized PageRank can be implemented directly in <a href="https://graphwise.ai/components/graphdb/">Graphwise GraphDB</a>, which will remove the need for a separate associative graph stored in a separate engine.</li></ul><h3>Conclusion</h3><p>By fusing the neurobiological inspiration of HippoRAG with the rigid structure of ontologies and semantic knowledge graphs, we have created a system that offers the best of both worlds. We retain the associative, “human-like” memory retrieval of HippoRAG, while injecting the logical precision required for multi-hop reasoning and complex aggregations.</p><p>The results are clear: structure matters. 
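</p><p>Concretely, the aggregation capability described in the Insights section swaps LLM “counting” for an exact query. Below is a minimal sketch of the idea in Python, with hypothetical gene and disease identifiers; in the real system, the equivalent is a generated SPARQL query in the spirit of SELECT (COUNT(?gene) AS ?n), executed by the database engine.</p>

```python
# Deterministic aggregation over a tiny in-memory triple store.
# Instead of retrieving text chunks and asking an LLM to count,
# we filter the triples and count the matches exactly.
# All gene/disease identifiers here are hypothetical examples.
triples = [
    ("gene:SOD1", "associatedWith", "disease:ALS"),
    ("gene:C9orf72", "associatedWith", "disease:ALS"),
    ("gene:APOE", "associatedWith", "disease:Alzheimers"),
]

# Analogue of: SELECT (COUNT(?gene) AS ?n)
#              WHERE { ?gene :associatedWith disease:ALS }
n_als_genes = sum(
    1 for s, p, o in triples
    if p == "associatedWith" and o == "disease:ALS"
)
```

<p>The answer is a single exact number, independent of token limits and of the LLM’s arithmetic.</p><p>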
When you give an LLM a map (the ontology) and structured data (the knowledge graph) rather than just a pile of documents, it doesn’t just retrieve better — it reasons better.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*Elzmrn4ta9cwbjbisjUuGw.jpeg" /><figcaption><a href="https://graphwise.ai/author/aleksis-datseris/"><strong>Aleksis Datseris</strong></a>, AI Engineer, Graphwise Innovation</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*2Gm9XniO-r7aTnakIForRQ.jpeg" /><figcaption><a href="https://graphwise.ai/author/andrey-tagarev/"><strong>Andrey Tagarev</strong></a>, AI &amp; Knowledge Engineer at Graphwise</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*Thm1Duft1aRl4E0rtlG_AQ.jpeg" /><figcaption><a href="https://graphwise.ai/author/atanas-kiryakov/"><strong>Atanas Kiryakov</strong></a>, President of Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/from-retrieval-to-reasoning-enhancing-hipporag-with-graph-based-semantics/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=13df7b4d9018" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Smarter Self-Service — How GraphRAG Boosts ROI in Customer and Employee Support]]></title>
            <link>https://graphwise.medium.com/smarter-self-service-how-graphrag-boosts-roi-in-customer-and-employee-support-df6b9541fea7?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/df6b9541fea7</guid>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[self-service]]></category>
            <category><![CDATA[graphrag]]></category>
            <category><![CDATA[roi]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Fri, 13 Feb 2026 09:18:41 GMT</pubDate>
            <atom:updated>2026-02-13T09:18:41.306Z</atom:updated>
<content:encoded><![CDATA[<p><strong>How GraphRAG, powered by enterprise knowledge graphs, turns generic bots into reliable support assistants that boost ROI across both customer and employee channels.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VpP9DGPGg2LAG3Z7.png" /></figure><p>You may have invested heavily in chatbots, virtual agents, and knowledge portals, but still see deflection rates and satisfaction flat or falling. That frustration is rising on the customer side as well, with Gartner finding that nearly <a href="https://www.gartner.com/en/newsroom/press-releases/2024-07-09-gartner-survey-finds-64-percent-of-customers-would-prefer-that-companies-didnt-use-ai-for-customer-service">64%</a> of customers do not want AI involved in their support experience.</p><p>The problem is not AI itself but its lack of structured context. LLM-powered bots sometimes hallucinate (confidently generate wrong or made‑up information) when they lack the necessary domain knowledge. The result is a poor support experience: vague or wrong answers frustrate users and can even erode trust in your brand.</p><p><a href="https://graphwise.ai/graphwise-graphrag/">GraphRAG</a> addresses this problem by grounding AI assistants in enterprise knowledge graphs, so every answer is rooted in a consistent <a href="https://graphwise.ai/blog/how-the-semantic-layer-enables-reliable-generative-ai-and-business-decisions/">semantic model</a> of your business, your content, and your users. 
When support AI can truly understand how everything is connected, self-service becomes reliable and explainable, and it provides a good return on investment.</p><p>In this article, you will learn how GraphRAG delivers measurable ROI for both customer and employee support.</p><h3>The problem with today’s AI-powered self-service</h3><p>Most AI-powered support tools today are built on thin wrappers around <a href="https://graphwise.ai/fundamentals/what-is-large-language-model/">large language models</a> (LLMs) or on basic retrieval-augmented generation (RAG) systems. They split content into pieces (chunks), turn them into vectors, and store them. When someone asks a question, the system finds similar chunks and prompts an LLM to assemble an answer.</p><p>This pipeline largely ignores the domain’s deep structure, such as product hierarchies, configuration options, regional rules, entitlements, service levels, and more.</p><p>Since there’s no clear way to represent these relationships, the system often treats all similar content as interchangeable. For example, it might show a troubleshooting guide for Version A to a customer using Version B, apply policies from one region to another, or suggest a workaround for an outdated feature to a new customer.</p><p>The system is technically retrieving similar text, but it doesn’t understand the relevant context. The problem worsens when information is scattered across different systems, such as product docs, policies, ticket histories, and community content, all stored separately.</p><p>A study of knowledge workers found that organizations use only about <a href="https://www.starmind.ai/press/global-intelligence-gap-hindering-future-workforce">38%</a> of their available expertise, with much of the rest trapped in isolated systems. 
Without a unified <a href="https://graphwise.ai/fundamentals/what-is-a-semantic-layer/">semantic layer</a>, the AI misses the relationships between customer data, product details, and corporate policies.</p><h3>What is GraphRAG and why it changes self-service</h3><p><a href="https://graphwise.ai/fundamentals/what-is-graph-rag/">GraphRAG</a> is based on the same idea as traditional retrieval-augmented generation, but adds the missing piece: an explicit <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graph</a> of your domain. The graph supplies explicit semantics — a domain model — that an LLM alone lacks.</p><p>Every user query triggers a semantic search of the graph, gathering contextual facts that precisely match the query. The LLM then generates an answer grounded in those facts. GraphRAG adds several key benefits for support:</p><ul><li><strong>Contextual retrieval</strong> — The system issues semantic graph queries that incorporate the user’s context. The AI fetches information that exactly matches the query, not just text that looks similar.</li><li><strong>Precise scoping</strong> — Answers are automatically narrowed to the user’s situation. Recorded metadata — product type, version, subscription tier, region, and entitlements — all serve as filters for the query.</li><li><strong>Trustworthy answers</strong> — Output is grounded in curated graph data, so hallucinations drop considerably. In benchmarks, graph-based RAG models have achieved higher accuracy than traditional vector-based RAG models, greatly reducing the number of wrong answers.</li></ul><h3>The ROI story: From frustrating bots to measurable outcomes</h3><p>Many organizations experience the same trajectory with first-generation chatbots and RAG assistants. 
After their launch, they often see a modest initial increase in deflection and self-service usage, as these systems handle the simplest FAQs.</p><p>But as soon as users bring more context-dependent questions, the limitations become obvious: answers feel generic, inconsistent, or hedged, and escalations continue to flow to human agents. Support teams end up spending more time reworking AI-generated answers, validating outputs in regulated scenarios, and manually updating content to patch retrieval failures.</p><p>GraphRAG breaks through that barrier and solves far more queries correctly on first contact, which directly cuts support costs by up to <a href="https://isg-one.com/articles/ai-cuts-costs-by-30---but-75--of-customers-still-want-humans---here-s-why">30%</a>.</p><p>There are two main ROI levers in play:</p><ul><li><strong>Search efficiency</strong> — Graph-powered retrieval can cut search time in half, while productivity increases by approximately <a href="https://graphwise.ai/use-cases/knowledge-discovery/">10–15%</a> when results are tailored to the user’s context.</li><li><strong>Operational gains</strong> — In customer and employee support scenarios, <a href="https://graphwise.ai/blog/ai-ready-graph-environments-the-key-to-scaling-ai-with-graphwises-knowledge-graphs/">GraphRAG deployments</a> often deliver double‑digit efficiency gains. Organizations report approximately <a href="https://graphwise.ai/use-cases/graph-rag/">15–20%</a> improvement in case resolution, driven by higher first‑contact resolution, fewer escalations to higher levels, and less time spent validating AI answers.</li></ul><h3>Smarter self-service for customers</h3><p>To see what this looks like in practice, consider a typical customer journey with a GraphRAG-powered assistant. 
A user logs in to the support portal and describes their issue in natural language, such as a billing discrepancy or an eligibility question for an upgrade.</p><p>Behind the scenes, the assistant enriches the query with semantic context from the knowledge graph: which products the customer has, which plan and region they are in, and whether there are known issues or policies related to similar patterns.​</p><p>Instead of a generic reply, the assistant pulls the specific policies, troubleshooting flows, and examples relevant to this customer’s exact situation. The response the user sees is a step-by-step answer that references their product, plan, and constraints.</p><p>Where appropriate, the assistant surfaces links to supporting documentation so the user can verify details without wading through irrelevant content.</p><p>Compared to a generic bot, the user does not need to rephrase the question multiple times, navigate between different channels, or repeat context when they escalate to a person. The assistant “remembers” the entities and relationships at play and uses them consistently throughout the conversation. This reduces friction, lowers effort, and builds trust.</p><p>Over time, organizations see higher self-service success rates, better <a href="https://www.ibm.com/think/topics/csat-customer-satisfaction-score">CSAT</a> for digital channels, and a healthier balance between automated and person-assisted support.</p><h3>Smarter self-service for employees (internal support)</h3><p>IT, HR, finance, and compliance teams frequently deal with complex employee inquiries. 
They face challenges when information is scattered and policies are hard to access.</p><p>Internal helpdesks often rely on manual triage and informal knowledge, leading to lengthy email threads as employees struggle to find consistent answers aligned with company policies.</p><p>GraphRAG applies a knowledge-graph approach to internal support, connecting policies, documentation, catalogs, asset inventories, and historical tickets into a semantic layer. It lets employees use natural language to get answers reflecting actual enterprise operations.</p><p>Since every answer can be linked to specific documents, services, and policy artifacts in the graph, internal teams gain both confidence and auditability. New hires ramp up faster because they can self-serve answers instead of relying on a small group of experts. Expert teams see fewer repetitive questions and can focus on genuinely novel or high-risk issues.</p><p>Across departments, policy interpretation becomes more consistent, and sensitive areas like tax or financial reporting can be supported by AI without sacrificing control or traceability.</p><h3>How Graphwise delivers measurable ROI in support</h3><p>Graphwise’s <a href="https://graphwise.ai/bundles/graph-ai-suite/">Graph AI Suite</a> makes GraphRAG work at scale. It combines a semantic graph database (<a href="https://graphwise.ai/components/graphdb/">GraphDB</a>), ontology modeling tools (<a href="https://graphwise.ai/components/graph-modeling/">Graph Modeling</a>), data integration (<a href="https://graphwise.ai/components/graph-automation/">Graph Automation</a>), and the GraphRAG engine into one solution.</p><p>It lets teams import content, build taxonomies, and deploy the assistant without stitching together multiple vendors. 
You get graph modeling, automated tagging, and RAG all in a unified workbench, which speeds rollout and reduces costs.</p><p>Moreover, Graphwise embeds a built-in <a href="https://graphwise.ai/blog/graph-ai-suite-turning-enterprise-data-into-trustworthy-self-improving-ai/">AI Flywheel</a> — every user interaction is a learning opportunity. The system logs the graph entities and documents used to answer questions, flags frequently accessed content for refinement, and quickly identifies knowledge gaps.</p><p>Over time, the graph’s <a href="https://graphwise.ai/fundamentals/what-is-ontology/">ontology</a> and <a href="https://graphwise.ai/fundamentals/what-is-metadata/">metadata</a> improve automatically. In effect, the assistant gets smarter as it’s used and reduces the need for constant manual updates. Support teams see the benefit as recurring questions are answered instantly, while rare edge cases are added to the knowledge graph.</p><p>Real-world deployments show how Graphwise GraphRAG moves from theory to measurable support outcomes:</p><ul><li><a href="https://graphwise.ai/success-story/avalara-transforming-ai-reliability-by-building-knowledge-graph-powered-customer-support/">Avalara</a>, a tax-technology provider, used GraphRAG to overcome a “Precision Paradox” in its initial RAG chatbot. Graphwise converted their DITA documentation schema into an RDF ontology, enabling 100% precise content mapping. The resulting DOM GraphRAG assistant delivers deterministic, fact-based responses, leading to higher customer satisfaction and faster time-to-value.</li><li>Similarly, <a href="https://graphwise.ai/success-story/healthdirect-austalia-from-silos-to-smarter-search-and-self-service-tools/">Healthdirect Australia</a> runs a national telehealth advisory service that aggregates medical content from over 280 partner organizations. Their information was scattered across dozens of websites, systems, and databases. 
Using GraphRAG, Healthdirect built a unified health knowledge graph. Graphwise helped import and semantically tag all partner content, creating a single ontology for medical conditions, providers, services, and regions. This enhanced self-service has significantly “reduced pressure on call centres” and cut routine inquiries.</li></ul><h3>Conclusion</h3><p>Self-service does not fail because users dislike automation or because generative AI is inherently unreliable. It fails when assistants are forced to operate without the context and structure they need to reason about real-world products, policies, and situations. Traditional chatbots and basic RAG systems, built on top of fragmented knowledge and shallow similarity search, inevitably run into this wall.</p><p>GraphRAG, grounded in enterprise knowledge graphs, offers a way through. It enables AI assistants to deliver precise, explainable, and policy-aware answers to both customers and employees at scale.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*KsINTeAku5ld_GhfxQzuyQ.jpeg" /><figcaption><a href="https://graphwise.ai/author/haziqa-sajid/"><strong>Haziqa Sajid</strong></a>, Technical Content Writer</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/smarter-self-service-how-graphrag-boosts-roi-in-customer-and-employee-support/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=df6b9541fea7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Lessons Learned from Building an AI-ready Knowledge Hub]]></title>
            <link>https://graphwise.medium.com/lessons-learned-from-building-an-ai-ready-knowledge-hub-6468691a2910?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/6468691a2910</guid>
            <category><![CDATA[knowledge-hub]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[knowledge-management]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Fri, 06 Feb 2026 09:21:55 GMT</pubDate>
            <atom:updated>2026-02-06T09:21:55.778Z</atom:updated>
<content:encoded><![CDATA[<h3><strong>In this post, you will learn how the Graphwise Sales Enablement team reimagined content to deliver actionable, context-aware knowledge through the Graphwise Knowledge Hub.</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*j59ZRSpDZDQ1os0bXKcyAw.png" /></figure><p>It’s 2026. The Web has grown to 1.2 billion websites and counting. Not only that, but the architectonics of how people interact with content is being widely reshaped. The rise in AI tool adoption and the new discovery channels like AI-powered search and chatbots <a href="https://www.statista.com/topics/13797/online-traffic-worldwide/#topicOverview">have changed how audiences access online information</a>.</p><p>In that landscape, one thing has become impossible to ignore: users don’t need more content. They need better access to relevant knowledge as well as platforms that allow for meaningful, trustworthy, and traceable interaction with information sources. They need actionable results from searches across systems, retrieved quickly and accurately, in the moment of their need, tailored to their specific context.</p><h3>Content for action, not consumption</h3><p>In my case, as a Knowledge Steward at Graphwise, the users I work for are the people on our Sales team. In my role, my goal is very specific: to create content that enables them to do their job better and faster, and to be confident that the information they get is up-to-date, trustworthy, and value-oriented.</p><p>Such a job is not easy, as the test for content is immediate — it either helps move the conversation forward, or it doesn’t. Case in point: when a sales rep is on a call, they don’t have time for browsing and berry-picking information. 
They need specific answers for a specific context, and not documents, blog posts, or webinar summaries.</p><p>That challenges our Sales Enablement team to create and maintain relevant content that can be retrieved fast, is trustworthy, and is accessible in a conversational way. It also calls for a change in the mindset of ideating, planning, and publishing content. What was before a theoretical exploration of how organizational communication is increasingly moving from an organization-controlled model to stakeholder-centric, <a href="https://ideas.repec.org/h/elg/eechap/20979_5.html">less message-controlling participatory communication</a> is now a practical requirement we cannot afford to ignore.</p><p>And so we did not.</p><h3>Walking the AI-ready content talk with the Graphwise Knowledge Hub</h3><p>We, at the Graphwise Sales Enablement team, shifted the way we look at content ideation, creation, and distribution. We stopped thinking about content as artifacts and started thinking about content from the perspective of knowledge management, <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graph</a> building, and enterprise-wide coherence in communication.</p><p>It has now been almost a year since we started building the Graphwise Knowledge Hub, using Graphwise technologies, and strategically walking our own knowledge graph talk. Along the way, we learned a lot. And as I see more and more organizations trying to build their own “knowledge hub” or “single source of truth,” I want to share some of the key lessons we learned building our Graphwise Knowledge Hub.</p><p>Spoiler alert: These are three things people working in content already know but often don’t have the technology, mandate, or change-management support to truly enact.</p><h3>Graphwise Knowledge Hub (At a Glance)</h3><p>The Graphwise Knowledge Hub is an application on top of the Graphwise Knowledge Graph and is built using the Graphwise Platform. 
It is currently powering several internal applications, including a GraphRAG application, several Sales Agents, and semantic search over our technical documentation. The Graphwise Knowledge Hub is a constantly evolving infrastructure that will, with time, power more applications and agents, e.g., a Marketing Assistant on our website, an interactive conversational application with facet filters, a Market Research Agent, and more.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*47TCVV6llV58HOMei7bElA.png" /></figure><h3>Lesson 1: The push-to-pull shift is real and calls for a change</h3><p>You don’t need me to point you to the endless references about conversational content and the need to “talk with” audiences rather than “talk at” them. What is worth repeating, though, is the role that content grounded in knowledge graphs can play in making that talk a real walk along the alley of user-centric content.</p><p>In our case, building a Knowledge Hub for sales enablement meant creating content in a way that allowed sales teams to pull information and to talk to the content we had. To do that, we needed to not only create content in a way that allowed it to be composed ad hoc. We also needed to do it using agreed-upon concepts shared across systems and departments for enterprise-wide coherence in our communication.</p><p>This inevitably led to cross-department alignment around terms that organically grew into messaging work: we needed to figure out not only the terms, but also the business logic through which they relate to each other. 
For example, how <a href="https://graphwise.ai/components/graphdb/">Graphwise GraphDB</a> serves a given use case and which of its capabilities move a specific business needle.</p><p>Only after such diligent work on knowledge management, led by Helmut Nagy (VP Sales Enablement at Graphwise), were we able to start working with and creating content that is truly pullable, modular, and composable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*W0WDn6tjaEEmHR_dEX0zpQ.png" /></figure><h3>Lesson 2: Good content is about building knowledge management scaffolding</h3><p>This is a lesson I have come across in my theoretical explorations and writing work for years as a content person, but only truly learned in practice from Helmut Nagy: how important it is to bring people together, talk, discuss concepts, agree upon terms, messaging, operational synergies, etc.</p><p>Over the past year, the Graphwise Knowledge Hub was built as an AI-driven platform bringing together CRM data, marketing materials, product documentation, website content, market research, and more. But the real transformation happened when it evolved into a shared, knowledge graph-driven platform where people could find what they need based on tags, interests, and topics.</p><p>What we learned along the way is that building a knowledge hub that serves reliable answers and trustworthy content meant doing some grunt work first. Case in point: first, we went through the process of defining core concepts clearly and of aligning and mapping terminology across teams. Next, we put a substantial amount of effort into thoroughly understanding and modelling the relationships between products, industries, and use cases. 
Finally, we built feedback loops so that the knowledge we added to the Graphwise Knowledge Hub would be relevant and well-tied to the ecosystem of knowledge that was growing in the organization.</p><p>This work brought us to workflows in which we ideate and write content at the level of concepts, not pages. We also work in a way that allows us to further assemble content into multiple narratives (for example, for sales, for marketing, for SEO).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SHyxQ13LqvS9Yx9KiXCDgQ.png" /></figure><h3>Lesson 3: AI is only as good as the knowledge you feed it</h3><p>AI is often presented as the solution to enterprise knowledge problems, but we know from customer projects and from experience that the real story isn’t in the AI itself. The differentiator is the actionable content and contextual knowledge an organization manages to provide to the AI system.</p><p>What we didn’t know was that the Knowledge Hub would also become a focal point not only for people, but also for AI agents, as it was by default a hub for aligned meaning in the first place.</p><p>For me, from a content perspective, this means not only that content is to be used to “feed” our AI systems, but also that it is to be created in a way that allows it to be assembled, not simply authored. What we now know is that instead of creating static assets, we can create conceptual building blocks anchored in the knowledge graph.</p><p>Thus, we can get more reliable output from an AI system, depending on who is asking, in what context, and for what purpose. 
To do that, we approached any AI agent or system as a consumer of structured knowledge and as an addition to our knowledge and content work, not a replacement or automation of it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h25MrjLUoepWJo1zhntvGA.png" /></figure><h3>Instead of an epilogue: Join us for our webinar</h3><p>As a Knowledge Steward, my role is not to create more content, nor to police language. Neither is that of my colleagues on the Sales Enablement team. Our work is to design and maintain the scaffolding that allows knowledge to accumulate, connect, and stay usable over time.</p><p>And when we learn our lessons and follow sustainable practices for connecting knowledge, sales enablement accelerates, communication improves, onboarding becomes easier, and content creation becomes faster.</p><p>I really hope that our approach will help you on your own knowledge graph journey towards AI-ready content and enterprise insights.</p><p>If you liked this sneak peek into the content kitchen of the Graphwise Knowledge Hub, come see it in action and hear about its ecosystemic rationale and impact.</p><p>In our on-demand webinar: <a href="https://graphwise.ai/event/from-silos-to-shared-intelligence-inside-the-graphwise-knowledge-hub/">From Silos to Shared Intelligence: Inside the Graphwise Knowledge Hub</a>, Helmut Nagy and I share:</p><ul><li>What we built and why</li><li>The lessons learned from real internal use</li><li>How we evaluated whether the Hub truly accelerates enablement, communication, and content creation</li><li>A live demo of the Knowledge Hub so you can explore it yourself</li></ul><p>Because the future isn’t about creating more content. 
It’s about creating systems that make knowledge actionable — on demand.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/160/1*um2imZM0ZRLlfMlY9ILzFg.png" /><figcaption><a href="https://graphwise.ai/author/petkovat/"><strong>Teodora Petkova</strong></a>, Knowledge Steward at Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/lessons-learned-from-building-an-ai-ready-knowledge-hub/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6468691a2910" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reducing Risk and Rework — How GraphRAG Delivers ROI in Compliance and Legal Workflows]]></title>
            <link>https://graphwise.medium.com/reducing-risk-and-rework-how-graphrag-delivers-roi-in-compliance-and-legal-workflows-d09037cbfa4d?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/d09037cbfa4d</guid>
            <category><![CDATA[compliance]]></category>
            <category><![CDATA[roi]]></category>
            <category><![CDATA[return-of-investment]]></category>
            <category><![CDATA[graphrag]]></category>
            <category><![CDATA[regulatory-compliance]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Thu, 29 Jan 2026 09:00:58 GMT</pubDate>
            <atom:updated>2026-01-29T09:00:58.012Z</atom:updated>
            <content:encoded><![CDATA[<h3>Reducing Risk and Rework — How GraphRAG Delivers ROI in Compliance and Legal Workflows</h3><h4>Generative AI promises speed, but in legal and compliance, hallucinations create unacceptable risks and costly rework. This article explains how GraphRAG bases AI on verifiable facts to ensure accuracy, audit trails, and faster regulatory responses for measurable ROI.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lE0Rhjgq7nNTapsZ.png" /></figure><p>Legal and compliance teams are under pressure to keep up with constant regulatory changes while maintaining absolute accuracy. Meanwhile, growing volumes of fragmented data increase the risk of errors, and one mistake can cost millions.</p><p>Financial institutions like Citigroup face penalties of up to <a href="https://www.reuters.com/business/finance/us-bank-regulators-fine-citi-136-million-failing-address-longstanding-data-2024-07-10/">$136 million</a> for data management failures. Similarly, the Securities and Exchange Commission (SEC) recently charged nine investment advisers and three broker-dealers <a href="https://www.sec.gov/newsroom/press-releases/2025-6">$4 million to $12 million</a> in penalties for recordkeeping failures.</p><p>Many organizations have turned to artificial intelligence (AI) as a solution. Although <a href="https://www.akerman.com/en/perspectives/the-ai-legal-landscape-in-2025-beyond-the-hype.html">79%</a> of law firms have adopted AI tools, only a fraction have genuinely transformed their operations. 
The reason is a lack of trust in AI outputs, since traditional <a href="https://graphwise.ai/fundamentals/what-is-large-language-model/">large language models (LLMs)</a> fabricated information up to <a href="https://graphwise.ai/use-cases/graph-rag/">45%</a> of the time in legal contexts.</p><p>LLMs create compliance risks and force teams to spend excessive time verifying AI-generated insights rather than using them with confidence.</p><p>This article explores how <a href="https://graphwise.ai/fundamentals/what-is-graph-rag/">graph retrieval-augmented generation (GraphRAG)</a> addresses these challenges through semantic grounding. It also discusses how GraphRAG delivers measurable ROI by transforming how legal and compliance teams approach their data.</p><h3>Understanding the hallucination crisis in legal AI</h3><p>We must first assess the scale of the hallucination problem to understand why compliance teams hesitate to fully embrace generative AI. In high-stakes environments, “mostly accurate” is never accurate enough.</p><h3>The scale of the problem</h3><p>Error rates in specialized fields remain dangerously high despite the hype surrounding generative AI. Even the best-performing legal AI products hallucinate in about <a href="https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf">one out of six instances</a>. When answering very specific legal queries, hallucination rates for language models can range from <a href="https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive">69% to 88%</a>.</p><p>AI is also being used in healthcare, where hallucinations appear as well. GPT-4 hallucination rates are <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11153973/">28.6%</a> in medical systematic reviews, with precision as low as 13.4%. 
The weakness of standard LLMs is their probabilistic nature: they predict the next likely word rather than verify facts.</p><p>We are already seeing real consequences of AI-generated fabricated case law appearing in legal briefs and judicial opinions. For example, Deloitte agreed to pay money back to the Albanese government after using generative AI to produce a <a href="https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report">$440,000 report</a> on the welfare system. The report was riddled with errors, false references, and incorrect footnotes.</p><p>This incident highlights that even major consultancies are not immune to the risks of ungrounded AI. When an organization relies on a system that makes up facts, it pays twice: once for the technology and again for the cleanup.</p><h3>Why hallucinations are especially dangerous in compliance</h3><p>Hallucination may be a quirky error in creative writing or marketing. But it’s a liability in compliance, where factual accuracy is non-negotiable and errors can lead to legal consequences. Professional liability also becomes personal: the <a href="https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/">Model Rules of Professional Conduct</a> place responsibility on legal professionals to supervise AI-generated work.</p><p>Failing to put proper verification processes in place can trigger disciplinary action, sanctions, and loss of licensure, plus long-term reputational damage.</p><h3>The manual verification burden</h3><p>The manual verification burden exacerbates the problem. Legal professionals spend <a href="https://kroolo.com/blog/legal-document-summarization-with-ai">60–80%</a> of their time verifying AI outputs rather than focusing on strategic analysis. 
Document review processes that might take days or weeks with traditional manual approaches remain time-consuming even with AI assistance because every output must be checked for accuracy.</p><p>This inflates expenses in an hourly billing model: what should be efficient becomes a costly loop of checks and corrections. Without better grounding, AI amplifies the workload rather than alleviating it.</p><h3>What makes GraphRAG different: The semantic grounding advantage</h3><p>The solution to hallucination lies in changing how AI retrieves and processes information. This is where <a href="https://graphwise.ai/use-cases/graph-rag/">GraphRAG</a> distinguishes itself from standard retrieval methods. GraphRAG works on <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graphs</a>: structured representations where entities, regulations, policies, and clauses are explicitly connected through defined relationships.</p><h3>From probabilistic guessing to structured knowledge</h3><p>Standard RAG systems use vector-based retrieval that breaks documents into text chunks. An LLM receives the retrieved text chunks as context to synthesize information, then generates a coherent output.</p><p>Yet this process relies entirely on the model’s learned patterns of what language should look like, that is, on probability. If the model has learned patterns that associate certain phrases or concepts, it may generate outputs that match those patterns even if no explicit connection exists in the source documents.</p><p>GraphRAG moves from plain text chunks to structured entity information. It includes a graph database, <a href="https://graphwise.ai/components/graphdb/">Graphwise GraphDB</a>, as a source of contextual information sent to the LLM. An LLM generates the final text, but the verified graph structure constrains its generation. The model cannot hallucinate relationships that do not exist in the graph structure. 
It can only draw conclusions from explicitly mapped connections.</p><p>Consider a compliance scenario: a legal professional asks whether a specific data processing practice violates GDPR requirements. The LLM in a traditional RAG system retrieves GDPR documentation and information about the data processing practice. It then produces an answer based on patterns it has learned from legal reasoning.</p><p>The response may sound authoritative and even cite the retrieved documents. However, the real reasoning still happens inside the model’s probabilistic inference process. In GraphRAG, the system traverses the knowledge graph from the data processing practice node to related policy, regulation, and enforcement precedent nodes.</p><p>The relationships between these entities are clearly defined, capturing whether a practice falls under a specific GDPR article, whether it has prompted enforcement actions, and which safeguards meet the requirements. The answer comes from these defined connections rather than from general linguistic patterns.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GkbOMt6vGQd_9tdg34lq6A.png" /><figcaption>Graphwise platform overview</figcaption></figure><h3>Explainability and audit trails</h3><p>GraphRAG builds trust in compliance through auditability. Every answer traces back through knowledge graph relationships and provides transparent reasoning. Compliance professionals can see exactly which regulations, precedents, and policies supported a particular conclusion.</p><p>When auditors ask how the organization reached a compliance determination, the answer includes source citations and the explicit relationship path from the query through the graph to supporting evidence.</p><p>This transparency addresses professional liability concerns. 
Compliance professionals can point to a documented, verifiable reasoning chain — here’s the regulation, here’s our control — and satisfy the Model Rules of Professional Conduct requirement to supervise AI-generated work.</p><h3>The accuracy improvement</h3><p>The impact of semantic grounding is measurable: it reduces hallucinations to single-digit percentages, compared to the 45% baseline for ungrounded LLMs. In enterprise implementations, organizations report moving from <a href="https://www.rws.com/content-management/blog/how-graphwise-is-transforming-the-graph-ai-landscape/">60%</a> accuracy in traditional RAG deployments to over 90% accuracy with GraphRAG.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*itzsgA73qvq2qPBH2g0dtQ.png" /><figcaption>% of correct answers achieved by LLM vs. VectorRAG vs. GraphRAG</figcaption></figure><h3>The ROI case: Measurable impact across compliance workflows</h3><p>The improved accuracy and explainability of GraphRAG yield ROI across time efficiency, cost reduction, risk mitigation, speed, and decision quality.</p><h3>Dramatic time savings</h3><p>The most immediate ROI comes from reclaiming lost hours. AI-powered legal document review, when grounded by GraphRAG, reduces processing time by <a href="https://kroolo.com/blog/legal-document-summarization-with-ai">60–80%</a>. Legal teams can handle vastly larger volumes of data without increasing headcount.</p><p>Policy interpretation and management also accelerate dramatically. Manual compliance work requires teams to collect relevant documents, process physical and digital files by converting them to searchable formats, and conduct reviews for relevance and regulatory requirements.</p><p>GraphRAG-powered policy management eliminates these time-consuming manual processes. It redirects resources to strategic analysis of compliance gaps and control optimization. 
Furthermore, document automation saves up to <a href="https://www.gavel.io/resources/study-legal-automation-saves-90-percent-drafting-time">90%</a> of the time spent on legal document creation.</p><h3>Cost reduction through reduced manual review</h3><p>The reduction in manual review effort through semantic grounding and accurate AI outputs considerably lowers operational expenses. By eliminating the expensive iterative verification cycles caused by hallucinated content, firms stop paying high-value professionals to act as spellcheckers for their AI.</p><p>Audit preparation costs also drop. Automated systems effortlessly maintain detailed compliance records and remove the costly “audit scramble” common in compliance departments.</p><h3>Risk mitigation and audit cost savings</h3><p>Beyond cost reduction, GraphRAG lowers professional liability exposure by providing accurate, explainable outputs that reduce malpractice risk. It also drives regulatory compliance improvements: real-time monitoring, rather than quarterly reviews, prevents violations before they occur.</p><p>Shifting from reactive to proactive management reduces penalties, as firms can identify compliance gaps ahead of audits rather than discovering them during one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wuu3ueWYxM2yb2d0jDWcNA.png" /><figcaption>Value drivers for a knowledge graph platform in data and AI</figcaption></figure><h3>Faster compliance cycles</h3><p>Speed is a competitive advantage in regulatory adherence, and GraphRAG lets you move at that speed without sacrificing accuracy. You can integrate regulatory updates into the knowledge graph within hours to ensure the AI always checks the latest requirements, far faster than traditional, manual update processes.</p><p>At the same time, continuous assurance and monitoring replace time-consuming quarterly reviews. 
They automatically surface compliance gaps as soon as they arise and reduce the risk of surprises during audits.</p><p>You can use knowledge graphs to analyze the impact before rolling out policy changes, preventing regressions and ensuring controls remain effective. This integrated approach accelerates regulatory response and turns agility into a source of confidence and reduced audit risk.</p><h3>Improved decision quality</h3><p>GraphRAG delivers substantially improved decision quality by enabling context-rich insights from knowledge graphs that provide a complete view of the regulatory space. It also allows gap detection and early identification of requirements, even when controls are not mapped or evidence is missing.</p><p>GraphRAG also supports shared control optimization to help teams identify controls that satisfy multiple frameworks in parallel — <a href="https://graphwise.ai/trust-center/">ISO 27001, SOC 2, PCI DSS, and GDPR</a>.</p><h3>Build trust and ROI with Graphwise</h3><p>Building AI systems in legal and compliance that deliver trustworthy, accurate, and explainable results requires both expertise and capital. This is where Graphwise stands apart: its purpose-built Graph AI platform offers a robust suite of capabilities tailored for trust and efficiency.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S1ba5-H2xAXoUsKvevL8ow.png" /><figcaption>Graphwise platform overview</figcaption></figure><p>Key features of the Graphwise platform include:</p><h3>GraphDB</h3><p><a href="https://graphwise.ai/components/graphdb/">GraphDB</a> is Graphwise’s scalable and highly reliable semantic graph database. It stores and manages your compliance knowledge in a structured, interconnected format. 
GraphDB forms the single source of truth to enable semantic grounding that dramatically reduces hallucinations and supports precise fact-based reasoning.</p><p>GraphDB introduces streamlined integration with leading large language models (LLMs) and supports natural language search (<a href="https://graphdb.ontotext.com/documentation/11.1/talk-to-graph.html">Talk to Your Graph</a>) for fast, precise AI-driven decision-making.</p><h3>Knowledge graph</h3><p>Graphwise helps organizations convert vast legal texts, regulatory documents, and internal policies into rich, semantically linked <a href="https://graphwise.ai/bundles/data-management-suite/">knowledge graphs</a>. These graphs capture entities and regulations, define relationships and context, and enable LLMs to perform multi-hop reasoning for complex compliance queries.</p><h3>Graph AI Suite</h3><p>The <a href="https://graphwise.ai/bundles/graph-ai-suite/">Graph AI Suite</a> brings modeling tools, knowledge graph management, advanced connectors, and a <a href="https://modelcontextprotocol.io/docs/getting-started/intro">model context protocol (MCP) server</a> under one roof for smooth integration with third-party AI solutions. It lowers complexity and eases end-user adoption for building customized AI applications.</p><p>Moreover, the suite models and maps raw documents into structured knowledge graphs, automates taxonomy creation, and enriches metadata through semantic analysis. It also delivers GraphRAG, which uses semantic graphs to boost LLM accuracy in legal and compliance workflows.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S3xMFCWOfdiA3kZcdo1Qew.png" /><figcaption>The Graphwise platform</figcaption></figure><p>Through these and many other tools, Graphwise helps enterprises reduce risk and rework, resulting in strong business returns. 
For instance, <a href="https://graphwise.ai/success-story/avalara-transforming-ai-reliability-by-building-knowledge-graph-powered-customer-support/">Avalara’s vector-based RAG model lacked the accuracy</a> needed for mission-critical tax applications. This resulted in a “Precision Paradox”: even as accuracy improved, the remaining errors still led to user dissatisfaction.</p><p>Avalara implemented a <a href="https://www.poolparty.biz/resources/dom-graph-rag/">DOM GraphRAG</a> proof-of-concept model using Graphwise’s Graph AI Suite to leverage their existing DITA-structured content. The result was a reliable knowledge graph foundation for trustworthy AI solutions.</p><p>Similarly, <a href="https://graphwise.ai/success-story/healthdirect-austalia-from-silos-to-smarter-search-and-self-service-tools/">Healthdirect’s fragmented content</a> from hundreds of partners and siloed data systems made it difficult to deliver unified, user-friendly health services at scale. To address this, they used Graphwise’s Graph AI Suite to build a semantic knowledge graph that automated content classification, enabled dynamic content generation, and powered smart search.</p><h3>Wrapping up</h3><p>GraphRAG resolves AI’s legal pitfalls, grounding outputs for reliability and ROI. It helps teams save time, reduce costs, lower risk, and make better decisions.</p><p>With Graphwise, you gain a proven partner. 
The company has helped global financial institutions, pharmaceutical companies, and professional services firms transform scattered regulations and policies into a single, trustworthy source of truth.</p><p>Teams finally spend their time on strategy rather than second-guessing AI, auditors receive transparent trails on demand, and organizations stay ahead of regulatory change rather than react to it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*KsINTeAku5ld_GhfxQzuyQ.jpeg" /><figcaption><a href="https://graphwise.ai/author/haziqa-sajid/"><strong>Haziqa Sajid</strong></a>, Technical Content Writer</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/reducing-risk-and-rework-how-graphrag-delivers-roi-in-compliance-and-legal-workflows/"><em>https://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d09037cbfa4d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing GraphRAG: The Trust Layer of the Graphwise Platform]]></title>
            <link>https://graphwise.medium.com/introducing-graphrag-the-trust-layer-of-the-graphwise-platform-815d47f87609?source=rss-39e3a6a41a63------2</link>
            <guid isPermaLink="false">https://medium.com/p/815d47f87609</guid>
            <category><![CDATA[graphrag]]></category>
            <category><![CDATA[semantic-layer]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[llm]]></category>
            <dc:creator><![CDATA[Graphwise]]></dc:creator>
            <pubDate>Mon, 26 Jan 2026 11:04:10 GMT</pubDate>
            <atom:updated>2026-01-26T11:04:10.137Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>GraphRAG introduces a new trust layer for generative AI in the Graphwise platform, making enterprise answers explainable, auditable, and context-rich.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ytan4ZKEiMF2-gl2.png" /></figure><p>For many organizations, the biggest barrier to generative AI isn’t the model, it’s trust. Teams struggle to justify AI-driven decisions when they can’t see where an answer came from, how it was produced, or whether it respects internal policies and regulations.</p><p>With <a href="https://graphwise.ai/use-cases/graph-rag/">GraphRAG</a>, we’re introducing the intelligence layer of the Graphwise platform: a governed, graph-powered retrieval-augmented generation engine designed specifically for enterprises that need explainability, compliance, and scale.</p><p>GraphRAG combines <a href="https://graphwise.ai/fundamentals/what-is-large-language-model/">Large Language Models (LLMs)</a> with a <a href="https://graphwise.ai/fundamentals/what-is-a-semantic-layer/">Semantic Layer</a> built on <a href="https://graphwise.ai/fundamentals/what-is-a-knowledge-graph/">knowledge graphs</a> to deliver context-rich, verifiable, and audit-ready answers. It’s where your institutional knowledge, graph data, and AI workflows meet.</p><h3>What is GraphRAG?</h3><p><a href="https://graphwise.ai/fundamentals/what-is-graph-rag/">GraphRAG</a> is the core GenAI engine in the Graphwise platform. It goes beyond traditional vector-only RAG by unifying:</p><ul><li><strong>Semantic reasoning</strong> over your knowledge graph</li><li><strong>Hybrid retrieval</strong> across graph, vector, and full-text search</li><li><strong>Multi-hop question answering</strong> for complex, relational queries</li></ul><p>Where standard RAG stops at similarity search, GraphRAG understands entities, relationships, and semantics. 
It transforms fragmented documents, taxonomies, and domain models into actionable answers that can be traced, explained, and audited.</p><blockquote>Our vision is simple:<br>To be the enterprise-standard engine for trusted generative AI, turning complex institutional knowledge into reliable, multi-hop answers for mission-critical decisions.</blockquote><h3>Why enterprises need a trusted GenAI layer</h3><p>Most enterprise AI teams face the same challenges:</p><ul><li><strong>Opaque pipelines</strong> — It is hard to see how an answer was produced or which sources were used</li><li><strong>Hallucinations and inconsistency</strong> — Models improvise when context is missing or ambiguous</li><li><strong>Regulatory pressure</strong> — This applies especially in finance, healthcare, ESG, and the public sector</li><li><strong>Siloed knowledge</strong> — Structured and unstructured data live in different systems with no shared semantic layer</li></ul><p>GraphRAG addresses these challenges by design:</p><ul><li><strong>Transparent retrieval and reasoning</strong> — Each step of the pipeline can be inspected</li><li><strong>Source-level provenance</strong> — Answers are backed by explicit documents and graph entities</li><li><strong>Semantic grounding</strong> — The knowledge graph acts as a source of truth, reducing hallucinations</li><li><strong>Regulatory-friendly traceability</strong> — It is built for environments where you must justify <em>why</em> an answer is correct, not just <em>what</em> it is</li></ul><h3>What makes GraphRAG different</h3><p>GraphRAG builds on years of experience with <a href="https://graphwise.ai/components/graphdb/">GraphDB</a>, PoolParty, and semantic technologies in regulated industries. 
At its core, the product is shaped around three pillars.</p><h3>Trust and explainability</h3><p>GraphRAG turns the traditional “black-box” RAG pipeline into a transparent, auditable system:</p><ul><li><strong>Explainable answers</strong> — Users can see <em>what</em> was retrieved and <em>why</em> it was used</li><li><strong>Source citations and provenance</strong> — Answers are linked back to specific documents and graph entities</li><li><strong>“Explain this answer” views</strong> — This breaks down how the system interpreted the question, expanded concepts, and selected context</li><li><strong>Regulatory-ready traceability</strong> — Retrieval, reasoning, and guardrails can be inspected for audits and reviews</li></ul><p>This is essential for teams working under strict oversight, where AI output must withstand internal and external scrutiny.</p><h3>Hybrid retrieval and knowledge-graph grounding</h3><p>GraphRAG isn’t tied to a single retrieval method. It combines:</p><ul><li><strong>Graph-based retrieval</strong> via the knowledge graph (GraphDB and ontologies)</li><li><strong>Vector search</strong> in your chosen vector store</li><li><strong>Full-text search (FTS)</strong> for keyword-driven discovery</li></ul><p>On top of that, GraphRAG uses <strong>knowledge-model-driven input processing</strong> to understand user intent. 
This means that it:</p><ul><li>Detects and enriches concepts from your taxonomy/ontology</li><li>Expands queries with related entities and terms</li><li>Builds a graph representation of the question to resolve ambiguity</li></ul><p>This makes a big difference for real-world questions that are short, vague, or rely heavily on domain-specific language.</p><p>Because the system retrieves from a graph of entities and relationships — not just isolated chunks — GraphRAG is particularly strong on <strong>multi-hop questions</strong> (“how does X impact Y across Z?”) and complex context.</p><h3>Enterprise flexibility and vendor-agnostic design</h3><p>GraphRAG is engineered to fit into existing enterprise stacks instead of locking you into one vendor:</p><ul><li><strong>Any major LLM</strong> — OpenAI, Claude, Azure AI, Amazon Bedrock, or your own hosted models</li><li><strong>Any vector store</strong> — OpenSearch, Elastic, and other enterprise-grade vector backends</li><li><strong>Any enterprise IdP</strong> — Integration with standard identity providers via OAuth2/OIDC</li></ul><p>Embedding models and vector stores are abstracted behind clear interfaces, so you can switch providers, update models, or scale infrastructure without rewriting your application.</p><h3>GraphRAG core capabilities</h3><p><strong>Version 1.0</strong> focuses on delivering a solid foundation for trusted conversational experiences over your knowledge graph and content.</p><ul><li><strong>Secure, authenticated access</strong></li><li>Keycloak-based authentication and authorization</li><li>Separation of users, services, and GraphRAG pipelines</li><li><strong>Conversational experience with short-term memory</strong></li><li>Multi-turn chat with context preserved within each conversation</li><li>AI-generated follow-up questions to help users go deeper</li><li>Concept descriptions based on your SKOS-style taxonomies</li><li><strong>Explainability and provenance out of the box</strong></li><li>A 
dedicated explainability panel that shows what tools were called and what they returned</li><li>Document panel with source URL, title, and description</li><li>Basic provenance listing which sources contributed to the answer</li><li><strong>Hybrid retrieval foundation</strong></li><li>Integration with GraphDB and external vector stores like Elastic/OpenSearch</li><li>Support for combining structured graph context with unstructured documents</li><li><strong>Enterprise-grade guardrails</strong></li><li>Input and output guardrailing integrated in the workflow</li><li>Safety and policy checks wired directly into the agent orchestration</li><li><strong>Modern APIs and streaming</strong></li><li>Synchronous and asynchronous querying</li><li>Server-sent events (SSE) for streaming answers, explainability messages, sources, follow-ups, concepts, and guardrail signals</li></ul><blockquote>In short: GraphRAG v1 gives you a governed, explainable conversational layer over your knowledge graph and content, ready to be embedded into applications, portals, and internal tools.</blockquote><h3>Tested on real-world, high-stakes scenarios</h3><p>To shape GraphRAG, we evaluated the product on a diverse set of demanding scenarios, including:</p><ul><li><strong>Financial regulation and monetary policy</strong> — Policy documents, regulatory texts, interconnected financial concepts</li><li><strong>Healthcare and clinical knowledge</strong> — Clinical pathways, medical guidelines, interactions, and care protocols</li><li><strong>International development and ESG</strong> — Project documents, sustainability reports, regulatory frameworks, and impact narratives</li><li><strong>Semantic technology and internal knowledge hubs</strong> — Deeply structured ontologies and knowledge graphs connected to technical documentation</li></ul><p>These use cases push the system across multiple dimensions: precision, reasoning depth, traceability, and robustness in domain-specific language. 
They’re exactly the environments where “just another chatbot” isn’t enough.</p><h3>Getting started with GraphRAG</h3><p>GraphRAG is available as a core component of the Graphwise platform and integrates natively with:</p><ul><li><strong>GraphDB</strong> for semantic graph storage and reasoning</li><li><strong>PoolParty</strong> and your knowledge models (taxonomies, ontologies)</li><li><strong>Your existing vector infrastructure and LLM providers</strong></li></ul><p>If you’re building or scaling enterprise AI initiatives and need answers that are <strong>explainable, compliant, and grounded in your knowledge graph</strong>, GraphRAG is designed for you.</p><p>We’d be happy to walk you through the first release, discuss your data landscape, and explore how GraphRAG can become the trust layer for your generative AI applications.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/1*yKw74VdddLzeBI5qMT-JGA.jpeg" /><figcaption><a href="https://graphwise.ai/author/yasen-stoykov/"><strong>Yasen Stoykov</strong></a>, Product Marketing Manager at Graphwise</figcaption></figure><p><em>Originally published at </em><a href="https://graphwise.ai/blog/introducing-graphrag-the-trust-layer-of-the-graphwise-platform/"><em>http://graphwise.ai</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=815d47f87609" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>