<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[ShellKode Blog - Medium]]></title>
        <description><![CDATA[Official tech blog from ShellKode, where we publish posts about our engineering team’s awesome work on AWS, GCP, Data, and open-source tools. - Medium]]></description>
        <link>https://blog.shellkode.com?source=rss----1137b34251a7---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>ShellKode Blog - Medium</title>
            <link>https://blog.shellkode.com?source=rss----1137b34251a7---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 05:24:45 GMT</lastBuildDate>
        <atom:link href="https://blog.shellkode.com/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Rise of the Agentic Enterprise: What Every CTO Must Know About AI Agents in 2026]]></title>
            <link>https://blog.shellkode.com/the-rise-of-the-agentic-enterprise-what-every-cto-must-know-about-ai-agents-in-2026-9ab90c9a537f?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/9ab90c9a537f</guid>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[enterprise-architecture]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-agent]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Mon, 13 Apr 2026 18:27:01 GMT</pubDate>
            <atom:updated>2026-04-13T18:43:43.493Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Wv1PjcDv7T3FOOH-xcCUMw.png" /></figure><h3>We have arrived at the Hard Question</h3><p>There is a moment in every major technology transition when the question shifts from “Should we?” to “How fast can we?”, and then, almost immediately, to “Why isn’t this working the way we expected?”</p><p>Agentic AI has arrived at that third question faster than any enterprise technology in recent memory.</p><p>This isn’t a piece about whether agentic AI matters. That question is settled. It’s about what it actually takes to make it work inside a real enterprise, and what separates organizations building genuine competitive advantage from those accumulating expensive technical debt dressed up as innovation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T4eG7iH-fKrcAI1nv9NiCQ.png" /></figure><p>But here is what those numbers don’t tell you.</p><blockquote><strong>35% of enterprises had already adopted AI Agents by 2023, yet the majority of those deployments never reached production.</strong></blockquote><p>The adoption curve is steep. The success curve is not.</p><p>The gap between those two lines is where most organizations are right now, and understanding why it exists is the most important strategic work a CTO can do in 2026.</p><h3>7 Trends Every CTO Needs to Understand Right Now</h3><blockquote>Agentic AI is not one shift. It is seven happening simultaneously across architecture, workforce, infrastructure, cost, industry adoption, security, and control. Miss one and the others will surprise you.</blockquote><h4>Trend 1. Multi-Agent Systems Are the New Default Architecture</h4><p>The question is no longer whether to deploy an agent. It is how to coordinate ten of them. Networks of specialized agents collaborating, delegating, and handing off context are becoming the standard deployment model.</p><p>The orchestration layer is now a core architectural decision, not a detail to figure out later.</p><h4>Trend 2. The “Silicon Workforce” Is Being Managed</h4><p>Agents are increasingly treated as collaborators, not tools. That changes everything: how teams are structured, who owns accountability, and how performance gets measured.</p><p>The organizations moving fastest are not just deploying agents. They are redesigning the work itself: identifying which decisions require human judgement, which can be delegated to agents, and how handoffs between the two are governed.</p><h4>Trend 3. Purpose-Built Infrastructure Is Replacing General Stacks</h4><p>Your stack was built for humans making decisions at human speed. Agents operate in real time, across systems, continuously. That gap, between what agents need and what most enterprise architectures provide, is where the majority of pilots quietly stall. Not because the model failed. Because the infrastructure was never ready.</p><p>Agentic workloads demand infrastructure that can operate in real time, across systems, and at machine speed, including:</p><ul><li>Real-time data access across enterprise platforms</li><li>Modular, API-first architectures that agents can interact with dynamically</li><li>Secure identity and access management for autonomous systems</li><li>Agent-compatible data pipelines capable of continuous decision loops</li></ul><h4>Trend 4. The Board Is Now Asking About ROI, And CTOs Need Answers</h4><p>AI budgets are under scrutiny. Deployment counts are not a business outcome. 
Organizations without clear cost governance frameworks for agentic workloads are accumulating spend without visibility, and boards are starting to ask hard questions that require better answers than “we’re building capability.”</p><p>CTOs who can articulate cost-per-outcome, error rates, and human oversight ratios are winning budget conversations. Those who cannot are losing them, or worse, discovering their AI programs are being quietly deprioritized.</p><h4>Trend 5. Vertical Industry Adoption Is Accelerating</h4><p>Agentic AI is rapidly moving beyond generic enterprise IT experimentation and into industry-specific operational deployments. The relevant benchmark is no longer the general enterprise AI market; it is how organizations within your industry are operationalizing agentic systems:</p><ul><li>Healthcare: reducing administrative burden and improving care coordination</li><li>Financial services: real-time compliance monitoring and fraud detection</li><li>Legal operations: document review, case analysis, and contract intelligence</li><li>Supply chains: logistics planning, inventory optimization, and demand forecasting</li></ul><p>Organizations in these verticals are embedding AI agents into domain-specific workflows, not running isolated experiments. The gap between early adopters and late movers is already measurable in operational efficiency and cost structure.</p><h4>Trend 6. Agent Identity Is the Security Gap Nobody Is Talking About</h4><blockquote><strong>95% of enterprises have no identity protections for their autonomous agents, exposing transactions, not just content.</strong></blockquote><p>Agents do not generate content; they execute transactions. Without verifiable identity, scoped permissions, and audit trails, you have autonomous systems with unchecked access running inside your enterprise. That is not a future risk. It is a current one.</p><p>The security frameworks most organizations rely on were designed for human users and static applications. Neither model accounts for autonomous agents that act continuously across multiple systems, without direct human oversight at the point of execution.</p><h4>Trend 7. Governance Is Finally Catching Up, But Not Fast Enough</h4><p>Organizations are beginning to establish frameworks designed to address risks unique to autonomous systems, including:</p><ul><li>Autonomous decision errors that directly affect business operations</li><li>Prompt injection attacks that manipulate agent behavior</li><li>Credential misuse or unauthorized system access</li><li>Shadow AI deployments operating outside official governance structures</li></ul><p>The risks are operational, not theoretical, and regulatory scrutiny is accelerating.</p><p>Governance is no longer a compliance checkbox. It is the mechanism that keeps you in control of systems capable of acting independently, at machine speed, across your most sensitive infrastructure.</p><h3>Conclusion</h3><p>The organizations that will define the next decade of enterprise technology are not the ones that deployed the most agents. They are the ones who deployed them thoughtfully, with the infrastructure to support them, the governance to control them, and the clarity to measure what they actually deliver.</p><p>Agentic AI is not a pilot project anymore. It is an operational reality, and the gap between organizations that treat it that way and those still running experiments is widening every quarter.</p><p>For CTOs, the mandate is clear: move past deployment metrics and into operational maturity. 
That means:</p><ul><li>Investing in orchestration architecture before you scale agent networks.</li><li>Building identity and access frameworks before you expose sensitive systems.</li><li>Establishing cost governance before your board builds it for you.</li></ul><p>The enterprises winning with agentic AI share one characteristic: they stopped asking whether AI agents could do the work, and started engineering the conditions under which agents could do the work <em>well</em>, reliably, securely, and at scale.</p><p>The transition from <strong>“Should we?”</strong> to <strong>“How fast?”</strong> has already happened. The organizations asking “How do we make this work?” right now are the ones who will be setting the benchmarks everyone else chases in 2027.</p><p><strong>The question is whether you are building toward that position or watching others do it.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9ab90c9a537f" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/the-rise-of-the-agentic-enterprise-what-every-cto-must-know-about-ai-agents-in-2026-9ab90c9a537f">The Rise of the Agentic Enterprise: What Every CTO Must Know About AI Agents in 2026</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Reliable, Enterprise Data Agent with Snowflake Intelligence]]></title>
            <link>https://blog.shellkode.com/building-a-reliable-enterprise-data-agent-with-snowflake-intelligence-6e31e6892c5c?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/6e31e6892c5c</guid>
            <category><![CDATA[snowflake-intelligence]]></category>
            <category><![CDATA[snowflake-analytics]]></category>
            <category><![CDATA[snowflake]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[snowflake-cortex]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Mon, 08 Dec 2025 17:02:58 GMT</pubDate>
            <atom:updated>2025-12-08T18:39:44.222Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y8WUzz_tzN2zkxF9_ZEL_g.png" /></figure><p>For CXOs looking to achieve true competitive advantage, the challenge isn’t data access; it’s actionable insight. This is why they are choosing Snowflake Intelligence to build their next generation of Data Agents. Snowflake Intelligence shifts the paradigm from mere data access to true data understanding and collaboration across the enterprise. It moves beyond traditional BI and Text-to-SQL by embedding AI-driven intelligence directly into the data fabric, enabling these agents to have a dynamic dialogue with data that evolves with business needs and user context. Snowflake Intelligence addresses the complexity and fragmentation in modern data ecosystems by unifying knowledge and automating relevance. This empowers Data Agents not just to execute simple queries, but to assess your current business landscape, generate decisions that lead to actions, anticipate risks, and innovate, turning the data cloud itself into an adaptive, intelligent partner essential for business transformation.</p><h3>Setting the Stage: The Data Dilemma</h3><p>In today’s fast-paced corporate environment, business stakeholders are acutely aware that their organizations are sitting on a goldmine of data. Yet, the critical challenge remains the same: transforming this data abundance into actionable, decision-driving insight.</p><p>The fact is, traditional Business Intelligence (BI) and simple Text-to-SQL solutions are no longer sufficient to meet strategic demands. These tools fail to bridge the pervasive organizational gaps that prevent true data mastery.</p><p>Organizations across industries face several critical challenges when trying to leverage their data effectively:</p><ul><li><strong>Siloed Information:</strong> Data scattered across multiple systems and sources, spanning structured, semi-structured, and unstructured formats, makes <strong>comprehensive, unified analysis nearly impossible</strong>.</li><li><strong>Time-Consuming Processes:</strong> Custom dashboard creation and report generation can take weeks, by which time <strong>business conditions may have dramatically changed</strong>, rendering the insights obsolete.</li><li><strong>Limited Context:</strong> Traditional BI tools only answer the historical question of <em>“what happened”</em>, but critically struggle to explain the deep, operational <em>“why”</em> behind trends and anomalies.</li><li><strong>Resource Constraints:</strong> Data Analysts spend up to <strong>80% of their time responding to repetitive ad hoc requests</strong> rather than focusing on high-value, strategic analysis that moves the business forward.</li><li><strong>Technical Barriers:</strong> Business users lack the SQL or technical skills needed to query databases directly, creating high <strong>dependency on already constrained data teams</strong>.</li></ul><p>These fundamental challenges result in delayed decisions, missed opportunities, and enterprise-wide frustration. 
For the CXO focused on competitive advantage, the need is clear: a paradigm shift is required to empower adaptive, autonomous decision-making, a need that transcends simple dashboards and points directly toward building intelligent Data Agents.</p><h3>Introducing Snowflake Intelligence: The Intelligent Data &amp; AI Partner</h3><p>If the challenge is the gap between data and decisive action, <strong>Snowflake Intelligence</strong> is the paradigm shift that closes it. It is not just another BI tool; it is the <strong>Intelligent Data &amp; AI Partner</strong> embedded directly into Snowflake, unifying data, knowledge, and AI capabilities to enable true, adaptive decision-making.</p><h4><strong>Moving Beyond Querying: The Shift to True Data Understanding</strong></h4><p>Snowflake Intelligence fundamentally changes the user’s relationship with data. It moves the conversation from the restrictive, technical questions of <strong>“Show me the numbers”</strong> (traditional querying) to the strategic, open-ended questions of <strong>“What should I do next?”</strong> or <strong>“Why did this happen?”</strong></p><p>This shift is powered by:</p><ul><li><strong>Holistic Context:</strong> Unlike systems that only look at structured data tables, Snowflake Intelligence allows Data Agents to pull context from <strong>all data types</strong>: structured tables, unstructured documents, PDFs, customer tickets, and media, to synthesize a comprehensive understanding of the business landscape.</li><li><strong>Dynamic Dialogue:</strong> It supports <strong>multi-turn conversations</strong>, allowing users (or agents) to ask follow-up questions that build on previous insights, creating a continuous, learning dialogue that evolves with user context and business needs.</li></ul><h4><strong>The AI-Driven Fabric: Intelligence Embedded, Not Bolted On</strong></h4><p>The critical advantage of Snowflake Intelligence is that its AI is <strong>embedded directly</strong> within the data fabric, powered by the <strong>Snowflake Cortex</strong> suite of services.</p>
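<p>As a minimal illustration of what embedded intelligence looks like in practice: a Cortex function is just SQL, so the model runs where the data already lives and can be called from any client. In the sketch below, the connection parameters and the customer_feedback table are illustrative placeholders, and the snowflake-connector-python package is assumed:</p><pre>import snowflake.connector<br><br># Connect with your own account details (placeholders below)<br>conn = snowflake.connector.connect(<br>    account=&quot;&lt;account_identifier&gt;&quot;,<br>    user=&quot;&lt;user&gt;&quot;,<br>    password=&quot;&lt;password&gt;&quot;,<br>    warehouse=&quot;COMPUTE_WH&quot;,<br>)<br><br># Score free-text feedback in place; no data leaves Snowflake<br>cur = conn.cursor()<br>cur.execute(<br>    &quot;SELECT SNOWFLAKE.CORTEX.SENTIMENT(feedback_text) &quot;<br>    &quot;FROM customer_feedback LIMIT 5&quot;  # hypothetical table<br>)<br>for (score,) in cur:<br>    print(score)  # sentiment score between -1 and 1</pre>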
<p>This eliminates the complexity, cost, and risk associated with moving data to external, third-party AI platforms.</p><h4>Core Capabilities: Unifying Knowledge and Automating Relevance</h4><p>The power of Snowflake Intelligence for building <strong>Data Agents</strong> lies in its ability to orchestrate tasks across all your data assets using specialized, managed tools:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*i6YZK5oS1JcZv099DeGaVQ.png" /></figure><h3><strong>Practical Guide: Getting Started with Snowflake Intelligence</strong></h3><p><strong>Prerequisites and Setup</strong></p><p>To begin using Snowflake Intelligence, you’ll need:</p><ul><li>Access to a Snowflake account</li><li>Regional access to supported AI models or cross-region inference enabled</li><li>Existing data models, schemas, and tables in Snowflake</li></ul><p><strong>Step 1: Create the Snowflake Intelligence Object</strong></p><p>The first step is establishing the intelligence framework in your account:</p><pre>-- Create the Snowflake Intelligence object<br>CREATE SNOWFLAKE INTELLIGENCE SNOWFLAKE_INTELLIGENCE_OBJECT_DEFAULT;<br><br>-- Set up the database and schema for agents<br>CREATE DATABASE IF NOT EXISTS snowflake_intelligence;<br>GRANT USAGE ON DATABASE snowflake_intelligence TO ROLE PUBLIC;<br><br>CREATE SCHEMA IF NOT EXISTS snowflake_intelligence.agents;<br>GRANT USAGE ON SCHEMA snowflake_intelligence.agents TO ROLE PUBLIC;<br><br>-- Enable cross-region inference if needed<br>ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'ANY_REGION';</pre><p>Note: Only the ACCOUNTADMIN role has the CREATE SNOWFLAKE INTELLIGENCE ON ACCOUNT privilege required for this step.</p><p><strong>Step 2: Configure AI Agents</strong></p><p>Agents are the building blocks of Snowflake Intelligence.</p><p>One of the most powerful aspects of Snowflake Intelligence is the ease with which agents can be created and configured. The process is remarkably straightforward:</p><ul><li>Navigate to AI &amp; ML » Agents in the Snowflake interface</li><li>Select the Snowflake Intelligence tab</li><li>Click “Create Agent” to launch the simplified agent builder</li><li>Provide a name and description for your agent</li><li>Select the data sources and tools the agent should have access to</li><li>Configure permissions and grant USAGE privileges to appropriate roles</li></ul><p>The entire process can be completed in minutes, without writing complex code or configuration files. 
The intuitive interface guides you through each step, making it accessible even to users without deep technical expertise.</p><p><strong>Tools and Capabilities for Agents</strong></p><p>When creating an agent, you can equip it with various tools that dramatically expand its analytical capabilities:</p><p><strong>Cortex Analyst Integration</strong></p><p>To enable Cortex Analyst for your agent, you’ll work with semantic models defined in YAML format:</p><pre>-- Grant necessary database and schema privileges<br>GRANT USAGE ON DATABASE snowflake_intelligence TO ROLE data_engineer;<br>GRANT USAGE ON SCHEMA snowflake_intelligence.agents TO ROLE data_engineer;<br><br>-- Grant Cortex access via database roles<br>GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE data_engineer;<br><br>-- Create a stage for semantic models<br>CREATE STAGE IF NOT EXISTS semantic_models_stage;</pre><ul><li>Define your semantic model in YAML (example structure)</li><li>Upload semantic_model_sales.yaml to the stage via SnowSQL or UI</li><li>The YAML file defines tables, columns, metrics, and relationships</li></ul><p>Example semantic model YAML structure:</p><pre>name: sales_semantic_model<br>description: Sales performance and customer analytics<br>tables:<br>  - name: sales_data<br>    description: Transaction-level sales data<br>    base_table:<br>      database: SALES_DB<br>      schema: PUBLIC<br>      table: SALES_DATA<br>    columns:<br>      - name: order_date<br>        description: Date when order was placed<br>        data_type: DATE<br>      - name: revenue<br>        description: Total order value in USD<br>        data_type: NUMBER</pre><ul><li>Reference the semantic model when creating agents</li><li>The agent will use this YAML to understand your data structure</li></ul><p><strong>Cortex Search Integration</strong></p><p>To add Cortex Search capabilities to your agent:</p><pre>-- Create a stage for documents<br>CREATE STAGE sales_documents_stage<br>  DIRECTORY = (ENABLE = TRUE);<br><br>-- Upload documents (via SnowSQL or web interface)<br>PUT file://local/path/sales_reports/*.pdf @sales_documents_stage;<br><br>-- Create Cortex Search service (ON names the searchable column from the query below)<br>CREATE CORTEX SEARCH SERVICE sales_docs_search<br>  ON content<br>  WAREHOUSE = search_wh<br>  TARGET_LAG = '30 minutes'<br>  AS (<br>    SELECT<br>      relative_path AS doc_name,<br>      file_url,<br>      GET_PRESIGNED_URL(@sales_documents_stage, relative_path) AS content<br>    FROM DIRECTORY(@sales_documents_stage)<br>  );</pre><p>With Cortex Search, your agent can answer questions like “What does our product documentation say about integration with AWS services?” by searching through thousands of documents and synthesizing relevant information.</p><p><strong>Additional Tools for Agents</strong></p><p>Beyond Cortex Analyst and Cortex Search, agents can be equipped with:</p><ul><li><strong>Cortex Functions:</strong> Pre-built AI functions for sentiment analysis, translation, summarization, and text classification</li><li><strong>Custom Python Functions:</strong> User-defined functions for specialized analytics or business logic</li><li><strong>External API Access:</strong> Integration with third-party data sources and services</li><li><strong>Snowflake Marketplace Data:</strong> Access to external datasets for market intelligence and benchmarking</li><li><strong>Data Quality Tools:</strong> Automated data profiling, anomaly detection, and validation checks</li></ul><p><strong>Agent Configuration Example</strong></p><p>Here’s how to create a comprehensive sales intelligence agent through the Snowflake UI:</p><p>1. 
Go to AI &amp; ML → Agents → Create Agent</p><p>2. Agent Configuration:</p><ul><li>Name: sales_intelligence_agent</li><li>Description: Analyzes sales performance, customer behavior, and revenue trends</li></ul><p>3. Select Tools as required:</p><ul><li>Cortex Analyst (for structured data queries)</li><li>Cortex Search (for document search)</li><li>Sentiment Analysis (Cortex Function)</li><li>Custom forecast function</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*drUPMIJYkQmDrFX2cskIwA.png" /></figure><p>4. Choose Data Sources:</p><ul><li>Database: SALES_DB, Schema: PUBLIC</li><li>Tables: sales_data, customer_data, product_catalog</li><li>Stage: @sales_documents_stage</li></ul><p>5. Set Permissions:</p><ul><li>Grant USAGE to roles: sales_team, sales_manager</li></ul><p>6. Click “Create” to deploy</p><p><strong>After creation, test your agent directly in the UI:</strong></p><p>Simply type natural language questions in the Snowflake Intelligence interface:</p><ul><li>“What were our top-performing products last quarter and why?”</li><li>“Show me customer sentiment trends for premium products”</li><li>“Forecast next quarter’s revenue based on current trends”</li></ul><p>The agent now has the combined power of structured data analysis (via Cortex Analyst), document search (via Cortex Search), sentiment analysis on customer feedback, and forecasting capabilities — all accessible through simple natural language questions in the UI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/814/1*iOhzaP4cdFl4WvYRHU7BrQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1018/1*X-k6z3bpTNYBUc_IXYg0jg.png" /></figure><h3>Why Work With Us for Snowflake Intelligence Adoption</h3><p>Adopting Snowflake Intelligence lays the foundation for your next-generation Data Agents, but moving from a powerful platform to <strong>measurable, enterprise-wide outcomes</strong> requires specialized expertise. This is where <strong>ShellKode</strong> steps in as your dedicated Snowflake Partner Solutions Expert.</p><p>We don’t just implement technology; we architect <strong>decision-ready intelligence</strong> by combining our deep domain knowledge with end-to-end agentic development mastery on Snowflake Intelligence.</p><h4>The ShellKode Advantage: Transforming Data into Action</h4><p>We position Snowflake Intelligence as an <strong>adaptive, intelligent partner</strong> in your transformation journey. Our unique value proposition ensures that your Data Agent implementation is not just technically sound, but <strong>strategically aligned</strong> with your C-suite objectives:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YfG3lVkETZaPAxO9NyejYg.png" /></figure><h3>Conclusion</h3><p>The fundamental truth driving modern enterprise strategy is clear: <strong>data systems and AI systems are inexorably moving closer, forming a single, intelligent fabric.</strong></p><p>The adoption of Snowflake Intelligence confirms this convergence. By unifying the high-performance Data &amp; AI Cloud with the native intelligence of Snowflake Cortex, you gain a catalyst for transformation.</p><p>By partnering with ShellKode and choosing Snowflake Intelligence, you transform your approach: <strong>you move from manual querying to autonomous, dynamic conversations with data.</strong> This accelerates research, provides adaptive foresight, and empowers your organization with decision-ready intelligence. 
Build the autonomous Data Agents today to secure your competitive advantage and transform your enterprise.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6e31e6892c5c" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/building-a-reliable-enterprise-data-agent-with-snowflake-intelligence-6e31e6892c5c">Building a Reliable, Enterprise Data Agent with Snowflake Intelligence</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[ShellKode Named AWS Pattern Partner to Accelerate Enterprise AI Modernization]]></title>
            <link>https://blog.shellkode.com/shellkode-named-aws-pattern-partner-to-accelerate-enterprise-ai-modernization-cc692a45ec91?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/cc692a45ec91</guid>
            <category><![CDATA[mcp-server]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[a2a-protocol]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Wed, 03 Dec 2025 00:10:57 GMT</pubDate>
            <atom:updated>2025-12-03T00:10:56.081Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*efPbSZl2Aux5QbrxpxSWZg.png" /></figure><p>Today, ShellKode is proud to join AWS as a launch partner in the AWS Pattern Partners program, an invite-only initiative that brings together a select cohort of consulting partners to define <strong>how enterprises adopt next-generation AI and emerging technologies on AWS</strong>.</p><p>As an AWS Pattern Partner, ShellKode brings deep experience in <strong>Agentic AI, GenAI, enterprise modernization, and responsible AI operations</strong>. The program accelerates AI adoption by codifying success into repeatable, scalable <strong>patterns</strong>, including architectures, operating models, guardrails, and delivery playbooks that have already been validated with customers.</p><p>For enterprise leaders, ShellKode’s selection signals that AWS has reviewed and endorsed both <strong>our outcomes</strong> and <strong>our operating model</strong> across multiple real-world deployments. Organizations looking to modernize processes, adopt AI safely, and manage new regulatory requirements now gain access to proven patterns that deliver measurable business impact.</p><h3>Our Focus Patterns with AWS</h3><p>As part of the Pattern Partners program, ShellKode will help scale four high-impact enterprise patterns:</p><ul><li><strong>Process to Agent (P2A)</strong> — Transforming business workflows into autonomous, production-ready AI agents.</li><li><strong>Agent to Agent (A2A)</strong> — Enabling multi-agent collaboration for complex enterprise operations.</li><li><strong>Modernization with GenAI</strong> — Accelerating legacy modernization and automated conversions using GenAI.</li><li><strong>Responsible AI Operations (RAI-Ops)</strong> — Ensuring enterprise-grade AI governance, evaluation, observability, and safety.</li></ul><p>These patterns represent areas where we have already delivered strong customer outcomes across logistics, retail, BFSI, manufacturing, and technology sectors.</p><h3>Our Flagship Pattern: Enterprise Process-to-Agent (P2A)</h3><h3>The customer challenge</h3><p>Across industries, organizations face a common reality:</p><ul><li>Processes are fragmented across systems, slowing automation.</li><li>AI proofs-of-concept rarely scale due to governance and compliance constraints.</li><li>Teams lack clear guardrails, roles, and operating models for enterprise AI.</li><li>Regulations and internal standards evolve rapidly, creating adoption friction.</li></ul><p>This makes it difficult to move from promising prototypes to <strong>enterprise-grade, production-ready Agentic AI</strong>.</p><h3>Our joint approach with AWS</h3><p>ShellKode and AWS have co-designed the <strong>P2A Pattern</strong>, a blueprint that allows enterprises to convert complex workflows into autonomous agents using:</p><ul><li><strong>AWS-native architecture</strong> leveraging Amazon Bedrock, Agents for Amazon Bedrock, Amazon OpenSearch, Amazon ECS, DynamoDB, CloudWatch, and other foundational services.</li><li>A <strong>clear operating model</strong> covering governance, security, roles, runbooks, and cross-functional responsibilities.</li><li><strong>Accelerators</strong> including pre-built connectors, compliance packs, policy templates, dashboards, monitoring frameworks, and reusable agents.</li></ul><p>The pattern is currently being refined through a time-boxed incubation with multiple lighthouse customers. 
As the blueprint stabilizes, AWS will make the Pattern Package available globally for rapid and consistent adoption.</p><h3>Early results</h3><p>Early adopters are already reporting measurable results using the P2A pattern:</p><ul><li><strong>Time to first value reduced to a few weeks</strong>, even in regulated environments.</li><li><strong>30–60% cycle-time reduction</strong> in targeted workflows once agents are deployed.</li><li><strong>Higher accuracy and stronger compliance posture</strong> due to embedded guardrails.</li><li><strong>Improved analyst and developer productivity</strong> through automated workflows and reusable components.</li></ul><p>For example, in one implementation for a global enterprise (customer name masked), the P2A pattern helped cut the cycle time of a multi-step operational workflow by <strong>over 40%</strong>, without compromising auditability or internal controls. These early wins directly guide which use cases we prioritize next.</p><p>As these outcomes are validated across additional lighthouse customers, the Pattern Package will be rolled out to AWS field teams worldwide to support customers across industries and regions.</p><h3>How the Pattern Partners Program Helps Customers</h3><p>When enterprises engage ShellKode through the Pattern Partners program, they begin not from scratch but from a <strong>validated blueprint</strong>:</p><ul><li>Proven architectures</li><li>Repeatable operating models</li><li>Guardrails, runbooks, and compliance frameworks</li><li>Pre-built accelerators and integrations</li><li>Joint engagement with AWS Consulting COE, AWS service teams, and ShellKode experts</li></ul><p>This enables <strong>fast but responsible experimentation</strong>. Organizations can move from idea to pilot within weeks while ensuring enterprise-grade security, auditability, and governance.<br> The program also includes a structured path from pilot to scale, enabling multi-region, multi-unit deployments with clear controls and observability.</p><p>ShellKode works closely with AWS solution architects and product teams to ensure the patterns remain aligned to the latest AWS innovations in GenAI, agentic systems, and responsible AI.</p><h3>Partner Perspective</h3><blockquote>“Joining AWS Pattern Partners is a strategic milestone for ShellKode. Through patterns like P2A, we are transforming our strongest customer successes into a <strong>clear, repeatable path for enterprise AI adoption</strong>, enabling organizations to move from pilots to production with greater speed, safety, and confidence.”<br> <strong>— Bhuvanesh, CTO, ShellKode</strong></blockquote><h3>AWS Perspective</h3><blockquote>“AWS created Pattern Partners to work with a select cohort of builders who can set the standard for how enterprises adopt emerging technology on AWS. ShellKode brings deep expertise in Agentic AI and a proven P2A pattern that is already delivering measurable outcomes. We look forward to scaling this work together across regions and industries.”<br> — <strong>Brian Bohan, Managing Director, Consulting COE, AWS</strong></blockquote><h3>Next Steps</h3><p>Customers interested in understanding how these patterns can accelerate their AI roadmap can contact <strong>ShellKode</strong> via <a href="https://www.shellkode.com/contact-us"><strong>this link</strong></a>.</p><p>Organizations that want to explore the P2A pattern in detail can request a <strong>focused discovery session</strong>. 
In this session, AWS and ShellKode jointly map business challenges to the pattern, estimate potential impact, and define a practical path to adoption.</p><p>Together, ShellKode and AWS look forward to helping enterprises turn complex business challenges into <strong>repeatable, scalable patterns for growth</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cc692a45ec91" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/shellkode-named-aws-pattern-partner-to-accelerate-enterprise-ai-modernization-cc692a45ec91">ShellKode Named AWS Pattern Partner to Accelerate Enterprise AI Modernization</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GenAI in Retail: Transforming the Customer Journey from Checkout to Agents]]></title>
            <link>https://blog.shellkode.com/genai-in-retail-transforming-the-customer-journey-from-checkout-to-agents-a3ce7c3ae944?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/a3ce7c3ae944</guid>
            <category><![CDATA[customer-experience]]></category>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[amazon-bedrock]]></category>
            <category><![CDATA[generative-ai-use-cases]]></category>
            <category><![CDATA[retail]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Mon, 21 Jul 2025 05:34:58 GMT</pubDate>
            <atom:updated>2025-07-21T05:34:58.655Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1hnIHHQnJtMfAMihWc039w.png" /></figure><h3><strong>What Worked in Retail Doesn’t Work Anymore</strong></h3><p>Retail used to be about great products. Now, it’s about great experiences. The way people discover, evaluate, buy, and seek customer service has changed dramatically. Customers expect speed, personalization, real-time updates, and intelligent automation at every step. And they’re no longer comparing brands within the same category; they’re comparing against their best digital experiences.</p><p>This is where <strong>Generative AI in retail</strong> sets a new standard. For D2C brands and retail leaders, it’s more than automation; it’s the engine behind intelligent, human-like interactions across the entire retail customer journey. GenAI understands context, adapts in real time, and delivers hyper-personalized experiences at scale.</p><blockquote><strong>According to McKinsey, retailers using AI for personalization see a 10–20% uplift in customer satisfaction and a 5–15% increase in revenue.</strong></blockquote><h3><strong>The Shift: Customers Changed, Tech Didn’t, Until GenAI</strong></h3><p>Today’s shoppers want more than transactions. They want:</p><ul><li>Real-time personalization</li><li>Language and cultural fluency</li><li>Visual confidence (great images, rich info)</li><li>Conversational support that solves, not stalls</li><li>Omnichannel support from web to WhatsApp to checkout</li></ul><p>Legacy tech can’t meet these demands at scale. GenAI can. Where older AI automated tasks, Generative AI in retail crafts:</p><ul><li>Personalized product descriptions</li><li>Localized content in seconds</li><li>Conversational flows that adapt</li><li>Smart agents that handle 80%+ of support interactions autonomously</li></ul><p>It’s not just smarter tech — it’s a scalable, human-like customer experience.</p><blockquote><strong>You can’t just drop AI into your business and expect magic. The real impact starts with asking the right question:</strong></blockquote><h3><strong>How do we become more relevant to the people we serve?</strong></h3><p>Walmart didn’t lead with technology; they led with empathy. Customers were unsure about their purchases, overwhelmed by options, and lacked real-time guidance. So Walmart focused its GenAI strategy on solving that. They used LLMs to make product discovery smarter, recommendations more personal, and Q&amp;A actually useful. Suddenly, shoppers could find exactly what they needed and feel confident about their choices.</p><p>That shift from just selling to serving with relevance turned GenAI from a tool into a true differentiator. Because the most powerful AI? It’s built on understanding human problems.</p><h3><strong>From Discovery to Loyalty: A Smarter Retail Customer Journey with GenAI</strong></h3><p>Let’s look at the <strong>D2C customer journey</strong> through the customer’s lens: what they want at each stage, where friction shows up, and how <strong>GenAI in Retail</strong> can remove it.</p><p>With Retlocx, powered by ShellKode, you’re not just streamlining operations; you’re creating experiences that feel thoughtful, seamless, and tailored at every step.</p><h4>1. Discovery That Understands What Customers Want</h4><p>Customers don’t want to hunt; they want to find. 
When they search, they expect:</p><ul><li>Results that are relevant and fast</li><li>Product pages in their language</li><li>Descriptions tailored to their needs</li></ul><p><strong>Capabilities</strong></p><p><strong>Attribute Generation -</strong> Discovery starts with being findable. If customers can’t filter, search, or sort properly, they’re gone. Attribute Generation uses Claude 3.5 to tag products with relevant details like material, fit, and use case. It updates in real time and keeps your catalog structured, searchable, and optimized for SEO.</p><ul><li><strong>Result:</strong> +80% improvement in discoverability for context-based search</li></ul><p><strong>Translation for Product Description and Content -</strong> If your pages speak one language, you’re leaving revenue on the table. The translation engine rewrites your product content in multiple Indian and global languages, making sure your brand speaks to every customer with cultural accuracy.</p><ul><li><strong>Result:</strong> +30% customer satisfaction</li></ul><h4>2. Checkout That Feels Like a Conversation</h4><p>Customers want <strong>speed, clarity, and confidence</strong> when they buy. They expect <strong>AI agents in retail</strong> that:</p><ul><li>Guide users step-by-step</li><li>Offer personalized upsells</li><li>Provide real-time answers</li></ul><h4><strong>Capabilities</strong></h4><p><strong>Smart Checkout Assistant — </strong>It enhances the final steps of the buying journey by dynamically surfacing personalized offers, alternatives based on customer behavior, cart content, and real-time trends. It identifies hesitation signals (like cart idle time) and responds with the right nudges, such as better-matched products, timely discounts, or contextual upsells, to prevent drop-offs and boost order value. By integrating browsing history, purchase patterns, and local demand signals, it delivers a checkout experience that feels tailored, not transactional.</p><p><strong>Result: </strong>15–30% increase in Average Order Value (AOV) and reduced cart abandonment</p><h4>3. AI-Powered Customer Support That Listens and Learns</h4><p>Frustrated customers want to feel heard. They need:</p><ul><li>Fast answers</li><li>A human tone</li><li>No repetition or handoffs</li></ul><p><strong>Capabilities</strong></p><p><strong>Agent Assistant for Retail &amp; E-commerce — </strong>The Agent Assistant for Retail and E-commerce enhances customer experiences with GenAI-powered real-time query resolution, order tracking, and automated payments. It recommends personalized products based on browsing and purchase history, driving conversions and customer satisfaction. This solution streamlines operations while improving engagement and revenue.</p><ul><li><strong>Result:</strong> Up to 50% automation in customer query handling using AI Agents</li></ul><p><strong>Post-Call Analytics -</strong> Support teams can’t fix what they can’t see. This solution automatically analyzes and summarizes every call, highlights patterns, and feeds back insights to improve agent training and workflows.</p><ul><li><strong>Result:</strong> Fewer repeat tickets, faster resolutions</li></ul><h4>4. Predictive Retail Operations with GenAI</h4><p>Customers don’t think about inventory. They just want their size in stock, their order delivered, and their experience uninterrupted. Retail leaders like Zara and Decathlon use data to anticipate demand and act quickly.</p>
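<p>As a toy illustration of the underlying pattern behind capabilities like these, the sketch below feeds recent sales signals to a model on Amazon Bedrock and asks for a drafted stock recommendation. The model ID, prompt, and sales summary are assumptions, and the call presumes boto3 credentials with Bedrock access:</p><pre>import boto3<br><br>bedrock = boto3.client(&quot;bedrock-runtime&quot;, region_name=&quot;us-east-1&quot;)<br><br># Toy aggregate; in practice this would come from your sales data pipeline<br>sales_summary = &quot;SKU-101: 420 units sold in the last 4 weeks, 60 in stock, 10-day lead time&quot;<br><br>response = bedrock.converse(<br>    modelId=&quot;anthropic.claude-3-5-sonnet-20240620-v1:0&quot;,  # example model ID<br>    messages=[{<br>        &quot;role&quot;: &quot;user&quot;,<br>        &quot;content&quot;: [{&quot;text&quot;: f&quot;Draft a stock-level recommendation, with reasoning, from these signals: {sales_summary}&quot;}],<br>    }],<br>)<br>print(response[&quot;output&quot;][&quot;message&quot;][&quot;content&quot;][0][&quot;text&quot;])</pre>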
<p>This is where GenAI helps retailers stay one step ahead, forecasting demand, optimizing stock, and ensuring the experience runs like clockwork.</p><p><strong>Capabilities</strong></p><p><strong>Inventory Optimization - </strong>Using GenAI and advanced data analytics, the solution transforms inventory management by analyzing real-time and historical sales data, seasonal trends, and supply chain metrics. It provides real-time visibility into inventory, automates stock-level recommendations, and balances stock between warehouses and stores to ensure products are always available where needed, minimizing both overstocks and stockouts.</p><ul><li><strong>Result:</strong> 20% lower holding costs, 40% fewer stockouts</li></ul><p><strong>Footfall Analytics -</strong> What if your stores could tell you when they’re busy, where people walk, and what they ignore? This solution does just that. It tracks footfall, dwell time, and zones to help you plan layouts, staffing, and promotions smarter.</p><ul><li><strong>Result:</strong> 80% improvement in decision-making, 25% boost in sales</li></ul><h3><strong>Retlocx: Reimagine Retail and Logistics CX with GenAI</strong></h3><p>Retlocx leverages domain-specific GenAI agents to improve customer experience across retail and logistics functions.</p><ul><li>Integrates with existing systems</li><li>Easy to build using low-code tools</li><li>Continuously trains on enterprise data</li><li>Offers pre-built GenAI agents via the Retlocx marketplace</li></ul><p>Whether you’re scaling customer support, automating fulfillment, or enhancing store decisions — Retlocx delivers smart, self-optimizing AI experiences. <a href="https://www.retlocx.ai/"><strong><em>Learn More Here</em></strong></a></p><h3>Conclusion</h3><p>Generative AI is not just a technological shift; it’s unlocking entirely new ways for retailers to operate, engage, and grow. From hyper-personalized customer journeys to autonomous AI agents handling support, merchandising, and marketing, the possibilities are expanding rapidly.<br>For retailers, adopting GenAI early in the journey can serve as a powerful differentiator, helping you evolve into a modern, adaptive, and customer-centric business. That said, this is still an emerging space. The technology is evolving fast, and success will depend on your ability to stay agile, experiment thoughtfully, and build systems that can adapt and scale with minimal friction as the landscape matures.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a3ce7c3ae944" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/genai-in-retail-transforming-the-customer-journey-from-checkout-to-agents-a3ce7c3ae944">GenAI in Retail: Transforming the Customer Journey from Checkout to Agents</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why AWS App Runner Outshines Amazon EKS/ECS for Containerized Web App Deployments]]></title>
            <link>https://blog.shellkode.com/why-aws-app-runner-outshines-amazon-eks-ecs-for-containerized-web-app-deployments-0019dc9ba4c5?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/0019dc9ba4c5</guid>
            <category><![CDATA[amazon-eks]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[cloud-deployment]]></category>
            <category><![CDATA[aws-app-runner]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Tue, 18 Mar 2025 07:52:47 GMT</pubDate>
            <atom:updated>2025-03-18T09:01:04.690Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WKmeA2fCyvWd83j3hwzEsw.png" /></figure><p>When it comes to cloud deployments, it’s not just about technology; it’s about making strategic choices that drive real business impact. Selecting the right deployment platform is one such choice, and it is critical to long-term success. While AWS offers powerful container orchestration tools like EKS and ECS, AWS App Runner stands out as a game changer, particularly for use cases like containerized web apps.</p><p>In fact, its benefits far outweigh those of Amazon EKS and Amazon ECS. For instance, choosing AWS App Runner enabled us to accelerate deployment by up to 65% and cut infrastructure costs by up to 40% for a critical synchronization application migration, all while ensuring scalability and reliability. And this is just the tip of the iceberg — there’s much more to explore. Dive into this blog to learn more about how AWS App Runner can make a real difference for your cloud deployment strategy.</p><h3>What is AWS App Runner?</h3><p>AWS App Runner is a fully managed service that deploys containerized applications in under ten minutes, with zero infrastructure management. It automates scaling, load balancing, and monitoring, freeing developers to focus on innovation rather than operations.</p><p>With built-in CI/CD integration, App Runner simplifies deployment workflows by automatically pulling the latest code changes from repositories like GitHub and AWS CodeCommit. It also provides end-to-end security, including HTTPS encryption and automatic certificate management, ensuring secure application deployment. And the best part? It scales on demand, adjusting resources based on traffic so businesses only pay for what they use, maximizing efficiency while keeping costs in check.</p><h3>Why Choose App Runner over EKS and ECS?</h3><p>Selecting the right deployment platform is a critical decision that impacts scalability, cost, and operational efficiency. While orchestration services like EKS and ECS offer robust container orchestration, they often come with added complexity, requiring significant setup and ongoing management.</p><p>For small-scale deployments, EKS and ECS can be unnecessarily complex. These services are built for managing large-scale, multi-container environments, requiring extensive setup, ongoing maintenance, and operational overhead. By leveraging App Runner, we streamlined the process, eliminating 3–5 days of setup time typically spent configuring clusters, nodes, and load balancers. Our focus was on delivering a seamless, cost-effective, and scalable solution without the burden of unnecessary complexity.</p><figure><img alt="Table drawing comparison among Amazon EKS, Amazon ECS and AWS App Runner" src="https://cdn-images-1.medium.com/max/1013/1*sdEzmIC0LBwRJzJzI0j3uw.png" /></figure><h3>Benefits of App Runner</h3><p>By choosing AWS App Runner, we optimized deployment speed, reduced costs, and enhanced scalability, delivering tangible business benefits. Let’s take a closer look at how it made a difference.</p><h4>Simplified Deployment with Minimal Overhead</h4><p>Traditional container orchestration platforms require configuring clusters, managing nodes, and setting up complex orchestration tools, all of which add operational overhead. With App Runner, deployment is simplified to just a few clicks, considerably reducing the setup time from days to minutes.</p>
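<p>For teams that prefer scripted deployments, the same result can be achieved with a single API call. The sketch below is illustrative only: the service name, image URI, and role ARN are placeholders, and it assumes boto3 with App Runner permissions. The console walkthrough later in this post covers the same settings:</p><pre>import boto3<br><br>apprunner = boto3.client(&quot;apprunner&quot;, region_name=&quot;us-east-1&quot;)<br><br># One call: source image, ECR access role, and instance sizing<br>apprunner.create_service(<br>    ServiceName=&quot;sync-app&quot;,  # placeholder<br>    SourceConfiguration={<br>        &quot;ImageRepository&quot;: {<br>            &quot;ImageIdentifier&quot;: &quot;123456789012.dkr.ecr.us-east-1.amazonaws.com/sync-app:latest&quot;,<br>            &quot;ImageRepositoryType&quot;: &quot;ECR&quot;,<br>            &quot;ImageConfiguration&quot;: {&quot;Port&quot;: &quot;8080&quot;},<br>        },<br>        &quot;AutoDeploymentsEnabled&quot;: True,<br>        &quot;AuthenticationConfiguration&quot;: {<br>            &quot;AccessRoleArn&quot;: &quot;arn:aws:iam::123456789012:role/AppRunnerECRAccessRole&quot;<br>        },<br>    },<br>    InstanceConfiguration={&quot;Cpu&quot;: &quot;1 vCPU&quot;, &quot;Memory&quot;: &quot;2 GB&quot;},<br>)</pre>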
<p>This has also been shown to reduce operational overhead by up to 50%, allowing teams to focus on innovation rather than infrastructure management.</p><h4>Optimized Costs with a Pay-as-You-Go Model</h4><p>Managing infrastructure costs efficiently is crucial for any business, especially in cost-sensitive industries like FinTech. App Runner’s pay-as-you-go pricing eliminates upfront investment and significantly reduces operational expenses. In this case, compared to EKS, our customer saw up to a 40% decrease in monthly infrastructure spending, as they only paid for actual usage, without the overhead of maintaining idle resources.</p><h4>Seamless Auto-Scaling for Peak Performance</h4><p>In a high-demand environment like FinTech, performance consistency is non-negotiable. App Runner’s traffic-driven auto-scaling dynamically adjusts resources based on real-time demand, ensuring smooth operation even during peak loads. This automated scaling reduced latency by up to 25%, maintaining seamless user experiences without manual intervention. With 99.9% uptime, the system remained highly reliable, a critical factor for real-time transactions where even a slight delay could impact business operations and customer trust.</p><p>Now that we’ve explored AWS App Runner, let us take a look at how to set it up.</p><h3>Step-by-Step Guide to Deploying Applications with AWS App Runner</h3><p>As established, deploying applications with AWS App Runner ensures seamless scaling and automation. Here, we walk you through the key steps to get AWS App Runner up and running.</p><ul><li>Click on Create App Runner</li></ul><figure><img alt="AWS console where users can start using AWS App Runner" src="https://cdn-images-1.medium.com/max/941/1*6FulnPcUg8pKUUkbNwbm3Q.png" /></figure><ul><li>Choose the appropriate repository type:<br>- If your application images are in ECR, you can opt for ECR<br>- If you’re using third-party source repositories like GitHub or Bitbucket, you can choose Source Code Repository</li><li>Automatic deployment mode will detect the latest image version pushed to ECR and deploy it to App Runner on your behalf using the ECR access role</li></ul><figure><img alt="Screenshot of AWS console where users can choose the repository type" src="https://cdn-images-1.medium.com/max/941/1*JLf9cCMiRdEOpiOhbnTeqg.png" /></figure><ul><li>Assign the right amount of resources to the container. 
(i.e., this configuration is for a single container)</li></ul><figure><img alt="Screenshot of AWS console where users assign resources to the container" src="https://cdn-images-1.medium.com/max/941/1*JZlV6ZjRJYIX3_21uelfKQ.png" /></figure><ul><li>Following that, create a new Auto Scaling Configuration for your container to handle the workload with minimum and maximum capacity</li></ul><figure><img alt="Screenshot of AWS console where users can create auto scaling configurations" src="https://cdn-images-1.medium.com/max/941/1*ONJfja8d0JGG2R9ZnHd8nw.png" /></figure><figure><img alt="Screenshot of AWS console where users can add custom auto scaling configuration" src="https://cdn-images-1.medium.com/max/941/1*rPMxgjgL-6HFV-Fmg0OkTg.png" /></figure><ul><li>Assign the right role for your container if it needs to communicate with any other AWS services</li><li>In my sample case, I have added a role that can read from and write to Secrets Manager</li></ul><figure><img alt="Screenshot of AWS console where users can assign roles for container" src="https://cdn-images-1.medium.com/max/941/1*my8S8ATvHcEKyZPabb2H2Q.png" /></figure><ul><li>For networking, we can choose the outgoing network to be either public or via a VPC</li><li>We need to create a VPC connector on our custom VPC with only a private subnet to route the traffic</li><li>We can leverage this VPC connector if the container needs to communicate with any compute services running inside the VPC, like EC2 or RDS</li></ul><figure><img alt="Screenshot of AWS console where users can create VPC connector" src="https://cdn-images-1.medium.com/max/611/0*aQmCkMa1soBTrHSa" /></figure><figure><img alt="Screenshot of AWS console where users can create VPC connector" src="https://cdn-images-1.medium.com/max/941/1*rWfr_-tUBTFOXAtoMXwALg.png" /></figure><ul><li>Now validate the configurations on the final page, then click on create and deploy</li></ul><figure><img alt="Screenshot of AWS console where users can create App Runner" src="https://cdn-images-1.medium.com/max/941/1*S35evBcwzzsLQlQ4yRqiQw.png" /></figure><ul><li>App Runner is created and running</li><li>On the main dashboard we can see the App Runner logs, Deployment logs, and Application logs</li></ul><figure><img alt="Screenshot of AWS console where users can see App Runner logs, Deployment logs and Application logs" src="https://cdn-images-1.medium.com/max/941/1*wmGmwcdA6rS0fMuxDhLG1g.png" /></figure><h3>Potential Challenges and Pitfalls</h3><p>As we saw, AWS App Runner is a managed service that simplifies the deployment of containerized applications, allowing developers to focus on their code rather than infrastructure management. However, while it offers ease of use and automation, it may not be the ideal solution for every workload. Here are some limitations to consider:</p><h4>Single-Container Focus</h4><p>App Runner is designed for single-container applications, making it less suitable for complex deployments that require multi-container orchestration. If your application depends on sidecars, caching layers, or service mesh configurations, solutions like Amazon ECS or EKS provide greater flexibility.</p><h4>Limited Customization</h4><p>The abstraction of infrastructure management makes App Runner easy to use, but it also limits control over networking, security policies, and runtime configurations. 
For businesses that require fine-grained tuning of their cloud environment, ECS or EKS may be a better fit.</p><h4>Scaling Constraints</h4><p>While App Runner provides automatic scaling, it has predefined vertical and horizontal scaling limits. This may not be sufficient for high-demand applications that require precise scaling strategies or need to handle unpredictable traffic spikes.</p><h4>Limited Runtime Support</h4><p>Currently, App Runner does not support ARM-based runtimes, restricting its usability for workloads optimized for ARM architectures. If your application relies on ARM for cost or performance benefits, you may need to consider alternative AWS services.</p><p>By understanding these constraints, businesses can make an informed decision about whether AWS App Runner aligns with their operational needs and long-term scalability goals.</p><h3>Conclusion</h3><p>In the digital transformation journey, selecting the right cloud platform is critical. AWS App Runner offers a streamlined solution for deploying and managing containerized applications, automating tasks like infrastructure management and scaling. Designed for single-container applications, it reduces operational complexity, accelerates time-to-market, and minimizes overhead.</p><p>By eliminating the need for manual intervention, App Runner allows businesses to focus on innovation and customer value. Its simplicity and efficiency enable organizations to adapt quickly to changing market demands, ensuring long-term agility and competitiveness. For companies seeking a faster, smarter approach to cloud deployments, AWS App Runner is a game-changer.</p><p>Author: <a href="https://www.linkedin.com/in/nirmal-prabhu/"><em>Nirmal Prabhu, Head Cloud Practice</em></a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VgrhzPl_GfuHrrCHJpYjKQ.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0019dc9ba4c5" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/why-aws-app-runner-outshines-amazon-eks-ecs-for-containerized-web-app-deployments-0019dc9ba4c5">Why AWS App Runner Outshines Amazon EKS/ECS for Containerized Web App Deployments</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Streaming Agent Responses using Bedrock Agent API]]></title>
            <link>https://blog.shellkode.com/streaming-agent-responses-using-bedrock-agent-api-8dcef26476ee?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/8dcef26476ee</guid>
            <category><![CDATA[agentic-applications]]></category>
            <category><![CDATA[customer-experience]]></category>
            <category><![CDATA[aws-bedrock-agent]]></category>
            <category><![CDATA[amazon-bedrock]]></category>
            <category><![CDATA[ai-agent]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Tue, 04 Feb 2025 05:58:52 GMT</pubDate>
            <atom:updated>2025-02-14T12:10:58.862Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XmA24l7lbJFQN32XYaEfSQ.png" /></figure><p>AI agents are quickly replacing traditional automation solutions and chatbots. It’s safe to say that they are the next big thing in the tech landscape that organizations cannot afford to overlook. They bring a level of autonomy and operational efficiency that revolutionize the way businesses operate. While agents demonstrate remarkable capabilities in handling complex tasks, their response times in customer-facing applications can present challenges, leading to frustration and a less engaging experience for users.</p><p>A solution to this challenge is streaming responses, which allows partial results to be displayed as soon as they are available. In this blog, we’ll demonstrate how to implement a streaming solution using AWS Bedrock Agents, Boto3 API, Flask, and ReactJS. We’ll walk you through creating a Flask API to stream responses from Bedrock Agents, building a ReactJS frontend to handle the streaming API, and combining these components into a seamless real-time user experience.</p><h3>Addressing Response Latency with Real-time Streaming</h3><p>When invoking Bedrock Agents, one significant challenge is the response generation time, which can take up to 15–20 seconds depending on the complexity of the request. This latency can create a perception of a slow application and diminish user engagement, as waiting for the complete response delays interaction and reduces communication efficiency. To address this challenge, we implement a streaming solution that delivers responses incrementally to the frontend, allowing users to see partial outputs in real-time, enhancing responsiveness and usability. Let us see how to implement this solution.</p><h4>Prerequisites:</h4><p>Before we dive into the implementation, make sure you have:</p><ul><li>Python 3.10 or higher installed</li><li>Node.js and npm installed</li><li>Any IDE as per your need</li><li>AWS credentials configured on your machine</li><li>Basic understanding of Flask and React</li></ul><h4>Backend: Flask API with Streaming</h4><p>First, let’s set up our Python environment and install the necessary packages.</p><ul><li>Create a virtual environment and activate it:</li></ul><pre>python3 -m venv venv<br>source venv/bin/activate # On Windows: venv\Scripts\activate</pre><ul><li>Install the required Python packages:</li></ul><pre>pip install flask boto3 #And other packages to run your application</pre><ul><li>The backend will use Flask to set up an API endpoint. It streams the agent’s response using AWS Bedrock’s invoke_agent method. 
Here’s a simplified implementation:</li></ul><pre>from flask import Flask, Response, request<br>import boto3<br>import os<br>import json<br><br>app = Flask(__name__)<br><br># Initialize the Bedrock Agent runtime client<br># (invoke_agent lives on &#39;bedrock-agent-runtime&#39;, not &#39;bedrock-runtime&#39;)<br>bedrock_client = boto3.client(&#39;bedrock-agent-runtime&#39;, region_name=&#39;us-east-1&#39;)<br><br>@app.route(&#39;/api/stream-agent-response&#39;, methods=[&#39;POST&#39;])<br>def stream_agent_response():<br>    user_input = request.json.get(&#39;input&#39;)<br>    session_id = request.json.get(&#39;sessionId&#39;)<br>    agent_id = os.getenv(&#39;AGENT_ID&#39;)<br><br>    def generate():<br>        try:<br>            response = bedrock_client.invoke_agent(<br>                agentAliasId=os.getenv(&#39;ALIAS_ID&#39;),<br>                agentId=agent_id,<br>                enableTrace=False,<br>                endSession=False,<br>                inputText=user_input,<br>                sessionId=session_id,<br>                streamingConfigurations={&#39;streamFinalResponse&#39;: True}<br>            )<br><br>            # Relay each chunk to the client as a server-sent event<br>            if response.get(&#39;completion&#39;):<br>                for event_chunk in response[&#39;completion&#39;]:<br>                    if &#39;chunk&#39; in event_chunk and &#39;bytes&#39; in event_chunk[&#39;chunk&#39;]:<br>                        chunk_text = event_chunk[&#39;chunk&#39;][&#39;bytes&#39;].decode(&#39;utf-8&#39;)<br>                        payload = json.dumps({&#39;response&#39;: chunk_text,<br>                                              &#39;sessionId&#39;: session_id})<br>                        yield f&quot;data: {payload}\n\n&quot;<br><br>        except Exception as e:<br>            error_response = json.dumps({&#39;error&#39;: str(e)})<br>            yield f&quot;data: {error_response}\n\n&quot;<br><br>    return Response(generate(), mimetype=&#39;text/event-stream&#39;)<br><br><br>if __name__ == &#39;__main__&#39;:<br>    app.run(debug=True)</pre><p><strong>Key Points</strong></p><ol><li><strong>Generator Function</strong>: Streams chunks of data as they are received from Bedrock</li><li><strong>Error Handling</strong>: Captures exceptions and streams error messages to the client</li><li><strong>Session Management</strong>: Supports unique sessions for different users or conversations</li></ol>
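<p>Before wiring up the frontend, it helps to confirm the endpoint actually streams. Here’s a minimal smoke test using the requests library; the prompt and session ID are placeholder values for illustration:</p><pre>import json<br>import requests<br><br># Placeholder endpoint and payload for a local smoke test<br>resp = requests.post(<br>    &quot;http://127.0.0.1:5000/api/stream-agent-response&quot;,<br>    json={&quot;input&quot;: &quot;What is my order status?&quot;, &quot;sessionId&quot;: &quot;test-session-1&quot;},<br>    stream=True,<br>)<br><br># Each server-sent event arrives as a &#39;data: {...}&#39; line; print chunks as they land<br>for line in resp.iter_lines(decode_unicode=True):<br>    if line and line.startswith(&quot;data: &quot;):<br>        data = json.loads(line[len(&quot;data: &quot;):])<br>        print(data.get(&quot;response&quot;, data.get(&quot;error&quot;)), end=&quot;&quot;, flush=True)</pre><p>If the text prints incrementally rather than arriving in one block, the backend is streaming correctly.</p>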
<h4>Frontend: ReactJS with Streaming API</h4><p>The frontend will consume the streaming API using fetch and display the incremental responses in real time. Here’s an implementation snippet:</p><pre>const handleSendMessage = async (query, languageCode) =&gt; {<br>    const request = {<br>        input: query,<br>        sessionId: sessionId,<br>    };<br><br>    try {<br>        const response = await fetch(`${backendURL}api/stream-agent-response`, {<br>            method: &quot;POST&quot;,<br>            headers: { &quot;Content-Type&quot;: &quot;application/json&quot; },<br>            body: JSON.stringify(request),<br>        });<br><br>        const reader = response.body.getReader();<br>        const decoder = new TextDecoder();<br>        let ongoingBotResponse = &quot;&quot;;<br><br>        while (true) {<br>            const { done, value } = await reader.read();<br>            if (done) break;<br><br>            const chunk = decoder.decode(value, { stream: true });<br>            const messages = chunk.split(&quot;\n\n&quot;);<br><br>            for (const message of messages) {<br>                if (message.trim().startsWith(&quot;data: &quot;)) {<br>                    try {<br>                        const jsonStr = message.trim().slice(6);<br>                        const data = JSON.parse(jsonStr);<br><br>                        if (data.response) {<br>                            ongoingBotResponse += data.response;<br>                            // Update the last agent message in place instead of<br>                            // appending a new entry for every chunk<br>                            setSessionConversation((prev) =&gt; {<br>                                const updated = [...prev];<br>                                const last = updated[updated.length - 1];<br>                                if (last &amp;&amp; last.type === &quot;AGENT&quot;) {<br>                                    updated[updated.length - 1] = { type: &quot;AGENT&quot;, body: ongoingBotResponse };<br>                                } else {<br>                                    updated.push({ type: &quot;AGENT&quot;, body: ongoingBotResponse });<br>                                }<br>                                return updated;<br>                            });<br>                        }<br>                    } catch (e) {<br>                        console.warn(&quot;Error parsing chunk:&quot;, e);<br>                    }<br>                }<br>            }<br>        }<br>    } catch (error) {<br>        console.error(&quot;Streaming error:&quot;, error);<br>    }<br>};</pre><p><strong>Key Points</strong></p><ol><li><strong>Streaming Responses</strong>: Processes chunks of data in real-time</li><li><strong>Error Handling</strong>: Logs errors and avoids breaking the UI</li><li><strong>State Management</strong>: Updates the conversation dynamically</li></ol>
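<p>One deployment detail worth noting: reverse proxies and some WSGI servers buffer responses, which silently turns a stream back into a single blob. A common mitigation on the Flask side, sketched below as an optional tweak, is to send standard no-buffering headers with the response (X-Accel-Buffering specifically targets Nginx):</p><pre># Drop-in variant of the return statement in stream_agent_response():<br># these headers ask browsers and proxies not to cache or buffer the<br># event stream, so chunks reach the client as soon as they are yielded.<br>return Response(<br>    generate(),<br>    mimetype=&#39;text/event-stream&#39;,<br>    headers={<br>        &#39;Cache-Control&#39;: &#39;no-cache&#39;,<br>        &#39;X-Accel-Buffering&#39;: &#39;no&#39;,<br>    },<br>)</pre>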
<h4><strong>End-to-End Workflow</strong></h4><p>Here’s how the components work together:</p><ol><li><strong>User Interaction</strong>: The user submits a query via the ReactJS frontend</li><li><strong>API Call</strong>: The frontend sends the query to the Flask backend</li><li><strong>Streaming Response</strong>: Flask streams partial responses from Bedrock to the frontend</li><li><strong>Real-Time Updates</strong>: ReactJS updates the UI with each chunk of data</li></ol><h3>Conclusion</h3><p>Given their indispensability, agents are here to stay in customer-facing applications, not only for their conversational abilities but also for their adaptability, automation, and contextual understanding.</p><p>Minimizing response delays is crucial for delivering a frictionless user experience. As explored in this blog, streaming agent responses can significantly enhance engagement by reducing perceived latency from 15–20 seconds to just 5–6 seconds. This makes applications feel faster, more responsive, and intuitive.</p><p>By leveraging the power of AWS Bedrock, Flask, and React, you can build highly efficient, real-time AI-driven applications that provide seamless, intelligent, and instant assistance — ensuring users stay engaged and satisfied.</p><p><strong>Author</strong></p><p>Sai Chandan — <a href="http://www.linkedin.com/in/sai-chandan">www.linkedin.com/in/sai-chandan</a></p><p><strong>Contributor</strong></p><p>Bakrudeen — <a href="https://www.linkedin.com/in/bakrudeen-k-6790219b/">https://www.linkedin.com/in/bakrudeen-k-6790219b/</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*78HscmoyA6j3cJpKso5gEg.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8dcef26476ee" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/streaming-agent-responses-using-bedrock-agent-api-8dcef26476ee">Streaming Agent Responses using Bedrock Agent API</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[DeepSeek R1 Benchmark & Comparison Evaluating Performance & Cost Efficiency]]></title>
            <link>https://blog.shellkode.com/deepseek-r1-benchmark-comparison-evaluating-performance-cost-efficiency-35835a41c840?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/35835a41c840</guid>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[benchmarking]]></category>
            <category><![CDATA[deepseek-r1]]></category>
            <category><![CDATA[generative-ai-consulting]]></category>
            <category><![CDATA[genai]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Fri, 31 Jan 2025 19:53:48 GMT</pubDate>
            <atom:updated>2025-02-14T12:14:56.089Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*WJQMv3Wu3kjLATt9VhnyMw.png" /></figure><p>The AI landscape has long been dominated by models that require enormous financial and computational resources, making them accessible only to tech giants. But the rise of DeepSeek is shifting the narrative. By dramatically reducing the cost of training high-performing models, DeepSeek has triggered a wave of innovation, enabling more entrepreneurs, developers, and researchers to explore AI’s potential without breaking the bank. This democratization of LLMs is fueling a new era of AI adoption, sparking fresh competition, and accelerating advancements in generative AI across industries.</p><h3>The Rise of DeepSeek-R1</h3><p>DeepSeek-R1 has rapidly emerged as a game-changer in the AI space. Unlike its costly, proprietary counterparts, DeepSeek-R1 is an open-source marvel, making advanced AI capabilities accessible to a much wider audience. The model was trained using just 2,048 Nvidia H800 GPUs over two months, at a remarkably low cost of $5.6 million. To put this in perspective, many of the top models in the industry require training budgets that reach well over $100 million. By adopting model distillation, chain-of-thought prompting, and reinforcement learning, DeepSeek has effectively slashed the cost barrier, proving that high-performance AI doesn’t have to come with an astronomical price tag.</p><h3>Breaking Down the DeepSeek Models</h3><p>To bring you this analysis, we went deep into our own pockets (seriously, our wallets are crying) to benchmark these models across key tasks: reasoning, coding, creative writing, and overall cost. Here’s what we found:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ciRXUCT0YFsblBtIsv06Yw.png" /></figure><ul><li>1 — Excellent</li><li>2 — Satisfactory</li><li>3 — Could Be Better</li></ul><p>So, after burning a hole in our budget, here’s what we learned: If you’re looking for a model that delivers excellent reasoning, coding, and creative writing, the 671B variant is king — but it comes at a premium price. On the other hand, if cost-efficiency is your top priority, the Qwen-1.5B version is an absolute steal, though it struggles with more complex tasks. The Llama 8B variant strikes a middle ground, making it a solid choice for balanced workloads.</p><h3>Choosing the Right Model for Your Needs</h3><ul><li><strong>DeepSeek-R1-Distill-Qwen-1.5B</strong> 🏗️ <em>Best for</em>: Low-cost applications, basic chatbot interactions, and lightweight text analysis. <em>Why?</em> Cheap to run but struggles with complex reasoning or creative writing.</li><li><strong>DeepSeek-R1-Distill-Llama-8B</strong> ⚖️ <em>Best for</em>: Mid-tier AI applications, general-purpose coding, and reasonable creative tasks. <em>Why?</em> Balances cost and performance without breaking the bank.</li><li><strong>DeepSeek-R1-671B</strong> 🚀 <em>Best for</em>: High-end AI applications, enterprise-level reasoning, and top-tier generative tasks. <em>Why?</em> Unmatched power, but at a hefty price.</li></ul><h3>Final Thoughts</h3><p>DeepSeek isn’t just another AI model — it’s a movement towards making cutting-edge AI more accessible. Whether you’re a solo entrepreneur experimenting with LLMs or a company looking to integrate AI into critical workflows, there’s a DeepSeek model that fits your needs. 
And as long as they keep pushing boundaries (and helping us keep a few extra bucks in our pockets), we’re here for it!</p><p><strong>Author</strong></p><p>Bakrudeen — <a href="https://www.linkedin.com/in/bakrudeen-k-6790219b/">https://www.linkedin.com/in/bakrudeen-k-6790219b/</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jYnhF-DM38ORVftwB6PWVw.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=35835a41c840" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/deepseek-r1-benchmark-comparison-evaluating-performance-cost-efficiency-35835a41c840">DeepSeek R1 Benchmark &amp; Comparison Evaluating Performance &amp; Cost Efficiency</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Embracing GenBI with Data Stories and Amazon Q Business Integration with QuickSight]]></title>
            <link>https://blog.shellkode.com/embracing-genbi-with-data-stories-and-amazon-q-business-integration-with-quicksight-dcb62f9dc52e?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/dcb62f9dc52e</guid>
            <category><![CDATA[business-intelligence]]></category>
            <category><![CDATA[amazon-quicksight]]></category>
            <category><![CDATA[data-analytics]]></category>
            <category><![CDATA[amazon-q-business]]></category>
            <category><![CDATA[genbi]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Thu, 09 Jan 2025 08:48:23 GMT</pubDate>
            <atom:updated>2025-01-09T08:48:03.924Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Embracing GenBI with Data Stories and Amazon Q Business Integration with QuickSight" src="https://cdn-images-1.medium.com/max/1024/1*fABL-6q_EkXcP1ITr3MP2A.png" /></figure><p><strong><em>Author</em></strong><em>: </em><a href="https://www.linkedin.com/in/manasa-kallakuri-34b2a1141/"><em>Manasa K, Head Data Practice</em></a><em> | </em><strong><em>Co-Author</em></strong><em>: </em><a href="https://www.linkedin.com/in/sruthi-ramaiah-9b8a40198/"><em>Sruthi</em></a></p><p>In today’s data-driven world, businesses face a critical challenge in extracting meaningful insights from unstructured data such as documents, emails, and reports, which often complement structured datasets. While traditional BI tools excel at analyzing structured data, they fall short when it comes to incorporating unstructured data sources, limiting the depth of analysis and decision-making. AWS is addressing this challenge by integrating Amazon Q Business with QuickSight, leveraging Generative AI to enable businesses to analyze both structured and unstructured data across various formats, unlocking a more comprehensive and insightful view of their data.</p><p>In this second blog of the series <strong>Conversational AI and Semantic Insights, a New Era in BI,</strong> we will explore Data Stories and Amazon Q Business Integration with QuickSight. Discover how this latest advancement helps in creating a unified experience for advanced decision-making.</p><h3>Amazon Q Business: Bridging the Gap Between Structured and Unstructured Data</h3><p>For a long time, the focus has been on extracting valuable insights from structured data, while equally valuable, if not more valuable, insights from unstructured data remained untapped. Unstructured data, combined with structured data, can be a treasure trove of information for organisations. So naturally, integrating both ensures a complete view for decision-making. Amazon Q Business integration with QuickSight effectively bridges the gap by seamlessly combining insights from structured data, such as databases and spreadsheets, with unstructured data from PDFs and other document formats.</p><p>Let’s say a business analyzing internal sales data and customer feedback also wants to understand external factors like market trends or competitor actions, with much of this data in unstructured formats like PDFs. With Q Business in the picture, the business can easily extract and analyze insights from both structured and unstructured data, enabling a more comprehensive understanding of the factors influencing its performance and helping to drive more informed, strategic decisions.</p><h3>Crafting Compelling Narratives with Data Stories</h3><p>The amount of data you have does not directly determine its value; it’s how effectively you leverage it that influences your potential for growth. While Amazon QuickSight helps turn data into clear, actionable insights through simple and interactive visuals, sometimes narrative-driven visualizations can add a touch of cohesive storytelling. This is exactly what Amazon QuickSight’s ‘Stories’ feature helps with, enabling you to present your findings in an engaging and easy-to-understand format, great for explaining data to non-technical audiences.</p><p>Now let us see how Data Stories work.</p><h4>How Data Story Works</h4><ol><li>Go to the <strong>Data Stories </strong>section and click New Data Story</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WFL79hVkGv864MxG" /></figure><p>2. Enter a Prompt</p>
<p>In the data story builder, you can provide a prompt that specifies the focus of the story. For instance, describe the data insights you want to explore: we can prompt it to generate a data story around revenue growth analysis, customer segmentation, and so on, selecting relevant visuals based on the narrative or executive summary you want to generate.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UQYdG_BcOhzXg-Ds" /></figure><p>3. Add Visuals: choose the dashboard containing the visuals you need</p><p>4. Select the visuals to include (up to 20)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*i83kkrUD1iMWSJ6b" /></figure><h3>How to use Q Business with QuickSight?</h3><p>As you can see, Data Stories brings a new way of storytelling with compelling narratives, introducing dynamic and interactive elements that make data exploration seamless. However, Data Stories is limited to the data available in QuickSight, and this is where integration with Q Business plays a key role in bringing in your external data, specifically unstructured data, for more holistic storytelling.</p><p>To get started with Amazon Q Business in QuickSight, the first step is to set up an application within Q Business for QuickSight, which will allow you to integrate and leverage unstructured data alongside your structured datasets for more powerful insights and analytics.</p><p>5. Go to Amazon Q Business in the AWS console and create an application</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*IcH6Fj8LOzXzf0_e" /></figure><p>6. Upload all the required documents to the Q Business application, or connect to the data source of your choice</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/940/1*Fn5FQCY4x1EQqmq75vhKNQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*1teW2q7kco_3l6Y5" /></figure><p>7. You will see the uploaded files as below. For example, the uploaded file here has employee feedback data: feedback from managers, peers, and self-assessments</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/941/1*3oF94vFYVQCWparUlly9lw.png" /></figure><p>We can integrate this Q Business Application in QuickSight as below.</p><p>8. Navigate to the Manage QuickSight option in the QuickSight console</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZQB2maiMRc91rELM" /></figure><p>9. In the Security &amp; Permissions section, click on the Manage option</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7jZCVRSWLtkuFZuH" /></figure><p>10. Choose Amazon Q Business and select the application created</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6svX_vZPeQ2tzYnR" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CyV803joQ5Xhc1lU" /></figure><p>Now the Q Business Application is integrated with QuickSight.</p>
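<p>Step 5 can also be scripted if you are provisioning environments repeatedly. The sketch below assumes boto3’s qbusiness client and its create_application call; the display name, role ARN, and account ID are placeholders, and the remaining setup (index, data source, document upload) still follows the console steps above.</p><pre>import boto3<br><br># Placeholder names and ARNs for illustration only<br>qbusiness = boto3.client(&#39;qbusiness&#39;, region_name=&#39;us-east-1&#39;)<br><br>app = qbusiness.create_application(<br>    displayName=&#39;quicksight-insights-app&#39;,<br>    roleArn=&#39;arn:aws:iam::123456789012:role/QBusinessServiceRole&#39;,<br>    description=&#39;Q Business application backing QuickSight data stories&#39;,<br>)<br><br># The application ID is what you will select later in QuickSight<br>print(app[&#39;applicationId&#39;])</pre>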
<p>Amazon Q Business integration can be leveraged in topics and data stories in QuickSight. We will talk about both the integrations below.</p><h4><strong>Q Business Integration with Data Stories</strong></h4><p>By integrating structured and unstructured data into Data Stories, users gain access to more comprehensive insights, ensuring key context isn’t missed. This capability allows organizations to create richer, more impactful data stories. Enable the <strong>Use insights from Amazon Q Business</strong> option, so that the narrative is generated using the unstructured data in Q Business as well</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EPCEyUbJKsm_BG5S" /></figure><ol><li>Click <strong>Build</strong> to generate your story. You can review and either keep, edit, or discard the draft</li></ol><p>2. This part of the summary is from the selected visuals and the prompt added</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*T-RVAfg-4mUBMrG1" /></figure><p>3. This part of the summary generated in the data story is from the Q Business integration</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/989/0*s31RVDXhungHDrLx" /></figure><h4><strong>Q Business Integration with QuickSight Q Topics</strong></h4><p>To generate detailed insights from PDF documents along with QuickSight Q, we need to enable Q Business within the topic settings to unlock enhanced insights.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ux-ZPD4TU-e5PiSO" /></figure><p>In our scenario, we are going to analyze Employee Performance, where we have employee ratings, projects, KPIs, etc. in the datasets, and we have feedback from managers, peers, and self-assessments in an unstructured text file in Q Business. Now let’s see how the integrations work.</p><p>Below, we asked a query about the top-performing employee, and the following is the response generated through QuickSight Q.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3eDK99d0vAUjPInB" /></figure><p>Along with insights from text documents, it also provides information on what source files it has utilised to generate the insights.</p><h3>Potential Technical and Business Pitfalls</h3><p>While the integration of Amazon Q Business with QuickSight offers powerful capabilities, it is imperative to consider the following points when adopting this solution to ensure efficient data processing.</p><p><strong>The larger the dataset, the longer the processing time</strong></p><p>The process of generating outputs can be slow, as the system often requires considerable time to process and aggregate large datasets, particularly when complex queries or multiple data sources are involved. This is common in systems that combine machine learning-driven insights with large-scale data analytics.</p><p><strong>Accuracy issues due to overlapping data sources</strong></p><p>When two documents provide overlapping data, Amazon Q Business may face certain challenges in selecting the correct one, leading to random data source selection. This happens because the model might not properly resolve data conflicts between sources. Although QuickSight identifies the data source in the output, it still presents potential issues when multiple sources have similar or identical content. This may occur when the metadata and contextual rules around data are not robust enough to enforce consistency in selection.</p><h3><strong>Conclusion</strong></h3><p>As we saw in this blog, the integration of Amazon Q Business with QuickSight enables organisations to seamlessly analyze and extract insights from both structured and unstructured data types, paving the way for a more holistic approach. 
Further, the introduction of advanced features like data stories marks yet another significant stride in the business intelligence landscape by AWS.</p><p>Evolving from NLP-powered BI to more advanced, AI-driven insights, QuickSight Q has set the stage for Generative BI. Want to learn more about QuickSight Q and how you can enhance and tune the output from it, and eventually drive better decision-making? Check out our previous blog <a href="https://blog.shellkode.com/mastering-quicksight-q-setting-the-stage-for-semantic-insights-cdc1e1cc024a"><strong>Mastering QuickSight Q: Setting the Stage for Semantic Insights</strong></a> by <a href="https://www.linkedin.com/in/manasa-kallakuri-34b2a1141/">Manasa K </a>and <a href="https://www.linkedin.com/in/priyanka-gopalakrishnan-31079125a/">Priyanka Gopalakrishnan</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dcb62f9dc52e" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/embracing-genbi-with-data-stories-and-amazon-q-business-integration-with-quicksight-dcb62f9dc52e">Embracing GenBI with Data Stories and Amazon Q Business Integration with QuickSight</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mastering QuickSight Q: Setting the Stage for Semantic Insights]]></title>
            <link>https://blog.shellkode.com/mastering-quicksight-q-setting-the-stage-for-semantic-insights-cdc1e1cc024a?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/cdc1e1cc024a</guid>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[generative-ai-solution]]></category>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[amazon-quicksight]]></category>
            <dc:creator><![CDATA[ShellKode Blogs]]></dc:creator>
            <pubDate>Mon, 16 Dec 2024 06:19:10 GMT</pubDate>
            <atom:updated>2024-12-13T13:21:24.110Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A8xtSnzriLUDPGv77ZDsGQ.png" /></figure><p><strong><em>Author</em></strong><em>: </em><a href="https://www.linkedin.com/in/manasa-kallakuri-34b2a1141/"><em>Manasa K, Head Data Practice</em></a><em> | </em><strong><em>Co-Author</em></strong><em>: </em><a href="https://www.linkedin.com/in/priyanka-gopalakrishnan-31079125a/"><em>Priyanka Gopalakrishnan</em></a></p><p>Without question, companies leveraging data insights thrive with improved operational efficiency and profitability. As such, the increasing need for enterprises to effectively leverage data, coupled with paradigm shifts in technology, is reshaping the business intelligence landscape, more so in recent times. This evolution is marked by a move toward a semantic understanding of data and intelligence, enabling deeper insights and more informed decision-making.</p><p>For instance, the emergence of Generative AI has revolutionized business intelligence tools, offering capabilities far beyond traditional interactive dashboards. These tools now enable processing complex natural language queries, allowing users to “chat” with their data to uncover insights effortlessly. AWS is at the forefront of this transformation with Amazon QuickSight, introducing innovative features like Amazon Q in QuickSight, Data Stories, Data Scenarios, and integration with Amazon Q Business. These advancements empower users to engage with their data more intuitively.</p><p>As we move towards a new approach in data analytics, this marks the beginning of a transformation in how businesses interact with their data. While we will be delving into all things new about QuickSight in the upcoming blogs, in this first edition of a three-part blog series, we will explore more about Q in QuickSight and the steps to enhance and tune the output from QuickSight Q.</p><h3>What is Amazon Q in QuickSight?</h3><p>Amazon QuickSight is a powerful, cloud-native BI service offered by AWS. It allows businesses to create interactive dashboards and visualize data from multiple sources. The latest version, <strong>Amazon Q in QuickSight</strong>, integrates generative AI capabilities to significantly enhance the user experience by enabling natural language queries. This integration transforms how users interact with their data, making it easier to access, analyze, and derive insights without needing advanced technical skills. With Amazon Q in QuickSight, businesses can now explore their data through natural language, bringing a new level of accessibility and efficiency to business intelligence.</p><h3>Getting Started with QuickSight Q</h3><p>Now that we’ve introduced the transformative potential of Amazon Q in QuickSight, let’s dive deeper into how you can start leveraging this tool to maximize your data analysis capabilities. One of the fundamental concepts in QuickSight Q is topics, which are the backbone of how the system understands and processes natural language queries. A topic includes metadata, user-friendly field names, synonyms, and filters to make the data more intuitive and relevant for specific business use cases.</p><p>Creating and managing these topics effectively is crucial to making the most out of QuickSight Q and having a better user experience. 
A topic can be mapped to one dataset or multiple datasets with relevant data, so business users can get relevant insights.</p><ul><li>When working with multiple datasets in QuickSight Q, you might typically create separate topics for each dataset and switch between topics every time you need to ask a question specific to a dataset.</li><li>However, this can be time-consuming and inconvenient. Instead, you can simplify this process by creating a single topic that includes all the datasets that are relevant and needed.</li><li>Add multiple datasets by clicking on <strong>“ADD DATASETS”</strong> in the DATASETS section.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*O09U0Z7Je8GlvTMk" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Z6-ip7ZQ6H9tIwvf" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tSfMQJJMVIsYxubQ" /></figure><ul><li>With this functionality, we can seamlessly ask queries without switching topics, allowing for more efficient analysis across datasets.</li></ul><h3>Tailoring QuickSight Q to Fit your Data and Business needs</h3><p>Data holds greater value when it is not just available, but accurate, accessible, and actionable. Tailoring Amazon Q in QuickSight to suit your business needs plays a key role in unlocking its value by delivering the most relevant insights — like a genie by your side, giving the right data at the right time, just the way you want to see it.</p><p>Now let us explore how to fine-tune QuickSight Q effectively.</p><h4>The Role of Context in Simplifying Data Retrieval and Analysis</h4><p>Sifting through data without any context can be inefficient and overwhelming. Context, however, provides clarity by offering additional information about the data, like its nature and the type of information it contains. This organized approach enables QuickSight Q to gain a deeper understanding of your data, allowing it to better interpret and respond to queries with accuracy.</p><p><strong>Synonyms</strong></p><p>Adding synonyms for columns makes the data more accessible and lets users make queries using alternative words or phrases.</p><p>For instance, the column <strong><em>Employee ID </em></strong>could have synonyms like <em>ID </em>or <em>Code</em>, allowing users to ask questions in multiple ways and still get accurate results. This flexibility ensures a smoother and more intuitive user experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*i3hk3eSjl1n15SSs" /></figure><p><strong>Semantics</strong></p><p>Assigning semantic types to fields is one of the key ways to improve the quality of answers in QuickSight Q. Doing so helps QuickSight Q better understand how to interpret and utilize your data effectively, ensuring more accurate and contextually relevant responses.</p><p>Semantic types define critical aspects of a field, including field role, data type, default aggregation and additional context.<strong> </strong>By regularly updating and refining the semantic types for your fields, you empower QuickSight Q to deliver precise, high-quality answers, creating a smoother and more intuitive user experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*USooJFLlFnnXLDAg" /></figure>
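<p>For teams managing many topics, this context does not have to be maintained by hand. The sketch below assumes boto3’s quicksight client and the CreateTopic API; the account ID, ARNs, and column names are placeholders, and the field names follow the TopicColumn structure as we understand it, so treat it as a starting point rather than a definitive recipe.</p><pre>import boto3<br><br># Placeholder account ID, topic ID, and dataset ARN<br>quicksight = boto3.client(&#39;quicksight&#39;, region_name=&#39;us-east-1&#39;)<br><br>quicksight.create_topic(<br>    AwsAccountId=&#39;123456789012&#39;,<br>    TopicId=&#39;employee-insights&#39;,<br>    Topic={<br>        &#39;Name&#39;: &#39;Employee Insights&#39;,<br>        &#39;Description&#39;: &#39;Topic over the HR dataset with friendly names&#39;,<br>        &#39;DataSets&#39;: [{<br>            &#39;DatasetArn&#39;: &#39;arn:aws:quicksight:us-east-1:123456789012:dataset/hr-data&#39;,<br>            &#39;DatasetName&#39;: &#39;hr-data&#39;,<br>            &#39;Columns&#39;: [{<br>                &#39;ColumnName&#39;: &#39;employee_id&#39;,<br>                &#39;ColumnFriendlyName&#39;: &#39;Employee ID&#39;,<br>                # Synonyms let users ask with alternative words<br>                &#39;ColumnSynonyms&#39;: [&#39;ID&#39;, &#39;Code&#39;],<br>                &#39;ColumnDescription&#39;: &#39;Unique identifier assigned to each employee&#39;,<br>                &#39;ColumnDataRole&#39;: &#39;DIMENSION&#39;,<br>            }],<br>        }],<br>    },<br>)</pre>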
<p><strong>Description</strong></p><p>We can enhance QuickSight Q’s ability to understand and retrieve accurate data by providing a description of each field. Serving as a bridge between the dataset and user queries, these descriptions, without doubt, improve both the accuracy of results and the overall user experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*A7YEPEtAFbeqDGxY" /></figure><h3>Enhancing Q’s Understanding with Business Terms</h3><p>Business-specific terms can sometimes, if not always, be confusing or complex to interpret if not properly defined. Defining these terms allows QuickSight Q to better interpret queries and provide results tailored to your business context. It effectively bridges the gap between technical data and business terms, improving the precision of your reports and making the analysis more intuitive.</p><p><strong>Named Entity</strong></p><p>Group relevant columns together using named entities in QuickSight Q for better query performance and accuracy. This approach simplifies querying by providing a structured way to reference and organize related data fields, ensuring more efficient and precise results.</p><p><strong>Without adding a named entity:</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*GUGsufjfxw8KYbqY" /></figure><p><strong>Adding named entity:</strong></p><p>For example, under a Product Named Entity, you can configure the following columns:</p><ul><li>Category</li><li>Discount</li><li>Price</li><li>Rating</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*P7qZbHSnO5Z2MaOP" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*iNHXuV3CGEtZdM5t" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tKgVnj3My1DYKq8S" /></figure><ul><li>By leveraging Named Entities, whenever you ask a question related to Category, QuickSight Q will directly pull the configured columns instead of searching randomly through all 100 columns.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*x2Y_iNrvnXohO7Jk" /></figure><h3>Improving the Reliability of Insights Generated by QuickSight Q</h3><p>Sure, you can simplify data retrieval by adding context and enable Q to better understand your queries using named entities to get the results you want. But how can you ensure optimised query performance, allowing QuickSight Q to refine how it processes similar questions in the future and ensuring the reliability of responses? Well, there are not one, but two ways to do that too.</p><p><strong>Feedback</strong></p><ul><li>QuickSight Q allows users to provide feedback so it can learn and improve over time, refining how it processes similar questions in the future. If the results do not meet expectations, users can send feedback with suggestions on how the system could better match the query intent.</li><li>Positive feedback reinforces the system’s understanding of user preferences and ensures consistent performance across similar queries.</li><li>By actively providing feedback — whether for improvements or validations — you contribute to optimising query responses, making the tool more effective and reliable for all users.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*1D-_cfRZpfaPJ8Yw" /></figure><p><strong>Mark as Verified</strong></p><p>As you go about making queries and getting accurate results, also make sure to verify answers to the questions asked. Verified answers are prioritized in search results, helping users quickly find reliable information. 
Additionally, the Mark as Verified option plays a key role in driving constant improvements by helping users track how often questions are asked and whether users find the answers helpful.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AofbHz7rIxye_3v6" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5CvM-OZ7vpw36KFA" /></figure><h3>Conclusion</h3><p>As outlined in this blog, following these best practices optimizes the output from QuickSight Q, and enhances the value of your data, driving better decision-making and, ultimately, better business outcomes. QuickSight Q continues to evolve, becoming smarter and all the more powerful, not only addressing the limitations of conventional BI tools but also paving the way for a deeper understanding of data.</p><p>To continue your journey toward a deeper understanding of your data, stay tuned for the upcoming blogs, where we will explore the latest advancements in QuickSight Q, including Data Stories and Data Scenarios, and how they can further elevate your business intelligence capabilities.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cdc1e1cc024a" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/mastering-quicksight-q-setting-the-stage-for-semantic-insights-cdc1e1cc024a">Mastering QuickSight Q: Setting the Stage for Semantic Insights</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[ZeroETL — DynamoDB to OpenSearch Serverless Integration with CFT]]></title>
            <link>https://blog.shellkode.com/zeroetl-dynamodb-to-opensearch-serverless-integration-with-cft-a0580fc90765?source=rss----1137b34251a7---4</link>
            <guid isPermaLink="false">https://medium.com/p/a0580fc90765</guid>
            <category><![CDATA[opensearch]]></category>
            <category><![CDATA[dynamodb]]></category>
            <category><![CDATA[zero-etl]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[data-pipeline]]></category>
            <dc:creator><![CDATA[Bhuvanesh]]></dc:creator>
            <pubDate>Mon, 19 Aug 2024 05:42:32 GMT</pubDate>
            <atom:updated>2024-11-05T08:04:17.168Z</atom:updated>
            <content:encoded><![CDATA[<h3>ZeroETL — DynamoDB to OpenSearch Serverless Integration with CFT</h3><p>Amazon OpenSearch Service, a managed service for OpenSearch, provides fully managed and scalable search capabilities. It also offers a serverless variant to run workloads at any scale. OpenSearch Ingestion pipelines provide out-of-the-box Zero ETL to sync data from multiple data sources. Zero ETL is not a new technology; AWS already introduced Aurora-to-Redshift replication using Zero ETL, and it became a huge success. They have since introduced many Zero ETL integrations with various data services. Here we are going to set up a Zero ETL pipeline from Amazon DynamoDB to the Amazon OpenSearch service using an OpenSearch pipeline, and use a CloudFormation template to deploy it. Even though it&#39;s Zero ETL, we need to set up a pipeline configuration in JSON/YAML format. Behind the scenes, AWS uses OpenSearch Data Prepper to integrate the DynamoDB data into OpenSearch.</p><h4>OpenSearch Serverless Concept:</h4><ul><li>Collection — An OpenSearch Serverless collection is a group of OpenSearch indexes.</li><li>Data Access Policy — Fine-grained access control at the collection and index levels.</li><li>Network Access Policy — Controls the traffic internally (VPC via endpoints) and externally (public).</li><li>Pipeline — Data ingestion pipeline from various sources.</li></ul><h4>Create an OpenSearch collection:</h4><p>This is very straightforward: if we do it via the console, we can give the collection name and optional fields for the network policy and data access policy.</p><p>We’ll not cover the basic stuff like creating the VPC; the template just creates one security group. VPC and KMS details have to be provided while launching the template.</p><p>But it will create a new VPC endpoint on the spot, so make sure you don&#39;t have any existing endpoints, because only one VPC endpoint is allowed per VPC.</p><pre>AWSTemplateFormatVersion: &#39;2010-09-09&#39;<br>Description: &#39;CloudFormation template for OpenSearch Serverless Collection&#39;<br><br>Parameters:<br>  VpcId:<br>    Type: AWS::EC2::VPC::Id<br>    Description: VPC ID where the OpenSearch collection will be deployed<br><br>  VpcCidr:<br>    Type: String<br>    Description: CIDR block of the VPC<br><br>  PrivateSubnetIds:<br>    Type: List&lt;AWS::EC2::Subnet::Id&gt;<br>    Description: List of private subnet IDs for the VPC endpoint<br><br>  KmsKeyId:<br>    Type: String<br>    Description: KMS Key ID for encryption<br><br>  CollectionDescription:<br>    Type: String<br>    Description: Description of the OpenSearch collection<br><br>  CollectionName:<br>    Type: String<br>    Description: Name of the OpenSearch collection<br><br>  EtlRoleArn:<br>    Type: String<br>    Description: ARN of the ETL role<br><br>Resources:<br>  OpenSearchSecurityGroup:<br>    Type: AWS::EC2::SecurityGroup<br>    Properties:<br>      GroupDescription: Security group for OpenSearch VPC endpoint<br>      VpcId: !Ref VpcId<br>      SecurityGroupIngress:<br>        - IpProtocol: tcp  # HTTPS traffic; &#39;https&#39; is not a valid IpProtocol value<br>          FromPort: 443<br>          ToPort: 443<br>          CidrIp: !Ref VpcCidr<br>      SecurityGroupEgress:<br>        - IpProtocol: -1<br>          FromPort: -1<br>          ToPort: -1<br>          CidrIp: 0.0.0.0/0<br><br>  OpenSearchVPCEndpoint:<br>    Type: AWS::OpenSearchServerless::VpcEndpoint<br>    Properties:<br>      Name: !Sub &#39;${CollectionName}-vpcendpoint-os&#39;<br>      VpcId: !Ref VpcId<br>      SubnetIds: !Ref PrivateSubnetIds<br>      SecurityGroupIds: <br>        - !Ref 
OpenSearchSecurityGroup<br><br>  OpenSearchEncryptionPolicy:<br>    Type: AWS::OpenSearchServerless::SecurityPolicy<br>    Properties:<br>      Name: !Sub &#39;${CollectionName}-encryption-policy-os&#39;<br>      Type: encryption<br>      Description: Encryption policy for OpenSearch collection<br>      Policy: !Sub |<br>        {<br>          &quot;Rules&quot;:[<br>            {<br>              &quot;ResourceType&quot;:&quot;collection&quot;,<br>              &quot;Resource&quot;:[<br>                &quot;collection/${CollectionName}&quot;<br>              ]<br>            }<br>          ],<br>          &quot;AWSOwnedKey&quot;:false,<br>          &quot;KmsARN&quot;:&quot;${KmsKeyId}&quot;<br>        }<br><br>  OpenSearchServerlessCollection:<br>    Type: AWS::OpenSearchServerless::Collection<br>    DependsOn: OpenSearchEncryptionPolicy<br>    Properties:<br>      Name: !Ref CollectionName<br>      Description: !Ref CollectionDescription<br>      Type: SEARCH<br>      StandbyReplicas: ENABLED<br><br>  OpenSearchServerlessAccessPolicy:<br>    Type: AWS::OpenSearchServerless::AccessPolicy<br>    Properties:<br>      Name: !Sub &#39;${CollectionName}-access-policy&#39;<br>      Type: data<br>      Description: Access policy for OpenSearch collection<br>      Policy: !Sub |<br>        [<br>          {<br>            &quot;Rules&quot;:[<br>              {<br>                &quot;Resource&quot;:[<br>                  &quot;collection/${CollectionName}&quot;<br>                ],<br>                &quot;Permission&quot;:[<br>                  &quot;aoss:CreateCollectionItems&quot;,<br>                  &quot;aoss:DeleteCollectionItems&quot;,<br>                  &quot;aoss:UpdateCollectionItems&quot;,<br>                  &quot;aoss:DescribeCollectionItems&quot;<br>                ],<br>                &quot;ResourceType&quot;:&quot;collection&quot;<br>              },<br>              {<br>                &quot;Resource&quot;:[<br>                  &quot;index/${CollectionName}/*&quot;<br>                ],<br>                &quot;Permission&quot;:[<br>                  &quot;aoss:CreateIndex&quot;,<br>                  &quot;aoss:DeleteIndex&quot;,<br>                  &quot;aoss:UpdateIndex&quot;,<br>                  &quot;aoss:DescribeIndex&quot;,<br>                  &quot;aoss:ReadDocument&quot;,<br>                  &quot;aoss:WriteDocument&quot;<br>                ],<br>                &quot;ResourceType&quot;:&quot;index&quot;<br>              }<br>            ],<br>            &quot;Principal&quot;:[<br>              &quot;${EtlRoleArn}&quot;<br>            ],<br>            &quot;Description&quot;:&quot;etl&quot;<br>          }<br>        ]<br><br>  OpenSearchNetworkPolicy:<br>    Type: AWS::OpenSearchServerless::SecurityPolicy<br>    Properties:<br>      Name: !Sub &#39;${CollectionName}-network-policy-os&#39;<br>      Type: network<br>      Description: Network policy for OpenSearch collection<br>      Policy: !Sub |<br>        [<br>          {<br>            &quot;Rules&quot;:[<br>              {<br>                &quot;ResourceType&quot;:&quot;collection&quot;,<br>                &quot;Resource&quot;:[<br>                  &quot;collection/${CollectionName}&quot;<br>                ]<br>              },<br>              {<br>                &quot;ResourceType&quot;:&quot;dashboard&quot;,<br>                &quot;Resource&quot;:[<br>                  &quot;collection/${CollectionName}&quot;<br>                ]<br>              }<br>            ],<br>            
&quot;AllowFromPublic&quot;:false,<br>            &quot;SourceVPCEs&quot;:[<br>              &quot;${OpenSearchVPCEndpoint}&quot;<br>            ]<br>          }<br>        ]<br><br>Outputs:<br>  CollectionId:<br>    Description: ID of the created OpenSearch Serverless Collection<br>    Value: !Ref OpenSearchServerlessCollection<br><br>  VpcEndpointId:<br>    Description: ID of the created VPC Endpoint<br>    Value: !Ref OpenSearchVPCEndpoint<br><br>  SecurityGroupId:<br>    Description: ID of the created Security Group<br>    Value: !Ref OpenSearchSecurityGroup</pre><h4>Configure DynamoDB Table:</h4><p>To support ZeroETL from DynamoDB to an OpenSearch collection, we need to enable Point-in-Time Recovery and Streams on the DynamoDB side.</p><ul><li>Point-in-time recovery will export the existing data from DynamoDB to the S3 bucket.</li><li>Streams will handle the change data capture (CDC) and the continuous replication to OpenSearch.</li></ul><blockquote>DynamoDB Streams only stores data in a log for up to 24 hours. If ingestion from an initial snapshot of a large table takes 24 hours or more, there will be some initial data loss. To mitigate this data loss, estimate the size of the table and configure appropriate compute units of OpenSearch Ingestion pipelines.</blockquote><p>Go to DynamoDB, select your table, and navigate to Backups.</p><p>Under the point-in-time recovery, click on Edit and turn on the point-in-time recovery feature.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GRvsTO-iCKldfbsf64zWmA.png" /></figure><p>Now, go to the Exports and Streams options, then turn on the streams.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ir-LUQEgJcyy1da2IQsQSw.png" /></figure><h4>Create OpenSearch Pipeline from DynamoDB:</h4><p>We created a CloudFormation template to provision the OpenSearch pipeline.</p><ol><li>This will create an IAM Role for the pipeline that will do the DynamoDB export to S3.</li><li>It requires KMS access to encrypt the exported data files.</li><li>Create a Network access policy and add the VPC endpoint</li></ol><pre>AWSTemplateFormatVersion: &#39;2010-09-09&#39;<br>Description: &#39;CloudFormation template for DynamoDB to OpenSearch pipeline (Zero ETL)&#39;<br><br>Parameters:<br>  DynamoDBTableName:<br>    Type: String<br>    Description: &#39;Name of the DynamoDB table&#39;<br><br>  S3BucketName:<br>    Type: String<br>    Description: &#39;Name of the S3 bucket&#39;<br><br>  S3Prefix:<br>    Type: String<br>    Description: &#39;S3 prefix for DynamoDB export&#39;<br><br>  OpenSearchIndex:<br>    Type: String<br>    Description: &#39;OpenSearch index name - LowerCase Only&#39;<br><br>  DLQPrefix:<br>    Type: String<br>    Description: &#39;DLQ prefix for OpenSearch&#39;<br><br>  OpenSearchCollectionId:<br>    Type: String<br>    Description: &#39;OpenSearch Collection ID&#39;<br><br>  SubnetIds:<br>    Type: List&lt;AWS::EC2::Subnet::Id&gt;<br>    Description: &#39;List of Subnet IDs for the pipeline&#39;<br><br>  SecurityGroupId:<br>    Type: AWS::EC2::SecurityGroup::Id<br>    Description: &#39;Security Group ID for the pipeline&#39;<br>  <br>  PipelineName:<br>    Type: String<br>    Description: &#39;Name of the Pipeline - lowercase, numbers and hyphens&#39;   <br><br>  ZeroETLRoleName:<br>    Type: String<br>    Description: &#39;Name of the role&#39;   <br><br>  NetworkPolicyName:<br>    Type: String<br>    Description: &#39;Name of the NetworkPolicyName&#39;   <br><br><br>Resources:<br>  ZeroETLRole:<br>    Type: 
&#39;AWS::IAM::Role&#39;<br>    Properties:<br>      RoleName: !Ref ZeroETLRoleName<br>      AssumeRolePolicyDocument:<br>        Version: &#39;2012-10-17&#39;<br>        Statement:<br>          - Effect: Allow<br>            Principal:<br>              Service:<br>                - osis-pipelines.amazonaws.com<br>            Action: &#39;sts:AssumeRole&#39;<br>      Policies:<br>        - PolicyName: DynamoDBExportPolicy<br>          PolicyDocument:<br>            Version: &#39;2012-10-17&#39;<br>            Statement:<br>              - Sid: allowRunExportJob<br>                Effect: Allow<br>                Action:<br>                  - &#39;dynamodb:DescribeTable&#39;<br>                  - &#39;dynamodb:DescribeContinuousBackups&#39;<br>                  - &#39;dynamodb:ExportTableToPointInTime&#39;<br>                Resource:<br>                  - !Sub &#39;arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoDBTableName}&#39;<br>              - Sid: allowCheckExportjob<br>                Effect: Allow<br>                Action:<br>                  - &#39;dynamodb:DescribeExport&#39;<br>                Resource:<br>                  - !Sub &#39;arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoDBTableName}/export/*&#39;<br>              - Sid: allowReadFromStream<br>                Effect: Allow<br>                Action:<br>                  - &#39;dynamodb:DescribeStream&#39;<br>                  - &#39;dynamodb:GetRecords&#39;<br>                  - &#39;dynamodb:GetShardIterator&#39;<br>                Resource:<br>                  - !Sub &#39;arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoDBTableName}/stream/*&#39;<br>              - Sid: allowReadAndWriteToS3ForExport<br>                Effect: Allow<br>                Action:<br>                  - &#39;s3:GetObject&#39;<br>                  - &#39;s3:AbortMultipartUpload&#39;<br>                  - &#39;s3:PutObject&#39;<br>                  - &#39;s3:PutObjectAcl&#39;<br>                Resource:<br>                  - !Sub &#39;arn:aws:s3:::${S3BucketName}/*&#39;<br>        - PolicyName: KMSAccessPolicy<br>          PolicyDocument:<br>            Version: &#39;2012-10-17&#39;<br>            Statement:<br>              - Sid: KMSAccess<br>                Effect: Allow<br>                Action:<br>                  - &#39;kms:Decrypt&#39;<br>                  - &#39;kms:GenerateDataKey&#39;<br>                Resource: !Sub &#39;arn:aws:kms:${AWS::Region}:${AWS::AccountId}:key/*&#39;<br>        - PolicyName: AOSSIngestionPolicy<br>          PolicyDocument:<br>            Version: &#39;2012-10-17&#39;<br>            Statement: <br>              - Sid: &quot;VisualEditor0&quot;<br>                Effect: &quot;Allow&quot;<br>                Action: <br>                  - &quot;aoss:APIAccessAll&quot;<br>                  - &quot;aoss:BatchGetCollection&quot;<br>                Resource: <br>                  - !Sub &#39;arn:aws:aoss:${AWS::Region}:${AWS::AccountId}:collection/${OpenSearchCollectionId}&#39;<br>              - Sid: &quot;VisualEditor1&quot;<br>                Effect: &quot;Allow&quot;<br>                Action: <br>                  - &quot;aoss:CreateSecurityPolicy&quot;<br>                  - &quot;aoss:UpdateSecurityPolicy&quot;<br>                  - &quot;aoss:GetSecurityPolicy&quot;<br>                Resource: &quot;*&quot;<br>              - Sid: OSISIngest<br>                Effect: Allow<br>                Action: &#39;osis:Ingest&#39;<br>               
 Resource: !Sub &#39;arn:aws:osis:${AWS::Region}:${AWS::AccountId}:pipeline/*&#39;<br>        - PolicyName: CloudWatchLogsPolicy<br>          PolicyDocument:<br>            Version: &#39;2012-10-17&#39;<br>            Statement:<br>              - Effect: Allow<br>                Action:<br>                  - &#39;logs:CreateLogStream&#39;<br>                  - &#39;logs:PutLogEvents&#39;<br>                Resource: !GetAtt ZeroETLPipelineLogGroup.Arn<br><br>  ZeroETLPipelineLogGroup:<br>    Type: AWS::Logs::LogGroup<br>    Properties:<br>      LogGroupName: !Sub &#39;/aws/vendedlogs/OpenSearchIngestion/${AWS::StackName}-ZeroETLPipeline&#39;<br>      RetentionInDays: 30<br><br>  ZeroETLPipeline:<br>    Type: &#39;AWS::OSIS::Pipeline&#39;<br>    Properties:<br>      PipelineName: !Ref PipelineName<br>      MinUnits: 1<br>      MaxUnits: 1<br>      PipelineConfigurationBody: <br>        Fn::Sub:<br>          - |<br>            version: &#39;2&#39;<br>            dynamodb-pipeline:<br>              source:<br>                dynamodb:<br>                  acknowledgments: true<br>                  tables:<br>                    - table_arn: arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoDBTableName}<br>                      stream:<br>                        start_position: LATEST<br>                      export:<br>                        s3_bucket: ${S3BucketName}<br>                        s3_region: ${AWS::Region}<br>                        s3_prefix: ${S3Prefix}<br>                  aws:<br>                    sts_role_arn: ${ZeroETLRoleArn}<br>                    region: ${AWS::Region}<br>              sink:<br>                - opensearch:<br>                    hosts:<br>                      - ${OpenSearchHost}<br>                    index: &#39;${OpenSearchIndex}&#39;<br>                    index_type: management_disabled<br>                    document_id: ${DocumentId}<br>                    action: ${Action}<br>                    document_version: ${DocumentVersion}<br>                    document_version_type: external<br>                    aws:<br>                      sts_role_arn: ${ZeroETLRoleArn}<br>                      region: ${AWS::Region}<br>                      serverless: true<br>                      serverless_options:<br>                        network_policy_name: &#39;${NetworkPolicyName}&#39;<br>                    dlq:<br>                      s3:<br>                        bucket: ${S3BucketName}<br>                        key_path_prefix: ${DLQPrefix}<br>                        region: ${AWS::Region}<br>                        sts_role_arn: ${ZeroETLRoleArn}<br>          - ZeroETLRoleArn: !GetAtt ZeroETLRole.Arn<br>            OpenSearchHost: !Sub &#39;https://${OpenSearchCollectionId}.${AWS::Region}.aoss.amazonaws.com&#39;<br>            DocumentId: &quot;${getMetadata(\&quot;primary_key\&quot;)}&quot;<br>            Action: &quot;${getMetadata(\&quot;opensearch_action\&quot;)}&quot;<br>            DocumentVersion: &quot;${getMetadata(\&quot;document_version\&quot;)}&quot;<br>      VpcOptions: <br>        SubnetIds: !Ref SubnetIds<br>        SecurityGroupIds: <br>          - !Ref SecurityGroupId<br>      LogPublishingOptions:<br>        IsLoggingEnabled: true<br>        CloudWatchLogDestination:<br>          LogGroup: !Ref ZeroETLPipelineLogGroup<br><br>Outputs:<br>  ZeroETLRoleARN:<br>    Description: &#39;ARN of the Zero ETL IAM Role&#39;<br>    Value: !GetAtt ZeroETLRole.Arn<br>  ZeroETLPipelineARN:<br>    Description: &#39;ARN of 
the Zero ETL Pipeline&#39;<br>    Value: !Ref ZeroETLPipeline<br>  ZeroETLPipelineLogGroupName:<br>    Description: &#39;Name of the Zero ETL Pipeline Log Group&#39;<br>    Value: !Ref ZeroETLPipelineLogGroup</pre>
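<p>If you would rather script the DynamoDB prerequisites from the Configure DynamoDB Table section instead of clicking through the console, here is a minimal boto3 sketch; the table name is a placeholder:</p><pre>import boto3<br><br># Placeholder table name; both calls target an existing table<br>dynamodb = boto3.client(&#39;dynamodb&#39;, region_name=&#39;us-east-1&#39;)<br><br># Enable point-in-time recovery (used for the initial export to S3)<br>dynamodb.update_continuous_backups(<br>    TableName=&#39;orders&#39;,<br>    PointInTimeRecoverySpecification={&#39;PointInTimeRecoveryEnabled&#39;: True},<br>)<br><br># Enable streams (used for continuous change data capture)<br>dynamodb.update_table(<br>    TableName=&#39;orders&#39;,<br>    StreamSpecification={<br>        &#39;StreamEnabled&#39;: True,<br>        # New and old images give the pipeline full change records<br>        &#39;StreamViewType&#39;: &#39;NEW_AND_OLD_IMAGES&#39;,<br>    },<br>)</pre>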
<h4>Lessons learned:</h4><ul><li>The index will be created automatically, but the data types might not be appropriate.</li><li>In the DynamoDB table, rounded numbers like 10, 20, 30, … are exported to S3 as float numbers (10.0, 20.0, 30.0, …), but other numbers like 1, 3, 5, 34, etc., are exported as proper integer values. So make sure you create the index with a float type or do a transform in your pipeline.</li><li>You cannot pause and resume the pipeline, and it uses serverless capacity units. So you might pay for this even if your pipeline is idle (minimum OCU).</li></ul><h4>Conclusion:</h4><p>Once the pipeline has been created, you can monitor it on the Pipeline console via CloudWatch. Hope you found this helpful.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RI2YDqZLRgQ--kZnOeBz9Q.png" /></figure><h4>Author</h4><p>This blog post is written by our CTO <a href="https://www.linkedin.com/in/rbhuvanesh/"><strong><em>Bhuvanesh</em></strong></a><strong><em> </em></strong>and<strong> </strong><a href="https://www.linkedin.com/in/lkhatter/overlay/about-this-profile/"><strong><em>Lalit Khatter</em></strong></a> (PSA, AWS Ambassador APJ Lead)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a0580fc90765" width="1" height="1" alt=""><hr><p><a href="https://blog.shellkode.com/zeroetl-dynamodb-to-opensearch-serverless-integration-with-cft-a0580fc90765">ZeroETL — DynamoDB to OpenSearch Serverless Integration with CFT</a> was originally published in <a href="https://blog.shellkode.com">ShellKode Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>