<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Simform Engineering - Medium]]></title>
        <description><![CDATA[Our Engineering blog gives an inside look at our technologies from the perspective of our engineers. - Medium]]></description>
        <link>https://medium.com/simform-engineering?source=rss----ce67e0b67c0d---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Simform Engineering - Medium</title>
            <link>https://medium.com/simform-engineering?source=rss----ce67e0b67c0d---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 03:03:45 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/simform-engineering" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building Event‑Driven Backends with Azure Functions: HTTP, Queue, and Event Grid Triggers]]></title>
            <link>https://medium.com/simform-engineering/building-event-driven-backends-with-azure-functions-http-queue-and-event-grid-triggers-2e2847d2d4ab?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/2e2847d2d4ab</guid>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[microsoft-azure]]></category>
            <category><![CDATA[azure-functions]]></category>
            <category><![CDATA[azure-event-grid]]></category>
            <category><![CDATA[microservice-architecture]]></category>
            <dc:creator><![CDATA[Denish Bhimani]]></dc:creator>
            <pubDate>Fri, 06 Mar 2026 11:20:43 GMT</pubDate>
            <atom:updated>2026-03-06T11:20:41.524Z</atom:updated>
            <content:encoded><![CDATA[<h3>Introduction</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uYQ_tP2_LlLVv48nDm9bBQ.png" /></figure><p>Event‑driven architectures are ideal for building scalable, resilient, and decoupled backends. Instead of relying on tightly coupled synchronous calls, systems react to events and advance workflows asynchronously.</p><p>Azure Functions provides first‑class support for this style through HTTP triggers, queue triggers, and Event Grid triggers. In this post, we’ll build a real‑world order processing backend, walk through the code (Node.js runtime), explain the architectural responsibilities of each function, and show how to set everything up in Azure: queues, dead‑letter handling, Event Grid topics, and subscribers.</p><p>This is not a toy demo; it mirrors how production systems handle orders, payments, and notifications.</p><h3>Solution Architecture Overview</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F-1M7SoqaGK4SM_eVWzC0w.png" /></figure><pre>Client<br>  → apiHTTPFunction (HTTP Trigger)<br>    → orders-queue<br>      → orderProcessFunction (Queue Trigger)<br>        → Event Grid Topic (order-events)<br>          → billingFunction (Event Grid Trigger)<br>          → notificationFunction (Event Grid Trigger)</pre><h4><strong>Responsibilities</strong></h4><ul><li><strong>apiHTTPFunction</strong><br>Accepts client intent, validates input, creates an order in the database (PENDING), and enqueues a command.</li><li><strong>orderProcessFunction</strong><br>Continues the workflow: applies business rules, updates order state, and publishes domain events.</li><li><strong>billingFunction &amp; notificationFunction</strong><br>React independently to domain events using Event Grid fan‑out.</li></ul><p>This separation keeps HTTP fast, isolates failures, and allows independent scaling.</p><h3>Implementing the Functions (Node.js)</h3><blockquote>All examples use <strong>Azure 
Functions v4</strong> with <strong>Node.js</strong>.</blockquote><h4><strong>1. HTTP Trigger — Accepting Orders</strong></h4><p>function.json</p><pre>{<br>  &quot;bindings&quot;: [<br>    {<br>      &quot;type&quot;: &quot;httpTrigger&quot;,<br>      &quot;direction&quot;: &quot;in&quot;,<br>      &quot;name&quot;: &quot;req&quot;,<br>      &quot;methods&quot;: [&quot;post&quot;],<br>      &quot;authLevel&quot;: &quot;function&quot;<br>    },<br>    {<br>      &quot;type&quot;: &quot;http&quot;,<br>      &quot;direction&quot;: &quot;out&quot;,<br>      &quot;name&quot;: &quot;res&quot;<br>    },<br>    {<br>      &quot;type&quot;: &quot;queue&quot;,<br>      &quot;direction&quot;: &quot;out&quot;,<br>      &quot;name&quot;: &quot;orderMessage&quot;,<br>      &quot;queueName&quot;: &quot;orders-queue&quot;,<br>      &quot;connection&quot;: &quot;AzureWebJobsStorage&quot;<br>    }<br>  ]<br>}</pre><p>index.js</p><pre>const crypto = require(&quot;crypto&quot;);<br><br>module.exports = async function (context, req) {<br>  const body = req.body || {};<br>  <br>  if (!body.customer_id || !Array.isArray(body.items)) {<br>    context.res = {<br>      status: 400,<br>      body: { error: &quot;customer_id and items array are required&quot; }<br>    };<br>    return;<br>  }<br>  <br>  // generate unique correlationId<br>  const correlationId = req.headers[&quot;x-correlation-id&quot;] || crypto.randomUUID();<br>  <br>  // Insert into the table<br>  // Minimal pseudo code<br>  const orderId = crypto.randomUUID();<br>  <br>  const orderDocument = {<br>    orderId,<br>    customerId: body.customer_id,<br>    items: body.items,<br>    status: &quot;PENDING&quot;,<br>    createdAt: new Date().toISOString()<br>  };<br>  <br>  // enqueue command<br>  context.bindings.orderMessage = JSON.stringify({<br>    orderId,<br>    correlationId<br>  });<br>  <br>  context.res = {<br>    status: 202,<br>    body: {<br>      message: &quot;Order accepted&quot;,<br>      orderId,<br>      
correlationId,<br>      status: orderDocument.status<br>    }<br>  };<br>};</pre><p><strong>Why this works</strong></p><ul><li>HTTP remains low latency</li><li>Only minimal DB work happens synchronously</li><li>The queue guarantees retry if downstream processing fails</li></ul><h4>2. Queue Trigger — Processing Orders</h4><p>This function owns business workflow progression, not simple CRUD.</p><p>function.json</p><pre>{<br>  &quot;bindings&quot;: [<br>    {<br>      &quot;type&quot;: &quot;queueTrigger&quot;,<br>      &quot;direction&quot;: &quot;in&quot;,<br>      &quot;name&quot;: &quot;queueMessage&quot;,<br>      &quot;queueName&quot;: &quot;orders-queue&quot;,<br>      &quot;connection&quot;: &quot;AzureWebJobsStorage&quot;<br>    }<br>  ]<br>}</pre><p>index.js</p><pre>const { EventGridPublisherClient, AzureKeyCredential } = require(&quot;@azure/eventgrid&quot;);<br><br>module.exports = async function (context, queueMessage) {<br>  const { orderId, correlationId } = queueMessage;<br>  <br>  context.log({ correlationId, orderId }, &quot;Processing order&quot;);<br>  <br>  // Idempotency check (pseudo)<br>  // if (order.status !== &quot;PENDING&quot;) return;<br>  <br>  // Apply business logic<br>  const confirmed = true; // inventory / validation checks<br>  <br>  const eventType = confirmed ? 
&quot;OrderConfirmed&quot; : &quot;OrderRejected&quot;;<br>  <br>  // Update DB state transactionally (pseudo)<br>  // status: PENDING → CONFIRMED | REJECTED<br>  <br>  const client = new EventGridPublisherClient(<br>    process.env.EVENT_GRID_TOPIC_ENDPOINT,<br>    &quot;EventGrid&quot;, // input schema (required by @azure/eventgrid v4+)<br>    new AzureKeyCredential(process.env.EVENT_GRID_TOPIC_KEY)<br>  );<br>  <br>  const event = {<br>    id: orderId,<br>    eventType,<br>    subject: `orders/${orderId}`,<br>    eventTime: new Date(),<br>    dataVersion: &quot;1.0&quot;,<br>    data: { orderId, correlationId, status: eventType }<br>  };<br>  <br>  await client.send([event]);<br>  <br>  context.log({ correlationId, orderId }, &quot;Domain event published&quot;);<br>};</pre><p><strong>Why Event Grid here?</strong></p><ul><li>Emits facts, not commands</li><li>Enables fan‑out without coupling</li><li>Subscribers evolve independently</li></ul><h4>3. Event Grid Triggers — Billing &amp; Notification</h4><p>billingFunction/function.json</p><pre>{<br>  &quot;bindings&quot;: [<br>    {<br>      &quot;type&quot;: &quot;eventGridTrigger&quot;,<br>      &quot;direction&quot;: &quot;in&quot;,<br>      &quot;name&quot;: &quot;event&quot;<br>    }<br>  ]<br>}</pre><p>billingFunction/index.js</p><pre>module.exports = async function (context, event) {<br>  if (event.eventType === &quot;OrderConfirmed&quot;) {<br>    context.log(&quot;Charging customer for&quot;, event.data.orderId);<br>    // charge payment gateway<br>  }<br>};</pre><p>notificationFunction/index.js</p><pre>module.exports = async function (context, event) {<br>  if (event.eventType === &quot;OrderConfirmed&quot;) {<br>    // send confirmation email<br>  } else if (event.eventType === &quot;OrderRejected&quot;) {<br>    // send rejection email<br>  }<br>};</pre><p>Each subscriber can fail or retry independently without affecting others.</p><h3>Setting Up Azure Resources</h3><h4>1. 
Storage Queue + Dead‑Letter Queue</h4><p>Create queues:</p><pre>az storage queue create --name orders-queue<br>az storage queue create --name orders-queue-poison</pre><p>Configure retries in host.json:</p><pre>{<br>  &quot;extensions&quot;: {<br>    &quot;queues&quot;: {<br>      &quot;maxDequeueCount&quot;: 5<br>    }<br>  }<br>}</pre><p>Messages exceeding retries land in orders-queue-poison.</p><h4>2. Event Grid Topic</h4><pre>az eventgrid topic create \<br>--name order-events \<br>--resource-group &lt;rg&gt; \<br>--location &lt;region&gt;</pre><p>Store the endpoint and key as <strong>Function App settings</strong>.</p><h4>3. Event Grid Subscriptions</h4><pre>az eventgrid event-subscription create \<br>--name billing-sub \<br>--source-resource-id &lt;topic-id&gt; \<br>--endpoint &lt;billing-function-endpoint&gt;<br><br><br>az eventgrid event-subscription create \<br>--name notification-sub \<br>--source-resource-id &lt;topic-id&gt; \<br>--endpoint &lt;notification-function-endpoint&gt;</pre><p>Apply filters on eventType to reduce noise.</p><h3>Key Design Principles</h3><ul><li><strong>Idempotency</strong>: Required for queues and events (at‑least‑once delivery)</li><li><strong>Transactional State Changes</strong>: Update DB before publishing events</li><li><strong>Correlation Tracking</strong>: Pass correlationId through HTTP → Queue → Event Grid</li><li><strong>Event‑Driven Fan‑out</strong>: Use Event Grid for facts, not commands</li></ul><h3>Common Pitfalls</h3><ul><li>Missing idempotency → duplicate billing</li><li>No poison queue monitoring</li><li>Publishing commands via Event Grid</li><li>Hard‑coding secrets instead of app settings / Key Vault</li><li>Treating queues as CRUD instead of workflow continuation</li></ul><h3>Local Testing</h3><ul><li>Use <strong>Azurite</strong> for queues</li><li>Use local.settings.json for environment variables</li><li>Test each function independently, then end‑to‑end</li></ul><h3>Azure Functions Triggers 
(Reference)</h3><p>For completeness, below is a reference list of commonly used Azure Functions triggers. You will not use all of these in a single system, but it’s important to know what is available when designing event-driven architectures.</p><h4>HTTP &amp; API</h4><ul><li><strong>HTTP Trigger</strong> — Invoke a function via HTTP/HTTPS (REST APIs, webhooks)</li></ul><h4>Messaging &amp; Eventing</h4><ul><li><strong>Queue Trigger (Azure Storage Queue)</strong> — Background processing, simple commands</li><li><strong>Service Bus Queue Trigger</strong> — Advanced messaging, DLQ, FIFO with sessions</li><li><strong>Service Bus Topic Trigger</strong> — Pub/sub messaging with multiple consumers</li><li><strong>Event Grid Trigger</strong> — Reactive fan-out for domain and infrastructure events</li><li><strong>Event Hub Trigger</strong> — High-throughput streaming (telemetry, logs, IoT)</li></ul><h4>Data &amp; Storage</h4><ul><li><strong>Blob Trigger</strong> — React to blob uploads/changes</li><li><strong>Cosmos DB Trigger</strong> — React to inserts/updates via change feed</li><li><strong>Table Storage Bindings</strong> — Read/write table entities (input/output bindings only; Azure Functions has no Table storage trigger)</li></ul><h4>Scheduling &amp; Automation</h4><ul><li><strong>Timer Trigger</strong> — Scheduled jobs (cron-like)</li></ul><h4>Platform &amp; Integration</h4><ul><li><strong>Durable Functions Triggers</strong> — Stateful workflows and orchestrations</li><li><strong>SignalR Trigger</strong> — Real-time messaging</li></ul><h4>Observations</h4><ul><li><strong>Queues and Service Bus</strong> are best for <em>commands and workflow progression</em></li><li><strong>Event Grid</strong> is best for <em>facts and fan-out</em></li><li><strong>Event Hub</strong> is optimized for <em>high-volume streams</em>, not business workflows</li><li><strong>Durable Functions</strong> are useful when orchestration logic becomes complex</li></ul><p>Knowing the trigger landscape helps you choose the right tool for the job, 
rather than overloading a single trigger type.</p><h3>Conclusion</h3><p>Azure Functions, combined with queues and Event Grid, provides a production‑grade foundation for event‑driven backends. By separating intent intake, workflow processing, and domain event fan‑out, you get scalability, resilience, and clean evolution paths.</p><p>This architecture is widely used in real systems handling orders, payments, provisioning, and integrations, and it scales far beyond simple CRUD applications.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2e2847d2d4ab" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/building-event-driven-backends-with-azure-functions-http-queue-and-event-grid-triggers-2e2847d2d4ab">Building Event‑Driven Backends with Azure Functions: HTTP, Queue, and Event Grid Triggers</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Streamlining Development with Agentic Workflows: A Shift to Smarter Automation]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://medium.com/simform-engineering/streamlining-development-with-agentic-workflows-a-shift-to-smarter-automation-a3f8b4a57103?source=rss----ce67e0b67c0d---4"><img src="https://cdn-images-1.medium.com/max/2560/1*Kh0pFPg_SOjT095gBNgRVg.png" width="2560"></a></p><p class="medium-feed-snippet">Turning Repetitive Tasks into Seamless Automation</p><p class="medium-feed-link"><a href="https://medium.com/simform-engineering/streamlining-development-with-agentic-workflows-a-shift-to-smarter-automation-a3f8b4a57103?source=rss----ce67e0b67c0d---4">Continue reading on Simform Engineering »</a></p></div>]]></description>
            <link>https://medium.com/simform-engineering/streamlining-development-with-agentic-workflows-a-shift-to-smarter-automation-a3f8b4a57103?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/a3f8b4a57103</guid>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[ai-powered-development]]></category>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[workflow-automation]]></category>
            <dc:creator><![CDATA[Meetvaghasiya]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 07:37:52 GMT</pubDate>
            <atom:updated>2026-03-05T07:37:51.205Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Scalable Code Evaluation Systems: A Backend Engineering Case Study]]></title>
            <link>https://medium.com/simform-engineering/scalable-code-evaluation-systems-a-backend-engineering-case-study-7cdb8fb82e29?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/7cdb8fb82e29</guid>
            <category><![CDATA[scalability]]></category>
            <category><![CDATA[system-design-concepts]]></category>
            <category><![CDATA[container-orchestration]]></category>
            <category><![CDATA[high-level-design]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <dc:creator><![CDATA[Tanishq Rawat]]></dc:creator>
            <pubDate>Fri, 20 Feb 2026 11:51:58 GMT</pubDate>
            <atom:updated>2026-02-20T11:51:56.890Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*awXKS2d_Wmmv2mCSWbvMDA.png" /></figure><p>Have you ever wondered <em>how your code compiles instantly in the browser</em> without even having compilers installed locally?</p><p>Your code is <em>compiled/interpreted by servers, not your browser</em>. The code is sent to the server, and the results come back in the response.</p><p>But it’s not just a simple <strong>REST API</strong>; there is a whole complex system working behind the scenes.</p><p>In this article, we’ll walk through the system design of such a platform, focusing on:</p><ol><li>Core components and their responsibilities</li><li>Job queuing and asynchronous execution</li><li>Why polling is better than WebSockets for result tracking</li><li>How <strong>SQS visibility timeout</strong> protects against worker failures</li><li>Why <strong>Redis</strong> is used while the database remains the <em>source of truth</em></li><li>Asynchronous code storage in S3</li><li><strong>Docker</strong>-based isolated execution at scale</li></ol><h3>1. High-Level Architecture and Components</h3><p>At a high level, the submission pipeline looks like this:</p><p><strong>Client → API Gateway → Load Balancer → App Containers → Queue (SQS/RabbitMQ) → Workers → Docker Runners → Postgres + Redis + S3</strong></p><p>Here is a summarized flow of how the application works:</p><pre>1. The user submits code, which passes through the API Gateway and <br>Load Balancer to the backend service, where a submission record is <br>created and a job is pushed to the queue.<br><br>2. The backend immediately returns a submission_id to the client while <br>the code is uploaded asynchronously to object storage.<br><br>3. Worker nodes continuously poll the queue, fetch the code, and execute it <br>inside isolated Docker containers.<br><br>4. After execution, the worker updates the final result in the database and <br>refreshes the cache.<br><br>5. 
The client polls the status API, which reads from cache first and falls <br>back to the database if needed.</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tCZIr44l753YURjXqb57sw.png" /><figcaption><strong>High Level Architecture Diagram</strong></figcaption></figure><p>Let’s dive deeper into each component and the specific problem it solves.</p><h4>Client (Browser / Mobile App)</h4><ul><li>Sends code submissions.</li><li>Polls for execution status and results.</li><li>Displays verdicts and runtime statistics.</li></ul><h4>API Gateway</h4><ul><li>Entry point for all requests.</li><li>Handles authentication, rate limiting, and request validation.</li><li>Shields internal services from direct exposure.</li><li>Enables centralized logging and throttling.</li></ul><h4>Load Balancer</h4><ul><li>Distributes traffic across multiple app containers.</li><li>Ensures high availability and horizontal scalability.</li><li>Allows rolling deployments with zero downtime.</li></ul><h4>App Containers (Backend Services)</h4><ul><li>Accept submissions.</li><li>Generate a unique submission_id.</li><li>Store metadata in the database (initial status = PENDING).</li><li>Upload code to object storage asynchronously.</li><li>Push a job event into the queue.</li><li>Expose APIs for status lookup.</li></ul><p>These services are stateless, so they can scale easily.</p><h4>Queue (Amazon SQS / RabbitMQ)</h4><ul><li>Decouples request handling from execution.</li><li>Buffers spikes in traffic.</li><li>Guarantees reliable delivery of jobs to workers.</li><li>Enables retry and failure handling.</li></ul><h4>Workers</h4><ul><li>Continuously poll the queue.</li><li>Pick up submission jobs.</li><li>Fetch code from storage.</li><li>Run code inside isolated Docker containers.</li><li>Capture output, runtime, and memory usage.</li><li>Update the result in the database and cache.</li></ul><h4>Docker Execution Environment</h4><ul><li>Runs untrusted user code safely.</li><li>Provides 
language-specific runtime images (Python, Java, C++, Go, JS).</li><li>Enforces CPU, memory, and time limits.</li></ul><h4>Postgres (Primary Database)</h4><ul><li>Stores submissions, users, problems, and verdicts.</li><li>Maintains strong consistency and durability.</li><li>Acts as the source of truth.</li></ul><h4>Redis (Cache)</h4><ul><li>Caches submission status for fast reads.</li><li>Reduces load on the database.</li><li>Handles high-frequency polling efficiently.</li></ul><h4>S3 / Blob Storage</h4><ul><li>Stores raw code submissions and artifacts.</li><li>Cheap, durable, scalable storage.</li><li>Decouples code persistence from the request lifecycle.</li></ul><h3>2. Job Queuing and Asynchronous Execution</h3><h4>Why not execute code synchronously?</h4><p>Code execution can take seconds, and sometimes minutes (large inputs, timeouts, container startup). Keeping HTTP requests open would:</p><ul><li>Block server threads.</li><li>Limit throughput.</li><li>Cause timeouts and poor UX.</li></ul><h4>Solution: Async pipeline using a queue</h4><p><strong>Flow:</strong></p><ol><li>User submits code.</li><li>Backend:</li></ol><ul><li>Creates submission_id.</li><li>Writes a DB record with status = PENDING.</li><li>Pushes a job message to the queue.</li></ul><p>3. API immediately returns:</p><ul><li>{ &quot;submission_id&quot;: &quot;abc123&quot; }</li></ul><p>4. Worker consumes the job and executes it asynchronously.</p><p>This allows:</p><ul><li>Independent scaling of API and workers.</li><li>Natural backpressure handling via the queue.</li><li>Fault isolation between submission and execution.</li></ul><h3>3. Why Polling is Better than WebSockets Here</h3><p>At first glance, <strong><em>WebSockets</em></strong> seem attractive: push results in real time. 
In practice, polling is often a better tradeoff for online judges.</p><h4>⚠️ Problems with WebSockets</h4><ol><li><strong>Connection explosion</strong></li></ol><ul><li>Thousands of users → thousands of persistent connections.</li><li>Memory and file descriptor pressure.</li><li>Harder to scale across nodes without sticky sessions.</li></ul><p><strong>2. Infrastructure complexity</strong></p><ul><li>Requires stateful connection routing.</li><li>Load balancers and firewalls complicate upgrades.</li></ul><p><strong>3. Intermittent usage pattern</strong></p><ul><li>Users submit once, wait briefly, then leave.</li><li>Maintaining long-lived connections is wasteful.</li></ul><p><strong>4. Failure handling</strong></p><ul><li>Reconnection logic becomes complex.</li><li>Message delivery guarantees are harder.</li></ul><h4>✅ Why Polling Works Better</h4><ul><li>Status checks are lightweight reads (Redis).</li><li>Polling interval can be controlled (e.g., every 1–2 seconds).</li><li>Fully stateless HTTP requests.</li><li>Easy horizontal scaling.</li><li>Natural fit for mobile and browser clients.</li></ul><h3>4. 
How SQS Visibility Timeout Protects Against Worker Failures</h3><p>When a worker picks up a message from SQS:</p><ul><li>The message becomes <strong>invisible</strong> for a configured duration (the visibility timeout).</li><li>If the worker finishes successfully, it deletes the message.</li><li>If the worker crashes, hangs, or fails, the message becomes visible again after the timeout and another worker reprocesses it.</li></ul><h4>Why this is important</h4><p>Worker failures can happen due to:</p><ul><li>Container crashes.</li><li>Out-of-memory errors.</li><li>Node restarts.</li><li>Network failures.</li></ul><p><strong>Visibility timeout ensures</strong>:</p><ul><li>No job is permanently lost.</li><li>At-least-once processing guarantee.</li><li>Automatic retry without manual intervention.</li></ul><h4>Design considerations</h4><ul><li>Timeout should exceed maximum execution time + buffer.</li><li>Use idempotent processing to avoid duplicate side effects.</li><li>Optionally use a Dead Letter Queue after repeated failures.</li></ul><p>This gives strong fault tolerance with minimal complexity.</p><h3>5. 
Redis for Speed, Database as Source of Truth</h3><h4>Why Redis?</h4><ul><li>Extremely fast in-memory reads.</li><li>Handles high QPS from polling clients.</li><li>Reduces pressure on the primary database.</li></ul><p><strong>Usage:</strong></p><ul><li>Cache submission status.</li><li>Cache recent execution results.</li><li>TTL-based eviction.</li></ul><h4>Why does the database remain the source of truth?</h4><p><strong>Redis</strong> is:</p><ul><li>In-memory.</li><li>Can lose data on restart (depending on persistence).</li><li>Eventually consistent.</li></ul><p><strong>Postgres</strong> provides:</p><ul><li>Durability.</li><li>Transactions.</li><li>Auditable history.</li><li>Strong consistency.</li></ul><h4>Update pattern</h4><ol><li>Worker finishes execution.</li><li>Update database in a transaction.</li><li>After commit, update Redis cache.</li><li>Clients read from Redis first.</li><li>On cache miss → fallback to DB.</li></ol><p>This avoids stale or inconsistent results and keeps correctness intact.</p><p><strong>Note:</strong> We cache the submission status with a short TTL so that during polling we get a response instantly and don’t run a DB query on every poll request. A short TTL also keeps Redis memory in check: it is just a cache, and storing too much data in it can itself lead to performance issues.</p><h3>6. 
Asynchronous Code Storage in S3</h3><p>Storing raw code in S3 provides:</p><ul><li>Cheap long-term storage.</li><li>Easy audit and replay.</li><li>Decoupling from database growth.</li></ul><h4>Why async upload?</h4><p>Uploading large payloads synchronously:</p><ul><li>Increases API latency.</li><li>Blocks request threads.</li><li>Reduces throughput.</li></ul><h4>Pattern</h4><ul><li>API accepts submission.</li><li>Metadata saved immediately.</li><li>Code upload happens asynchronously (via a background task or an event-driven worker).</li><li>Queue job references S3 path instead of raw code.</li></ul><p>This keeps submission APIs fast and scalable.</p><p><strong>Note:</strong> To partition source code, we use <strong>question_id</strong> as the <strong><em>partition key</em></strong>. Partitioning by <strong>user_id</strong> would hurt performance: these platforms have <em>millions of users</em>, so it’s not feasible to create that many partitions, whereas the number of questions will always be in the thousands.</p><h3>7. Docker Containers for Isolated Execution at Scale</h3><p>Executing user code directly on host machines is dangerous.</p><h4>Risks</h4><ul><li>Malicious code execution. (<em>Leaking sensitive information from the server</em>)</li><li>Infinite loops. (<em>Can hang the main server</em>)</li><li>Memory exhaustion. (<em>Allocating tons of memory without cleanup can lead to system failures</em>)</li><li>File system access. (<em>Trying to delete OS/Kernel level files</em>)</li><li>Network abuse. 
(<em>Trying to hit unwanted URLs or mount a DoS attack on government websites</em>)</li></ul><h4>Docker provides</h4><ul><li>Process isolation.</li><li>Resource limits (CPU, memory).</li><li>Read-only filesystem.</li><li>Controlled networking.</li><li>Clean execution environment per run.</li></ul><h4>Scaling model</h4><ul><li>Workers spin up containers dynamically.</li><li>Language-specific base images.</li><li>Containers destroyed after execution.</li><li>Horizontal scaling based on queue depth.</li></ul><p>This enables safe multi-tenant execution without risking platform stability.</p><h3>Conclusion</h3><p>An online judging platform is fundamentally a distributed execution pipeline:</p><ul><li><strong>Queues decouple submission from execution.</strong></li><li><strong>Polling simplifies scalability and reliability.</strong></li><li><strong>Visibility timeout protects against worker failures.</strong></li><li><strong>Redis optimizes read performance while DB ensures correctness.</strong></li><li><strong>S3 decouples storage from compute.</strong></li><li><strong>Docker enforces isolation and security at scale.</strong></li></ul><p>These design choices balance correctness, performance, cost, and operational simplicity: the same tradeoffs made by real production platforms.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7cdb8fb82e29" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/scalable-code-evaluation-systems-a-backend-engineering-case-study-7cdb8fb82e29">Scalable Code Evaluation Systems: A Backend Engineering Case Study</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[System Design in Action: Building a Production-Ready News Feed]]></title>
            <link>https://medium.com/simform-engineering/system-design-in-action-building-a-production-ready-news-feed-7a334c20fee7?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/7a334c20fee7</guid>
            <category><![CDATA[system-design-concepts]]></category>
            <category><![CDATA[social-media]]></category>
            <category><![CDATA[system-design-project]]></category>
            <dc:creator><![CDATA[Akash Chauhan]]></dc:creator>
            <pubDate>Thu, 19 Feb 2026 06:27:13 GMT</pubDate>
            <atom:updated>2026-02-19T06:27:11.989Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>How I built a social media feed that handles millions of likes, works offline, and never shows a loading spinner</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lqPJcQSdTN0Xz5hMGHoXLw.png" /></figure><p>Your app works perfectly on your laptop.</p><p>Then it meets real users. The server crashes under load. The network drops in a subway tunnel. Someone rapidly taps the like button 47 times. A user on a 3G connection in a rural area wonders why nothing loads.</p><p>Sound familiar?</p><p>I spent months studying system design theory for interviews. I could draw architectural diagrams on whiteboards. But when it came time to build something real, I couldn’t connect those boxes and arrows to actual code.</p><p>I built a complete social media feed from scratch — not a simulated or demo-based implementation, but a production-ready application following patterns used by platforms such as Twitter, Facebook, and Instagram.</p><p><a href="https://github.com/vue-simform/system-design-in-practice"><strong>View the live demo and full source code →</strong></a></p><h3>The Requirements That Changed Everything</h3><p>Build a social media feed.</p><p>Seems simple. Then the real requirements arrive:</p><ul><li><strong>Works offline</strong> (subway commuters still want to scroll)</li><li><strong>Shows likes instantly</strong> (no waiting for server round-trips)</li><li><strong>Handles millions of posts</strong> (pagination that doesn’t get slower over time)</li><li><strong>Runs when the server is down</strong> (graceful degradation)</li><li><strong>Accessible for screen readers</strong> (WCAG 2.1 AA compliance)</li><li><strong>Feels instant</strong> (skeleton screens, not spinners)</li></ul><p>Each requirement seems reasonable. 
Together, they require a fundamentally different architecture than the typical React tutorial.</p><p><em>Welcome to production system design.</em></p><h3>The Architecture That Makes It Work</h3><p>Here’s the mental model:</p><pre>User<br>    ↓<br>React Components (presentation)<br>    ↓<br>Custom Hooks (business logic)<br>    ↓<br>State Management<br>    - React Query (server data)<br>    - Zustand (client state)<br>    - Context (auth)<br>    ↓<br>Cache (3 layers)<br>    - L1: Memory (instant)<br>    - L2: IndexedDB (persistent)<br>    - L3: Service Worker (static)<br>    ↓<br>API Service<br>    ↓<br>Backend Server</pre><p><strong>Why this specific structure?</strong></p><p>Each layer has exactly one job. Components render. Hooks handle logic. Services manage network calls. This separation means you can test each layer independently, reuse logic across components, and add features without breaking existing code.</p><p>The three-layer cache is the secret sauce. More on that shortly.</p><h3>Following a Like Button Click</h3><p>The best way to understand the system is to follow a single user action through every layer. Let’s trace what happens when someone taps the heart icon.</p><h3>The Click (0ms)</h3><pre>// components/feed/PostItem.tsx<br>function PostItem({ post }) {<br>  const { mutate: likePost } = useLikePost()<br>  return (<br>    &lt;button onClick={() =&gt; likePost({ postId: post.id, isLiked: post.isLiked })}&gt;<br>      ❤️ {post.likes}<br>    &lt;/button&gt;<br>  )<br>}</pre><p>Simple React component. 
Calls a mutation when clicked.</p><h3>The Optimistic Update (1ms)</h3><p>Here’s where it gets interesting:</p><pre>// hooks/useLikePost.ts<br>function useLikePost() {<br>  return useOptimisticMutation({<br>    onMutate: (oldData, { postId, isLiked }) =&gt; {<br>      return {<br>        ...oldData,<br>        pages: oldData.pages.map(page =&gt; ({<br>          ...page,<br>          posts: page.posts.map(post =&gt;<br>            post.id === postId<br>              ? {<br>                  ...post,<br>                  isLiked: !isLiked,<br>                  likes: post.likes + (isLiked ? -1 : 1)<br>                }<br>              : post<br>          )<br>        }))<br>      }<br>    },<br><br>    mutationFn: ({ postId, isLiked }) =&gt; {<br>      return isLiked<br>        ? api.delete(`/api/posts/${postId}/like`)<br>        : api.post(`/api/posts/${postId}/like`)<br>    },<br>  })<br>}</pre><p><strong>The key insight: We update the UI before the server responds.</strong></p><p>The heart turns red and the count increments in 1 millisecond. The server call happens in the background. If it succeeds, we keep the changes. If it fails, we roll back.</p><p><strong>The timeline:</strong></p><pre>0ms:    User clicks<br>1ms:    Heart turns red, count updates<br>500ms:  Server responds</pre><p>Compare this to the traditional approach:</p><blockquote><strong><em>Old way:</em></strong><em> Click → Wait 500ms → See result (feels slow)</em></blockquote><blockquote><strong><em>New way:</em></strong><em> Click → See result → Server confirms (feels instant)</em></blockquote><p>This is why Twitter, Facebook, and Instagram feel responsive even on slow connections.</p><h3>The Offline Fallback (when there’s no network)</h3><pre>if (!isOnline) {<br>  await offlineQueue.enqueue(&#39;LIKE_POST&#39;, { postId, userId })<br>  toast.info(&#39;Will sync when you\&#39;re back online&#39;)<br>  return<br>}</pre><p>If the user is offline, we queue the action. They still see the heart turn red. 
When they reconnect, the queue processes automatically.</p><p><strong>Result:</strong> The user can use the app on the subway. No frustrating error messages. Everything syncs when they surface.</p><h3>The Three-Layer Cache (The Real Magic)</h3><p>This is where most production applications differ fundamentally from tutorials.</p><h3>The Problem with Simple Caching</h3><p>You have two bad options:</p><ol><li><strong>Always show cached data:</strong> Fast, but users see stale information</li><li><strong>Always fetch fresh:</strong> Accurate, but slow and wasteful</li></ol><p>The solution? Multiple cache layers with different characteristics.</p><h3>Layer 1: Memory (React Query)</h3><p><strong>Speed:</strong> &lt; 1ms <strong>Persistence:</strong> Until page refresh</p><pre>const queryClient = new QueryClient({<br>  defaultOptions: {<br>    queries: {<br>      staleTime: 1000 * 60,      // Fresh for 1 minute<br>      gcTime: 1000 * 60 * 10,    // Cached for 10 minutes<br>      retry: 3,<br>      networkMode: &#39;offlineFirst&#39;<br>    }<br>  }<br>})</pre><p>When you load the feed, data goes into memory. Return 30 seconds later? Instant display from cache. Return 2 minutes later? 
Show cached data immediately, then refresh in the background.</p><h3>Layer 2: IndexedDB</h3><p><strong>Speed:</strong> 5–10ms <strong>Persistence:</strong> Survives browser restarts</p><pre>class IndexedDBCache {<br>  private readonly DEFAULT_TTL = 7 * 24 * 60 * 60 * 1000  // 7 days<br>  private readonly MAX_CACHE_SIZE = 50 * 1024 * 1024      // 50MB<br><br>  async set(store: string, key: string, value: any) {<br>    const entry: CacheEntry = {<br>      id: key,<br>      data: value,<br>      cachedAt: Date.now(),<br>      expiresAt: Date.now() + this.DEFAULT_TTL,<br>      size: JSON.stringify(value).length<br>    }<br>    await db.put(store, entry)<br>    const totalSize = await this.getTotalSize()  // sums stored entry sizes<br>    if (totalSize &gt; this.MAX_CACHE_SIZE) {<br>      await this.evictLRU()<br>    }<br>  }<br>}</pre><p><strong>Real scenario:</strong></p><pre>Day 1, 2:00 PM - Browse feed, cache 100 posts in IndexedDB<br>Day 1, 5:00 PM - Close browser completely<br>Day 2, 9:00 AM - Open app, see feed instantly from IndexedDB<br>                 Fresh data loads in background</pre><p>Without IndexedDB, returning users see a blank screen with a spinner. With it, they see their feed immediately.</p><h3>Layer 3: Service Worker</h3><p><strong>Speed:</strong> 1–2ms <strong>Persistence:</strong> Forever (until app update)</p><pre>self.addEventListener(&#39;fetch&#39;, (event) =&gt; {<br>  const { request } = event<br>  const url = new URL(request.url)<br><br>  // Static assets: Cache first<br>  if (/\.(js|css|woff2?|png|jpg|webp)$/.test(request.url)) {<br>    event.respondWith(cacheFirst(request))<br>  }<br>  // API calls: Network first, fallback to cache<br>  else if (url.pathname.startsWith(&#39;/api/&#39;)) {<br>    event.respondWith(networkFirst(request))<br>  }<br>  // HTML: Show cached while fetching fresh<br>  else if (request.mode === &#39;navigate&#39;) {<br>    event.respondWith(staleWhileRevalidate(request))<br>  }<br>})</pre><p>The service worker handles static assets and provides offline functionality. JavaScript, CSS, and images load from cache. 
The app shell appears instantly, even offline.</p><h3>How the Layers Coordinate</h3><pre>Request feed data<br>    ↓<br>L1 (Memory): Hit? Return in &lt; 1ms<br>    ↓<br>L2 (IndexedDB): Hit? Return in 5-10ms, populate L1<br>    ↓<br>L3 (Service Worker): Hit? Return from cache<br>    ↓<br>Network: Fetch fresh data, update all layers</pre><p><strong>The impact is dramatic:</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/778/1*aWdzMly9Du45RbeHtzCjRQ.png" /></figure><h3>Why Offset Pagination Breaks at Scale</h3><p>Here’s something that trips up many developers: the standard pagination approach fails with large datasets.</p><h3>The Problem</h3><pre>-- Page 1: Fast<br>SELECT * FROM posts ORDER BY created_at DESC LIMIT 10 OFFSET 0;<br><br>-- Page 100: Slow<br>SELECT * FROM posts ORDER BY created_at DESC LIMIT 10 OFFSET 1000;<br><br>-- Page 10,000: Unusable<br>SELECT * FROM posts ORDER BY created_at DESC LIMIT 10 OFFSET 100000;</pre><p>With offset pagination, the database must scan and discard all preceding rows. Page 10,000 means scanning 100,000 rows to return 10.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/634/1*pNzzqyq0s2gXuGdEeGPHFQ.png" /></figure><p>Twitter has billions of tweets. Offset pagination would never work.</p><h3>The Solution: Cursor Pagination</h3><p>Instead of “skip N rows,” we say “get rows after this specific post.”</p><pre>// Cursor: base64-encoded position marker<br>const cursor = btoa(JSON.stringify({<br>  lastId: &#39;p42&#39;,<br>  timestamp: &#39;2024-01-15T10:30:00Z&#39;<br>}))<br><br>// SQL equivalent<br>SELECT * FROM posts<br>WHERE (created_at, id) &lt; (&#39;2024-01-15T10:30:00Z&#39;, &#39;p42&#39;)<br>ORDER BY created_at DESC, id DESC<br>LIMIT 10;</pre><p><strong>Performance:</strong> O(1) at any position. 
Page 10,000 is as fast as page 1.</p><h3>Frontend Implementation</h3><pre>function useInfiniteScroll() {<br>  const { data, fetchNextPage, hasNextPage } = useInfiniteQuery({<br>    queryKey: [&#39;feed&#39;, &#39;infinite&#39;],<br><br>    queryFn: async ({ pageParam = null }) =&gt; {<br>      const response = await api.get(&#39;/api/feed&#39;, {<br>        params: { limit: 10, cursor: pageParam }<br>      })<br>      return response.data<br>    },<br>    getNextPageParam: (lastPage) =&gt; {<br>      return lastPage.hasMore ? lastPage.nextCursor : undefined<br>    },<br>  })<br>  const loadMoreRef = useCallback((node) =&gt; {<br>    if (!node || !hasNextPage) return<br>    const observer = new IntersectionObserver((entries) =&gt; {<br>      if (entries[0].isIntersecting) {<br>        fetchNextPage()<br>      }<br>    }, { rootMargin: &#39;200px&#39; })<br>    observer.observe(node)<br>  }, [hasNextPage, fetchNextPage])<br>  return { posts: data?.pages.flatMap(page =&gt; page.posts) ?? [], loadMoreRef }<br>}</pre><p>The intersection observer triggers 200px before the user reaches the bottom. New posts load seamlessly. The user never waits.</p><h3>Building Resilience: When Things Go Wrong</h3><p>Servers fail. Networks drop. Users close their laptops mid-request. Production applications need to handle all of it gracefully.</p><h3>Classifying Errors</h3><p>Not all errors deserve the same treatment:</p><pre>enum ErrorType {<br>  NETWORK = &#39;NETWORK&#39;,        // No internet → Retry<br>  TIMEOUT = &#39;TIMEOUT&#39;,        // Too slow → Retry<br>  SERVER = &#39;SERVER&#39;,          // 5xx → Retry<br>  VALIDATION = &#39;VALIDATION&#39;,  // 400 → Don&#39;t retry<br>  AUTH = &#39;AUTH&#39;,              // 401 → Don&#39;t retry<br>}</pre><ul><li><strong>Network errors?</strong> Retry automatically. The user might have reconnected.</li><li><strong>Validation errors?</strong> Show the message. Retrying won’t help.</li><li><strong>Auth errors?</strong> Redirect to login. 
The session expired.</li></ul><h3>Exponential Backoff</h3><p>When the server is struggling, hammering it with retries makes things worse.</p><pre>const RETRY_CONFIG = {<br>  maxAttempts: 3,<br>  baseDelay: 1000,    // 1 second<br>  maxDelay: 30000,    // 30 seconds<br>  backoffFactor: 2    // Double each time<br>}<br><br>function calculateDelay(attempt: number): number {<br>  const exponentialDelay = 1000 * Math.pow(2, attempt)<br>  const cappedDelay = Math.min(exponentialDelay, 30000)<br>  const jitter = Math.random() * 200 - 100  // Prevent thundering herd<br>  return cappedDelay + jitter<br>}</pre><p><strong>Wait times:</strong></p><ul><li>Attempt 1: Wait 1 second</li><li>Attempt 2: Wait 2 seconds</li><li>Attempt 3: Wait 4 seconds</li><li>Give up</li></ul><p>The jitter is important. Without it, a thousand clients that failed simultaneously will all retry simultaneously, creating another spike.</p><h3>Circuit Breaker</h3><p>When the server is completely down, retrying is pointless. The circuit breaker pattern fails fast:</p><pre>class CircuitBreaker {<br>  private state = &#39;CLOSED&#39;  // CLOSED, OPEN, HALF_OPEN<br>  private failureCount = 0<br>  private lastFailureTime = 0<br><br>  async execute&lt;T&gt;(fn: () =&gt; Promise&lt;T&gt;): Promise&lt;T&gt; {<br>    // After 60 seconds, try one request<br>    if (this.state === &#39;OPEN&#39; &amp;&amp; Date.now() - this.lastFailureTime &gt; 60000) {<br>      this.state = &#39;HALF_OPEN&#39;<br>    }<br>    // Fast fail if circuit is open<br>    if (this.state === &#39;OPEN&#39;) {<br>      throw new Error(&#39;Circuit breaker OPEN. 
Server is down.&#39;)<br>    }<br>    try {<br>      const result = await fn()<br>      if (this.state === &#39;HALF_OPEN&#39;) {<br>        this.state = &#39;CLOSED&#39;<br>        this.failureCount = 0<br>      }<br>      return result<br>    } catch (error) {<br>      this.failureCount++<br>      this.lastFailureTime = Date.now()<br>      if (this.failureCount &gt;= 5) {<br>        this.state = &#39;OPEN&#39;<br>      }<br>      throw error<br>    }<br>  }<br>}</pre><p><strong>The flow:</strong></p><pre>CLOSED (Normal operation)<br>  → 5 failures<br>OPEN (Reject all requests immediately)<br>  → Wait 60 seconds<br>HALF-OPEN (Try one request)<br>  → Success: Back to CLOSED<br>  → Failure: Back to OPEN</pre><p>Without this, users see 30-second timeouts on every action during an outage. With it, they see instant “Server unavailable” messages and can at least browse cached content.</p><h3>The Offline Queue: Never Lose User Data</h3><p>Users on spotty connections shouldn’t lose their work. 
The offline queue ensures every action eventually reaches the server.</p><pre>class OfflineQueue {<br>  async enqueue(type: ActionType, payload: any) {<br>    const action = {<br>      id: `${type}_${Date.now()}`,<br>      type,<br>      payload,<br>      status: &#39;pending&#39;,<br>      retries: 0,<br>      timestamp: Date.now()<br>    }<br>    await db.put(&#39;offline-queue&#39;, action)<br>  }<br><br>  async processQueue() {<br>    const pending = await this.getPendingActions()<br>    for (const action of pending) {<br>      try {<br>        await this.executeAction(action)<br>        await this.markCompleted(action.id)<br>        toast.success(`${action.type} synced`)<br>      } catch (error) {<br>        action.retries++<br>        if (action.retries &gt;= 3) {<br>          await this.markFailed(action.id)<br>          toast.error(`${action.type} failed after 3 retries`)<br>        }<br>      }<br>    }<br>  }<br>}</pre><p><strong>The user experience:</strong></p><pre>User enters subway tunnel → Connection lost<br>User likes a post → Heart turns red, action queued<br>User creates a comment → Comment appears locally, action queued<br>User exits tunnel → Reconnects<br>Queue processes automatically → &quot;All changes synced&quot;</pre><p>The 30-second auto-sync interval handles cases where the connection returns silently. 
The online event handler catches explicit reconnections.</p><h3>State Management: The Right Tool for Each Job</h3><p>One of the most common architectural mistakes is using a single state solution for everything.</p><h3>The Decision Tree</h3><pre>Is this data from an API?<br>  → React Query (caching, refetching, pagination built-in)<br><br>Is this UI state shared across components?<br>  → Zustand (theme, modals, toasts)<br><br>Is this local to one component?<br>  → useState (form inputs, toggles)<br><br>Is this auth data needed everywhere?<br>  → React Context (user, permissions)</pre><h3>React Query for Server State</h3><pre>function useFeedPosts() {<br>  return useInfiniteQuery({<br>    queryKey: [&#39;feed&#39;, &#39;infinite&#39;],<br>    queryFn: async ({ pageParam = null }) =&gt; {<br>      const response = await api.get(&#39;/api/feed&#39;, {<br>        params: { limit: 10, cursor: pageParam }<br>      })<br>      return response.data<br>    },<br>    getNextPageParam: (lastPage) =&gt; lastPage.hasMore ? lastPage.nextCursor : undefined,<br>  })<br>}</pre><p>You get caching, background refetching, loading states, error handling, request deduplication, and pagination support. For free.</p><h3>Zustand for Client State</h3><pre>const useUIStore = create(<br>  persist(<br>    (set) =&gt; ({<br>      theme: &#39;light&#39;,<br>      toggleTheme: () =&gt; set((state) =&gt; ({<br>        theme: state.theme === &#39;light&#39; ? &#39;dark&#39; : &#39;light&#39;<br>      })),<br>    }),<br>    { name: &#39;ui-storage&#39; }<br>  )<br>)</pre><p>Three lines to create a theme toggle with persistence. Compare that to Redux boilerplate.</p><h3>Performance Optimizations That Actually Matter</h3><h3>Lazy Loading Images</h3><p>Loading 200 images on page load destroys performance. 
Load them as users scroll:</p><pre>function LazyImage({ src, alt }) {<br>  const [isInView, setIsInView] = useState(false)<br>  const imgRef = useRef()<br><br>  useEffect(() =&gt; {<br>    const observer = new IntersectionObserver(<br>      ([entry]) =&gt; {<br>        if (entry.isIntersecting) {<br>          setIsInView(true)<br>          observer.disconnect()<br>        }<br>      },<br>      { rootMargin: &#39;100px&#39; }  // Load 100px before visible<br>    )<br>    if (imgRef.current) observer.observe(imgRef.current)<br>    return () =&gt; observer.disconnect()<br>  }, [])<br>  return (<br>    &lt;div ref={imgRef}&gt;<br>      {isInView ? &lt;img src={src} alt={alt} /&gt; : &lt;div className=&quot;skeleton&quot; /&gt;}<br>    &lt;/div&gt;<br>  )<br>}</pre><p><strong>Impact:</strong> 100 posts with 2 images each goes from loading 200 images (20MB, 10 seconds) to loading 10 visible images (1MB, 1 second).</p><h3>Code Splitting</h3><p>Don’t load the analytics dashboard until someone visits it:</p><pre>const Analytics = lazy(() =&gt; import(&#39;./pages/AnalyticsDashboard&#39;))<br><br>function App() {<br>  return (<br>    &lt;Suspense fallback={&lt;LoadingSpinner /&gt;}&gt;<br>      &lt;Routes&gt;<br>        &lt;Route path=&quot;/&quot; element={&lt;FeedContainer /&gt;} /&gt;<br>        &lt;Route path=&quot;/analytics&quot; element={&lt;Analytics /&gt;} /&gt;<br>      &lt;/Routes&gt;<br>    &lt;/Suspense&gt;<br>  )<br>}</pre><p><strong>Impact:</strong> Initial bundle drops from 500KB to 200KB. 
Time to interactive goes from 3 seconds to 1 second.</p><h3>Preventing Re-renders with memo</h3><p>When one post’s like count changes, don’t re-render 99 other posts:</p><pre>const PostItem = memo(({ post, onLike }) =&gt; {<br>  return (<br>    &lt;div&gt;<br>      &lt;p&gt;{post.content}&lt;/p&gt;<br>      &lt;button onClick={() =&gt; onLike(post.id)}&gt;<br>        ❤️ {post.likes}<br>      &lt;/button&gt;<br>    &lt;/div&gt;<br>  )<br>})</pre><p><strong>Impact:</strong> Liking a post re-renders 1 component instead of 100.</p><h3>The Trade-offs</h3><p>Every architectural decision involves trade-offs. Here’s how I thought through the major ones:</p><h3>Consistency vs. Availability</h3><p>When a user likes a post offline, I have two options:</p><p><strong>Option A: Strong Consistency</strong></p><ul><li>Data is always correct</li><li>App breaks when offline</li><li>Every action waits for the server</li></ul><p><strong>Option B: Eventual Consistency (chosen)</strong></p><ul><li>Works offline</li><li>Fast optimistic updates</li><li>Temporary inconsistencies possible</li></ul><p>For a social feed, speed and availability matter more than perfect consistency. Users accept occasional rollbacks over 500ms waits on every action.</p><h3>Cache Duration Trade-offs</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/667/1*6t30qRkGKENBlsuWPKkMmw.png" /></figure><p>Different data have different freshness requirements.</p><h3>Server vs. Client Rendering</h3><p><strong>Server-side:</strong> Better SEO, faster first paint, but complex setup and higher costs.</p><p><strong>Client-side:</strong> Simple deployment, cheap static hosting, rich interactivity, but slower first paint.</p><p>For a social feed where interactivity matters more than SEO, client-side wins. 
The three-layer cache mitigates the slow first paint problem.</p><h3>What’s Not Included (And Why)</h3><p>This codebase intentionally omits some production features to stay focused on core system design patterns:</p><ul><li><strong>Authentication:</strong> Uses a mock user. Real auth adds complexity without teaching new patterns.</li><li><strong>Real-time updates:</strong> WebSockets would be nice, but aren’t essential for demonstrating caching and offline support.</li><li><strong>File uploads:</strong> Validation exists, but no actual CDN integration.</li><li><strong>Rate limiting:</strong> Server returns 429s, but doesn’t actually limit.</li><li><strong>Push notifications:</strong> PWA manifest exists, but no push implementation.</li></ul><p>Each of these could be a separate article.</p><h3>What I Learned</h3><p>Building this taught me things that whiteboards never could:</p><ol><li><strong>Optimistic updates change everything.</strong> The perceived performance improvement from instant feedback is worth the complexity of rollback handling.</li><li><strong>Caching is harder than it looks.</strong> A single cache layer isn’t enough. 
You need different strategies for different data types and different persistence requirements.</li><li><strong>Cursor pagination is non-negotiable at scale.</strong> Offset pagination’s linear degradation is unacceptable for any dataset that might grow.</li><li><strong>Resilience patterns compound.</strong> Retry logic, circuit breakers, and offline queues work together to create an application that survives real-world conditions.</li><li><strong>State management isn’t one-size-fits-all.</strong> Server state and client state have fundamentally different requirements.</li></ol><h3>Explore the Code</h3><p>The complete source code is available with detailed comments explaining each pattern:</p><p><a href="https://github.com/vue-simform/system-design-in-practice"><strong>github.com/vue-simform/system-design-in-practice</strong></a></p><p><strong>Start with these files:</strong></p><ul><li>src/hooks/useOptimisticMutation.ts — Optimistic updates pattern</li><li>src/hooks/useInfiniteScroll.ts — Cursor pagination</li><li>src/utils/indexedDBCache.ts — Persistent cache layer</li><li>public/service-worker.js — Offline support</li></ul><p><strong>Then explore:</strong></p><ul><li>src/utils/offlineQueue.ts — Offline mutation queue</li><li>src/utils/errorHandling.ts — Retry logic and circuit breaker</li><li>server/server.js — Backend cursor pagination</li></ul><h3>Further Reading &amp; References</h3><h4>State Management &amp; Data Fetching</h4><ul><li><a href="https://tanstack.com/query/latest/docs/framework/react/guides/important-defaults">TanStack Query — Important Defaults</a></li><li><a href="https://tanstack.com/query/v5/docs/react/guides/caching">TanStack Query — Caching Examples</a></li><li><a href="https://tanstack.com/query/latest/docs/framework/react/guides/optimistic-updates">TanStack Query — Optimistic Updates</a></li><li><a href="https://tanstack.com/query/latest/docs/framework/react/guides/mutations">TanStack Query — Mutations</a></li><li><a 
href="https://tkdodo.eu/blog/mastering-mutations-in-react-query">TkDodo — Mastering Mutations in React Query</a></li><li><a href="https://tkdodo.eu/blog/concurrent-optimistic-updates-in-react-query">TkDodo — Concurrent Optimistic Updates</a></li><li><a href="https://zustand.docs.pmnd.rs/">Zustand Documentation</a></li><li><a href="https://github.com/pmndrs/zustand">Zustand GitHub</a></li><li><a href="https://frontendmasters.com/blog/introducing-zustand/">Frontend Masters — Introducing Zustand</a></li></ul><h4>Client-Side Storage &amp; Offline</h4><ul><li><a href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API">MDN — IndexedDB API</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB">MDN — Using IndexedDB</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Basic_Terminology">MDN — IndexedDB Terminology</a></li><li><a href="https://web.dev/articles/indexeddb">web.dev — Work with IndexedDB</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers">MDN — Service Workers</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Guides/Caching">MDN — Caching in PWAs</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Guides/Offline_and_background_operation">MDN — Offline and Background Operation</a></li><li><a href="https://developer.chrome.com/docs/workbox/caching-strategies-overview">Chrome — Workbox Caching Strategies</a></li><li><a href="https://web.dev/articles/service-worker-caching-and-http-caching">web.dev — Service Worker and HTTP Caching</a></li></ul><h4>Progressive Web Apps</h4><ul><li><a href="https://web.dev/learn/pwa/progressive-web-apps/">web.dev — Progressive Web Apps</a></li><li><a href="https://developers.google.com/codelabs/pwa-training/pwa03--going-offline">Google Codelabs — Going Offline</a></li><li><a 
href="https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Tutorials/js13kGames/Offline_Service_workers">MDN — Making PWA Work Offline</a></li><li><a href="https://github.com/pazguille/offline-first">GitHub — Offline First Resources</a></li></ul><h4>Pagination</h4><ul><li><a href="https://developer.zendesk.com/documentation/api-basics/pagination/comparing-cursor-pagination-and-offset-pagination/">Zendesk — Cursor vs Offset Pagination</a></li><li><a href="https://www.milanjovanovic.tech/blog/understanding-cursor-pagination-and-why-its-so-fast-deep-dive">Milan Jovanovic — Why Cursor Pagination is Fast</a></li><li><a href="https://www.prisma.io/docs/orm/prisma-client/queries/pagination">Prisma — Pagination Documentation</a></li><li><a href="https://www.pingcap.com/article/limit-offset-pagination-vs-cursor-pagination-in-mysql/">PingCAP — Pagination in MySQL</a></li></ul><h4>Resilience Patterns</h4><ul><li><a href="https://martinfowler.com/bliki/CircuitBreaker.html">Martin Fowler — Circuit Breaker</a></li><li><a href="https://martinfowler.com/microservices/">Martin Fowler — Microservices Guide</a></li><li><a href="https://java-design-patterns.com/patterns/circuit-breaker/">Java Design Patterns — Circuit Breaker</a></li><li><a href="https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/">AWS — Exponential Backoff and Jitter</a></li><li><a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/cloud-design-patterns/retry-backoff.html">AWS — Retry with Backoff Pattern</a></li><li><a href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff">Microsoft — Implement Retries with Exponential Backoff</a></li><li><a href="https://en.wikipedia.org/wiki/Exponential_backoff">Wikipedia — Exponential Backoff</a></li><li><a href="https://www.baeldung.com/resilience4j-backoff-jitter">Baeldung — Resilience4j Backoff and Jitter</a></li></ul><h4>React 
Performance</h4><ul><li><a href="https://react.dev/reference/react/memo">React — memo</a></li><li><a href="https://react.dev/reference/react/useMemo">React — useMemo</a></li><li><a href="https://react.dev/reference/react/useCallback">React — useCallback</a></li><li><a href="https://react.dev/reference/react/lazy">React — lazy</a></li><li><a href="https://legacy.reactjs.org/docs/code-splitting.html">Legacy React Docs — Code Splitting</a></li><li><a href="https://web.dev/articles/code-splitting-suspense">web.dev — Code Splitting with React.lazy</a></li><li><a href="https://kentcdodds.com/blog/usememo-and-usecallback">Kent C. Dodds — When to useMemo and useCallback</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API">MDN — Intersection Observer API</a></li><li><a href="https://blog.logrocket.com/lazy-loading-using-the-intersection-observer-api/">LogRocket — Lazy Loading with Intersection Observer</a></li></ul><h4>Distributed Systems</h4><ul><li><a href="https://www.ibm.com/think/topics/cap-theorem">IBM — CAP Theorem</a></li><li><a href="https://en.wikipedia.org/wiki/CAP_theorem">Wikipedia — CAP Theorem</a></li><li><a href="https://docs.aws.amazon.com/whitepapers/latest/availability-and-beyond-improving-resilience/cap-theorem.html">AWS — CAP Theorem Whitepaper</a></li><li><a href="https://www.splunk.com/en_us/blog/learn/cap-theorem.html">Splunk — CAP Theorem Strategies</a></li><li><a href="https://blog.algomaster.io/p/cap-theorem-explained">AlgoMaster — CAP Theorem Explained</a></li></ul><h4>Accessibility</h4><ul><li><a href="https://www.w3.org/TR/WCAG21/">WCAG 2.1 Specification</a></li><li><a href="https://www.w3.org/WAI/standards-guidelines/wcag/">WCAG 2 Overview</a></li><li><a href="https://www.w3.org/WAI/WCAG21/Understanding/">Understanding WCAG 2.1</a></li><li><a href="https://www.w3.org/WAI/standards-guidelines/wcag/new-in-21/">What’s New in WCAG 2.1</a></li><li><a 
href="https://developer.mozilla.org/en-US/docs/Web/Accessibility/Guides/Understanding_WCAG">MDN — Understanding WCAG</a></li><li><a href="https://webaim.org/standards/wcag/checklist">WebAIM — WCAG 2 Checklist</a></li></ul><h4>Books</h4><ul><li><em>“Release It!”</em> by Michael Nygard — The book that popularized the Circuit Breaker pattern</li><li><em>“Designing Data-Intensive Applications”</em> by Martin Kleppmann — Deep dive into distributed systems, CAP theorem, and data storage</li><li><em>“High Performance Browser Networking”</em> by Ilya Grigorik — Understanding network performance, caching, and service workers</li></ul><p>Building production applications requires moving beyond tutorials. The patterns in this codebase have been battle-tested at companies handling millions of users. Now you have working code to study and adapt.</p><p><em>If this was helpful, consider starring the repo. Questions? Open an issue.</em></p><blockquote><strong>For more updates on the latest tools and technologies, follow the </strong><a href="https://medium.com/simform-engineering"><strong>Simform Engineering</strong></a><strong> blog.</strong></blockquote><blockquote><strong>Follow us: </strong><a href="https://twitter.com/simform"><strong>Twitter</strong></a><strong> | </strong><a href="https://www.linkedin.com/company/simform/"><strong>LinkedIn</strong></a></blockquote><hr><p><a href="https://medium.com/simform-engineering/system-design-in-action-building-a-production-ready-news-feed-7a334c20fee7">System Design in Action: Building a Production-Ready News Feed</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Real-World Monorepo Guide: What They Don’t Tell You]]></title>
            <link>https://medium.com/simform-engineering/the-real-world-monorepo-guide-what-they-dont-tell-you-b03e68ffe579?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/b03e68ffe579</guid>
            <category><![CDATA[nx-monorepo]]></category>
            <category><![CDATA[monorepo]]></category>
            <category><![CDATA[turborepo]]></category>
            <dc:creator><![CDATA[Akash Chauhan]]></dc:creator>
            <pubDate>Thu, 08 Jan 2026 10:04:10 GMT</pubDate>
            <atom:updated>2026-01-08T10:04:08.886Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CdI1nxoHuj3phzjYelXoNA.png" /></figure><p>I spent four hours trying to configure Nx for my first monorepo. Then I discovered Turborepo and had it running in 15 minutes. That moment taught me something important: monorepos aren’t about the tools, they’re about solving real problems with your codebase.</p><p>This isn’t another “here’s how Google does it” article. This is what worked (and what spectacularly failed) for me and my teams across three different production systems.</p><blockquote><strong><em>Full Demo Project</em></strong><em>: Want to see all these patterns in action? Check out the complete </em><a href="https://github.com/Akash52/resto-mgmt-monorepo"><em>Restaurant Management System Monorepo</em></a><em>, a production-ready example implementing everything discussed in this guide.</em></blockquote><h4>Why This Guide Exists</h4><p>You’ve probably seen monorepo articles that either:</p><ul><li>Show you a toy example that falls apart in production</li><li>Assume you have Google’s infrastructure budget</li><li>Skip the messy parts where things actually break</li></ul><p>This guide is different. We’re building a real task management app from scratch, making actual mistakes, and fixing them the way you would in production.</p><h4>What You’re Actually Building</h4><p>Not another TODO app. 
A production-ready restaurant management system with:</p><p><strong>Backend API</strong> (Express on port 3001)</p><ul><li>REST endpoints with proper validation</li><li>Complex billing engine with configurable rules</li><li>Database queries that won’t make your DBA cry</li><li>Error handling that actually helps debugging</li></ul><p><strong>Customer Frontend</strong> (React on port 5173)</p><ul><li>Real-time order billing preview</li><li>Menu browsing and cart management</li><li>API calls that handle failures gracefully</li></ul><p><strong>Admin Dashboard</strong> (React on port 3002)</p><ul><li>Restaurant configuration and management</li><li>Pricing, tax, and discount rule configuration</li><li>Menu and order management</li></ul><p><strong>Shared Packages</strong></p><ul><li>TypeScript types that prevent runtime errors</li><li>Zod validators that work on both frontend and backend</li><li>Reusable UI components</li><li>Billing engine with rule-based calculations</li><li>Database with Prisma ORM</li></ul><p>The thing that changed everything for me: <strong>you validate once, use everywhere</strong>. No more “the API accepted this, but the frontend rejected it” bugs at 3 am.</p><blockquote><strong><em>See it in action</em></strong><em>: This entire system is built and documented in the </em><a href="https://github.com/Akash52/resto-mgmt-monorepo"><em>demo repository.</em></a><em> Clone it, run it, and see exactly how everything fits together.</em></blockquote><h4>The Monorepo Decision Tree</h4><h4>Use a Monorepo When:</h4><p><strong>You’re copy-pasting code between projects — </strong>If you’ve ever thought, “I just fixed this bug in Project A, now I need to copy the fix to Project B, C, and D” — a monorepo solves this.</p><p><strong>Frontend and backend teams are out of sync — </strong>Backend ships a breaking API change. Frontend doesn’t know until production explodes. 
With monorepos, TypeScript catches this at build time.</p><p><strong>You want atomic commits across projects — </strong>Change the API signature? Update all callers in one pull request. No coordination nightmares across repos.</p><p><strong>Your team is small to medium (2–20 devs) — </strong>Large enough to have code duplication problems, small enough that you don’t need dedicated tooling teams.</p><h4>Avoid Monorepos When:</h4><p><strong>You need strict access control — </strong>Monorepos are all-or-nothing. If contractors shouldn’t see proprietary algorithms, you need separate repos.</p><p><strong>Your repo is already 2GB+ — </strong>Git performance falls off a cliff. Fresh clones take forever. Developers hate waiting.</p><p><strong>Teams are truly independent —</strong> If your mobile team never talks to your web team and they share zero code, separate repos are simpler.</p><p><strong>You’re planning to open-source parts — </strong>Extracting code from monorepos into separate repos later is painful. Plan for this upfront.</p><h4>The Truth About Monorepo Tools</h4><h4>The Tools Landscape (December 2024)</h4><p><strong>pnpm Workspaces</strong> — Start here</p><ul><li>Zero config beyond one YAML file</li><li>Handles 90% of use cases</li><li>Works with everything</li></ul><p><strong>Turborepo</strong> — Add this next</p><ul><li>Caching that actually speeds things up</li><li>Simple config (20 lines, not 200)</li><li>Built by Vercel, actively maintained</li></ul><p><strong>Nx</strong> — For the advanced use cases</p><ul><li>Comprehensive but complex</li><li>Code generators save time at scale</li><li>Best when you have 10+ packages</li></ul><p>I spent two weeks evaluating tools. 
Here’s what I learned: start simple, add complexity only when you feel the pain.</p><h4>My Actual Tool Migration Path</h4><p><strong>Month 1–3</strong>: Plain pnpm workspaces</p><ul><li>Good: Learning curve was 30 minutes</li><li>Bad: Rebuilding everything on every change</li><li>Breaking point: CI taking 8 minutes for trivial changes</li></ul><p><strong>Month 4–8</strong>: Added Turborepo</p><ul><li>Good: CI dropped to 2 minutes</li><li>Bad: Still rebuilding too much locally</li><li>Breaking point: Still manageable</li></ul><p><strong>Month 9+</strong>: Considering Nx</p><ul><li>Not there yet — Turborepo still handles our load</li></ul><p>The key insight: tool complexity should match your pain level, not your ego.</p><h4>Setting Up Your Production Monorepo</h4><h4>Project Structure That Scales</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/aac8dd1d62ee78840ef102ff1d8c36f2/href">https://medium.com/media/aac8dd1d62ee78840ef102ff1d8c36f2/href</a></iframe><p><strong>Why this structure wins:</strong></p><p><strong>Apps folder</strong> = things users interact with <strong>Packages folder</strong> = code that apps share <strong>Config package</strong> = one source of truth for tooling</p><p>I tried other structures. 
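</p><p><em>For reference, the layout in that embedded snippet is roughly the following (a sketch assembled from the app and package names used throughout this guide; the comments are illustrative):</em></p><pre>apps/
  api/            # Express backend (port 3001)
  web/            # customer frontend (port 5173)
  dashboard/      # admin dashboard (port 3002)
packages/
  types/          # shared TypeScript types + Zod validators
  ui/             # shared React components
  database/       # Prisma schema and client
  billing-engine/ # rule-based billing calculations
  config/         # shared tooling configuration</pre><p>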
This one survived three major refactorings.</p><h4>Step 1: Initialize the Workspace</h4><p><strong>Root package.json</strong>:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/87de81894a979b9239de1f57da605872/href">https://medium.com/media/87de81894a979b9239de1f57da605872/href</a></iframe><p><strong>pnpm-workspace.yaml</strong>:</p><pre>packages:<br>  - &#39;apps/*&#39;<br>  - &#39;packages/*&#39;</pre><p><strong>turbo.json</strong> (the file that saves you hours):</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/bb10db34fa18de38fa3516b1cdd21388/href">https://medium.com/media/bb10db34fa18de38fa3516b1cdd21388/href</a></iframe><p>That dependsOn: [&quot;^build&quot;] line? It means &quot;build my dependencies first&quot;. Saves you from mysterious build failures.</p><h4>The Seven Rules That Keep Monorepos Sane</h4><h4>Rule 1: Namespace Everything</h4><p><strong>Bad</strong> (one package.json per package):</p><pre>{ &quot;name&quot;: &quot;types&quot; }<br>{ &quot;name&quot;: &quot;ui&quot; }</pre><p><strong>Good</strong>:</p><pre>{ &quot;name&quot;: &quot;@demo/types&quot; }<br>{ &quot;name&quot;: &quot;@demo/ui&quot; }<br>{ &quot;name&quot;: &quot;@demo/database&quot; }<br>{ &quot;name&quot;: &quot;@demo/billing-engine&quot; }</pre><p>Why this matters: I once spent two hours debugging why imports weren’t working. Turned out our types package was conflicting with @types/node. Namespacing prevents this entirely.</p><h4>Rule 2: Use Workspace Protocol</h4><p><strong>In your package.json dependencies</strong>:</p><pre>{<br>  &quot;dependencies&quot;: {<br>    &quot;@demo/types&quot;: &quot;workspace:*&quot;,<br>    &quot;@demo/database&quot;: &quot;workspace:*&quot;,<br>    &quot;@demo/billing-engine&quot;: &quot;workspace:*&quot;,<br>    &quot;express&quot;: &quot;^4.18.2&quot;<br>  }<br>}</pre><p>That workspace:* tells pnpm &quot;link to my local package&quot;. 
Changes propagate instantly. No version bumps needed during development.</p><h4>Rule 3: Share TypeScript Config</h4><p><strong>packages/config/tsconfig.base.json</strong>:</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;strict&quot;: true,<br>    &quot;esModuleInterop&quot;: true,<br>    &quot;skipLibCheck&quot;: true,<br>    &quot;forceConsistentCasingInFileNames&quot;: true,<br>    &quot;resolveJsonModule&quot;: true,<br>    &quot;isolatedModules&quot;: true,<br>    &quot;noUnusedLocals&quot;: true,<br>    &quot;noUnusedParameters&quot;: true,<br>    &quot;noFallthroughCasesInSwitch&quot;: true<br>  }<br>}</pre><p><strong>Each app extends it:</strong></p><pre>{<br>  &quot;extends&quot;: &quot;@demo/typescript-config/node.json&quot;,<br>  &quot;compilerOptions&quot;: {<br>    &quot;outDir&quot;: &quot;dist&quot;,<br>    &quot;rootDir&quot;: &quot;src&quot;<br>  }<br>}</pre><p>Update TypeScript settings once, applies everywhere. Changed my life.</p><h4>Rule 4: Controlled Package Exports</h4><p><strong>packages/types/package.json</strong>:</p><pre>{<br>  &quot;name&quot;: &quot;@demo/types&quot;,<br>  &quot;exports&quot;: {<br>    &quot;.&quot;: {<br>      &quot;types&quot;: &quot;./dist/index.d.ts&quot;,<br>      &quot;default&quot;: &quot;./dist/index.js&quot;<br>    },<br>    &quot;./validators&quot;: {<br>      &quot;types&quot;: &quot;./dist/validators.d.ts&quot;,<br>      &quot;default&quot;: &quot;./dist/validators.js&quot;<br>    }<br>  }<br>}</pre><p>Now you can:</p><pre>import { Restaurant, MenuItem } from &#39;@demo/types&#39;;<br>import { createRestaurantSchema, createMenuItemSchema } from &#39;@demo/types/validators&#39;;</pre><p>Prevents importing server-only code in the browser. Saved me from a 500KB bundle size disaster.</p><h4>Rule 5: One Source of Truth for Validation</h4><p>This is the game-changer. 
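</p><p><em>Before looking at the real file, here is the idea in a dependency-free sketch (the repo uses Zod; the names <code>MenuItemInput</code> and <code>validateMenuItem</code> below are illustrative, not from the repo):</em></p>

```typescript
// One validation function, exported from the shared types package and
// called by both the Express route handler and the React form, so the
// two sides can never disagree on what "valid" means.
// (Sketch only; the actual repo defines Zod schemas instead.)
export interface MenuItemInput {
  name: string;
  priceCents: number;
}

export type ValidationResult =
  | { ok: true; value: MenuItemInput }
  | { ok: false; errors: string[] };

export function validateMenuItem(input: unknown): ValidationResult {
  const data = (input ?? {}) as Partial<MenuItemInput>;
  const errors: string[] = [];
  if (typeof data.name !== "string" || data.name.trim().length === 0) {
    errors.push("name must be a non-empty string");
  }
  if (
    typeof data.priceCents !== "number" ||
    !Number.isInteger(data.priceCents) ||
    data.priceCents < 0
  ) {
    errors.push("priceCents must be a non-negative integer");
  }
  return errors.length === 0
    ? { ok: true, value: data as MenuItemInput }
    : { ok: false, errors };
}
```

<p>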
<strong>packages/types/src/validators.ts</strong>:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/56925ad5eadc1565ca4d2f61ba2acbd2/href">https://medium.com/media/56925ad5eadc1565ca4d2f61ba2acbd2/href</a></iframe><p><strong>Backend uses it</strong>:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8323cb3b94eb0589d7754a3906f0f04f/href">https://medium.com/media/8323cb3b94eb0589d7754a3906f0f04f/href</a></iframe><p><strong>Frontend uses it</strong>:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b98b1a67e9b25b360907d5f461215774/href">https://medium.com/media/b98b1a67e9b25b360907d5f461215774/href</a></iframe><p><strong>Why this is huge</strong>: Change the validation rules once, and both frontend and backend enforce them. No more “API rejected valid frontend data” bugs.</p><h4>Rule 6: Dev Server Proxy (CORS Killer)</h4><p><strong>apps/web/vite.config.ts</strong>:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/fd89a9b070c8d7ff0e32395edafff7e5/href">https://medium.com/media/fd89a9b070c8d7ff0e32395edafff7e5/href</a></iframe><p><strong>What this does</strong>:</p><ul><li>Frontend runs on localhost:5173</li><li>Backend runs on localhost:3001</li><li>Request to /api/tasks gets proxied to backend</li><li>Zero CORS configuration needed</li></ul><p>In production, both are on the same domain. 
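</p><p><em>The embedded vite.config.ts boils down to something like this sketch (option names follow Vite’s <code>server.proxy</code> API; verify against your Vite version):</em></p>

```typescript
// apps/web/vite.config.ts (sketch, not the repo's exact file)
const config = {
  server: {
    proxy: {
      // Anything under /api is forwarded to the Express API on port 3001,
      // so the browser only ever talks to the dev server (no CORS setup).
      "/api": {
        target: "http://localhost:3001",
        changeOrigin: true,
      },
    },
  },
};

export default config;
```

<p>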
No code changes required.</p><h4>Rule 7: Build Outputs</h4><p><strong>Option A: Separate Builds</strong> Each app builds independently, deployed separately.</p><p><strong>Option B: Unified Build</strong> (what I use) Frontend builds into API’s public folder:</p><pre>// apps/web/vite.config.ts<br>export default {<br>  build: {<br>    outDir: &#39;../api/public&#39;<br>  }<br>}</pre><p>API serves static files:</p><pre>// apps/api/src/index.ts<br>app.use(express.static(&#39;public&#39;));<br>app.get(&#39;*&#39;, (req, res) =&gt; {<br>  res.sendFile(path.join(__dirname, &#39;../public/index.html&#39;));<br>});</pre><p>One build, one deployment. Simpler infrastructure.</p><h4>Common Mistakes (That I Made So You Don’t Have To)</h4><h4>Mistake 1: Not Setting Up Caching</h4><p><strong>Symptom</strong>: Every change rebuilds everything. CI takes forever.</p><p><strong>Fix</strong>: Add this to turbo.json:</p><pre>{<br>  &quot;tasks&quot;: {<br>    &quot;build&quot;: {<br>      &quot;outputs&quot;: [&quot;dist/**&quot;],<br>      &quot;dependsOn&quot;: [&quot;^build&quot;]<br>    }<br>  }<br>}</pre><p>Turborepo caches build outputs. Second build is instant if nothing changed.</p><h4>Mistake 2: Circular Dependencies</h4><p><strong>What happened</strong>: Package A imports Package B, Package B imports Package A. Build fails mysteriously.</p><p><strong>Fix</strong>: Design rule — shared packages can’t import from apps. Apps import from packages. 
Never the reverse.</p><pre>✅ apps/api → packages/types<br>✅ apps/api → packages/database<br>✅ apps/api → packages/billing-engine<br>✅ apps/web → packages/types<br>✅ apps/web → packages/ui<br>✅ apps/dashboard → packages/types<br>✅ apps/dashboard → packages/ui<br>✅ packages/ui → packages/types<br>✅ packages/billing-engine → packages/types<br>❌ packages/types → apps/api</pre><h4>Mistake 3: Forgetting to Build Packages</h4><p><strong>Symptom</strong>: Import from @demo/types shows &quot;Cannot find module&quot;.</p><p><strong>Fix</strong>: Packages need build steps. In package.json:</p><pre>{<br>  &quot;scripts&quot;: {<br>    &quot;build&quot;: &quot;tsc&quot;,<br>    &quot;dev&quot;: &quot;tsc --watch&quot;<br>  }<br>}</pre><p>Run pnpm build from root. Turborepo handles the order.</p><h4>Mistake 4: Not Using Path Aliases</h4><p><strong>Problem</strong>: Import paths like ../../../../shared/utils are fragile.</p><p><strong>Fix</strong>: Use unique prefixes per app:</p><p><strong>apps/web/tsconfig.json:</strong></p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;paths&quot;: {<br>      &quot;@/web/*&quot;: [&quot;./src/*&quot;]<br>    }<br>  }<br>}</pre><p><strong>apps/api/tsconfig.json</strong>:</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;paths&quot;: {<br>      &quot;@/api/*&quot;: [&quot;./src/*&quot;]<br>    }<br>  }<br>}</pre><p>Now you can:</p><pre>import { TaskList } from &#39;@/web/components/TaskList&#39;;<br>import { taskService } from &#39;@/api/services/tasks&#39;;</pre><p>Unique prefixes prevent conflicts.</p><h4>Mistake 5: Overusing Shared Code</h4><p><strong>What I did wrong</strong>: Created packages/utils and threw everything in there.</p><p><strong>Why it’s bad</strong>: The shared package becomes a dumping ground. Grows to 50+ utilities. 
Half are used once.</p><p><strong>Better approach</strong>: Create specific shared packages:</p><ul><li>packages/types - only types and validators</li><li>packages/ui - only React components</li><li>packages/database - only Prisma schema and client</li><li>packages/billing-engine - only billing calculation logic</li><li>packages/typescript-config - only shared TypeScript configs</li></ul><p>Keep packages focused. If a utility is used by only one app, keep it in that app.</p><h4>Real Performance Numbers</h4><p>From my actual projects:</p><h4>Installation Speed</h4><ul><li><strong>npm</strong>: 45 seconds (fresh install)</li><li><strong>yarn</strong>: 38 seconds</li><li><strong>pnpm</strong>: 15 seconds</li></ul><p>pnpm wins by sharing packages across projects. 50–80% disk space savings.</p><h4>Build Speed (with Turborepo)</h4><ul><li><strong>Without caching</strong>: 28 seconds</li><li><strong>With caching (no changes)</strong>: 0.8 seconds</li><li><strong>With caching (types changed)</strong>: 4.2 seconds</li></ul><p>Turborepo only rebuilds what changed. 40–85% faster CI builds.</p><h4>Git Clone Size</h4><ul><li><strong>Small monorepo (10 packages)</strong>: 45MB</li><li><strong>Medium monorepo (30 packages)</strong>: 180MB</li><li><strong>Large monorepo (50+ packages)</strong>: 350MB+</li></ul><p>Storage grows fast. This is the monorepo tax. Plan accordingly.</p><h4>When Things Go Wrong: Debugging Guide</h4><h4>“Module not found” errors</h4><p><strong>Check these in order</strong>:</p><ol><li>Did you run pnpm install at root?</li><li>Did you build the package? (pnpm build)</li><li>Is the package name correct in dependencies?</li><li>Does the package have an exports field?</li></ol><p><strong>Quick fix</strong>:</p><pre>cd packages/types<br>pnpm build<br>cd ../..<br>pnpm install</pre><h4>Turborepo not caching</h4><p><strong>Check</strong>:</p><ol><li>Is turbo.json in root?</li><li>Do tasks have outputs defined?</li><li>Did you change env variables? 
(Busts cache)</li></ol><p><strong>Force clean cache</strong>:</p><pre>turbo run build --force</pre><h4>TypeScript errors in imports</h4><p><strong>Common cause</strong>: Source files in src/, built files in dist/, but imports point to src/.</p><p><strong>Fix in package.json</strong>:</p><pre>{<br>  &quot;main&quot;: &quot;./dist/index.js&quot;,<br>  &quot;types&quot;: &quot;./dist/index.d.ts&quot;<br>}</pre><p>Point to built files, not source.</p><h4>Circular dependency hell</h4><p><strong>Symptoms</strong>: Mysterious build order failures, “Cannot access before initialization” errors.</p><p><strong>Find the culprit</strong>:</p><pre>npx madge --circular --extensions ts ./apps ./packages</pre><p><strong>Fix</strong>: Restructure imports to break the cycle. Usually means extracting shared code to a lower-level package.</p><h4>The Migration Path: Moving to Monorepo</h4><h4>If You Have Existing Repos</h4><p><strong>Week 1: Setup</strong></p><ol><li>Create new monorepo</li><li>Set up pnpm-workspace.yaml</li><li>Add turbo.json</li><li>Don’t move code yet</li></ol><p><strong>Week 2–3: Move One App</strong></p><ol><li>Copy first app to apps/</li><li>Update imports to use workspaces</li><li>Verify builds work</li><li>Keep original repo as backup</li></ol><p><strong>Week 4+: Move Remaining Apps</strong></p><ol><li>One app per week</li><li>Extract shared code to packages as you go</li><li>Update CI/CD pipelines</li><li>Delete old repos only after successful deploys</li></ol><p><strong>Don’t</strong>: Move everything at once. 
Recipe for disaster.</p><p><strong>Do</strong>: Incremental migration with rollback plan.</p><h4>Alternative Approaches (When Monorepo Isn’t Right)</h4><h4>Git Submodules</h4><p>Separate repos, referenced in parent repo.</p><p><strong>Pros</strong>: Separate histories, fine-grained access <strong>Cons</strong>: Complex to work with, easy to mess up</p><h4>Private npm Registry</h4><p>Publish internal packages privately.</p><p><strong>Options</strong>:</p><ul><li>Verdaccio (free, self-hosted)</li><li>GitHub Packages (free with GitHub)</li><li>npm private packages ($7/user/month)</li></ul><p><strong>When to use</strong>: Multiple teams, strict access control, willing to manage versions.</p><h4>My Actual Development Workflow</h4><p><strong>Starting work</strong>:</p><pre>git pull<br>pnpm install           # Installs all deps<br>pnpm dev              # Starts all apps</pre><p>Turborepo runs everything in parallel. API on 3001, web on 5173.</p><p><strong>Making changes</strong>:</p><ol><li>Edit code in any package</li><li>Hot reload updates instantly</li><li>TypeScript errors show immediately</li><li>Shared packages rebuild automatically</li></ol><p><strong>Before committing</strong>:</p><pre>pnpm lint             # Lints everything<br>pnpm test             # Runs all tests<br>pnpm build            # Builds for production</pre><p>Turborepo runs tasks in dependency order. Caches what hasn’t changed.</p><p><strong>Committing</strong>:</p><pre>git add .<br>git commit -m &quot;Add task filtering&quot;<br>git push</pre><p>CI runs same commands. 
Cache hits make it fast.</p><h4>The Verdict After Three Years</h4><p><strong>What worked</strong>:</p><ul><li>Shared types eliminated entire classes of bugs</li><li>Turborepo caching genuinely saved hours</li><li>One repo simplified onboarding</li><li>Atomic commits across frontend/backend were liberating</li></ul><p><strong>What didn’t</strong>:</p><ul><li>Git history got messy at 50+ packages</li><li>Hard to give contractors partial access</li><li>Some deployment platforms assumed one app per repo</li><li>Learning curve steeper than expected</li></ul><p><strong>Would I do it again?</strong></p><p>For my use case (small team, shared codebase, TypeScript stack): <strong>Absolutely.</strong></p><p>For a different context (large org, strict access control, polyglot): <strong>Probably not</strong>.</p><h4>Quick Decision Checklist</h4><p>Use monorepo if:</p><ul><li>You have 2–20 developers</li><li>Teams work on overlapping code</li><li>You value type safety across boundaries</li><li>CI/CD complexity isn’t a blocker</li><li>Everyone can access all code</li></ul><p>Avoid monorepo if:</p><ul><li>Need strict access control</li><li>Repo is already 2GB+</li><li>Teams are truly independent</li><li>Deployment tooling fights single repos</li><li>Team resists the change</li></ul><h4>Resources That Actually Helped</h4><p><strong>Must-reads</strong>:</p><ul><li>Turborepo docs (best quickstart guide)</li><li>pnpm workspace guide (simple, practical)</li><li>Nx documentation (when you need advanced features)</li></ul><p><strong>Avoid</strong>:</p><ul><li>Articles that only mention Google/Facebook scale</li><li>Tutorials that skip production deployment</li><li>Comparisons that cherry-pick metrics</li></ul><p><strong>Join communities</strong>:</p><ul><li>Turborepo Discord (active, helpful)</li><li>Nx community (advanced discussions)</li><li>r/javascript, r/typescript (general advice)</li></ul><h3>Final Thoughts</h3><p>Monorepos aren’t magic. 
They’re a tool that solves specific problems: ✅</p><ul><li>Code duplication between projects</li><li>Type safety across boundaries</li><li>Coordinating changes across apps</li><li>Sharing configs and tooling</li></ul><p>They create new problems: ❌</p><ul><li>Git performance at scale</li><li>Access control complexity</li><li>Deployment integration work</li><li>Steeper learning curve</li></ul><p>The decision isn’t “are monorepos good?” It’s “do monorepos solve my specific problems better than the alternatives?”</p><p>For me, on three different projects, the answer was yes. Your mileage will vary.</p><p>Start simple. Add complexity when you feel the pain. Measure results. Iterate.</p><p>That’s the real secret to monorepo success.</p><h4>Complete Working Example</h4><p>All code examples in this guide come from a real, working project:</p><p><a href="https://github.com/Akash52/resto-mgmt-monorepo"><strong>Restaurant Management System — Full Monorepo Demo</strong></a></p><ul><li>Complete source code with all patterns implemented</li><li>Comprehensive documentation and setup guides</li><li>Real-world billing engine with configurable rules</li><li>Production-ready structure verified against best practices</li><li>Ready to clone, run, and learn from</li></ul><p>Clone it, explore it, use it as a template for your own monorepo!</p><p><strong>Questions? Hit me up.</strong> I’ve made every mistake there is to make. 
Happy to save you some time.</p><blockquote><strong>For more updates on the latest tools and technologies, follow the </strong><a href="https://medium.com/simform-engineering"><strong>Simform Engineering</strong></a><strong> blog.</strong></blockquote><blockquote><strong>Follow us: </strong><a href="https://twitter.com/simform"><strong>Twitter</strong></a><strong> | </strong><a href="https://www.linkedin.com/company/simform/"><strong>LinkedIn</strong></a></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b03e68ffe579" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/the-real-world-monorepo-guide-what-they-dont-tell-you-b03e68ffe579">The Real-World Monorepo Guide: What They Don’t Tell You</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlock Native Power in Flutter [Part 2]]]></title>
            <link>https://medium.com/simform-engineering/unlock-native-power-in-flutter-part-2-eac514311bed?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/eac514311bed</guid>
            <category><![CDATA[android]]></category>
            <category><![CDATA[dart]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[cross-platform]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[AhemadAbbas Vagh]]></dc:creator>
            <pubDate>Mon, 05 Jan 2026 07:16:03 GMT</pubDate>
            <atom:updated>2026-01-05T07:16:02.603Z</atom:updated>
            <content:encoded><![CDATA[<h4>Learn how to embed native UI components and implement advanced integration patterns in Flutter</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aYQ7i1sCPZnBWiBrru7oiw.png" /></figure><p>Welcome back! In <strong>Part 1</strong>, we explored Method Channels and Event Channels — the communication foundation between Flutter and native code. In <strong>Part 2</strong>, we’ll dive into <strong>Platform Views</strong>, which allow you to embed native UI components directly into your Flutter app, along with best practices and testing strategies.</p><blockquote><a href="https://medium.com/@ahemad7429/438d6fe634e4">📖 <strong>← Back to Part 1: Method Channels and Event Channels</strong></a></blockquote><blockquote>📖 <strong>Part 2: Platform Views and Advanced Patterns (You’re here)</strong></blockquote><h2>Platform Views: Embedding Native UI</h2><p>Sometimes you need to embed <strong>native UI components</strong> directly into your Flutter app. This is where <strong>Platform Views</strong> come in. They allow you to display native Android Views or iOS UIViews within your Flutter widget tree.</p><h4>Common Use Cases</h4><ul><li>Google Maps SDK</li><li>WebView with platform-specific features</li><li>Camera preview</li><li>Video players</li><li>Native ads</li><li>AR components</li></ul><h4>Performance Considerations</h4><p>Platform Views come with a performance cost because they require hybrid composition or virtual displays. 
Use them only when necessary.</p><h4>Platform Comparison</h4><p><strong>Android — Hybrid Composition (Default)</strong><br>✅ Better performance<br>✅ Recommended for production apps</p><p><strong>Android — Virtual Display (Legacy)</strong><br>⚠️ Legacy mode<br>⚠️ Use only for backward compatibility</p><p><strong>iOS — UIKitView</strong><br>✅ Good performance<br>✅ Standard approach for iOS</p><h4>Implementation Example: Native Text View</h4><p>Let’s build a simple native text view that renders a platform-specific label (TextView on Android, UILabel on iOS).</p><h4>Step 1: Add an Android platform-specific implementation</h4><p>Start by creating the <strong>Native View</strong> logic. This class implements PlatformView and handles the actual rendering of the Android view.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mG1tNQK5iVHH3Yw4RfUvRA.png" /></figure><p>Next, create a <strong>Factory</strong> class that creates instances of your native view.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*g-s4XXEre98TD_eBtEePAQ.png" /></figure><p>Finally, <strong>register the factory</strong> in your MainActivity.kt. This tells the Flutter engine about your custom view type.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Pz1tCiYmQoXyaZ6eLOz1Og.png" /></figure><h4>Step 2: Add an iOS platform-specific implementation</h4><p>First, create the <strong>Native View</strong> class. 
This class inherits from NSObject and implements FlutterPlatformView.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YG8zRv0ayLUaQQtmmaQdRg.png" /></figure><p>Next, create the <strong>Factory</strong> class.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WgnV6VymvxbWFzVP2f_Y5Q.png" /></figure><p>Finally, register the factory in AppDelegate.swift.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gsKt24a4X4_c8liufTEq1w.png" /></figure><p><em>Note: In a real plugin, you would register this in your plugin’s main class. For the app runner, you can register it in the App Delegate.</em></p><h4>Step 3: Create the Flutter Platform View widget</h4><p>Now you can use the native view in your Flutter layout. Use AndroidView for Android and UiKitView for iOS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8Qcy2dbj-HHy7x8GS1K0nA.png" /></figure><h4>Limitations of Platform Views</h4><p>Platform Views are a powerful tool, but they are not a silver bullet. Consider these limitations:</p><ul><li><strong>Performance Impact</strong>: Rendering native views is more expensive than rendering Flutter widgets. Each frame requires synchronization between the Flutter engine and the native UI system.</li><li><strong>Gesture Conflicts</strong>: Touch events pass through multiple layers. Sometimes, gestures might be consumed by the native view and not reach Flutter, or vice-versa. 
Careful configuration of gestureRecognizers is often needed.</li><li><strong>Keyboard Interactions</strong>: Handling focus and soft keyboards can be tricky when mixing Flutter text fields with native text fields, especially on Android versions using Virtual Displays.</li><li><strong>Frame Lag</strong>: In some scenarios, particularly with complex animations, the native view might lag one frame behind the Flutter UI due to the asynchronous nature of the rendering pipeline.</li><li><strong>Creation Param Size Limit</strong>: The creationParams are passed from Flutter to the native side using a platform channel. Therefore, they are subject to the same buffer size limitations as standard method calls (approx. 1MB on Android).</li></ul><h4>Best Practices for Platform Views</h4><ul><li><strong>Size constraints</strong>: Always provide explicit size constraints</li><li><strong>Gesture handling</strong>: Configure gesture recognizers appropriately</li><li><strong>Memory management</strong>: Properly dispose of resources</li><li><strong>Performance</strong>: Use sparingly, as they can impact rendering performance</li><li><strong>Hybrid composition</strong>: Prefer Android’s hybrid composition for better performance</li></ul><h4>Security Considerations</h4><ul><li><strong>Validate inputs</strong>: Never trust data from either side</li><li><strong>Permission handling</strong>: Request permissions appropriately</li><li><strong>Secure data transmission</strong>: Encrypt sensitive data</li><li><strong>Resource limits</strong>: Prevent resource exhaustion</li></ul><h2>Testing Platform Channels</h2><p>Testing platform interactions doesn’t require running on a real device. You can mock the channel responses in your widget tests or unit tests.</p><h4>Step 1: Test Setup</h4><p>Ensure you have the flutter_test dependency. 
In your test file, initialize the test binding.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7nKOandcz9iXlvitk6KbGQ.png" /></figure><h4>Step 2: Mock the Method Call Handler</h4><p>Intercept calls to the platform channel and return a mock response. This simulates the native side returning data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F8mkn0QAoUdpxw_bRhXCbw.png" /></figure><h4>Step 3: Verify Results</h4><p>Write the actual test case to verify that your Dart code responds correctly to the mocked native data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kAJwDVclHS-UbUWiGOSE9Q.png" /></figure><h2>Testing Event Channels</h2><p>Testing EventChannel streams requires mocking the defaultBinaryMessenger to emit events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fwe4csrMPQT7vdrJZpgl7A.png" /></figure><h2>Bonus: Type-safe Channels with Pigeon</h2><p>Writing manual method channel code (strings and maps) is error-prone. 
For production apps, consider using <a href="https://pub.dev/packages/pigeon"><strong>Pigeon</strong></a>.</p><p><strong>Pigeon</strong> generates type-safe Dart, Java/Kotlin, and Objective-C/Swift code from a simple API definition.</p><ol><li><strong>Define API in Dart</strong>: abstract class DeviceApi { String getDeviceModel(); }</li><li><strong>Run Pigeon</strong>: Generates the glue code for you.</li><li><strong>Implement</strong>: Just implement the generated interface on Android/iOS.</li><li><strong>Call</strong>: Use the generated Dart class to call native methods safely.</li></ol><h2>Quick Decision Guide</h2><ul><li><strong>Use Method Channels when:</strong> ✔️ Making one-off API calls ✔️ Calling platform-specific functions ✔️ Need bidirectional communication</li><li><strong>Use Event Channels when:</strong> ✔️ Streaming continuous data ✔️ Monitoring sensors or system events ✔️ Real-time updates from native code</li><li><strong>Use Platform Views when:</strong> ✔️ Embedding native UI components ✔️ Integrating third-party native SDKs with UI ✔️ Need platform-specific rendering</li></ul><h2>Conclusion</h2><p>Mastering Flutter’s native integration isn’t just about adding features; it’s about removing the invisible barriers of cross-platform development. By leveraging Method Channels, Event Channels, and Platform Views, you effectively “hack” the limitations of a UI-only framework, transforming Flutter into a high-performance, native-powerhouse builder.</p><p>The real “power” doesn’t come from choosing between Flutter or Native. It comes from the ability to <strong>bridge them seamlessly.</strong> You now have the architectural blueprints to build apps that are as fluid as Dart and as powerful as the underlying OS.</p><p>The bridge is built. The power is unlocked. 
<strong>What will you build next?</strong></p><h2>Further Resources</h2><ul><li><a href="https://docs.flutter.dev/platform-integration/platform-channels">Official Platform Channels Documentation</a></li><li><a href="https://docs.flutter.dev/development/packages-and-plugins/developing-packages">Flutter Plugins Development</a></li><li><a href="https://docs.flutter.dev/platform-integration/android/platform-views">Platform Views Deep Dive</a></li><li><a href="https://github.com/flutter/samples">Sample Code Repository</a></li></ul><blockquote><a href="https://medium.com/@ahemad7429/438d6fe634e4">📖 <strong>← Back to Part 1: Method Channels and Event Channels</strong></a></blockquote><blockquote><strong><em>For more updates on the latest tools and technologies, follow the </em></strong><a href="https://medium.com/simform-engineering"><strong><em>Simform Engineering</em></strong></a><strong><em> blog.</em></strong></blockquote><blockquote><strong><em>Follow us: </em></strong><a href="https://twitter.com/simform"><strong><em>Twitter</em></strong></a><strong><em> | </em></strong><a href="https://www.linkedin.com/company/simform/"><strong><em>LinkedIn</em></strong></a></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=eac514311bed" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/unlock-native-power-in-flutter-part-2-eac514311bed">Unlock Native Power in Flutter [Part 2]</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlock Native Power in Flutter [Part 1]]]></title>
            <link>https://medium.com/simform-engineering/unlock-native-power-in-flutter-part-1-438d6fe634e4?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/438d6fe634e4</guid>
            <category><![CDATA[dart]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[cross-platform]]></category>
            <category><![CDATA[flutter]]></category>
            <dc:creator><![CDATA[AhemadAbbas Vagh]]></dc:creator>
            <pubDate>Wed, 31 Dec 2025 11:37:42 GMT</pubDate>
            <atom:updated>2025-12-31T11:37:36.863Z</atom:updated>
            <content:encoded><![CDATA[<h4>Learn how to bridge Flutter with native Android and iOS code using Method Channels and Event Channels</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5smbDEnM0k4zoZekjl85lQ.png" /></figure><p>Flutter has revolutionized cross-platform development, but let’s face it, there are times when you need to tap into platform-specific APIs that Flutter doesn’t expose out of the box. Whether it’s accessing native sensors, integrating third-party SDKs, or leveraging platform-specific UI components, understanding how to bridge the gap between Flutter and native code is essential for building production-ready apps.</p><p>In this two-part series, we’ll learn about the complete spectrum of Flutter-native interaction mechanisms. <strong>Part 1</strong> focuses on <strong>Method Channels</strong> and <strong>Event Channels</strong>, the foundation of Flutter-native communication. <strong>Part 2</strong> will cover <strong>Platform Views</strong> and advanced integration patterns.</p><blockquote>📖 <strong>Part 1: Method Channels and Event Channels (You’re here)</strong></blockquote><blockquote><a href="https://medium.com/@ahemad7429/eac514311bed">📖 <strong>Continue to Part 2: Platform Views and Advanced Patterns →</strong></a></blockquote><h2>Why Native Integration Matters</h2><p>Before we dive into the technical details, let’s understand why native integration is crucial:</p><ul><li><strong>Platform-specific APIs</strong>: Bluetooth, NFC, advanced camera features</li><li><strong>Third-party SDKs</strong>: Payment gateways, analytics, crash reporting</li><li><strong>Performance optimization</strong>: Heavy computations, image processing</li><li><strong>Native UI components</strong>: MapView, WebView with advanced features</li><li><strong>Background tasks</strong>: Geolocation tracking, push notifications</li></ul><p>Flutter provides excellent coverage for most use cases, but the ecosystem is vast, and sometimes you need 
that extra mile of platform-specific functionality.</p><h2>Platform Channels</h2><p>At its core, Flutter uses <strong>platform channels</strong> to communicate between Dart code and native code (Kotlin/Java for Android, Swift/Objective-C for iOS). Think of it as a <strong>message-passing bridge</strong> that serializes data across the platform boundary.</p><h4>Key Concepts</h4><ol><li><strong>Channel Names:</strong> Unique identifiers for communication channels</li><li><strong>Method Codecs</strong>: Serialize data between Dart and native (StandardMethodCodec, JSONMethodCodec)</li><li><strong>Platform-specific implementations</strong>: Separate code for Android and iOS</li></ol><h4>Method Channels: Bidirectional Communication</h4><p><strong>Method Channels</strong> are the most common way to invoke platform-specific code from Flutter. They support <strong>asynchronous method calls</strong> with responses.</p><h4>How It Works</h4><ol><li>Flutter calls a method on the Dart side</li><li>The call is serialized and sent to the native side</li><li>Native code processes the request</li><li>Result is serialized and returned to Flutter</li></ol><h4>Implementation Example: Device Model</h4><p>Let’s build a practical example to fetch the device model name (e.g., “Pixel 6” or “iPhone 14”).</p><h4>Step 1: Create the Flutter platform client</h4><p>The app’s State class holds the current app state. Extend that to hold the current device model.</p><p>First, construct the channel. Use a MethodChannel with a single platform method that returns the device model.</p><p>The client and host sides of a channel are connected through a channel name passed in the channel constructor. 
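</p><p>In spirit, the messenger on each side is little more than a lookup table from channel name to handler. Here is a toy Java sketch (not Flutter’s actual implementation; names and return values are illustrative):</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy binary messenger: both sides of a channel meet purely through the name string.
public class ChannelRegistry {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // Host side: register a handler under a channel name (last registration wins).
    public void setHandler(String channel, Function<String, String> handler) {
        handlers.put(channel, handler);
    }

    // Client side: send a message to whatever handler currently owns that name.
    public String send(String channel, String message) {
        Function<String, String> h = handlers.get(channel);
        return h == null ? null : h.apply(message);
    }

    public static void main(String[] args) {
        ChannelRegistry messenger = new ChannelRegistry();
        messenger.setHandler("com.example.app/device", m -> "Pixel 6");
        System.out.println(messenger.send("com.example.app/device", "getDeviceModel"));
        // Registering another handler under the same name silently replaces the first:
        messenger.setHandler("com.example.app/device", m -> "hijacked by the second handler");
        System.out.println(messenger.send("com.example.app/device", "getDeviceModel"));
    }
}
```

<p>Because a later registration under the same name silently replaces the earlier one, colliding channel names would hijack each other’s traffic. 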
All channel names used in a single app must be unique; prefix the channel name with a unique ‘domain prefix’, for example: com.example.app/device.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mLDluF_Cc9A3fqAWdm9ZMA.png" /></figure><p>Next, invoke a method on the method channel, specifying the concrete method to call using the String identifier getDeviceModel. The call might fail—for example, if the platform doesn&#39;t support the platform API (such as when running in a simulator), so wrap the invokeMethod call in a try-catch statement.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ntKznJBnkQDuCWCOEGNw3Q.png" /></figure><p>Finally, replace the build method from the template to contain a small user interface that displays the device model in a string, and a button for refreshing the value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wdEfs2vjAHYKetXhuFvNOw.png" /></figure><h4>Step 2: Add an Android platform-specific implementation</h4><p>Start by opening the Android host portion of your Flutter app:</p><ol><li>Navigate to the directory holding your Flutter app, and select the android folder inside it.</li><li>Open the file MainActivity.kt located in the kotlin folder in the Project view.</li></ol><p>Inside the configureFlutterEngine() method, create a MethodChannel and call setMethodCallHandler(). Make sure to use the same channel name as was used on the Flutter client side.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Tdu0D7NqnVBqEBFwv_Vl8Q.png" /></figure><p>Add the Android Kotlin code that uses the Android Build API to retrieve the device model. 
This code is exactly the same as you would write in a native Android app.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*qP3YiqyjFaF-2Aw6tixAtg.png" /></figure><h4>Step 3: Add an iOS platform-specific implementation</h4><p>Start by opening the iOS host portion of your Flutter app:</p><ol><li>Navigate to the directory holding your Flutter app, and select the ios folder.</li><li>Open the file AppDelegate.swift located under <strong>Runner &gt; Runner</strong> in the Project navigator.</li></ol><p>Override the application:didFinishLaunchingWithOptions: function and create a FlutterMethodChannel tied to the channel name com.example.app/device.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S716aaQAyWTAvWPJUhPl9g.png" /></figure><p>Next, add the iOS Swift code that uses the UIDevice API to retrieve the device model.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*01w-y_7YW8aEO9yg-IpsLw.png" /></figure><h4>Understanding the Response Mechanism</h4><p>The result object (Android) and completion handler (iOS) are used to communicate back to the Flutter client. There are three primary responses:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tclnKPQ9z9-rQmV8QeS6Lg.png" /></figure><ul><li><strong>success</strong>: Use this to return data (Strings, Maps, Lists, etc.) or null if the operation completed but has no return value.</li><li><strong>error</strong>: Use this for expected failures (e.g., “SENSOR_MISSING”, “PERMISSION_DENIED”). 
This allows you to handle platform errors gracefully in Dart using try-catch.</li><li><strong>notImplemented</strong>: This is a fallback to inform the Flutter client that the channel is registered, but the specific method name requested is not recognized by the native implementation.</li></ul><h4>Key Takeaways</h4><ul><li><strong>Channel naming</strong>: Use reverse domain notation for uniqueness</li><li><strong>Error handling</strong>: Always wrap calls in try-catch blocks</li><li><strong>Type safety</strong>: Method channels support standard Dart types (int, String, List, Map)</li><li><strong>Async nature</strong>: All method calls are asynchronous</li></ul><h2>Event Channels: Streaming Data</h2><p>While Method Channels are great for one-off requests, <strong>Event Channels</strong> shine when you need <strong>continuous streams of data</strong> from native code to Flutter. Think sensor data, location updates, or real-time notifications.</p><h4>Use Cases</h4><ul><li>Accelerometer/Gyroscope data</li><li>GPS location tracking</li><li>Battery state monitoring</li><li>Network connectivity changes</li><li>BLE device scanning</li></ul><h4>Implementation Example: Accelerometer Stream</h4><p>Let’s implement a stream that listens to accelerometer sensor updates.</p><h4>Step 1: Create the Flutter platform client</h4><p>The app’s State class holds the current app state. Extend that to hold the current accelerometer data.</p><p>First, construct the channel. Use an EventChannel instead of a MethodChannel, as we are listening to a stream of events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wXQ5O4l0y96CQ0vVcSTdQg.png" /></figure><p>Next, listen to the stream. We’ll start listening in initState and cancel the subscription in dispose. 
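</p><p>Before wiring up the real sensor, the listen/cancel contract that the native StreamHandler must honor can be sketched in plain Java (Consumer stands in for Flutter’s EventSink; the readings are made up):</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of the EventChannel StreamHandler lifecycle: the native side receives
// an event sink when Dart subscribes and must stop emitting on cancel.
public class SensorStream {
    private Consumer<double[]> eventSink; // stands in for Flutter's EventSink

    // Called when the Dart side starts listening (onListen).
    public void onListen(Consumer<double[]> sink) { this.eventSink = sink; }

    // Called when the Dart side cancels the subscription (onCancel).
    public void onCancel() { this.eventSink = null; }

    // Simulated sensor callback: forward readings only while someone listens.
    public void onSensorChanged(double x, double y, double z) {
        if (eventSink != null) eventSink.accept(new double[] {x, y, z});
    }

    public static void main(String[] args) {
        SensorStream stream = new SensorStream();
        List<double[]> received = new ArrayList<>();
        stream.onListen(received::add);
        stream.onSensorChanged(0.1, 9.8, 0.0); // delivered
        stream.onCancel();
        stream.onSensorChanged(0.2, 9.7, 0.0); // dropped: no active subscriber
        System.out.println("events delivered: " + received.size()); // events delivered: 1
    }
}
```

<p>Once the subscription is cancelled, nothing more may be pushed into the sink. 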
The receiveBroadcastStream() method returns a stream that we can listen to.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Iosnir-eZsRwD7Tmaa1U9A.png" /></figure><p>Finally, replace the build method to display the streaming data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ir0G9FAxDgdXtAaewmXRCw.png" /></figure><h4>Step 2: Add an Android platform-specific implementation</h4><p>Start by opening the Android host portion of your Flutter app in Android Studio.</p><p>Inside MainActivity.kt, implement the EventChannel.StreamHandler interface. This interface requires two methods: onListen (called when the Flutter client subscribes) and onCancel (called when the subscription ends).</p><p>Register the stream handler in configureFlutterEngine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZbStaS1sSgma9VueiyWTg.png" /></figure><p>Implement the onListen and onCancel methods to register/unregister the Android sensor listener.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*04eJUBuhQqHuLPLDUiw56w.png" /></figure><p>Finally, implement the SensorEventListener to send sensor data to Flutter using the eventSink.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QglIXh1EVzWy1dwsFxPAiQ.png" /></figure><h4>Step 3: Add an iOS platform-specific implementation</h4><p>Start by opening the iOS host portion of your Flutter app in Xcode. 
Open AppDelegate.swift and implement the FlutterStreamHandler protocol.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7ohHjdrIURZAiF8hVUBElw.png" /></figure><p>Implement onListen to start accelerometer updates using CoreMotion and send data to the eventSink.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xdRqsCMgWx7bPGIA41EPyg.png" /></figure><p>Implement onCancel to stop updates and clean up.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*C07LL-AclymKi6HboaO4iA.png" /></figure><h2>Method Channel vs Event Channel</h2><p>Here’s a quick comparison to help you choose the right approach:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yezdHBkq-iFlM8F1LS6eTg.png" /></figure><h2>Limitations of Platform Channels</h2><p>While powerful, platform channels come with some constraints you should be aware of:</p><ul><li><strong>Main Thread Blocking</strong>: Platform channels run on the platform’s main thread (UI thread). Heavy processing on the native side can block the Flutter UI. Always offload intensive tasks to background threads.</li><li><strong>Serialization Overhead</strong>: All data passed between Flutter and native code must be serialized and deserialized. Transferring large chunks of data (like images or large files) directly through channels can be slow.</li><li><strong>Limited Data Types</strong>: Only a specific set of data types (Map, List, String, int, bool, etc.) are supported by the StandardMessageCodec. Complex custom objects need to be converted to Map/JSON.</li><li><strong>Transaction Size Limit</strong>: Arguments codified into the envelope are subject to buffer size limits (e.g., on Android, the Binder transaction buffer is ~1MB). 
Sending data larger than this can crash the app with a TransactionTooLargeException.</li></ul><h2>What’s Next?</h2><p>In <strong>Part 1</strong>, we’ve covered the fundamental communication mechanisms between Flutter and native platforms using Method Channels and Event Channels. You now understand how to make one-off API calls and stream continuous data from native code.</p><p><strong>Part 2</strong> will dive into:</p><ul><li>🎨 <strong>Platform Views</strong>: Embedding native UI components</li><li>⚡ <strong>Best Practices</strong>: Performance optimization and security</li><li>🧪 <strong>Testing</strong>: How to test platform channel code</li></ul><blockquote><a href="https://medium.com/@ahemad7429/eac514311bed">📖 <strong>Continue to Part 2: Platform Views and Advanced Patterns →</strong></a></blockquote><blockquote><strong><em>For more updates on the latest tools and technologies, follow the </em></strong><a href="https://medium.com/simform-engineering"><strong><em>Simform Engineering</em></strong></a><strong><em> blog.</em></strong></blockquote><blockquote><strong><em>Follow us: </em></strong><a href="https://twitter.com/simform"><strong><em>Twitter</em></strong></a><strong><em> | </em></strong><a href="https://www.linkedin.com/company/simform/"><strong><em>LinkedIn</em></strong></a></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=438d6fe634e4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/unlock-native-power-in-flutter-part-1-438d6fe634e4">Unlock Native Power in Flutter [Part 1]</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Centralized Authentication System for .NET Microservices with Azure Entra ID]]></title>
            <link>https://medium.com/simform-engineering/building-a-centralized-authentication-system-for-net-microservices-with-azure-entra-id-8a643057269a?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/8a643057269a</guid>
            <category><![CDATA[dotnet]]></category>
            <category><![CDATA[token]]></category>
            <category><![CDATA[azureentraid]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[authentication]]></category>
            <dc:creator><![CDATA[Anas Hamirka ]]></dc:creator>
            <pubDate>Fri, 26 Dec 2025 11:13:20 GMT</pubDate>
            <atom:updated>2025-12-26T11:13:18.778Z</atom:updated>
            <content:encoded><![CDATA[<h3>Building a Centralized Authentication System for .NET Microservices with Azure Entra ID</h3><h4>Securing Distributed Systems at Scale: Advanced Token Management, Service-to-Service Auth, and Compliance Strategies</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cV5xm8V4xK4eyfTK9kbrdg.png" /></figure><h3>Topic Overview — Introduction</h3><p>In a microservices architecture, authentication and authorization become significantly more complex than in monolithic applications. Each service could potentially implement its own authentication logic, leading to duplicated code, inconsistent security policies, and increased attack surface. As your microservices ecosystem grows from 3 to 30 services, this fragmentation becomes a critical security and maintenance liability.</p><p>Centralized authentication addresses these challenges by providing:</p><ul><li>Consistency: A single source of truth for authentication policies across all services</li><li>Reduced Duplication: No need to implement authentication logic in every microservice</li><li>Security at Scale: Centralized monitoring, key rotation, and policy enforcement</li><li>Simplified Compliance: Unified audit trails and access control for regulatory requirements</li></ul><p>While Single Sign-On (SSO) is one benefit, the real value lies in establishing a consistent security perimeter across distributed services. 
This blog explores advanced topics, including token lifecycle management, service-to-service authentication patterns, multi-tenancy considerations, security hardening techniques, and compliance monitoring strategies for production-grade systems.</p><p>Azure Entra ID (formerly Azure Active Directory) serves as our Identity Provider (IdP), offering enterprise-grade features including conditional access, threat detection, multi-factor authentication, and seamless integration with the .NET ecosystem.</p><h3>Core Concepts</h3><h3>Authentication vs Authorization</h3><p>Authentication answers “Who are you?” — verifying the identity of a user or service through credentials. Authorization answers “What can you do?” — determining permissions and access rights based on that verified identity.</p><p>In microservices, these concerns are separated:</p><ul><li>Authentication happens once at the entry point (API Gateway)</li><li>Authorization happens at each service based on claims and policies</li></ul><h3>Role of an Identity Provider (IdP)</h3><p>An Identity Provider like Azure Entra ID centralizes user management, credential storage, and token issuance. Instead of each microservice validating credentials, they trust tokens issued by the IdP. Popular options include Azure Entra ID, Duende IdentityServer, Auth0, and Keycloak.</p><h3>JWT vs Reference Tokens</h3><p>JWT (JSON Web Tokens) are self-contained tokens carrying claims in a signed payload. Advantages: stateless validation, no database lookup per request. Disadvantages: cannot be revoked before expiration, larger payload size.</p><p>Reference Tokens are opaque identifiers requiring validation against the authorization server. Advantages: instant revocation, smaller size. Disadvantages: adds latency and database load for each validation.</p><p>Best Practice: Use short-lived JWTs (5–15 minutes) for access tokens, with refresh tokens for extended sessions. 
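</p><p>The revocation trade-off is easiest to see in code. Below is a deliberately minimal, in-memory Java sketch of the reference-token side (a real system would call the IdP’s introspection endpoint rather than a local map):</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the reference-token model: the token is an opaque key, so every
// check is a store lookup, and deleting the entry revokes it instantly.
public class ReferenceTokenStore {
    private final Map<String, String> active = new HashMap<>(); // token -> subject

    public String issue(String subject) {
        String token = UUID.randomUUID().toString(); // carries no claims itself
        active.put(token, subject);
        return token;
    }

    // Introspection: the authorization server must be asked on every request.
    public String introspect(String token) { return active.get(token); }

    public void revoke(String token) { active.remove(token); } // effective immediately

    public static void main(String[] args) {
        ReferenceTokenStore store = new ReferenceTokenStore();
        String token = store.issue("user-42");
        System.out.println(store.introspect(token)); // user-42
        store.revoke(token);
        System.out.println(store.introspect(token)); // null: revoked at once
    }
}
```

<p>A JWT, by contrast, passes validation anywhere its signature and expiry check out, with no central entry to delete. 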
For highly sensitive operations, use reference tokens or hybrid approaches.</p><h3>Features</h3><h3>Core Capabilities</h3><ul><li>Centralized identity management with unified user lifecycle and MFA enforcement</li><li>JWT-based stateless authentication with refresh token rotation</li><li>OAuth 2.0 Client Credentials flow for service-to-service communication</li></ul><h3>Advanced Security</h3><ul><li>Multi-tenancy support with tenant isolation via claims</li><li>Role-based and policy-based authorization with fine-grained permissions</li><li>Automatic key rotation, token binding, and IP-based refresh token validation</li></ul><h3>Operations &amp; Compliance</h3><ul><li>Centralized audit logs and security event monitoring</li><li>Compliance reporting for GDPR, HIPAA, and PCI-DSS requirements</li></ul><h3>Advanced Token Management</h3><h3>Short-Lived Access Tokens &amp; Refresh Tokens</h3><p>In microservices, balancing security with user experience is critical. Access tokens should be short-lived (5–15 minutes) to limit exposure if compromised. 
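</p><p>The lifetime check itself is just expiry arithmetic; a minimal Java sketch (no signatures or real JWT parsing; the subject and TTL are illustrative):</p>

```java
import java.time.Duration;
import java.time.Instant;

// Minimal sketch of short-lived access-token lifetimes: only the exp-claim
// arithmetic every validator performs, nothing cryptographic.
public class TokenLifetime {
    record AccessToken(String subject, Instant expiresAt) {}

    static final Duration ACCESS_TTL = Duration.ofMinutes(15);

    static AccessToken issue(String subject, Instant now) {
        return new AccessToken(subject, now.plus(ACCESS_TTL));
    }

    // Stateless check: no IdP round-trip, just the exp claim vs. the clock.
    static boolean isValid(AccessToken token, Instant now) {
        return now.isBefore(token.expiresAt());
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2025-01-01T00:00:00Z");
        AccessToken token = issue("user-42", t0);
        System.out.println(isValid(token, t0.plus(Duration.ofMinutes(10)))); // true
        System.out.println(isValid(token, t0.plus(Duration.ofMinutes(20)))); // false
    }
}
```

<p>When the check fails, the client must either re-authenticate or redeem a refresh token. 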
Refresh tokens enable obtaining new access tokens without re-authentication.</p><p>Architecture Pattern:</p><ol><li>User authenticates → receives access token (15 min) + refresh token (7 days)</li><li>Client uses an access token for API calls</li><li>When the access token expires, the client uses the refresh token to get a new access token</li><li>Refresh tokens are rotated on each use (one-time use pattern)</li></ol><p>Implementation Considerations:</p><ul><li>Store refresh tokens securely (encrypted, httpOnly cookies for web)</li><li>Implement refresh token rotation to detect token theft</li><li>Track refresh token families to invalidate chains on suspicious activity</li></ul><h3>Token Exchange Patterns</h3><p>When a user calls Service A, which needs to call Service B on behalf of the user, we need a token exchange:</p><p>On-Behalf-Of (OBO) Flow:</p><pre>User → API Gateway (user token) → Service A (user token) → <br>Service A exchanges token → Service B (delegated token with Service B audience)</pre><p>This ensures each service validates tokens meant for its audience, downstream services act on behalf of the original user, and maintain a full audit trail of user actions across services.</p><p>Implementation: Use Azure Entra ID’s OBO flow or token exchange specification (RFC 8693).</p><h3>Token Revocation Challenges</h3><p>JWTs are stateless — services validate them locally without contacting the IdP. 
This creates a revocation problem: a user logs out, but their JWT remains valid until expiration.</p><p>Solutions:</p><ol><li>Short Expiration: Minimize exposure window (5–15 minutes)</li><li>Token Introspection: Services call IdP to validate token status (adds latency)</li><li>Distributed Cache: Maintain a blacklist of revoked tokens in Redis with TTL</li><li>Reference Tokens: Use opaque tokens requiring IdP lookup for every request</li><li>Hybrid Approach: JWT for most operations, introspection for sensitive actions</li></ol><p>Recommended Pattern: Short-lived JWTs + refresh token rotation + blacklist cache for critical revocations (account compromise, privilege changes).</p><h3>Service-to-Service Authentication</h3><h3>Why Client Credentials Flow Matters</h3><p>Not all API calls originate from users. Background jobs, scheduled tasks, and inter-service communication require machine-to-machine authentication. OAuth 2.0 Client Credentials flow addresses this:</p><ul><li>Service authenticates using its own credentials (client ID + secret or certificate)</li><li>Receives access token with service-specific scopes</li><li>No user context involved</li></ul><p>Azure Entra ID Implementation:</p><ul><li>Register each service as an App Registration</li><li>Use client secrets (development) or certificate-based auth (production)</li><li>Azure Managed Identity for services running in Azure (no credential management)</li></ul><h3>Securing gRPC and REST Microservices</h3><p>Both REST and gRPC services can be secured with JWT Bearer authentication:</p><ul><li>REST API: Standard Authorization: Bearer &lt;token&gt; header</li><li>gRPC: Token propagation via metadata headers</li></ul><p>Key Configuration: Validate issuer, audience, and signing keys; configure HTTPS/TLS for all communication; implement retry logic with token refresh.</p><h3>API Gateway Pattern</h3><p>The API Gateway acts as the authentication enforcement point:</p><ol><li>Receives incoming requests with user 
credentials or tokens</li><li>Validates authentication against Azure Entra ID</li><li>Enforces authorization policies (rate limiting, IP restrictions)</li><li>Propagates tokens downstream to microservices</li><li>Handles token refresh transparently</li></ol><p>Benefits: Microservices don’t handle authentication, only authorization; centralized security policy enforcement; token transformation (user token → service-specific tokens).</p><h3>Example: Service A → Service B with Delegated Token</h3><p>Scenario: Order Service needs to verify product availability from Product Service</p><pre>1. User calls Order Service with user access token<br>2. Order Service validates token (user is authenticated)<br>3. Order Service exchanges token for Product Service-specific token (OBO flow)<br>4. Order Service calls Product Service with new token<br>5. Product Service validates token and processes request</pre><p>This maintains user context throughout the call chain while ensuring each service only accepts tokens meant for its audience.</p><h3>Multi-Tenancy &amp; Claims Management</h3><h3>Handling Multiple Tenants</h3><p>In SaaS applications, a single deployment serves multiple customers (tenants). 
Centralized auth must enforce tenant isolation:</p><p>Strategies:</p><ol><li>Separate Audiences: Each tenant has unique audience claim in token</li><li>Tenant Claim: Include tenant_id claim in all tokens</li><li>Claims Transformation Middleware: Inject tenant context early in request pipeline</li></ol><p>Implementation Pattern:</p><pre>// Extract tenant from token claims<br>var tenantId = user.FindFirst(&quot;tenant_id&quot;)?.Value;</pre><pre>// Filter data by tenant<br>var products = await _context.Products<br>    .Where(p =&gt; p.TenantId == tenantId)<br>    .ToListAsync();</pre><h3>Role-Based vs Policy-Based Authorization</h3><p>Role-Based Access Control (RBAC):</p><ul><li>Simple: Users have roles (Admin, User, Manager)</li><li>Roles map to permissions</li><li>Good for straightforward hierarchies</li></ul><pre>[Authorize(Roles = &quot;Admin,Manager&quot;)]<br>public IActionResult DeleteProduct(int id) { }</pre><p>Policy-Based Authorization:</p><ul><li>Flexible: Policies evaluate multiple requirements</li><li>Combine roles, claims, custom logic</li><li>Better for complex scenarios</li></ul><pre>[Authorize(Policy = &quot;CanManageProducts&quot;)]<br>public IActionResult UpdateProduct(Product product) { }</pre><pre>// Policy definition<br>services.AddAuthorization(options =&gt;<br>{<br>    options.AddPolicy(&quot;CanManageProducts&quot;, policy =&gt;<br>        policy.RequireAssertion(context =&gt;<br>            context.User.HasClaim(&quot;role&quot;, &quot;Admin&quot;) ||<br>            (context.User.HasClaim(&quot;role&quot;, &quot;Manager&quot;) &amp;&amp; <br>             context.User.HasClaim(&quot;department&quot;, &quot;Sales&quot;))));<br>});</pre><h3>Scope-Based Service Management</h3><p>OAuth 2.0 scopes define what operations a token permits. 
In microservices:</p><ul><li>User tokens: products.read, products.write, orders.read, orders.write</li><li>Service tokens: products-service.all, orders-service.all</li></ul><p>Best Practice: Assign minimum required scopes. A read-only client should never receive write scopes.</p><pre>[Authorize(Policy = &quot;RequireProductWriteScope&quot;)]<br>public IActionResult CreateProduct([FromBody] Product product)<br>{<br>    // Only tokens with &quot;products.write&quot; scope can access<br>}</pre><pre>// Policy checks scope<br>options.AddPolicy(&quot;RequireProductWriteScope&quot;, policy =&gt;<br>    policy.RequireClaim(&quot;scope&quot;, &quot;products.write&quot;));</pre><h3>Security Hardening</h3><h3>Rotating Signing Keys</h3><p>JWT tokens are signed with cryptographic keys. Compromised keys allow attackers to forge valid tokens. Automatic key rotation mitigates this risk:</p><p>Azure Entra ID: Automatically rotates signing keys every 90 days, publishes multiple keys simultaneously in JWKS endpoint, and services validate against any current key.</p><p>Implementation:</p><pre>services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)<br>    .AddJwtBearer(options =&gt;<br>    {<br>        options.Authority = &quot;https://login.microsoftonline.com/{tenant-id}&quot;;<br>        options.TokenValidationParameters = new TokenValidationParameters<br>        {<br>            ValidateIssuer = true,<br>            ValidateAudience = true,<br>            ValidateLifetime = true,<br>            ValidateIssuerSigningKey = true,<br>        };<br>        options.RefreshOnIssuerKeyNotFound = true; // Auto-refresh on key rollover<br>    });</pre><h3>Securing Refresh Tokens</h3><p>Refresh tokens are long-lived (days to months), making them high-value targets:</p><p>IP/Device Binding:</p><pre>// Store IP and device fingerprint with refresh token<br>var refreshToken = new RefreshToken<br>{<br>    Token = GenerateSecureToken(),<br>    UserId = userId,<br>    IpAddress = 
HttpContext.Connection.RemoteIpAddress.ToString(),<br>    DeviceFingerprint = ComputeDeviceFingerprint(userAgent, acceptLanguage),<br>    ExpiresAt = DateTime.UtcNow.AddDays(7)<br>};</pre><pre>// Validate on refresh<br>if (storedToken.IpAddress != currentIp || <br>    storedToken.DeviceFingerprint != currentFingerprint)<br>{<br>    await RevokeTokenFamily(storedToken.FamilyId);<br>    await AlertSecurityTeam(userId, &quot;Refresh token used from different device&quot;);<br>    throw new SecurityException(&quot;Token validation failed&quot;);<br>}</pre><p>Rotation Strategies: One-time use (invalidate after use), token families (track chains, revoke on anomaly), sliding expiration (extend on use, up to maximum lifetime).</p><h3>Protecting Against Replay Attacks</h3><p>In distributed systems, an attacker capturing a valid token could replay it:</p><p>Mitigation Strategies:</p><ol><li>Short token lifetime: Minimize replay window</li><li>HTTPS everywhere: Prevent token capture</li><li>Token binding: Cryptographically bind token to TLS connection (RFC 8473)</li><li>Nonce/JTI claims: Track used token IDs in distributed cache</li><li>Time-based validation: Reject tokens outside time window (nbf, exp claims)</li></ol><p>Implementation Example:</p><pre>public async Task&lt;bool&gt; ValidateTokenReplayAsync(string jti, DateTime exp)<br>{<br>    var cacheKey = $&quot;used_token:{jti}&quot;;<br>    if (await _cache.ExistsAsync(cacheKey))<br>        return false;<br>    <br>    var ttl = exp - DateTime.UtcNow;<br>    await _cache.SetAsync(cacheKey, &quot;1&quot;, ttl);<br>    return true;<br>}</pre><h3>Advantages</h3><h3>Security &amp; Operations</h3><ul><li>Single point of control reduces attack surface and enables rapid token revocation</li><li>Consistent security policies across all services with centralized audit logs</li><li>Stateless JWT validation enables horizontal scaling without database dependencies</li></ul><h3>Development &amp; Business</h3><ul><li>Reduced code 
duplication and faster development cycles</li><li>Lower TCO through simplified compliance and maintenance</li><li>Better user experience with SSO and seamless cross-service navigation</li></ul><h3>Use Cases</h3><h3>Enterprise SaaS &amp; Regulated Industries</h3><p>Multi-tenant SaaS platforms (e-commerce, CRM, project management) benefit from tenant isolation and SSO integration. Financial services and healthcare systems leverage centralized auth for unified audit trails, regulatory compliance (PCI-DSS, HIPAA), and comprehensive access logging across microservices.</p><h3>Monitoring &amp; Compliance</h3><h3>Logging Failed Authentication Attempts</h3><p>Centralized authentication creates a single point for security monitoring:</p><p>Key Metrics to Track:</p><ul><li>Failed login attempts by user/IP</li><li>Unusual login locations or times</li><li>Token validation failures</li><li>Refresh token reuse attempts</li><li>Service-to-service authentication failures</li></ul><p>Implementation with Application Insights:</p><pre>JwtBearerEvents.OnAuthenticationFailed = context =&gt;<br>{<br>    _telemetry.TrackEvent(&quot;AuthenticationFailed&quot;, new Dictionary&lt;string, string&gt;<br>    {<br>        { &quot;Reason&quot;, context.Exception.Message },<br>        { &quot;IP&quot;, context.HttpContext.Connection.RemoteIpAddress.ToString() },<br>        { &quot;Endpoint&quot;, context.Request.Path },<br>        { &quot;Service&quot;, Environment.GetEnvironmentVariable(&quot;SERVICE_NAME&quot;) }<br>    });<br>    return Task.CompletedTask;<br>};</pre><h3>Security Event Monitoring</h3><p>Anomaly Detection Patterns:</p><ol><li>Velocity Checks: &gt;10 failed logins in 5 minutes</li><li>Impossible Travel: Login from New York, then Tokyo 1 hour later</li><li>Token Pattern Abuse: Same refresh token used from multiple IPs</li><li>Privilege Escalation: User role changes followed by sensitive operations</li></ol><h3>Compliance (GDPR, HIPAA, PCI)</h3><p>Centralized authentication 
simplifies compliance:</p><p>GDPR Requirements: Right to Access (single query returns all auth events), Right to Erasure (revoke all tokens in one place), Data Minimization (tokens contain only necessary claims), Audit Trail (comprehensive logs).</p><p>HIPAA Requirements: Access Controls (centralized enforcement), Audit Logs (detailed PHI access logging), Automatic Logoff (centralized session timeout), Person Authentication (strong auth mechanisms).</p><p>PCI-DSS Requirements: Unique ID for each person (IdP enforced), Track and monitor all access (centralized logs), Password policies (centralized enforcement).</p><h3>Auditing Access Across Services</h3><p>Distributed Tracing with Correlation IDs:</p><pre>public class CorrelationIdMiddleware<br>{<br>    private readonly RequestDelegate _next;<br>    private readonly ILogger&lt;CorrelationIdMiddleware&gt; _logger;<br><br>    public CorrelationIdMiddleware(RequestDelegate next, ILogger&lt;CorrelationIdMiddleware&gt; logger)<br>    {<br>        _next = next;<br>        _logger = logger;<br>    }<br><br>    public async Task InvokeAsync(HttpContext context)<br>    {<br>        var correlationId = context.Request.Headers[&quot;X-Correlation-ID&quot;].FirstOrDefault()<br>            ?? Guid.NewGuid().ToString();<br>        <br>        context.Items[&quot;CorrelationId&quot;] = correlationId;<br>        context.Response.Headers.Add(&quot;X-Correlation-ID&quot;, correlationId);<br>        <br>        _logger.LogInformation(&quot;User {UserId} accessed {Endpoint} [CorrelationId: {CorrelationId}]&quot;,<br>            context.User.Identity?.Name, context.Request.Path, correlationId);<br>        <br>        await _next(context);<br>    }<br>}</pre><p>This enables tracing a user’s complete journey through the system for security investigations and compliance audits.</p><h3>Conclusion</h3><p>Building a centralized authentication system for .NET microservices requires a comprehensive security architecture covering token lifecycle management, service-to-service communication, multi-tenancy, and regulatory compliance.</p><p>Key Takeaways:</p><ol><li>Strong Foundations: Azure Entra ID provides enterprise features, but understanding JWT vs reference tokens and token exchange patterns is crucial for production.</li><li>Service-to-Service 
Auth: Client credentials flow and managed identities enable secure microservice communication without user context.</li><li>Layered Security: Short-lived tokens, refresh rotation, key rotation, and distributed caching create defense in depth.</li><li>Compliance by Design: Centralized authentication simplifies GDPR, HIPAA, and PCI-DSS implementation with built-in audit trails and access controls.</li></ol><p>Investing in robust centralized authentication is fundamental to building secure, scalable, and compliant distributed systems. The patterns outlined here provide a roadmap that scales from startups to enterprise deployments.</p><h3>References &amp; Further Reading</h3><ul><li><a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/">Microsoft Identity Platform Documentation</a></li><li><a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow">OAuth 2.0 On-Behalf-Of Flow</a></li><li><a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/secure-net-microservices-web-applications/">.NET Microservices Security</a></li><li><a href="https://tools.ietf.org/html/rfc8725">JWT Best Practices</a></li><li><a href="https://openid.net/connect/">OpenID Connect Specification</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8a643057269a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/building-a-centralized-authentication-system-for-net-microservices-with-azure-entra-id-8a643057269a">Building a Centralized Authentication System for .NET Microservices with Azure Entra ID</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[7 Vue 3 Performance Pitfalls That Quietly Derail Your App]]></title>
            <link>https://medium.com/simform-engineering/7-vue-3-performance-pitfalls-that-quietly-derail-your-app-33c7180d68d4?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/33c7180d68d4</guid>
            <category><![CDATA[vuejs]]></category>
            <category><![CDATA[optimization]]></category>
            <category><![CDATA[performance]]></category>
            <category><![CDATA[vue-3]]></category>
            <category><![CDATA[vue]]></category>
            <dc:creator><![CDATA[Zainab Saify]]></dc:creator>
            <pubDate>Wed, 24 Dec 2025 05:19:28 GMT</pubDate>
            <atom:updated>2025-12-24T05:19:27.636Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>A Senior Developer’s Field Guide, With Real Examples From Production</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cePxiwDHt9sWUt08X1uBpQ.png" /></figure><p>Vue 3 is fast. Exceptionally fast. But modern applications — complex dashboards, deeply nested components, large datasets, and aggressive API-driven updates — can push even Vue’s reactivity system into uncomfortable territory.</p><p>Over the last few years, I’ve seen otherwise clean applications develop slow renders, input lag, frame drops, and mysterious “Vue is slow” complaints that had nothing to do with Vue itself.</p><p>This guide breaks down the real performance issues senior developers run into, <em>why</em> they occur, and <em>how</em> to fix them. To ground this in reality, we’ll use a sample application: a property comparison dashboard displaying hundreds of items with filters, calculations, and nested data.</p><h3>1. Over-Reactive Components</h3><p><em>When everything reacts to everything</em></p><p>Vue’s reactivity system is incredibly capable, but it only tracks what you tell it to. 
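<p>To make "it only tracks what you tell it to" concrete, here is a framework-free sketch in plain JavaScript. The pickSlice and sliceChanged helpers are hypothetical, not Vue APIs; they illustrate the idea of deriving a small, stable slice of a large state object so consumers compare only the fields they actually use.</p>

```javascript
// Hypothetical helper: extract only the fields a component needs from a
// large state object, so change detection compares a small, stable slice
// instead of the whole object graph.
function pickSlice(source, keys) {
  const slice = {};
  for (const key of keys) slice[key] = source[key];
  return slice;
}

// Shallow-compare two slices; if nothing the consumer depends on changed,
// the previous slice (and any work derived from it) can be reused.
function sliceChanged(prev, next) {
  return Object.keys(next).some((key) => prev[key] !== next[key]);
}

const bigState = { price: 100, currency: 'USD', metadata: { huge: '...' } };
const before = pickSlice(bigState, ['price', 'currency']);

bigState.metadata = { huge: 'updated' }; // unrelated change elsewhere in state
const after = pickSlice(bigState, ['price', 'currency']);

console.log(sliceChanged(before, after)); // false: no re-render needed
```

<p>The same principle is what computed refs give you in Vue: a consumer that depends on a two-field slice is untouched by changes to the rest of the object.</p>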
If you bind large reactive objects directly to your components — especially full API responses or entire Pinia store states — Vue will re-render more aggressively than intended.</p><h3>Why this hurts</h3><ul><li>Large reactive dependencies trigger updates even for small property changes.</li><li>Expensive component trees re-render unnecessarily.</li><li>Deeply nested components inherit all reactivity from parent objects, even when they don’t need it.</li></ul><h3>Real impact in our dashboard</h3><p>A single filter change causes all 200+ cards to update because every card depends on the same large reactive state object.<br> This can create 300–500ms frame delays on mid-range devices.</p><h3>Fix</h3><ul><li>Destructure stores with storeToRefs() (and reactive objects with toRefs()) to isolate the smallest reactive units.</li><li>Use computed wrappers around small slices of state.</li><li>Mark static data with markRaw() or freeze it.</li><li>Apply v-once for sections that never change.</li></ul><h3>2. Computed Properties That Recompute Too Often</h3><p><em>The silent recalculation problem</em></p><p>Computed properties are cached, but only until one of their dependencies changes. If the dependency happens to be a large reactive source, then even minor changes invalidate the cache.</p><h3>Why this hurts</h3><p>A single reactive update — unrelated to the computed property’s purpose — can force expensive operations to run repeatedly.</p><h3>Real impact in our dashboard</h3><p>“Y-Total” calculations depend on a large list of property details. Typing a character into a search box invalidates the computed and recalculates heavy logic across 200 items. Multiply that across multiple filters, and your UI starts lagging.</p><h3>Fix</h3><ul><li>Narrow dependency tracking: expose only required fields through computed refs.</li><li>Break monolithic computed chains into smaller, independent segments.</li><li>Memoize stable derived values.</li><li>Pre-calculate static parts at API ingestion time.</li></ul><h3>3. 
v-for + Reactive Props = Unnecessary Re-Renders</h3><p><em>Where performance dies one row at a time</em></p><p>v-for is efficient, but it still re-renders children when parent props change.</p><h3>Why this hurts</h3><ul><li>Child components receive reactive props.</li><li>Parent updates — even unrelated ones — cascade through every child.</li><li>Each child may run its own computed properties, watchers, or lifecycle logic.</li></ul><h3>Real impact in our dashboard</h3><p>Scrolling becomes jittery, and typing in filters has noticeable lag. Chrome DevTools shows frame rates dipping below 40 FPS because Vue is forced to update the entire list on every minor change.</p><h3>Fix</h3><ul><li>Always use stable keys.</li><li>Mark static sections non-reactive.</li><li>Use v-memo (Vue 3.2+) to skip unnecessary updates.</li><li>Flatten overly nested structures where possible.</li></ul><h3>4. Watchers That Shouldn’t Exist</h3><p><em>Accidental complexity through observation</em></p><p>Watchers often start as convenience utilities but quietly accumulate into reactive chains that trigger excessive updates or duplicate calls.</p><h3>Why this hurts</h3><ul><li>Watchers re-run on every dependency change.</li><li>One watcher triggers another.</li><li>Developers often attach watchers to entire objects instead of specific fields.</li></ul><figure><img alt="The domino effect" src="https://cdn-images-1.medium.com/max/200/0*t_pDGgDerA_d-M6H.gif" /></figure><h3>Real impact in our dashboard</h3><p>One filter change fires:<br> → multiple watchers<br> → computed recalculations<br> → downstream watchers<br> → API calls<br> → even more watchers</p><p>What should be a single update becomes a storm of cascading reactivity.</p><h3>Fix</h3><ul><li>Replace watchers with computed properties when possible.</li><li>Avoid watchEffect unless necessary—prefer explicit watchers.</li><li>Add flush: &#39;post&#39; to avoid chained reruns.</li><li>Debounce watchers that trigger I/O.</li></ul><h3>5. 
Deeply Nested Props: The Cascade of Doom</h3><p><em>When one prop update ripples across the entire component tree</em></p><p>Passing large reactive objects four or five levels deep spreads reactivity everywhere.</p><h3>Why this hurts</h3><ul><li>A single parent update invalidates the entire tree.</li><li>Child components recompute derived states that don’t need refreshing.</li><li>Memoization becomes ineffective because dependencies appear unstable.</li></ul><h3>Real impact in our dashboard</h3><p>ComparisonContainer → ComparisonList → ItemCard → MetricRow<br> One parent update forces every metric to recompute—even if only one item changed.</p><h3>Fix</h3><ul><li>Use provide/inject for stable data.</li><li>Pass primitives or readonly slices instead of entire objects.</li><li>Move shared state into composables or Pinia modules.</li><li>Avoid deeply nested props when not necessary.</li></ul><h3>6. Overly Reactive Stores</h3><p><em>When a Pinia store becomes a junk drawer</em></p><p>Pinia encourages global reactivity, which is powerful but dangerous if used without boundaries.</p><h3>Why this hurts</h3><p>If you store everything in a single reactive store — user info, filters, lists, pagination, UI flags — then any update will ripple across the entire application.</p><h3>Real impact in our dashboard</h3><p>Pagination changes cause:<br> → list recomputation<br> → card re-renders<br> → header updates<br> → UI flashes<br> Even though pagination isn’t related to any of those components.</p><h3>Fix</h3><ul><li>Split stores by domain and responsibility.</li><li>Store static data using markRaw() or non-reactive structures.</li><li>Avoid unnecessary $subscribe calls.</li><li>Use getters for expensive derived data — they are cached by default.</li></ul><h3>7. Template-Level Anti-Patterns</h3><p><em>The quietest but most common source of performance issues</em></p><p>Templates re-evaluate expressions on every render. 
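<p>The alternative to inline template logic can be sketched without Vue at all. Below, a hypothetical memoizeByRef helper (plain JavaScript, not a Vue API) caches a heavy derivation by input identity, the same effect you get by moving the expression into a computed property.</p>

```javascript
// Hypothetical helper: cache a derived value keyed by input identity, so a
// heavy computation runs once per distinct input instead of on every render.
function memoizeByRef(compute) {
  let lastInput;
  let lastResult;
  let calls = 0;
  const memoized = (input) => {
    if (input !== lastInput) {
      lastInput = input;
      lastResult = compute(input);
      calls += 1;
    }
    return lastResult;
  };
  memoized.callCount = () => calls;
  return memoized;
}

const expensiveFilter = memoizeByRef((items) =>
  items.filter((item) => item.active).map((item) => item.name)
);

const items = [{ name: 'A', active: true }, { name: 'B', active: false }];

// Simulate three renders with the same array reference: the filter runs once.
expensiveFilter(items);
expensiveFilter(items);
console.log(expensiveFilter(items));      // [ 'A' ]
console.log(expensiveFilter.callCount()); // 1
```

<p>An inline template expression, by contrast, would rerun the filter on all three renders, including ones triggered by unrelated UI state.</p>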
Recalculating heavy lists inside the template is one of the quickest ways to hurt performance.</p><h3>Why this hurts</h3><ul><li>Any render event recalculates inline functions, filters, mappings, or reductions.</li><li>Inline objects cause Vue to think props have changed.</li><li>Complex logic in templates makes re-renders extremely expensive.</li></ul><h3>Real impact in our dashboard</h3><p>A card template that runs multiple array filters or mappings recalculates all of them, even on unrelated UI changes, such as a tooltip appearing.</p><h3>Fix</h3><ul><li>Move every piece of logic out of the template and into computed properties.</li><li>Avoid inline object or function instantiations.</li><li>Preprocess data in the parent component if needed.</li></ul><h3>Why This Matters</h3><p>The performance issues above rarely appear in isolation. In real dashboards and complex apps, they stack. One inefficient watcher plus a heavy computed property plus deeply nested reactive props can lead to cascading performance problems that feel unsolvable.</p><p>Understanding Vue’s reactivity at a structural level and not just at the API level is what separates a smooth application from a sluggish one.</p><blockquote><strong>For more updates on the latest tools and technologies, follow the </strong><a href="https://medium.com/simform-engineering"><strong>Simform Engineering</strong></a><strong> blog.</strong></blockquote><blockquote><strong>Follow us: </strong><a href="https://twitter.com/simform"><strong>Twitter</strong></a><strong> | </strong><a href="https://www.linkedin.com/company/simform/"><strong>LinkedIn</strong></a></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=33c7180d68d4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/7-vue-3-performance-pitfalls-that-quietly-derail-your-app-33c7180d68d4">7 Vue 3 Performance Pitfalls That Quietly Derail Your App</a> was originally published in <a 
href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Firebase App Check with Play Integrity API in Android]]></title>
            <link>https://medium.com/simform-engineering/firebase-app-check-with-play-integrity-api-in-android-d35cf3e7eb9b?source=rss----ce67e0b67c0d---4</link>
            <guid isPermaLink="false">https://medium.com/p/d35cf3e7eb9b</guid>
            <category><![CDATA[app-integrity]]></category>
            <category><![CDATA[firebase-app-check]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[play-integrity-api]]></category>
            <category><![CDATA[firebase]]></category>
            <dc:creator><![CDATA[Payal Rajput]]></dc:creator>
            <pubDate>Tue, 23 Dec 2025 07:16:45 GMT</pubDate>
            <atom:updated>2025-12-23T07:16:44.115Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cO9D09JmLOZzu3aApxwxbw.png" /><figcaption>Firebase App Check Banner</figcaption></figure><p>When we build an Android app using Firebase, we trust that only our actual app will communicate with Firebase services, such as Firestore, Realtime Database, or Cloud Storage. But in reality, anyone can extract your Firebase keys and try to access your backend using fake or modified apps.</p><p>This is where Firebase App Check comes in.</p><p><strong>In this blog, we’ll understand:</strong></p><ul><li>What is Firebase App Check?</li><li>How App Check Works?</li><li>How to integrate App Check in an Android app?</li><li>App Check Token Expiry</li><li>Common Issues That Can Break the Firebase App Check</li><li>Using App Check During Development and Debugging</li><li>App Check request metrics</li><li>How it works with Firebase services: Demo use case</li></ul><p><strong>What Is Firebase App Check?</strong></p><p>Firebase App Check is a security layer that ensures only your real, untampered app can access Firebase services.</p><p>Without App Check:</p><ul><li>Fake or cloned apps can directly access your Firestore</li><li>Automated scripts can abuse your APIs</li><li>Your Firebase quota can be drained</li><li>Your billing costs can unexpectedly increase</li></ul><p>With App Check:</p><ul><li>Firebase verifies your app before responding to requests</li><li>Only trusted and valid apps are allowed access</li><li>Modified, fake, or unauthorized apps are blocked automatically</li><li>Your data, quota, and billing stay protected</li></ul><blockquote><em>Think of App Check as an </em><strong><em>identity card</em></strong><em> your app must show before talking to Firebase.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*WASe-za9ZI7SY1Hlf8g-vg.png" /><figcaption>Firebase App Check</figcaption></figure><p><strong>How App Check 
Works:</strong></p><p>Firebase App Check doesn’t just blindly trust your app. Instead, it uses a trusted system called an <strong>attestation provider</strong> to verify that your app is genuine and hasn’t been tampered with. This verification process addresses three key questions: Is the app genuine, has it been modified, and was it installed from a trusted source? By checking these things, Firebase ensures that only authentic apps can communicate with its services.</p><p>On Android, the attestation provider is the <strong>Play Integrity API</strong>. This system helps confirm that user actions and server requests are coming from your real app, installed through Google Play, and running on an authentic, certified Android device. In other words, it makes sure the app interacting with Firebase is exactly the app you built, and nothing else.</p><p>How App Check Verifies Your App</p><ol><li>Your app asks the Play Integrity API for an attestation token</li><li>Play Integrity checks the app’s integrity</li><li>A token is returned to Firebase</li><li>Firebase verifies the token</li><li>If valid → access granted<br>If invalid → request blocked</li></ol><p>The best part is that all of this happens seamlessly in the background. Users won’t notice anything, but Firebase quietly ensures that every request comes from a secure, real, and untampered version of your app. 
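<p>The accept/reject decision in the steps above can be sketched as follows. This is an illustration only, not Firebase internals: checkRequest and the toy verifier are hypothetical names, standing in for the server-side check that a request carries a token that verifies and has not expired.</p>

```javascript
// Illustrative sketch (not the real Firebase backend): serve a request only
// when it carries a token that verifies and has not expired.
function checkRequest(request, verifyToken, now = Date.now) {
  const token = request.appCheckToken;
  if (!token) return { allowed: false, reason: 'missing token' };
  if (!verifyToken(token)) return { allowed: false, reason: 'invalid token' };
  if (token.expiresAt <= now()) return { allowed: false, reason: 'expired token' };
  return { allowed: true };
}

// Toy verifier: accept only tokens from the expected attestation provider.
const verify = (token) => token.issuer === 'play-integrity';

const good = { appCheckToken: { issuer: 'play-integrity', expiresAt: Date.now() + 60000 } };
const fake = { appCheckToken: { issuer: 'tampered-apk', expiresAt: Date.now() + 60000 } };

console.log(checkRequest(good, verify).allowed); // true
console.log(checkRequest(fake, verify).allowed); // false
```
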
This adds an invisible layer of protection, keeping your data, APIs, and backend safe from misuse.</p><p><strong>How to integrate App Check in an Android app?</strong></p><p><strong>Prerequisites:</strong> Ensure the following requirements are met before you begin:</p><ul><li>Firebase project is created</li><li>Android app has been added to Firebase</li><li>google-services.json is properly configured</li><li>Firebase SDK is already added</li></ul><p><strong>Step 1: Add App Check Dependency</strong></p><p>In your <strong>app-level</strong> build.gradle file:</p><pre>dependencies {<br>    implementation &quot;com.google.firebase:firebase-appcheck-playintegrity:LATEST_VERSION_HERE&quot;<br>}</pre><p><strong>Step 2: Enable App Check in Your App</strong></p><p>In your MainActivity or Application class:</p><pre>val firebaseAppCheck = FirebaseAppCheck.getInstance()<br>firebaseAppCheck.installAppCheckProviderFactory(<br>    PlayIntegrityAppCheckProviderFactory.getInstance()<br>)</pre><p>That’s it! 
Now App Check is active in your Android app.</p><p><strong>Step 3: Enable App Check in Firebase Console</strong></p><ol><li>Open <strong>Firebase Console</strong></li><li>Go to <strong>Project Settings</strong></li><li>Open <strong>App Check</strong></li><li>Select your Android app</li><li>Enable App Check</li><li>Turn on enforcement for:</li></ol><ul><li>Firestore</li><li>Realtime Database</li><li>Cloud Storage</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4xVtT-d4eS_ciqdN93J7lg.png" /><figcaption>Enforce App Check</figcaption></figure><p>Once App Check is enabled:</p><ul><li>Firebase services will expect a <strong>valid App Check token</strong></li><li>Requests without valid tokens will be rejected</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MaDjlxde9aowhivHmL2D1Q.png" /></figure><p><strong>App Check Token Expiry:</strong></p><p>Once Firebase App Check has verified that your app is genuine, it provides your app with a temporary token. This token acts like a short-lived pass, proving to Firebase that your app is trusted while it communicates with the backend. Because these tokens are temporary, they eventually expire, but the good news is that the Firebase SDK takes care of refreshing them automatically. Your app can continue to interact with Firebase services without interruption, and most of the time, you don’t even have to think about it.</p><p>In some cases, such as debugging or making custom backend calls, you might want to manually fetch a new token. This can be done with a simple call like firebaseAppCheck.getToken(true). The forceRefresh parameter determines whether Firebase should return a cached token if one exists or fetch a brand-new token from the server. Setting it to true forces a fresh token, while false allows Firebase to return a valid cached token if available. 
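<p>The caching behavior described above can be sketched in plain JavaScript. This is not the Firebase SDK; createTokenCache and the fake fetcher are hypothetical, showing only the forceRefresh semantics: reuse a cached token while it is valid, unless a fresh one is forced.</p>

```javascript
// Sketch of the described caching behavior (not the Firebase SDK): return
// the cached token while it is still valid, unless forceRefresh is set,
// in which case a new token is always fetched.
function createTokenCache(fetchToken, now = Date.now) {
  let cached = null;
  return function getToken(forceRefresh = false) {
    if (!forceRefresh && cached && cached.expiresAt > now()) {
      return cached.token; // still valid: reuse
    }
    cached = fetchToken(); // expired, missing, or forced: fetch fresh
    return cached.token;
  };
}

// Fake fetcher with an injected clock so the behavior is easy to observe.
let clock = 0;
let issued = 0;
const getToken = createTokenCache(
  () => ({ token: `token-${++issued}`, expiresAt: clock + 1000 }),
  () => clock
);

console.log(getToken());      // token-1 (fetched)
console.log(getToken());      // token-1 (cached)
console.log(getToken(true));  // token-2 (forceRefresh bypasses the cache)
clock = 5000;                 // advance past expiry
console.log(getToken());      // token-3 (expired, refetched)
```
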
This flexibility can be very useful when testing or handling special scenarios in your app.</p><pre>firebaseAppCheck.getToken(true)<br>    .addOnSuccessListener { token -&gt;<br>        // Use the token if needed<br>    }<br>    .addOnFailureListener { e -&gt;<br>        // Handle the error<br>    }</pre><p><strong>Common Issues That Can Break Firebase App Check:</strong></p><p>Even though Firebase App Check works quietly in the background to protect your app, sometimes things can go wrong. Knowing these common issues can help you fix problems quickly and keep your app secure:</p><ul><li><strong>App Check token not sent</strong> — Firebase didn’t receive the verification token from your app.</li><li><strong>App not installed from the Play Store — </strong>App Check expects the app to come from a trusted source like Google Play.</li><li><strong>App Check is not enabled in the Firebase Console — </strong>App Check must be turned on for your project to work.</li><li><strong>Play Integrity API is not properly configured — </strong>The system that checks your app’s authenticity isn’t set up correctly.</li></ul><p><strong>Using App Check During Development and Debugging:</strong></p><p>While Play Integrity is the provider used in production to ensure your app is genuine, development and debugging require a slightly different approach. During development, your app may not be installed from the Play Store, or you might be testing on devices that aren’t certified. In these cases, Firebase provides a Debug App Check Provider that allows you to test and develop your app without running into verification issues.</p><p>Using the debug provider is simple. You can set it up with the DebugAppCheckProviderFactory, which gives your app a debug token. This token acts like a temporary pass for your app during development, letting you test App Check functionality without requiring Play Integrity. 
Once your app is ready for production, you can switch back to Play Integrity to ensure full security for your real users.</p><p>Add a dependency in your <strong>app-level</strong> build.gradle file:</p><pre>dependencies {<br>    implementation &quot;com.google.firebase:firebase-appcheck-debug:LATEST_VERSION_HERE&quot;<br>}</pre><p>In your MainActivity or Application class:</p><pre>val firebaseAppCheck = FirebaseAppCheck.getInstance()<br>firebaseAppCheck.installAppCheckProviderFactory(<br>    DebugAppCheckProviderFactory.getInstance()<br>)</pre><p>Now launch the app and check Logcat to get the debug token.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RGX2hElIvYIIpEymuxlEXg.png" /><figcaption>Logcat Screenshot — App Check Debug Token</figcaption></figure><p>Add a debug token to App Check on the Firebase console to avoid blocking your test builds.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_fB-dICXnAemfgvWLK08gA.png" /><figcaption>App Check — Firebase Console</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pMZhw2EZjt2K8Sm6uwi4Tw.png" /><figcaption>Manage debug tokens</figcaption></figure><p><strong>App Check request metrics:</strong></p><ul><li>App Check metrics show how many requests are coming from valid vs invalid apps.</li><li>You can see trends like:</li><li>Spike in invalid requests → possible attack or misuse</li><li>Drop in valid requests → maybe legitimate users are facing issues</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8ZPkyobtWtLExhL-SvTDow.png" /><figcaption>App Check request metrics</figcaption></figure><p><strong>Demo Use Case of Firestore: App Check in Action</strong></p><pre>// Add collection to Firestore<br>val db = FirebaseFirestore.getInstance()<br><br>val data = hashMapOf(<br>    &quot;text&quot; to &quot;Hi from Firestore App Check demo&quot;<br>)<br><br>db.collection(&quot;messages&quot;)<br>    
.document(&quot;hello&quot;)<br>    .set(data)<br>    .addOnSuccessListener {<br>        Log.d(&quot;DEMO&quot;, &quot;Document written successfully&quot;)<br>    }<br>    .addOnFailureListener { e -&gt;<br>        Log.e(&quot;DEMO&quot;, &quot;Firestore write failed&quot;, e)<br>    }</pre><p>Scenario 1: Legit App</p><ul><li>Installed from the Play Store</li><li>Valid App Check token</li><li>Firestore access allowed</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O_DnI6Wt2IUqozlWdygguA.png" /></figure><p>Scenario 2: Tampered App</p><ul><li>Modified APK</li><li>Invalid token</li><li>Firestore access blocked</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*chjiTHyFc3Voegj31TRU2w.png" /><figcaption>Logcat Screenshot — Permission Denied From Firestore</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*M0oUJFG7HHViy1Fnjn8eyg.png" /><figcaption>Invalid App Check token</figcaption></figure><h3>Conclusion:</h3><p>Firebase App Check is not only simple to set up but also extremely effective at protecting your app and backend. Without it, your Firebase services are exposed, leaving your data vulnerable to fake apps, automated scripts, and potential misuse that can increase your billing unexpectedly.</p><p>By adding just a few lines of code, you can secure your Firebase data, block unauthorized access, and prevent abuse, all without interrupting your users’ experience. App Check works quietly in the background, giving you peace of mind that only your real, trusted app can communicate with Firebase, keeping both your users and your backend safe.</p><p><strong>Official Documentation</strong><br>Firebase App Check: <a href="https://firebase.google.com/docs/app-check">https://firebase.google.com/docs/app-check</a></p><p><strong>Happy Learning!</strong></p><p>If this guide helped you, show some love with 👏 and share it with your friends. 
Let’s make learning Firebase fun and easy for everyone. Thanks for reading!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d35cf3e7eb9b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/simform-engineering/firebase-app-check-with-play-integrity-api-in-android-d35cf3e7eb9b">Firebase App Check with Play Integrity API in Android</a> was originally published in <a href="https://medium.com/simform-engineering">Simform Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>