<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Devashish Patil on Medium]]></title>
        <description><![CDATA[Stories by Devashish Patil on Medium]]></description>
        <link>https://medium.com/@devashishpatil?source=rss-3b9a10f61d50------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*O0zkEtKFQh2VlOxG3LonsA.png</url>
            <title>Stories by Devashish Patil on Medium</title>
            <link>https://medium.com/@devashishpatil?source=rss-3b9a10f61d50------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 18:15:00 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@devashishpatil/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Life of an API]]></title>
            <link>https://medium.com/the-plumber/life-of-an-api-6b0387de9725?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/6b0387de9725</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[api]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Thu, 01 May 2025 19:28:34 GMT</pubDate>
            <atom:updated>2025-05-01T19:30:52.925Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fEIDTt5ODT8-9WKvtc2GmQ.png" /><figcaption>Images created using napkin.ai</figcaption></figure><p>In this article, we will explore the API lifecycle, a crucial aspect of modern software development that ensures the effective management and utilization of Application Programming Interfaces (APIs).</p><p>The API lifecycle encompasses various stages, from planning and design to deployment, monitoring, and retirement. Understanding this lifecycle is essential for developers, product managers, and organizations looking to leverage APIs for their applications and services.</p><h3>Stages of the API Lifecycle</h3><h4>1. Planning</h4><p>The first stage involves identifying the need for an API. This includes understanding the target audience, defining the purpose of the API, and determining the resources required. During this phase, stakeholders should gather requirements and outline the expected functionalities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*noOH8WF0RAHvt3Ujx8GXWw.png" /></figure><h4>2. Design</h4><p>Once the planning is complete, the next step is to design the API. This includes defining endpoints, data formats, authentication methods, and error handling. Tools like Swagger or OpenAPI can be used to create API specifications, which serve as a blueprint for development.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bFhxlj0YI1hA5Fkq98g1nA.png" /></figure><h4>3. Development</h4><p>In this stage, developers write the code for the API based on the design specifications. This includes implementing the backend logic, setting up databases, and ensuring that the API adheres to the defined standards. Version control systems are often utilized to manage changes during this phase.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xiJYdJaPM02o7wFAhm4qjQ.png" /></figure><h4>4. Testing</h4><p>After development, thorough testing is essential to ensure the API functions as intended. This includes unit testing, integration testing, and performance testing. Automated testing tools can help streamline this process and ensure consistent results.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jllU3JBbpaGy7RALWyAcuA.png" /></figure><h4>5. Deployment</h4><p>Once testing is complete, the API is ready for deployment. This involves making the API accessible to users, which may include setting up servers, configuring load balancers, and ensuring security measures are in place. Documentation should also be published to guide users on how to interact with the API.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aRYpOlxVGHTWPwEDKsIRaA.png" /></figure><h4>6. Monitoring</h4><p>Post-deployment, continuous monitoring is crucial to ensure the API performs well and remains reliable. This includes tracking usage metrics, error rates, and response times. Monitoring tools can help identify issues early and provide insights for future improvements.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e8EZ_nCmGYTG7Oc2A9UYog.png" /></figure><h4>7. Maintenance</h4><p>APIs require ongoing maintenance to address bugs, implement new features, and ensure compatibility with other systems. Regular updates and versioning are important to keep the API relevant and functional.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8rwKF-yxCYMDcyFUwa9NkA.png" /></figure><h4>8. Retirement</h4><p>Eventually, an API may become obsolete or be replaced by a newer version. The retirement phase involves deprecating the API, notifying users, and providing alternatives. 
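</p><p>One concrete way to handle the “notifying users” step is to announce the shutdown in the API’s own responses. The HTTP <code>Sunset</code> header (RFC 8594) exists for exactly this; below is a minimal, framework-agnostic sketch in Python (the retirement date and the successor URL are hypothetical placeholders):</p>

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def sunset_headers(sunset_at: datetime, successor_url: str) -> dict:
    """Response headers a retiring API can return so that clients learn
    about the shutdown programmatically instead of via email blasts.
    'Sunset' (RFC 8594) carries the retirement date; the Link header
    points clients at the replacement version (rel from RFC 5829)."""
    return {
        "Sunset": format_datetime(sunset_at, usegmt=True),
        "Link": f'<{successor_url}>; rel="successor-version"',
    }

# Hypothetical retirement date and v2 endpoint, for illustration only.
headers = sunset_headers(
    datetime(2026, 1, 1, tzinfo=timezone.utc),
    "https://api.example.com/v2/",
)
print(headers["Sunset"])
```

<p>Clients that watch for these headers can plan their migration well before the cut-off date.</p><p>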
Proper planning for retirement can minimize disruption for users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rIbCPUJaRSk8WLkkUpYjkQ.png" /></figure><p>Understanding the API lifecycle is vital for anyone involved in API development and management. By following these stages, organizations can create robust, reliable, and user-friendly APIs that meet the needs of their stakeholders.</p><p>Whether you are a developer, product manager, or business leader, grasping the intricacies of the API lifecycle will empower you to make informed decisions and drive successful API initiatives.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6b0387de9725" width="1" height="1" alt=""><hr><p><a href="https://medium.com/the-plumber/life-of-an-api-6b0387de9725">Life of an API</a> was originally published in <a href="https://medium.com/the-plumber">The Plumber</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Multi-Cloud: A Strategy or a Headache?]]></title>
            <link>https://medium.com/google-cloud/multi-cloud-a-strategy-or-a-headache-3a341c4da7c1?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/3a341c4da7c1</guid>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Mon, 23 Dec 2024 19:48:46 GMT</pubDate>
            <atom:updated>2024-12-25T10:11:29.527Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7JL3WD36LmRmuhj0" /><figcaption>Photo by <a href="https://unsplash.com/@epicuros?utm_source=medium&amp;utm_medium=referral">Vasilis Caravitis</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>If you are into Cloud Computing or DevOps engineering, you may have heard of multi-cloud at some point. It has become a buzzword that, for some, means flexibility, high availability, resilience, or even innovation.</p><p>Businesses more often than not envision an ideal world where workloads move seamlessly between Google Cloud, AWS, Azure, and others. The thought process usually involves leveraging each platform’s unique abilities and strengths.</p><blockquote>But in reality, multi-cloud often introduces more headaches than benefits.</blockquote><h3>The Case for Multi-Cloud</h3><p>The appeal of multi-cloud is undeniable, especially for enterprises. One of the major reasons to consider a multi-cloud strategy is to <strong>avoid vendor lock-in</strong>. 
This is a huge concern for big companies locked deep into a single provider.</p><p>Vendor lock-in is a real problem, though less with generic public cloud offerings than with many independent service providers; still, that age-old fear persists.</p><p>Another major motivation for organizations to go with multi-cloud is its promise of <strong>higher availability and enhanced resiliency.</strong> With workloads distributed across providers, an outage at one provider no longer threatens to halt the company’s operations.</p><p>And lastly, for companies operating in some regulated industries, multi-cloud might be the only option to meet local compliance requirements.</p><h3><strong>The Challenges of Multi-Cloud</strong></h3><p>But here’s the catch: multi-cloud isn’t a silver bullet that’ll magically solve all your problems. The operational complexity alone is huge.</p><p>Managing workloads across two or more cloud platforms means maintaining multiple toolsets, monitoring systems, and workflows. Your DevOps team needs expertise in everything from IAM policies in AWS to monitoring in Azure and networking in GCP. This not only increases the <strong>learning curve</strong> but also escalates <strong>costs</strong>.</p><p>Speaking of costs, let’s talk about <strong>data transfer</strong> fees. Moving data between providers can quickly become a financial black hole. Add to this the overhead of <strong>securing workloads</strong> across diverse environments — each with its quirks — and the promise of resilience starts to feel like an operational nightmare.</p><p><strong>Latency</strong> is another often-overlooked factor. Inter-cloud communication introduces delays that can impact application performance.</p><blockquote><strong>I can go on and on about this topic, but you get the idea.</strong></blockquote><h3><strong>When Does Multi-Cloud Make Sense?</strong></h3><p>Despite these challenges, multi-cloud has its place. 
Global organizations with a strong operational backbone may find it vital for their business.</p><p>Similarly, companies with stringent compliance needs — like financial institutions — might have no choice but to spread workloads across regions and providers. This is also a very common scenario.</p><p>But for most businesses, the decision should be a careful balance. If the complexity outweighs the value, it might be time to reconsider.</p><blockquote>I see startups that are still building their user base asking for multi-cloud deployments simply because it is fashionable. It should be avoided this early in a company’s technical journey.</blockquote><h3>What’s the alternative?</h3><p>Embrace cloud-agnostic tools such as Kubernetes and Terraform, open-source tools backed by paid/premium support, and independent service providers. These technologies allow you to build portable workloads without committing to a full-blown multi-cloud strategy.</p><p>For those already running on-premises infrastructure, a hybrid cloud strategy may be an option, but it has its own challenges, which I’ll discuss some other time.</p><h3>Conclusion</h3><p><em>Here’s my take:</em></p><p>Multi-cloud is often a solution looking for a problem. For most organizations, it introduces unnecessary complexity, skyrocketing costs, and operational challenges.</p><p>Unless there’s a compelling reason — like regulatory compliance or the need for specific features across providers — it’s usually better to focus on optimizing a single cloud provider.</p><p>The cloud landscape is already complex. Adding multiple providers to the mix can derail your team’s focus, slow down delivery, and ultimately dilute the value you’re aiming to create.</p><blockquote>Choose simplicity, and let the business — not the buzzwords — drive your strategy. Simplicity often beats complexity.</blockquote><p>What’s your stance on multi-cloud? 
Share in the comments or on <a href="https://x.com/devashishpatil_">X</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3a341c4da7c1" width="1" height="1" alt=""><hr><p><a href="https://medium.com/google-cloud/multi-cloud-a-strategy-or-a-headache-3a341c4da7c1">Multi-Cloud: A Strategy or a Headache?</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Understanding the DevOps Toolchain]]></title>
            <link>https://medium.com/the-plumber/understanding-the-devops-toolchain-a678fd6d70c4?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/a678fd6d70c4</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Tue, 10 Dec 2024 18:54:59 GMT</pubDate>
            <atom:updated>2024-12-10T18:54:59.584Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g6vMRHCqja2G_X8t" /><figcaption>Photo by <a href="https://unsplash.com/@nina_mercado?utm_source=medium&amp;utm_medium=referral">Nina Mercado</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h4>TL;DR</h4><ul><li><em>Develop: Git, GitHub, GitLab, BitBucket</em></li><li><em>Infrastructure management: Terraform, Ansible</em></li><li><em>Build and Deploy: GitHub Actions, GitLab CI, ArgoCD, Jenkins</em></li><li><em>Monitor: Grafana + Prometheus</em></li></ul><h4>Develop</h4><p>Collaboration on the code is essential, and you must be familiar with version control systems, the most popular of which is <strong>Git</strong>.</p><ul><li>Learn how Git works.</li><li>Learn how to implement branching strategies and branch protection.</li><li>Learn how to work with pull requests.</li></ul><p><strong>Tools: </strong>Git, GitHub, GitLab, BitBucket</p><h4>Build and Test</h4><p>This is where you do the CI (continuous integration) part of a CI/CD pipeline. You bundle your code with language-specific tools such as Maven/Gradle for Java, npm for Node.js projects, etc.</p><p>Testing is also dependent on the languages you are using. Some common tools for testing are Cypress, JUnit, and PyTest.</p><p>Container-based applications are very common, and you need to learn how to build container images. <strong>Docker</strong> is the most popular, and I would suggest learning that. After that, you can also explore <strong>Podman</strong>.</p><p>You automate such building and testing tasks in a CI/CD pipeline.</p><p><strong>Tools:</strong> GitHub Actions, Jenkins, GitLab CI</p><h4>Release</h4><p>Once you have built the artifacts, you’ll need a place to store and version these artifacts. 
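</p><p>A detail worth understanding here is that artifact stores typically identify an artifact not just by a human-chosen version label but by an immutable content digest; this is, for instance, what the <code>sha256:…</code> part of a Docker image reference is. A small illustrative sketch of the idea in Python:</p>

```python
import hashlib

def artifact_digest(artifact_bytes: bytes) -> str:
    """Content-address an artifact the way container registries do:
    the sha256 digest of the bytes identifies the artifact immutably,
    so a stored artifact can never silently change under a digest."""
    return "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()

build_one = b"example build output"
build_two = b"example build output (patched)"
print(artifact_digest(build_one))
# Any change to the artifact yields a completely different digest.
print(artifact_digest(build_two))
```

<p>Version labels (tags) then act as human-friendly pointers to these digests.</p><p>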
Examples of common artifacts: Docker images, JAR files, Python modules, Node modules, Go binaries, etc.</p><p><strong>Tools:</strong> JFrog Artifactory, Nexus, DockerHub</p><h4>Deploy</h4><p>How you deploy a particular application depends on its type and on how its artifact runs on different machines. However, you should know common deployment strategies such as in-place, blue-green, and canary deployments.</p><p><strong>Tools:</strong> ArgoCD, Helm, Spinnaker, plus the tools mentioned in Build and Test</p><h4>Operate</h4><p>Deployment of applications is done on some kind of infrastructure, either cloud-based or on-premises.</p><p>You should know the basics of cloud computing and how it works; you can start with any of <strong>Google Cloud, AWS, or Azure.</strong></p><p>Apart from that, you need to know the tools to manage infrastructure in the cloud or on-premises. Beyond Infrastructure as Code, you will also need Configuration as Code for application/server configuration management.</p><p><strong>Tools:</strong> <strong>Terraform, Ansible</strong>, Chef, Pulumi, Crossplane</p><h4>Monitor</h4><p>Finally, you need to understand monitoring and observability.</p><p>If something breaks, you want to know before your users do. For example, a spike in response times might indicate an issue. With proper monitoring, you can catch and fix it before it escalates.</p><p><strong>Tools:</strong> Prometheus, Grafana, Datadog, SigNoz</p><p>The DevOps toolchain isn’t just about tools — it’s about creating a streamlined process that supports collaboration, automation, and continuous improvement.</p><p>By choosing the right tools and using them effectively, you’ll build systems that are not only resilient but also a joy to work with.</p><p>What’s your favorite DevOps tool? Or do you have a unique toolchain setup? 
Let me know — I’d love to hear your thoughts!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a678fd6d70c4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/the-plumber/understanding-the-devops-toolchain-a678fd6d70c4">Understanding the DevOps Toolchain</a> was originally published in <a href="https://medium.com/the-plumber">The Plumber</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[DevOps is harder than you think!]]></title>
            <link>https://medium.com/the-plumber/devops-is-harder-than-you-think-34a1f971f90f?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/34a1f971f90f</guid>
            <category><![CDATA[software]]></category>
            <category><![CDATA[challenge]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Sun, 01 Dec 2024 09:22:14 GMT</pubDate>
            <atom:updated>2024-12-01T09:22:14.950Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KwOUlXIU16HAYoCT" /><figcaption>Photo by <a href="https://unsplash.com/@growtika?utm_source=medium&amp;utm_medium=referral">Growtika</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>DevOps Engineering is a multi-faceted role. It bridges the gap between development and operations by streamlining processes, automating workflows, and improving system reliability.</p><p>While it is rewarding, it comes with several challenges.</p><h4>Tool Overload</h4><p>The DevOps ecosystem is vast, with multiple tools for CI/CD, configuration management, IaC, observability, orchestration, and whatnot. Selecting tools is confusing, and mastering the right ones is necessary.</p><p>But...</p><p>The problem doesn&#39;t end there: you need to ensure that the tools you’ve selected work well together, which is not always the case. You’ll end up writing custom glue code, adding a lot of complexity.</p><h4>Speed vs Stability</h4><p>Balancing the two is hard. Delivering new features quickly without compromising system reliability is a constant challenge.</p><p>On top of this, handling on-call duties and troubleshooting outages under pressure gets stressful very fast.</p><h4>Observability</h4><p>Crucial, but very tricky. Monitoring and observability in modern architectures involving microservices and distributed systems are complex and difficult.</p><blockquote>With distributed systems, finding the root cause of an issue can feel like searching for a needle in a haystack — only the haystack is on fire, and everyone’s yelling at you to fix it.</blockquote><h4>Scaling</h4><p>Scaling is another beast altogether. 
It’s not just about making systems bigger; you also need to keep them fast, efficient, and cost-effective as they grow.</p><p>Infrastructure scaling is one thing; your applications need to be scalable as well, and that requires tight collaboration with developers, which is often difficult.</p><h4>Automation</h4><p>Over time, CI/CD pipelines tend to get brittle and prone to breaking as systems evolve. You are also expected to add extensive tests and automation while keeping build and deploy times short.</p><blockquote>Automating tests for complex scenarios, particularly for stateful systems or edge cases, is non-trivial.</blockquote><h4>Knowledge and Skill Gaps</h4><p>DevOps engineers must understand coding, system design, networking, cloud platforms, and much more.</p><p>On top of knowing all these tools, you need to keep up with the ever-evolving DevOps landscape. That is a full-time job in itself.</p><h4>Cost Management</h4><p>Managing unpredictable costs in cloud-native environments can be challenging without proper cost monitoring.</p><p>Resource optimization is a regular activity. Ensuring infrastructure is used efficiently without over-provisioning or under-provisioning is crucial.</p><h4>Burnout</h4><p>Finally, there’s the human side of it all. Burnout is real. Constant on-call rotations, high expectations, and the pressure to keep everything running smoothly can take a toll.</p><blockquote>The “always-on” nature of DevOps can create pressure to deliver under tight timelines.</blockquote><p>So yeah, DevOps is hard. But that’s also what makes it rewarding.</p><p>You’re solving real problems, creating systems that scale, and making life easier for everyone around you. 
It’s a tough job, but for those of us who thrive on challenges, it’s worth it.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=34a1f971f90f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/the-plumber/devops-is-harder-than-you-think-34a1f971f90f">DevOps is harder than you think!</a> was originally published in <a href="https://medium.com/the-plumber">The Plumber</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Grow your income as a DevOps Engineer]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://medium.com/the-plumber/grow-your-income-as-a-devops-engineer-0e9310eb3f2d?source=rss-3b9a10f61d50------2"><img src="https://cdn-images-1.medium.com/max/2600/0*sqYe9XOsCx3RW5VA" width="5616"></a></p><p class="medium-feed-snippet">Unlock your full potential!!</p><p class="medium-feed-link"><a href="https://medium.com/the-plumber/grow-your-income-as-a-devops-engineer-0e9310eb3f2d?source=rss-3b9a10f61d50------2">Continue reading on The Plumber »</a></p></div>]]></description>
            <link>https://medium.com/the-plumber/grow-your-income-as-a-devops-engineer-0e9310eb3f2d?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/0e9310eb3f2d</guid>
            <category><![CDATA[jobs]]></category>
            <category><![CDATA[money]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[cloud]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Sun, 24 Nov 2024 15:28:51 GMT</pubDate>
            <atom:updated>2024-11-24T15:28:51.370Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[About Reliability]]></title>
            <link>https://medium.com/google-cloud/about-reliability-8f79bc3d09ee?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/8f79bc3d09ee</guid>
            <category><![CDATA[reliability]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Fri, 05 Jul 2024 07:20:09 GMT</pubDate>
            <atom:updated>2024-11-29T05:25:21.618Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9Q9czctZWcMVEy4jduOMiw.png" /><figcaption>Designed by <a href="https://medium.com/u/3b9a10f61d50">Devashish Patil</a></figcaption></figure><p>Everyone has a rough idea about what it means for something to be reliable. For software systems, the following can be considered reasonable expectations:</p><ul><li>The software works as the user expects, with the performance the user needs.</li><li>If the user makes mistakes, it still continues to work. The same is true when the software is used in unintended ways.</li><li>It only allows authorized access and prevents abuse.</li></ul><blockquote>A system can be said to be reliable when it continues to work correctly even when things go wrong: hardware failures, software errors, human errors, or even use in ways that were not intended.</blockquote><h3>Importance of reliability</h3><p>Reliability is not important just for critical things like vehicles and food quality. It is also important for software systems.</p><p>Imagine you are storing all your photos (read: memories) in a cloud-based service and suddenly the data gets corrupted. How would you feel?</p><p>Apart from a bad user experience, this may result in loss of revenue for businesses. 
For example, a payment gateway going down directly affects payment transactions, and an e-commerce website unable to show products indirectly reduces sales.</p><p>This may also result in legal/monetary implications if data is reported incorrectly or the system is down longer than what was agreed to (read more about <a href="https://en.wikipedia.org/wiki/Service-level_agreement">Service Level Agreements, or SLAs</a>).</p><h3>Can you compromise on reliability?</h3><p>When you are launching something new, or testing the product with an MVP, it makes sense to focus on shipping the software as soon as possible and compromise on reliability to save costs.</p><p>But once you have a user base consuming services from your application, keeping the application reliable becomes absolutely necessary. Anything less invites the consequences described above.</p><h3>How to make your systems reliable?</h3><p>Once you have decided that your application needs to be reliable, which will be the case most of the time, the following approaches can be considered.</p><h4>Testing</h4><p>When you are trying to build reliable applications, incorporating testing helps a lot. Automated testing ensures you are not pushing code with bugs or breaking changes, and therefore you’ll have more confidence when making changes to your application.</p><p>Many organizations follow <a href="https://en.wikipedia.org/wiki/Test-driven_development#:~:text=Test%2Ddriven%20development%20(TDD),with%20another%20new%20test%20case.">test-driven development</a>, where you write the tests first, and then write the actual code to pass those tests. This is usually done for unit testing. Along with this, to ensure reliability even further, you should incorporate functional and integration testing in your applications.</p><h4>Introduce Chaos</h4><p>Chaos engineering is a fairly new concept, but an effective one. 
It is the practice of intentionally injecting faults into a system to test its resilience.</p><blockquote>Chaos Engineering is similar to how a vaccine works. You inject your body with a small amount of a potentially harmful agent to build resistance.</blockquote><p>The goal of chaos engineering is to find potential issues early on, which allows you to mitigate them and prevent outages and disruptions.</p><p>Examples include terminating virtual machines or containers at random, introducing memory leaks to cause resource exhaustion, or simulating a flappy firewall or an unreliable network.</p><h4>Security Hardening</h4><p>Unreliability may also be caused by insecure or poorly guarded systems. Attackers can cause an outage, such as through a DDoS attack, or may exploit a known vulnerability to gain unauthorized access.</p><p>This is where hardening the systems that make up your application becomes absolutely necessary.</p><p><strong>Examples:</strong></p><ul><li>Having a Web Application Firewall to protect against DDoS attacks, implementing the <a href="https://owasp.org/www-project-top-ten/">OWASP Top 10</a> best practices, etc.</li><li>Regularly scanning your code, OS, and container images for vulnerabilities and taking action to mitigate those.</li><li>Having security constructs such as authentication, authorization, encryption, etc.</li></ul><p>These are just a few ways of improving the security of your application; security is not limited to this and is a whole topic of its own.</p><p>If you made it this far, be sure to check out other articles from me at <a href="https://medium.com/u/3b9a10f61d50">Devashish Patil</a>.</p><p>Keep Learning.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8f79bc3d09ee" width="1" height="1" alt=""><hr><p><a href="https://medium.com/google-cloud/about-reliability-8f79bc3d09ee">About Reliability</a> was originally published in <a href="https://medium.com/google-cloud">Google 
Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[This is how you approach System Design]]></title>
            <link>https://medium.com/codebyte/this-is-how-you-approach-system-design-ea0001b7d297?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/ea0001b7d297</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Wed, 03 Apr 2024 06:19:15 GMT</pubDate>
            <atom:updated>2024-04-03T06:19:15.062Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1iHND0H5CJq6_E1girJb5w.png" /><figcaption>Designed by <a href="https://medium.com/u/3b9a10f61d50">Devashish Patil</a></figcaption></figure><p>Aside from interviews, System Design is a skill one should excel at to become a better engineer.</p><p>In this article, I discuss how one can approach System Design.</p><h4>Understand the problem statement</h4><p>Collect information about the problem that needs a solution. The following points can be considered while defining the problem:</p><ul><li>Understand the requirements, such as the features, the scale of the solution, etc.</li><li>Think from the users’ perspective and try to understand their needs.</li><li>Finally, define any constraints or limitations of the system. <br>For example: there may be compliance requirements based on demographics or the industry for which the solution is being designed.</li></ul><h4>Identify the scope of the system</h4><p>Two things need to be defined to identify the scope of the system: what the system will do and what it won’t.</p><p>For example: an e-commerce application needs to be built, but payments are to be handled by a third party. The system would be responsible for integrating the third-party payment service, but developing, hosting, or managing that service is out of scope.</p><h4>Look for existing references</h4><p>Look at similar systems that have been built in the past and identify what worked well and what didn’t. Use this information to make your design decisions.</p><h4>Create a high-level design</h4><p>Outline the main components of the system and how they will interact with each other. This can include a raw diagram of the system’s architecture, or a flowchart outlining the process the system will follow.</p><h4>Refine the design</h4><p>Iterations and refinements will always be needed. 
One should not approach the system design process as a one-off scenario. During the initial design phase, iterate over the design until all requirements are met and there are no loose components.</p><p>Once the initial phase is over, further refinements may be needed as the application grows.</p><h4>Document the design</h4><p>Proper documentation is necessary as part of the design process. The following points can be considered during documentation:</p><ul><li>Document how the data flows between components. This can be in line with the user journeys.</li><li>Document the connectivity between components. Make a note of what traffic is allowed between components.</li><li>Define the coupling between components.</li><li>Document how error handling is performed. Also, document possible implications and mitigation steps in case a component goes down.</li></ul><h4>Continuously monitor and improve the system</h4><p>System design is not a one-time process; the system needs to be continuously monitored and improved to meet changing requirements.</p><p>Robust monitoring and logging are needed for critical systems. This can include system resource usage such as CPU, memory, disk space, etc. 
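As a minimal illustration of such a resource snapshot, the Python standard library already exposes some of these numbers (the function name and returned fields here are hypothetical; a real deployment would use a dedicated monitoring agent):

```python
import os
import shutil

def system_snapshot(path="/"):
    """Collect a few basic host metrics using only the standard library."""
    total, used, free = shutil.disk_usage(path)
    return {
        # percentage of the filesystem holding `path` that is in use
        "disk_used_pct": round(100 * used / total, 1),
        # 1-minute CPU load average (Unix-only; Windows lacks os.getloadavg)
        "load_avg_1m": os.getloadavg()[0],
    }
```

A real system would ship these numbers to a time-series store and alert on thresholds, alongside the latency and error-rate metrics mentioned below.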
Further, metrics in terms of latency, error rate, and response code analysis may also be required.</p><h4>Consider the cost</h4><p>One may be tempted to use many tools and resources for the points mentioned above, and may well design robust and functional systems, but the result may not justify its cost.</p><p>Cost is a big factor for companies, and one must be able to design systems efficiently with the minimum resources at their disposal.</p><p>If content like this interests you, consider following me on Medium or hit me up on Twitter (X) <a href="https://twitter.com/devashishpatil_">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ea0001b7d297" width="1" height="1" alt=""><hr><p><a href="https://medium.com/codebyte/this-is-how-you-approach-system-design-ea0001b7d297">This is how you approach System Design</a> was originally published in <a href="https://medium.com/codebyte">CodeByte</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Rapid Python #1]]></title>
            <link>https://devashishpatil.medium.com/rapid-python-1-6802de43a662?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/6802de43a662</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[code]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Sat, 11 Nov 2023 10:46:16 GMT</pubDate>
            <atom:updated>2023-11-11T10:46:16.298Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Fx_EYwisSHdaMu-E" /><figcaption>Photo by <a href="https://unsplash.com/@marcojodoin?utm_source=medium&amp;utm_medium=referral">Marc-Olivier Jodoin</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>Python os module</h3><p>Python’s os module is often seen as a collection of basic file and directory manipulation tools, but it is a powerful gateway to the underlying operating system. It provides a plethora of features that can make one a better programmer. Check out this curated list:</p><ul><li><strong>Environment Variable Manipulation:</strong> the os.environ dictionary</li><li><strong>Process Management:</strong> os.system() and os.popen()</li><li><strong>File Locking and I/O Control:</strong> os.lockf() and os.lseek()</li><li><strong>System Information Retrieval:</strong> <br>For fetching system-level info such as the current working directory, system name, user info</li><li><strong>Platform-Specific Operations</strong> <br>Example: os.urandom() generates random bytes using the operating system’s randomness source.</li><li><strong>Path Manipulation and Expansion:</strong> os.path.join() and os.path.expanduser()</li><li><strong>File Permission Control:</strong> os.chmod() and os.chown()</li></ul><h4>Other use cases:</h4><ul><li>Interfacing with System Services (like sending signals to processes)</li><li>Debugging and Exception Handling</li><li>Encoding and Decoding</li></ul><h3><strong>History of Sets and Dictionaries</strong></h3><p>Let’s understand a bit of history about Python Sets and Dictionaries.</p><p>The set data structure in Python started off as a replica of the dict data structure: key-value pairs with dummy values.</p><p>Both use hash tables as the underlying data structure, which explains the average O(1) complexity for data lookup and insertion.</p><p>But since then, the set’s 
and dict’s implementations have diverged from each other (e.g. arbitrary order in sets vs. insertion order in dicts), and their performance in various use cases differs.</p><p>If you’d like to see more such content, consider following.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6802de43a662" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Design efficient file uploads from browser]]></title>
            <link>https://medium.com/google-cloud/make-efficient-file-uploads-from-browser-9556e74858eb?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/9556e74858eb</guid>
            <category><![CDATA[system-design-interview]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[software]]></category>
            <category><![CDATA[gcp-app-dev]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Tue, 29 Aug 2023 09:46:48 GMT</pubDate>
            <atom:updated>2023-08-29T17:00:18.310Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*99vprPhxXxPxaCvp" /><figcaption>Photo by <a href="https://unsplash.com/@qwitka?utm_source=medium&amp;utm_medium=referral">Maksym Kaharlytskyi</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Imagine having a frontend, a backend, a database for storing metadata, and a blob storage like a Google Cloud Storage or S3 bucket for storing large objects like images, videos, etc. Apart from these, there will be services like load balancers, NAT gateways, etc., which also add hops and network traffic.</p><p>With the traditional approach, users would send the file from the client/frontend to the backend server. Once uploaded to the server, the file then needs to be uploaded to object storage using an API or SDK. This is how the architecture looks for such an approach.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EcPtZACCMh0Mn6AZe5dgKQ.png" /><figcaption>Designed by <a href="https://medium.com/u/3b9a10f61d50">Devashish Patil</a></figcaption></figure><h4>There are multiple issues with this architecture:</h4><ul><li>Too many hops, each carrying the file payload</li><li>The file travels through every component, resulting in high network costs.</li><li>More processing power is needed to handle uploads and long-running requests.</li></ul><h4>What can be done instead?</h4><p>The file should be uploaded directly from the browser to the object storage, and the metadata, such as the upload location and storage details, should be updated in the database.</p><h4>But how can you do that?</h4><p>There is a concept called signed URLs. 
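The core idea is that the backend embeds an expiry timestamp in the URL and signs it with a secret shared only with the storage service, so the storage service can later verify an upload request without any session state. The sketch below is a toy illustration of that principle only — the secret, URL shape, and function names are made up; real object stores expose this through their SDKs (e.g. generate_signed_url in the Google Cloud Storage Python client):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"shared-signing-secret"  # hypothetical key known to backend and storage

def make_signed_url(base_url, object_name, expires_in=900):
    """Backend side: embed an expiry and an HMAC over (object, expiry)."""
    expires = int(time.time()) + expires_in
    payload = f"{object_name}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{base_url}/{object_name}?" + urlencode({"expires": expires, "sig": sig})

def verify_request(object_name, expires, sig):
    """Storage side: recompute the HMAC and reject expired or tampered URLs."""
    payload = f"{object_name}:{int(expires)}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < int(expires)
```

Anyone holding the URL can upload until the expiry, after which the storage service rejects it; no long-lived credentials ever reach the browser.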
Signed URLs are time-limited endpoints for accessing a resource in S3, Google Cloud Storage, etc., for either reading or writing.</p><h4>The flow will look like this:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tU3XAU9f0E1kIBWE2zcG3g.png" /><figcaption>Designed by <a href="https://medium.com/u/3b9a10f61d50">Devashish Patil</a></figcaption></figure><ul><li>The frontend asks the backend for a signed URL for a specific object, valid for a limited time.</li><li>The backend uses the object storage service’s API to create the signed URL and responds with it.</li><li>The frontend uses this URL to upload the file and then calls the backend again with the upload status.</li><li>Based on the status, the backend updates the metadata server with information such as the file location and upload status. This metadata can be used later for accessing the uploaded files.</li></ul><p>This completes the whole flow of efficient file uploads: it reduces the hops and network traffic, thereby reducing costs as well.</p><p>Let me know if you found this helpful; I would love to hear your feedback.</p><p>Follow for more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9556e74858eb" width="1" height="1" alt=""><hr><p><a href="https://medium.com/google-cloud/make-efficient-file-uploads-from-browser-9556e74858eb">Design efficient file uploads from browser</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How routing works in Apigee X]]></title>
            <link>https://medium.com/google-cloud/how-routing-works-in-apigee-x-409aa544aea3?source=rss-3b9a10f61d50------2</link>
            <guid isPermaLink="false">https://medium.com/p/409aa544aea3</guid>
            <category><![CDATA[api-management]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[gcp-app-dev]]></category>
            <category><![CDATA[api-gateway]]></category>
            <dc:creator><![CDATA[Devashish Patil]]></dc:creator>
            <pubDate>Sun, 02 Jul 2023 13:38:48 GMT</pubDate>
            <atom:updated>2023-07-14T17:19:36.111Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*LfyvoPR01yYXUvhz" /><figcaption>Photo by <a href="https://unsplash.com/it/@iamjiroe?utm_source=medium&amp;utm_medium=referral">Jiroe (Matia Rengel)</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>This article aims to explain how routing works in Apigee X. It is divided into 3 sections:</p><ol><li>The first part explains how traffic comes to Apigee X from various clients.</li><li>The second part explains how to configure the traffic flow to backends in different networks and datacenters.</li><li>The third part focuses on two things: which proxy to execute based on the request hostname and base path, and how the request and response flow works inside an Apigee X proxy, including how to customize the routing behavior for multiple backends.</li></ol><h3>Routing to Apigee X</h3><p>Apigee X is a SaaS offering hosted inside a Google-managed project and VPC, the implementation of which is invisible to the Google Cloud customer. The Google-managed Apigee X VPC is peered with a customer-managed VPC via service networking and is thus accessible from that VPC. This holds true for all the service projects inside a shared VPC as well.</p><p>Each Apigee X instance comes with an IP address that is accessible from the customer VPC, and VMs/clusters inside that VPC can hit that IP address to access the APIs exposed via Apigee.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WuxzSrpK1_LF7m6Xg-Porw.png" /></figure><p>This whole setup of traffic coming to Apigee is called the Northbound Flow. 
There are a few approaches to implementing the northbound flow; two of the most common are discussed below.</p><h4>Load Balancer + Managed instance group</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wd3TmaGtBIe7kFbk5pnAmw.png" /></figure><p>In the architecture diagram above, there is a managed instance group responsible for IP forwarding, which acts as a backend service for the load balancer. All the requests coming from the clients through the load balancer are forwarded to the Apigee runtime.</p><h4>Load Balancer + PSC NEG</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pHJuOs6gL1MQfxWlFhBmYg.png" /></figure><p>In this approach, instead of the managed instance group, there is a PSC NEG (network endpoint group) responsible for forwarding requests to the Apigee runtime. This removes the need to manage Compute Engine instances.</p><p>In both cases, the load balancer can be external and/or internal depending on the API clients.</p><h3>Routing from Apigee X</h3><p>The traffic going from Apigee X to the backends is called the Southbound Flow. There can be the following 4 scenarios based on where the backends are.</p><h4>Apigee to backends in the same VPC</h4><p>A backend in the same VPC will be accessible directly from Apigee if appropriate firewall rules are set.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BxoEdV-jRzs5F-oXpsEzGQ.png" /></figure><h4>Apigee to backends in different Google Cloud VPCs</h4><p>If the backends are in VPCs different from the one Apigee is peered with, additional configuration is required. If VPC peering is set up between the Apigee VPC and the backend VPCs, routing works as it would for any other GCP service communication.</p><p>If peering cannot be used, another option is Private Service Connect (PSC). 
This sets up the connection between the two VPCs without any kind of peering.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Bqcf4mGA5xIrwG3n_N_krw.png" /></figure><h4>Apigee to backends on-premises or in other clouds</h4><p>For networks residing on-premises or in multi-cloud systems, some type of VPN or Interconnect setup is required.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-r2-o0YXBwev03FVimTMoA.png" /></figure><h4>Apigee to backends on the internet</h4><p>In this case, services hosted on the internet can be referenced directly by Apigee by their IP address or hostname.</p><p>Some internet services might require allowlisting of the IPs accessing them. Apigee uses Cloud NAT for egress, and static IP addresses can be reserved for it. These IP addresses can then be allowlisted at the internet service.</p><h3>Routing inside Apigee X</h3><h4>Which proxy deployment to execute?</h4><p>Apigee has constructs called Environments and Environment Groups. Each Environment Group can contain one or more Environments.</p><pre>https://www.example.com/shopping/cart/addItem<br>        |_____________| |___________| |_____|<br>               |             |           |<br>            hostname      basepath     resource</pre><p>Proxies are deployed to an Environment, while the hostnames on which clients will access the API proxy are defined at the Environment Group level.</p><p>Each Environment Group can be configured to listen for requests on one or more hostnames, but one hostname cannot be in more than one group.</p><p>The hostname and a proxy basepath (across multiple proxy deployments) make a unique combination in Apigee. 
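To make that uniqueness constraint concrete, the lookup can be modeled as a dictionary keyed on (hostname, basepath), with the longest matching basepath winning. This is only a toy model of the idea, not Apigee internals; the hostnames and proxy names are made up:

```python
# Toy model: each (hostname, basepath) pair maps to exactly one proxy deployment.
ROUTES = {
    ("www.example.com", "/shopping"): "shopping-proxy",
    ("www.example.com", "/payments"): "payments-proxy",
    ("internal.example.com", "/shopping"): "internal-shopping-proxy",
}

def resolve(hostname, path):
    """Pick the deployment whose basepath is the longest prefix of the path."""
    candidates = [(basepath, proxy) for (host, basepath), proxy in ROUTES.items()
                  if host == hostname and path.startswith(basepath)]
    if not candidates:
        return None  # no proxy listens on this hostname/path combination
    return max(candidates, key=lambda c: len(c[0]))[1]
```

For instance, a request for https://www.example.com/shopping/cart/addItem would land on the deployment registered under ("www.example.com", "/shopping"), while the same basepath on a different hostname can belong to a different deployment.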
For more information, see the <a href="https://cloud.google.com/apigee/docs/api-platform/fundamentals/environments-overview">environments overview</a>.</p><h4>How does traffic flow work inside a proxy?</h4><p>The diagram below shows how a request flows through Apigee.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mBEG2eEf4b7oTFswH_0_5Q.png" /></figure><ul><li>Each request goes through a series of flows inside Apigee.</li><li>The flow from client to server (left to right) is called the request flow.</li><li>The flow from server to client (right to left) is called the response flow.</li><li>An Apigee proxy is divided into 2 parts: the proxy endpoint and the target endpoint.</li><li>Traffic passes through each endpoint twice: first during the request flow and again during the response flow.</li><li>These flow executions are further divided into 3 parts: PreFlow, Conditional Flows, and PostFlow. A conditional flow runs only if the condition defined on the flow evaluates to true. Conditional flows are optional.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/0*sAAuW9iJDTLNoYhW" /></figure><p>The code for a basic proxy endpoint looks like this.</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;yes&quot;?&gt;<br>&lt;ProxyEndpoint name=&quot;default&quot;&gt;<br>    &lt;PreFlow name=&quot;PreFlow&quot;&gt;<br>        &lt;Request/&gt;<br>        &lt;Response/&gt;<br>    &lt;/PreFlow&gt;<br>    &lt;Flows/&gt;<br>    &lt;PostFlow name=&quot;PostFlow&quot;&gt;<br>        &lt;Request/&gt;<br>        &lt;Response/&gt;<br>    &lt;/PostFlow&gt;<br>    &lt;HTTPProxyConnection&gt;<br>        &lt;BasePath&gt;/mock&lt;/BasePath&gt;<br>    &lt;/HTTPProxyConnection&gt;<br>    &lt;RouteRule name=&quot;default&quot;&gt;<br>        &lt;TargetEndpoint&gt;default&lt;/TargetEndpoint&gt;<br>    &lt;/RouteRule&gt;<br>&lt;/ProxyEndpoint&gt;</pre><p>The parent blocks for PreFlow and PostFlow can be seen here, each containing request 
and response blocks, which are the actual containers for Apigee policies.</p><p>There is a block for HTTPProxyConnection, which defines the base path on which Apigee expects the proxy’s traffic. A RouteRule block follows, which is responsible for routing the traffic and is discussed later in this article.</p><p>The code for a target endpoint looks like this:</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;yes&quot;?&gt;<br>&lt;TargetEndpoint name=&quot;default&quot;&gt;<br>    &lt;PreFlow name=&quot;PreFlow&quot;&gt;<br>        &lt;Request/&gt;<br>        &lt;Response/&gt;<br>    &lt;/PreFlow&gt;<br>    &lt;Flows/&gt;<br>    &lt;PostFlow name=&quot;PostFlow&quot;&gt;<br>        &lt;Request/&gt;<br>        &lt;Response/&gt;<br>    &lt;/PostFlow&gt;<br>    &lt;HTTPTargetConnection&gt;<br>        &lt;URL&gt;https://mocktarget.apigee.net&lt;/URL&gt;<br>    &lt;/HTTPTargetConnection&gt;<br>&lt;/TargetEndpoint&gt;</pre><p>The PreFlow and PostFlow blocks are similar to the ones in the proxy endpoint. Instead of HTTPProxyConnection, there is an HTTPTargetConnection block, which defines which backend (or target, in Apigee’s nomenclature) to send the request to.</p><p>The diagram above shows 2 target endpoints, which are configured for 2 different backends.</p><p>There can be multiple target endpoints within the same proxy. These target endpoints are referenced in the proxy endpoint via route rules.</p><p>When a proxy is created from the console, 1 default proxy endpoint and 1 default target endpoint are created. The default proxy endpoint references the default target endpoint in its default route rule.</p><p>Additional target endpoints can be created, each pointing to a different backend server. Apart from the default route rule, additional rules can be created, each with a condition that is responsible for the routing decision. 
Only the default route rule is without any condition.</p><p>A conditional route rule block in the proxy endpoint looks like this:</p><pre>&lt;RouteRule name=&quot;test&quot;&gt;<br>    &lt;TargetEndpoint&gt;test&lt;/TargetEndpoint&gt;<br>    &lt;Condition&gt;proxy.pathsuffix MatchesPath &quot;/test&quot;&lt;/Condition&gt;<br>&lt;/RouteRule&gt;</pre><p>There are two sub-blocks here: the first names the TargetEndpoint the route rule points to, and the second defines the condition for the execution of the route rule, which in this case is a path match.</p><h3>Example</h3><p>Imagine a microservice architecture for an e-commerce application. There are different services for the catalog, cart, users, etc., with different hostnames. The APIs can be exposed from a single endpoint via Apigee, with the routing decision made on the path suffix. The route rules for this use case will look something like this:</p><pre>&lt;RouteRule name=&quot;product&quot;&gt;<br>    &lt;TargetEndpoint&gt;product-target-endpoint&lt;/TargetEndpoint&gt;<br>    &lt;Condition&gt;proxy.pathsuffix MatchesPath &quot;/product&quot;&lt;/Condition&gt;<br>&lt;/RouteRule&gt;<br>&lt;RouteRule name=&quot;catalog&quot;&gt;<br>    &lt;TargetEndpoint&gt;catalog-target-endpoint&lt;/TargetEndpoint&gt;<br>    &lt;Condition&gt;proxy.pathsuffix MatchesPath &quot;/catalog&quot;&lt;/Condition&gt;<br>&lt;/RouteRule&gt;<br>&lt;RouteRule name=&quot;user&quot;&gt;<br>    &lt;TargetEndpoint&gt;user-target-endpoint&lt;/TargetEndpoint&gt;<br>    &lt;Condition&gt;proxy.pathsuffix MatchesPath &quot;/user&quot;&lt;/Condition&gt;<br>&lt;/RouteRule&gt;<br>&lt;RouteRule name=&quot;default&quot;&gt;<br>    &lt;TargetEndpoint&gt;default&lt;/TargetEndpoint&gt;<br>&lt;/RouteRule&gt;</pre><p>Notice that the default route rule is defined last and doesn’t have any conditions. 
This convention should be followed during proxy development.</p><blockquote>Route rules are only one way to handle routing; custom code or Apigee policies can also edit the target URL directly, which can likewise be used for routing.</blockquote><p>This is how routing works in and out of Apigee. I hope this article provides some clarity on the traffic flow to and from Apigee; any feedback would be appreciated.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=409aa544aea3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/google-cloud/how-routing-works-in-apigee-x-409aa544aea3">How routing works in Apigee X</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>