<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Grant Birkinbine on Medium]]></title>
        <description><![CDATA[Stories by Grant Birkinbine on Medium]]></description>
        <link>https://medium.com/@birki?source=rss-7b6976573a9a------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*KKtxrtv241-O_P14.jpg</url>
            <title>Stories by Grant Birkinbine on Medium</title>
            <link>https://medium.com/@birki?source=rss-7b6976573a9a------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 24 Apr 2026 15:18:15 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@birki/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The History of .com]]></title>
            <link>https://birki.medium.com/the-history-of-com-48e33eee8bb8?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/48e33eee8bb8</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[web]]></category>
            <category><![CDATA[curiosity]]></category>
            <category><![CDATA[domain-names]]></category>
            <category><![CDATA[internet]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Sun, 13 Aug 2023 18:42:06 GMT</pubDate>
            <atom:updated>2023-08-13T18:43:47.163Z</atom:updated>
            <content:encoded><![CDATA[<p>The unique history of x.com and the many sites that have occupied its space</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*4Jk3Ey5U7mDsuCzc.png" /><figcaption>x.com in 1997</figcaption></figure><h3><strong>Intro 💡</strong></h3><p>It has been impossible to avoid all the chaos surrounding Twitter over the last year. From the acquisition of the company by an eccentric billionaire, to mass layoffs, to re-branding the entire company to the letter “𝕏”… it has been a wild ride for the social media giant that once was a little blue bird.</p><p>When I first heard that <a href="https://en.wikipedia.org/wiki/Elon_Musk">Elon Musk</a> had decided to re-brand Twitter to “𝕏”, I thought it was purely a joke, but a notification from the <a href="https://www.nytimes.com/2023/07/24/technology/twitter-x-elon-musk.html?smid=url-share">New York Times</a> on my phone proved otherwise.</p><p>The very first thought that came to my mind was “<em>How on Earth are they ever going to acquire the x.com domain?</em>”. Surely this single-letter domain is either already owned by another tech giant or restricted by the <a href="https://en.wikipedia.org/wiki/ICANN">Internet Corporation for Assigned Names and Numbers</a>, right? Well, navigating to <a href="https://x.com">x.com</a> will show you that it is currently redirecting to Twitter (as of August 13th, 2023). This is where my adventure down the x.com rabbit hole began. 🐰</p><p>The majority of screenshots and source information in the rest of this article come from the <a href="https://web.archive.org/">Internet Archive&#39;s WayBack Machine</a>. The Internet Archive is a non-profit organization that has been archiving the internet since 1996.
It is a great resource for looking at the history of websites and how they have changed over time.</p><blockquote><em>The Internet Archive is an American digital library founded on May 10, 1996, and chaired by free information advocate Brewster Kahle. It provides free access to collections of digitized materials like websites, software applications, music, audiovisual and print materials. — </em><a href="https://en.wikipedia.org/wiki/Internet_Archive"><em>source</em></a></blockquote><h3><strong>The History of x.com 📜</strong></h3><p>Jumping right into the history of x.com, we can see that it was first archived in 1996. Let’s walk through the wild history of this domain to see all the different sites that have occupied its space.</p><h4><strong>1996</strong></h4><p>⭐ <strong>x.com is a personal site</strong></p><p>The very first archived version of x.com shows that it was possibly owned by someone named “Dave” and that it was under construction.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*E2_r4eC9HpyYkt-C.png" /><figcaption>x.com in 1996 — <a href="https://web.archive.org/web/19961219022100/http://x.com/">source</a></figcaption></figure><h4>1997</h4><p>In 1997, it looks like x.com was still under the ownership of “Dave”, but this year he added a third color to the site… green! It also features a red sphere displaying the x.com domain. “Dave” was probably quite aware at the time just how unique the domain name was to own, and he likely had no idea that his page would be archived for the next 27 years and one day be the home of Twitter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*L5BnT0tMMJ9RybEG.png" /><figcaption><em>x.com in 1997 — </em><a href="https://web.archive.org/web/19970411224438/http://x.com/"><em>source</em></a></figcaption></figure><h4>1998</h4><p>At this point, a new owner is using x.com as their personal site.
Perhaps “Dave” and “Rob” were friends, since “Dave” was mentioned in the previous version of the site. Either way, “Rob” is now the owner of x.com and quite possibly got a nice payout for being able to sell the valuable domain name in the following year.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wXNH7aNiqIPBVfwN.png" /><figcaption><em>x.com in 1998 — </em><a href="https://web.archive.org/web/19981205063857/http://www.x.com/"><em>source</em></a></figcaption></figure><h4>1999</h4><p>⭐ <strong>x.com is an online banking service</strong></p><p>In 1999, an online bank by the name of <strong>x.com</strong> was founded by Elon Musk, Harris Fricker, Christopher Payne, and Ed Ho in Palo Alto, California (<a href="https://en.wikipedia.org/wiki/X.com_(bank)">source</a>). The bank needed a domain name, so Elon Musk allegedly paid $1 million to purchase it (presumably from “Rob”) (<a href="https://fortune.com/2023/07/26/elon-musk-second-try-twitter-x-dot-com-paypal/">source</a>).</p><p>During 1999, the site displayed an “Under Construction” message without any details.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Wpy1h48s7NYZEKYa.png" /><figcaption><em>x.com in 1999 — </em><a href="https://web.archive.org/web/19990429170509/http://www.x.com/"><em>source</em></a></figcaption></figure><h4>Early 2000</h4><p>x.com was up and running in the first half of 2000. Navigating to x.com, users would be presented with a login/sign-up screen that sharply aligns with the design trends of the dotcom bubble era.
Looking over on the right side of the page, you can see a bit of foreshadowing for the future of x.com… PayPal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Zw64fnCBUECSO6en.png" /><figcaption><em>x.com in early 2000 — </em><a href="https://web.archive.org/web/20000301000000*/http://www.x.com/"><em>source</em></a></figcaption></figure><h4>Late 2000</h4><p>⭐ <strong>x.com is PayPal</strong></p><p>In the latter half of 2000, x.com and Confinity merged to form PayPal, and so x.com became PayPal (<a href="https://en.wikipedia.org/wiki/X.com_(bank)">source</a>).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*xjNMMuAl2oFziLK6.png" /><figcaption><em>x.com in late 2000 — </em><a href="https://web.archive.org/web/20001019043926/http://www.x.com/"><em>source</em></a></figcaption></figure><h4>2012</h4><p>⭐ <strong>x.com is a landing page for eBay and PayPal products</strong></p><blockquote>PayPal was acquired by eBay in 2002</blockquote><p>Flash forward to the year 2012, and x.com is now a landing page for eBay and PayPal products. eBay acquired PayPal in 2002, and between 2002 and 2012, x.com was mainly used as a fancy link to promote various services.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2erWZGKGxeq31Rvx.png" /><figcaption><em>x.com in 2012 — </em><a href="https://web.archive.org/web/20120629033626/https://www.x.com/"><em>source</em></a></figcaption></figure><h3>2017</h3><p>⭐ <strong>x.com is a blank site with the letter “x”</strong></p><p>Between 2012 and 2017, x.com simply redirected to ebayinc.com.
This all ended when Elon Musk decided to purchase the domain name back from PayPal/eBay because “it has great sentimental value” (<a href="https://en.wikipedia.org/wiki/X.com_(bank)">source</a>).</p><p>x.com stayed in this state until 2023, with just a short break in 2018.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/938/0*abzejVSoqdi0eHu1.png" /><figcaption><em>x.com in 2017 — </em><a href="https://web.archive.org/web/20170731220848/http://x.com/"><em>source</em></a></figcaption></figure><h3>2018</h3><p>⭐ <strong>x.com is used to promote hat sales</strong></p><p>For a brief period in 2018, x.com was used to promote hat sales for <a href="https://en.wikipedia.org/wiki/The_Boring_Company">The Boring Company</a> by redirecting to the boringcompany.com/hat page (done by Elon Musk, the owner of the domain).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wpb4dD2IiWFqPm-M.png" /><figcaption><em>x.com in 2018 — </em><a href="https://web.archive.org/web/20180228174126/http://x.com/"><em>source</em></a><em> — </em><a href="https://web.archive.org/web/20180428210848/https://www.boringcompany.com/hat"><em>redirect to hat sale</em></a></figcaption></figure><h4>2023</h4><p>⭐ <strong>𝕏.com is Twitter</strong> (well, almost)</p><p>As of August 1st, 2023, x.com is now redirecting to Twitter as part of their re-branding effort to the letter “𝕏” (<a href="https://en.wikipedia.org/wiki/X.com_(bank)">source</a>).</p><p>In the near future, this will likely “flip around” and x.com will be the main domain for Twitter while twitter.com will redirect to x.com.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*mnaI0pZfGBfZKHAt.png" /><figcaption><em>x.com in 2023 (August 1st) — </em><a href="https://web.archive.org/web/20230801001017/http://x.com/"><em>redirect to twitter</em></a></figcaption></figure><h3>IRL Example 🏭</h3><p>When thinking about how a “domain” can be the home to many different
sites, projects, and companies, I keep thinking about how this is similar to the way humans occupy buildings in different ways over time. Only about a mile walk from me (writing this in London) is the Battersea Power Station. This is a perfect <a href="https://en.wikipedia.org/wiki/Real_life#:~:text=The%20initialism%20%22RL%22%20stands%20for,due%20to%20%22RL%20problems%22.">IRL</a> (real-life) example of how a place can serve so many different purposes over the years.</p><p>Battersea Power Station was built in the 1930s and at its peak, it was producing a fifth of London’s power (<a href="https://batterseapowerstation.co.uk/about/heritage-history/">source</a>). It was decommissioned in 1983 and has since been used as a filming location for many movies and TV shows. It has now been completely redeveloped to be a mixed-use development containing apartments, cafes, restaurants, tons of shops, and even a hotel.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BG-KJBW4iYG4gYot.jpg" /></figure><h3>Conclusion 🏁</h3><p>The history of x.com is a wild one. From a personal site, to an online bank, to PayPal, to eBay services, to a hat sale, and finally to Twitter… x.com has been a home for many different sites over the years.</p><p>If you found this article interesting, please consider giving me a follow, thanks!</p><p>This article was originally written and published by Grant Birkinbine (me!) at the following URL: <a href="https://blog.birki.io/posts/x-dot-com/">https://blog.birki.io/posts/x-dot-com/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=48e33eee8bb8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Branch Deployments With IssueOps and GitHub Actions]]></title>
            <link>https://medium.com/better-programming/branch-deployments-with-issueops-and-github-actions-d9405311ad8b?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/d9405311ad8b</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[github]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Thu, 12 May 2022 23:09:39 GMT</pubDate>
            <atom:updated>2023-08-13T19:02:13.152Z</atom:updated>
            <content:encoded><![CDATA[<h4>Take your deployment practices to the stars with IssueOps and GitHub Actions</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/770/1*22TeVdsB1UIzF57gfGh64A.png" /><figcaption>IssueOps + Branch + Deploy</figcaption></figure><h3>Intro</h3><p>The most common way developers deploy their changes to production is the merge → deploy model. However, there is a significantly better way to ship our code to production. Rather than smash the merge button, cross our fingers, and hold onto our butts... we can hit merge with confidence that our change works exactly as we expect it to!</p><p><strong>Introducing… the branch deploy model!</strong></p><blockquote>If you have already heard of the branch deploy model and are familiar with it, you can skip ahead to get to the part where we implement it with GitHub Actions</blockquote><p>To really understand the branch deploy model, let’s first take a look at a traditional <strong>merge → deploy</strong> model. It goes like this:</p><ol><li>Create a branch</li><li>Add commits to your branch</li><li>Open a pull request</li><li>Gather feedback + peer reviews</li><li>Merge your branch</li><li>A deployment starts from the main / master branch</li></ol><p>Now let’s take a look at the <strong>branch deploy</strong> model:</p><ol><li>Create a branch</li><li>Add commits to your branch</li><li>Open a pull request</li><li>Gather feedback + peer reviews</li><li>Deploy your change</li><li>Validate</li><li>Merge your branch</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/832/1*8yai1iwnFmfhbsGw4TqE5w.png" /><figcaption>Merge Deploy Model</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/814/1*xQLngnrSz4ZgEUM5QL77qQ.png" /><figcaption>Branch Deploy Model</figcaption></figure><p>As you can see, the merge deploy model is inherently riskier because the main branch is never truly a stable branch.
If a deployment fails, or we need to roll back, we follow the entire process again to roll back our changes. However, in the branch deploy model, the main branch is always in a “good” state and we can deploy it at any time to revert a branch deployment. In the branch deploy model, we only merge our changes into the main branch once they have been successfully deployed and validated.</p><blockquote>Note: This is sometimes referred to as the <a href="https://docs.github.com/en/get-started/quickstart/github-flow">GitHub Flow</a></blockquote><h3>Key Concepts</h3><p>Key Concepts of the <strong>branch deploy</strong> model:</p><ul><li>The main branch is always considered to be a stable and deployable branch</li><li>All changes are deployed to production before they are merged to the main branch</li><li>To roll back a branch deployment, you deploy the main branch</li></ul><p>Okay, so by now you are hopefully sold on the branch deploy methodology. But how do we implement it? Introducing… IssueOps!</p><h3>IssueOps</h3><p>The best way to define IssueOps is to compare it to something similar, ChatOps. You may be familiar with the concept of ChatOps already, but in case you aren’t, here is a quick definition:</p><blockquote>ChatOps is the process of interacting with a chat bot to execute commands directly in a chat platform. For example, with ChatOps you might do something like .ping example.org to check the status of a website</blockquote><p>IssueOps adopts the same mindset but through a different medium. Rather than using a chat service (Discord, Slack, etc.) to invoke commands, we use comments on a GitHub Issue or Pull Request.
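</p><p>To make this concrete, here is a rough sketch of what IssueOps comments could look like with a branch deploy setup (the trigger phrases shown here are illustrative defaults and are configurable):</p><pre>.deploy      # deploy the pull request branch<br>.deploy noop # run a &quot;no-op&quot; (preview) deployment<br>.deploy main # deploy the main branch (e.g. to roll back)</pre><p>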
<a href="https://github.com/features/actions">GitHub Actions</a> is the runtime that executes our desired logic when an IssueOps command is invoked.</p><h3>GitHub Actions!</h3><p>How does it work?</p><p>This section will go into detail about how this Action works and hopefully inspire you on ways you can leverage it in your own projects.</p><p>The full source code and further documentation can be found on <a href="https://github.com/GrantBirki/branch-deploy">GitHub</a>.</p><p>Create this file under .github/workflows/branch-deploy.yml in your GitHub repository.</p><p>Let’s walk through a GitHub Action workflow using this <a href="https://github.com/marketplace/actions/branch-deploy">Action</a> line by line:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b3484fbb7191e9cbece2ef72cc3d2d2f/href">https://medium.com/media/b3484fbb7191e9cbece2ef72cc3d2d2f/href</a></iframe><p>It is important to note that the trigger we want to run IssueOps on is the issue_comment event with the created activity type.</p><p>This means we will not run under any other contexts for this workflow. You can edit this as you wish, but it does change how this model ultimately works.</p><p>For example, issue_comment workflows only use files found on main to run.
If you use something like on: pull_request, you could open yourself up to issues, as a user could alter a file in a PR and, for example, exfiltrate your secrets.</p><p>Only using issue_comment is the suggested workflow type.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4622a595ed59c92b18f57d8d1a281ed3/href">https://medium.com/media/4622a595ed59c92b18f57d8d1a281ed3/href</a></iframe><p>These are the minimum permissions you need to run this Action.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/697c47ef86b28ba42e91baa11196696c/href">https://medium.com/media/697c47ef86b28ba42e91baa11196696c/href</a></iframe><p>Sets up your demo job, uses an Ubuntu runner, and checks out your repo - just some standard setup for a general Action.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/fbb2f519b86cc4aed0f2d2e0ed60da7f/href">https://medium.com/media/fbb2f519b86cc4aed0f2d2e0ed60da7f/href</a></iframe><blockquote>Note: It is important to set an id: for this job so we can reference its outputs in subsequent steps — You can see a full list of inputs and outputs the Action takes <a href="https://github.com/marketplace/actions/branch-deploy#inputs-%EF%B8%8F%EF%B8%8F">here</a></blockquote><p>The core of this Action takes place here. This block of code will trigger the branch deploy action to run.
It will do the following:</p><ol><li>Check the comment which invoked the workflow for the trigger: phrase (.deploy) defined here</li><li>If the trigger phrase is found, it will proceed with a deployment</li><li>It will start by reacting to your message to let you know it is running</li><li>The Action will post a comment with a link to the running Actions workflow for you to follow its progress</li><li>Deployment will be started and attached to your pull request — You’ll get a nice little yellow rocket that tells you deployment is in progress</li><li>Outputs will be exported by this job for later reference in other jobs as well</li></ol><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/47f0a2733ed1a249a4b9d79a9875a233/href">https://medium.com/media/47f0a2733ed1a249a4b9d79a9875a233/href</a></iframe><p>As seen above, we have two steps. One for a noop deploy, and one for a regular deploy. For example, the noop deploy could trigger a terraform plan and the regular deploy could be a terraform apply. These steps are conditionally gated by two variables:</p><ul><li>steps.branch-deploy.outputs.continue == &#39;true&#39; - The continue variable is only set to true when deployment should continue</li><li>steps.branch-deploy.outputs.noop == &#39;true&#39; - The noop variable is only set to true when a noop deployment should be run</li></ul><blockquote>Example: You comment .deploy noop on a pull request. A noop deployment is detected so this action outputs the noop variable to true. You also have the correct permissions to execute the IssueOps command so the action also outputs the continue variable to true. 
This will allow the &quot;fake noop deploy&quot; step seen above to run and the &quot;fake regular deploy&quot; step will be skipped</blockquote><p><strong>That’s it!</strong></p><p>If you wish to learn more about setting up this Action and all the configuration options available, you can view the Action on the GitHub Marketplace: <a href="https://github.com/marketplace/actions/branch-deploy">link</a></p><blockquote>Note: You can also find all code and a full workflow example with the link referenced above</blockquote><h3>Example 🎥</h3><p>The example below demonstrates using the <a href="https://github.com/marketplace/actions/branch-deploy"><strong>branch-deploy</strong></a> Action on a pull request</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FgmF54npqG2I%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DgmF54npqG2I&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/28e8fe3d4cedb6df0f10a4d4a9ee45da/href">https://medium.com/media/28e8fe3d4cedb6df0f10a4d4a9ee45da/href</a></iframe><h3>Conclusion</h3><p>If you are looking to enhance your DevOps experience, have better reliability in your deployments, or ship changes faster, then branch-deployments are for you!</p><p>Hopefully you now have a better understanding of why the branch-deploy model is a great option for shipping your code to production.</p><p>By using GitHub + Actions + IssueOps you can leverage the branch deploy model in any repo!</p><p>Source code: <a href="https://github.com/marketplace/actions/branch-deploy">GitHub</a></p><p>Further reading:</p><ul><li><a href="https://blog.birki.io/posts/branch-deploy/">https://blog.birki.io/posts/branch-deploy/</a></li><li><a 
href="https://github.blog/2023-02-02-enabling-branch-deployments-through-issueops-with-github-actions/">https://github.blog/2023-02-02-enabling-branch-deployments-through-issueops-with-github-actions/</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d9405311ad8b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/better-programming/branch-deployments-with-issueops-and-github-actions-d9405311ad8b">Branch Deployments With IssueOps and GitHub Actions</a> was originally published in <a href="https://betterprogramming.pub">Better Programming</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using pure Terraform to deploy a Kubernetes cluster running Kong Gateway ☸]]></title>
            <link>https://birki.medium.com/using-pure-terraform-to-deploy-a-kubernetes-cluster-running-kong-gateway-295128a3ee3c?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/295128a3ee3c</guid>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[terraform]]></category>
            <category><![CDATA[kong]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Wed, 13 Oct 2021 06:08:02 GMT</pubDate>
            <atom:updated>2021-10-13T06:08:02.068Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Create and deploy a K8s cluster for running any application with Kong as an API ingress using Terraform!</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/368/1*xlbTztlBEIVnj2FS1ba_oA.png" /><figcaption>k8s-kong-terraform</figcaption></figure><h4>Before we begin</h4><p><em>Full Disclosure</em>: This article is not sponsored (or promoted) by Kubernetes, Kong, Terraform, GitHub, or any other organization. The sole purpose of this article is to support the open-source community. 🖥️</p><h3>Intro 💡</h3><p>Have you ever had a great idea for a web app that got you so excited you just immediately started hacking away on it? And then, after a few hours, days, or months, realized that the single VM instance it is running on doesn’t scale (at all)? Additionally, you have no CI/CD pipeline, nothing is containerized, you have no load balancer, all your secrets are saved to disk, the VM config is all tweaked by hand, and backups are non-existent?</p><p>Sounds all too familiar… It’s probably a kick-ass project, but the infrastructure isn’t fun to work with and really hinders your development. That is the core reason I started the journey into k8s and why I have written this article.</p><p>The purpose of this guide is for you to create your own Kubernetes cluster using <a href="https://konghq.com/">Kong</a> as an API ingress. This will be done with minimal setup and by running a single command to provision your infrastructure. After running through the deployment you will have a working base project that you can customize and configure exactly the way you need it.</p><h4>Key Concepts</h4><p>There are a few key concepts which should be understood before we begin. Below I will touch on a few points briefly about why I am using certain technologies for this project/guide:</p><p><strong>Kubernetes</strong>:</p><ul><li>What is Kubernetes?
You should check out this guide <a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/">here</a> to learn about Kubernetes if you are unfamiliar</li><li>We want to use Kubernetes to allow rolling updates, zero downtime deployments, a containerized application, and many more k8s features</li></ul><p><strong>Kong:</strong></p><ul><li>What is <a href="https://konghq.com/">Kong</a>? Kong is an API gateway which can be used as an ingress controller with Kubernetes. To read more about Kong, check out this link <a href="https://docs.konghq.com/kubernetes-ingress-controller/">here</a></li><li>We want to use Kong as our ingress controller because it is incredibly performant, scales flawlessly, and has a wide array of quality plugins. In this guide we will use the <a href="https://docs.konghq.com/hub/kong-inc/ip-restriction/">Request Control</a> and <a href="https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/guides/cert-manager/">Let’s Encrypt</a> plugins.</li><li>Kong makes it easy to add new routes, enable caching, and load-balance your applications</li></ul><p><strong>Terraform</strong></p><ul><li>What is Terraform? Terraform is an IaC (Infrastructure as Code) tool for deploying resources.
You can read more about it <a href="https://www.terraform.io/intro/index.html">here</a></li><li>We want to use Terraform to declaratively define all the resources we will be deploying for the Kubernetes cluster.</li><li>It is very important that we use a tool like Terraform so you can automate the entire deployment process into a CI/CD pipeline</li></ul><h3>What you will create ⭐</h3><ul><li>A Kubernetes Cluster running on Azure Kubernetes Service (<a href="https://azure.microsoft.com/en-us/services/kubernetes-service/#overview">AKS</a>)</li><li>A k8s ingress controller using <a href="https://konghq.com/">Kong</a></li><li>Grafana/Prometheus <a href="https://grafana.com/grafana/dashboards/7424">dashboards</a> for viewing network metrics from Kong (ready to use out of the box)</li><li>A sample <a href="https://www.nginx.com/">NGINX</a> application which serves HTTP requests (load-balanced by Kong)</li><li>A simple Flask backend that receives requests from NGINX and acts as a REST API</li><li>(optionally) Enable TLS encryption on your external facing Kong ingress for security (using <a href="https://cert-manager.io/docs/">cert-manager</a>!)</li></ul><p><strong>10,000 Foot Overview</strong></p><p>Let’s look at a <em>10,000 foot overview </em>of the infrastructure we are about to provision:</p><figure><img alt="High level architectural diagram" src="https://cdn-images-1.medium.com/max/1024/1*vcVzx-Sh5-Yk30GjKUzddw.png" /><figcaption>High Level Diagram</figcaption></figure><p>Breakdown of components seen above:</p><ul><li><strong>External Load Balancer</strong> — The “entry point” which Kong creates and binds with to allow public traffic into your k8s cluster. This is essentially just an Azure LB tweaked to work with Kong</li><li><strong>Kong Namespace</strong> — This namespace contains all the components of Kong. Plugins, routes, and deployments are all contained in here.
Kong acts as the proxy into the rest of our infrastructure for public connections</li><li><strong>Let’s Encrypt Namespace</strong> — This is an optional namespace that is used for enabling the <a href="https://docs.konghq.com/hub/kong-inc/acme/"><em>ACME </em>Kong plugin</a>. By using this plugin you will automatically provision TLS for your external ingress with Kong.</li><li><strong>Monitoring Namespace</strong> — The monitoring namespace contains all the components for Grafana and Prometheus. This can either be accessed publicly (using IP restriction) or via <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/">port forwarding tunnels</a> with kubectl. Both options are scripted for ease of use in this project. The end result of this namespace is a fancy Kong dashboard to view inbound connections for your k8s cluster</li><li><strong>Frontend Namespace</strong> — This is where the sample NGINX application lives. A Kong route is established for everything (/) to be routed to the NGINX web server.</li><li><strong>Backend Namespace</strong> — This is where the backend Flask REST API lives. There is no route configured to this application from Kong. All requests made to the API go through the NGINX server.
This is done through k8s services</li></ul><h3>Prerequisites 🚩</h3><p>You will need a few things to use this project:</p><ol><li>Fork or clone the <a href="https://github.com/GrantBirki/k8s-kong-terraform">k8s-kong-terraform</a> project!</li><li>An <a href="https://azure.microsoft.com/en-us/free/">Azure</a> account (this project uses AKS)</li><li><a href="https://github.com/tfutils/tfenv">tfenv</a> (for managing Terraform versions)</li><li><a href="https://kubernetes.io/docs/tasks/tools/">kubectl</a> (for applying K8s manifests)</li><li><a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI</a></li><li>A <a href="https://www.terraform.io/cloud">Terraform Cloud</a> account to store your TF state remotely<br>See the <a href="https://github.com/GrantBirki/k8s-kong-terraform/blob/main/docs/terraform-cloud.md">terraform-cloud</a> docs for more info (required if you are using Terraform Cloud)</li><li>An Azure Service Principal for deploying your Terraform changes — <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal">Create a Service Principal</a></li><li>Your Azure Service Principal will need owner permissions to your Azure Subscription. 
This is due to K8s needing to bind your ACR registry to your K8s cluster with pull permissions - <a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current">Assign Roles to a Service Principal</a></li><li>You will need to skim through the following files in the <a href="https://github.com/GrantBirki/k8s-kong-terraform">k8s-kong-terraform</a> repo (that you cloned or forked) and edit the lines with (CHANGE ME) comments:</li></ol><ul><li>terraform/k8s-cluster/versions.tf</li><li>terraform/k8s-cluster/variables.tf</li><li>terraform/k8s/k8s-cluster.tf</li><li>Example: Updating values with your own unique K8s cluster name and pointing to your own Terraform Cloud workspaces</li></ul><blockquote>Let’s begin!</blockquote><h3>Building the Cluster 🔨</h3><p>Let’s try and build a K8s cluster with a single command!</p><blockquote>This can take a few minutes to run… stand by, cross your fingers, and hope the magic happens. If you experience any issues, feel free to open an <a href="https://github.com/GrantBirki/k8s-kong-terraform/issues/new">issue</a></blockquote><pre>$ make build<br><br>🔨 Let&#39;s build a K8s cluster!<br>✅ tfenv is installed<br>✅ Azure CLI is installed<br>✅ kubectl is installed<br>✅ terraform/k8s-cluster/terraform.auto.tfvars.json exists<br>✅ terraform/k8s-cluster/terraform.auto.tfvars.json ...<br>✅ terraform/k8s/terraform.auto.tfvars.json exists<br>✅ terraform/k8s/terraform.auto.tfvars.json contains ...<br>🚀 Deploying &#39;terraform/k8s-cluster&#39;...<br>⛵ Configuring kubectl environment<br>🔨 Time to build K8s resources and apply their manifests...<br>✅ All manifests applied successfully<br>🦍 Kong LoadBalancer IP: 123.123.123.123<br>📊 Run &#39;script/grafana&#39; to connect to the Kong metrics dashboard<br>✨ Done! ✨</pre><p>Great! You should now have a fully running k8s cluster ready to use.
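The build output above ends by printing the Kong LoadBalancer IP, and you can also recover it later from the Kong proxy Service with kubectl. A minimal sketch of pulling the external IP out of that output — the service name kong-proxy and the sample line are assumptions, hard-coded here so the parsing runs anywhere:

```shell
# A line like this is what `kubectl get svc kong-proxy` would print for the
# Kong proxy Service (captured as a literal; your names and IPs will differ):
svc_line='kong-proxy   LoadBalancer   10.0.123.45   123.123.123.123   80:30080/TCP,443:30443/TCP   5m'

# Column 4 holds the EXTERNAL-IP of the LoadBalancer
external_ip=$(echo "$svc_line" | awk '{print $4}')
echo "$external_ip"
```

In practice you would parse the real kubectl output, or skip the parsing entirely with kubectl's jsonpath output, e.g. `kubectl get svc kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`.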
Let’s test it out.</p><p>Either grab the IP above from Kong LoadBalancer IP: &lt;IP_here&gt; or grab it from the Services and Ingresses section in your Azure account (under your AKS cluster). Throw that IP into your web browser and you should be presented with a screen something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/604/1*7t_R-VSgcHnKlvfpXYN4Aw.png" /><figcaption>NGINX / Frontend</figcaption></figure><p>Clicking the Query Flask Backend button will generate an API call to the Backend Service which will return some information for you about the container and the environment’s context.</p><p>Success! We have our cluster up and running, Kong works, our frontend service receives requests, and our backend service can respond to API calls. Now that it is all up and working, let’s walk through the make build command a bit, because it just did a lot.</p><p><strong>The make build command explained</strong></p><p>Here is a quick summary in order of operations of what the make build command just did:</p><blockquote>Note: The best way to understand what this command just did is to look at the source code (as always). If you don’t want to do that, there is a summary just below.</blockquote><p>make build summary:</p><ol><li>Starts the script/build bash script</li><li>Checks to make sure all necessary dependencies are installed (az cli, tfenv, etc)</li><li>Checks to ensure the proper terraform.auto.tfvars.json files have been created and do not contain the example credentials</li><li>Enters the terraform/k8s-cluster directory to init and apply all the Terraform resources for the base k8s-cluster infrastructure. (Think the core cluster, ACR registry, etc)</li><li>Configures your local kubectl environment to use the new cluster we just created.
Necessary values are grabbed from the Terraform state of the k8s-cluster that was just created for authentication and stored in ~/.kube/config as usual.</li><li>Invokes the script/build-and-push-azure script to build the frontend and backend images and push them to the ACR registry created in step 4.</li><li>Enters the terraform/k8s directory to init and apply all the Terraform resources for the services and workloads which will be running on our k8s cluster (think the frontend and backend services).</li><li>Completes the script and displays a status message with the Kong LoadBalancer IP and some ✨</li></ol><h3>Grafana / Prometheus Dashboards 📊</h3><p>If you want to view some request data about how much ingress traffic Kong is receiving for your shiny new web application, all you need to do is run a single command:</p><pre>script/grafana</pre><p>This will create a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/">port forward tunnel</a> with kubectl (which is encrypted) directly to your k8s cluster. The script/grafana command will give you a link to click and your credentials for accessing Grafana.</p><p>Once you log in, you may have to click around a bit to find your pre-made Kong dashboard. Example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*59gTfpU4lqUHhGplR56qqw.png" /><figcaption>Pre-made Kong dashboard</figcaption></figure><blockquote>Note: port forwarding should not be used in production systems to access Grafana</blockquote><p>This concludes the basic setup of the cluster! We can extend our stack further by using TLS or GitHub Actions as a CI/CD pipeline for deployments. Those topics are more <em>experimental </em>and I have not polished them as much, but they are included below for your use.</p><h3>Enabling TLS 🔒</h3><p>This is a bonus / experimental section.
It “works on my machine ™” but it will take a smidge of manual setup, knowledge of Let’s Encrypt, DNS, etc.</p><p>What you need first (pre-reqs):</p><ul><li>A domain name (<a href="http://www.example.com/">www.example.com</a>)</li><li>A way to configure DNS records for your domain (route53, AzureDNS, etc)</li><li>A working K8s cluster that has been built with make build (above) - Copy down your Kong Proxy IP</li></ul><h4>Steps</h4><p>These are a mix of steps and an outline of the make enable-tls helper script.</p><ol><li>Execute the following command: make enable-tls<br>This will invoke a bash script which will swap around some files, prompt you for some input, and inject said input into K8s manifests via sed. It is recommended to say yes (y) to everything and enter the information requested (make sure to read all the prompts!)</li><li>When prompted, create DNS records that point to your K8s cluster. You will need an A record that points to your Kong LoadBalancer ingress and a CNAME that maps to the A record at a minimum (more details presented from the script)</li><li>When prompted, edit each listed K8s manifest file to your liking. This part requires a bit of K8s knowledge (not much though) about what you need to use and where. Each manifest file is commented to help you along!</li><li>The end of the script will run a full deployment of the cluster</li><li>It will take a few minutes for everything to settle and for your TLS certificates to be provisioned. Happy encryption!</li></ol><p>I will not begin to explain how this all works under the hood. It took me a bit of time to grasp. The ultra short version is as follows:</p><p>A Kong plugin for Let’s Encrypt is enabled and automatically requests and renews certificates for the domains you provided.
It does this via an <a href="https://letsencrypt.org/docs/challenge-types/#http-01-challenge">HTTP challenge</a> with your Kong public load balancer and the DNS records which you pointed to your Kong ingress IP.</p><p>If you are inclined to learn more, here are some good resources:</p><ul><li><a href="https://podtail.com/en/podcast/kubernetes-podcast-from-google/cert-manager-with-james-munnelly/">Kubernetes cert-manager podcast</a></li><li><a href="https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/guides/cert-manager/">Official cert-manager documentation</a></li></ul><h3>GitHub Action for CI/CD 🚀</h3><blockquote>This is another bonus / experimental section</blockquote><p>Once your stack is fully up and running, you can use <a href="https://github.com/features/actions">GitHub Actions</a> to deploy changes to your cluster. I have documented the <a href="https://docs.github.com/en/actions/security-guides/encrypted-secrets">repository secrets</a> you will need to add in the project repo <a href="https://github.com/GrantBirki/k8s-kong-terraform/blob/main/docs/github-actions.md">here</a> (along with the rest of the docs). You can then use the .github/workflows/deployment.yml file in the repo to run your CI/CD pipeline.</p><p>There are some other workflows baked in as well for your use, like <a href="https://tfsec.dev/">tfsec</a>, <a href="https://github.com/reviewdog/action-misspell">misspell</a>, and <a href="https://github.com/GrantBirki/k8s-kong-terraform/blob/main/.github/workflows/review.yml#L21">first-interaction</a>.</p><h3>Conclusion 🏁</h3><p>This write-up is essentially a brain dump of my learnings from building a Kubernetes stack from the ground up. I wanted to use pure Terraform to manage the state of the project, have TLS for the public ingress, and a simple application that will be easy to switch out and build upon for the backend.
I certainly learned a lot along the way and hope if you are reading this you did too (even better if you now have your own k8s cluster to deploy applications upon)!</p><p><strong>GitHub Repository: </strong><a href="https://github.com/GrantBirki/k8s-kong-terraform"><strong>k8s-kong-terraform</strong></a></p><h4>Contributing 👩‍💻</h4><p>All are welcome to contribute to this project! If you have a suggestion feel free to fork and open a pull request :)</p><h4>k8s-discord 🟣</h4><p>I have another <em>sister project</em> called <a href="https://github.com/GrantBirki/k8s-discord">k8s-discord</a> which takes a similar approach and deploys a Kubernetes cluster for building Discord slash command applications. It is not as polished as this project but if that interests you feel free to check it out as well.</p><blockquote>Thanks for reading!</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=295128a3ee3c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fastly-Tempo : A Real-Time Data Pipeline for Fastly’s CDN]]></title>
            <link>https://birki.medium.com/fastly-tempo-a-real-time-data-pipeline-for-fastlys-cdn-f1d5831a169c?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/f1d5831a169c</guid>
            <category><![CDATA[metrics]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[fastly]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Sat, 10 Apr 2021 18:45:04 GMT</pubDate>
            <atom:updated>2021-04-11T16:16:04.426Z</atom:updated>
            <content:encoded><![CDATA[<h3>Fastly-Tempo 🚀: A Real-Time Data Pipeline for Fastly’s CDN</h3><blockquote>Monitor, Alert, and Display all your Fastly metrics in Real-Time!</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3fwOCzzB7RGII2Q_1q7oxQ.png" /><figcaption>Dashboard Created with Fastly-Tempo</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JZQ-lwnkqIQzEWr73d-eLw.png" /><figcaption>Running Fastly-Tempo Container</figcaption></figure><p><em>Full Disclosure</em>: This article is not sponsored (or promoted) by Fastly, New Relic, or any other organization. The sole purpose of this article is to support the open-source community.</p><h3>About 💡</h3><p><a href="https://github.com/GrantBirki/fastly-tempo">Fastly-Tempo</a> is an open-source project that allows you to stream real-time aggregated metrics from Fastly into New Relic and other monitoring services.</p><p>This is based on the <a href="https://github.com/newrelic/fastly-to-insights">New Relic blessed way</a> to get your <a href="https://www.fastly.com/">Fastly</a> metrics into <a href="https://newrelic.com/products/insights">New Relic Insights</a>, packaged as a Docker container image for ease of use!</p><blockquote>This project is open source and hosted on <a href="https://github.com/GrantBirki/fastly-tempo">GitHub</a>.</blockquote><h3>Getting Started 💻</h3><p>Getting started with the Fastly-Tempo project is incredibly easy!
In fact, it can be used by running a <strong>single command</strong>.</p><p>Before you get started, make sure that you have a <a href="https://docs.fastly.com/guides/account-management-and-security/using-api-tokens">Fastly API Key</a> and a <a href="https://docs.newrelic.com/docs/insights/insights-data-sources/custom-data/insert-custom-events-insights-api#register">New Relic Insert Key</a>.</p><p>Then run the command below:</p><pre>docker run \<br>  -e ACCOUNT_ID=&#39;yourNewRelicAccountId&#39; \<br>  -e FASTLY_KEY=&#39;yourFastlyKey&#39; \<br>  -e INSERT_KEY=&#39;yourNewRelicInsertKey&#39; \<br>  -e SERVICES=&#39;ServiceId1 ServiceId2 ...&#39; \<br>  grantbirki/fastly-tempo:latest</pre><blockquote>Grab the image from <a href="https://hub.docker.com/repository/docker/grantbirki/fastly-tempo">DockerHub</a> 🐳</blockquote><p>Boom! You should now have metrics from all your specified Fastly services streaming in real-time into New Relic.</p><p>The next step is to create some dashboards to visualize this data.</p><h3>Dashboard Creation 🗺️</h3><p>To start visualizing your data, you will need to create a dashboard.</p><p>I have created a pre-made <a href="https://github.com/GrantBirki/fastly-tempo/blob/main/assets/dashboards/new_relic.json">template</a> which can be imported through a simple copy &amp; paste in the New Relic UI. For more instructions on importing a dashboard through JSON, see New Relic’s <a href="https://docs.newrelic.com/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards/#get-started">documentation</a>.</p><blockquote>Tip: Make sure to “find and replace” “accountId”: 1234567 in the JSON template with your own New Relic accountId.
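That find-and-replace can also be scripted; a sketch using sed, where the one-line JSON below is an illustrative stand-in for the downloaded template and 7654321 stands in for your real New Relic accountId:

```shell
# Illustrative stand-in for the downloaded new_relic.json dashboard template
echo '{"accountId": 1234567, "title": "Fastly Tempo"}' > new_relic.json

# Swap the placeholder accountId for your own (7654321 is hypothetical)
sed -i 's/"accountId": 1234567/"accountId": 7654321/' new_relic.json

cat new_relic.json
```

Note that `sed -i` with no argument is GNU sed syntax; on macOS/BSD sed you would write `sed -i ''` instead.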
Also make sure to set your dashboard permissions appropriately.</blockquote><p><strong>Steps</strong>:</p><ol><li>Save the JSON dashboard template available <a href="https://github.com/GrantBirki/fastly-tempo/blob/main/assets/dashboards/new_relic.json">here</a></li><li>“Find and Replace” “accountId”: 1234567 with your own New Relic accountId</li><li>Import the dashboard to your account through the New Relic UI — <a href="https://docs.newrelic.com/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards/#get-started">docs</a></li></ol><p>Congrats! You should now have some pretty slick dashboards to visualize your Fastly metrics in real-time.</p><h4>Dashboard Examples:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SlkDM52t1lE5ffwX3vHuvg.png" /><figcaption>3xx and 4xx Dashboard</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0sXNqTqCAvM7ccXFdjllVw.png" /><figcaption>Cache Hit Dashboard</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/840/1*KdtMiwIBzFOjjNba7y0Njg.png" /><figcaption>2xx Dashboard</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hl-jQyTLCX1V6CDGBsYkcw.png" /><figcaption>Total Requests Dashboard</figcaption></figure><h3>Exported Values 🧮</h3><p>There are over 100 values exported from the Fastly-Tempo project for you to create dashboards on, visualize, and set alerts for.
Here is the complete list of usable values:</p><pre>service<br>num_requests<br>num_tls<br>num_http2<br>num_logs<br>num_pci<br>num_video<br>ipv6<br>pipe<br>uncacheable<br>shield<br>shield_resp_header_bytes<br>shield_resp_body_bytes<br>otfp<br>otfp_shield_time<br>otfp_deliver_time<br>otfp_manifests<br>otfp_shield_resp_header_bytes<br>otfp_shield_resp_body_bytes<br>otfp_resp_header_bytes<br>otfp_resp_body_bytes<br>bandwidth<br>resp_header_bytes<br>header_size<br>resp_body_bytes<br>body_size<br>req_body_bytes<br>req_header_bytes<br>bereq_header_bytes<br>bereq_body_bytes<br>billed_header_bytes<br>billed_body_bytes<br>status_2xx<br>status_3xx<br>status_4xx<br>status_5xx<br>status_200<br>status_204<br>status_301<br>status_304<br>status_400<br>status_401<br>status_403<br>status_404<br>status_500<br>status_501<br>status_502<br>status_503<br>status_504<br>status_505<br>status_1xx<br>waf_logged<br>waf_blocked<br>waf_passed<br>attack_req_body_bytes<br>attack_req_header_bytes<br>attack_logged_req_body_bytes<br>attack_logged_req_header_bytes<br>attack_blocked_req_body_bytes<br>attack_blocked_req_header_bytes<br>attack_passed_req_body_bytes<br>attack_passed_req_header_bytes<br>attack_resp_synth_bytes<br>hits<br>hit_ratio<br>miss<br>pass<br>pass_time<br>synth<br>errors<br>restarts<br>hits_time<br>miss_time<br>tls<br>tls_v10<br>tls_v11<br>tls_v12<br>tls_v13<br>imgopto<br>imgopto_resp_body_bytes<br>imgopto_resp_header_bytes<br>imgopto_shield_resp_body_bytes<br>imgopto_shield_resp_header_bytes<br>object_size_1k<br>object_size_10k<br>object_size_100k<br>object_size_1m<br>object_size_10m<br>object_size_100m<br>object_size_1g<br>recv_sub_time<br>recv_sub_count<br>hash_sub_time<br>hash_sub_count<br>deliver_sub_time<br>deliver_sub_count<br>hit_sub_time<br>hit_sub_count<br>prehash_sub_time<br>prehash_sub_count<br>predeliver_sub_time<br>predeliver_sub_count</pre><h3>Writing Custom Queries (NRQL)🔎</h3><p>You can use any of the values above with <a 
href="https://docs.newrelic.com/docs/query-your-data/nrql-new-relic-query-language/get-started/introduction-nrql-new-relics-query-language/">New Relic’s Query Language</a> to create your own visualizations and reports.</p><p>Here are several examples using NRQL to create a few sample visualizations:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/534680efabd618a7baeba7c91e70910d/href">https://medium.com/media/534680efabd618a7baeba7c91e70910d/href</a></iframe><h3>Further Documentation 📚</h3><p>For further documentation on how this project works, building your own images, and enabling more features, check out the <a href="https://github.com/GrantBirki/fastly-tempo">GitHub repo</a>!</p><h3>Contributing 👩‍💻</h3><p>If you like this project and want to support its development, you are free to do so! There are plans to expand Fastly Tempo to additional backends such as Graphite, Splunk, and Datadog.</p><h3>JavaScript version 🔗</h3><p>This project was forked from the Fastly-to-Insights project and developed in Python. Check out the <a href="https://github.com/newrelic/fastly-to-insights">original version</a> written in JavaScript by New Relic engineers.</p><h3>Conclusion ⭐️</h3><p>CDNs are generally Tier 0 services that need robust and continuous monitoring. Fastly provides great real-time metrics, available via their API for consumption. Using Fastly-Tempo, you can build out a centralized dashboard for real-time alerts, monitoring, and service visualization for Fastly using their API. If you followed along, you should now have a data pipeline to visualize all your Fastly services and monitor their performance. Enjoy! 🎉</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f1d5831a169c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using Fastly with Terraform, Automation, and CICD]]></title>
            <link>https://birki.medium.com/using-fastly-with-terraform-automation-and-cicd-c29356cda2a6?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/c29356cda2a6</guid>
            <category><![CDATA[fastly]]></category>
            <category><![CDATA[code]]></category>
            <category><![CDATA[cicd]]></category>
            <category><![CDATA[terraform]]></category>
            <category><![CDATA[automation]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Tue, 23 Mar 2021 17:26:42 GMT</pubDate>
            <atom:updated>2021-03-23T17:26:42.874Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Building a continuous edge delivery pipeline for any organization, small or large.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*IGvQp1N6YTIJWBuQ" /></figure><h4>Before we begin</h4><p><em>Full Disclosure</em>: This article is not sponsored (or promoted) by Fastly, Terraform, GitLab or any other organization. The sole purpose of this article is to support the open-source community. 🖥️</p><h3>Intro 💡</h3><p>Over the past year, many organizations have gone through a transitional phase to adopt faster, smarter, and more developer-friendly technologies. For many companies, this involved migrating their entire CDN (Content Delivery Network) to <a href="https://www.fastly.com/">Fastly</a>. For these organizations it can also be the prime opportunity to adopt <a href="https://en.wikipedia.org/wiki/Infrastructure_as_code">Infrastructure as Code</a> (IaC) methodologies and a robust pipeline for <a href="https://en.wikipedia.org/wiki/Continuous_delivery">Continuous Delivery</a>.</p><h4>Key Terms:</h4><ul><li><strong>CDN:</strong> A “Content Delivery Network” (CDN) is a geographically distributed network of servers to deliver web content to users. <em>Think images, html, JavaScript, and API responses</em>. <a href="https://www.fastly.com/"><strong>Fastly</strong></a> is the CDN used in this article.</li><li><strong>IaC: </strong>“Infrastructure as Code” is the process of managing and provisioning infrastructure through machine-readable definition files, rather than physical configuration or interactive configuration tools. We will use <a href="https://www.terraform.io/"><strong>Terraform</strong></a> as our tool for IaC in this article.</li><li><strong>CI/CD:</strong> “Continuous Integration and Continuous Delivery” is an engineering principle for the building, testing, and deployment of applications.
We will be using <a href="https://docs.gitlab.com/ee/ci/"><strong>GitLab CI</strong></a> in this article.</li></ul><h4>The Open Source <a href="https://github.com/GrantBirki/fastly-framework">Fastly-Framework</a></h4><p>The entire framework for this article and project can be found on <a href="https://github.com/GrantBirki/fastly-framework">GitHub</a>. The source code also contains a lot of docs pages and in-line documentation for usage. Full Link: <a href="https://github.com/GrantBirki/fastly-framework">https://github.com/GrantBirki/fastly-framework</a></p><h4>Benefits</h4><p>There are many benefits to using these three technologies together — here are just a few:</p><ul><li>Using Git as a version control system for all Fastly changes</li><li>Eliminate duplicated code through shared VCL files, Snippets, and Terraform configuration blocks</li><li>Test your services through a CI/CD pipeline before deploying them</li><li>Integrate with ChatOps for deployments (Example: Slack)</li><li>Quickly create new services from templates with make service - Using Jinja and Python</li><li>Adopt <em>Infrastructure as Code</em> methodologies with Terraform</li><li>Promote a <em>peer-review culture</em> through merge/pull requests</li><li>Create your own pipeline stages for robust testing, alerts, approval, and much more</li></ul><h4>Let’s dive in!</h4><h3>Prerequisites 📝</h3><p>Here are the prerequisites you will need to follow along with this article:</p><ul><li>A <a href="https://www.fastly.com/signup/">Fastly</a> account — Free!</li><li>A <a href="https://gitlab.com/users/sign_up">GitLab</a> account — Free!</li><li>An <a href="https://portal.aws.amazon.com/billing/signup#/start">AWS</a> account if you wish to use Terraform Remote State —<a href="https://aws.amazon.com/free/"> Free tier</a> eligible</li><li>Your own <a href="https://domains.google/">domain</a> — Replace all occurrences of example.com in this guide and the framework repo with your domain.</li></ul><h3>Fastly ⏰</h3><p>I
may be a little biased and have only had the opportunity to work on a couple of CDNs but I must say, Fastly is awesome. Don’t just take it from me, here are other companies that use Fastly:</p><blockquote>GitHub, Imgur, Reddit, Stripe, New Relic, The New York Times, Kickstarter, Yelp, Shopify, BuzzFeed, Kayak, USA Today, The Guardian, and <a href="https://www.fastly.com/customers/">many more</a></blockquote><p>As the name states, Fastly is <em>fast</em>, especially when it comes to deployment times. When you make a change to a service in Fastly, your changes are deployed globally in under 60 seconds. Other CDNs that are out there have ~10-minute deployment times. With Fastly, you are now able to build, test, deploy, and validate a service before other CDNs can even activate a service!</p><p>Not only is Fastly <em>fast</em> but it is also developer-centric. This means that everything you can do in the Fastly console, you can also do via the Fastly API or with <strong>Terraform.</strong></p><p>Let’s break down Fastly for understanding and then explain how we can leverage Terraform to build Fastly services:</p><ul><li>Fastly is a <strong>CDN</strong>. This means there are servers all over the world serving requests for <em>Fastly Services.</em></li><li>Fastly serves requests based on <strong>domains</strong>. We give Fastly a domain and it listens for requests to this domain: <a href="http://www.example.com">www.example.com</a></li><li>Fastly fetches content from <strong>backends</strong>. We provide Fastly with a backend (ex: S3 bucket with images) and Fastly will serve content from these backends to clients.</li><li>Fastly uses <strong>VCL</strong>.
<a href="https://varnish-cache.org/docs/2.1/tutorial/vcl.html"><em>Varnish Configuration Language</em></a> is the code we write to fine-tune how Fastly responds, caches, and processes requests to our domains and backends.</li></ul><h4>Example of a Fastly Service</h4><p>A Fastly service that listens for incoming requests all around the world to www.example.com.</p><ol><li>A <strong>request </strong>comes in to www.example.com/cookie.jpg and Fastly begins processing.</li><li>Fastly <strong>executes </strong>the service’s <strong>VCL</strong> code which we uploaded.</li><li>The <strong>VCL </strong>code states that all requests with .jpg file extensions should go to a static S3 bucket to get assets.</li><li>Fastly checks its <strong>cache </strong>for this image before requesting it from the <strong>backend</strong>. Fastly determines the image is not in its <strong>cache</strong>.</li><li>Fastly <strong>requests </strong>cookie.jpg from the S3 <strong>backend</strong>.</li><li>Fastly <strong>responds </strong>to the client with cookie.jpg</li><li>www.example.com/cookie.jpg renders in the <strong>client’s</strong> browser.</li></ol><p>To accomplish the example above, we need to build a Fastly service, define the <strong>domains </strong>to listen on, <strong>backends </strong>to fetch data from, and <strong>VCL </strong>to process requests.
Luckily with Terraform, we can define all this as code!</p><p>Now let’s check out how you can get started writing some Infrastructure as Code and build a Fastly service and configure these components <em>with Terraform</em>!</p><h3>Fastly Service with Terraform ⚙️</h3><p>The snippet below shows how you can make a simple Fastly service with Terraform:</p><pre>resource &quot;fastly_service_v1&quot; &quot;fastly-service&quot; {<br>  name            = &quot;www.example.com&quot;<br>  activate        = false<br>  version_comment = &quot;Hello World&quot;<br><br>  domain {<br>    name    = &quot;www.example.com&quot;<br>    comment = &quot;Example Domain&quot;<br>  }<br><br>  backend {<br>    name          = &quot;S3_Example&quot;<br>    address       = &quot;example.s3-website-us-west-2.amazonaws.com&quot;<br>    override_host = &quot;example.s3-website-us-west-2.amazonaws.com&quot;<br>    port          = 80<br>  }<br><br>  vcl {<br>    name    = &quot;main&quot;<br>    content = file(&quot;fastly.vcl&quot;)<br>    main    = true<br>  }<br>}</pre><p>This example would build a basic Fastly service that serves requests to www.example.com from an S3 website backend.</p><blockquote>You will need to replace all occurrences of example.com with your own domain</blockquote><p>Note: you will need to create the fastly.vcl file listed above and place it into the same directory where you are running your Terraform commands. The fastly.vcl file needs to contain the <a href="https://developer.fastly.com/learning/vcl/using/">Fastly boilerplate</a> as a starting point. Simply paste the boilerplate into your fastly.vcl file.
For reference you may view the example service folder for using <strong>Fastly </strong>+ <strong>Terraform </strong>+ the adapted <strong>VCL Boilerplate</strong> in the <strong>framework</strong> <a href="https://github.com/GrantBirki/fastly-framework/tree/main/services/www.example.com">here</a>.</p><h4>Building a Fastly Service</h4><p>Once you have Terraform installed, a valid fastly.tf file, and a fastly.vcl file (with the boilerplate) you are ready to build your service!</p><ol><li>cd into the same directory as your files listed above</li><li>Get a Fastly API key from your account page and set it as an environment variable like so: export FASTLY_API_KEY=&quot;&lt;your_key_here&gt;&quot;</li><li>Run terraform init</li><li>Run terraform plan</li><li>Run terraform apply</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JWqS0RN-0qc-rgubzfANMQ.png" /></figure><p>Check the Fastly console to see your <strong>new</strong> service!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/765/1*pxc91Z69o-nZN8jO4z8H7w.png" /><figcaption>www.example.com - Fastly Service</figcaption></figure><p>Note — If you are having any difficulties with this step please refer to the following documents as guides:</p><ul><li><a href="https://registry.terraform.io/providers/fastly/fastly/latest/docs">Fastly Terraform Provider</a></li><li><a href="https://docs.fastly.com/en/guides/getting-started">Fastly Developer Guide</a></li><li><a href="https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code">Learning Terraform</a></li><li><a href="https://github.com/GrantBirki/fastly-framework/blob/main/docs/getting-started.md">Getting Started Framework Documentation</a></li></ul><h3>Building a CI/CD Pipeline 🔨</h3><p>So far we have seen the core components of a Fastly service and how we can create one with Terraform by hand. 
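The build steps above can be collected into one small script. This is a sketch: the key value is a placeholder, and the terraform commands are left commented so the sketch is safe to run outside a configured directory.

```shell
# Placeholder key -- use the real API key from your Fastly account page
export FASTLY_API_KEY="example-key-123"

# From the directory containing fastly.tf and fastly.vcl you would then run:
#   terraform init    # download the Fastly provider
#   terraform plan    # preview the service to be created
#   terraform apply   # create the service in Fastly

echo "FASTLY_API_KEY set: ${FASTLY_API_KEY:+yes}"
```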
However, in the real world we do not want a bunch of engineers making changes by hand, with no version control, and without a process to make changes uniformly. This is where pipelines come into play.</p><h4>Intro</h4><blockquote>A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of a service. They are repeatable, automated, and reliable ways to release and deploy code.</blockquote><p>There are several big players in the space of CI/CD:</p><ul><li><a href="https://docs.gitlab.com/ee/ci/">GitLab-CI</a></li><li><a href="https://github.com/features/actions">GitHub Actions</a></li><li><a href="https://www.jenkins.io/">Jenkins</a></li><li><a href="https://circleci.com/">CircleCI</a></li></ul><p>This project is using <strong>GitLab-CI </strong>for the CI/CD pipeline. However, you can use any CI/CD platform you like and follow the <a href="https://github.com/GrantBirki/fastly-framework">Fastly-Framework</a> as a guide for what you can create.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jG2MnKTk8zErG_3ilTRP4g.png" /></figure><h4>Create Your Repository</h4><p>The first step to creating a CI/CD pipeline is to create a Git Repo where our code and configuration will live. Since we are using GitLab in this article we can create a repo there.</p><p>If you haven’t cloned the <a href="https://github.com/GrantBirki/fastly-framework">Fastly-Framework</a> yet, please do so now:<br>git clone <a href="https://github.com/GrantBirki/fastly-framework.git">https://github.com/GrantBirki/fastly-framework.git</a></p><ul><li>Create a <strong>new </strong>repo in GitLab</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/939/1*e5ckQXO8F6-RwXPxHIsoqQ.png" /></figure><ul><li>Clone your new repo <strong>locally</strong>:<br>git clone https://gitlab.example.com/&lt;username&gt;/fastly.git</li><li>Copy the Fastly-Framework contents into your new GitLab repo locally:<br>cp -r fastly-framework/. 
fastly/</li><li>Create a new branch:<br>git checkout -b &quot;initial-fastly-build&quot;</li><li>Add, stage, and commit all files:<br>git add -A &amp;&amp; git commit -m &quot;Initial Fastly Repo Commit&quot;</li><li>Push your changes into GitLab:<br>git push --set-upstream origin initial-fastly-build</li><li>Check back into GitLab. You should see your branch and have the option to create a <em>Merge Request</em> now:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/543/1*3dsHmrQjKbfy0P1x1m2ScQ.png" /><figcaption>Creating your new Merge Request</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/562/1*CVWaG4RMwHMnKB8KziHwLQ.png" /><figcaption>Submitting your new Merge Request</figcaption></figure><ul><li>View the Merge Request and the CI/CD pipeline which was automatically created:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/978/1*y1ceix5jGUlHEykFGut45A.png" /></figure><p>You will notice that the pipeline starts to run right away in the <em>Merge Request </em>above. However, it fails on the first stage. This is expected, as we have not configured the pipeline… yet.</p><h4>Configuring Pipeline Stages</h4><p>The first step to getting our pipeline to run successfully is to configure its stages.</p><blockquote>For this section we will be referencing the .gitlab-ci.yml file frequently. 
It can be found in the Fastly-Framework <a href="https://github.com/GrantBirki/fastly-framework/blob/main/.gitlab-ci.yml">here</a>.</blockquote><ul><li>Edit the .gitlab-ci.yml file to configure the <em>CI/CD Stages</em> you wish to use.</li></ul><pre>stages:  <br>  - repo-check 🗺️<br>  - plan 📝<br>  - test 🧪<br>  # - metrics build-and-push 📊 (optional)<br>  # - approval 📯 (optional)<br>  - apply ⚙️<br>  - deploy 🚀<br>  - rapid-rollback 🔄<br>  # - metrics deploy 📊 (optional)</pre><p>All the stages listed as <em>(optional)</em> above may be removed:</p><p>approval 📯 is used for ServiceNow automated change requests. More info can be found <a href="https://github.com/GrantBirki/fastly-framework/blob/main/docs/pipeline.md">here</a>.</p><p>metrics build-and-push 📊 and metrics deploy 📊 are used for aggregated metrics collection and publishing to New Relic. More info can be found <a href="https://github.com/GrantBirki/fastly-framework/tree/main/code/logs/fastly-to-insights">here</a>.</p><p>For the sake of simplicity, removing the optional stages would leave us with the following:</p><pre>stages:  <br>  - repo-check 🗺️ #(runs on merge_requests)<br>  - plan 📝 #(runs on merge_requests)<br>  - test 🧪 #(runs on merge_requests)<br>  - apply ⚙️ #(runs on master branch)<br>  - deploy 🚀 #(runs on master branch)<br>  - rapid-rollback 🔄 #(runs on master branch)</pre><p>To see what each stage does, please see the <a href="https://github.com/GrantBirki/fastly-framework/blob/main/docs/pipeline.md">Pipeline documentation</a> in the Fastly-Framework. As a helpful tip, the comments above show where each stage runs: on merge_requests or on merges to the master branch.</p><p>Note: If you delete the optional stages above, please also delete their related references later on in the .gitlab-ci.yml file. 
For example, if you are not using the approval stage, make sure to delete the block below (it should already be commented out anyway):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/572/1*HB65rwjaBshBOnhtbYt26w.png" /><figcaption>Approval Job</figcaption></figure><h4>Build our Default Pipeline Image</h4><p>Now that we have our initial stages set up, we can begin configuring the basics of our pipeline. First off, we will need a <strong>default image</strong> for our pipeline. This image will need the following dependencies:</p><ul><li>Terraform</li><li>Python</li><li>AWS CLI (if using a remote S3 backend for Terraform) — suggested</li><li>cURL</li></ul><p>To create this image, you can view the code/ci/docker <a href="https://github.com/GrantBirki/fastly-framework/tree/main/code/ci/docker">folder</a> of the Fastly-Framework for instructions. This folder also contains a Dockerfile to easily build the image.</p><p>Once you have built the image, you will need to push it up to GitLab’s container registry so the pipeline can easily access it.</p><p>Run the following commands from the code/ci/docker <a href="https://github.com/GrantBirki/fastly-framework/tree/main/code/ci/docker">folder</a>:</p><pre>docker login registry.gitlab.com<br>docker build -t registry.gitlab.com/&lt;account&gt;/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt; .<br>docker push registry.gitlab.com/&lt;account&gt;/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt;</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/845/1*tqF_wMCN6SqFCHp8lgHEsQ.png" /><figcaption>Image in GitLab Container Registry</figcaption></figure><p>The exact &lt;path&gt;/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt; to your image may differ depending on what you name it and your registry’s org structure. 
No matter what you name it you just need to ensure that the default: image: line in your .gitlab-ci.yml points to this image.</p><pre>stages:<br>  ...<br>  ..<br>  .</pre><pre>default:<br>  image:<br>    &lt;GitLab URL&gt;/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt;</pre><p>Now our pipeline will be able to access this image and it will use it as the default for all jobs and stages unless another image is specified.</p><h4>Configure Pipeline Variables + Terraform State</h4><p>The pipeline needs two types of authentication in order to run. It needs to be able to authenticate to <strong>Fastly</strong> via an API key to deploy changes, and it needs to be able to authenticate to <strong>a remote backend</strong> like AWS for Terraform state.</p><p>Both of these variables/credentials can be configured via the GitLab console. Steps for doing so in the GitLab console can be found <a href="https://docs.gitlab.com/ee/ci/variables/#create-a-custom-variable-in-the-ui">here</a>.</p><p>For <strong>Fastly</strong>, this authentication is very straightforward. Simply follow these steps to <a href="https://docs.fastly.com/en/guides/using-api-tokens">create an API token with Fastly</a>.</p><ul><li>Add the API token for <strong>Fastly </strong>to GitLab CI variables as a key:value <br>pair:<br>FASTLY_API_KEY: &lt;value&gt;</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rW8UvcdfVuTTeepEAvaDQA.png" /></figure><p>For <a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa"><strong>Terraform Remote State</strong></a> authentication, this can be done in a variety of ways and you will need to configure this on your own. A common workflow is to use <a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa">AWS as a remote state</a> for Terraform with S3 and DynamoDB. There are many ways to authenticate to AWS and a simple/common one is to use static IAM user credentials (there are safer methods but this is just an example). 
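</p>

<p>Whichever credential scheme you choose, the CI scripts ultimately just expect a handful of environment variables to exist. A tiny pre-flight check (a hypothetical helper, not part of the framework) makes a missing credential fail fast with a readable error instead of a cryptic Terraform one:</p>

```python
import os

# Variables the pipeline jobs expect; adjust the list to match your backend.
REQUIRED_VARS = ["FASTLY_API_KEY", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

<p>Calling this at the top of plan.sh, test.sh, and apply.sh (for example via a one-line python -c invocation) turns a forgotten variable into an immediate, obvious failure.</p>

<p>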
You could add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitLab CI variables. Then when the plan 📝 , test 🧪 , and apply ⚙️ stages run, credentials are automatically picked up since they are present as environment variables. To see how this works you can check out the AWS documentation <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html">here</a>.</p><blockquote>If you go with the method above, it should just work. If you use a method requiring AWS session tokens, or a service like <a href="https://www.vaultproject.io/">Vault</a>, you will need to edit the following files and add your custom logic to fetch the necessary remote state credentials.</blockquote><pre>code/ci/plan/plan.sh<br>code/ci/test/test.sh<br>code/ci/apply/apply.sh</pre><p>Using a Terraform Remote State is <strong>highly suggested</strong>. This framework assumes that you have a Terraform remote backend configured using AWS S3 and DynamoDB. To set this up, please reference the following <a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa">guide</a>.</p><p>If you do use this method, you will need to make a few additional edits:</p><blockquote><strong>Reminder</strong>: Make sure to replace example.com with your own domain in all occurrences.</blockquote><ul><li>Enter the services/ folder</li><li>Enter both folders, www.example.com and nonprod.example.com, and make the same edits to each config.tf file</li></ul><pre>terraform {<br>  backend &quot;s3&quot; {<br>    bucket         = &quot;example-terraform-state-bucket&quot; # set to your own S3 bucket name<br>    key            = &quot;fastly/services/www.example.com/terraform.tfstate&quot; # change www.example.com<br>    region         = &quot;us-west-2&quot; # put your desired region here<br>    dynamodb_table = &quot;terraform-lock&quot;<br>    encrypt        = true<br>  }<br>}<br>provider &quot;aws&quot; {<br>  region = &quot;us-west-2&quot; # put your desired region 
here<br>}</pre><ul><li>Make very similar edits to the code/ci/test/test.tf file</li></ul><pre>terraform {<br>  backend &quot;s3&quot; {<br>    bucket         = &quot;example-terraform-state-bucket&quot; # set to your own S3 bucket name<br>    key            = &quot;fastly/services/test-servicename/terraform.tfstate&quot; #ID0001 - #Do NOT change this line<br>    region         = &quot;us-west-2&quot; # put your desired region here<br>    dynamodb_table = &quot;terraform-lock&quot;<br>    encrypt        = true<br>  }<br>}<br><br>provider &quot;aws&quot; {<br>  region = &quot;us-west-2&quot; # put your desired region here<br>}</pre><blockquote>The edits you make to these files will be directly related to how you set up your Terraform Remote State in AWS.</blockquote><p>All three files you just edited should have the same bucket and region . Each file will have its own key , as that is the unique path to the state file for each Fastly service you are building with Terraform. The only oddball is the test.tf file. This is because the test 🧪 stage of the pipeline works a little differently… It works by creating an <em>ephemeral </em>Fastly service. This is so that you can validate the VCL you are uploading before you actually make changes to your own service, to avoid <em>tainting</em> your Terraform state. It also allows you to write custom tests against this <em>ephemeral</em> service. The test 🧪 stage essentially just creates your service with a unique “test” name and then instantly deletes it (unless you add custom tests to run before deletion). The <em>ephemeral</em> test service name will look something like this in Fastly for its short existence: $CI_COMMIT_SHORT_SHA-TEST&lt;domain&gt; . This is set through the test.sh file, the bash sed command against the test.tf file, and the domain block name = “${var.FastlyEnv}&lt;domain&gt;&quot; in each services/&lt;service&gt; folder.</p><p>Whew! 
We just covered a lot there… Let’s summarize the <strong>Variables + State </strong>section:</p><ul><li>Set an environment variable named FASTLY_API_KEY with your Fastly API key through the GitLab UI.</li><li>Set up credentials that the pipeline can access and authenticate with for your Terraform Remote State (Ex: AWS access/secret keys).</li><li>Edit each config.tf file for each service in your services/ folder to point to your Terraform Remote State locations.</li><li>Edit the code/ci/test/test.tf file in a very similar manner to the previous step. Follow the #comments in the file.</li></ul><h4>Trigger and Run the Pipeline</h4><p>Now that we have configured our .gitlab-ci.yml file for our pipeline, the next step is to trigger our pipeline and test it to ensure all the pieces work!</p><p>You should still have the same Merge Request/Pull Request open from when we first pushed our code up to GitLab/GitHub. If you don’t, follow the steps above to create another MR. If your MR is still open, let’s make a new commit with our changes and push it up!</p><p>On every commit to our open Merge Request, GitLab will re-run all jobs that reference the merge_requests requirements. Example:</p><pre>only:<br>  refs:<br>    - merge_requests</pre><p>However, it will only run merge_requests jobs if all other criteria are met. If you take a look at the .gitlab-ci.yml file, you will notice that we try to build two Fastly services: www.example.com and nonprod.example.com . 
Taking a look at the plan stage for www.example.com in our yml file, we can see the following job defined:</p><pre>plan:www.example.com:<br>  stage: plan 📝<br>  script:<br>    - sh code/ci/plan/plan.sh<br>  only:<br>    refs:<br>      - merge_requests<br>    changes:<br>      - services/www.example.com/*<br>      - code/logs/log_format.json<br>      - code/snippets/*<br>      - code/terraform/*<br>      - code/vcl/*<br>  artifacts:<br>    untracked: false<br>    expire_in: 1 days<br>    when: always<br>    paths:<br>        - &quot;services/*/*plan*&quot;</pre><p>Let’s break down what this job is doing:</p><ul><li>Running a job called plan:www.example.com</li><li>The job is attached to the plan 📝 stage</li><li>The job will execute the code/ci/plan/plan.sh script</li><li>The job will only run on merge_requests</li><li>The job will only run if changes are made in any of the following locations: services/www.example.com/* , code/logs/log_format.json , code/snippets/* , code/terraform/* , code/vcl/*</li><li>The job will produce an <a href="https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html">artifact</a> and save it for 1 day. Note: The artifact that is being saved is the plan file that is created after running terraform plan</li></ul><p>Now that we know the criteria for triggering this job, let’s push up another commit to our merge_request . The only change we need to make is to add a newline or perhaps a #comment to any file in services/www.example.com/* . This will make the pipeline think that a “change” has occurred in our service and trigger the related pipeline jobs. For this example, I will add a single newline to both services/www.example.com/fastly.tf and services/nonprod.example.com/fastly.tf .</p><p>This will trigger the pipeline to build or push new versions of these services to Fastly. 
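</p>

<p>GitLab evaluates those changes: globs for us, but the behavior is easy to model. Roughly speaking (GitLab's real matcher has a few extra rules, so treat this as an approximation):</p>

```python
from fnmatch import fnmatch

# Globs copied from the `changes:` section of the plan:www.example.com job above
WATCHED = [
    "services/www.example.com/*",
    "code/logs/log_format.json",
    "code/snippets/*",
    "code/terraform/*",
    "code/vcl/*",
]

def job_should_run(changed_files):
    """True if any changed file matches one of the watched globs."""
    return any(fnmatch(path, glob) for path in changed_files for glob in WATCHED)
```

<p>So a commit touching services/www.example.com/fastly.tf triggers the job, while a README-only commit does not, which is exactly why the newline trick above works.</p>

<p>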
Let’s check our pipeline status in GitLab:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/972/1*MWHOrWNMtxmZWNq7dfLsug.png" /><figcaption>Merge Request Pipeline</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8A1AJ24kHZZ42-4zynOA9g.png" /><figcaption>Fully Passed!</figcaption></figure><p>Our merge_request pipeline has passed! 🎉</p><p>Let’s merge our change now to the master or main branch and trigger our deployment pipeline!</p><p>After clicking merge in the GitLab UI, we can see our deployment pipeline is immediately kicked off:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/871/1*ppwPo7LAru6DM75TZ-uMWA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aWYy3DU78C78EHy7EVj_tg.png" /></figure><p>Now if we check our <em>deployment pipeline</em> we will see that the apply ⚙️ stage has kicked off right away:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/1*IuShyv2EsqGFz-YVjO97sQ.png" /></figure><p>Remember, you can check out the <a href="https://github.com/GrantBirki/fastly-framework/blob/main/docs/pipeline.md">pipeline.md</a> docs to get more info on a pipeline stage.</p><blockquote>Apply ⚙️ — The apply phase pushes up an inactive service which you can review in the console. This is useful for a final review before deploying to production.</blockquote><p>Let’s view the Fastly service which the apply ⚙️ stage has created for us in Fastly:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VtL-B7e6qTenECdRuJ6biA.png" /><figcaption>Example Services Created in Fastly</figcaption></figure><p>It is always good practice to take a look at the service and its associated version in Fastly before triggering the manual Deploy 🚀 stage. In Fastly, this can easily be done by clicking on “Diff versions” in the UI. 
An example of how to do this can be seen below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1004/1*ZaUWxAS7fsXDXGH4bKTrnw.png" /><figcaption>Diff version example in Fastly</figcaption></figure><p>Note: Since we are pushing up our first ever service with this pipeline it will be <em>Version 1</em> and we will not be able to run a “Diff” on the service. Keep this in mind though when running your next pipeline.</p><p>Now let’s move on to the deploy stage…</p><blockquote>Deploy 🚀 — This phase activates the service via an API call to Fastly.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/346/1*7a-4XfTPGOmOLpFOwVV-zg.png" /></figure><p>Click on the job for the service you want to deploy. You can also click the top “play” button to deploy all services at once (risky — you should always run your nonprod services first).</p><p>Once our nonprod service is deployed and it looks good, we can deploy prod (www.example.com):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bvp9ioAL1pedxCVfRGSOzQ.png" /><figcaption>Successful GitLab CI Deployment Pipeline</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G599QFeRavnJc1yUf-YH4g.png" /><figcaption>Example Services Activated in Fastly</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*Pj7UOXjiGD5SpARbU1OF-w.png" /></figure><p>Yahoo! Our deploy pipeline has successfully passed and our Fastly services are activated! 🎉</p><p>Note: The Rapid-rollback 🔄 stage at the end of the pipeline is manual and is for rolling back a service if the deployment causes issues:</p><blockquote>Rapid Rollback 🔄 — This phase is a <em>break glass</em> option to roll back the change made to a service. It should be used only if needed. 
<a href="https://github.com/GrantBirki/fastly-framework/blob/main/docs/rapid-rollback.md">Documentation Link</a></blockquote><p>This means that if you have made it through the Deploy 🚀 stage, you have successfully pushed out your first Fastly change with Terraform and a CI/CD pipeline. Congrats!</p><h3>Summary ⭐</h3><p>We just covered a ton of info and, if all went well, you now have a working CI/CD pipeline to consistently deploy Fastly services in an automated fashion. Let’s summarize what we just did to connect some neurons:</p><h4>Fastly</h4><ul><li>Created a fastly.tf file with our general domain and backend info</li><li>Created a fastly.vcl file with the <a href="https://developer.fastly.com/learning/vcl/using/#adding-vcl-to-your-service-configuration">Fastly VCL boilerplate</a></li><li>Used Terraform commands locally to build a Fastly service</li></ul><h4>Pipeline and Terraform</h4><ul><li>Built a GitLab repository using the <a href="https://github.com/GrantBirki/fastly-framework">Fastly-Framework</a></li><li>Built a default image with Docker to run pipeline jobs</li><li>Pushed our default image to the GitLab container registry</li><li>Set up Terraform Remote State using AWS S3 + DynamoDB or a comparable method — <a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa">Related Guide</a></li><li>Set FASTLY_API_KEY and AWS credentials as environment variables for our pipeline jobs</li><li>Pointed each services/&lt;service&gt;/config.tf file and the code/ci/test/test.tf config file to your Terraform Remote State</li><li>Created a new branch and merge request to trigger our pipeline</li><li>Merged our change into the main or master branch and ran the deployment pipeline</li><li>Viewed our shiny new services in Fastly!</li></ul><h3>Conclusion 🎇</h3><p>Pipelines, Infrastructure as Code, and Automation are here to stay. Being able to leverage these technologies for consistent, fast, and reliable deployments is incredibly powerful. 
These benefits can be amplified when using critical services like Fastly that are the entry point for entire domains. Whether you are an organization of 10 people or 10,000 people, you can benefit from all that CI/CD methodologies have to offer.</p><p>I hope you enjoyed this article, learned a thing or two, and got a useful intro to the world of CI/CD with Fastly. If you haven’t already, please check out the open source framework of this project on <a href="https://github.com/GrantBirki/fastly-framework">GitHub</a>. There is a lot more documentation, code examples, and notes in the repo to help you get a working pipeline stood up with Fastly + Terraform.</p><h4>❤️ Open Source</h4><p>This project is 100% open source and free for anyone/everyone to use. All contributors are welcome. Feel free to open pull requests, leave comments, or fork this project for your own use.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c29356cda2a6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Digital Success for Eltana in the Era of COVID-19 ]]></title>
            <link>https://birki.medium.com/digital-success-for-eltana-in-the-era-of-covid-19-7f7703d0f93a?source=rss-7b6976573a9a------2</link>
            <guid isPermaLink="false">https://medium.com/p/7f7703d0f93a</guid>
            <category><![CDATA[delivery]]></category>
            <category><![CDATA[serverless]]></category>
            <category><![CDATA[online]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[covid19]]></category>
            <dc:creator><![CDATA[Grant Birkinbine]]></dc:creator>
            <pubDate>Mon, 01 Mar 2021 16:34:18 GMT</pubDate>
            <atom:updated>2021-07-11T04:20:16.007Z</atom:updated>
            <content:encoded><![CDATA[<blockquote><a href="https://www.eltana.com">Eltana</a> is a wood fired bagel café located in Seattle, WA. Home to some famous Montreal style bagels and great coffee.</blockquote><figure><img alt="Photo of bagels and cream cheese" src="https://cdn-images-1.medium.com/max/1024/1*YDf7HylQ15odAOh-Hm41sQ.jpeg" /></figure><h3>Intro 💡</h3><p>The pandemic shifted the way customers interact with their favorite cafes almost overnight. Restaurants had to adapt to serve their customer’s digitally in a world where food and people always went hand in hand.</p><p>Eltana had already been focusing on its digital customer experience for a while now with online pickups, preorders, and catering. This ultimately helped Eltana be well positioned for a huge shift in customer behavior during COVID-19. By having a solid digital presence and foundation, Eltana was able to pivot and quickly adjust for a drop in physical customer interactions and a sharp increase in online sales. For Eltana, there was a key area that they needed to conquer to succeed. That area was <strong>online delivery</strong>.</p><h3><strong>The Problem </strong>💥</h3><p>The challenges that Eltana (and many other restaurant businesses) faced was online sales. Eltana needed a way to shift from in-store transactions to a hybrid online business to succeed. They needed to make this shift rapidly and work with their existing infrastructure and staff. This article will explain how I designed and implemented an <strong>online ordering and delivery routing engine</strong> on top of Eltana’s existing infrastructure with zero licensing, zero monthly fees, and with entirely serverless infrastructure.</p><h4><strong>Eltana’s Requirements:</strong></h4><p>Eltana needs to be able to do large deliveries each day. Eltana has their own delivery vans and drivers which they use to fulfill large event and group orders (think birthday parties, corporate events, etc.) 
so they have the equipment and staff to handle bulk orders already. Now they need the code to process, route, and ultimately get bagels to customers.</p><ul><li>Customers need to be able to place an order <strong>online</strong></li><li>Orders need to be <strong>routed </strong>in the most efficient manner</li><li>Delivery drivers need to be able to <strong>follow this route </strong>via a phone or tablet</li><li>Bakers need <strong>item totals</strong> so they can prep before orders go out</li><li>No added licensing, hardware, monthly costs, or overhead</li></ul><h3><strong>The Solution </strong>✔️</h3><p>The solution included several components and met all the requirements stated above:</p><h4><strong>Tech Stack:</strong></h4><ul><li><a href="https://www.python.org/">Python</a></li><li><a href="https://aws.amazon.com/">AWS</a></li><li><a href="https://aws.amazon.com/lambda/">Lambda</a></li><li><a href="https://www.google.com/maps">Google Maps</a></li><li><a href="https://developer.here.com/">Here API</a></li><li><a href="https://developer.squareup.com/docs/orders-api/what-it-does">Square API</a></li></ul><h4><strong>Square Online Store </strong>🥯:</h4><p>The <strong>Square Online Store</strong> is a front-end service which customers interact with to place their orders. They place an order for the “Neighborhood” they are a part of and then their order is delivered in that neighborhood batch. 
Store available <a href="https://www.eltanabagels.com/">here</a> 🔗.</p><figure><img alt="Image of Eltana’s Online Store with Square" src="https://cdn-images-1.medium.com/max/1024/1*cSg8TT_SQ6CZqI-ZuYuR4A.png" /><figcaption>Eltana’s Online Store — Square</figcaption></figure><h4><strong>Routing Engine </strong>⚙️:</h4><p>The<strong> </strong>Routing Engine<strong> </strong>(<strong><em>Engine </em></strong>from here on out) was truly the bread 🍞 and butter 🧈 of this project.</p><h4>What does it do?</h4><p>At a high level, the Engine takes in customer orders and outputs them as a routed list. This allows delivery drivers to efficiently deliver their bagels in the morning.</p><h4>Let’s dive in!</h4><ul><li>The <strong>inputs </strong>for the Engine are customer orders 💁</li><li>The <strong>output </strong>of the Engine is a <em>Delivery Manifest </em>📃</li></ul><p>The <strong><em>inputs </em></strong>are collected when the Engine runs. These inputs are customer orders which include name, address, items, and any notes the customer left for delivery.</p><pre>{<br>  &quot;orders&quot;: [<br>    {<br>      &quot;id&quot;: &quot;7227c0c1-198f-3719-a32d-d99a61d33589&quot;,<br>      &quot;neighborhood&quot;: &quot;WEST-SEATTLE&quot;,<br>      &quot;address&quot;: &quot;1234 1st Avenue N&quot;,<br>      &quot;city&quot;: &quot;Seattle&quot;,<br>      &quot;state&quot;: &quot;WA&quot;,<br>      &quot;zip&quot;: &quot;98101&quot;,<br>      &quot;name&quot;: &quot;Indiana Jones&quot;,<br>      &quot;phone&quot;: &quot;+1 123-456-7890&quot;<br>    }<br>  ]<br>}</pre><p>The <strong><em>outputs </em></strong>are one or multiple <strong><em>Delivery Manifests</em></strong>, which are emails containing all routed orders, their details, and spreadsheets for order preparation. 
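</p>

<p>One of the requirements above was <strong>item totals</strong> for the bakers. Assuming each order also carries a hypothetical items field (trimmed out of the sample JSON above), the totals boil down to a simple counter:</p>

```python
from collections import Counter

def item_totals(orders):
    """Sum item quantities across all orders so bakers can prep ahead."""
    totals = Counter()
    for order in orders:
        # `items` is a hypothetical field, not shown in the trimmed sample above
        for item in order.get("items", []):
            totals[item["name"]] += item.get("quantity", 1)
    return totals
```

<p>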
Real world example below — Note: customer data and revenue have been obfuscated.</p><figure><img alt="Eltana Manifest" src="https://cdn-images-1.medium.com/max/918/1*5QxQ2od9d80-N9JafG8qvQ.png" /></figure><figure><img alt="Eltana Order Totals — Manifest" src="https://cdn-images-1.medium.com/max/787/1*2sY-J144d9JNAWycB7h7EQ.png" /></figure><figure><img alt="Eltana Google Maps Links — Manifest" src="https://cdn-images-1.medium.com/max/906/1*cf0ufPqaFUU2JmARQApekw.png" /></figure><figure><img alt="screenshot of manifest" src="https://cdn-images-1.medium.com/max/912/1*t_KbBZCjVoFRvEDKMnrxMA.png" /></figure><figure><img alt="screenshot of manifest" src="https://cdn-images-1.medium.com/max/998/1*V58GWwPH-mOSYYRSTib3ZA.png" /><figcaption>Orders that encounter routing issues are attached at the bottom for managers to manually process</figcaption></figure><figure><img alt="screenshot of order spreadsheets" src="https://cdn-images-1.medium.com/max/410/1*bsAaCfthGygkSkTiyE6_xg.png" /><figcaption>Order Info CSV Attachments</figcaption></figure><h4><strong>How The Engine Works </strong>⚙️</h4><p>The Engine can either be triggered <strong>manually</strong> through the <em>management web app</em> (more on this later) or via a <strong>scheduled </strong>⏰ Lambda trigger. The Lambda trigger goes off every night at 11:00pm PST. Either way, the trigger provides the <em>Neighborhood</em> for which orders are to be routed. For example, if triggered via the web app, a manager provides a <em>Neighborhood</em> to collect and route all orders within. This way all orders for <em>West Seattle</em> can be routed together for delivery the next day.</p><p>Once <strong>triggered</strong>, the Engine searches for all orders within the supplied Neighborhood from Square’s Order API. 
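</p>

<p>Before walking through that flow in prose, here is an illustrative Python sketch of the two core ideas: ordering the stops and building the drivers' Google Maps link. It uses a simple straight-line nearest-neighbour heuristic, which is hypothetical and much cruder than the Engine's actual fastest-route search:</p>

```python
from math import hypot
from urllib.parse import quote

def sequence_stops(depot, stops):
    """Greedy nearest-neighbour ordering over (lat, lon) coordinates.
    A rough, hypothetical stand-in for the Engine's fastest-route search."""
    route, here, remaining = [], depot, list(stops)
    while remaining:
        # Drive to whichever remaining stop is closest (straight-line distance)
        nxt = min(remaining, key=lambda s: hypot(s["lat"] - here[0], s["lon"] - here[1]))
        remaining.remove(nxt)
        route.append(nxt)
        here = (nxt["lat"], nxt["lon"])
    return route

def maps_url(addresses):
    """Multi-stop Google Maps directions link the drivers can open on a phone."""
    return "https://www.google.com/maps/dir/" + "/".join(quote(a) for a in addresses)
```

<p>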
These orders are <strong>collected </strong>and stored in memory.</p><p>Once all the orders for a given “Neighborhood” are collected the Engine translates all the addresses into coordinates, sequences those coordinates via an algorithm that searches for the fastest route, and then populates that route into a usable Google Maps URL 📍 for the delivery drivers. The Engine then kicks off an automated email with the routing sequence and all relevant order data. This email is referred to as the <strong><em>Delivery Manifest </em></strong>as seen in screenshots above.</p><h4><strong>High Level Diagram </strong>🗺️</h4><figure><img alt="Diagram" src="https://cdn-images-1.medium.com/max/1024/1*JC0aVbdYvnUnhbYqWXIvhA.png" /><figcaption>Architectural Diagram of the Routing Engine</figcaption></figure><h4><strong>Management Console — Web App </strong>👨‍💻</h4><p>For ease of use, Eltana’s managers are also able to trigger Manifests with a few clicks via a custom web app. This allows for Manifests to be regenerated if orders come in late, if unexpected errors occur, or in case they want to tweak the routing algorithm (shortest vs fastest).</p><figure><img alt="Eltana Web App Screenshot" src="https://cdn-images-1.medium.com/max/1024/1*Bcqjw5gm7EymY2GMrTztSQ.png" /><figcaption>Eltana Web App for Manifest Generation</figcaption></figure><p>Upon clicking <strong>GENERATE </strong>🚀 a manager can expect a Manifest to land in their inbox in less than 60 seconds.</p><p><strong>Costs </strong>💸:<br>Since this entire project is based off of <a href="https://aws.amazon.com/serverless/"><strong>Serverless</strong></a><strong> </strong>technology that is within the “<a href="https://aws.amazon.com/free/">always free / free tier</a>” of <strong>AWS</strong> there are <strong>zero</strong> monthly costs associated with this project’s infrastructure. In addition to that, there was no added licensing or service fees. 
This is because the <a href="https://squareup.com/us/en/online-store"><strong>Square Online Store</strong></a> and <strong>API</strong> are <strong>free</strong> for businesses that use their POS solutions already. <strong>Here API </strong>is also <a href="https://developer.here.com/pricing"><strong>free</strong> </a>under 250K requests per month for commercial use (which Eltana is well under).</p><h3>Measuring Success 📈</h3><blockquote>Cool, so that’s a lot of tech. But does it work?</blockquote><p>In order to see if this investment in technology is actually worth it we need to measure the outcomes before and after. Luckily, we have data to back us up!</p><h4><strong>Pre-COVID Era</strong></h4><p>In the era before COVID, Eltana was accepting its own online orders through Formstack for pick-up and delivery. The delivery orders were more geared towards group catering and events, while the pick-ups were for typical customers on the go. This data goes all the way back to 2018.</p><h4><strong>Number Crunching Time</strong></h4><p>The year prior to COVID-19 (<strong>Feb 2019 — Feb 2020</strong>) Source: <a href="https://www.formstack.com/">Formstack</a> data export</p><ul><li>Total Online Orders: <strong>195</strong></li><li>Total Bagels Sold Online: <strong>2465</strong></li></ul><p>COVID-19 era (<strong>Feb 2020 — Feb 2021</strong> time of writing) Source: <a href="https://squareup.com/us/en/online-store">Square Online Store</a> data export</p><ul><li>Total Online Orders: <strong>8,100</strong></li><li>Total Bagels Sold Online: <strong>115,173</strong></li></ul><h4><strong>More Numbers COVID-19 era…</strong></h4><ul><li><strong>4053.85%</strong> increase in online sales</li><li><strong>4536.75%</strong> increase in online bagels sold</li><li><strong>9340 </strong>bagel spreads sold</li><li><strong>10.52%</strong> cart conversion rate (All time) — Last 30 days <strong>15.65%</strong></li><li><strong>6.83%</strong> true conversion rate (All time) — Last 30 days 
<strong>10.49%</strong></li></ul><h4><strong>How about sales?</strong></h4><p>Not only were sales strong right from the start, they have stayed strong and have even been gradually increasing since the beginning of 2020.</p><figure><img alt="Sales Graph" src="https://cdn-images-1.medium.com/max/754/1*80wDvSvCDqSOBTLGg6kefQ.png" /><figcaption>COVID ERA Online Sales</figcaption></figure><h4>So back to our opening question…</h4><blockquote>Cool, so that’s a lot of tech. But does it work?</blockquote><blockquote>Yes, yes it does!</blockquote><p>From the launch of Eltana’s online ordering platform, sales have been strong and they have stayed that way. Conversion rate is also outperforming the restaurant category average. On top of all that… we see a whopping <strong>4536.75%</strong> increase in bagels sold through online channels, and at the end of the day, that’s all that really matters. Success! 🍾🎉</p><figure><img alt="Boxes of bagels waiting to go out for delivery" src="https://cdn-images-1.medium.com/max/1024/1*srvX3Z5fyooj-nEp6k7BAA.jpeg" /><figcaption>Boxes of bagels waiting to go out for their online delivery :)</figcaption></figure><h3>Summary</h3><p>What problems have we solved?</p><ul><li>Customers are able to easily place an order <strong>online</strong></li><li>Orders are <strong>routed</strong> in the most efficient manner for delivery drivers</li><li>Delivery drivers are able to <strong>follow this route</strong> via a phone or tablet</li><li>Bakers have <strong>item totals</strong> so they can prep before orders go out</li><li>No added licensing, hardware, monthly costs, or overhead</li><li>Eltana can still utilize existing infrastructure: delivery vans, Square POS, prep kitchens, and employees</li><li>Eltana is able to prep, route, and deliver to distant neighborhoods with 300+ orders 🎉</li></ul><blockquote>PS: if you want to check out the Online Store where the magic happens, here is the link <a 
href="https://www.eltanabagels.com/"><strong>www.eltanabagels.com</strong></a></blockquote><h3>Closing Thoughts 🥯</h3><p>This past year, the pandemic has proven concretely that being able to adapt rapidly is a requirement for success. This is especially true when it comes to technology. Technology can no longer be an afterthought, especially for businesses that have not traditionally relied on it as a core part of their operations. Investing in technology doesn’t have to be difficult or scary either; in Eltana’s case, it was an enhancement to their pre-existing workflows that allowed them to fulfill more orders, do so more efficiently, and pay zero monthly costs.</p><p>Technology can be your best friend, so embrace it.</p>]]></content:encoded>
        </item>
    </channel>
</rss>