<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title></title>
  <subtitle>Recent posts</subtitle>
  <link href="https://joshghent.com/rss.xml" rel="self"/>
  <link href=""/>
  <updated>2026-04-08T00:00:00Z</updated>
  <id></id>
  <author>
    <name></name>
    <email></email>
  </author>
  
  <entry>
    <title>One item purchased, Ten emails</title>
    <link href="/online-shopping/"/>
    <updated>2026-04-08T00:00:00Z</updated>
    <id>/online-shopping/</id>
    <content type="html">&lt;p&gt;Online shopping is fantastic. A few clicks and you&#39;ve ordered almost anything from anywhere.&lt;/p&gt;
&lt;p&gt;But I&#39;ve noticed a huge uptick in the volume of emails relating to an online order which makes it frustrating to order anything.&lt;/p&gt;
&lt;p&gt;I recently made a purchase that included the following chain:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Thanks for your order&lt;/li&gt;
&lt;li&gt;Create your account&lt;/li&gt;
&lt;li&gt;We&#39;ve got your order&lt;/li&gt;
&lt;li&gt;We&#39;ve shipped your order&lt;/li&gt;
&lt;li&gt;We&#39;re expecting your parcel&lt;/li&gt;
&lt;li&gt;We&#39;ve got your parcel&lt;/li&gt;
&lt;li&gt;Your order is scheduled for delivery&lt;/li&gt;
&lt;li&gt;We&#39;ve delivered your item (Courier)&lt;br /&gt;
8a. We&#39;ve delivered your item (Vendor)&lt;/li&gt;
&lt;li&gt;How was your delivery&lt;/li&gt;
&lt;li&gt;Are you happy with your purchase&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I&#39;m sure these businesses have run a myriad of A/B tests to optimise the living daylights out of their email campaigns, hoping to burn both the brand and the experience into consumers&#39; minds as joyful ones. But as Goodhart&#39;s Law teaches us, when a measure becomes a target, it ceases to be a good measure - these sorts of email chains are prime examples of that. Created to optimise who-knows-what, they ultimately result in a frustrating experience.&lt;/p&gt;
&lt;p&gt;My solution is ultimately to use a SimpleLogin alias that I turn off immediately, but this feels like a solution to a problem that shouldn&#39;t exist.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Anti-Book Book club</title>
    <link href="/antibook-book-club/"/>
    <updated>2026-02-28T00:00:00Z</updated>
    <id>/antibook-book-club/</id>
    <content type="html">&lt;p&gt;The Anti-book book club started as an excuse to meet with friends every month or two. I wanted to document what the idea was and how it worked.&lt;/p&gt;
&lt;p&gt;Regular book clubs all choose a single book to talk about.
There are a few disadvantages to this though:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It doesn&#39;t cater to personal preferences on the book&lt;/li&gt;
&lt;li&gt;You all need to buy the same book&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By contrast, in the anti-book book club we all choose a different book around a central theme.
This has a few advantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You get to share insights from multiple books and different perspectives&lt;/li&gt;
&lt;li&gt;You can trade books afterwards&lt;/li&gt;
&lt;li&gt;The books cater to your own preferences&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So far it’s been quite successful! We’ve covered a few themes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Plays&lt;/li&gt;
&lt;li&gt;Human brain&lt;/li&gt;
&lt;li&gt;Books we disagree with&lt;/li&gt;
&lt;li&gt;Books told from different perspectives&lt;/li&gt;
&lt;li&gt;Break the fourth wall&lt;/li&gt;
&lt;li&gt;Books you can read in a day&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But we also have a whole host of theme ideas:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Biographies&lt;/li&gt;
&lt;li&gt;Airport books&lt;/li&gt;
&lt;li&gt;Great books that are fatally flawed&lt;/li&gt;
&lt;li&gt;Take place in a single day&lt;/li&gt;
&lt;li&gt;Widely loved, you hated it&lt;/li&gt;
&lt;li&gt;You loved it, most hate it&lt;/li&gt;
&lt;li&gt;Guilty pleasures&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;...And many more.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Supply Chain Irony</title>
    <link href="/supply-chain-irony/"/>
    <updated>2026-02-27T00:00:00Z</updated>
    <id>/supply-chain-irony/</id>
    <content type="html">&lt;p&gt;There is a certain irony that large organisations carry out a myriad of checks, due diligence, impact assessments, contract reviews and more for any and all business they themselves do business with. But, their team of developers npm, pip, or cargo install any and all dependencies built by a single person on the other side of the world.&lt;/p&gt;
&lt;p&gt;This is not to say open source projects are untrustworthy. But if you suggested to an enterprise business lawyer that you wanted to run some random code you found on the internet in production, they would tell you no.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Todoist Setup 2026</title>
    <link href="/todoist-setup/"/>
    <updated>2026-02-26T00:00:00Z</updated>
    <id>/todoist-setup/</id>
    <content type="html">&lt;p&gt;I’ve used Todoist for almost a decade and completed over 50,000 tasks on there. Over time my setup has changed quite a lot.&lt;/p&gt;
&lt;p&gt;Previously I used a standard GTD setup - one that Todoist naturally lends itself to.&lt;/p&gt;
&lt;p&gt;If you haven’t read Getting Things Done, it defines anything that requires two or more actions as a “Project”.
So if you need to plan a holiday, you might have a holiday project and then related tasks - book flights, define budget, etc.&lt;/p&gt;
&lt;p&gt;Over time I found this project setup became a burden: tasks were everywhere, and I was always juggling what was and wasn’t a “project”.
It also took a long time to review my tasks and prioritise them because they spanned so many different areas of life.&lt;/p&gt;
&lt;p&gt;Instead I now have a simple, low-maintenance system:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;4 Projects
&lt;ul&gt;
&lt;li&gt;Now - what I must do asap&lt;/li&gt;
&lt;li&gt;Next - things I need to do but aren’t as urgent as now&lt;/li&gt;
&lt;li&gt;Later - tasks with no deadline&lt;/li&gt;
&lt;li&gt;Routines - recurring tasks divided into sections for daily, monthly and yearly. These account for a lot of my day-to-day tasks as I prefer having routine systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Labels corresponding to areas of my life. For example:
&lt;ul&gt;
&lt;li&gt;@_work is day job tasks&lt;/li&gt;
&lt;li&gt;@_house is anything house related&lt;/li&gt;
&lt;li&gt;@_project/repowarden is anything related to Repowarden, one of my side projects. These projects get a prefix so I can filter for all side-project tasks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;No more than 3 tasks scheduled per day - ordered from most important to least. I &lt;a href=&quot;https://github.com/joshghent/todoist-wrapped&quot;&gt;reviewed my Todoist habits last year&lt;/a&gt; and found that less than 0.7% of my tasks were marked as important. The things I was doing were important, but it made me wonder whether I was prioritising them visually for myself, to make it easier to know what to do next.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And that’s it.&lt;/p&gt;
&lt;p&gt;Crucially, to keep things light I make the following UI tweaks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make all labels grey - I’m deliberately not using them to colour-code tasks&lt;/li&gt;
&lt;li&gt;Remove the task count - more tasks make me feel overwhelmed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, it wouldn’t be a system in 2026 without using AI, so I find it helpful to use AI in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Identify tasks that haven’t been prioritised - this is based on giving it context around my overarching yearly theme and priorities&lt;/li&gt;
&lt;li&gt;Identify tasks that need to be split (normally they contain the word “and”)&lt;/li&gt;
&lt;li&gt;Identify tasks that need more information to complete - “speak to Jack” is meaningless; “speak to Jack about the quarterly sales report to identify opportunities” provides the context and offloads the memory.&lt;/li&gt;
&lt;li&gt;Find tasks that can be delegated or removed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hopefully you find this system useful. As with anything, it could do with further refinement and naturally adapts as my requirements change.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Power Puttering</title>
    <link href="/power-puttering/"/>
    <updated>2026-02-25T00:00:00Z</updated>
    <id>/power-puttering/</id>
    <content type="html">&lt;p&gt;You know those tiny jobs you never think to do until you’re sat on the loo with no paper? Yeah me too.&lt;/p&gt;
&lt;p&gt;To solve those tiny jobs that don’t deserve dedicated time, I decided to start bundling them together into a sort of power hour - hence Power Puttering.&lt;/p&gt;
&lt;p&gt;I schedule this power puttering time when I know I have some time free, and I do the following (though not all of it every week):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Refill all the toilet rolls in bathrooms&lt;/li&gt;
&lt;li&gt;Review all PR’s created by Repowarden&lt;/li&gt;
&lt;li&gt;Empty household bins in offices and other rooms&lt;/li&gt;
&lt;li&gt;Water the house plants&lt;/li&gt;
&lt;li&gt;Clean and maintain the vacuum&lt;/li&gt;
&lt;li&gt;Send invoices to clients&lt;/li&gt;
&lt;li&gt;Set the dishwasher to a clean cycle&lt;/li&gt;
&lt;li&gt;Update my wife’s computer&lt;/li&gt;
&lt;li&gt;Update my computer&lt;/li&gt;
&lt;li&gt;Update any other technology things&lt;/li&gt;
&lt;li&gt;Filter my email&lt;/li&gt;
&lt;li&gt;Stock coats and cars with tissues, antibacterial, doggie bags, chewing gum etc&lt;/li&gt;
&lt;li&gt;Sharpen the kitchen knives&lt;/li&gt;
&lt;li&gt;Refill our tea bag thing (we buy the bags in bulk)&lt;/li&gt;
&lt;li&gt;Refill our rice, lentils, pasta, flour and other things (again we buy in bulk)&lt;/li&gt;
&lt;li&gt;Fix any tech issues&lt;/li&gt;
&lt;li&gt;Tighten handles&lt;/li&gt;
&lt;li&gt;WD-40 Handles&lt;/li&gt;
&lt;li&gt;Charge batteries - Apple TV, mouse, keyboard, doorbell, lights etc.&lt;/li&gt;
&lt;li&gt;Clean the coffee machine&lt;/li&gt;
&lt;li&gt;Floss&lt;/li&gt;
&lt;li&gt;Update servers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Maybe others have certain things like this. But I quite like the regularity of this quiet system to keep things going.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Launch: Repowarden - AI-powered repository maintenance</title>
    <link href="/repowarden/"/>
    <updated>2026-02-15T00:00:00Z</updated>
    <id>/repowarden/</id>
    <content type="html">&lt;p&gt;Today I am launching &lt;a href=&quot;https://repowarden.dev&quot;&gt;Repowarden&lt;/a&gt;, an AI-powered repository maintenance tool.&lt;/p&gt;
&lt;p&gt;The pitch is simple:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Automated dependency updates&lt;/li&gt;
&lt;li&gt;Test generation for changed code&lt;/li&gt;
&lt;li&gt;Custom code tasks based on your rules&lt;/li&gt;
&lt;li&gt;Everything delivered as clean, reviewable pull requests&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why I built it&lt;/h2&gt;
&lt;p&gt;Most of the burden of maintaining my projects falls into a few categories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Death by Dependabot PRs that stack up&lt;/li&gt;
&lt;li&gt;Test coverage falls behind changes so there are gaps I&#39;m not quite aware of&lt;/li&gt;
&lt;li&gt;Repetitive clean up tasks never make it into sprints&lt;/li&gt;
&lt;li&gt;General bug fixes, QoL improvements etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I wanted something that is like a &amp;quot;dev in a box&amp;quot;. Sure, I could boot up Claude Code in each of these projects, but that&#39;s a huge chore. This is completely automated, like another member of staff.&lt;/p&gt;
&lt;h2&gt;What Repowarden does&lt;/h2&gt;
&lt;p&gt;Repowarden connects to your repository, understands the codebase, and opens PRs that do specific maintenance work.&lt;/p&gt;
&lt;p&gt;It can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;keep dependencies current without noisy upgrade spam&lt;/li&gt;
&lt;li&gt;generate or improve tests where changes happen&lt;/li&gt;
&lt;li&gt;run repo-specific tasks like refactors, lint migrations, docs updates, and other custom workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The key constraint is quality: changes should be readable, scoped, and easy to merge.&lt;/p&gt;
&lt;h2&gt;How it fits into a team workflow&lt;/h2&gt;
&lt;p&gt;I built Repowarden to work with existing engineering rituals. It&#39;s not designed for feature development. It just keeps things ticking along so you don&#39;t need to worry about how to upgrade ESLint configs, or patch an obscure security problem.&lt;/p&gt;
&lt;p&gt;Repowarden just handles the repetitive work and leaves the final call to humans.&lt;/p&gt;
&lt;h2&gt;What is next&lt;/h2&gt;
&lt;p&gt;Next up is improving task customization, broadening language support (currently Node, Python and Rust), and giving teams better control over how aggressive maintenance should be.&lt;/p&gt;
&lt;p&gt;If your repo has a backlog of &amp;quot;we should tidy this up later,&amp;quot; Repowarden is built for that.&lt;/p&gt;
&lt;p&gt;Check it out at &lt;a href=&quot;https://repowarden.dev&quot;&gt;repowarden.dev&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Thoughts on LLMs and AI</title>
    <link href="/thoughts-on-llms/"/>
    <updated>2025-08-29T00:00:00Z</updated>
    <id>/thoughts-on-llms/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;Thou shalt not make a machine in the likeness of a human mind. - &lt;em&gt;Dune&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;LLMs have been with us for a while now, and the rate that they evolve continues to accelerate. As a developer, I was the prime market for these tools. Not just for programming, but for searching the web, technical questions and the occasional cooking recipe.&lt;/p&gt;
&lt;p&gt;These tools are incredible. &lt;a href=&quot;https://claude.ai/&quot;&gt;Claude&lt;/a&gt; Code creates a website in minutes. The ever-patient &lt;a href=&quot;https://chat.openai.com/&quot;&gt;ChatGPT&lt;/a&gt; answers questions. &lt;a href=&quot;https://gemini.google.com/&quot;&gt;Gemini&lt;/a&gt; can summarise research papers in language even I can understand.&lt;/p&gt;
&lt;p&gt;After using them for a while though, I’ve formed some thoughts on how I use these tools, and how they shape thinking in general.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Inputs over outputs&lt;/h2&gt;
&lt;p&gt;Like many people, I started by typing a vague question into ChatGPT and waiting for an answer. Then I’d poke at the result until it seemed right.&lt;/p&gt;
&lt;p&gt;The problem is obvious: if you don’t define the problem clearly, you can’t judge whether the output is correct.&lt;/p&gt;
&lt;p&gt;What I do now is load the model with context up front - a full problem statement or a &lt;a href=&quot;https://en.wikipedia.org/wiki/5_Whys&quot;&gt;5 Whys&lt;/a&gt; breakdown. That forces me to clarify what I’m asking, and gives the model something useful to work with.&lt;/p&gt;
&lt;p&gt;For programming, that means I don’t ask Claude to “build a backend.” Instead I’ll ask it to suggest edge test cases based on my code coverage, or to compare two different approaches I’m considering. For research, I’ll gather papers using &lt;a href=&quot;https://elicit.com/&quot;&gt;Elicit&lt;/a&gt; and then ask Gemini to summarise and distil them.&lt;/p&gt;
&lt;p&gt;Improve the input, and you improve the output. Garbage in, garbage out.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Overreliance&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. - &lt;em&gt;Dune&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When I first used Claude Code it was magical. I typed a few words and zip - off it went.&lt;/p&gt;
&lt;p&gt;But over time, I noticed my own abilities atrophying. Without exercising the basics, I started forgetting them. I eventually removed AI from my editor and went back to coding manually, only asking models to review my approach.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Telephone&lt;/h2&gt;
&lt;p&gt;AI also extends the &lt;a href=&quot;https://en.wikipedia.org/wiki/Chinese_whispers&quot;&gt;game of telephone&lt;/a&gt; that programming already is. Customers tell analysts, who tell product managers, who tell designers, who tell developers. By the time it reaches you, intent is already muffled.&lt;/p&gt;
&lt;p&gt;Now add an AI into that chain: a brand-new “teammate” with no context beyond your codebase. No wonder the outputs can be so poor.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Bus factor&lt;/h2&gt;
&lt;p&gt;“Vibe coding” makes the AI tool the &lt;a href=&quot;https://en.wikipedia.org/wiki/Bus_factor&quot;&gt;bus factor&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I often found myself spending more time reviewing AI-generated code than if I’d just written it myself. As a hands-on learner, the process was both laborious and unhelpful.&lt;/p&gt;
&lt;p&gt;For me, autocomplete is enough - small nudges without handing over the wheel.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Yes men&lt;/h2&gt;
&lt;p&gt;Even with attempts to fix it, AI models remain sycophantic. Look through Claude Code’s GitHub issues and you’ll see people complaining at its endless &lt;a href=&quot;https://github.com/anthropics/claude-code/issues/3382&quot;&gt;&amp;quot;You’re absolutely right!&amp;quot;&lt;/a&gt; replies.&lt;/p&gt;
&lt;p&gt;The problem isn’t comedy, it’s that models:
A) lack critical thinking, and
B) echo whatever you feed them.&lt;/p&gt;
&lt;p&gt;That echo chamber has led some to form unhealthy attachments to models. The uproar over &lt;a href=&quot;https://www.theguardian.com/technology/2025/aug/22/ai-chatgpt-new-model-grief&quot;&gt;GPT-4o being replaced by GPT-5&lt;/a&gt; was so loud that OpenAI restored it for Pro users.&lt;/p&gt;
&lt;p&gt;But models aren’t good friends or coworkers. They won’t disagree, push back, or tell us when we’re wrong. Without that, we don’t learn.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Garbage&lt;/h2&gt;
&lt;p&gt;At work, I tried &lt;a href=&quot;https://github.com/features/copilot&quot;&gt;GitHub Copilot&lt;/a&gt; for reviewing pull requests. I hoped it might catch obvious mistakes.&lt;/p&gt;
&lt;p&gt;Instead, it nitpicked irrelevant details, suggested edits that broke the build, and ignored the big picture. Copilot has a reputation for this, but Claude and ChatGPT have given me similar results.&lt;/p&gt;
&lt;p&gt;People say “90% correct, 10% wrong.” My experience is closer to 70/30 - and you spend more time fixing that 10–30% than if you’d just done it yourself. Diminishing returns set in quickly.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Should you learn to code?&lt;/h2&gt;
&lt;p&gt;With reports of job cuts and articles declaring “coding is dead,” panic is natural. But the truth is nobody knows what happens in ten years.&lt;/p&gt;
&lt;p&gt;We’re living through an AI bubble. Even leading players admit as much. The whole house of cards could collapse - especially if issues like &lt;a href=&quot;https://arxiv.org/abs/2305.17493&quot;&gt;model collapse&lt;/a&gt; materialise.&lt;/p&gt;
&lt;p&gt;So should you learn to code? I don’t know. Neither does anyone else. That uncertainty itself is the point.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Going forward&lt;/h2&gt;
&lt;p&gt;LLMs are magical. But they need to be appreciated for what they are: magical parrots&lt;sup id=&quot;fnref1&quot;&gt;&lt;a href=&quot;#fn1&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Avoid enslaving yourself to them. Let them sharpen your inputs, but keep the outputs - and the thinking - yours.&lt;/p&gt;
&lt;hr /&gt;
&lt;div class=&quot;footnotes&quot;&gt;
&lt;div id=&quot;fn1&quot;&gt;
&lt;p&gt;&lt;small&gt;[1]&lt;/small&gt; As an aside I asked ChatGPT what it thought of the term &quot;magical parrot&quot; for itself. It replied (clipped):
&quot;If I were self-reflecting, I might say: being called a &#39;magical parrot&#39; captures the spectacle of what I do, but not the structure. A parrot repeats without context; I generate within probabilistic context windows. The magic isn’t that I parrot - it’s that I can combine, remix, and weigh information in ways that often feel like original thought, even though the mechanism is utterly different from yours.&quot; &lt;a href=&quot;#fnref1&quot; title=&quot;return to article&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
</content>
  </entry>
  
  <entry>
    <title>Just launched, PlanPacer - The easiest way to add payment plans to your app using Stripe</title>
    <link href="/planpacer/"/>
    <updated>2025-06-25T00:00:00Z</updated>
    <id>/planpacer/</id>
    <content type="html">&lt;p&gt;I launched a new microsaas - &lt;a href=&quot;https://planpacer.com&quot;&gt;PlanPacer&lt;/a&gt;!
PlanPacer offers payment plans for Stripe. Allowing you to create flexible instalment payments and boost your conversion rates.&lt;/p&gt;
&lt;p&gt;Now the elevator pitch is out of the way, here is some more information about why and how I built it - as well as some musings on the advantages LLMs can give solo founders.&lt;/p&gt;
&lt;h2&gt;Why I built it&lt;/h2&gt;
&lt;p&gt;Whilst in a meeting with some coworkers, we discussed Stripe as a potential candidate to serve as a payment gateway. The one blocker to using it, however, was a lack of dynamic payment plans based on products and other variables.&lt;/p&gt;
&lt;p&gt;Another coworker lamented that they had a similar issue and ended up rolling their own solution.&lt;/p&gt;
&lt;p&gt;I was surprised that Stripe didn’t have this functionality given its rich ecosystem. So I dove into the documentation to see what I could find. To my surprise, there wasn’t anything.&lt;/p&gt;
&lt;p&gt;I concluded that there must surely be a product offering for this. Again, much to my surprise, my search came up short. There was one product, but its documentation was complex and this feature wasn’t its core offering.&lt;/p&gt;
&lt;p&gt;Thus sprang the idea. If two people had the same problem, that’s product validation.
To validate it further I would need to put the feelers out. A couple of social media posts later, I found that others did have this problem and there was no good solution.&lt;/p&gt;
&lt;h2&gt;How I built it&lt;/h2&gt;
&lt;p&gt;On the train home, I decided to get to work. Using a Hono template on Cloudflare Workers and Claude Sonnet, I managed to get a bare-bones MVP up within a couple of hours.&lt;/p&gt;
&lt;p&gt;I took the rest of the evening to polish it up manually, as the AI had spuriously created a lot of things that weren’t needed. I also ran through some manual testing of the API.&lt;/p&gt;
&lt;p&gt;It pained me from a developer point of view, but I concluded that for an MVP launch there would be no automatic sign-up (I run a DB insert and email API keys), and no dashboard at all.&lt;/p&gt;
&lt;p&gt;The next stage was to launch: I posted to socials and Reddit, and am now continuing to post to share the product.
Whilst it hasn’t been a stampede to gain access to the product, I was able to launch in just a few hours. This is crucial because it meant I hadn’t wasted time building a product that no one might have wanted.&lt;/p&gt;
&lt;p&gt;If nothing else, it was a great exercise in learning how to launch a proper MVP. And I hope it can serve as a good micro-saas that gains a user base.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Book Notes - The Mezzanine</title>
    <link href="/the-mezzanine/"/>
    <updated>2025-06-06T00:00:00Z</updated>
    <id>/the-mezzanine/</id>
    <content type="html">&lt;p&gt;Nicholson Baker’s The Mezzanine is ostensibly about nothing. An account of a man who purchases shoelaces and milk during his lunch break. That’s it.&lt;/p&gt;
&lt;p&gt;But within that hour, Baker stretches the mundane into something strangely majestic. A 120-page meditation on escalators, office supplies, bathroom etiquette, and the fragile genius of product packaging. It’s one of the funniest books I’ve read in years.&lt;/p&gt;
&lt;p&gt;What really struck me, though, wasn’t just the humour — it was the tactile world that is evoked. The protagonist lives in a clearly analogue age, where staplers clunk, paper is sacred, and the objects of daily life are designed with care and purpose. Reading it felt like being in a beautifully curated stationery shop where every item has a story.&lt;/p&gt;
&lt;p&gt;The footnotes—sprawling, frequent, and delightful—mimic the way thoughts naturally branch and loop back. At times they’re longer than the main text. But rather than feel cluttered, they create a layered, rhythmic inner monologue. If your brain tends to zigzag like mine, it feels like home.&lt;/p&gt;
&lt;p&gt;This isn’t a review. I’ve already forgotten some of the book (it was for our anti-book book club some months ago). But it left a strange comfort behind. A sense that someone else, even a fictional someone, thinks the way I do—treating the trivial with the seriousness it sometimes deserves.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Event Driven Architecture Rules of Thumb</title>
    <link href="/eda-rules-of-thumb/"/>
    <updated>2025-01-09T00:00:00Z</updated>
    <id>/eda-rules-of-thumb/</id>
    <content type="html">&lt;p&gt;Event driven architectures are a fantastic mechanism for powering decoupled services. But they depend on the contract - the actual data points within each event.
As with these sorts of things, there is always an &amp;quot;it depends&amp;quot; of what data should go within events. Therefore, this guide is not prescriptive and should be followed as a &amp;quot;rules of thumb&amp;quot; - guiding principles, not rules.&lt;/p&gt;
&lt;h2&gt;1. Think about testing and versioning early.&lt;/h2&gt;
&lt;p&gt;Each service should be considered in its own little silo and looked upon by consumers as a third party. Often overlooked is the moment you need to change your event to modify the payload. How do you version your events? How do you tell consumers to upgrade?
It&#39;s certainly not an easy problem to solve. The first step is to define some guidance. In general, this is what I try to do:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add a &amp;quot;deprecation&amp;quot; message to the old event - inform consumers of when the event will be retired.&lt;/li&gt;
&lt;li&gt;Publish both the old and new event for a time&lt;/li&gt;
&lt;li&gt;Publish documentation of how to upgrade to the new event&lt;/li&gt;
&lt;li&gt;Do a codebase search across the GitHub organisation to find instances of consuming that event - inform those teams who manage those services.&lt;/li&gt;
&lt;li&gt;Utilise an event catalogue such as eventcatalog.dev, or EventBridge&#39;s schema discovery features, to find other areas it might be consumed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An informal way is to just make sure you verbally inform all the teams who manage the consumers. A more robust way could be to include a &amp;quot;deprecated&amp;quot; property and make sure that consumers check it. Another approach I have seen is to publish explicit &amp;quot;deprecation&amp;quot; events and surface them somewhere public (like a Slack channel).
That said, once you have an established pattern for releasing the old and new events, there is basically zero cost to keeping the old one around (unless there is some kind of security issue).&lt;/p&gt;
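&lt;p&gt;As a minimal sketch of that &amp;quot;deprecated&amp;quot; property approach (the event names, retirement date and publish function here are hypothetical stand-ins for a real bus client):&lt;/p&gt;

```python
# Sketch of dual-publishing during an event migration.
# PUBLISHED stands in for a real event bus client (SNS, EventBridge, etc.).
PUBLISHED = []

def publish(topic, event):
    PUBLISHED.append((topic, event))

RETIRE_AT = "2026-06-01"  # hypothetical retirement date

def publish_order_created(order_id):
    # The old event still goes out, but carries a deprecation notice
    # that well-behaved consumers can check and alert on.
    publish("OrderCreated.v1", {
        "orderId": order_id,
        "deprecated": True,
        "deprecationMessage": f"OrderCreated.v1 retires on {RETIRE_AT}; migrate to v2",
    })
    # The new event is published alongside it for the migration window.
    publish("OrderCreated.v2", {"order": {"id": order_id}})
```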
&lt;p&gt;As regards testing, there are tools such as Pact that can test contracts. But all testing libraries can perform mocks of the event emitter you are using and therefore mandate a certain event structure.&lt;/p&gt;
&lt;h2&gt;2. Avoid inter-service events&lt;/h2&gt;
&lt;p&gt;An anti-pattern I see in &amp;quot;pure&amp;quot; event driven architectures is when a service begins to emit events that are consumed only by itself. Whilst events are good for decoupling outside your application&#39;s scope, there is no need to do the same within the service - it&#39;s already coupled. It often defeats the object of decoupling by creating more internal complexity for your application developers.
If you need to perform an action asynchronously, I&#39;d suggest using queues. Otherwise, if you need data on the fly within a system, just call the function directly.&lt;/p&gt;
&lt;h2&gt;3. Structure events&lt;/h2&gt;
&lt;p&gt;In general you should try to have the minimum amount of data required for the consuming service(s) to perform the desired action. Across your organisation, it&#39;s useful to have a set structure that all events should follow for consistent parsing and error handling.
In my view, all events should contain a handful of properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The source application name.&lt;/li&gt;
&lt;li&gt;The version of the event/source application.&lt;/li&gt;
&lt;li&gt;A trace identifier.&lt;/li&gt;
&lt;li&gt;The data - a generic JSON object containing all the info required.&lt;/li&gt;
&lt;li&gt;A time stamp of when it was emitted.&lt;/li&gt;
&lt;li&gt;If you want to have a zero-trust architecture, then a sha256 hex digest &amp;quot;signature&amp;quot; property based on the above values.&lt;/li&gt;
&lt;/ul&gt;
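&lt;p&gt;A minimal sketch of such an envelope (the field names are illustrative, and a bare hash is shown for the &amp;quot;signature&amp;quot; - a real zero-trust setup would use an HMAC or asymmetric signature keyed with a secret, since anyone can recompute a plain sha256):&lt;/p&gt;

```python
import hashlib
import json
import time
import uuid

def make_event(source, version, data):
    """Build an event envelope with the handful of properties above."""
    envelope = {
        "source": source,              # source application name
        "version": version,            # event/source application version
        "traceId": str(uuid.uuid4()),  # trace identifier
        "data": data,                  # minimal payload consumers need
        "emittedAt": time.time(),      # timestamp of emission
    }
    # "Signature": sha256 hex digest over a canonical JSON rendering of
    # the other fields, so consumers can detect tampering in transit.
    canonical = json.dumps(envelope, sort_keys=True, separators=(",", ":"))
    envelope["signature"] = hashlib.sha256(canonical.encode()).hexdigest()
    return envelope

event = make_event("orders-service", "1.2.0", {"orderId": "ord_123"})
```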
&lt;h2&gt;4. Be strict with accidental coupling&lt;/h2&gt;
&lt;p&gt;One common mistake I see in event driven architectures is &amp;quot;psychologically coupled&amp;quot; services. This often occurs when Team 1 comes along and says &amp;quot;Hey Team 2, we need to build this feature, can you start notifying us when the name of a user changes?&amp;quot; And Team 2 begrudgingly agrees and starts emitting events when the name changes. No harm done, right? Whilst this is an isolated example, once you start crafting events bespoke to one consumer you create an immediate coupling. And it means that in the future, you have to maintain that event for a single consumer.
Another example is when another service needs an extra piece of information not entirely related to your event, but adjacent to it. For example, you might emit an event called &amp;quot;OrderCreated&amp;quot;. Another system consumes this but has a requirement to display the expected payment information. This should theoretically be handled by an &amp;quot;InvoicePaid&amp;quot; event. But alas, due to the phase of the moon and/or time constraints, the payment information associated with the order is added to this event. In this case, we&#39;ve created a three-way coupling - between orders, invoices and the end consumer. Whilst challenging, it&#39;s prudent to push back on requests such as these and treat any consumers as a black box. Emit events with the data that tells them about the event. Nothing more. Nothing less.&lt;/p&gt;
&lt;h2&gt;5. Avoid event chains&lt;/h2&gt;
&lt;p&gt;Sometimes your system might need to do a sequence like so:
&lt;code&gt;Order placed &amp;gt; invoice created &amp;gt; invoice paid &amp;gt; order approved&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Whilst the instance above is a valid one, it’s extremely easy to fall into a trap where your event driven architecture becomes an “eventually consistent sync architecture”.
You eventually get nested chains of events each sequenced one after another.&lt;/p&gt;
&lt;p&gt;As an antidote consider:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do those intermediate steps need to exist?&lt;/li&gt;
&lt;li&gt;Could I fan out an event to multiple consumers instead of sequencing?&lt;/li&gt;
&lt;li&gt;Could I process the event in parallel? Do I need the result of the preceding event to action the next?&lt;/li&gt;
&lt;li&gt;Is this an event in the true sense? Or more of a pub/sub architecture?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s think of some examples based on this:&lt;/p&gt;
&lt;p&gt;In the payment example above, it might seem impossible to architect around this. At the end of the day, the invoice should be paid before the order can be approved and shipped.
In this case, an EDA pattern is not what you’re after. Rather it’s a saga pattern - an event orchestrator that ensures an exact execution sequence.&lt;/p&gt;
&lt;p&gt;Let’s consider another example: a social media system. You may have an event chain like this:
&lt;code&gt;PostLiked &amp;gt; NotifyPostOwner &amp;gt; GenerateFeedEntry&lt;/code&gt;
In this case, the events can be processed in parallel. The post owner does not need to be notified of the like before the feed entry is created.&lt;/p&gt;
&lt;p&gt;One final example - a notification system.
Your event chain might look like this: &lt;code&gt;UserRegistered &amp;gt; WelcomeEmailSent &amp;gt; AccountActivationReminder &amp;gt; OfferSent&lt;/code&gt;
We’ll gloss over the fact that you shouldn’t roll your own drip-marketing funnel. Anyway…
In this case you can use a fan-out pattern to react to the original &lt;code&gt;UserRegistered&lt;/code&gt; event. If there is some logic around not sending one email before another, then publish the event with a delivery delay.&lt;/p&gt;
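&lt;p&gt;As a sketch (the handler names here are illustrative), fanning out means every consumer reacts to the same event independently, instead of chaining off the previous step:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;interface UserRegistered { userId: string; email: string }

// Each consumer subscribes to the same event independently.
const consumers = [
  async function sendWelcomeEmail(event: UserRegistered) { /* send the email */ },
  async function scheduleActivationReminder(event: UserRegistered) { /* enqueue with a delay */ },
  async function scheduleOffer(event: UserRegistered) { /* enqueue with a delay */ },
];

// Fan out: one failing consumer does not block the others - unlike
// a chain, where each step assumes the previous one succeeded.
export async function publish(event: UserRegistered) {
  const results = await Promise.allSettled(consumers.map(function (consume) { return consume(event); }));
  return results.map(function (result) { return result.status; });
}
&lt;/code&gt;&lt;/pre&gt;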
&lt;p&gt;In all of these examples, unnecessary dependencies are created, making the system difficult to debug, with each step assuming the previous one succeeded.&lt;/p&gt;
&lt;p&gt;I&#39;m sure you can think of many more than 5 rules of thumb for event-driven architectures, but these are some I think are often overlooked.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>ClickOps and IaC - which are Pets and Cattle?</title>
    <link href="/pets-vs-cattle/"/>
    <updated>2025-01-08T00:00:00Z</updated>
    <id>/pets-vs-cattle/</id>
    <content type="html">&lt;p&gt;Ancient history tells us that a peoples known as the &amp;quot;sysadmins&amp;quot; or &amp;quot;web masters&amp;quot; used to manually configure servers via the archaic command line. These people literally SSH&#39;ed into machines and ran commands on them. Utter animals.&lt;/p&gt;
&lt;p&gt;And these peoples then needed to baby the servers. Nursing them as a young child to make sure they never broke, never went down or got tired.&lt;/p&gt;
&lt;p&gt;Then the cloud renaissance came, ushering in a revolutionary way of configuring servers - Infrastructure-as-Code. Gone are the days of running CLI commands - no no. Instead you&#39;re writing Terraform or CDK, and configuring servers via version-controlled, repeatable methods. Rather than pets that we had to baby, we now had cattle - a cow is just a cow. We can get rid of as many as we like and still rebuild the herd.&lt;/p&gt;
&lt;p&gt;To many people this seemed like a good thing. We shed a lot of the complexity of legacy systems configured by relics long gone.&lt;/p&gt;
&lt;p&gt;The problem is that Infrastructure-as-Code is not synonymous with having cattle.&lt;/p&gt;
&lt;h2&gt;IaC != Cattle&lt;/h2&gt;
&lt;p&gt;The promise of Infrastructure-as-Code is that we should have infinitely repeatable infrastructure. And by and large, it brought us that - 95% of the way there. The problem is the last 5%.
That 5% is all the things that are not repeatable, like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Integrations with other systems&lt;/li&gt;
&lt;li&gt;Custom scripts&lt;/li&gt;
&lt;li&gt;SSL certificates - there are ACM certs but these don&#39;t always fit in complex environments.&lt;/li&gt;
&lt;li&gt;Literally any EC2 instance - yes there is cloud-init, but it&#39;s not a silver bullet&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Many may chalk this up as &amp;quot;bad engineering&amp;quot; but this is the reality of the world. If your business honestly could recreate its entire infrastructure from scratch using only code, then great for you - but this is not the norm.
This is not even counting the fact that no matter what IaC you use, there is still some kind of initial manual work to provision the cloud accounts. At my day job, we have a &amp;quot;manual-account-setup&amp;quot; Terraform module that is run to provision new accounts, but it has to be run manually using local AWS credentials - not in a pipeline.&lt;/p&gt;
&lt;p&gt;But let&#39;s talk about the 95% that is automated. Sure, you have an all-singing, all-dancing pipeline that runs your Terraform. But then how do you upgrade your Terraform, add governance across multiple teams and ensure that the system is secure? I&#39;ll admit that this process is more observable than SSHing into servers, but it&#39;s still babying of a kind.&lt;/p&gt;
&lt;p&gt;And whilst your walled garden of Terraform is performing great, you then get a requirement to connect to a DB outside of your IaC, and before you know it you end up manually provisioning security groups based on the IP address of your EC2 instance. Quickly, it becomes only marginally better than the old days.&lt;/p&gt;
&lt;h2&gt;Manual Setup of a Server does not mean Pets&lt;/h2&gt;
&lt;p&gt;Let&#39;s remember the original goal of IaC - not simply automation, but rather repeatability and immutability. A manually provisioned server could still be &amp;quot;cattle&amp;quot; as long as the &lt;em&gt;exact&lt;/em&gt; process to stand it up again is documented.
This process is much more difficult than with code, but it&#39;s not impossible.&lt;/p&gt;
&lt;p&gt;Before the &amp;quot;IaC&amp;quot; tools that we have now, most of the time there was at least some sort of &amp;quot;setup.sh&amp;quot; script used to configure the server. It meant that provisioning a new one was as simple as adding it to the internal network and running the script through a KVM.&lt;/p&gt;
&lt;p&gt;So did it matter if your server burned to the ground? No, it was just a server. And as long as you didn&#39;t literally have only one server, you could just roll over to the backup and provision a new one.&lt;/p&gt;
&lt;h2&gt;How to be truly resilient&lt;/h2&gt;
&lt;p&gt;The question is, how do we make our infrastructure truly resilient? Truly repeatable?&lt;/p&gt;
&lt;h3&gt;1. Shoot your cattle - regularly&lt;/h3&gt;
&lt;p&gt;Though this might be unpopular with the rest of your team, I&#39;d highly recommend regularly performing a &lt;code&gt;terraform destroy&lt;/code&gt; on your non-production environments, then running a &lt;code&gt;terraform apply&lt;/code&gt; to recreate them. If something breaks, you can quickly amend your code to make sure it completely recreates the environment.&lt;/p&gt;
&lt;h3&gt;2. Adopt the ephemeral mindset&lt;/h3&gt;
&lt;p&gt;As a team, it&#39;s key that you drive the ephemeral mindset into all that you do. This mindset should encompass development, architecture and operations.
This means moving beyond the idea of &amp;quot;environments&amp;quot; and instead thinking in terms of sandboxes that can be destroyed and recreated. In connection with the first point, a good hallmark of this is being able to create an entire sandbox environment for every new pull request, or locally on a developer&#39;s machine.&lt;/p&gt;
&lt;p&gt;There are other tactics to drive resiliency in infrastructure but these are the two that I believe give outsized impacts. Building resilient infrastructure is not a trivial task but by adopting these two tactics you&#39;ll set yourself up with a system that requires much less manual intervention, makes upgrades painless and ensures that the infrastructure is always in a deployable state.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Learning to Type Fast</title>
    <link href="/typing-fast/"/>
    <updated>2024-12-12T00:00:00Z</updated>
    <id>/typing-fast/</id>
    <content type="html">&lt;p&gt;Typing is the conduit by which thoughts flow via the keyboard to the computer. Doing this quickly can massively improve not only your productivity but your flow.
Remember when you had a phone that looked like this?&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img alt=&quot;A keyboard with number 0-9. Each button represents 3 characters&quot; src=&quot;../../assets/images/old-keyboard.jpg&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Remember how slow it was? It took ages to write something - &#39;wic iz y we rote lyk this&#39;.&lt;/p&gt;
&lt;p&gt;If you work as a software engineer, you&#39;ll be typing all day - whether programming or, more predominantly, writing emails and Slack messages.&lt;/p&gt;
&lt;p&gt;I found that I was typing reasonably fast (70WPM) but was often unable to keep up with my thoughts as I typed. And I made a LOT of mistakes.
The reason? I typed with my index fingers only. I was super quick with it, but I knew I would never be able to write faster unless I retaught myself to type with all 10 digits.&lt;/p&gt;
&lt;p&gt;My first step was to establish a baseline - a test that I could use to track my typing speed over the course of this learning experience. For me this was the standard 1-minute typing test on 10fastfingers.&lt;/p&gt;
&lt;p&gt;Next, I researched a tool that could re-teach me how to type. I found typingclub to be an incredible tool for this.&lt;/p&gt;
&lt;h2&gt;The build up&lt;/h2&gt;
&lt;p&gt;Typingclub starts by building up which fingers you use. Initially it focuses on just the index fingers, building the muscle memory for which keys they should press. It keeps things simple by having everything lower case and focusing on the position of your fingers before typing anything. Once I had reached a stage where I could use 3 fingers, I started to apply this in my job. My typing was biblically slow but I was building the muscle memory.&lt;/p&gt;
&lt;h2&gt;A new keyboard&lt;/h2&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/my-keyboard.jpg&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;This has nothing to do with typing speed per se (more on that after), but a new keyboard would affect my typing experience.
Previously, I was using a Logitech K380. A fantastic keyboard with multi-device Bluetooth and plenty of action keys. For £30, it&#39;s a steal and I&#39;ve owned two of them over the years (after the first one met its unfortunate end with the &amp;quot;e&amp;quot; key totally broken). But I found the typing experience quite flat.
A mechanical keyboard was the obvious answer. But this posed a problem. When I was a teenager, I used to play copious amounts of StarCraft 2 - with a mechanical keyboard. Soon, my left forearm swelled with fluid and I eventually had to get carpal tunnel surgery to correct the problem.
Wanting to avoid all that pain again (literally), I was a bit hesitant about picking up a new one. Fortunately, mechanical keyboards have come a long way. One of those new developments is the &amp;quot;low profile&amp;quot; mechanical keyboard: a slimmed-down version that sits lower, so it doesn&#39;t strain your hands as much.&lt;/p&gt;
&lt;p&gt;After a great deal of searching, culminating in an Excel spreadsheet comparison, I landed on the Nuphy Air75v2 with Wisteria switches. These switches are most akin to Cherry Brown switches and have a nice tactile bump on each keystroke. Crucially, the keyboard has multi-device Bluetooth to allow switching between my work and personal machines.&lt;/p&gt;
&lt;p&gt;The new keyboard actually reduced my typing speed by a lot, because I kept accidentally hitting keys or missing the correct one entirely.
It was worth the persistence though, because the keyboard massively improved my enjoyment of typing. The satisfying &amp;quot;thonk&amp;quot; of each keypress echoes like a sturdy industrial-era engine.&lt;/p&gt;
&lt;h2&gt;More keys&lt;/h2&gt;
&lt;p&gt;Last up was my pinky finger. I found this difficult to practise because the &amp;quot;non-pinky&amp;quot; way of typing was so heavily ingrained. With practice and by being conscious about it, I managed it. Although, I&#39;ll confess that I&#39;m far from perfect in this regard.
I kept grinding on these lessons daily - usually doing 5 a day. They soon moved onto more complex topics such as punctuation, numbers and macros. I&#39;m continuing these lessons now, but by around lesson 300 I had a fairly good typing speed.&lt;/p&gt;
&lt;p&gt;Once I had grasped using all 10 fingers, I moved on to measuring my progress and practising more common phrases. For this I used 10fastfingers.com and the top 250 words test. I applied the 80/20 rule here: my thinking was that if I could type the 250 most-used words, that would account for 80% of my typing.&lt;/p&gt;
&lt;h2&gt;Progress&lt;/h2&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/typing-progress.jpg&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;My highest words per minute (as measured on 10fastfingers) is 81WPM and my average is staying well above where I started, so I&#39;m counting that as a success!&lt;/p&gt;
&lt;p&gt;If you work on a computer daily, I&#39;d strongly recommend improving your typing speed as your fluency of transmitting information will increase dramatically.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Allgood - Instant healthcheck webpage and API for JS/TS projects</title>
    <link href="/allgood/"/>
    <updated>2024-11-25T00:00:00Z</updated>
    <id>/allgood/</id>
<content type="html">&lt;p&gt;Over the weekend, I shipped a new open source project - allgood. It&#39;s an npm module designed to instantly add a &lt;code&gt;/healthcheck&lt;/code&gt; page to your app. Out of the box it supports Express, Fastify and Hono. It could be adapted for Next as well (although I haven&#39;t tested this).&lt;/p&gt;
&lt;p&gt;After you set it up, you get a page like this:
&lt;img src=&quot;../../assets/images/allgood/allgood.png&quot; /&gt;&lt;/p&gt;
&lt;p&gt;You can find the &lt;a href=&quot;https://github.com/joshghent/allgood&quot;&gt;code on GitHub&lt;/a&gt; and the &lt;a href=&quot;https://www.npmjs.com/package/@joshghent/allgood&quot;&gt;library published on NPM&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It was a fun project to build and satisfying to ship something in a weekend. I haven&#39;t built an npm module in over 5 years(!) so it was nice from that perspective also.&lt;/p&gt;
&lt;h2&gt;Why I built allgood&lt;/h2&gt;
&lt;p&gt;It mainly boiled down to 3 key reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Shipping is good. It&#39;s often easy to get caught up in the trap of endless delivery timelines, interspersed with laborious planning meetings, discussions and the like. Creating something and putting it out into the world is amazing. And this project was something small I could deliver that provides value. It was satisfying to actually see the project live.&lt;/li&gt;
&lt;li&gt;Open source is good. A lot of my recent programming work outside of the jobby job has been on commercially focused products (a side hustle if you will). To avoid the complexity that billing and marketing bring, I wanted to create something entirely free and for no tangible benefit to myself other than self-satisfaction. Doing so enabled me to focus on the code and to craft a nice API (internal) - thinking about extensibility and simplicity.&lt;/li&gt;
&lt;li&gt;It&#39;s a tool I&#39;ll use. Like most, I scratch my own itches. It&#39;s one of the great things about knowing how to code. You stumble across a problem you have, and you can solve it!&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;How I built it&lt;/h2&gt;
&lt;p&gt;By far, the most difficult part was setting up the build system so that it worked with both &lt;code&gt;require&lt;/code&gt; and &lt;code&gt;import&lt;/code&gt; syntax. It was a total pain, and tells me something not that nice about the state of the JS ecosystem - I digress. I knew this was going to be a pain though so I figured I&#39;d use a boilerplate. The one I landed on needed a little bit of fudging but I got there in the end.&lt;/p&gt;
&lt;p&gt;Afterwards, I got started on writing the check code. I started with memory usage as I thought this would be the easiest - simply using &lt;code&gt;process.memoryUsage()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now though, there was a challenge. The user could configure N checks to run on their healthcheck page. I needed to run all of the checks (in parallel, for speed) and then display the results in a consistent order. I also needed to call the correct check function.
The solution I landed on was for all checks to use a consistent interface. This meant I could guarantee the output of each check was consistent. They are sort of decoupled from the app itself meaning the display message and other properties are entirely within their control.&lt;/p&gt;
&lt;p&gt;Here is how the interface looked&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;export interface HealthCheck {
  status: Status;
  value: string;
  componentName: string;
  message: string;
  time: number; // Time in milliseconds that elapsed running the check
}

// The interface that each check implements
export interface CheckFn {
  (config: Config): Promise&amp;lt;HealthCheck&amp;gt;
}

export interface CheckRegistry {
  [key: string]: CheckFn;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I then created an object mapping of &amp;quot;checks&amp;quot; to their functions, and tethered it together with &lt;code&gt;.map&lt;/code&gt; and &lt;code&gt;Promise.all&lt;/code&gt;.&lt;/p&gt;
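&lt;p&gt;Roughly, the wiring looked like this (the two checks below are simplified stand-ins for the real ones):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Each check conforms to the same interface, so results are
// consistent regardless of which check produced them.
const registry = {
  memory: async function (config: object) {
    const used = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
    return { status: 'ok', value: used + ' MB', componentName: 'memory', message: 'Memory usage OK', time: 0 };
  },
  uptime: async function (config: object) {
    return { status: 'ok', value: Math.round(process.uptime()) + ' s', componentName: 'uptime', message: 'Uptime OK', time: 0 };
  },
};

// Promise.all runs the checks in parallel and preserves input order,
// so the page always displays results in the configured order.
export async function runChecks(names: (keyof typeof registry)[], config: object) {
  return Promise.all(names.map(function (name) { return registry[name](config); }));
}
&lt;/code&gt;&lt;/pre&gt;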
&lt;p&gt;Knowing my own monkey mind, I wanted to make this as maintenance-free as possible, so that when new dependency PRs are created, I can just merge them straight away. Therefore, I took some time to create a test suite. Initially I wanted to use the new native Node test runner, but after trying to wrangle with mocks I gave up and used Jest instead - which just worked.&lt;/p&gt;
&lt;p&gt;One surprising outcome of this project was the package manager. pnpm had a really nice interface and was extremely quick. There weren&#39;t a lot of dependencies to install, but I never got into the usual hassle of dependency wrangling.&lt;/p&gt;
&lt;h2&gt;The future&lt;/h2&gt;
&lt;p&gt;I&#39;m not sure what this project will yield but after sending it round to some colleagues I&#39;ve got a good response and had some good feature requests. It&#39;s fun to code and create things purely for the sake of the act of creation. If you know how to code, do the same - you&#39;ll feel amazing.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Vendor Lock-in is an Imaginary Problem</title>
    <link href="/vendor-lockin/"/>
    <updated>2024-10-23T00:00:00Z</updated>
    <id>/vendor-lockin/</id>
<content type="html">&lt;p&gt;Vendor lock-in is often cited as a reason to choose on-premise hosting, or to be mindful of &amp;quot;avoiding&amp;quot; it when using the cloud (whatever that means). Whilst there are some compelling arguments to mitigate risks associated with vendor lock-in, I&#39;d argue that this is a completely imaginary problem.&lt;/p&gt;
&lt;h2&gt;1. You&#39;re already locked-in - at every level&lt;/h2&gt;
&lt;p&gt;Vendor lock-in is always discussed through the lens of the cloud. But in reality the cloud is not a special class of tooling; it&#39;s like everything else. If you use a language like Javascript, you&#39;re already locked in to the myriad of NPM packages you use. Worse still if you&#39;re using NextJS, React, Rails or Laravel - these frameworks all lock you in to some degree. What about your continuous integration? The same organisations that worry about vendor lock-in to the cloud providers never bat an eyelid at writing thousands of bespoke lines of GitHub Actions workflows.
The point is, the cloud is not the only place you&#39;re locked in. You&#39;re locked in everywhere. It&#39;s about making the appropriate trade-offs.&lt;/p&gt;
&lt;h2&gt;2. Businesses rarely migrate from a cloud provider&lt;/h2&gt;
&lt;p&gt;Aside from a few extremely rare cases, it&#39;s likely that once AWS (or whatever other cloud provider) gets its claws into you, you won&#39;t escape its clutches. Rather, you&#39;ll likely embrace it: abandoning the lift-and-shift you originally oversaw, and building &amp;quot;cloud native&amp;quot; - but why? Because it&#39;s cheaper (on the surface) and the recommended way to do things (according to the cloud providers). You also end up hiring staff with the skills to wrestle these tools to do your bidding. It&#39;s unlikely then that after doing all that work you would want to migrate to another provider. Not only is the cost so astronomically high as to make it unfeasible, but there is also no reason, commercial or otherwise, for doing so. Even if they heavily raise prices, it&#39;s still a footnote in the costs that most businesses pay. In some cases, however, at enormous scale, it does become economically advisable to migrate elsewhere. If you built the product in a vendor-agnostic way, I&#39;m sure the migration team will thank you. But if not, you&#39;re likely at a scale where the cost of the migration is just viewed as &amp;quot;one of those things&amp;quot;.&lt;/p&gt;
&lt;h2&gt;3. You know what you&#39;re signing up for&lt;/h2&gt;
&lt;p&gt;When you go to a shopping centre (or mall), you might need to buy some clothes, gifts or yet another Apple device. But then you suddenly realise that you need shampoo. Now, will you go to the pharmacy shop that sells the shampoo despite the cost being a little higher? Or will you get it from your regular supermarket that is 20 minutes away? Of course, you&#39;ll pay the premium and buy it at the shopping centre. There are no tangible benefits to the product - they are the same. What you are paying for is convenience.
The cloud is no different, you might go in first of all to simply get some compute. But then quickly you start using it for everything - it&#39;s convenient. The business has already signed off on it, there is budget and your tooling is setup for it.
In other words, you know what you signed up for when you started using the compute. And if you need to pay a bit extra to use other things then big deal, you would have paid that cost elsewhere in the first place.
Public cloud pricing is publicly available (although notoriously difficult to forecast). So, due diligence should be taken up front to understand what kind of cost implications the cloud could have. While convenience often outweighs cost at smaller scales (like buying shampoo one time), in larger enterprises, even slight cost increases can lead to substantial expenses. This is why some companies consider multi-cloud or cloud-agnostic strategies to balance convenience with financial prudence.&lt;/p&gt;
&lt;h2&gt;The only sound argument against vendor lock-in&lt;/h2&gt;
&lt;p&gt;Local development. That&#39;s it. Once you use bespoke cloud software, it becomes extremely challenging to create working development environments that mimic your live deployment. Therefore, I do advise against using &amp;quot;cloud-native&amp;quot; tools, because they make your life a lot more difficult when doing development work (which is what makes the money).&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;As discussed, vendor lock-in is largely an imaginary problem. There may well be one (or two) extremely valid reasons for avoiding it, but generally it&#39;s something you shouldn&#39;t worry about - get on with shipping.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>DynamoDB Considered Harmful</title>
    <link href="/dynamodb-harmful/"/>
    <updated>2024-10-21T00:00:00Z</updated>
    <id>/dynamodb-harmful/</id>
<content type="html">&lt;p&gt;I think DynamoDB is quite a useful database. But I&#39;m here to tell you that you should pretty much never use it, whether in a greenfield project or a mature product. Let me explain why.&lt;/p&gt;
&lt;h3&gt;1. Its inflexibility will slow you down&lt;/h3&gt;
&lt;p&gt;Initially, DynamoDB&#39;s speed and schema-less nature can make development fast. Although DynamoDB isn&#39;t modelled around schemas (like a traditional SQL database), it is modelled around queries. This means you need to know the query model up front. In a mature product you might have a good idea of the possible queries you&#39;d want, but in a greenfield product it&#39;s downright impossible. And regardless of how mature your product is, we all suffer from a fog of war - it&#39;s impossible to know what feature requests will be fired at us by a product manager. This is where DynamoDB starts to become as cumbersome as jeans in a rainstorm. Because you originally modelled the database around the queries you knew about, it becomes inflexible to change for the new queries you need to perform. Oftentimes, teams just add new global secondary indexes (each of which, behind the scenes, is a complete copy of your table). But these are limited to 20 per table, and they create another problem: deciding which index to use when. This becomes a headache to maintain and build upon.
Some may reason that they can create other tables around the new query model, or change the existing database. But these solutions are also rife with complexity. For example, DynamoDB has a 1MB limit on results from scans. Therefore, to write a migration script you&#39;d need to loop through all this data page by page, update it in memory, remove the old data and then write it back to the database. Try doing that in a no-downtime fashion.
SQL, on the other hand, can be mashed, broken, cracked and mangled to create whatever queries you want. You can join tables wherever relationships exist. And if you ever get into a place where the table design is bad, you can do a SQL migration - it might take a while, but there are no query limits to contend with.&lt;/p&gt;
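&lt;p&gt;To illustrate the shape of such a migration, here&#39;s a sketch with in-memory stand-ins for the scan and write calls. The real AWS SDK calls page through results via &lt;code&gt;LastEvaluatedKey&lt;/code&gt;, but the loop is the same:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// In-memory stand-ins for a DynamoDB Scan (at most 1MB per page,
// paged via LastEvaluatedKey) and batch writes.
const PAGE_SIZE = 2;
const table = [
  { id: 'a', total: 10 },
  { id: 'b', total: 20 },
  { id: 'c', total: 30 },
];

async function scanPage(startKey: number = 0) {
  const items = table.slice(startKey, startKey + PAGE_SIZE);
  const next = startKey + PAGE_SIZE;
  // Mimic LastEvaluatedKey: absent once the table is exhausted.
  const lastEvaluatedKey = table.slice(next).length ? next : undefined;
  return { items, lastEvaluatedKey };
}

async function writeItems(items: { id: string; total: number }[]) {
  for (const item of items) {
    table[table.findIndex(function (row) { return row.id === item.id; })] = item;
  }
}

interface Transform { (item: { id: string; total: number }): { id: string; total: number } }

// Every item must be read, transformed in memory and written back,
// one page at a time - there is no single UPDATE statement.
export async function migrate(transform: Transform) {
  let startKey: number | undefined = 0;
  do {
    const page = await scanPage(startKey);
    await writeItems(page.items.map(transform));
    startKey = page.lastEvaluatedKey;
  } while (startKey !== undefined);
}
&lt;/code&gt;&lt;/pre&gt;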
&lt;h3&gt;2. You don&#39;t have the scale nor will ever need it&lt;/h3&gt;
&lt;p&gt;DynamoDB was designed for hyperscaler levels of traffic. Fewer than 50 sites across the entire internet receive that level of traffic. Ergo, you do not need DynamoDB&#39;s scale. Even Facebook, which started on MySQL (though it eventually added other supporting databases), didn&#39;t need DynamoDB-level scale initially - and when it did, it refactored. Choosing a database technology for scaling reasons is akin to printing &amp;quot;CEO&amp;quot; business cards before writing a line of code - it&#39;s premature optimisation.&lt;/p&gt;
&lt;h4&gt;&amp;quot;But what about scale?&amp;quot; I hear you cry&lt;/h4&gt;
&lt;p&gt;This is a non-argument made by people who haven&#39;t launched serious products. Any SQL database with enough money thrown at it can scale plenty fine. SQL databases power some of the most used websites in the world - including all Wordpress sites &lt;a href=&quot;https://colorlib.com/wp/wordpress-statistics/&quot;&gt;(43.3% of all websites)&lt;/a&gt;, Spotify &lt;a href=&quot;https://the-cfo.io/2024/07/29/revenue-radar-spotify-hits-high-note-with-q2-2024-results-but-faces-industry-discord/&quot;&gt;(626M MAU)&lt;/a&gt; and Twitter/X &lt;a href=&quot;https://www.demandsage.com/twitter-statistics/&quot;&gt;(516M MAU)&lt;/a&gt;.
If you reach a scale where SQL becomes the bottleneck, then you likely have the money to solve the problem, either by refactoring parts of your app or mitigating the issues (with caches, increased horizontal capacity, sharding, etc.). In any case, scale is not a reason to discount SQL.&lt;/p&gt;
&lt;h3&gt;3. You need other technologies to handle basic database functions&lt;/h3&gt;
&lt;p&gt;Let&#39;s say you are using DynamoDB for your little CRUD app. Percy the product manager swaggers up to your desk and asks, &amp;quot;Can we add search and sorting of tables to our product, pretty please?&amp;quot;. After a few umms and ahhs, you probably realise that DynamoDB ain&#39;t gonna cut it. You&#39;re going to use another tool, like OpenSearch or Algolia. Before any DynamoDB rage nerds ask: yes, you can do search using &amp;quot;contains&amp;quot;, and you can do sorting. But try doing something even remotely complex and it becomes impossible. You know what could add search and sorting to your app? SQL! DynamoDB needs all this other cruft to provide basic functions to your app. And it&#39;s not just a case of spinning up some new system and hey presto, it works. Nope! You&#39;ve got to keep those systems in sync (DynamoDB Streams or Kinesis), then you&#39;ve got to configure indexes and so on.
Your technology choice is slowing down the delivery of features to your customers. Guess who also doesn&#39;t care about your database - your customers!&lt;/p&gt;
&lt;h3&gt;4. It&#39;s challenging to work on your system locally&lt;/h3&gt;
&lt;p&gt;Working with DynamoDB locally isn&#39;t as simple as running a Docker container (like, ahem, MySQL or Postgres). So then you&#39;re forced to have a &amp;quot;remote&amp;quot; development environment, where resources deployed to the cloud are used by each developer. These can work, but they provide horrendous experiences: changes to system configuration have to be deployed, and you can&#39;t work offline. A whole host of problems arise from systems that cannot just be run on a computer.&lt;/p&gt;
&lt;h3&gt;In summary&lt;/h3&gt;
&lt;p&gt;DynamoDB isn&#39;t a bad database. It&#39;s simply a tool that came from a certain context - in Amazon&#39;s case, high-throughput writes and reads. In 99% of cases, you are not going to be in that same context. Focus on getting there first with the power of a SQL database. Most apps are just CRUD, and SQL is super good at that - use it, and get back to building the thing.&lt;/p&gt;
&lt;p&gt;PS: A good litmus test to see if your tech stack is working is asking: &amp;quot;in the last 6 months, how many times has our technology impeded or prevented the development of new features or fixes?&amp;quot;. If the answer is more than 3, you probably chose wrong.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>An engineer's guide to the Solutions Architecture role</title>
    <link href="/solutions-architecture/"/>
    <updated>2024-08-21T00:00:00Z</updated>
    <id>/solutions-architecture/</id>
<content type="html">&lt;p&gt;For the past couple of years, I&#39;ve worked as a solutions architect. Originally I stumbled into the role after being offered a contract, but found I&#39;d been doing something akin to it for years. Coming from an engineering background (rather than business), I found I had to make lots of changes to how I approached work. This article is aimed at those who come from an engineering background but have minimal exposure to business.&lt;/p&gt;
&lt;h2&gt;Defining the role&lt;/h2&gt;
&lt;p&gt;A solutions architect can be difficult to define, as the role often has a broad remit. It can include (ordered by technical involvement, from low to high):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enterprise Architect - works at an organization level to align IT strategy with business goals.&lt;/li&gt;
&lt;li&gt;Solutions Architect - focuses on specific projects, designing end-to-end solutions that bridge business requirements and technical implementation.&lt;/li&gt;
&lt;li&gt;Technical Architect - focuses on low-level, code-based design decisions, heavily influencing the implementation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But, presuming we fit nicely in the second category, you will be responsible for:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Creating end-to-end designs for systems based on technical and business requirements.&lt;/li&gt;
&lt;li&gt;Innovating and introducing new technologies to the platform.&lt;/li&gt;
&lt;li&gt;Working with delivery teams to make sure they understand the designs created.&lt;/li&gt;
&lt;li&gt;Working with business users (and business analysts) to translate requirements and pain points into technical improvements.&lt;/li&gt;
&lt;li&gt;Creating PoCs for new systems.&lt;/li&gt;
&lt;li&gt;Selecting vendors to solve business problems.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This isn&#39;t an exhaustive list but gives you a flavour.&lt;/p&gt;
&lt;h2&gt;Understanding the business needs&lt;/h2&gt;
&lt;p&gt;This is the key role that transforms you from a developer into an architect. I&#39;ll admit, it&#39;s something I&#39;m still working on. Unlike technical skills you can &amp;quot;train&amp;quot; by smashing leetcode, these softer skills cannot be approached in the same way. But there are some good patterns to follow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Spend lots of time gathering requirements. I&#39;ve always ended up making mistakes when I&#39;ve rushed this part. This &amp;quot;discovery&amp;quot; should make up at least 60% of the overall project work, depending on the complexity. Principally you&#39;re trying to uncover the known unknowns, and the unknown unknowns. It&#39;s challenging to know when you&#39;re &amp;quot;done&amp;quot; here, but a litmus test I use is: &amp;quot;could I explain how this project works, down to the minute details, to someone outside of this team?&amp;quot;. This stage involves carefully diagramming the current architecture. It also involves creating sequence, BPMN, or state machine diagrams (amongst others) that map out business processes and other artifacts of the system. Use different diagramming systems based on the problem you&#39;re trying to solve. Consider: &amp;quot;what am I trying to communicate?&amp;quot;. Before moving on to the solution phase, make sure you have all the business and technical requirements in one place. Then you will be able to validate that all your possible solutions meet those requirements.&lt;/li&gt;
&lt;li&gt;Consider the solutions. Notice the plural: solutions. Although you may have a good idea of what the &amp;quot;best&amp;quot; option is, it&#39;s vital to consider more than one. Why? Because you risk narrowing your focus and going for a solution that appeals to you rather than the business. Openly share the possible solutions with other architects, developers and engineering leaders. They will be able to share insights that improve your designs. They may also shed light on other systems that have an impact on yours - I find this especially true when you&#39;re new to an organization.&lt;/li&gt;
&lt;li&gt;Handover - once a solution has been selected, it&#39;s time to make sure it&#39;s documented so that the delivery teams can implement it. This phase involves things such as story mapping (what tickets do you need to build the thing), RFCs (for cross-cutting technological decisions), ADRs (for project-specific concerns), and workshops with developers. I&#39;ve often found it useful to record a talk track that narrates the architecture diagrams and talks through how it all works.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Tools of the trade&lt;/h2&gt;
&lt;p&gt;In most organizations, the technology choices should be fairly well trodden - AWS, serverless, NoSQL databases, you get the idea. So what are the &amp;quot;tools&amp;quot; of an architect? Some have already been mentioned, but here is a list of tools that I use regularly as part of my work.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Diagramming. There are a myriad of diagramming techniques. Try to learn as many as you can. If in doubt, ask others in your team. And when you feel confident enough, run a knowledge share session where you teach others about this technique. In general, for architectures use the C4 model. It&#39;s by far the most effective way of communicating technical designs. Ultimately, all diagramming is about effectively communicating to others about how a system or process works.&lt;/li&gt;
&lt;li&gt;Frameworks - Think TOGAF, AWS Well Architected and Zachman. These are frameworks that can be used to inform how you create system designs. In particular, the Well Architected framework provides a huge amount of value through its best practices. Make sure you can competently complete the well architected review and understand the terms within it.&lt;/li&gt;
&lt;li&gt;Documentation - in addition to diagrams, written documentation is a vital resource that you can provide to stakeholders and delivery teams. Becoming adept at written communication is a skill that will stand you in good stead. Documents you can produce include (but are not limited to) RFCs, ADRs, vendor comparison matrices and runbooks.&lt;/li&gt;
&lt;li&gt;Technical thinking - coming from an engineering background, you should have a good sense of whether something you&#39;re designing will be secure, performant and cost effective (in that order). Make sure to validate these assumptions against the real world, perhaps by doing a proof of concept for your selected design. On this point, consider making yourself knowledgeable in things like the OWASP Top 10.&lt;/li&gt;
&lt;li&gt;Misc - I couldn&#39;t think of an umbrella term for this one. But, I&#39;ve often found myself reaching for calculator.aws in order to provide a cost analysis/comparison for the systems I design. There are many other tools that can enrich the designs you do.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Measuring success&lt;/h2&gt;
&lt;p&gt;Engineers can easily measure success by looking at what is delivered. Solutions architects are an &amp;quot;enabler&amp;quot; - they don&#39;t &amp;quot;deliver&amp;quot; anything customer facing in the true sense of the word. But, as we have discussed, the success measures are that the project was discovered correctly (i.e., no further iterations were required after new requirements came to light), good documentation was produced, and the designs were handed off smoothly to the team. Further, you can check that your systems were aligned with business goals and the costs matched what was expected. It&#39;s a challenge to be disconnected from delivery and still measure success - but it is possible. And crucially, this makes sure that you are delivering value.&lt;/p&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;I&#39;ve enjoyed working as a solutions architect. Business was a weak area of mine, so it has been useful to expand my knowledge there. Further, the opportunity to teach and communicate (through diagrams, code and words) is what I love to do. I hope this small collection of thoughts helps you to become an architect!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Should I learn to code if AI will replace me?</title>
    <link href="/will-ai-take-my-job/"/>
    <updated>2024-05-07T00:00:00Z</updated>
    <id>/will-ai-take-my-job/</id>
    <content type="html">&lt;p&gt;In the past few years, AI has gone from a nerdy pipe dream to reality. Its use has become commonplace amongst professionals and enthusiasts alike. Some are using it to create art, write tweets or improve their appearance.&lt;/p&gt;
&lt;p&gt;Although it was predicted that self-driving cars and menial work would be automated, instead it’s targeted a lot of creative pursuits. Already some sectors have seen &lt;a href=&quot;https://fortune.com/2024/02/08/how-many-workers-laid-off-because-of-ai/&quot;&gt;layoffs as a result of automation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As a software engineer, and I know I’m not the only one, I’ve been questioning whether I’ll have a job in 5 to 10 years. Will my job even exist? And for those new to software, what should they learn?&lt;/p&gt;
&lt;p&gt;I don’t have a crystal ball but these are some personal musings on this topic. Hopefully, I can say “I told you so!” in 5 years. Or I’ll be writing a reflective piece on why you should be sceptical of prediction pieces. Time will tell.&lt;/p&gt;
&lt;h2&gt;Will software engineering disappear in 5 years?&lt;/h2&gt;
&lt;p&gt;I highly doubt this one. One analogy for how I see software engineers in the future is the draughtsman. Historically, a draughtsman was employed to create technical drawings of all sorts of things - buildings, bridges etc.&lt;/p&gt;
&lt;p&gt;In the ’60s, CAD came along and changed the whole scene. Draughtsmen had a large portion of their work removed as it had been automated away. Nowadays the profession doesn’t exist per se but has in large part become “CAD Technicians”.&lt;/p&gt;
&lt;p&gt;Now that sounds like I think software engineers are going to die out but I don’t! I instead think that, like draughtsmen, our role will change.
We will work collaboratively with AI models to be more productive.&lt;/p&gt;
&lt;p&gt;Again, the sceptic would say that more productivity means less need for people. But again, just as CAD opened up a world of careers, so too will AI. I can’t even imagine what these might be.&lt;/p&gt;
&lt;p&gt;It’s important to note that AI (not AGI) will always need some kind of “prompt” - the instruction on what to do.&lt;/p&gt;
&lt;p&gt;Software development is not just about writing the code. It’s about taking customer requirements, having them filter through a chain of product managers, and then translating that into functional and non-functional requirements that you can code.&lt;/p&gt;
&lt;p&gt;For example, let’s say your manager comes to you and says he needs a new API endpoint created to handle students&#39; exam results.
Now you know from experience that around 100,000 students are taking their exams - so you need scale. And you know the company is an educational institution so can&#39;t spend too much money. Based on these parameters (and doubtless countless others), you&#39;d recommend using a serverless solution. This is just a small example, yet ask this same thing of ChatGPT and it quickly falls down.&lt;/p&gt;
&lt;p&gt;Of course, this all changes if it becomes trivially cheap to train specific models for each business context. But there is still that translation layer. Often software needs multiple iterations. AI can write stuff but it can’t read people’s minds. It can’t think in abstract creative terms (yet).&lt;/p&gt;
&lt;p&gt;This is all to say that the profession of software engineering will not disappear. Our role may however change to work collaboratively with AI. This will mean less coding and more prompt engineering.&lt;/p&gt;
&lt;h2&gt;Should I train to be a software developer?&lt;/h2&gt;
&lt;p&gt;If you’re new on the scene and learning traditional web development, you might wonder if you should continue.
I’d emphatically say yes! Software is changing all the time. I’m quite new in my career but have still seen multiple technology changes - from frontend frameworks to languages. If you are learning, expect to keep learning and to adapt as the market changes.&lt;/p&gt;
&lt;p&gt;Many recommend becoming the &amp;quot;robot operators&amp;quot; rather than the people who are going to be replaced by robots. In other words, becoming an AI engineer. This is certainly a new growing career path - if it interests you, go for it. If not, then there will be plenty of career paths that remain in web development, backend development and more.&lt;/p&gt;
&lt;p&gt;While learning, I&#39;ve always &lt;a href=&quot;https://joshghent.com/learning-software/&quot;&gt;recommended learning principles&lt;/a&gt; over frameworks. For example, all languages have a way to create loops, declare variables and call functions - these are the &amp;quot;principles&amp;quot; or building blocks of technology. This will help you to stay adaptable and not stuck to one way of thinking.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;We&#39;re living through a very interesting time in technology. AI has become a reality, and it presents a huge opportunity to work in tandem with it and increase our productivity.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Components of a Great Architecture Diagram</title>
    <link href="/architecture-diagrams/"/>
    <updated>2024-05-02T00:00:00Z</updated>
    <id>/architecture-diagrams/</id>
    <content type="html">&lt;ul&gt;
&lt;li&gt;Info - key, author, legend, version history&lt;/li&gt;
&lt;li&gt;Flow diagrams&lt;/li&gt;
&lt;li&gt;VPC and markings&lt;/li&gt;
&lt;li&gt;Services wrapped up&lt;/li&gt;
&lt;li&gt;Understand the different scopes - an overarching diagram should stay general (&amp;quot;image service&amp;quot;), while a more in-depth diagram would include the components or even the classes of that service.&lt;/li&gt;
&lt;li&gt;Organise it from left to right or top to bottom&lt;/li&gt;
&lt;li&gt;Try to avoid a mess of arrows&lt;/li&gt;
&lt;li&gt;Use the C4 Model&lt;/li&gt;
&lt;li&gt;Mark clear domain and service boundaries&lt;/li&gt;
&lt;li&gt;Add interaction comments&lt;/li&gt;
&lt;li&gt;Record questions asked about the diagram, along with the diagram&lt;/li&gt;
&lt;li&gt;Record an explanation of the diagram&lt;/li&gt;
&lt;li&gt;Have both short term and long term vision for the architecture&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Set your time zone manually when in the Canary Islands</title>
    <link href="/canary-islands-tz/"/>
    <updated>2024-05-01T00:00:00Z</updated>
    <id>/canary-islands-tz/</id>
    <content type="html">&lt;p&gt;On a recent vacation to the Canary Islands, I decided to book a romantic evening at a sea side restaurant for my wife and I.&lt;/p&gt;
&lt;p&gt;Dutifully, with plenty of time to spare, we leisurely strolled along the beach and arrived at this restaurant.&lt;/p&gt;
&lt;p&gt;But, at first glance it was quite empty. Too empty for a restaurant on a summer’s evening.
When I asked to be seated, they told me they were closed and our table wasn’t for another hour.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Then it hit me, my phone and computer had switched to Spanish time - not Canarian time.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You see, the time in Spain is 1 hour ahead of the UK.
But the Canary Islands operate on the same time zone as the UK.&lt;/p&gt;
&lt;p&gt;The iPhone, trying to be the clever clogs it is, helpfully changes your time zone to a Spanish one. I imagine this is because the Canaries are Spanish, and so when pinging time.apple.com, it says “you’re in Spain!”.&lt;/p&gt;
&lt;p&gt;So this is a small reminder, to both my future self and any other travellers, to manually change your time zone.
Bon appétit!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>AWS Summit 2024</title>
    <link href="/aws-summit-2024/"/>
    <updated>2024-04-26T00:00:00Z</updated>
    <id>/aws-summit-2024/</id>
    <content type="html">&lt;p&gt;I was fortunate this year to attend the AWS Summit at London&#39;s ExCeL. In this post I wanted to outline what the event was like (for a first timer), what I learned and other tips I found.&lt;/p&gt;
&lt;h2&gt;What it was like&lt;/h2&gt;
&lt;p&gt;The conference itself is enormous. Over 25,000 people all flocked to the ExCeL in London &lt;s&gt;to worship at the altar of Werner Vogels&lt;/s&gt; - I mean talk about all things AWS. It was good being surrounded by people who work on similar things to you.&lt;/p&gt;
&lt;p&gt;There are two key parts of the summit - the talks, and the vendors.&lt;/p&gt;
&lt;p&gt;Overall, the talks were great. I think your experience depends largely on which talks you attend. From speaking to colleagues who attended other sessions, it seems the &amp;quot;difficulty&amp;quot; ratings for the sessions do not reflect the actual technical detail they provide. There are a few categories of talks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Talks from a company representative describing how they use a particular AWS technology. Usually these talks have 2-3 speakers - with 1 coming from the company in question and the others from AWS themselves. For example, I attended a talk by Flo Health who had their CTO speak about how they were using DynamoDB to store data in their system. They then had 2 AWS representatives give talks about how DynamoDB worked under the hood and cost optimisation strategies. These were the best talks to attend in my opinion.&lt;/li&gt;
&lt;li&gt;Talks sponsored by a company. These are more like sales pitches to demonstrate how you can use a company&#39;s technology.&lt;/li&gt;
&lt;li&gt;Talks by AWS themselves. These are usually focused on a particular area or AWS service. I found these to be very high level and to &amp;quot;drink the Kool-Aid&amp;quot;. For example, when speaking about analytics for serverless systems they of course recommended CloudWatch (which is terrible). And for increasing resiliency they recommended Blue/Green deployments with CodeBuild (which is trash).&lt;/li&gt;
&lt;li&gt;Community talks. These were likely not recorded so are valuable to attend in person. I attended one of these in the morning and found it to be like a talk you might find at a good tech meet-up.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The vendors are a sprawl of booths that litter the conference floors. They are different software companies of all flavours advertising and pitching their products. They usually give away swag of some description (I got a bucket hat from PagerDuty) and some have challenges where the winner gets a prize. These challenges usually have huge queues so I avoided them and focused on what I could get out of the experience as a whole.&lt;/p&gt;
&lt;h2&gt;What I learned&lt;/h2&gt;
&lt;p&gt;I was able to attend 4 talks in total; here are my learnings from each.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/aws-summit/remocal.jpeg&quot; alt=&quot;A slide from COM201: Patterns for Efficient Software Architecture showing types of software testing&quot; /&gt;
  &lt;em&gt;&quot;remocal&quot;&lt;/em&gt;
&lt;/div&gt;
&lt;h3&gt;&amp;quot;COM201: Patterns for Efficient Software Architecture&amp;quot;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Focused on local development for serverless.&lt;/li&gt;
&lt;li&gt;Use Lambda to &lt;em&gt;transform&lt;/em&gt; data &lt;strong&gt;NOT&lt;/strong&gt; &lt;em&gt;transport&lt;/em&gt; data&lt;/li&gt;
&lt;li&gt;Try to leverage no-code serverless solutions where possible for transport - EventBridge Pipes, DynamoDB Streams etc.&lt;/li&gt;
&lt;li&gt;Suggested strategy for local development and testing is &amp;quot;remocal&amp;quot; - a portmanteau of &amp;quot;remote&amp;quot; and &amp;quot;local&amp;quot;
&lt;ul&gt;
&lt;li&gt;This entails testing against mocks and true resources (deployed into AWS).&lt;/li&gt;
&lt;li&gt;So you deploy your code to an ephemeral environment, and then write tests that use mocks for DynamoDB responses but call the actual API Gateway (for example). This is targeted at testing and debugging the code in the Lambdas themselves.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
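One way to picture that mock-plus-real split is the sketch below (my own illustration, not code from the talk - the handler, key shape and field names are all hypothetical, and a full "remocal" setup would also deploy real AWS resources): inject the table client into the Lambda handler, so a test can swap in a mock for DynamoDB while the rest of the call path runs unchanged.

```python
from unittest.mock import Mock

# Hypothetical Lambda handler. The DynamoDB table client is injected,
# so a test can replace it with a mock while the handler logic runs as-is.
def get_user_handler(table, user_id):
    item = table.get_item(Key={"pk": f"USER#{user_id}"}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 200, "body": item["name"]}

# Mock only the DynamoDB response; in the full setup the handler would be
# invoked through a real, ephemerally deployed API Gateway instead.
mock_table = Mock()
mock_table.get_item.return_value = {"Item": {"pk": "USER#42", "name": "Ada"}}

print(get_user_handler(mock_table, "42")["statusCode"])  # 200
```

The dependency injection is the point: the same handler runs unmodified against the real table in AWS and against the mock locally.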
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/aws-summit/monitoring.jpeg&quot; alt=&quot;A slide from ARC302: Building observability to increase resiliency showing an equation of metrics plus logs plus traces - to create an overall picture of your system.&quot; /&gt;
&lt;/div&gt;
&lt;h3&gt;&amp;quot;ARC302: Building observability to increase resiliency&amp;quot;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;One pain point that many businesses face is that there is no granularity to the errors they get - for example &amp;quot;the site is down&amp;quot;.&lt;/li&gt;
&lt;li&gt;The key to resolving this is multi-dimensional metrics - for example &amp;quot;latency per trace id&amp;quot;, or &amp;quot;requests per AZ&amp;quot; etc. This can all be provided by CloudWatch.&lt;/li&gt;
&lt;li&gt;Increased observability can then feed into automated rollback systems. The multi-dimensional alarm would be &amp;quot;errors per code revision&amp;quot; for example.&lt;/li&gt;
&lt;li&gt;A Just Eat engineer spoke about how:
&lt;ul&gt;
&lt;li&gt;They regularly have engineers review logs for noise. I think this is a good practice, but it&#39;s difficult to know what counts as noise.&lt;/li&gt;
&lt;li&gt;They standardise labels across their services - environment, service, team and version/code revision&lt;/li&gt;
&lt;li&gt;These tags feed into CloudWatch alarms which help them to detect bad deployments etc.&lt;/li&gt;
&lt;li&gt;Teams have regular &amp;quot;graph club&amp;quot; meetings where the goal is to understand and review telemetry data and action any monitoring and alerts.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Game days are recommended to be run on a regular cadence. A good game day is described as:
&lt;ul&gt;
&lt;li&gt;Realistic - like production if possible&lt;/li&gt;
&lt;li&gt;Reasoned - having a why and desired outcomes&lt;/li&gt;
&lt;li&gt;Regular - with a regular cadence.&lt;/li&gt;
&lt;li&gt;Controlled - targeting a certain system or area. For example, how do our APIs react without database access?&lt;/li&gt;
&lt;li&gt;Tools such as the AWS Fault Injection Service can be used to facilitate these game days.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;They brought out a good illustration (which I foolishly didn&#39;t note down) where the idea was that you practise to build your own luck. In this context, they were talking about how it could be viewed as &amp;quot;lucky&amp;quot; that Just Eat rarely goes down. But the reality is that they practise the system breaking so often that they rarely face it in production, and when they do, they know what to do to fix it quickly.&lt;/li&gt;
&lt;li&gt;The final note was to have an observability strategy. This should align with the business goals. It&#39;s not just a case of saying &amp;quot;more metrics, more alarms&amp;quot; but knowing what to measure and why.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/aws-summit/dynamodb.jpeg&quot; alt=&quot;A slide from DAT202: DynamoDB Deep dive with Flo Health: Powering critical data for 300M users showing the architecture of DynamoDB.&quot; /&gt;
&lt;/div&gt;
&lt;h3&gt;&amp;quot;DAT202: DynamoDB Deep dive with Flo Health: Powering critical data for 300M users&amp;quot;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Data for their customers is very sensitive as it&#39;s around reproductive health.&lt;/li&gt;
&lt;li&gt;They also had the challenge of scaling to large user numbers without increasing costs. Uniquely they give their app away for free in 66 countries where education and help is needed but income levels mean they couldn&#39;t afford a subscription app.&lt;/li&gt;
&lt;li&gt;One of the main challenges for them is that all the data is unique, specific and needs to feed into a personalised experience. They describe storing over 1 trillion data points - all in DynamoDB.&lt;/li&gt;
&lt;li&gt;Their PII data is triple encrypted - using AWS KMS, the default server-side encryption from DynamoDB, and encrypting the data itself at the application level.&lt;/li&gt;
&lt;li&gt;Under the hood of DynamoDB, and the reason it scales so well, is that each partition is provisioned with 1,000 write units and 3,000 read units and is up to 10GB in size. DynamoDB can add partitions limitlessly.&lt;/li&gt;
&lt;li&gt;To partition data, they hash the partition key.&lt;/li&gt;
&lt;li&gt;Each GSI is like a materialised view and a different version of the database under the hood.&lt;/li&gt;
&lt;li&gt;Sometimes it&#39;s ok to throttle! For example, if something is inserting data to DynamoDB from the back of a queued job it doesn&#39;t matter if the insert gets throttled because it&#39;s not customer facing. So you can more freely use provisioned access because the impact is low. On the other hand, favour using on demand for customer facing tables.&lt;/li&gt;
&lt;li&gt;1KB = 1 WCU. Therefore the max consumption per item is 409,600 bytes = 400 WCU - because the max item size is 400KB.&lt;/li&gt;
&lt;li&gt;This means increased item size = bigger costs. If you want to update a single attribute in a 400KB item, you use 400 WCU - because there is no such thing as a partial update in DynamoDB; it&#39;s a delete and insert.&lt;/li&gt;
&lt;li&gt;The recommendation therefore is to &amp;quot;vertically partition&amp;quot; your data. This is the push of the single table design model. Where the sort key is used to differentiate different properties of data related to a single partition key entity.&lt;/li&gt;
&lt;li&gt;This reduces cost by only having to access or update smaller records. Lots of data changes infrequently so doesn&#39;t need to be updated much and can be collected together.&lt;/li&gt;
&lt;/ul&gt;
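The capacity arithmetic in those last few bullets can be checked in a couple of lines (my own illustration of the 1KB-per-WCU rule, not code from the talk):

```python
import math

# A standard write consumes ceil(item_size / 1KB) write capacity units
# (WCU), however small the changed attribute - the whole item is rewritten.
def wcus_for_write(item_size_bytes):
    return math.ceil(item_size_bytes / 1024)

print(wcus_for_write(409_600))  # max-size 400KB item: 400 WCU
print(wcus_for_write(700))      # small vertically-partitioned item: 1 WCU
```

This is why vertical partitioning pays off: updating one 700-byte slice of an entity costs 1 WCU, where the same update on a monolithic 400KB item would cost 400.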
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/aws-summit/aws-data-tech.jpeg&quot; alt=&quot;A slide from ANT306: How the BBC built a real time media analytics platform to process over 5B events a day showing different data ingestion systems in AWS.&quot; /&gt;
&lt;/div&gt;
&lt;h3&gt;&amp;quot;ANT306: How the BBC built a real time media analytics platform to process over 5B events a day&amp;quot;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The BBC talked about their unique challenges with data
&lt;ul&gt;
&lt;li&gt;Multiple platforms - weather, iplayer etc.&lt;/li&gt;
&lt;li&gt;Needing to provide real time feedback to customers.&lt;/li&gt;
&lt;li&gt;Data has a &amp;quot;half life&amp;quot; of usefulness - meaning the longer it takes to provide insights to that data the less useful it is to a customer. A recommendation to watch a program 40 minutes after they have finished the last program is not useful.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Primarily they use Kafka, Kinesis and Flink.&lt;/li&gt;
&lt;li&gt;They developed an SDK which abstracts the analytics data gathering from developers. This allows for consistent data gathering across all their products.&lt;/li&gt;
&lt;li&gt;They initially used timestamps sent from the clients to dictate when an event occurred. This led to issues with data processing (due to processing data by time chunks), so they switched to the timestamp when Kafka received the event.&lt;/li&gt;
&lt;li&gt;Their other issue was with bad or malicious data. Using the JSON Schema definitions which power their event system, they were able to tighten these definitions and dead-letter any events that were bad or malicious.&lt;/li&gt;
&lt;li&gt;They chose JSON schemas (over Protobuf etc.) because of compatibility with existing APIs as well as compatibility with tools like Kafka and Kinesis. In particular it allows them to make no-code changes when a new event gets added to their platform. The definitions get stored in S3 and then registered by Kafka.&lt;/li&gt;
&lt;li&gt;The final problem they described was how they distribute Flink tasks (machine learning)
&lt;ul&gt;
&lt;li&gt;If they do random then it makes querying for time specific data very memory intensive (for example &amp;quot;unique readers in the past 5 minutes&amp;quot;).&lt;/li&gt;
&lt;li&gt;If they do it by articleId they risk overloading one Flink instance. This was because people would view a single page on the website and refresh it a lot - for example when the Champions League was being drawn.&lt;/li&gt;
&lt;li&gt;In the end they chose to accept a certain level of inaccuracy and went with an algorithm called HyperLogLog, which estimates the number of distinct elements (cardinality) in a set with minimal memory usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
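To make the HyperLogLog trade-off concrete, here is a toy version (my own sketch using SHA-1 and 256 registers - nothing like the BBC's production setup): however many events stream through, only 256 small counters are kept, at the price of a few percent of error. Each register remembers the longest run of leading zero bits among the hashes routed to it, which probabilistically encodes how many distinct values it has seen.

```python
import hashlib
import math

M = 256                           # registers; error is roughly 1.04 / sqrt(M)
ALPHA = 0.7213 / (1 + 1.079 / M)  # standard HyperLogLog bias correction

def _hash64(value):
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

def add(registers, value):
    h = _hash64(value)
    idx = h & (M - 1)                   # low 8 bits pick a register
    rest = h >> 8                       # remaining 56 bits
    rank = 56 - rest.bit_length() + 1   # leading zeros, plus one
    registers[idx] = max(registers[idx], rank)

def estimate(registers):
    raw = ALPHA * M * M / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if raw <= 2.5 * M and zeros:        # small-range (linear counting) correction
        return M * math.log(M / zeros)
    return raw

registers = [0] * M
for i in range(5000):
    add(registers, f"user-{i}")
    add(registers, f"user-{i}")  # duplicates never change the registers

print(round(estimate(registers)))  # close to 5000, within a few percent
```

The memory story is the whole point: exact distinct counting over a window needs storage proportional to the number of distinct keys, while this keeps a fixed 256 registers regardless of traffic.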
&lt;h2&gt;Other tips&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Plan your day in advance, ideally to things that are close together. If you have to walk around a lot of the centre to get to your next talk then you might not get in as there are always queues to the great talks.&lt;/li&gt;
&lt;li&gt;Allow space in the day to talk to vendors. Target both those that you (or your business) are using and see if they have any new features/releases coming up. For vendors that you don&#39;t or haven&#39;t used, approach with curiosity and have an idea of where you might want to use them.&lt;/li&gt;
&lt;li&gt;Most of the activities and challenges have huge queues, I personally avoided these because it cut into time doing other things but YMMV.&lt;/li&gt;
&lt;li&gt;Registration is listed from 8am-10am. My train was late, so I arrived late and was surprised to see hordes of people still queuing for registration.&lt;/li&gt;
&lt;li&gt;Lunch is provided, but there are plenty of (overpriced) options in the ExCeL centre.&lt;/li&gt;
&lt;li&gt;Unless you want to do a workshop or take notes, you can leave your laptop at home.&lt;/li&gt;
&lt;li&gt;Usually there are drinks at the booths between 4:30 and 5:30.&lt;/li&gt;
&lt;li&gt;Although I arrived late to the keynote, I don&#39;t think I missed much. It really just outlined the day. It is also recorded and available online. I&#39;d recommend using the time to explore the venue and talk to vendors.&lt;/li&gt;
&lt;li&gt;Download the map, your agenda and anything else you might need to your phone in advance. The wifi is quite spotty and there is barely any phone signal.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Overall, if you have the opportunity, I&#39;d highly recommend attending. It&#39;s well worth it, even if just for the stickers.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/assets/images/aws-summit/security-spaniel.jpeg&quot;&gt;P.S., Thanks to this dog for keeping us safe!&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Markdown for Slides</title>
    <link href="/marp/"/>
    <updated>2024-03-12T00:00:00Z</updated>
    <id>/marp/</id>
    <content type="html">&lt;p&gt;Working in tech, if you wanted to &amp;quot;share your knowledge&amp;quot; chances are you used Powerpoint to create a presentation.
It&#39;s been around forever so can run on anything and, on the surface, is simple to use.&lt;/p&gt;
&lt;p&gt;But, Powerpoint sucks. It really does. The interface is incredibly cluttered, the snap to grid system is completely unhelpful in real world scenarios, and there are just too many options.
All I need is to have some images and text on a page and be able to scroll between them. I can forgo my beloved &amp;quot;star-wipe&amp;quot; transitions for simplicities sake.&lt;/p&gt;
&lt;p&gt;Images, and text. That sounds a lot like something I could do in Markdown.
Turns out someone already had the same idea, and created a tool called &lt;a href=&quot;https://marp.app/&quot;&gt;Marp&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It&#39;s incredibly simple to use and makes the process of creating a presentation like writing a blog post. It&#39;s actually pleasurable to use: a simple CLI, good defaults and the flexibility of markdown.&lt;/p&gt;
&lt;p&gt;I&#39;ve only created a few presentations with it and my usage is quite basic. But I have settled on the below as being a good configuration for all my presentations.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;---
theme: gaia
_class: lead
paginate: true
backgroundColor: #fff
backgroundImage: url(&#39;https://marp.app/assets/hero-background.svg&#39;)
style: |
  table {
    width: 100%;
    margin: 0 auto;
    margin-top: 1em;
    font-size: 0.75em;
  }
  h1 {
    font-size: 1.5em;
  }
  h2 {
    font-size: 1em;
  }
  p {
    font-size: 0.75em;
  }
---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The styles were added as a result of tables overflowing the slides. Smaller fonts fixed this.&lt;/p&gt;
&lt;p&gt;Starting a new presentation is super easy:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# create the slides
$ touch slides.md

# Start the slides server in Watch mode
$ npx @marp-team/marp-cli@latest -w slides.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can then open the HTML file that is created in your browser. It will live reload whenever you make changes.&lt;/p&gt;
&lt;p&gt;Then when I want to export them to Powerpoint (or PDF) for others to use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# for pdf
$ npx @marp-team/marp-cli@latest slides.md -o slides.pdf

# or for pptx
$ npx @marp-team/marp-cli@latest slides.md -o slides.pptx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And hey presto that&#39;s all there is to it.
Marp is a great tool and I highly recommend using it if you like working directly in markdown.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Cool Links from Around the Web</title>
    <link href="/cool-links/"/>
    <updated>2024-03-11T00:00:00Z</updated>
    <id>/cool-links/</id>
    <content type="html">&lt;p&gt;Despite web search providing an interface to a wealth of human knowledge, one thing that it can&#39;t crack is finding cool stuff. And that&#39;s mainly because cool is relative and hard to quantify.&lt;/p&gt;
&lt;p&gt;I&#39;ve stumbled across a bunch of cool sites across the web that you might not have heard of.&lt;/p&gt;
&lt;p&gt;If you have suggestions, submit a pull request on this blog!&lt;/p&gt;
&lt;p&gt;Where possible, I&#39;ve attempted to categorize them. Enjoy!&lt;/p&gt;
&lt;h2&gt;Web&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;https://solar.lowtechmagazine.com&lt;/li&gt;
&lt;li&gt;https://grumpy.website&lt;/li&gt;
&lt;li&gt;https://www.abandonedamerica.us/abandoned-theaters&lt;/li&gt;
&lt;li&gt;https://designmanifestos.org&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Psychology&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;https://untools.co/&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Utilities&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;https://coolbackgrounds.io/&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Common phrases that probably aren&#39;t true</title>
    <link href="/not-true/"/>
    <updated>2024-03-10T00:00:00Z</updated>
    <id>/not-true/</id>
    <content type="html">&lt;p&gt;There are a litany of common phrases that are found on lots of packaging and in marketing materials. The problem is, that most of them aren&#39;t true.
This is a collection of phrases I&#39;ve come across that just aren&#39;t true and why you should be skeptical of them.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&amp;quot;Sustainably sourced&amp;quot; - sustainable doesn&#39;t mean it&#39;s not harmful to the environment&lt;/li&gt;
&lt;li&gt;&amp;quot;Web scale&amp;quot; - ambiguous and lacks clear definition&lt;/li&gt;
&lt;li&gt;&amp;quot;We care&amp;quot; - when said by a corporation of any kind&lt;/li&gt;
&lt;li&gt;&amp;quot;Clean&amp;quot; prefixing any fossil fuel (e.g., &amp;quot;clean coal&amp;quot;) - science doesn&#39;t support that any fossil fuels can somehow be carbon neutral&lt;/li&gt;
&lt;li&gt;&amp;quot;We practise agile&amp;quot; - most businesses practise fast waterfall&lt;/li&gt;
&lt;li&gt;&amp;quot;No hidden fees&amp;quot; - usually has caveats around when the business will honour this&lt;/li&gt;
&lt;li&gt;&amp;quot;Lifetime guarantee&amp;quot; - rarely a binding legal commitment, and one that can easily be withdrawn&lt;/li&gt;
&lt;li&gt;&amp;quot;Detox&amp;quot; - there is no scientific evidence to support getting rid of &amp;quot;toxins&amp;quot; in the body.&lt;/li&gt;
&lt;li&gt;&amp;quot;World-class&amp;quot; - subjective term&lt;/li&gt;
&lt;li&gt;&amp;quot;100% natural&amp;quot; - hydrochloric acid is natural but not something I&#39;d want to consume.&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>Tips for Battling Alert Fatigue</title>
    <link href="/alert-fatigue/"/>
    <updated>2024-03-09T00:00:00Z</updated>
    <id>/alert-fatigue/</id>
    <content type="html">&lt;p&gt;When your first outage happens, alerting and monitoring becomes top priority. You don&#39;t want to be woken up at 3am again.&lt;/p&gt;
&lt;p&gt;So you add alerting. Lots and lots of alerting.&lt;/p&gt;
&lt;p&gt;But soon enough, the alerts start returning false positives and everyone gets used to the alerts. They become background noise.&lt;/p&gt;
&lt;p&gt;Alert fatigue is a real problem. And one I&#39;ve seen at almost every company I&#39;ve worked with.&lt;/p&gt;
&lt;p&gt;But the process of reducing alert fatigue is laborious. You have to commit hours of time and there isn&#39;t always a best way to approach it.&lt;/p&gt;
&lt;p&gt;Nonetheless, here are some tips I&#39;ve found useful:&lt;/p&gt;
&lt;h2&gt;1. One problem causing multiple alerts&lt;/h2&gt;
&lt;p&gt;If your database irrecoverably crashes, you probably get a myriad of alerts. The database is down, the website is down, the API is down etc.
In isolation we want alerting for all these components. But when the root cause is the same, we want to consolidate the alerting if possible.&lt;/p&gt;
&lt;p&gt;The implementation will vary depending on the tool you use. But, let&#39;s say the database goes down. You could configure the API alert to trigger only if the response does not indicate that the database is down. The API monitor would then alert only when the database was up but the API itself was down.&lt;/p&gt;
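&lt;p&gt;As a rough sketch (the function and field names here are hypothetical, not taken from any particular alerting tool), the consolidation logic boils down to attributing a failure to its root cause and firing a single alert:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Hypothetical alert-routing helper: given the latest health-check
// results, decide which single alert (if any) should fire.
type Health = { apiResponding: boolean; databaseUp: boolean };

function alertFor(h: Health): string | null {
  if (!h.databaseUp) return &amp;quot;database-down&amp;quot;; // root cause: one alert only
  if (!h.apiResponding) return &amp;quot;api-down&amp;quot;; // database is fine, so blame the API
  return null; // all healthy, stay quiet
}

// A database outage now produces one alert, not one per dependent service.
console.log(alertFor({ apiResponding: false, databaseUp: false }));
&lt;/code&gt;&lt;/pre&gt;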
&lt;h2&gt;2. Alerts for self-healing problems&lt;/h2&gt;
&lt;p&gt;If your primary server goes down but a secondary takes over, do you need alerting about this? Maybe, but likely not.&lt;/p&gt;
&lt;p&gt;In reality, if you have a &amp;quot;cattle not pets&amp;quot; approach to infrastructure, that primary server should be terminated and another started up. In such a case, the problem has &amp;quot;self-healed&amp;quot; - it has resolved itself without any human intervention.&lt;/p&gt;
&lt;p&gt;This is a case where alerting is not needed as there is no direct action you or your team need to take.&lt;/p&gt;
&lt;h2&gt;3. Non-actionable alerts&lt;/h2&gt;
&lt;p&gt;Similar to the above, if an alert doesn&#39;t require an action, then it should be ditched or filtered out into a separate &amp;quot;notifications&amp;quot; bucket.&lt;/p&gt;
&lt;p&gt;As an exercise, do the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Go through your alerting channel (Slack, Email, etc.)&lt;/li&gt;
&lt;li&gt;Analyse the language of the alert to see if it has a clear action (e.g., &amp;quot;Reboot the server&amp;quot;, &amp;quot;Increase auto-scaling capacity&amp;quot;, &amp;quot;Contact 3rd-party vendor as the API is broken&amp;quot;).&lt;/li&gt;
&lt;li&gt;If it doesn&#39;t have a clear action then either: &lt;strong&gt;A)&lt;/strong&gt; Delete it - the preferred choice or &lt;strong&gt;B)&lt;/strong&gt; Reword it - include the action as part of the alert message&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By spending just 15 minutes on this, I bet you&#39;ll handle some of your noisiest alerts and restore sanity to your team.&lt;/p&gt;
&lt;h2&gt;4. Alerts that don&#39;t have a clear owner&lt;/h2&gt;
&lt;p&gt;Another symptom of impending alert fatigue is alerts that don&#39;t have a clear owner. Now that your alerts have actions, you need to consider who will complete those actions.
Some alerting tools have this built in and can automatically &lt;code&gt;@&lt;/code&gt; the code owners/maintainers, who will then swarm on the problem. Even so, if there are multiple individuals, it&#39;s worth making sure expectations are clear. It shouldn&#39;t always be person A who picks up the alerts; the work should be shared around the team. But equally, if a person is in back-to-back meetings, they can&#39;t be expected to pick up any issues. Keeping expectations clear within your team will make sure one person doesn&#39;t end up being the &amp;quot;go to&amp;quot; when a siren sounds.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Hopefully these tips are actionable and help you reduce the noise of alerts on your platform. Alerts are a useful tool, but they are seldom thought of as a key part of the developer experience. Yet to run systems reliably at scale, great observability is crucial - with alerts being a cornerstone of that.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>What people mean when they say - we do TDD</title>
    <link href="/tdd/"/>
    <updated>2024-01-16T00:00:00Z</updated>
    <id>/tdd/</id>
    <content type="html">&lt;p&gt;Test-driven development is that - test-driven. Not test passenger.&lt;/p&gt;
&lt;p&gt;This means that tests come first.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;What people usually mean when they say “We do TDD” is that they believe that all the code shipped has tests against it.
Rarely in my experience is this actually true. [^1]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When the common understanding of a methodology is not true to its original meaning, it loses all meaning. You then have people talking past each other, each speaking a slightly different language.&lt;/p&gt;
&lt;p&gt;Worse than this is a business that claims to be both BDD and TDD! This leads to some lead developers and product managers pushing for BDD and therefore implementing systems to support that. Whereas another team may be doing the same but with TDD.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Like a car, you can only have one driver.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So what’s the solution?
As with a car having a &lt;strong&gt;single driver&lt;/strong&gt;, you need a &lt;strong&gt;single methodology&lt;/strong&gt; to lead the charge. This doesn’t preclude you from having tests if you’re BDD or vice versa. But having clearly communicated expectations about the software development life cycle is crucial to making sure that teams can work productively.&lt;/p&gt;
&lt;p&gt;Having chosen a single driver, support it! For example, in the case of TDD, invest in fast CI runners, put developers through training courses, and communicate the impact across the product team.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;[^1]: I would also argue that in TDD you do not need tests for every single function you write. This has been written about a lot in other places, so I won&#39;t rehash it. Generally however, my rule of thumb is to primarily test integrations and only unit tests where it makes sense.&lt;/p&gt;
&lt;p&gt;P.S. I’m not saying that BDD is better than TDD or vice versa. I’m just saying that you can’t be led by both.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>11ty or Bust</title>
    <link href="/11ty-or-bust/"/>
    <updated>2024-01-03T00:00:00Z</updated>
    <id>/11ty-or-bust/</id>
    <content type="html">&lt;p&gt;&lt;a href=&quot;https://joshghent.com/gatsby-or-bust/&quot;&gt;It’s been four years since I last did any major work to my site&lt;/a&gt;. But over those years, things have got a little chaotic.&lt;/p&gt;
&lt;p&gt;Personal websites are usually fairly low down on people’s priority lists. Test coverage, clean code and the like all get thrown out the window in favour of tinkering and writing new posts. At least that’s what happened to me.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://joshghent.com/redesign/&quot;&gt;When I first ported to Gatsby it was fantastic&lt;/a&gt;. Having a React-ish framework felt like I was just doing an ordinary day’s work. But over time, like many React projects, it lumbered to a grinding halt.
The site itself was still quick. But the development process was painful. Dependency upgrades meant that I could no longer run the site on my own laptop. It was a mess of “peer dependency unmet” and “X type is not recognised”. This site is barely 5 years old (in its Gatsby form), but the fast-moving Node.js ecosystem meant that it had broken significantly.&lt;/p&gt;
&lt;p&gt;It was time for a change.&lt;/p&gt;
&lt;p&gt;First, I looked at my requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Minimal dependencies&lt;/strong&gt; - I don’t want a repeat of the current Gatsby spaghetti.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Minimal code&lt;/strong&gt; (so I don’t need to mess with it in the future to upgrade)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic Opengraph images&lt;/strong&gt; - for every blog post that look nicer for social media&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web mentions&lt;/strong&gt; - make it easy to integrate the indie web features I currently have.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fast build times&lt;/strong&gt; - Gatsby is extremely slow if you have lots of markdown files.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Easy to add new features&lt;/strong&gt; - I don’t ideally want to be maintaining a React component library.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extremely small payload size&lt;/strong&gt; - my Gatsby site used 1g of CO2 per request! I wanted to reduce that.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;(Nice to have) Create my resume dynamically based on a “work” page&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Next, I took a look at the options. Ultimately, it boiled down to two:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://gohugo.io/&quot;&gt;Hugo&lt;/a&gt;. A relative newcomer to the static site scene but a strong contender already. It’s written in Go, which is nice. Ultimately, although there were no major issues with Hugo from my requirements, I realised my Golang knowledge was not strong enough. I want my site to be exceptionally simple to maintain. This is not a learning side project, I have those.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.11ty.dev/&quot;&gt;Eleventy&lt;/a&gt;. Again, a newcomer on the static site scene. Written in JS but uses Nunjucks or Liquid for templating rather than something heavy like React.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Ultimately, I settled on Eleventy.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The nice thing about it is that it ships no javascript by default to the client.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It was a fairly simple migration path:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone down an 11ty template project. I used this one: https://github.com/tomreinert/minimal-11ty-tailwind-starter/tree/master&lt;/li&gt;
&lt;li&gt;Move posts from &lt;code&gt;/content/blog&lt;/code&gt; in gatsby to &lt;code&gt;/src/blog/posts&lt;/code&gt; in 11ty&lt;/li&gt;
&lt;li&gt;Updated all the front matter to include the &lt;code&gt;layout&lt;/code&gt; tag.&lt;/li&gt;
&lt;li&gt;Encountered an error because of a code block. Solved this by wrapping them in  and .&lt;/li&gt;
&lt;li&gt;Found a GitHub issue that said it was fixed in a later version. Realised that this template uses 11ty v1, not v2, so I upgraded that. Thankfully the upgrade was fairly simple, but it still didn’t get the site working.&lt;/li&gt;
&lt;li&gt;Removed webpack because it’s the spawn of the devil and anything else I didn’t need.&lt;/li&gt;
&lt;li&gt;Configured Tailwind according to this guide - https://ben.page/eleventy-tailwind&lt;/li&gt;
&lt;li&gt;Added RSS via the 11ty rss plugin&lt;/li&gt;
&lt;li&gt;Converted the homepage, blog post page and now page to 11ty. This process was simple enough, just copy pasting content. In the process, I restyled the look of the blog posts page to be simpler, and (for the time being) removed photos and notes from the homepage.&lt;/li&gt;
&lt;li&gt;Add a new projects page!&lt;/li&gt;
&lt;li&gt;Updated cloudflare pages to build the 11ty site.&lt;/li&gt;
&lt;li&gt;After doing an accessibility scan, I updated a number of colors and alt tags to ensure that the site was accessible.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;And that was it!&lt;/p&gt;
&lt;p&gt;Overall, I’m really happy with the migration. The site now uses a mere 0.01g of CO2 per request and ships no JS (although Cloudflare injects some that I’m trying to remove). It’s much easier to maintain and I am enjoying the templating engine.
I have learned my lesson to keep things as simple as possible and prefer stability over feature development.
To keep on top of inevitable upgrades, I’m going to configure some basic snapshot testing that means I have some basic reassurance that the site builds, and displays content.&lt;/p&gt;
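&lt;p&gt;Something in this vein is what I have in mind (a sketch only; the &lt;code&gt;_site&lt;/code&gt; path is 11ty&#39;s default output directory, and the helper name is made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Cheap proxy for &amp;quot;the build produced real content&amp;quot;: the page has a
// non-empty title and some markup after the body tag. Run against
// _site/index.html after `npx eleventy`.
export function looksRendered(html: string): boolean {
  const hasTitle = /&amp;lt;title&amp;gt;[^&amp;lt;]+&amp;lt;\/title&amp;gt;/.test(html);
  const hasBody = /&amp;lt;body[^&amp;gt;]*&amp;gt;[\s\S]*\w/.test(html);
  return hasTitle &amp;amp;&amp;amp; hasBody;
}
&lt;/code&gt;&lt;/pre&gt;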
</content>
  </entry>
  
  <entry>
    <title>Lambda Warming is an Antipattern</title>
    <link href="/lambda-warming-antipattern/"/>
    <updated>2023-12-18T00:00:00Z</updated>
    <id>/lambda-warming-antipattern/</id>
    <content type="html">&lt;p&gt;At a certain stage during most monolith to microservice migrations, teams that choose to migrate to Lambda often encounter a classic problem: cold starts.&lt;/p&gt;
&lt;p&gt;Searching this issue on Google (or DuckDuckGo for those conscious of privacy) yields thousands of results. Strategies abound, from reducing cold start times to using Lambda warmers and other seemingly haphazard tactics.&lt;/p&gt;
&lt;p&gt;On the surface, optimizing your Lambdas seems beneficial. Lambda bills per millisecond of compute time, so optimizing at scale can lead to marginal cost savings, not to mention the customer impact of a faster app.&lt;/p&gt;
&lt;p&gt;I won&#39;t rehash the numerous articles suggesting sound methods to improve Lambda performance, such as increasing memory, avoiding placement in VPCs, and choosing quick languages like Python, Go, or Node.js (in that order).&lt;/p&gt;
&lt;p&gt;Instead, I&#39;ll focus on another strategy often suggested to mitigate cold start problems: Lambda warming.&lt;/p&gt;
&lt;p&gt;Lambda warming is a method where Lambdas are pinged to maintain an active, spun-up instance. Theoretically, this eliminates the time AWS takes to boot the container. However, as Yan Cui points out on The Burning Monk, this assumption is flawed. By implementing a Lambda warming mechanism, you&#39;re only keeping a single instance of your app alive (until it recycles about every 45 minutes, a built-in AWS quirk) rather than having many concurrent instances ready for traffic.&lt;/p&gt;
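&lt;p&gt;For context, the pattern being critiqued usually looks something like this (a minimal sketch; the &lt;code&gt;warmer&lt;/code&gt; event field is an assumption, as warmer implementations vary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// Sketch of the warming pattern: a scheduled rule invokes the function
// with a marker payload, and the handler short-circuits so the invocation
// does nothing except keep one instance warm.
export const handler = async (event: { warmer?: boolean }) =&amp;gt; {
  if (event.warmer) {
    // Only the single container that received this ping stays warm -
    // concurrent instances still cold start under real traffic.
    return { warmed: true };
  }
  // ... normal request handling ...
  return { statusCode: 200 };
};
&lt;/code&gt;&lt;/pre&gt;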
&lt;p&gt;Beyond this apparent flaw, Lambda warming disrupts the pattern of Lambda&#39;s transactional, functional nature—stateless input and output.&lt;/p&gt;
&lt;p&gt;If you&#39;ve concluded that you need Lambda warming, it suggests a larger issue: the need for low latency in your backend systems, either for customer experience or technical reasons. And by using Lambda warming, you&#39;re likely willing to invest money to combat 300ms of latency.&lt;/p&gt;
&lt;p&gt;If these statements ring true, then the reality is you might be better served by a different service like ECS Fargate, Kubernetes (EKS), or traditional EC2—essentially, anything long-lived.&lt;/p&gt;
&lt;p&gt;Lambda is a fantastic tool, but it&#39;s not a one-size-fits-all solution. Often, a HTTP API with some cold start latency is perfectly acceptable. However, for real-time services like chat, it might not be the best fit. Yet, if your traffic for such a service is consistent, then perhaps it is suitable!&lt;/p&gt;
&lt;p&gt;The key is to gather data, understand your use case and pain points, and then evaluate your options. Don&#39;t rush to a workaround without thorough consideration.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to buy a car</title>
    <link href="/how-to-buy-car/"/>
    <updated>2023-08-30T00:00:00Z</updated>
    <id>/how-to-buy-car/</id>
    <content type="html">&lt;p&gt;Recently, I purchased a new-to-me (used) car. Unfortunately, after I purchased it there was something wrong with it. It wasn&#39;t a costly repair but annoying nonetheless. And there are a lot of horror stories of buying used cars. So, I decided to put together a checklist (for myself mostly) that I can run through whenever buying a car (or helping someone else to).&lt;/p&gt;
&lt;h2&gt;The list&lt;/h2&gt;
&lt;h3&gt;Before you view the car&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;[] Check the MOT History - ask the seller if the minor defects have been addressed&lt;/li&gt;
&lt;li&gt;[] Review common faults with the car for the mileage - ask the seller if they have been addressed&lt;/li&gt;
&lt;li&gt;[] Check if it has a service history and who with (dealer service history is better) - I wouldn&#39;t recommend buying a car without a service history.&lt;/li&gt;
&lt;li&gt;[] Get an insurance quote on the car - can you afford it?&lt;/li&gt;
&lt;li&gt;[] Check the vehicle tax&lt;/li&gt;
&lt;li&gt;[] Do a HPI check on the car&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;When viewing the car&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;[] Check the tyre tread depth - if it&#39;s low, you can factor in the cost for new tyres as a negotiation point&lt;/li&gt;
&lt;li&gt;[] Check the service history paperwork and the electronic service history (if available).&lt;/li&gt;
&lt;li&gt;[] Check all the interior functions work - AC, Radio, Bluetooth, Steering wheel controls, wipers, windscreen washer, all lights.&lt;/li&gt;
&lt;li&gt;[] Check the boot to make sure the spare tyre/repair kit is there.&lt;/li&gt;
&lt;li&gt;[] Check the engine bay for any leaks or damage.&lt;/li&gt;
&lt;li&gt;[] Check fluid levels in the engine bay - coolant, oil, brake fluid, power steering fluid etc.&lt;/li&gt;
&lt;li&gt;[] Make sure the oil cap isn&#39;t milky - this could indicate a head gasket failure.&lt;/li&gt;
&lt;li&gt;[] Check the bodywork for any damage.&lt;/li&gt;
&lt;li&gt;[] Turn the wheel all the way to the left and right to check for any clicking noises.&lt;/li&gt;
&lt;li&gt;[] Turn the steering wheel each way and check the tie rods and ball joints&lt;/li&gt;
&lt;li&gt;[] Make sure both keys work&lt;/li&gt;
&lt;li&gt;[] Give the car a test drive - make sure it can rev high, no warning lights appear and the brakes are responsive. If possible, make sure that the parking brake holds the car on a hill.&lt;/li&gt;
&lt;li&gt;[] After test driving leave the car idle, make sure the idle sounds smooth and underneath the car there are no leaks.&lt;/li&gt;
&lt;li&gt;[] Check the VIN of the car matches the V5C&lt;/li&gt;
&lt;li&gt;[] Examine how clean the car is inside and the condition of the interior; usually this is a good sign of the car&#39;s care.&lt;/li&gt;
&lt;li&gt;[] Examine the brand of tyres the car has, if they are budget tyres and/or don&#39;t all match then this may be a sign the car has been maintained on a tight budget.&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Business Version 2</title>
    <link href="/consulting-launch-2023/"/>
    <updated>2023-08-30T00:00:00Z</updated>
    <id>/consulting-launch-2023/</id>
    <content type="html">&lt;p&gt;I have been working as a software engineer for 8 years. Over the past 2 years, I&#39;ve been freelancing for startups, enterprises and everything in between.&lt;/p&gt;
&lt;p&gt;The next stage of my business is pivoting away from pure freelancing work to a more complete offering of products and services.&lt;/p&gt;
&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;I launched a business 24 months ago chiefly to work as a freelancer. I&#39;m now looking to diversify and offer a broader suite of products and services to businesses.
These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;🧙‍♂️ Cloud architecture and optimisation&lt;/li&gt;
&lt;li&gt;⚙️ Hands-on software development&lt;/li&gt;
&lt;li&gt;🌅 Engineering leadership, management and training&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you want to work with me, please drop me an email at &lt;code&gt;me at joshghent.com&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;What I (now) offer&lt;/h2&gt;
&lt;p&gt;The key to any successful business is bringing the maximum amount of value to customers. This one is, thankfully, no different. I&#39;ve chosen 3 key areas where I believe my skills and experience can make a difference.&lt;/p&gt;
&lt;h3&gt;🧙‍♂️ Cloud architecture and optimisation&lt;/h3&gt;
&lt;p&gt;Having worked as a software engineer for over 8 years, I&#39;ve come across every type of cloud architecture imaginable. I&#39;ve successfully transformed monolithic applications into microservices, migrated applications from on-premise to the cloud and optimised cloud infrastructure to save money.&lt;/p&gt;
&lt;p&gt;When I work with any organisation, I focus on the following four pillars (in priority order):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Observability.&lt;/strong&gt; Helping organisations to get a handle on the numbers behind their products and developers to gain insight into their code. Having worked with a number of different observability tools, I can help organisations to choose the right one for them.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reliability.&lt;/strong&gt; The only thing worse than an app that is slow is an app that is down. I help organisations to design, build and ship robust products that can scale as the business grows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance.&lt;/strong&gt; Application speed is a feature and competitive advantage. Taking a holistic view, I can provide in-depth insights into your application&#39;s performance and work with your team to improve it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost.&lt;/strong&gt; No one wants an AWS bill that leaves you sobbing uncontrollably in a corner. I can help bring confidence and restore consistency to your cloud spend. I approach this by looking at the other three pillars and optimising them to reduce costs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Generally, my approach is simple: I work with you to understand your business and goals. And then I build you a plan to achieve them by means of the technology you&#39;re building.
I don&#39;t recommend a one-size-fits-all approach or that you need to hire a giant devops team to manage your infrastructure.
I believe that the best solutions are the simplest ones, and I work with you to find the right solution for your business.&lt;/p&gt;
&lt;h3&gt;⚙️ Hands-on software development&lt;/h3&gt;
&lt;p&gt;I&#39;ve spent the past 8 years working directly with code. My areas of expertise are back-end development (particularly APIs and event-driven microservices) and infrastructure (particularly serverless). And I&#39;ve worked extensively with both greenfield and legacy projects.&lt;/p&gt;
&lt;p&gt;I&#39;ve worked with lots of different languages and frameworks, but generally favour these stacks:&lt;/p&gt;
&lt;h4&gt;Stack 1&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Frontend&lt;/em&gt;: React/Next.js, TailwindCSS, TypeScript&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Backend&lt;/em&gt;: Node.js, TypeScript&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Datastore&lt;/em&gt;: DynamoDB, S3, SQS&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Infrastructure&lt;/em&gt;: AWS, SST, Cloudflare Pages&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Stack 2&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Frontend&lt;/em&gt;: Laravel&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Backend&lt;/em&gt;: PHP/Laravel&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Datastore&lt;/em&gt;: PostgreSQL, Redis&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Infrastructure&lt;/em&gt;: Render.com, Cloudflare&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The skills listed above are not exhaustive. I&#39;m always happy picking up new technologies and languages as required.&lt;/p&gt;
&lt;p&gt;Software development goes beyond just coding. I take pride in producing robust code, complemented by concise documentation, and prioritize clear communication with both technical and non-technical individuals.&lt;/p&gt;
&lt;h3&gt;🌅 Engineering leadership, management and training&lt;/h3&gt;
&lt;p&gt;Encompassed within many of my roles is leadership and training. I believe I can help your organisation to train and retain your engineers - and build a thriving engineering culture in the process. To do this, my approach focuses on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating &lt;em&gt;systems&lt;/em&gt; that empower the team to drive their own growth.&lt;/li&gt;
&lt;li&gt;Introducing &lt;em&gt;fair and accurate metrics&lt;/em&gt; to motivate the team and measure progress.&lt;/li&gt;
&lt;li&gt;Building a culture of sharing and learning by introducing &lt;em&gt;knowledge shares&lt;/em&gt;, &lt;em&gt;pair programming&lt;/em&gt; and &lt;em&gt;code reviews&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Establishing &lt;em&gt;psychological safety&lt;/em&gt; by means of &lt;em&gt;blameless postmortems&lt;/em&gt;, &lt;em&gt;retrospectives&lt;/em&gt; and &lt;em&gt;one-to-ones&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Making sure that communication between stakeholders and the team is &lt;em&gt;transparent&lt;/em&gt; and &lt;em&gt;effective&lt;/em&gt;, with shared goals and values.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I have primarily worked with engineering teams who have a headcount of 1-20 people and successfully introduced these practices to them.&lt;/p&gt;
&lt;h3&gt;📟 Fixed cost services&lt;/h3&gt;
&lt;p&gt;These fixed price services are designed to give your team a boost in a particular area. They are designed to be delivered in a short time frame and provide you with a clear plan of action to move forward.&lt;/p&gt;
&lt;p&gt;If you would like more information about the services below please get in touch.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cloud Architecture Plan or Review - £1,300&lt;/li&gt;
&lt;li&gt;Website/Application Performance Audit - £1,500&lt;/li&gt;
&lt;li&gt;Observability Audit - £750&lt;/li&gt;
&lt;li&gt;Team Strategy Documentation - £2,500&lt;/li&gt;
&lt;li&gt;Team Training - £1,250&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All prices are exclusive of VAT.&lt;/p&gt;
&lt;h3&gt;🎁 Products&lt;/h3&gt;
&lt;p&gt;In addition to the services above, I&#39;m also working on a number of products that I have built as a result of problems observed in the organisations I have worked with. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LoginLlama&lt;/strong&gt; - A suspicious login detection service for your applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These products can be provided to your organisation for a discounted rate and implemented by myself.&lt;/p&gt;
&lt;h2&gt;Work with me&lt;/h2&gt;
&lt;p&gt;If you&#39;re interested in the above, and want to work together, please drop me an email at &lt;code&gt;me at joshghent.com&lt;/code&gt;. Or book a call with me &lt;a href=&quot;https://calendly.com/joshghent/consultation&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I look forward to hearing from you!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>TIL - How to send an SQS message from a Lambda inside a VPC</title>
    <link href="/sqs-inside-vpc-lambda/"/>
    <updated>2023-08-15T00:00:00Z</updated>
    <id>/sqs-inside-vpc-lambda/</id>
    <content type="html">&lt;p&gt;Sending a message to SQS from a Lambda inside a VPC should be trivial. Unfortunately, this is AWS so they like to make it as complex as possible.&lt;/p&gt;
&lt;p&gt;Here is the process to follow if you&#39;re stuck:&lt;/p&gt;
&lt;h2&gt;Setup&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Place the lambda function in a subnet on the VPC.&lt;/li&gt;
&lt;li&gt;Create a new VPC endpoint with a Full Access policy in that same VPC and subnet as the lambda. You will need to create the VPC endpoint first, then click on it, then select the &amp;quot;Policy&amp;quot; tab, then edit the policy.&lt;/li&gt;
&lt;li&gt;Create a security group with HTTPS -&amp;gt; 0.0.0.0/0 Inbound and All traffic outbound.&lt;/li&gt;
&lt;li&gt;Attach this security group to the VPC endpoint.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Sample Lambda Function code&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { SendMessageBatchCommand, SQSClient } from &amp;quot;@aws-sdk/client-sqs&amp;quot;;
import chunk from &amp;quot;lodash/chunk&amp;quot;;
import { logger } from &amp;quot;./logger&amp;quot;;

const BATCH_SIZE = 10;

// Read from the Lambda environment configuration
const { AWS_REGION, SQS_VPC_ENDPOINT, QUEUE_URL } = process.env;

const sqsClient = new SQSClient({
  region: AWS_REGION,
  // Point the client at the VPC endpoint when one is configured
  endpoint: SQS_VPC_ENDPOINT || undefined,
});

export const addToQueue = async (
  messages: Array&amp;lt;{ type: string; data: Record&amp;lt;string, string&amp;gt; }&amp;gt;
): Promise&amp;lt;void&amp;gt; =&amp;gt; {
  logger.debug(
    `Sending ${messages.length} emails. Data: ${JSON.stringify(messages)}`
  );
  const batches = chunk(messages, BATCH_SIZE);

  logger.debug(`Sending email to ${QUEUE_URL}`);

  await Promise.all(
    batches.map(async (batch) =&amp;gt; {
      const command = new SendMessageBatchCommand({
        QueueUrl: QUEUE_URL,
        Entries: batch.map((message, index) =&amp;gt; ({
          Id: String(index),
          MessageBody: JSON.stringify(message),
        })),
      });

      try {
        await sqsClient.send(command);
        logger.debug(`Sent SQS email(s) to queue`);
      } catch (err) {
        logger.error(`Error when queueing email: ${JSON.stringify(err)}`);
      }
    })
  );
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Adapted from the answer here: &lt;a href=&quot;https://github.com/aws/aws-sdk-js/issues/3203#issuecomment-786372586&quot;&gt;https://github.com/aws/aws-sdk-js/issues/3203#issuecomment-786372586&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Pull Request Environments in GitHub Actions (with SST, AWS and Cloudflare pages)</title>
    <link href="/github-actions-pr-env/"/>
    <updated>2023-07-24T00:00:00Z</updated>
    <id>/github-actions-pr-env/</id>
    <content type="html">&lt;p&gt;Pull request environments are a useful tool to have in your CI/CD pipeline. They allow you to preview your changes in a production-like environment before merging them into the main branch. You can send these environments to stakeholders, QA teams and even customers to request early feedback.&lt;/p&gt;
&lt;p&gt;Recently, I was tasked with adding this into a project.
The project used:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SST&lt;/li&gt;
&lt;li&gt;AWS (for deploying the backend)&lt;/li&gt;
&lt;li&gt;Cloudflare pages (for deploying the frontend)&lt;/li&gt;
&lt;li&gt;NextJS frontend&lt;/li&gt;
&lt;li&gt;NodeJS backend&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I couldn&#39;t find a complete guide on how to do this, so I thought I&#39;d write one.&lt;/p&gt;
&lt;h2&gt;The setup&lt;/h2&gt;
&lt;p&gt;Principally we have 4 things to do&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Deploy the backend&lt;/li&gt;
&lt;li&gt;Deploy the frontend (pointing to that backend)&lt;/li&gt;
&lt;li&gt;Post a comment to the pull request with the URLs.&lt;/li&gt;
&lt;li&gt;Destroy the environment when the pull request is closed or merged.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&#39;s break this down.&lt;/p&gt;
&lt;h3&gt;Starting out&lt;/h3&gt;
&lt;p&gt;First we need to setup the workflow. We&#39;ll trigger for new &lt;code&gt;pull_requests&lt;/code&gt; and give permissions to the workflow to allow it to post comments, access secrets etc.&lt;/p&gt;
&lt;p&gt;Additionally, we&#39;ll add a &lt;code&gt;PR_PREFIX&lt;/code&gt; variable. This is so that we can deploy multiple pull request environments at the same time.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Throughout the jobs, you will likely see code to filter out &lt;code&gt;dependabot&lt;/code&gt; triggers. This is because these jobs will not work when triggered by dependabot. You can remove these if you don&#39;t use dependabot.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: Pull Request Ephemeral Environment

on:
  pull_request:

permissions:
  contents: write
  pull-requests: write
  id-token: write
  deployments: write

env:
  PR_PREFIX: pr-${{ github.event.pull_request.number }}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy the backend&lt;/h3&gt;
&lt;p&gt;As we&#39;re using SST, this is pretty easy.&lt;/p&gt;
&lt;p&gt;One caveat I did discover: if you have multiple stacks that you want to deploy individually, you&#39;ll need to capture the output from the &lt;code&gt;.sst/outputs.json&lt;/code&gt; file before deploying the next stack. This is because each SST deploy overwrites the existing &lt;code&gt;.sst/outputs.json&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;Here is the job for deploying the backend:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;backend:
  name: Deploy Backend for PR
  if: github.actor != &#39;dependabot[bot]&#39;
  runs-on: ubuntu-latest
  outputs:
    api-endpoint: ${{ steps.sst-api-outputs.outputs.apiUrl }}
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Configure Non Prod AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
        aws-region: ${{ secrets.AWS_REGION }}
    - uses: actions/setup-node@v3
      with:
        node-version: 18
    - name: Cache node modules
      uses: actions/cache@v3
      with:
        path: node_modules
        key: node_modules-${{hashFiles(&#39;package-lock.json&#39;)}}
        restore-keys: node_modules- # Fall back to the latest cache if none matches the current package-lock.json
    - run: npm install
    - run: npm run build
    - name: Deploy Global
      run: npx sst deploy global --stage $PR_PREFIX
    - name: Deploy API
      run: npx sst deploy api --stage $PR_PREFIX
    - name: Extract Api URL and set output
      id: sst-api-outputs
      run: |
        cat .sst/outputs.json
        API_URL=$(jq -r &#39;.[].ApiEndpoint | select(. != null)&#39; .sst/outputs.json)
        echo &amp;quot;apiUrl=$API_URL&amp;quot; &amp;gt;&amp;gt; $GITHUB_OUTPUT
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy the frontend&lt;/h3&gt;
&lt;p&gt;Next we need to deploy the frontend. As we&#39;re using Cloudflare pages, this is also pretty easy!&lt;/p&gt;
&lt;p&gt;This works using the &lt;code&gt;cloudflare/pages-action&lt;/code&gt;, which deploys the build to a unique URL as a preview deployment for that Pages project.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;frontend:
  name: Deploy Frontend for PR
  if: github.actor != &#39;dependabot[bot]&#39;
  runs-on: ubuntu-latest
  outputs:
    url: ${{ steps.cloudflare-publish.outputs.url }}
  needs:
    - backend
  steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: 18
    - name: Cache node modules
      uses: actions/cache@v3
      with:
        path: node_modules
        key: node_modules-${{hashFiles(&#39;package-lock.json&#39;)}}
        restore-keys: node_modules- # Fall back to the latest cache if none matches the current package-lock.json
    - name: Cache NextJS Build
      uses: actions/cache@v3
      with:
        path: |
          ~/.npm
          ${{ github.workspace }}/packages/web/.next/cache
        # Generate a new cache whenever packages or source files change.
        key: ${{ runner.os }}-nextjs-${{ hashFiles(&#39;**/package-lock.json&#39;) }}-${{ hashFiles(&#39;packages/web/**/*.[jt]s&#39;, &#39;packages/web/**/*.[jt]sx&#39;) }}
        # If source files changed but packages didn&#39;t, rebuild from a prior cache.
        restore-keys: |
          ${{ runner.os }}-nextjs-${{ hashFiles(&#39;**/package-lock.json&#39;) }}-
    - run: npm install
    - name: Build
      run: npm run build -w packages/web
      env:
        NEXT_PUBLIC_API_URL: ${{ needs.backend.outputs.api-endpoint }}
    - name: Publish
      uses: cloudflare/pages-action@v1
      id: cloudflare-publish
      with:
        apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
        accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        projectName: ${{ secrets.CLOUDFLARE_PAGES_PROJECT_NAME }}
        directory: packages/web/dist
        gitHubToken: ${{ secrets.GITHUB_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Post a comment&lt;/h3&gt;
&lt;p&gt;Next we need to post a comment to the pull request with the URLs from the output.
I copied the format of the message from the one that the native Cloudflare integration uses because it looks quite good.&lt;/p&gt;
&lt;p&gt;If there is no comment, it will create one. If there is one, it will update it. Simple!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;comment:
  name: Comment on PR
  if: github.actor != &#39;dependabot[bot]&#39;
  runs-on: ubuntu-latest
  needs:
    - backend
    - frontend
  steps:
    - name: Find Comment
      uses: peter-evans/find-comment@v2
      if: success() &amp;amp;&amp;amp; github.event.number
      id: fc
      with:
        issue-number: ${{ github.event.number }}
        body-includes: &amp;quot;🚀 Successfully deployed preview environment&amp;quot;

    - name: Create Comment
      uses: peter-evans/create-or-update-comment@v3
      if: success() &amp;amp;&amp;amp; github.event.number
      with:
        issue-number: ${{ github.event.number }}
        comment-id: ${{ steps.fc.outputs.comment-id }}
        edit-mode: replace
        body: |
          ## 🚀 Successfully deployed preview environment

          &amp;lt;table&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Latest commit:&amp;lt;/strong&amp;gt; &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;
          &amp;lt;code&amp;gt;${{ github.sha }}&amp;lt;/code&amp;gt;
          &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;
          &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Status:&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;amp;nbsp;✅&amp;amp;nbsp; Deploy successful!&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;
          &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Preview URL:&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;
          &amp;lt;a href=&#39;${{ needs.frontend.outputs.url }}&#39;&amp;gt;${{ needs.frontend.outputs.url }}&amp;lt;/a&amp;gt;
          &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;
          &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;API URL:&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;
          &amp;lt;a href=&#39;${{ needs.backend.outputs.api-endpoint }}&#39;&amp;gt;${{ needs.backend.outputs.api-endpoint }}&amp;lt;/a&amp;gt;
          &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;
          &amp;lt;/table&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Clean Up&lt;/h3&gt;
&lt;p&gt;Finally, we need to clean up the resources that we created. This is done by using the &lt;code&gt;sst remove&lt;/code&gt; command. Cloudflare pages resources are automatically cleaned up for us.&lt;/p&gt;
&lt;p&gt;This process requires a completely new GitHub Actions workflow that triggers when pull requests are closed (meaning merged or manually closed).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: Destroy PR Environment

# only trigger on pull request closed events
on:
  pull_request:
    types: [closed]

env:
  PR_PREFIX: pr-${{ github.event.pull_request.number }}

permissions:
  id-token: write
  contents: read

jobs:
  remove:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure Non Prod AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ secrets.AWS_REGION }}
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - name: Cache node modules
        uses: actions/cache@v3
        with:
          path: node_modules
          key: node_modules-${{hashFiles(&#39;package-lock.json&#39;)}}
          restore-keys: node_modules- # Fall back to the latest cache if none matches the current package-lock.json
      - run: npm install
      - run: npx sst remove --stage $PR_PREFIX
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In addition to this, we need to make sure that our default removal policy is set to remove all resources. By default, SST will never remove DynamoDB tables, S3 buckets and other data-sensitive resources. In this case though, we want to completely remove everything.&lt;/p&gt;
&lt;p&gt;In your &lt;code&gt;sst.config.ts&lt;/code&gt; file, add the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;if (app.stage !== &amp;quot;production&amp;quot;) {
  app.setDefaultRemovalPolicy(&amp;quot;destroy&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will remove all resources when the stage is not &lt;code&gt;production&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;And that&#39;s all! You now have a custom preview environment for your pull requests.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How Part Time has helped me in life</title>
    <link href="/part-time/"/>
    <updated>2023-07-11T00:00:00Z</updated>
    <id>/part-time/</id>
    <content type="html">&lt;p&gt;Part time working has been the best career move I&#39;ve ever made. This change was natural and made a lot of sense. But, I understand it&#39;s hard to fathom for many people. Do I just sit around? Do I actually get anything done?&lt;/p&gt;
&lt;p&gt;I wanted to expand on how I&#39;m working now and why I believe it will be the next era of working.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I believe that in the coming decade, just as remote work has arisen this decade, part time work will become increasingly commonplace.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;How I work now&lt;/h3&gt;
&lt;p&gt;Currently, I work 3 days per week. These days vary from client to client but are generally fixed.&lt;/p&gt;
&lt;p&gt;The other 2 days a week I do voluntary work for charity.&lt;/p&gt;
&lt;p&gt;My work is solely remote and centres around a few key clients. I also create &lt;a href=&quot;https://loginllama.app&quot;&gt;SaaS products on the side&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Why I find part time work so great&lt;/h3&gt;
&lt;p&gt;Here&#39;s the secret: you don&#39;t need 35 hours a week to accomplish your work. Often it takes half that time. Parkinson&#39;s law is real. By having 35 hours, &amp;quot;stuff&amp;quot; expands to fill that time. After all, if you&#39;re a software engineer, how much time do you &lt;em&gt;actually&lt;/em&gt; spend coding versus planning meetings, catch ups, speaking with stakeholders and stand ups? In addition to this, the complexity of software nowadays means that it takes much longer to create new features and fix bugs.&lt;/p&gt;
&lt;p&gt;Parkinson&#39;s law still applies when it comes to part time working.&lt;/p&gt;
&lt;p&gt;The difference is that fewer hours force me to cut through the noise and focus on outputs. Let me break these down.&lt;/p&gt;
&lt;h4&gt;Cutting through the noise&lt;/h4&gt;
&lt;p&gt;Seen a calendar that looks like a losing Tetris game? I bet you have. Because everyone has this structure, people think nothing of adding another meeting.&lt;/p&gt;
&lt;p&gt;By not working on some days, you can simply decline a meeting because you don&#39;t work that day.&lt;/p&gt;
&lt;p&gt;You can also ask questions - does this &lt;em&gt;need&lt;/em&gt; to be a meeting? Do I &lt;em&gt;personally&lt;/em&gt; need to attend?&lt;/p&gt;
&lt;p&gt;That critical thinking greatly reduces the amount of distracting non-work that you&#39;re part of.&lt;/p&gt;
&lt;p&gt;Beyond meetings, it also prevents you from becoming a single point of failure. People get used to relying on you for answers and help if you&#39;re around all the time. But, working part time pushes your team to work from documentation and share knowledge. Critically, it cuts down on you being constantly queried.&lt;/p&gt;
&lt;h4&gt;Focusing on outputs, not inputs&lt;/h4&gt;
&lt;p&gt;If you are contracted to work 35 hours a week, what is the incentive to work hard all of that time? Very little. Of course, you don&#39;t want to miss deadlines. But people are used to scope creep and tickets taking longer than their allotted points. So what&#39;s the harm if you slack off a bit? After all, &lt;a href=&quot;https://www.vouchercloud.com/resources/office-worker-productivity&quot;&gt;Vouchercloud found that their office workers were &amp;quot;productive&amp;quot; for a mere 2 hours and 23 minutes per day&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Fewer hours mean a shift in mindset. My contracts are about deliverables within a set weekly timeframe. The spotlight is on what I&#39;ve done, not time spent.&lt;/p&gt;
&lt;p&gt;Overall, I&#39;ve found working part time incredibly beneficial for both myself and my clients. It&#39;s helped me reduce the amount of busy work, balance work and life, and importantly, makes things happen.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why are software companies so obsessed with doing anything but work?</title>
    <link href="/complexity/"/>
    <updated>2023-06-21T00:00:00Z</updated>
    <id>/complexity/</id>
    <content type="html">&lt;p&gt;Recently, my favourite time building software I&#39;ve ever had has been developing Loginllama.&lt;/p&gt;
&lt;p&gt;It&#39;s so abundantly simple.&lt;/p&gt;
&lt;p&gt;Just NextJS pushed to Vercel.&lt;/p&gt;
&lt;p&gt;I don&#39;t write fancy commit messages - just &amp;quot;x&amp;quot;. And then deploy straight to production.&lt;/p&gt;
&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Creating a new product&lt;br /&gt;&lt;br /&gt;❌ No development environment&lt;br /&gt;❌ No local development&lt;br /&gt;❌ No commit messages (just &amp;quot;x&amp;quot;)&lt;br /&gt;&lt;br /&gt;It&amp;#39;s the fastest I&amp;#39;ve ever shipped.&lt;/p&gt;&amp;mdash; Josh Ghent (@joshghent) &lt;a href=&quot;https://twitter.com/joshghent/status/1625458796351848448?ref_src=twsrc%5Etfw&quot;&gt;February 14, 2023&lt;/a&gt;&lt;/blockquote&gt; &lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;
&lt;p&gt;Contrasting this with most of my &amp;quot;jobby job&amp;quot; software development, and the joy is quickly sapped.&lt;/p&gt;
&lt;p&gt;To work on anything the ticket needs scoping and then writing. Followed by an hour long &amp;quot;refinement&amp;quot; to point out that Kevin the designer still hasn&#39;t attached the Figma link to the ticket (come on Kevin!), and Janine is raising concerns that it&#39;s not possible to do (it&#39;s technology, it&#39;s always possible!).&lt;/p&gt;
&lt;p&gt;Then you come to evaluate the amount of time it will take which, if you don&#39;t work in software, involves one simple step: stick your head out the window, gather a general &amp;quot;feeling&amp;quot;, then call it a 5, or a 3, or an 8 - none of it means anything anyway. Software estimates are one step away from the ridiculed pseudoscience of homeopathy. It&#39;s a total farce, but some people believe in it. And that&#39;s the important part.&lt;/p&gt;
&lt;p&gt;Now that you&#39;ve got your estimation, your project manager and your team sit down and prioritise it and divide the work into a sprint. Basically a sprint is the amount of work you can conceivably regurgitate to meet an imaginary deadline.&lt;/p&gt;
&lt;p&gt;Of course, bug fixes are priority number 1, you think. Think again! If it doesn&#39;t affect &lt;em&gt;that many&lt;/em&gt; customers then it&#39;s fine! Just add more features and the shareholders are happy.&lt;/p&gt;
&lt;p&gt;Before you get started on &lt;em&gt;the actual work&lt;/em&gt;, you need to meet with your manager to discuss OKR&#39;s - the things you hope to achieve to make the company more money.&lt;/p&gt;
&lt;p&gt;When you get back, you check your inbox and see Sarah has requested you give 360 feedback on her. So you spend the next hour trying to understand the questions and then just resigning and giving them &amp;quot;excellent&amp;quot; on everything.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I could go on, but I&#39;m hoping some of this resonates with you.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Of course, my scrappy &amp;quot;x&amp;quot; commits and pushing to production does not really scale for a large team. And arguably, rushing to code before looking at the metrics and requirements is foolish. But, I would hope, there is a happy medium.&lt;/p&gt;
&lt;p&gt;Unfortunately, many organisations I have worked for solve this problem by aggressively hiring. Even during a down period for the economy, most software companies (which is to say, companies) are hiring aggressively. They need QA engineers, frontend engineers, full stack engineers, devops engineers - you name it, they need it.&lt;/p&gt;
&lt;p&gt;But the question never gets asked of &lt;strong&gt;why we have all this complexity in the first place&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Nobody wonders why we need more documentation than the Great Library of Alexandria to write an app that helps you find someone to walk your dog.&lt;/p&gt;
&lt;p&gt;It&#39;s a classic case of Parkinson&#39;s Law. Technology &lt;u&gt;should&lt;/u&gt; enable us to do more with less. In reality, the opposite is true.&lt;/p&gt;
&lt;h2&gt;What&#39;s the solution?&lt;/h2&gt;
&lt;p&gt;Unfortunately, there isn&#39;t a one size fits all solution. And in many cases, lots of people don&#39;t want a solution, because it would mean admitting that their job is unnecessary and their work is unvalued.&lt;/p&gt;
&lt;p&gt;In any case, you can try to do the following&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ask good questions.&lt;/strong&gt; Questions such as &amp;quot;help me to understand why X&amp;quot; or &amp;quot;what is preventing us from Y approach?&amp;quot; help everyone clarify their thoughts and justify their position. This shouldn&#39;t be an interrogation but rather a friendly discussion with the aim of getting things done, not out of a personal vendetta against busywork (however tempting).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Be careful what you measure.&lt;/strong&gt; Goodhart&#39;s law states that when a measure becomes a target, it ceases to be a good measure. For example, if you want to measure the productivity of car builders, you could measure how many cars are built per day. But if all these cars break down due to quality issues, of what value is the measurement? Instead, try to measure things that lead to better outcomes. For example, in the car factory you could measure the ratio of vehicles produced to the number that have defects.&lt;/p&gt;
&lt;img src=&quot;./../../../content/assets/images/meeting-faceoff.png&quot; alt=&quot;sprint meetings vs a single developer&quot; height=&quot;300&quot; /&gt;
&lt;p&gt;&lt;strong&gt;Aggressively cull meetings.&lt;/strong&gt; It&#39;s already been written about at length how meetings can be corrosive for deep work. But still, most organisations have lots of regular and &amp;quot;quick catch up&amp;quot; meetings. As teams scale, complexity of communication also scales. It&#39;s not uncommon to have teams dedicate 1 day per &amp;quot;work cycle&amp;quot; to meetings - sprint reviews, refinements and planning. Be a leader and ask if these meetings need to be in place, can they be reduced by doing more work async? Are they achieving the outcomes they set out to solve? Can we focus the discussion more?&lt;/p&gt;
&lt;p&gt;Busy work will always exist, but it doesn&#39;t have to just be accepted. But be kind - some people&#39;s entire life is busy work.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Lifehacks</title>
    <link href="/lifehacks/"/>
    <updated>2023-06-09T00:00:00Z</updated>
    <id>/lifehacks/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;Inspired by https://guzey.com/lifehacks/&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Last updated&lt;/strong&gt;: 2023-06-09&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;you&#39;ll get far more benefit from reading books than anything on the web&lt;/li&gt;
&lt;li&gt;Be cautious of people who overcomplicate things&lt;/li&gt;
&lt;li&gt;be requirements led&lt;/li&gt;
&lt;li&gt;ask good questions&lt;/li&gt;
&lt;li&gt;time spent configuring a system (productivity, software, computer) is time wasted.&lt;/li&gt;
&lt;li&gt;the goal of a productivity system is to be simple and reduce cognitive overhead. evaluate if yours is accomplishing those goals.&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>Setup a Repo in Github</title>
    <link href="/github-repo-setup/"/>
    <updated>2023-05-03T00:00:00Z</updated>
    <id>/github-repo-setup/</id>
    <content type="html">&lt;p&gt;I end up creating quite a few repos in Github for customer projects. And I always end up having to remember how to best set them up. In line with the whole &amp;quot;blog-umentation&amp;quot; thing, I thought it would be best to write it down for myself.&lt;/p&gt;
&lt;p&gt;This setup lends itself to a &amp;quot;modular monolith&amp;quot; but can be used for any kind of project.&lt;/p&gt;
&lt;h2&gt;Features&lt;/h2&gt;
&lt;p&gt;This setup brings you the following&lt;/p&gt;
&lt;p&gt;✅ Conventional commits check (making sure commits adhere to guidelines)&lt;/p&gt;
&lt;p&gt;✅ Ensure checklists in PRs (as defined in a template) are complete&lt;/p&gt;
&lt;p&gt;✅ &lt;code&gt;main&lt;/code&gt; branch is protected&lt;/p&gt;
&lt;p&gt;✅ Dependabot alerts are setup for security&lt;/p&gt;
&lt;p&gt;✅ Dependabot is setup for github actions and npm packages&lt;/p&gt;
&lt;p&gt;✅ Jest coverage reports get added to PRs&lt;/p&gt;
&lt;p&gt;✅ PRs are scanned for secrets&lt;/p&gt;
&lt;h2&gt;Steps&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Go to Settings and change the following&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Protect the main branch - require 1 or 2 approvers, prevent force push&lt;/li&gt;
&lt;li&gt;Change merge types to Squash and Merge only&lt;/li&gt;
&lt;li&gt;Enable dependabot&lt;/li&gt;
&lt;li&gt;Enable automatically delete head branch&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add the following Github actions&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Secrets Scan (https://github.com/marketplace/actions/trufflehog-oss)&lt;/li&gt;
&lt;li&gt;Google release please (https://github.com/google-github-actions/release-please-action)&lt;/li&gt;
&lt;li&gt;Require PR checklist complete (https://github.com/mheap/require-checklist-action)&lt;/li&gt;
&lt;li&gt;Conventional Commits (https://github.com/amannn/action-semantic-pull-request)&lt;/li&gt;
&lt;li&gt;Coverage report (https://github.com/ArtiomTr/jest-coverage-report-action)&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add the dependabot config (&lt;code&gt;.github/dependabot.yml&lt;/code&gt;)&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 2

updates:
  - package-ecosystem: &amp;quot;github-actions&amp;quot;
    directory: &amp;quot;/&amp;quot;
    schedule:
      interval: &amp;quot;daily&amp;quot;

  - package-ecosystem: &amp;quot;npm&amp;quot;
    directory: &amp;quot;/&amp;quot;
    schedule:
      interval: &amp;quot;weekly&amp;quot;
    rebase-strategy: &amp;quot;auto&amp;quot;
    open-pull-requests-limit: 2
    ignore:
      - dependency-name: &amp;quot;*&amp;quot;
        update-types: [&amp;quot;version-update:semver-major&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Hopefully this helps you setup your Github repos faster!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>SQS, SNS, Eventbridge, DynamoDB - Choosing the right queue system in AWS</title>
    <link href="/aws-queues/"/>
    <updated>2023-05-02T00:00:00Z</updated>
    <id>/aws-queues/</id>
    <content type="html">&lt;p&gt;AWS has so many different queuing services.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/sns/&quot;&gt;SNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/eventbridge/&quot;&gt;Eventbridge&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/sqs/&quot;&gt;SQS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/amazon-mq/&quot;&gt;Amazon MQ&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/dynamodb/&quot;&gt;DynamoDB&lt;/a&gt; - a database, but it can also trigger Lambdas, so it&#39;s kind of a queue!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For newcomers to AWS, having so many solutions for a seemingly simple problem can be overwhelming.&lt;/p&gt;
&lt;p&gt;Here is a breakdown of each queuing service and when you might use it.&lt;/p&gt;
&lt;h2&gt;SQS&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sending&lt;/strong&gt;: 1 to 1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it is&lt;/strong&gt;: Literally the &amp;quot;Simple Queue Service&amp;quot;. Does what it says on the tin: send a message to a queue, have it consumed, easy.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use it:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Buffer for API requests or a 3rd party system. E.g., if you need to send data to a 3rd party system but don&#39;t want to hit their rate limit, and want to add resiliency, SQS can be used for this. You could also use it to buffer writes to a database.&lt;/li&gt;
&lt;li&gt;Queuing jobs. E.g., if you have a pipeline to optimise images and upload them to a CDN, you can use SQS to queue up those images. This means you have resiliency if an optimisation fails (it will auto-retry) and will not overwhelm your downstream systems. A queue message is also long-lived (unlike an API request) so you don&#39;t need to worry about timeouts.&lt;/li&gt;
&lt;/ul&gt;
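&lt;p&gt;As a sketch of the job-queuing pattern above (assuming the AWS SDK v3 for JavaScript and a &lt;code&gt;QUEUE_URL&lt;/code&gt; environment variable, both of which are my own stand-ins here):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { SQSClient, SendMessageCommand } from &amp;quot;@aws-sdk/client-sqs&amp;quot;;

const sqs = new SQSClient({});

// Queue an image-optimisation job; the consumer picks it up
// whenever it has capacity, and SQS retries on failure.
await sqs.send(
  new SendMessageCommand({
    QueueUrl: process.env.QUEUE_URL,
    MessageBody: JSON.stringify({ imageKey: &amp;quot;uploads/photo.jpg&amp;quot; }),
  })
);
&lt;/code&gt;&lt;/pre&gt;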
&lt;h2&gt;SNS&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sending&lt;/strong&gt;: 1 to Many&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it is&lt;/strong&gt;: High throughput &amp;quot;fan-out&amp;quot; message distribution.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use it&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You want to send SMS or Mobile notifications.&lt;/li&gt;
&lt;li&gt;You want a dumb pipe to send to lots of downstream targets.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Eventbridge&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sending&lt;/strong&gt;: 1 to Many&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it is&lt;/strong&gt;: Event buses. Think of them like queues, but messages can go to multiple places. Similar to SNS, but you can define rules for which messages go to which consumer.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use it&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Need to distribute to multiple targets&lt;/li&gt;
&lt;li&gt;Designing an event driven architecture&lt;/li&gt;
&lt;li&gt;You want to integrate with third parties (like Datadog, Shopify, Zendesk etc.)&lt;/li&gt;
&lt;/ul&gt;
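&lt;p&gt;A minimal sketch of publishing to a bus with the AWS SDK v3 (the bus name, source and detail type below are made up for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { EventBridgeClient, PutEventsCommand } from &amp;quot;@aws-sdk/client-eventbridge&amp;quot;;

const eventbridge = new EventBridgeClient({});

// Publish an event; rules on the bus decide which
// consumers (Lambda, SQS, third parties etc.) receive it.
await eventbridge.send(
  new PutEventsCommand({
    Entries: [
      {
        EventBusName: &amp;quot;orders-bus&amp;quot;,
        Source: &amp;quot;app.orders&amp;quot;,
        DetailType: &amp;quot;OrderCreated&amp;quot;,
        Detail: JSON.stringify({ orderId: &amp;quot;123&amp;quot; }),
      },
    ],
  })
);
&lt;/code&gt;&lt;/pre&gt;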
&lt;h2&gt;Amazon MQ&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sending&lt;/strong&gt;: 1 to Many (depending on configuration)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it is&lt;/strong&gt;: AWS&#39;s managed RabbitMQ solution&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use it&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You have a pre-existing RabbitMQ cluster and you don&#39;t want to migrate to SQS (yet).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;DynamoDB&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sending&lt;/strong&gt;: 1 to 1&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What it is&lt;/strong&gt;: NoSQL database&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use it (for queues)&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You want to keep logs of data your system has processed. e.g., you could use a table as a store for a CMS. When you upload a new post, it triggers a pipeline that sends it to your social media accounts.&lt;/li&gt;
&lt;/ul&gt;
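&lt;p&gt;For the CMS pipeline example, a stream-triggered Lambda handler might look like this (a sketch only: the table needs streams enabled, and &lt;code&gt;publishToSocials&lt;/code&gt; is a hypothetical helper):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import type { DynamoDBStreamEvent } from &amp;quot;aws-lambda&amp;quot;;

export const handler = async (event: DynamoDBStreamEvent) =&amp;gt; {
  for (const record of event.Records) {
    // Only react to newly inserted posts
    if (record.eventName === &amp;quot;INSERT&amp;quot; &amp;amp;&amp;amp; record.dynamodb?.NewImage) {
      const title = record.dynamodb.NewImage.title?.S;
      // await publishToSocials(title); // hypothetical helper
      console.log(&amp;quot;New post:&amp;quot;, title);
    }
  }
};
&lt;/code&gt;&lt;/pre&gt;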
&lt;p&gt;Hopefully you feel a lot more confident in making an architectural decision about which queuing technology to use in AWS. If you have an application and you&#39;re confused as to what to use, send me a message on &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;twitter @joshghent&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Raising £1000 for MSF</title>
    <link href="/running/"/>
    <updated>2023-04-28T00:00:00Z</updated>
    <id>/running/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; I&#39;m raising money for Médecins Sans Frontières by running my first half marathon. You can donate &lt;a href=&quot;https://justgiving.com/fundraising/josh-ghent&quot;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I love running. It&#39;s such a simple exercise. All you need is some good shoes, good music and, ideally, good weather.&lt;/p&gt;
&lt;p&gt;I love that I can run literally anywhere I am in the world and see new things and explore the terrain.&lt;/p&gt;
&lt;p&gt;When I first started, 5 kilometres was like climbing Everest. I wouldn&#39;t say I was unfit, but years in a sedentary job, with little attention paid to fitness, left me with weak stamina.&lt;/p&gt;
&lt;p&gt;Unfortunately it took some health problems from family members to prompt me to pay proper attention to my health.&lt;/p&gt;
&lt;p&gt;As I worked up from 5 kilometres to 10, then to 15, I was looking for another challenge. A full marathon has always been a goal of mine, but seemed like too much of a stretch.&lt;/p&gt;
&lt;p&gt;When I saw the Birmingham Great Run Half Marathon advertised, I knew that was just the challenge I needed.&lt;/p&gt;
&lt;p&gt;Alongside this increased attention to fitness, I was also reading through Peter Singer&#39;s book, &amp;quot;The Life You Can Save&amp;quot;. I won&#39;t rehash it here because it&#39;s been discussed at length elsewhere. But, crucially, it made me realise that although my wife and I gave to charity already, we could do more.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Kill two birds with one stone.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So, the thought occurred to me - &amp;quot;why not run and raise money for charity at the same time?&amp;quot;.&lt;/p&gt;
&lt;p&gt;A few days later, I launched a fundraiser through JustGiving. I chose the charity MSF chiefly because of the great work they do and the high proportion of money they spend on their activities versus marketing, salaries etc.&lt;/p&gt;
&lt;p&gt;I&#39;m still quite worried about doing 21.1 kilometres. But the important thing is I am running and raising money for a great cause.&lt;/p&gt;
&lt;p&gt;I need your help to reach the goal of £1000. So, please, if you have a few quid donate to the fundraiser here: https://justgiving.com/fundraising/josh-ghent&lt;/p&gt;
&lt;p&gt;The business my wife and I run, Turbo Technologies, will be matching donations up to £1000 ourselves. Hopefully in total we can raise £2000.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Whimsical Software</title>
    <link href="/whimsical-software/"/>
    <updated>2023-03-24T00:00:00Z</updated>
    <id>/whimsical-software/</id>
    <content type="html">&lt;p&gt;Software takes itself too seriously. It&#39;s time to have some fun.
For far too long, software has been designed and created around one single metric - revenue.
And often, increasing revenue creates negative behaviour for us - the humans.
Designs are created to maximize clicks and to be more addictive than nicotine. We are constantly bombarded with notifications and alerts, and we are forced to make decisions in a split second.
On top of this, building software has never been more complex. Although we have a wonderful array of technology at our disposal to scale an application to millions of people, it&#39;s also incredibly complex to build and maintain.&lt;/p&gt;
&lt;p&gt;We cannot abandon these metrics. Ultimately, they keep the world turning. But we should introduce some &amp;quot;whimsy&amp;quot;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;| &lt;strong&gt;whimsical&lt;/strong&gt; &lt;i&gt;&lt;small&gt;wim-zi-kal&lt;/small&gt;&lt;/i&gt;
| &lt;i&gt;definition:&lt;/i&gt; playfully quaint or fanciful behaviour or humour.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We should be able to build software that is fun to use, that makes us smile, and that makes us feel good about ourselves.&lt;/p&gt;
&lt;p&gt;Enter Whimsical Software.&lt;/p&gt;
&lt;h2&gt;The Whimsical Software Manifesto&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Be playful.&lt;/strong&gt; Discard the boring blues and greys. Use bright colours, delightful interactions, playful sounds and animations. Software should be like a playground, not a prison. In some cases, people perhaps spend more time using your software than they do with their family. So, it should be an enjoyable place that allows them to do their job.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Be kind.&lt;/strong&gt; Most revenue generation from software arises from hoovering up masses of data. Whimsical software should be kind, capturing only the data needed to deliver a better experience. It should be kind to the environment, running on efficient architectures (e.g. serverless) and using minimal resources. And it should encourage us to get the task done, and then log off.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Be helpful.&lt;/strong&gt; How many times have you wondered what an error message was, how a particular edge case function could be done or how to use a particular feature? Whimsical software should be helpful, providing guidance when needed. And automating as much as possible.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Whimsical software is not a particular aesthetic or technology. It&#39;s a way of building tools to empower people, not to enslave them.
You could have an old school terminal interface, or a modern web application. It doesn&#39;t matter. What matters is that it&#39;s fun to use, and that it makes you smile.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Devblog - LoginLlama 001</title>
    <link href="/loginllama-001/"/>
    <updated>2023-03-23T00:00:00Z</updated>
    <id>/loginllama-001/</id>
    <content type="html">&lt;p&gt;I&#39;m starting work on a new SaaS!&lt;/p&gt;
&lt;p&gt;I want to document my process because I used to love devlogs on tumblr and tigsource.&lt;/p&gt;
&lt;p&gt;I am creating an API-as-a-Service that monitors suspicious login attempts.&lt;/p&gt;
&lt;p&gt;At the moment there are a couple of solutions in this space. But they fall down in a few areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bad pricing&lt;/li&gt;
&lt;li&gt;Bad marketing&lt;/li&gt;
&lt;li&gt;Rigid integration path. In the case of the main competitor, they insist on sending the emails out themselves, rather than letting you send them yourself. It also has no concept of teams, so it isn&#39;t set up for enterprise.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My MVP will be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A single API endpoint to check if a login is suspicious or not
&lt;ul&gt;
&lt;li&gt;Check against known VPNs, Tor exit nodes, time of day and other factors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Allow customers to control the factors that affect whether a login is considered suspicious or not.&lt;/li&gt;
&lt;/ul&gt;
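&lt;p&gt;As a sketch of the kind of factor-based check the MVP describes (the factor names, IP lists and &amp;quot;unusual hours&amp;quot; window below are illustrative assumptions, not LoginLlama&#39;s actual implementation):&lt;/p&gt;

```python
# Hypothetical sketch of a factor-based suspicious-login check.
# The IP lists and the "unusual hours" window are placeholder assumptions.
KNOWN_VPN_PREFIXES = {"10.8.0."}
KNOWN_TOR_EXIT_NODES = {"185.220.101.1"}

def check_login(ip, hour_of_day):
    """Return the reasons a login looks suspicious (empty list if clean)."""
    reasons = []
    if any(ip.startswith(prefix) for prefix in KNOWN_VPN_PREFIXES):
        reasons.append("ip_in_known_vpn_range")
    if ip in KNOWN_TOR_EXIT_NODES:
        reasons.append("ip_is_tor_exit_node")
    if hour_of_day in (1, 2, 3, 4):  # logins in the small hours stand out
        reasons.append("unusual_time_of_day")
    return reasons

print(check_login("185.220.101.1", 3))
```

&lt;p&gt;Returning the list of reasons rather than a bare boolean is what would let customers tune which factors count as suspicious.&lt;/p&gt;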
&lt;p&gt;Here are my bullet-pointed notes from this first stage of development!&lt;/p&gt;
&lt;p&gt;You can check out the in-progress site here: https://loginllama.app&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;started with the subscription starter pack from vercel&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Had to upgrade Next.js from 12 to 13&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The main fixes came from the codemod. Mostly had to update Link components to no longer wrap anchor elements&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then had to manually migrate the database&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And get the types from the repo&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;But then all setup!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the first edge function.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Decided I&#39;m going to go for an MVP of&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Basic admin interface to control the sensitivity&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Api to check if the login is suspicious and return the reason why&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Good looking doc pages&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;NodeJS SDK&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Allow customers to get the data in a computer readable way so they can send their own emails.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Homepage listing features&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Pay as you go pricing model&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;ul&gt;
&lt;li&gt;$1 per seat&lt;/li&gt;
&lt;li&gt;$0.00015 per request&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Teams, but with only one person to start&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added basic /api/v1/login/check endpoint&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added basic accounts screen with API Key&lt;img src=&quot;../../assets/images/accounts-screen.png&quot; alt=&quot;Screenshot 2023-02-06 at 02.27.33&quot; /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
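&lt;p&gt;The pay-as-you-go numbers above work out cheaply at typical volumes. A quick cost sketch (assuming the per-seat charge is monthly, which the notes don&#39;t state):&lt;/p&gt;

```python
# Cost sketch for the pricing listed above: $1 per seat plus
# $0.00015 per request. The monthly billing period is an assumption.
PRICE_PER_SEAT = 1.00
PRICE_PER_REQUEST = 0.00015

def monthly_cost(seats, requests):
    """Total bill in dollars for one billing period."""
    return seats * PRICE_PER_SEAT + requests * PRICE_PER_REQUEST

# e.g. a 5-seat team making 100,000 login checks in a month
print(round(monthly_cost(5, 100_000), 2))
```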
</content>
  </entry>
  
  <entry>
    <title>Rules of Thumb for creating APIs</title>
    <link href="/api-rules-of-thumb/"/>
    <updated>2023-03-14T00:00:00Z</updated>
    <id>/api-rules-of-thumb/</id>
<content type="html">&lt;p&gt;Building good software often starts with good principles upon which to design it.&lt;/p&gt;
&lt;p&gt;But, principles are high level and so often don&#39;t allow for much practical application. On the other hand, rules, although more specific and practical, are used by many as something to bash others over the head with.&lt;/p&gt;
&lt;p&gt;In both cases, these are hardly desirable to build good software.&lt;/p&gt;
&lt;p&gt;What we need is something in between: the practicality of a rule, with the heuristic flexibility of a principle.&lt;/p&gt;
&lt;p&gt;Enter the rule of thumb! We&#39;ve all used rules of thumb (&amp;quot;righty tighty, lefty loosey&amp;quot; anyone?). Recently, I was asked to create a list of rules for creating APIs. Instead, I created some rules of thumb, which I have documented here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Pluralise routes.&lt;/strong&gt; For example use &lt;code&gt;/users&lt;/code&gt; not &lt;code&gt;/user&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Make use of HTTP methods.&lt;/strong&gt; It sounds basic but many APIs break this convention. If you have an update, use a &lt;code&gt;PUT&lt;/code&gt; or &lt;code&gt;PATCH&lt;/code&gt;. If you&#39;re getting data, use &lt;code&gt;GET&lt;/code&gt; - get it? Why use this convention? Mostly because it&#39;s logical: it lets people predict your routes. For example, if you tell someone you have a route of &lt;code&gt;POST /users&lt;/code&gt; to create a user, then they may logically assume that doing &lt;code&gt;GET /users&lt;/code&gt; will fetch users as well.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Don&#39;t include method in the name.&lt;/strong&gt; For example, don&#39;t create routes such as a &lt;code&gt;/createUser&lt;/code&gt; or &lt;code&gt;/changeName&lt;/code&gt;. The action &amp;quot;create&amp;quot;, &amp;quot;change&amp;quot;, or whatever, should be clear from the HTTP method (POST, PATCH, DELETE etc.).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return HTTP response codes appropriately.&lt;/strong&gt; Only return a 200 when successful. And equally, only return a 4xx/5xx code when there is an error. HTTP response codes are a simple way for API consumers to respond to issues when calling your API.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return JSON to the client.&lt;/strong&gt; It makes data handling far easier on your frontend.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use query parameters for filtering data.&lt;/strong&gt; Don&#39;t use POST bodies for filtering data for a fetch request. Instead add filters via query parameters. For example &lt;code&gt;?age=88&amp;amp;surname=Ghent&amp;amp;company=Turbo Technologies&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Avoid backwards incompatible changes.&lt;/strong&gt; Generally, APIs are tightly coupled to the consumers built against them. To avoid having to tie releases together and issue lots of updates to your frontend, try to avoid backwards incompatible changes. For example, instead of removing a field, just add a new one. Or alter the response based on a &amp;quot;Version&amp;quot; header.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Keep versioning basic. It&#39;s not a lunar mission.&lt;/strong&gt; Favour using V1, V2 and so on rather than anything complex like dates, or semver versions. You likely won&#39;t need it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Keep data, business logic and request life cycle handling separate.&lt;/strong&gt; Clear separation of these different areas will make maintenance of your application far easier. Additionally, by segmenting the request life cycle (for example, error handling, authentication etc.) it standardizes responses, making it easier to use for consuming applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Put changes into OpenAPI first.&lt;/strong&gt; Specifications seem like a huge pain to write. If you&#39;re like me, then you likely want to jump straight into the code. But OpenAPI specifications are a great resource, letting your frontend team develop their code against a contract. Further, if your API is used by 3rd parties, the OpenAPI spec can serve as documentation for them.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mock DB responses when testing your API.&lt;/strong&gt; When testing, you want to test your code, not the database. Mocking the database response will accomplish this and mean you can run your test suite in your CI suite without having to spin up a local database.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emit events.&lt;/strong&gt; Event-based architectures are not practical for everything. REST APIs are far easier for handling basic CRUD operations. But beyond this, it&#39;s prudent to release events (to EventBridge or another event bus) from your CRUD API, which can then be consumed by other systems asynchronously. For example, let&#39;s say you have an endpoint to update a customer&#39;s details. When the customer updates their email, you need to send an email to verify the new address and notify the old address of the change. You could place all this logic in your CRUD API system. But, to improve resiliency and segment logic, it would be wiser to release a &amp;quot;CustomerEmailChanged&amp;quot; event, then have one consumer send the verification email and another notify the old email address of the change.&lt;/li&gt;
&lt;/ol&gt;
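&lt;p&gt;As a minimal sketch of rule 6, a query string like &lt;code&gt;?age=88&amp;amp;surname=Ghent&lt;/code&gt; maps naturally to a dictionary of filters (the data and helper below are made up for illustration):&lt;/p&gt;

```python
# Sketch of rule 6: each query parameter becomes one filter, and a
# record matches when every filter matches. The data is illustrative.
USERS = [
    {"surname": "Ghent", "age": "88", "company": "Turbo Technologies"},
    {"surname": "Smith", "age": "30", "company": "Acme"},
]

def filter_users(users, filters):
    """Return the users whose fields match every filter key/value."""
    return [u for u in users if all(u.get(k) == v for k, v in filters.items())]

# Equivalent to GET /users?surname=Ghent
print(filter_users(USERS, {"surname": "Ghent"}))
```

&lt;p&gt;Because the filters arrive as query parameters, the request stays cacheable and bookmarkable, which a POST body would not be.&lt;/p&gt;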
&lt;p&gt;This list is by no means complete, or exhaustive. But it does serve as a good set of guidelines to build your API around.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Simple sites</title>
    <link href="/simple-sites/"/>
    <updated>2023-03-09T00:00:00Z</updated>
    <id>/simple-sites/</id>
    <content type="html">&lt;p&gt;I saw a tweet the other day from &lt;a href=&quot;https://twitter.com/dr&quot;&gt;Dan Rowden&lt;/a&gt; that hit on a class of software I really love.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Simple, small, minimalist tools are in ascension. And I like it.&lt;/p&gt;
&lt;p&gt;Check also:&lt;a href=&quot;https://t.co/objPWGsriu&quot;&gt;https://t.co/objPWGsriu&lt;/a&gt;&lt;a href=&quot;https://t.co/U2i8tW9Qq3&quot;&gt;https://t.co/U2i8tW9Qq3&lt;/a&gt;&lt;a href=&quot;https://t.co/GYt5FPPEbb&quot;&gt;https://t.co/GYt5FPPEbb&lt;/a&gt;&lt;a href=&quot;https://t.co/snSoIs6UPY&quot;&gt;https://t.co/snSoIs6UPY&lt;/a&gt; &lt;a href=&quot;https://t.co/CLXFQGqMOW&quot;&gt;https://t.co/CLXFQGqMOW&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;— Dan Rowden ⚡️ (@dr) &lt;a href=&quot;https://twitter.com/dr/status/1625521394678128640&quot;&gt;February 14, 2023&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In creating my &amp;quot;digital garden&amp;quot;, I wanted to publish a list of these sorts of sites.&lt;/p&gt;
&lt;p&gt;If you have further suggestions, please &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;tweet them at me&lt;/a&gt;!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;https://tasks.wtf/&lt;/li&gt;
&lt;li&gt;https://imgs.so/&lt;/li&gt;
&lt;li&gt;https://glass.photo&lt;/li&gt;
&lt;li&gt;https://read.cv/explore&lt;/li&gt;
&lt;li&gt;https://bmrks.com/&lt;/li&gt;
&lt;li&gt;https://lfe.org/&lt;/li&gt;
&lt;li&gt;https://savee.it/&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>You can&#39;t fix engineering culture with communication</title>
    <link href="/communication/"/>
    <updated>2023-03-08T00:00:00Z</updated>
    <id>/communication/</id>
    <content type="html">&lt;p&gt;It always bothers me that people say that &amp;quot;communication&amp;quot; is the problem in engineering organisations.&lt;/p&gt;
&lt;p&gt;We have near-constant access to each other via Email, Slack, JIRA and even GitHub PR comments.&lt;/p&gt;
&lt;p&gt;Saying &amp;quot;communication&amp;quot; blindly is like telling a would-be Olympic runner to &amp;quot;run&amp;quot;. That might be part of the overall strategy, but it&#39;s an oversimplification.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./../../assets/images/communication-meme.png&quot; alt=&quot;communication-meme&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For example, if the problem is that a new feature broke another area of the system, then that&#39;s not a communication failure. Why?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The idea was conceived and translated into a ticket (communication)&lt;/li&gt;
&lt;li&gt;That ticket was then iterated upon, and a specification was formed (communication)&lt;/li&gt;
&lt;li&gt;That ticket was then delegated to a developer (communication)&lt;/li&gt;
&lt;li&gt;That ticket was spoken about at stand-up (communication)&lt;/li&gt;
&lt;li&gt;That ticket was then submitted for code review (communication)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More communication is not the answer here.&lt;/p&gt;
&lt;p&gt;So what is?&lt;/p&gt;
&lt;p&gt;The answer, of course, varies on a case-by-case basis.&lt;/p&gt;
&lt;p&gt;But, I have found people blaming communication as a scapegoat for the real problems in the team.&lt;/p&gt;
&lt;p&gt;Some teams attempt to get around this with a &amp;quot;5 whys&amp;quot; system. But, in practice, the same groups that blame communication are also blind to the problems they are trying to solve. Many see a problem and jump to a solution they read about a FAANG company using.&lt;/p&gt;
&lt;p&gt;First, list the problems and their impact on the team (from low to high). Then look at the high-impact issues and understand why they are problematic. For example, if the issue is that shipping features is slow, it&#39;s high-impact because the business needs to ship certain features by deadlines.&lt;/p&gt;
&lt;p&gt;Next, being requirements-led, list your needs for the &amp;quot;ideal&amp;quot; system. Following the above example, the conditions might be that you can ship once per day, require no manual work to check new features, and deploy new code in 15 minutes or less.&lt;/p&gt;
&lt;p&gt;From there, you can design solutions to solve those problems.&lt;/p&gt;
&lt;p&gt;This way, you solve your problems while staying mindful of the issues. It&#39;s easy to read about the solutions that FAANGs use. But the reality for many organisations is that those solutions would not fit and could even do more harm.&lt;/p&gt;
&lt;p&gt;Finally, you can measure the impact of these changes and iterate on the approach you have created.&lt;/p&gt;
&lt;p&gt;Communication might be the problem. But I&#39;d suggest looking at other failings in your engineering process, as it might surface some more positive changes you can make.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Cobra Effect and Software</title>
    <link href="/cobra-effect/"/>
    <updated>2023-02-28T00:00:00Z</updated>
    <id>/cobra-effect/</id>
    <content type="html">&lt;p&gt;We like to focus on inputs, not outputs. Inputs are simple. Sugar for baking, metal for factories and petrol for cars. Software is no different.&lt;/p&gt;
&lt;p&gt;But by focusing on inputs, we often encourage the dreaded &lt;a href=&quot;https://en.wikipedia.org/wiki/Perverse_incentive&quot;&gt;cobra effect&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Basically, we incentivize bad behaviour.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For example, let&#39;s say we have metrics for a team to write 30 tests a sprint and maintain 90% code coverage.&lt;/p&gt;
&lt;p&gt;On the surface, it sounds good! 90% coverage will mean our code is tested thoroughly and 30 tests a sprint means we definitely will be making the system more robust.&lt;/p&gt;
&lt;p&gt;But, consider the other side of the coin.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The team is now disincentivized from writing new code, lest it reduce the code coverage.&lt;/li&gt;
&lt;li&gt;Tests are written for the sake of it, not because they are needed, so the CI/CD pipeline takes longer to run.&lt;/li&gt;
&lt;li&gt;Any new feature that is added has to be tested to absolute oblivion, thereby increasing the time to ship new features.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What&#39;s the solution?&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Focus on what you want the desired outcome to be and then design metrics that would change based on that outcome.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the above example, rather than enforcing code coverage, we could measure the number of defects raised, the number of releases, and the number of failing test cases.&lt;/p&gt;
&lt;p&gt;Metrics are important. But, consider the kinds of behaviour that those metrics will encourage. Otherwise, you&#39;ll need to watch out for the cobras.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Five Things I Wish I Learnt Sooner</title>
    <link href="/things-learned-sooner/"/>
    <updated>2023-02-13T00:00:00Z</updated>
    <id>/things-learned-sooner/</id>
    <content type="html">&lt;p&gt;I have been a software engineer for the best part of a decade (yikes, I&#39;m old now?). And in that time, I have learnt a great deal about software development - languages, frameworks, architecture and much more.&lt;/p&gt;
&lt;p&gt;But, since becoming a freelancer, I have learned how to make my work more successful and less chaotic.&lt;/p&gt;
&lt;p&gt;I&#39;m sharing those lessons here because I wish I had learned them sooner.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Attach metrics (KPIs) to work.&lt;/strong&gt; If your work doesn&#39;t have metrics that business people can see go up or down, you will be in trouble. Metrics (or KPIs) are the genesis of successful work. Developers often think that things such as technical debt cannot have business metrics attached. That was how I reasoned for many years. But it turns out they can! Technical debt, for example, can be set against the metric of user retention or revenue. How? By reducing technical debt, we speed up the application, which increases customer retention.&lt;/p&gt;
&lt;p&gt;Additionally, by attaching metrics to your work, you can gain insight into how that work is progressing. Teams are motivated by seeing progress being made, so make sure to be transparent and share these metrics with them.&lt;/p&gt;
&lt;p&gt;Always attach metrics to your work. If there are no metrics, it&#39;s a sign that the project will likely not get done.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Be requirements led.&lt;/strong&gt; This means starting with the problem first and the solution second. Developers often jump to creating a solution before fully exploring the problem. It&#39;s a natural tendency - after all, we are builders. But by approaching things from a solutions-first mindset, we fail to appreciate all the requirements customers (both internal and external) have.&lt;/p&gt;
&lt;p&gt;For example, many people start their new year&#39;s resolutions with &amp;quot;I need to start going to the gym&amp;quot;. This is a solutions-first mindset; the requirements are more complex than that. After all, why do you need to go to the gym? To look a certain way, feel a certain way, or stop eating certain foods? It raises many questions. But by asking and exploring the answers to them, we get down to the root of what you actually want. Armed with a list of requirements, you can now build a solution to satisfy them and maximise the value. Statistically speaking, when it comes to the resolution &amp;quot;go to the gym&amp;quot;, by the 3rd Thursday of January, 80% of hopefuls will have stopped. In the same way, being solutions-first will often lead to a project failing and getting ditched.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Adhere to engineering principles.&lt;/strong&gt; Principles are overarching guidelines that can motivate us to action. I don&#39;t mean &amp;quot;SOLID&amp;quot; or &amp;quot;DRY&amp;quot; when I speak about principles here. I&#39;m talking about a way to approach problems and solutions with engineering rigour - collecting metrics, testing, challenging assumptions and being proactive. Principled thinking.&lt;/p&gt;
&lt;p&gt;For example, if someone asked you to migrate a REST API from a VM to AWS, you might start by asking what the motivations behind the move are. When they say it&#39;s because it&#39;s slow, you challenge that assumption by first running a load test.
Then, review the APM metrics to see the average throughput from API consumers.
If it turns out the API was slow, then great, the assumption has been verified. But without hard metrics to back things up, it was a stab in the dark.
Engineering rigour helps developers to be more successful at the work they do. This particular lesson has made my work smoother.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secure, simple, fast, in that order.&lt;/strong&gt; When developing a product, we should adhere to this order closely. If you&#39;re like me, you likely want to make something as performant as possible - pre-optimising. But this is a waste of time at this stage. This order of making a product first secure, then simple, then fast allows us to prioritise the correct thing and work down the chain. It also means we can make something secure even if it&#39;s complicated. But we shouldn&#39;t make something fast but complex.&lt;/p&gt;
&lt;p&gt;Why this order, though? Security always takes top priority. In my experience, a security breach is far more damaging to customer trust than a slow application. Simplicity comes second. We develop software with a team of people, so we want to keep things simple (even if it&#39;s not as fast compute-wise) so they can understand and build upon our code. Finally comes speed. We should make every effort to make our application fast. Slow apps are the bane of our existence. And often, people don&#39;t wait around. Google now targets 300ms for site loading and has seen colossal bounce-rate increases when sites exceed that number. Speed drives revenue, so it should always matter to us.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Take the lead in making notes.&lt;/strong&gt; How many meetings have you been to where nothing gets accomplished? And afterwards, you&#39;re asked what the meeting was about and struggle to remember. For me, it&#39;s a lot. To remedy this, I&#39;ve simply shared my screen with an Apple Note and taken notes. You can still make notes even if you&#39;re not &amp;quot;leading&amp;quot; the meeting. It&#39;s important to document what was discussed, what the actions are, who is responsible for them, and what the deadline is. Sharing your screen with everyone keeps people focused on the problem at hand and stops them getting sidetracked. It also means that everyone clearly knows why they are a part of that meeting and their responsibilities.
Doing this has brought my &amp;quot;wasted&amp;quot; meeting percentage (which should be an SI unit at this point) from around 80% down to 60%. A solid reduction, even though I have only started doing this recently.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;All of the above deserve articles, so I may write about them more one day. But for now, keep learning!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Favourite Subreddits</title>
    <link href="/favourite-subreddits/"/>
    <updated>2023-01-20T00:00:00Z</updated>
    <id>/favourite-subreddits/</id>
    <content type="html">&lt;p&gt;Most of Reddit is a trash fire, but there are some brilliant pockets of communities. Here is my list of subreddits I occasionally browse.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/writingprompts&quot;&gt;/r/writingprompts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/culinaryplating&quot;&gt;/r/culinaryplating&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/itookapicture&quot;&gt;/r/itookapicture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/sysadmin&quot;&gt;/r/sysadmin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/homelab&quot;&gt;/r/homelab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://reddit.com/r/f30&quot;&gt;/r/f30&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Albums I listened to in 2022</title>
    <link href="/2022-music/"/>
    <updated>2023-01-19T00:00:00Z</updated>
    <id>/2022-music/</id>
    <content type="html">&lt;p&gt;In the past couple of years, I started tracking albums I listened to and what I rated them. The motivation was to be able to re-discover music in coming years and to satisfy my tendency to record data about little things.&lt;/p&gt;
&lt;p&gt;Here is my list of albums I listened to in 2022. Note that a lot of the albums didn&#39;t come out in 2022, just that I listened to them in 2022.&lt;/p&gt;
&lt;p&gt;Feel free to drop me recommendations.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Artist&lt;/th&gt;
&lt;th&gt;Rating&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gemini Rights&lt;/td&gt;
&lt;td&gt;Steve Lacy&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mellow moon&lt;/td&gt;
&lt;td&gt;Alfie Templeman&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Harry’s House&lt;/td&gt;
&lt;td&gt;Harry Styles&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;This is really going to hurt&lt;/td&gt;
&lt;td&gt;Flyte&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Topical Dancer&lt;/td&gt;
&lt;td&gt;Charlotte Adigery&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Today we’re the greatest&lt;/td&gt;
&lt;td&gt;Middle Kids&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sometimes I might be an introvert&lt;/td&gt;
&lt;td&gt;Little Simz&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Let me do one more&lt;/td&gt;
&lt;td&gt;illuminati hotties&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SOS&lt;/td&gt;
&lt;td&gt;Sza&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blue Revs&lt;/td&gt;
&lt;td&gt;Alvvays&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Beatopia&lt;/td&gt;
&lt;td&gt;Beabadoobee&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pocketknife&lt;/td&gt;
&lt;td&gt;Mr Little Jeans&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hold the girl&lt;/td&gt;
&lt;td&gt;Rina Sawayama&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Midnights&lt;/td&gt;
&lt;td&gt;Taylor Swift&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dropout boogie&lt;/td&gt;
&lt;td&gt;The black keys&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Love is yours&lt;/td&gt;
&lt;td&gt;Flasher&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Space Island&lt;/td&gt;
&lt;td&gt;BROODs&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fake it Flowers&lt;/td&gt;
&lt;td&gt;beabadoobee&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pieces&lt;/td&gt;
&lt;td&gt;IU&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flowerland&lt;/td&gt;
&lt;td&gt;Pearl &amp;amp; The Oysters&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1/6&lt;/td&gt;
&lt;td&gt;SUNMI&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dawn FM&lt;/td&gt;
&lt;td&gt;Weeknd&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Negro Swan&lt;/td&gt;
&lt;td&gt;Blood orange&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Yaeji&lt;/td&gt;
&lt;td&gt;Yaeji&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shake Shook Shaken&lt;/td&gt;
&lt;td&gt;The Do&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A Mouthful&lt;/td&gt;
&lt;td&gt;The Do&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The Myth of the Happily Ever after&lt;/td&gt;
&lt;td&gt;Biffy Clyro&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MUNA&lt;/td&gt;
&lt;td&gt;MUNA&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The Cat Empire&lt;/td&gt;
&lt;td&gt;The Cat Empire&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Laural Hell&lt;/td&gt;
&lt;td&gt;Mitski&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delta kream&lt;/td&gt;
&lt;td&gt;The black keys&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WE&lt;/td&gt;
&lt;td&gt;Arcade Fire&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pompeii&lt;/td&gt;
&lt;td&gt;Cate Le Bon&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blue Banisters&lt;/td&gt;
&lt;td&gt;Lana Del Rey&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The Car&lt;/td&gt;
&lt;td&gt;Arctic Monkeys&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Loner&lt;/td&gt;
&lt;td&gt;Alison Wonderland&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Palaces&lt;/td&gt;
&lt;td&gt;Flume&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reward&lt;/td&gt;
&lt;td&gt;Cate Le Bon&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The nearer the fountain, more pure the stream flows&lt;/td&gt;
&lt;td&gt;Damon Albarn&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inside Voice / Outside voices&lt;/td&gt;
&lt;td&gt;KFlay&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mr Morale &amp;amp; The Big Steppers&lt;/td&gt;
&lt;td&gt;Kendrick Lamar&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
</content>
  </entry>
  
  <entry>
    <title>Onebag</title>
    <link href="/onebag/"/>
    <updated>2023-01-18T00:00:00Z</updated>
    <id>/onebag/</id>
    <content type="html">&lt;p&gt;In 2020, my wife and I decluttered 10 carloads of stuff from our home. Since then, we&#39;ve vowed not to devote our lives to things. We want to focus on travelling more, spending less and generally have more freedom.&lt;/p&gt;
&lt;p&gt;This ethos of &amp;quot;less&amp;quot; carried over into our travel setup as well.
And we&#39;ve now done a few trips with this setup, learning a lot on the way.&lt;/p&gt;
&lt;p&gt;It&#39;s hard to describe a &amp;quot;eureka&amp;quot; moment of &amp;quot;ultralight&amp;quot; travel because it happens in such small ways, from being able to breeze through checkout to not getting caught on cobbled European streets with a giant carry-on case or worrying that you&#39;re over the weight limit for a flight.&lt;/p&gt;
&lt;p&gt;This post is not a guide or filled with affiliate links; it&#39;s merely a collection of the stuff I use and like.
Because there are only a few things here, extensive research was conducted before buying them. Also, everything was purchased gradually over time due to the cost of some items.&lt;/p&gt;
&lt;p&gt;But you don&#39;t &lt;em&gt;need&lt;/em&gt; fancy gear to pack ultralight. You probably already have most of the items in your wardrobe.&lt;/p&gt;
&lt;p&gt;I decided to include and exclude certain items from my bag based on my requirements; yours might be different.&lt;/p&gt;
&lt;p&gt;Enjoy :)&lt;/p&gt;
&lt;h3&gt;Packing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://amzn.eu/d/2OWk5Ux&quot;&gt;&lt;strong&gt;&lt;u&gt;North Face Surge 2.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; My go-to pack for daily tasks and a workhorse when travelling. It has 3 main compartments for separating items and 3 mini pockets for little bits like pens, passports and keys. It has chest and hip straps to make it easier to hike with, plus handy pockets and cables to attach things to. It&#39;s easily the best backpack I&#39;ve tried and shows no signs of falling apart.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Tech&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;MacBook Pro 14” 2021.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Bought after I spilled an entire litre of water on my old laptop (yes really - and no, rice didn&#39;t help). It allows me to do my work and that&#39;s it. I could occasionally use more RAM for running VMs and such, but it&#39;s great 99% of the time. I generally limit software and OS upgrades because I find stuff just breaks after upgrading.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;iPhone 14 Pro Max.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Upgraded a few months ago and it&#39;s made a massive difference to how quickly I can accomplish tasks. When travelling, it&#39;s my constant companion for everything from maps and music to ride sharing and travel guides.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Apps installed&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reading:&lt;/strong&gt; Kindle, Feedly&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Travel:&lt;/strong&gt; Uber, Alltrails, Flighty&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Journaling:&lt;/strong&gt; Day One, Notes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Utilities:&lt;/strong&gt; 1Password, Notion&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Entertainment:&lt;/strong&gt; Spotify, Pocket casts&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Airpods Pro.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Used for music and calls. Due to the noise cancelling, I use them to get silence when on a flight.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;Anker powerbrick.&lt;/strong&gt;&lt;/mark&gt; Mine is so old and battered you can&#39;t buy it anymore. These things are bombproof and great for providing power on the go. Because it weighs about 500g, I generally only bring it when I really think I&#39;ll need it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;Universal plug.&lt;/strong&gt;&lt;/mark&gt; The Amazon page has disappeared, but search for any generic worldwide travel adapter and you&#39;ll get something decent. I chose something light that can deliver fast charging. This plug means I don&#39;t need to carry USB plugs as it has the ports built in.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Apple Watch.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Used passively for tracking walks, sleep and, occasionally, telling the time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;2x USB-C to Lightning Cable&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;USB-C cable&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kindle&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nikon D5600&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Clothing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://eu.patagonia.com/gb/en/torrentshell/&quot;&gt;&lt;strong&gt;&lt;u&gt;Patagonia Torrentshell 3L.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; I&#39;ve only had this for a little while. It was a replacement for my previous rainshell, which had lost its waterproofing after 10 years. This jacket seems pretty robust and is minimal in design.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://rab.equipment/uk/microlight-jacket-aw20&quot;&gt;&lt;strong&gt;&lt;u&gt;Rab Microlight Alpine.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; Another newcomer to my gear list! This jacket is a proven warm midlayer that will keep me toasty.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://amzn.eu/d/iUHmtP9&quot;&gt;&lt;strong&gt;&lt;u&gt;Adidas Ultraboosts, black.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; The only shoes I ever bring abroad. I walked 200km in Italy over 2 weeks and my feet were as fresh as the first day. They are incredibly light and dry quickly. Because they are black, they double as shoes for getting into a club or bar.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Merino wool socks.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; I must admit I was a little skeptical of the hype around merino. But after giving them a try I&#39;m a total convert. Because they keep you dry, they also keep you warm. It&#39;s an odd mental model, but moisture (read: sweat) held against your skin can either chill your body or warm up and make you uncomfortable. Merino solves this problem by being highly wicking. I got a pair as a gift and now plan to replace all my current socks with them.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Jeans.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; A pair of Levi&#39;s. Usually I wear these on flights. Jeans go with everything so are a good item to have.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;5x Uniqlo T-shirts.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; 2 white, 3 black. I don&#39;t always bring 5, but 5 is my maximum. If I&#39;m away for more than that, then I just wash the clothes. Black and white mean that dressing to match is super easy.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Shorts.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; A simple navy pair. Sometimes I add another pair here, depending on the type of trip we&#39;re taking.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Underwear.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; 5 pairs. All black. From Amazon so they are easy to replace.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Swim trunks.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; For swimming...&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;North Face fleece.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; As this is largely worn to and from the airport in rainy, cold England, the goal was for it to be light and packable. This fleece fits both criteria.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Patagonia beanie.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Not the warmest hat in the world, so I may swap it out, but it keeps me warm enough.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Other&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Card holder.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 credit cards&lt;/li&gt;
&lt;li&gt;1 debit card&lt;/li&gt;
&lt;li&gt;Driving license&lt;/li&gt;
&lt;li&gt;Some cash&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Tote bag.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Used as a beach bag and for groceries. Because it&#39;s so compressible and weighs nearly nothing, it&#39;s a must have for me now.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;SIM card tool.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Kept in a little pocket in my bag. Useful for international travel when I need to swap SIMs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Washing powder (optional).&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; Although I&#39;ve heard some horror stories of it being confused for drugs, we&#39;ve not been pulled aside for this in an airport so far! Washing powder is usually sold in large boxes, so we carry a small ziplock bag of it. We take powder because we found tablets exploded constantly, so the extra weight is worth it. It&#39;s marked as optional as we don&#39;t always take trips that require washing clothes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Toiletries.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Toothbrush&lt;/li&gt;
&lt;li&gt;Toothpaste&lt;/li&gt;
&lt;li&gt;Plaster&lt;/li&gt;
&lt;li&gt;Paracetamol&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Sharpie or pen.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; The pen is mightier than the sword, as they say! You never know when you&#39;ll need one, and when you do, it seems impossible to find a pen unless you have one yourself. For the microscopic weight of a pen, it&#39;s worth packing even if I don&#39;t use it &lt;em&gt;all&lt;/em&gt; the time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;mark&gt;&lt;strong&gt;&lt;u&gt;Sunglasses.&lt;/u&gt;&lt;/strong&gt;&lt;/mark&gt; I always break or lose sunglasses, so I just got the cheapest UV400 pair I could find. UV400 means the lenses filter UV light up to 400nm, protecting your eyes. If you&#39;re squinting on a beach whilst wearing sunglasses, you probably don&#39;t have UV400.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Hiking&lt;/h3&gt;
&lt;p&gt;This is the stuff I add for hiking trips. So far, I haven&#39;t taken my hiking stuff through an airport so just watch out for that! Currently, I&#39;m in the middle of picking a tent and quilt - I will update this list when I get them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://amzn.eu/d/8Y7sVm2&quot;&gt;&lt;strong&gt;&lt;u&gt;Soto Amicus.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; A powerful and lightweight stove. Resilient against wind and has saved me a few times. Able to boil a litre of water in around 5 minutes (depending on altitude).&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://amzn.eu/d/1PBDINy&quot;&gt;&lt;strong&gt;&lt;u&gt;BeFree 1L.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; Probably my best hiking purchase. This bladder bottle is collapsible and can filter any water you find on a trail. Although I generally look for clean water to gather, this removes any nasty stuff.&lt;/li&gt;
&lt;li&gt;&lt;mark&gt;&lt;a href=&quot;https://amzn.eu/d/fhWDIc0&quot;&gt;&lt;strong&gt;&lt;u&gt;Stanley Pots.&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/mark&gt; Normally used to boil water for instant meals. Although a little bulky, it covers stuff for 2 people.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;FAQ&lt;/h2&gt;
&lt;h3&gt;But what about X?&lt;/h3&gt;
&lt;p&gt;In all likelihood, I don’t take it. But if you feel you need it, pack it. After you have travelled, see if you used that thing or not and go from there. Continue to cut the fat.&lt;/p&gt;
&lt;p&gt;For activity-related gear like snorkels and goggles, you can usually rent or borrow it. Just make sure to clean it first - I got a nasty cold in Turkey from a snorkel 🤮&lt;/p&gt;
&lt;h3&gt;No toiletries?&lt;/h3&gt;
&lt;p&gt;Again, to save time, I try to carry no liquid toiletries. Most Airbnbs have shower stuff, and I bring a small toothpaste wherever I go.&lt;/p&gt;
&lt;h3&gt;How much does this all weigh?&lt;/h3&gt;
&lt;p&gt;Generally speaking, the pack weighs 8kg or less; 6kg without the laptop.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Thanks for reading!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Ten Software Architecture Rules of Thumb</title>
    <link href="/rules-of-thumb/"/>
    <updated>2023-01-04T00:00:00Z</updated>
    <id>/rules-of-thumb/</id>
    <content type="html">&lt;p&gt;I love a good &lt;a href=&quot;https://en.wikipedia.org/wiki/Rule_of_thumb&quot;&gt;rule of thumb&lt;/a&gt;. They are instantly understandable and based on the practice rather than the theory of a particular topic.&lt;/p&gt;
&lt;p&gt;Through my experience as a software architect, I have often created these rules of thumb to apply patterns I&#39;ve learned to other systems.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;There is always a bottleneck.&lt;/strong&gt; Even in a serverless system or one you think will &amp;quot;infinitely&amp;quot; scale, pressure will always be created elsewhere. For example, if your API scales, does your database also scale? If your database scales, does your email system? In modern cloud systems, there are so many components that scalability is not always the goal. Throttling systems are sometimes the best choice.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Your data model is linked to the scalability of your application.&lt;/strong&gt; If your table design is garbage, your queries will be cumbersome, so accessing data will be slow. When designing a database (NoSQL or SQL), carefully consider your access pattern and what data you will have to filter. For example, with DynamoDB, you need to consider which key you will use to retrieve data. If that field is not set as the partition or sort key, it will force you to use a scan rather than a faster query.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability is mainly linked with cost. When you get to a large scale, consider systems where this relationship does not track linearly.&lt;/strong&gt; If, like many, you have systems on RDS and ECS, these will scale nicely. But the downside is that as you scale, you will pay directly for that increased capacity. It&#39;s common for these workloads to cost $50,000 per month at scale. The solution is to migrate these workloads to serverless systems proactively.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Favour systems that require little tuning to make fast.&lt;/strong&gt; The days of configuring your own servers are over. AWS, GCP and Azure all provide fantastic systems that don&#39;t need expert knowledge to achieve outstanding performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use infrastructure as code.&lt;/strong&gt; Terraform makes it easy to build repeatable and version-controlled infrastructure. It creates an ethos of collaboration and reduces errors by defining them in code rather than &amp;quot;missing&amp;quot; a critical checkbox.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use a PaaS if you&#39;re at less than 100k &lt;a href=&quot;https://en.wikipedia.org/wiki/Active_users&quot;&gt;MAUs&lt;/a&gt;.&lt;/strong&gt; With &lt;a href=&quot;https://www.heroku.com/&quot;&gt;Heroku&lt;/a&gt;, &lt;a href=&quot;https://fly.io&quot;&gt;Fly&lt;/a&gt; and &lt;a href=&quot;https://render.com&quot;&gt;Render&lt;/a&gt;, there is no need to spend hours configuring AWS and messing around with your application build process. Platform-as-a-service should be leveraged to deploy quickly and focus on the product.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outsource systems outside of the market you are in. Don&#39;t roll your own CMS or Auth, even if it costs you tonnes.&lt;/strong&gt; If you go to the pricing page of many third-party systems, for enterprise-scale, the cost is insane - think $10,000 a month for an authentication system! &amp;quot;I could make that in a week,&amp;quot; you think. That may be true, but it doesn&#39;t consider the long-term maintenance and the time you cannot spend on your core product. Where possible, buy off the shelf.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;You have three levers: quality, cost and time. You have to balance them accordingly.&lt;/strong&gt; You have, at best, 100 &amp;quot;points&amp;quot; to distribute between the three. Of course, you always want to maintain quality, so the other levers to pull are time and cost.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Design your APIs as open-source contracts.&lt;/strong&gt; Leveraging tools such as OpenAPI/&lt;a href=&quot;https://swagger.io/&quot;&gt;Swagger&lt;/a&gt; (not a sponsor, just a fan!) allows you to create &amp;quot;contracts&amp;quot; between your front-end and back-end teams. This reduces bugs by having the shape of the request and responses agreed upon ahead of time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Start with a simple system first (&lt;a href=&quot;http://principles-wiki.net/principles:gall_s_law&quot;&gt;Gall&#39;s law&lt;/a&gt;).&lt;/strong&gt; Gall&#39;s law states, &amp;quot;all complex systems that work evolved from simpler systems that worked. If you want to build a complex system that works, build a simpler system first, and then improve it over time.&amp;quot; You should avoid going after shiny technology when creating a new software architecture. Focus on simple, proven systems: S3 for your static website, ECS for your API, RDS for your database, etc. After that, you can chop and change your workload to add these fancy technologies into the mix.&lt;/li&gt;
&lt;/ol&gt;
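&lt;p&gt;To make the &amp;quot;contract&amp;quot; idea in rule 9 concrete, here is a minimal OpenAPI sketch. It&#39;s purely illustrative - the path and field names are made up - but it shows the kind of shape front-end and back-end teams can agree on before writing any code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;openapi: 3.0.3
info:
  title: Orders API (illustrative)
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        &#39;200&#39;:
          description: The order
          content:
            application/json:
              schema:
                type: object
                properties:
                  orderId:
                    type: string
                  status:
                    type: string
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a file like this agreed up front, both teams can generate clients, servers and mocks from the same source of truth.&lt;/p&gt;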
&lt;p&gt;Hopefully these rules of thumb can help you when designing new systems. Remember though, they are just rules of thumb, not rules!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Book notes - The Seven Deaths of Evelyn Hardcastle</title>
    <link href="/seven-deaths-evelyn-hardcastle/"/>
    <updated>2023-01-03T00:00:00Z</updated>
    <id>/seven-deaths-evelyn-hardcastle/</id>
    <content type="html">&lt;h2&gt;🧠 Thoughts&lt;/h2&gt;
&lt;p&gt;The Seven Deaths of Evelyn Hardcastle first grabbed my attention when my friend quoted the opening paragraph.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I forget everything between footsteps.&lt;/p&gt;
&lt;p&gt;&amp;quot;Anna!&amp;quot; I finish shouting, snapping my mouth shut in surprise.&lt;/p&gt;
&lt;p&gt;My mind has gone blank. I don&#39;t know who Anna is or why I&#39;m calling her name. I don&#39;t even know how I got here. I&#39;m standing in a forest, shielding my eyes from the spitting rain. My heart&#39;s thumping, I reek of sweat, and my legs are shaking. I must have been running, but I can&#39;t remember why.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It instantly made me want to read it. Although amnesia can be a gimmick, in this opening paragraph, we immediately get a sense of the problem that will be tackled in the book.&lt;/p&gt;
&lt;p&gt;The premise is that Aiden has seven days to solve a murder. But each day, he wakes in a different person&#39;s body. Oh, and he lives the same day over and over. It&#39;s like a combination of The Bourne Identity, Groundhog Day, and an episode of CSI all rolled into one.&lt;/p&gt;
&lt;p&gt;After my friend generously loaned it to me, I dove right in. But as I trudged to page 30, I found it quite a slog. My main gripe was that the story needed to explain itself earlier; Aiden&#39;s motivations, for example, are only explained much later. And the pacing of how the complexity evolves is like a hockey stick graph - not much in the beginning, but then it ramps up hugely.&lt;/p&gt;
&lt;p&gt;The entire first half was a trial. Despite this, it&#39;s clear that the author had a wall of post-its and a ball of yarn to organise the plot. It&#39;s cleverly woven together to create a rich tapestry of a story. Full of twists and turns right to the end. The second half was much more fast-paced and went from a book I didn&#39;t want to pick up to a book I was glued to.&lt;/p&gt;
&lt;p&gt;The writing throughout is very well done. The challenge of inhabiting different people is not one to be sniffed at. There is an entirely different personality and way of looking at the world. And the writing perfectly conveys that feeling of being an alien in someone else&#39;s body. Almost as if the book takes place just behind the eyes of the body you are in. Standing at 525 pages, the author does a great job of balancing moving the story forward whilst capturing small details.&lt;/p&gt;
&lt;p&gt;In my eyes, it was an excellent fiction novel. I could get lost in an engaging story and have a changed worldview after reading it.&lt;/p&gt;
&lt;h2&gt;🪄 Actionable Takeaways&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Our lived experience shapes our personality.&lt;/li&gt;
&lt;li&gt;We have the autonomy to change the future despite our past.&lt;/li&gt;
&lt;li&gt;&amp;quot;Another set of eyes&amp;quot; can provide new perspectives. It is important to listen to those different voices and seek them out.&lt;/li&gt;
&lt;li&gt;A good mystery is a lot like an onion with many layers and often conflicting motives&lt;/li&gt;
&lt;li&gt;Many people mask in public.&lt;/li&gt;
&lt;li&gt;Secrets often lead to more secrets or even lies.&lt;/li&gt;
&lt;li&gt;Judging a situation at face value can provide an inaccurate picture of events.&lt;/li&gt;
&lt;li&gt;Multiple competing personalities is a great plot device.&lt;/li&gt;
&lt;li&gt;Consider the people you are with now, and don&#39;t judge them based on their past.&lt;/li&gt;
&lt;li&gt;Strive to be your true self in public and private.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;💬 Favourite Quotes&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;If this isn&#39;t hell, the devil is surely taking notes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;We are never more ourselves than when we think people aren&#39;t watching&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Nothing like a mask to reveal one&#39;s true nature&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Every man is in a cage of his own making&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Watchful like a deer in the woods that&#39;s just heard a branch snap&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Their memories crowd the edges of my mind, the weight of them almost too much to bear.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;So many memories and secrets, so many burdens. Every life has such weight. I don&#39;t know how anybody carries even one.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
  
  <entry>
    <title>Five books</title>
    <link href="/five-books/"/>
    <updated>2022-12-14T00:00:00Z</updated>
    <id>/five-books/</id>
    <content type="html">&lt;p&gt;I love books.&lt;/p&gt;
&lt;p&gt;Books are portals to worlds that others have created. Books are expert knowledge distilled into the everyday. Books written thousands of years ago can still make you cry. They are true magic.&lt;/p&gt;
&lt;p&gt;It was natural therefore that, like with Pokemon, I wanted to &amp;quot;catch &#39;em all&amp;quot;.&lt;/p&gt;
&lt;p&gt;So I purchased lots of used books, picking them up from charity shops, local sellers and more. But eventually I got so many that the small space I had on my window sill for storage didn&#39;t cut it. They overflowed onto my desk, into the living room, onto my bedside table, even into the bathroom. I&#39;d clearly made an error.&lt;/p&gt;
&lt;p&gt;Some may have purchased a bookshelf at this point. And whilst I contemplated this for likely far more time than anyone else I know, I eventually decided against it.&lt;/p&gt;
&lt;p&gt;The driving force was that I didn&#39;t want to continue purchasing items that encouraged more consumption.&lt;/p&gt;
&lt;p&gt;Ultimately I can only physically read one book at a time. And at most I can carry around 5-8 books&#39; worth of information in my working memory.&lt;/p&gt;
&lt;p&gt;Also, wanting to reduce my personal footprint has motivated me to want to pare down my possessions as a whole. Books are a small part of that effort.&lt;/p&gt;
&lt;p&gt;So I&#39;ve decided I will only allow myself 5 books at a time. They can be swapped or changed, purchased and sold. But there can only be 5.&lt;/p&gt;
&lt;p&gt;Why 5? As I tend to read multiple books at once, each will have a purpose.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A relaxing fictional read.&lt;/li&gt;
&lt;li&gt;A non-fiction information book.&lt;/li&gt;
&lt;li&gt;Poetry or something arty.&lt;/li&gt;
&lt;li&gt;A biography.&lt;/li&gt;
&lt;li&gt;A big book. 700 pages+ that I can&#39;t finish quickly.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I can&#39;t be bothered to count my current collection, but it stands at around 50-60. Not that many, but enough that I have had to shove them in a random container.&lt;/p&gt;
&lt;p&gt;I&#39;m going to read through these and then trade, sell or donate them. Then I&#39;ll be down to 5!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Book notes - Project Hail Mary</title>
    <link href="/project-hail-mary/"/>
    <updated>2022-12-07T00:00:00Z</updated>
    <id>/project-hail-mary/</id>
    <content type="html">&lt;p&gt;Project Hail Mary is the 3rd major novel to be released by Andy Weir, most well known for The Martian.&lt;/p&gt;
&lt;p&gt;It chronicles an astronaut afflicted by amnesia attempting to save the earth. You know, the normal everyday activities.&lt;/p&gt;
&lt;p&gt;Project Hail Mary has been my favourite book of 2022 and I&#39;ve been recommending it to all my friends. To me, it perfectly weaves semi-realistic science, human struggle and entertainment into a rich narrative with characters you can put yourself into, simultaneously balancing conflict with calm scientific reasoning. There is tension, but not in the &amp;quot;the bomb is about to go off&amp;quot; sense - more in the sense of &amp;quot;we&#39;ve discovered a problem and will now use a sound scientific approach to resolve it&amp;quot;.&lt;/p&gt;
&lt;p&gt;It&#39;s almost like reading a doctor&#39;s report on a patient: breaking down their behaviours and questioning what it means to be human. Why do we communicate with words? Why do we sleep the way we do? Weir has an ability to take a microscopic focus on a small, unassuming basic truth and rip it to shreds.&lt;/p&gt;
&lt;p&gt;I read this at a time where I needed to escape to another world - to step outside of the everyday and enter into a situation where there was everything to play for.&lt;/p&gt;
&lt;p&gt;In an effort to remember more from what I read, here is my brief review of Project Hail Mary and my key takeaways.&lt;/p&gt;
&lt;h2&gt;Actionable takeaways&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Humans have the inbuilt ability to thrive in subpar environments.&lt;/li&gt;
&lt;li&gt;Food is a seldom-considered limiting factor in the growth of civilisation.&lt;/li&gt;
&lt;li&gt;Engineering principles trump knowledge in many areas.&lt;/li&gt;
&lt;li&gt;By contrast, people who are good engineers and knowledgeable are the true geniuses.&lt;/li&gt;
&lt;li&gt;Scientific literacy is so vitally important.&lt;/li&gt;
&lt;li&gt;The characters battling various real-world physical limits (like the distance between stars and the speed of light) is hugely engaging and motivating for the storytelling.&lt;/li&gt;
&lt;li&gt;Our names are tied heavily to our identity and memories.&lt;/li&gt;
&lt;li&gt;The threat of going crazy with loneliness is not something to be sniffed at.&lt;/li&gt;
&lt;li&gt;The book used science to solve real world problems with real world consequences if they were wrong. The protagonist couldn’t rely on computers. They needed the information in their brain. It made me think that the sciences (including maths) are so poorly taught. Completely abstracted away from any real problem, simplified to the equivalent of brain baby food and then spoon fed to children. Science is interesting, useful and in my view, pretty cool.&lt;/li&gt;
&lt;li&gt;Simple ”truths” like waving, morse code, saying “hi”, sleeping and other things are so beautiful and could have been completely different. The first interactions with the alien are fascinating because you have to build up this base of language, which in their case isn’t even words!&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Favourite Quotes&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;“Hurry”, “ok I’ll wait faster”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;I’m smart enough now to know I’m stupid.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Human brains are amazing things. We can get used to just about anything. I’m making the adjustment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Oh thank God. I can’t imagine explaining “sleep” to someone who had never heard of it. Hey, I’m going to fall unconscious and hallucinate for a while. By the way, I spend a third of my time doing this. And if I can’t do it for a while, I go insane and eventually die. No need for concern.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
  
  <entry>
    <title>What it takes to become a lead developer</title>
    <link href="/become-lead-developer/"/>
    <updated>2022-11-16T00:00:00Z</updated>
    <id>/become-lead-developer/</id>
    <content type="html">&lt;p&gt;&amp;quot;Lead Developer&amp;quot; - now that would look good on a business card.&lt;/p&gt;
&lt;p&gt;Most developers I have worked with have been striving after this title. But many see the word &amp;quot;Lead&amp;quot; and stop reading there. Based on this, the idea is conjured up that a lead developer will command their team, revolutionize the technology stack and code 75% of the tickets. Meanwhile, others think it&#39;s about being a bridge between development and other groups - representing their vision for the company and communicating their concerns.&lt;/p&gt;
&lt;p&gt;Simply put, there is a lot of confusion about who a lead developer is and what exactly he or she does.&lt;/p&gt;
&lt;p&gt;Recently some developers have approached me asking what it takes to be a lead developer. In their mind, they are already there: they ship quality code, push the team forward and introduce new technology. What more could you want?&lt;/p&gt;
&lt;p&gt;Rather than focusing on specific actions that you&#39;ll be performing, it&#39;s far better to focus on attributes you should develop to be a lead developer. The reason is that the effectiveness of individual actions is hard to measure. Sure, &lt;em&gt;you&lt;/em&gt; may feel your ignored suggestions would transform the company, but are you in touch with the broader business context? When asking for a promotion, be mindful of assuming you&#39;re already there. You may well not be (and that&#39;s ok!).&lt;/p&gt;
&lt;p&gt;Build the attributes below, and you&#39;ll be promoted in no time.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Communicate effectively.&lt;/strong&gt; In the era of remote work, communication is everything. Aside from the obvious, not complicating a topic, sticking to the agenda and choosing clear words, there is another crucial element of good communication. Translation. Not from Hebrew to Maltese but from non-technical teams to technical teams and vice versa. As a lead developer, you play a vital role in helping both teams work together cohesively. A good measure of this is how much one team hates the other - the less, the better! Doing this translation work also comes with being able to listen (yes, communication is both speaking and listening) to stakeholders about wider business concerns and use them to inform technical decisions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Take responsibility.&lt;/strong&gt; As Uncle Ben (Spider-Man&#39;s uncle, not the bloke who sells microwave rice) would say, &amp;quot;with great power comes great responsibility&amp;quot;. As the team lead, you are responsible for how well the technical staff under your care perform. It is your responsibility when something goes wrong, even if it&#39;s not your fault. Your job is chiefly to make your team look good, not yourself. Building responsibility helps prove that you&#39;re ready to shoulder people problems.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Be principle- and process-driven.&lt;/strong&gt; There is no exact playbook for each situation that will arise as a lead developer. But by being principle-driven, you will always know what to do. And processes will form the basis of consistent actions across various decisions. For example, by taking the lead in incident postmortems, you will help develop the principle of transparency and discovery.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Trust others.&lt;/strong&gt; You don&#39;t know everything, and you can&#39;t know everything. And that&#39;s good! It makes humans interesting and gives different perspectives. When you don&#39;t know something, don&#39;t be afraid to empower someone on your team who does. Trust them, and ask questions to learn more about what they know.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Trust yourself.&lt;/strong&gt; As a lead developer, you have a lot of knowledge that your team needs. Don&#39;t be afraid to mentor them and teach them everything you know. Far from reducing your job security, this will build a better relationship with your team and improve the quality of its work.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Lead Dev or Bust?&lt;/h3&gt;
&lt;p&gt;Junior dev, mid-level, lead developer. Job done. Right?&lt;/p&gt;
&lt;p&gt;I used to think so.&lt;/p&gt;
&lt;p&gt;But it&#39;s really not as simple as that path. Rather than taking &amp;quot;upwards steps&amp;quot; into management, you can take horizontal steps. If you&#39;re a web developer, look at DevOps, Security engineering, site reliability engineering and solutions architecture. Because of the complexity of modern-day systems, there are so many technical roles now.&lt;/p&gt;
&lt;p&gt;Management might not be for you. It&#39;s not for me. I got bored of managing people directly, doing 1-to-1s, and sitting in planning meeting after planning meeting. I wanted to code, write, read about cool tech and create automated tools. It might not be for you, either.&lt;/p&gt;
&lt;p&gt;Having said that, we can all use the principles outlined above. Developing them will lead to being better engineers and improve teamwork.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>No Calendar</title>
    <link href="/no-calendar/"/>
    <updated>2022-11-10T00:00:00Z</updated>
    <id>/no-calendar/</id>
    <content type="html">&lt;p&gt;I used to plan everything - individual tasks, lunch breaks, exercise. My favourite phrase in the house was, &amp;quot;have you put it on the calendar?&amp;quot;&lt;/p&gt;
&lt;p&gt;Now, I don&#39;t care.&lt;/p&gt;
&lt;h2&gt;Why?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Planning took time.&lt;/strong&gt; Every week I would religiously put in my time blocks and appointments; this was good for gauging how much (or little) I could accomplish. But it also took a considerable chunk of time, with no tangible benefit when doing the work itself.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;I was anxious about being behind.&lt;/strong&gt; Time blocking my calendar led me to feel like I was constantly behind. One task would take longer than I had budgeted, knocking out my entire day. Time-blocking evangelists recommend adjusting your schedule when this happens. But it feels like a constant game of cat and mouse. I was always &amp;quot;chasing&amp;quot; the time rather than getting things done.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;I was feeling less accomplished.&lt;/strong&gt; I have a relatively good idea of how much I can do in a day. But, when I time-blocked my calendar, I found I got less done.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Meetings are usually pointless.&lt;/strong&gt; Although meetings can be helpful, it&#39;s rare to find a meeting where everyone is relevant to the discussion, it&#39;s for making a collaborative decision, and action points are recorded. Most of the time, meetings are bloated and accomplish nothing in the name of &amp;quot;teamwork&amp;quot;. I now reject 99% of meetings I&#39;m invited to and favour asynchronous working practices.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But now that I don&#39;t use my calendar much, what do I do?&lt;/p&gt;
&lt;h2&gt;How?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;I don&#39;t accept meetings.&lt;/strong&gt; As mentioned before, I prefer asynchronous working - discussions over documents, Slack or email.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Only plan things for the weekend ahead.&lt;/strong&gt; This means my wife and I can remember our plans and be mindful of other weekend tasks we may need to do.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reduce my commitments with other people so I can usually remember all our plans.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;I removed regular appointments.&lt;/strong&gt; Every week, I have about four regular appointments of various sorts. I realised I don&#39;t need my calendar for these. So, I have replaced them with a phone alarm, or I just remember.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Don&#39;t plan too far in advance.&lt;/strong&gt; Plans too far ahead often overwhelmed me because if circumstances changed, I had already committed to something. This simple approach means I can be more flexible with my time. The exception to this is rough travel plans and dentist appointments.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When I first ditched my calendar, it made me feel uneasy.&lt;/p&gt;
&lt;p&gt;Coming from a time-blocking world, I felt like I was wasting time. Not doing what I was &amp;quot;supposed&amp;quot; to do.&lt;/p&gt;
&lt;p&gt;But, I realised I needed to pay better attention to myself. If I&#39;ve done my daily tasks, I can exercise, take a walk, be with my family, or call someone.&lt;/p&gt;
&lt;p&gt;I can relax because I have a single source of &amp;quot;truth&amp;quot; for what I need to do - my to-do list. Life is far simpler, and I get more done.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Common objections to CI/CD and why they are wrong</title>
    <link href="/ci-cd-objections/"/>
    <updated>2022-11-02T00:00:00Z</updated>
    <id>/ci-cd-objections/</id>
    <content type="html">&lt;p&gt;Continuous integration and delivery is something many of us take for granted. In reality, the vast majority of businesses are still using manual testing and deployments.&lt;/p&gt;
&lt;p&gt;If you’re inside one of those businesses then doubtless, CI/CD has been suggested - but never implemented.&lt;/p&gt;
&lt;p&gt;The reasons vary but here are a few common objections to CI/CD and why they are wrong. Use it as a template to speak with your team about this.&lt;/p&gt;
&lt;h2&gt;We don’t have time&lt;/h2&gt;
&lt;p&gt;Often organisations are caught chasing their tails. They don&#39;t deploy automatically or write tests, which leads to bugs, which leads to more time spent.
To take a data-driven approach, measure how much time you spend doing manual deployments and fixing bugs that could have been caught by tests. Based on this, you can say: if we implement a CI/CD system, we can eliminate at least 70% of this time (bugs will always happen, no matter how many tests you write).&lt;/p&gt;
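&lt;p&gt;As a rough illustration, that sum could be sketched like this (the figures below are invented - plug in your own measurements):&lt;/p&gt;

```javascript
// Estimate the monthly hours a CI/CD pipeline could reclaim.
// Assumes (per the argument above) that automation removes roughly
// 70% of manual deployment and avoidable bug-fixing time.
function estimateHoursSaved({ deploysPerMonth, hoursPerManualDeploy, bugFixHoursPerMonth }) {
  const manualDeployHours = deploysPerMonth * hoursPerManualDeploy;
  return 0.7 * (manualDeployHours + bugFixHoursPerMonth);
}

// Hypothetical team: 8 deploys a month at 2 hours each,
// plus 20 hours a month fixing bugs that tests could have caught.
const saved = estimateHoursSaved({
  deploysPerMonth: 8,
  hoursPerManualDeploy: 2,
  bugFixHoursPerMonth: 20,
});
console.log(saved.toFixed(1)); // "25.2"
```

&lt;p&gt;A concrete number like &amp;quot;25 hours a month&amp;quot; is far harder to argue with than &amp;quot;it would save time&amp;quot;.&lt;/p&gt;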
&lt;h2&gt;People won’t fix the build if it breaks&lt;/h2&gt;
&lt;p&gt;This statement suffers from a negativity bias - looking to the worst case. It also betrays a lack of trust in the staff.&lt;/p&gt;
&lt;p&gt;A good CI/CD pipeline empowers the developers to resolve problems themselves. And whilst it’s impossible to eliminate 100% of issues, it is possible to add passing test suites as part of the requirements to accept a merge request.&lt;/p&gt;
&lt;p&gt;Additionally, appoint a “build master” that rotates week to week who is responsible for fixing the build. Make sure that person has that time factored in for any planning.&lt;/p&gt;
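&lt;p&gt;The rotation itself can even be automated, so no one has to remember whose week it is. A minimal sketch (the team names and the start date are hypothetical):&lt;/p&gt;

```javascript
// Pick this week's "build master" by rotating through the team,
// advancing one person per calendar week.
function buildMaster(team, date = new Date()) {
  const epoch = Date.UTC(2024, 0, 1); // an arbitrary fixed Monday
  const msPerWeek = 7 * 24 * 60 * 60 * 1000;
  const week = Math.floor((date.getTime() - epoch) / msPerWeek);
  return team[((week % team.length) + team.length) % team.length];
}

const team = ["Asha", "Ben", "Chloe", "Dan"];
console.log(buildMaster(team, new Date(Date.UTC(2024, 0, 1)))); // "Asha"
console.log(buildMaster(team, new Date(Date.UTC(2024, 0, 8)))); // "Ben"
```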
&lt;h2&gt;Releasing all the time will break things&lt;/h2&gt;
&lt;p&gt;This statement suffers from zero-risk bias. If you want to introduce an automated system, my guess is that the manual system has gone wrong - a lot.&lt;/p&gt;
&lt;p&gt;Taking a data-driven approach, you can trial a CI/CD process and then measure the number of errors that occur and the mean time to fix them.&lt;/p&gt;
&lt;p&gt;After the trial period, compare it to the data taken when doing manual releases. If the results are favourable, then you can take the discussion further.&lt;/p&gt;
&lt;h2&gt;We don’t have tests&lt;/h2&gt;
&lt;p&gt;This is, in some ways, the only valid reason. And while you can&#39;t get a full CI/CD pipeline running straight away, you can start. But how?&lt;/p&gt;
&lt;p&gt;Start small. Require tests for new features and bugs. Invest heavily (via contractors or dev time) in building an E2E test suite.&lt;/p&gt;
&lt;p&gt;If your business cares about shipping new features quickly and doesn’t want bugs then this investment should be an easy sell.&lt;/p&gt;
&lt;p&gt;—&lt;/p&gt;
&lt;p&gt;Ultimately, CI/CD is an investment. And many businesses are unwilling and afraid to take the plunge. Recognise that humans have biases that govern their thoughts and actions. You have your own biases too!
But, by taking a data-driven approach, it becomes a much more collaborative effort rather than one person championing a large change.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Get rid of your retrospective meetings</title>
    <link href="/no-more-retro/"/>
    <updated>2022-10-27T00:00:00Z</updated>
    <id>/no-more-retro/</id>
    <content type="html">&lt;p&gt;The end of the sprint rolls around - you know the drill. Pile into a room with the rest of the tech team for 2 hours and discuss how the sprint.&lt;/p&gt;
&lt;p&gt;The goal of this is to action any improvements that could be made.&lt;/p&gt;
&lt;p&gt;But that&#39;s rarely what happens.&lt;/p&gt;
&lt;p&gt;What happens is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;People rant and don&#39;t offer solutions.&lt;/li&gt;
&lt;li&gt;Teams assign actions/blame to people outside of the meeting.&lt;/li&gt;
&lt;li&gt;Action items aren&#39;t followed up correctly.&lt;/li&gt;
&lt;li&gt;Action items don&#39;t always reflect where business priorities lie.&lt;/li&gt;
&lt;li&gt;A small percentage of the team dominates the conversation, making it impossible for others to speak.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;And that is repeated week in, week out.&lt;/p&gt;
&lt;p&gt;I have seen this pattern emerge in many organisations I have worked with. And the solution was quite simple - get rid of retrospectives.&lt;/p&gt;
&lt;p&gt;What would this look like practically?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No regular meeting&lt;/li&gt;
&lt;li&gt;Create a group retrospective board that is the same week to week. This surfaces recurring problems.&lt;/li&gt;
&lt;li&gt;Action points &amp;quot;open&amp;quot; at any one given time are capped at 3. This pushes more ownership of those changes rather than just piling them up.&lt;/li&gt;
&lt;li&gt;The items raised on the retrospective board are done asynchronously via chat. This encourages less extroverted members of the team to get involved. The idea is that this process doesn&#39;t always need to be repeated. Just as and when there are new items to discuss.&lt;/li&gt;
&lt;li&gt;Once action items are decided as a priority, they are created as a ticket or task in a shared area (like JIRA).&lt;/li&gt;
&lt;li&gt;Reminders are set weekly to remind the team to follow up on these action items.&lt;/li&gt;
&lt;li&gt;All action items should have a few states - Done, In Progress, Outside of team scope, Not a business priority. In the case of the latter two, this means they are dropped or a workaround needs to be implemented.&lt;/li&gt;
&lt;/ul&gt;
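&lt;p&gt;The 3-item cap is trivial to enforce in whatever tooling you use. A minimal sketch, assuming action items are plain objects with the status values listed above:&lt;/p&gt;

```javascript
// Enforce a cap of 3 action items "In Progress" at any one time.
const MAX_OPEN = 3;

function canOpenNewAction(items) {
  const open = items.filter((item) => item.status === "In Progress").length;
  return open < MAX_OPEN;
}

// Hypothetical board state:
const board = [
  { title: "Speed up CI", status: "In Progress" },
  { title: "Document release steps", status: "In Progress" },
  { title: "Flaky E2E test", status: "Done" },
];
console.log(canOpenNewAction(board)); // true (only 2 items open)
```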
&lt;p&gt;Why this approach?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;No meetings.&lt;/strong&gt; Meetings cost a &lt;strong&gt;lot&lt;/strong&gt; of money. It&#39;s total madness to block important work in favour of ranting about all the problems. In the words of Elvis Presley - a little less conversation, a little more action, please. If you&#39;re curious about the cost, you can estimate it using &lt;a href=&quot;https://hbr.org/2016/01/estimate-the-cost-of-a-meeting-with-this-calculator&quot;&gt;this tool&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;All persons get an equal say.&lt;/strong&gt; Whereas in a meeting a small minority dominate the conversation, this approach encourages all to participate.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Asynchronous.&lt;/strong&gt; If you&#39;re a large organisation, no doubt you&#39;re desperate to expand to other countries. But without asynchronous working practices, it will be chaos. Encouraging these practices early sets you up for success in the future.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Focus on solutions, not rants.&lt;/strong&gt; Because the conversation is entirely oriented around capturing action items that line up with business goals and can be accomplished by the team, it stops conversations about how Joe in marketing is blocking work or Elizabeth in admin keeps sending requests directly to one developer. There is a place for those discussions, but they should be captured as and when they happen and escalated through managers. A retrospective sprint discussion focuses on the action items and nothing more.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Documented.&lt;/strong&gt; Visibility to stakeholders and the tech team is important. It means we can easily go back and see recurring issues, actions taken, and whether those solutions worked. This all builds towards an organisation that documents its work. All too often, retro meetings are held for 2 hours and produce nothing more than a couple of messages on Slack.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I have used this approach to great success in many organisations and I hope you can too!&lt;/p&gt;
&lt;p&gt;Give it a try for a while. After all, no one is going to complain about missing a 2-hour meeting every other week.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Creating my own personal Instagram</title>
    <link href="/webstagram/"/>
    <updated>2022-10-06T00:00:00Z</updated>
    <id>/webstagram/</id>
    <content type="html">&lt;p&gt;This website is my own little corner of the internet. Historically though, this site, has been a bit &amp;quot;distant&amp;quot;, a more professional reflection of my own self.&lt;/p&gt;
&lt;p&gt;Its purpose has been to share documentation on things I have done, and allow people to reach me on other platforms. This site is only a small slice of my personality with Twitter and Instagram filling in the more &amp;quot;personal&amp;quot; gaps.&lt;/p&gt;
&lt;p&gt;But recently, I logged out of Twitter, and it was over 3 years ago that I deleted Instagram entirely.
My reasons for getting rid of both were:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Creepy advertising.&lt;/strong&gt; Particularly from Instagram. My wife, who still has Instagram, has had a number of extremely personal adverts for things she never searched the web for.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;They had no purpose.&lt;/strong&gt; Twitter&#39;s main purpose for me was to &amp;quot;build an audience&amp;quot;. Building an audience seems to be the catch-all advice these days for influencers in the indie maker space. But I&#39;m not making a product at the moment - so it serves no purpose.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ownership of data.&lt;/strong&gt; Slowly, I&#39;m wanting my data to be more contained on my own website. I don&#39;t want to have tonnes of logins and platforms to check, scroll and update. I want to have my little corner of the internet that I can prune and shape.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But enough of the tinfoil hat, tie-dye t-shirt and discount James Bond villain sayings.
The question is: what am I doing about it?&lt;/p&gt;
&lt;p&gt;Well the first step is in a new page I have added.&lt;/p&gt;
&lt;h3&gt;Introducing /photos!&lt;/h3&gt;
&lt;p&gt;As you could guess from the URL, it&#39;s a page for my photos! Specifically, ones that I am proud of and label as &amp;quot;photography&amp;quot;.
How it works is a more exciting process that I will explain in another post. But, to cut a long story short, I can upload photos directly from my phone to my site. And all the data is kept in the repository (I may move it to a CDN in the future).
On my homepage, I have also added a photos slider that displays the 5 most recent images. Soon, it will be available as an RSS feed, so people can subscribe to it the way you might follow an Instagram account you like (I&#39;m working on that!).&lt;/p&gt;
&lt;h3&gt;The downsides&lt;/h3&gt;
&lt;p&gt;My website is obviously not a social platform. The photos I post here will not get as much exposure as on Instagram. And without using Webmentions, there is no simple way to comment on these posts.&lt;/p&gt;
&lt;p&gt;This is just the first version of the photos page, but marks the start of me moving my data onto my own website. Next up is my reading list, tweets (notes), and music!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to fix &#39;Public key authentication failed&#39; for Azure DevOps</title>
    <link href="/azure-ssh-fix/"/>
    <updated>2022-10-06T00:00:00Z</updated>
    <id>/azure-ssh-fix/</id>
    <content type="html">&lt;p&gt;If you&#39;re using DevOps and tried to clone down a repository with a Mac you might have stumbled across this error.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Cloning into &#39;example-repo&#39;...
remote: Public key authentication failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You double-check that your SSH key is correctly added, and it all looks correct. What gives?&lt;/p&gt;
&lt;p&gt;This error usually appears when you are using multiple SSH keys.&lt;/p&gt;
&lt;p&gt;The solution, thankfully, is simple.&lt;/p&gt;
&lt;h2&gt;Solution: Add &#39;IdentitiesOnly yes&#39; to your SSH config&lt;/h2&gt;
&lt;p&gt;Here&#39;s how to do it&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open up a terminal window and type &lt;code&gt;nano ~/.ssh/config&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add the following to your SSH config&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;Host ssh.dev.azure.com
    UseKeychain yes
    IdentitiesOnly yes
    AddKeysToAgent yes
    IdentityFile ~/.ssh/id_rsa
    PubkeyAcceptedKeyTypes=ssh-rsa
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;And hey presto you&#39;re done!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can now git clone projects with SSH using Azure.&lt;/p&gt;
&lt;p&gt;Microsoft tucks this fact away in their documentation &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops&amp;amp;tabs=current-page#q-i-have-multiple-ssh-keys--how-do-i-use-different-ssh-keys-for-different-ssh-servers-or-repos&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Blog Roll</title>
    <link href="/blogroll/"/>
    <updated>2022-09-28T00:00:00Z</updated>
    <id>/blogroll/</id>
    <content type="html">&lt;p&gt;This page will be updated periodically. It is a list of websites that I enjoy reading.
The list is in no particular order.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.jvt.me&quot;&gt;Jamie Tanna&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://carol.gg/blog/&quot;&gt;Carol Gilabert&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://katydecorah.com&quot;&gt;Katy DeCorah&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://patrickcollison.com&quot;&gt;Patrick Collison&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.zachleat.com&quot;&gt;Zach Leatherman&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://martinfowler.com&quot;&gt;Martin Fowler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://highscalability.com&quot;&gt;HighScalability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://blog.jessfraz.com&quot;&gt;Jess Frazelle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://jvns.ca/&quot;&gt;Julia Evans&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.troyhunt.com/&quot;&gt;Troy Hunt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://mxb.dev&quot;&gt;Max Bock&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://semaphoreci.com/category/engineering&quot;&gt;Semaphore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://charity.wtf&quot;&gt;Charity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://rachelbythebay.com&quot;&gt;RachelByTheBay&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.elletownsend.co.uk&quot;&gt;Elle Townsend&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://brunty.me&quot;&gt;Matt Brunt&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Making the case for CI/CD</title>
    <link href="/case-for-ci-cd/"/>
    <updated>2022-08-31T00:00:00Z</updated>
    <id>/case-for-ci-cd/</id>
    <content type="html">&lt;p&gt;If you browse developer forums, continuous integration and continuous delivery will not be new concepts to you. We all love the utopian ideas of being able to open a pull request and commanding an army of robots to do your bidding and get the application tested and shipped.&lt;/p&gt;
&lt;p&gt;Unfortunately, the reality of many businesses is different. Developer experience is an area seldom invested in, despite being such a huge opportunity for large productivity gains.&lt;/p&gt;
&lt;p&gt;Perhaps you have already been pleading to tinker with GitHub Actions, switch to a PaaS or use an IaC tool, but to no avail.&lt;/p&gt;
&lt;p&gt;So, how can you champion the move to continuous integration and delivery?&lt;/p&gt;
&lt;p&gt;First let&#39;s look at the reasons people may object.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;They are scared of change.&lt;/strong&gt; This is a powerful force not to be underestimated, even in the fast-paced technology industry. This excuse is particularly potent with stakeholders, who (understandably) don&#39;t know the ins and outs of technology.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;They don&#39;t trust the team.&lt;/strong&gt; If things are already going wrong, why would they invest time in a risky new deployment strategy? Often upstream trust issues rear their heads in these discussions and it&#39;s important to navigate as carefully as a sailor does with rocks by a shoreline.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;They lack metrics to determine if change is successful.&lt;/strong&gt; As Peter Drucker famously said, &amp;quot;that which cannot be measured, cannot be improved&amp;quot;. Without numbers going up, down or sideways, a stakeholder cannot correctly understand the team&#39;s and the system&#39;s overall health.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The vision seems too good to be true.&lt;/strong&gt; If we dream of deploying 50 times a day to production like Instagram does, then it can seem like we&#39;ll never catch up to that level.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Start with approval&lt;/h2&gt;
&lt;p&gt;It&#39;s utterly pointless starting any of this without some kind of green light. Although we love stories of people doing things underground and against the odds, the fact is that you&#39;re not creating a cure for cancer. You should speak to your immediate team and get their feedback. Then create a formal proposal and, again, get more feedback. Then get the green light from the relevant stakeholder.&lt;/p&gt;
&lt;p&gt;The proposal should outline the overall vision for the system, the advantages, the risks and the downsides. If you can&#39;t think of risks and disadvantages, then you haven&#39;t thought about this problem enough.&lt;/p&gt;
&lt;p&gt;Throughout all this, make sure not to just push your vision on the team. Be collaborative. Keep an open mind. And don&#39;t pretend that it will be a breeze and nothing will go wrong (hint - it will!). The worst thing to do is to try to do it all yourself, or with only a couple of true believers. This is a surefire way to make sure the initiative is abandoned.&lt;/p&gt;
&lt;h2&gt;Start with the data&lt;/h2&gt;
&lt;p&gt;Oftentimes, stakeholders say they want a fancy dashboard with burndown rates, bug reports and the position of Saturn. In reality, an Excel spreadsheet will work.&lt;/p&gt;
&lt;p&gt;Create a spreadsheet with the following attributes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Release frequency.&lt;/strong&gt; For tracking the number of releases on a given day/week.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rollbacks or defects detected.&lt;/strong&gt; A count of the times you have had to fix or roll back problems on a production system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hours to deploy.&lt;/strong&gt; This one requires a bit more coordination. But, developers could start tracking their deployment time with a specific time tracking measure in software like Toggl or Harvest.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Start measuring these metrics for at least 4 weeks before making any changes. And make sure the spreadsheet is shared, collaborative and documented well - both inside the team and to any stakeholders.&lt;/p&gt;
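&lt;p&gt;When you later compare the baseline period with a trial period, those columns boil down to a few averages. A minimal sketch, with invented weekly figures:&lt;/p&gt;

```javascript
// Summarise weekly spreadsheet rows into comparable averages.
// Each row: { releases, defects, hoursToDeploy } for one week.
function summarise(weeks) {
  const total = (key) => weeks.reduce((sum, week) => sum + week[key], 0);
  return {
    releasesPerWeek: total("releases") / weeks.length,
    defectsPerRelease: total("defects") / total("releases"),
    hoursPerDeploy: total("hoursToDeploy") / total("releases"),
  };
}

// Hypothetical 2-week baseline: one manual release a week.
const baseline = [
  { releases: 1, defects: 3, hoursToDeploy: 4 },
  { releases: 1, defects: 2, hoursToDeploy: 5 },
];
console.log(summarise(baseline));
// { releasesPerWeek: 1, defectsPerRelease: 2.5, hoursPerDeploy: 4.5 }
```

&lt;p&gt;Running the same summary over the trial period gives you a like-for-like comparison to put in front of stakeholders.&lt;/p&gt;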
&lt;h2&gt;Start small&lt;/h2&gt;
&lt;p&gt;Now you have your numbers, you can start to look at improving them!&lt;/p&gt;
&lt;p&gt;Using your original proposal as a guide, build up a step by step plan that can be implemented in a sprint (a 2 week period) or less. Be mindful of any steps that will create blockers for others and plan around this accordingly to mitigate their impact.&lt;/p&gt;
&lt;p&gt;Unlike with code, we often can&#39;t roll back changes made to deployment and testing systems. So, think carefully about how those changes can be reversed if need be. I&#39;ve been burnt many times by not considering this carefully enough.&lt;/p&gt;
&lt;p&gt;After each step, take a 1-2 week break before making other changes, continuing to record your data points. Verify that the numbers are heading in the right direction. And if they aren&#39;t, you can address this in a retrospective meeting.&lt;/p&gt;
&lt;p&gt;A sample plan may look like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Goal: Be able to merge into the `staging` branch from a feature branch and have the code automatically tested and deployed to our staging environment.
Currently: FTP files onto a server manually once a week on Tuesday. No test suite.

Step 1: Create a testing suite for critical components of the application
Step 2: Create a github action (or other CI service) to run the test suite for all new pull requests
Step 3: Add a github action that FTP&#39;s files from the `staging` branch to the staging server
&lt;/code&gt;&lt;/pre&gt;
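&lt;p&gt;For illustration, Step 2 might look something like the workflow below if you use GitHub Actions (the Node version and test command are assumptions - adjust them for your stack):&lt;/p&gt;

```yaml
# Hypothetical workflow: run the test suite on every pull request.
name: test
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test
```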
&lt;p&gt;Obviously, I have glossed over a number of details, but hopefully you get the idea. It&#39;s about making small changes rather than going for a &amp;quot;big bang&amp;quot; change (which is always doomed to fail).&lt;/p&gt;
&lt;p&gt;It can be tempting to prioritize big change first, but as a word of caution, try to focus on being iterative rather than catching the big fish.&lt;/p&gt;
&lt;p&gt;As you go forward, and the metrics improve, the buy-in from stakeholders and others will improve. Nothing makes a stakeholder happier than knowing that it&#39;s cheaper to do some work!&lt;/p&gt;
&lt;p&gt;This can drive further change which follows a similar process as outlined above.&lt;/p&gt;
&lt;p&gt;By the end of it, you may not be deploying to production 50 times a day, but doing something much more humble. Saving you and your colleagues, hours of mundane, repetitive and error prone work. And isn&#39;t that just as valuable?&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Question your Rate limits</title>
    <link href="/rate-limits/"/>
    <updated>2022-08-02T00:00:00Z</updated>
    <id>/rate-limits/</id>
    <content type="html">&lt;p&gt;If you are building a system with an API, there is a good chance it has a rate limit or you stay awake at night afraid of a DDoS attack.&lt;/p&gt;
&lt;p&gt;Rate limits are a method to reduce network traffic by putting a cap on the number of times an action (like calling an API endpoint) can be performed in a certain time frame.&lt;/p&gt;
&lt;p&gt;Rate limits for APIs are spoken of as a security measure. And, in principle, they are a good idea. They prevent heavy usage of your system in a short space of time. The kind of behaviour that a malicious actor would have. Or would they?&lt;/p&gt;
&lt;h2&gt;What is spam?&lt;/h2&gt;
&lt;p&gt;The problem with rate limits is that they are blunt instruments. More of a mallet than a katana.&lt;/p&gt;
&lt;p&gt;You can define a limit on the number of requests, let&#39;s say 100 per second, per API key. But then you get a call from your golden goose customer, ACME Corp. They say they are getting 429 (rate limited) responses when calling your API - because they are such big customers of it!&lt;/p&gt;
&lt;p&gt;Granted, in this situation, you can raise the rate limit for them individually. But you don&#39;t want to involve manual work as you gain larger customers. You&#39;re trying to prevent malicious usage, not mass usage.&lt;/p&gt;
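&lt;p&gt;For reference, the kind of limiter under discussion is easy to sketch. A minimal fixed-window limiter keyed per API key (a production system would typically back this with Redis rather than an in-memory Map):&lt;/p&gt;

```javascript
// Fixed-window rate limiter: at most `limit` requests per key per window.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counts = new Map(); // apiKey -> { windowStart, count }
  }

  // Returns true if the request is allowed, false if it should get a 429.
  allow(apiKey, now = Date.now()) {
    const entry = this.counts.get(apiKey);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(apiKey, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// 100 requests per second, per API key: ACME Corp's 101st request
// inside the same second gets rejected, however legitimate it is.
const limiter = new RateLimiter(100, 1000);
let allowed = 0;
for (let i = 0; i < 101; i += 1) {
  if (limiter.allow("acme-corp", 0)) allowed += 1;
}
console.log(allowed); // 100
```

&lt;p&gt;Note how bluntly it behaves: the limiter cannot tell a burst of legitimate traffic from an attack - exactly the problem described above.&lt;/p&gt;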
&lt;p&gt;Firewalls on top of your application aren&#39;t a silver bullet either. They can enforce semi-complex rules for rate limiting but the problem of differentiating spam from legitimate traffic remains.&lt;/p&gt;
&lt;h2&gt;Masking over scaling problems&lt;/h2&gt;
&lt;p&gt;In serverless environments, there is an associated cost with lots of requests. But for a long-running service hosted on EC2, ECS or EKS (or the Azure/GCP alternative), do more requests &lt;em&gt;really&lt;/em&gt; matter? Not really.&lt;/p&gt;
&lt;p&gt;In my experience, rate limits are often introduced to protect against scalability problems. And even if they are not, they often mask over scaling issues. Although we don&#39;t want to prematurely optimise the product, it is prudent to have at least a 50% buffer over your peak load. In other words, if your system can handle 10,000 users at peak time, you should aim to cope with 15,000 users.&lt;/p&gt;
&lt;p&gt;Using rate limits to avoid scaling problems can be wise and you certainly want to protect against malicious use of your system. But, it&#39;s only one side of the coin.&lt;/p&gt;
&lt;h2&gt;Be mindful of the use case&lt;/h2&gt;
&lt;p&gt;When someone mentions adding a rate limit, ask why. What attack vector are you trying to mitigate? What specific type of traffic are we trying to avoid?&lt;/p&gt;
&lt;p&gt;In any case, there should be some data behind this to say we are getting X traffic which is pushing up 95th-percentile response times by Y.&lt;/p&gt;
&lt;p&gt;As we have discussed, rate limits are blunt instruments. They are only one weapon in our fight against malicious actors. The solution is a multi-pronged approach:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Implement bot detection (like Cloudflare Bot Management) to detect bot attacks&lt;/li&gt;
&lt;li&gt;Add a firewall to deny incoming connections that aren&#39;t legitimate.&lt;/li&gt;
&lt;li&gt;Add cautious rate limits to key areas of the system. You will likely need a higher rate limit on your GET /customer endpoint than on your /login endpoint. Work from the 50% buffer limit.&lt;/li&gt;
&lt;li&gt;Add systems to detect if a person is trying to login with known-to-be-breached passwords for a single user. This is likely a credential stuffing attack or a brute force against a key user.&lt;/li&gt;
&lt;li&gt;Implement a &amp;quot;blocked&amp;quot; user concept. If there are N failed attempts, then the user needs to unlock their account using a code from their email.&lt;/li&gt;
&lt;/ol&gt;
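&lt;p&gt;As a sketch of point 5 (the field names and unlock flow here are assumptions, not a prescription):&lt;/p&gt;

```javascript
// Lock an account after N failed login attempts; unlocking requires
// a code sent to the user's email address.
const MAX_FAILED_ATTEMPTS = 5;

function recordFailedLogin(user) {
  user.failedAttempts = (user.failedAttempts || 0) + 1;
  if (user.failedAttempts >= MAX_FAILED_ATTEMPTS) {
    user.blocked = true; // all logins now go via the email-code flow
  }
  return user;
}

function unlockWithEmailCode(user, code, expectedCode) {
  if (user.blocked && code === expectedCode) {
    user.blocked = false;
    user.failedAttempts = 0;
  }
  return user;
}

const user = { email: "alice@example.com", failedAttempts: 0, blocked: false };
for (let i = 0; i < 5; i += 1) recordFailedLogin(user);
console.log(user.blocked); // true
unlockWithEmailCode(user, "123456", "123456");
console.log(user.blocked); // false
```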
&lt;p&gt;There are of course many more defences you can use to fight spam. In a majority of cases though, your site won&#39;t be the target of malicious use. So, don&#39;t think you need the same security as MI5 - because you probably don&#39;t. Rate limits are an easy method to implement, but question the why. It might open up a larger conversation about more robust security measures. But, always keep the customer and their data in mind, not the protection of response time statistics.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Site Upgrades - Gatsby V4 and Webmentions</title>
    <link href="/site-upgrades/"/>
    <updated>2022-07-28T00:00:00Z</updated>
    <id>/site-upgrades/</id>
    <content type="html">&lt;p&gt;I was fed up. I was writing a post for my site the other day and was reminded of the fact that I couldn&#39;t run the site locally. It was time to fix that, once and for all. I was going to take on the task of lifting my site from the stringed together mess and add some new features along the way.&lt;/p&gt;
&lt;p&gt;Here&#39;s how I did it:&lt;/p&gt;
&lt;h2&gt;Gatsby V4&lt;/h2&gt;
&lt;p&gt;First thing on the agenda was upgrading Gatsby. The existing site was on Gatsby V2 and now didn&#39;t even run on my computer. As I was upgrading 2 major versions, and dozens of other plugins, I thought I was in for a difficult time. But, thankfully the upgrade process was relatively painless.&lt;/p&gt;
&lt;p&gt;I simply ran &lt;code&gt;ncu -u &amp;amp;&amp;amp; yarn&lt;/code&gt; and I was away!&lt;/p&gt;
&lt;p&gt;I did have to update the &lt;code&gt;gatsby-plugin-feed&lt;/code&gt; as it now required configuration. I popped in the following, based on the example in the documentation and hey presto!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;{
      resolve: `gatsby-plugin-feed`,
      options: {
        query: `
          {
            site {
              siteMetadata {
                title
                description
                siteUrl
                site_url: siteUrl
              }
            }
          }
        `,
        feeds: [
          {
            serialize: ({ query: { site, allMarkdownRemark } }) =&amp;gt; {
              return allMarkdownRemark.nodes.map((node) =&amp;gt; {
                return Object.assign({}, node.frontmatter, {
                  description: node.excerpt,
                  date: node.frontmatter.date,
                  url: site.siteMetadata.siteUrl + node.fields.slug,
                  guid: site.siteMetadata.siteUrl + node.fields.slug,
                  custom_elements: [{ &amp;quot;content:encoded&amp;quot;: node.html }],
                });
              });
            },
            query: `
              {
                allMarkdownRemark(
                  sort: { order: DESC, fields: [frontmatter___date] },
                ) {
                  nodes {
                    excerpt
                    html
                    fields {
                      slug
                    }
                    frontmatter {
                      title
                      date
                    }
                  }
                }
              }
            `,
            output: &amp;quot;/rss.xml&amp;quot;,
            title: &amp;quot;Developer Musings - Josh Ghent RSS Feed&amp;quot;,
          },
        ],
      },
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Fixing the archive page&lt;/h2&gt;
&lt;p&gt;This is something that has bugged me for ages. For the eagle-eyed among you, you may have noticed all the way at the bottom of the page that February 2018 was listed before December and November 2018.&lt;/p&gt;
&lt;p&gt;Initially I thought this was to do with my custom graphql grouping that groups based on the &lt;code&gt;year-month&lt;/code&gt;. But after some debugging I found that it was grouping correctly, but not sorting the groups.&lt;/p&gt;
&lt;p&gt;I added a sort to the &lt;code&gt;allMarkdownRemark&lt;/code&gt; graphql statement but that had no effect.&lt;/p&gt;
&lt;p&gt;Then I noticed that all of the months were just single digits. So to GraphQL&#39;s string sort, &amp;quot;2018-2&amp;quot; was greater than &amp;quot;2018-12&amp;quot;.&lt;/p&gt;
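&lt;p&gt;To see the string-sorting behaviour in isolation (plain JavaScript, outside of Gatsby or GraphQL):&lt;/p&gt;

```javascript
// Unpadded months sort wrongly as strings: at the fifth character,
// "2" compares after "1", so "2018-2" beats "2018-11" and "2018-12".
const unpadded = ["2018-2", "2018-11", "2018-12"].sort().reverse();
console.log(unpadded); // [ "2018-2", "2018-12", "2018-11" ] - February first!

// Zero-padding makes the string order match chronological order.
const padded = ["2018-02", "2018-11", "2018-12"].sort().reverse();
console.log(padded); // [ "2018-12", "2018-11", "2018-02" ]
```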
&lt;p&gt;I updated my code that added this node to this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// gatsby-node.js
const { createFilePath } = require(&amp;quot;gatsby-source-filesystem&amp;quot;);

function pad(n) {
  return n &amp;lt; 10 ? &amp;quot;0&amp;quot; + n : n;
}

exports.onCreateNode = ({ node, actions, getNode }) =&amp;gt; {
  const { createNodeField } = actions;

  if (node.internal.type === &amp;quot;MarkdownRemark&amp;quot;) {
    const value = createFilePath({ node, getNode });
    createNodeField({
      name: &amp;quot;slug&amp;quot;,
      node,
      value,
    });

    const date = new Date(node.frontmatter.date);

    const year = date.getFullYear();
    const month = pad(date.getMonth() + 1);
    const yearMonth = `${year}-${month}`;
    const day = date.getDate();

    createNodeField({ node, name: &amp;quot;year&amp;quot;, value: year });
    createNodeField({ node, name: &amp;quot;month&amp;quot;, value: month });
    createNodeField({ node, name: &amp;quot;year-month&amp;quot;, value: yearMonth });
    createNodeField({ node, name: &amp;quot;day&amp;quot;, value: day });
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After restarting the site build, the dates were all prefixed with zeros and the sorting worked!&lt;/p&gt;
&lt;p&gt;Unfortunately, when I created a pull request for my site, and Netlify attempted to create a deployment preview, it failed. After reviewing the issue, it was down to a plugin, &lt;code&gt;gatsby-plugin-preact&lt;/code&gt;. This plugin massively reduced the bundle size of my site. But I couldn&#39;t find a way of working around the problem, so I had to remove it. It appears to stem from the fact that Preact and React are no longer interchangeable as of React 18. Unfortunately, this means my site&#39;s payload is up to 400KB. I&#39;ll be working to reduce this as that&#39;s far too bloated.&lt;/p&gt;
&lt;h2&gt;Design&lt;/h2&gt;
&lt;p&gt;Design has never been something that came naturally to me. But, I employed a trick from some great artists - I copied.&lt;/p&gt;
&lt;p&gt;For example, the little block page breaks, they are from &lt;a href=&quot;https://muan.co/&quot;&gt;Mu-An Chiou&lt;/a&gt;. I loved the clean, minimal design of her site. Mine looked a little rough around the edges and needed &lt;em&gt;something&lt;/em&gt; to make it a bit different. I&#39;ll be continuing to iterate upon the design.&lt;/p&gt;
&lt;p&gt;I also fixed a long standing issue with the mobile navigation. The buttons were so close to each other you couldn&#39;t really click on the link you wanted. Or at least I couldn&#39;t with my fat fingers.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;Old navigation bar on mobile&quot; src=&quot;../../assets/images/old-nav.png&quot; /&gt;
&lt;/div&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;New navigation bar on mobile&quot; src=&quot;../../assets/images/new-nav.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;So I updated the links to give them more space.&lt;/p&gt;
&lt;p&gt;The last thing was fonts. Websites should be interesting to look at. And, in a small part, I tried to accomplish that by using a new title font - &lt;a href=&quot;https://fonts.google.com/specimen/Space+Grotesk&quot;&gt;Space Grotesk&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I feel this font face immediately tells you it&#39;s a blog about software development. I&#39;m going to work on other small design tweaks to make the site more charming.&lt;/p&gt;
&lt;h2&gt;Webmentions&lt;/h2&gt;
&lt;p&gt;Something that&#39;s been on my list for ages is adding indieweb features, like webmentions. A number of my friends, Carol, Jamie and others, have added webmentions to their sites.&lt;/p&gt;
&lt;p&gt;The basic premise is that you can collect instances where your site has been referenced around the entire web. It goes back to what the web was originally intended to be: a network of pages.&lt;/p&gt;
&lt;p&gt;In the past, I had a comments section on Disqus. It was removed shortly after due to lack of usage. No one leaves comments on sites anymore. But they do tweet about them or add them to their newsletter. And webmentions allow us to collect those instances. You can give it a try by tweeting a link to this post and seeing your tweet appear at the bottom of this page!&lt;/p&gt;
&lt;p&gt;The webmentions are fed from https://webmention.io, a fantastic free API that collects all of these mentions for your site.&lt;/p&gt;
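&lt;p&gt;As a sketch, pulling those mentions down is a single call to the public jf2 endpoint (the target URL below is just an example):&lt;/p&gt;

```javascript
// Build the webmention.io query URL for a given page.
function buildWebmentionUrl(target) {
  return "https://webmention.io/api/mentions.jf2?target=" + encodeURIComponent(target);
}

// Fetch all mentions (likes, replies, reposts) recorded for that page.
async function fetchWebmentions(target) {
  const res = await fetch(buildWebmentionUrl(target));
  const data = await res.json();
  return data.children; // jf2 puts the individual mentions in "children"
}

// usage (uncomment to hit the live API):
// fetchWebmentions("https://joshghent.com/site-upgrades/").then((mentions) =>
//   console.log(mentions.length + " mentions found")
// );
console.log(buildWebmentionUrl("https://joshghent.com/"));
```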
&lt;h2&gt;H-Feed&lt;/h2&gt;
&lt;p&gt;A small Indieweb update I made was to include an H-Feed for my blog posts. Think of H-Feeds as RSS for the Indieweb.&lt;/p&gt;
&lt;p&gt;I added the following code to my &lt;code&gt;/archive&lt;/code&gt; page.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;&amp;lt;ul style={{ display: &amp;quot;none&amp;quot; }} className=&amp;quot;h-feed&amp;quot;&amp;gt;
  &amp;lt;h1 className=&amp;quot;p-name site-title&amp;quot;&amp;gt;{siteTitle}&amp;lt;/h1&amp;gt;
  &amp;lt;p className=&amp;quot;p-summary&amp;quot;&amp;gt;Archive of all posts from joshghent.com&amp;lt;/p&amp;gt;
  {data.posts.edges.map(({ node }) =&amp;gt; (
    &amp;lt;li&amp;gt;
      &amp;lt;article className=&amp;quot;h-entry&amp;quot;&amp;gt;
        &amp;lt;Link className=&amp;quot;u-url&amp;quot; href={node.fields.slug}&amp;gt;
          &amp;lt;h2 className=&amp;quot;p-name&amp;quot;&amp;gt;{node.frontmatter.title}&amp;lt;/h2&amp;gt;
        &amp;lt;/Link&amp;gt;
        &amp;lt;address className=&amp;quot;p-author author h-card vcard&amp;quot;&amp;gt;
          &amp;lt;a
            href=&amp;quot;https://joshghent.com&amp;quot;
            className=&amp;quot;u-url url p-name fn&amp;quot;
            rel=&amp;quot;author&amp;quot;
          &amp;gt;
            Josh Ghent
          &amp;lt;/a&amp;gt;
        &amp;lt;/address&amp;gt;
        &amp;lt;span&amp;gt;
          &amp;lt;time className=&amp;quot;dt-published&amp;quot; dateTime={node.frontmatter.date}&amp;gt;
            {node.frontmatter.date}
          &amp;lt;/time&amp;gt;
        &amp;lt;/span&amp;gt;
        &amp;lt;p className=&amp;quot;p-summary&amp;quot;&amp;gt;{node.frontmatter.description}&amp;lt;/p&amp;gt;
      &amp;lt;/article&amp;gt;
    &amp;lt;/li&amp;gt;
  ))}
&amp;lt;/ul&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Tests&lt;/h2&gt;
&lt;p&gt;Ok so this bit might be a bit overkill. I added tests to my website.&lt;/p&gt;
&lt;p&gt;My motivation was simple:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I wanted to reliably merge dependabot (and my own) updates without issue&lt;/li&gt;
&lt;li&gt;I wanted to learn more about site testing (where there isn&#39;t code per-se to be unit tested)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I added a few tests using Jest that make sure certain components render correctly when given certain inputs. I utilized snapshots which have worked great so far and even caught some bugs!&lt;/p&gt;
&lt;p&gt;In the future, I&#39;ll be adding screenshot testing to make sure the webmentions component renders correctly and more.&lt;/p&gt;
&lt;h2&gt;What&#39;s Next?&lt;/h2&gt;
&lt;p&gt;My next phase of development for my site is adding new data sources and capturing &amp;quot;notes&amp;quot;. Many Indieweb people prefer to use apps that publish to a micropub endpoint. But I don&#39;t want to do anything &amp;quot;special&amp;quot; outside of my normal workflow (like tweeting, listening to music etc). So instead, I&#39;ve decided to attempt to do it all via Github Actions.&lt;/p&gt;
&lt;p&gt;I already &lt;a href=&quot;https://github.com/joshghent/blog/blob/master/.github/workflows/bookmark.yml&quot;&gt;have the first action setup&lt;/a&gt; that will record data associated with a page that I bookmark. It was created by &lt;a href=&quot;https://katydecorah.com&quot;&gt;Katy DeCorah&lt;/a&gt;, with the idea that each time you want to bookmark a page, you can do so by creating a GitHub issue. The action then reads the URL and records it to a YAML file.&lt;/p&gt;
&lt;p&gt;The last part, which I haven&#39;t done, is using that data to dynamically create pages.&lt;/p&gt;
&lt;p&gt;But, onwards and upwards!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Culture of Estimations</title>
    <link href="/estimation-culture/"/>
    <updated>2022-07-04T00:00:00Z</updated>
    <id>/estimation-culture/</id>
    <content type="html">&lt;p&gt;In business, moving fast is everything.&lt;/p&gt;
&lt;p&gt;Sales teams depend on new features, marketing wants to advertise them, and the CEO wants to award themselves a large paycheck without any quarrels.&lt;/p&gt;
&lt;p&gt;But, it&#39;s the software engineers who have to design, build and test the features. And because software is an inherently creative pursuit, figuring out when something will be done is hard.&lt;/p&gt;
&lt;p&gt;Thus, we are led to the creation of software estimations. You may know them as story points or ticket points.&lt;/p&gt;
&lt;p&gt;On the surface, the goal of trying to &amp;quot;estimate&amp;quot; when work will be done is a good one. Software is not something done in isolation, and we have to recognise that other teams depend on the work we are doing.&lt;/p&gt;
&lt;p&gt;But the reality is that estimations are just that - estimations. Yet they are treated as absolute values. Blame is assigned when your 3-point ticket took longer than a few days. Your job and any promotions are weighed in the balance of a single figure that was perhaps not even decided by you.&lt;/p&gt;
&lt;p&gt;But where did we go wrong here? Why are estimations and the culture around them harmful?&lt;/p&gt;
&lt;h2&gt;Managers ask for estimations but want deadlines&lt;/h2&gt;
&lt;p&gt;One of the biggest issues with estimates is that they are used as a proxy word for a deadline. In reality, deadlines are important. They allow other teams to prepare. For example, a deadline for a feature allows marketing to prepare a campaign, sales teams to be trained on the new feature and support teams to learn the ins and outs.&lt;/p&gt;
&lt;p&gt;But estimates are not deadlines. So why not remove the middleman?&lt;/p&gt;
&lt;p&gt;In some cases (not all), it would be best to communicate with your manager and ask them what deadline there is. And then, practically, discuss the work item to meet that deadline somehow.&lt;/p&gt;
&lt;h2&gt;Developers rarely have all the information required&lt;/h2&gt;
&lt;p&gt;At their core, estimates are only as good as the information provided to them. And, unfortunately, there is no extra &amp;quot;confidence&amp;quot; number to put against an estimate. You have the estimate, and that&#39;s final.&lt;/p&gt;
&lt;p&gt;In my experience, tickets rarely have all the information you could need to make an informed estimate. You don&#39;t need everything and the kitchen sink. But, I have often seen cases where critical business logic was not listed in the requirements. Other times, designs are not listed because the work item is &amp;quot;basic&amp;quot;. Only for the feature to be redone when it turns out the stakeholders had very precise ideas of what the feature would look like.&lt;/p&gt;
&lt;p&gt;I&#39;m sure you have stumbled across cases like this before, maybe even being on the receiving end of them. One solution I have found practical is a checklist along with a ticket. It goes something like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Do we have designs for this item?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Do we have all the critical business logic listed in the acceptance criteria?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Do we have a clear idea of what systems we need to change to deliver this work?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Are there any potential blockers we can foresee in doing this work item?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In practice, you can never have 100% of the information. It&#39;s a ticket, not a crystal ball. But, having a high confidence level when making your estimates will improve a developer&#39;s happiness and decrease the chance that the feature needs to be changed after being built.&lt;/p&gt;
&lt;h2&gt;Estimates are formed based on a small set of factors&lt;/h2&gt;
&lt;p&gt;The premise of estimates is that the ticket being evaluated is a neat loop of functionality that can be rounded off and shipped by anyone in the team. Although in a software utopia that might be the case, the reality is much different.&lt;/p&gt;
&lt;p&gt;Tickets often overlap each other. Or only contain one part of the work involved in a larger feature. A common pattern is to divide backend and frontend tasks. In large companies, one team may need to create a ticket for another team to add a new value to an API. In all these cases, there is a question mark as to where the &amp;quot;effort&amp;quot; is assigned - to this ticket or that one.&lt;/p&gt;
&lt;p&gt;Furthermore, in my experience, estimates are based on previous work experience at a company in that project. The more experience a developer has, the lower the effort score they will give. I know I have been guilty of this. So, the estimate has an inherent bias towards people who have knowledge of the system. That then ties the ticket to an individual - which creates a bus factor.&lt;/p&gt;
&lt;p&gt;Additionally, there is the psychological factor. Estimates assume we are worker robots churning out code ad infinitum.&lt;/p&gt;
&lt;p&gt;But that&#39;s not the case. Here&#39;s an example.&lt;/p&gt;
&lt;p&gt;Your team has just finished a huge rewrite of a spaghetti ball of ASP.NET into a nice modern framework. Your team is pretty happy with their work. But, after that large set of work where progress seemed a foreign concept, they are given an enormous feature to do. Although their estimates may reflect their expertise on the project, the actual time it will take to complete the feature will be longer. Why? Because they are mentally exhausted from the rewrite they just finished.&lt;/p&gt;
&lt;p&gt;Another example: you have a family member with a serious illness. Again, your estimates may reflect the theoretical time it would take you. But the reality is different. You struggle to complete the task because of the mental tax from other sources.&lt;/p&gt;
&lt;p&gt;Although it&#39;s not possible to control these inherently human issues, we need to keep them in mind when making estimates, drawing up quarterly timelines and promising features to customers.&lt;/p&gt;
&lt;h2&gt;Estimate numbers are meaningless&lt;/h2&gt;
&lt;p&gt;What does 3 points even mean, exactly? Is 3 a day, a week? Joining any team I&#39;ve worked with feels like meeting an advanced race who have understood time in a new dimension. A lingo evolves that categorises a 3 as a day, a 5 as 3 days and a 13 as two weeks.&lt;/p&gt;
&lt;p&gt;But there is no consistency. The numbers are simply a proxy for the days.&lt;/p&gt;
&lt;p&gt;The original purpose was to be an estimate of both risk and time.&lt;/p&gt;
&lt;p&gt;And this, in theory, is good, but in practice it is hard to pin down a single number that represents two values and to communicate that across a set of people. Especially given that the number is arbitrary. When a doctor provides an estimate of the risk to a patient&#39;s life during an operation, it&#39;s given as a percentage. And this gives us some grounding, because we know that it represents the number of people per 100 that, in theory, will die during that operation.&lt;/p&gt;
&lt;p&gt;But estimate points lack that grounding into something real and practical.&lt;/p&gt;
&lt;p&gt;When I&#39;ve joined new teams in the past, they&#39;ve circulated an image listing the number of points and how that correlates to days. What results is that you are working like Alan Turing to decode the estimates during sprint planning meetings.&lt;/p&gt;
&lt;p&gt;My preference is to just drop the proxy. In the vast majority of cases, you can simply use days. Or, if possible, drop estimates altogether and prioritize rigorously. Often, work takes as long as it takes. Enough trust should be built in your team that the work will be done diligently without taking too much time.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you use estimates, be mindful of what that number represents.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Consider switching to another estimation system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Don&#39;t invent lingo that new team members need to learn to take part in the planning meetings.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Estimates are not one size fits all. Sometimes you may need a deadline. Other times you don&#39;t need an estimate at all.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Like many things, estimations started out with good intentions. But, along the road, they took a bad turn. Question the pre-existing routines and you quickly realise they&#39;ve been passed on as gospel. But, working as a team, you can build a system that works for all and continues to deliver a quality product.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>My journey with carpal tunnel</title>
    <link href="/carpal-tunnel/"/>
    <updated>2022-06-21T00:00:00Z</updated>
    <id>/carpal-tunnel/</id>
    <content type="html">&lt;p&gt;About 5 years ago, I went through a time when I doubted I could continue programming.&lt;/p&gt;
&lt;p&gt;I had carpal tunnel syndrome: a repetitive strain injury where the tendon going over the wrist at the base of your hand compresses the median nerve.&lt;/p&gt;
&lt;p&gt;This compression causes numbness in your hands, aching in the forearm, and the inability to move your fingers without stabs of pain reminding you of the injury.&lt;/p&gt;
&lt;p&gt;Carpal tunnel is the injury of the internet age. With many of us hunched over our screens and using them for everything (work, banking, socializing, entertainment, etc.), RSI is a natural consequence.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.maxpou.fr/rsi-as-developer&quot;&gt;Many&lt;/a&gt; &lt;a href=&quot;https://flaviocopes.com/repetive-strain-injury/&quot;&gt;programmers&lt;/a&gt; &lt;a href=&quot;https://www.swyx.io/rsi-tips&quot;&gt;have&lt;/a&gt; &lt;a href=&quot;https://mdlayher.com/blog/a-programmers-journey-with-rsi/&quot;&gt;written&lt;/a&gt; about their experiences. I wanted to share mine. Cutting through the noise and presenting, what I believe to be, reality.&lt;/p&gt;
&lt;p&gt;First, to give you some context on my overall health: I was 17-18 at the time, reasonably thin but not someone who frequented a gym.&lt;/p&gt;
&lt;h2&gt;Treating myself&lt;/h2&gt;
&lt;p&gt;When I first noticed pain in my forearm and tingling in my fingers, I did what all good internet children do - Google it. I stumbled across many different articles discussing tennis elbow, repetitive strain injury and carpal tunnel syndrome. Based on my own symptoms, I believed I had CTS.&lt;/p&gt;
&lt;p&gt;Foolishly, I had ignored the pain for the longest time. Choosing instead to block it out, favouring writing endless code to maximise my learning and playing Starcraft until the early hours.&lt;/p&gt;
&lt;p&gt;By the time I decided to do anything about it, it was already too late.&lt;/p&gt;
&lt;p&gt;My wrists ached with a pain I couldn&#39;t get rid of. And the underside of my forearm had filled with fluid to protect the area (aren&#39;t bodies clever?).&lt;/p&gt;
&lt;p&gt;I quickly searched YouTube for stretches I could do - including my favourite by &amp;quot;Dr Levi&amp;quot;. Although these exercises gave me some relief, it was temporary. I soon turned to painkillers desperate to rid myself of the pain.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Don&#39;t ignore your body. I could have managed the pain a lot better had I done things earlier.&lt;/p&gt;
&lt;h2&gt;Diagnosis&lt;/h2&gt;
&lt;p&gt;Getting diagnosed was easily the most annoying part of the experience. When you go to a doctor (on the NHS), the first thing they have to do is refer you to physiotherapy. One of the many failings of the British healthcare system is that it&#39;s based on flow charts. It standardizes care but doesn&#39;t give a patient the autonomy to skip stages that will likely be a waste of time and money.&lt;/p&gt;
&lt;h2&gt;Treatment&lt;/h2&gt;
&lt;h3&gt;Physio&lt;/h3&gt;
&lt;p&gt;The first lot of treatment was 5 sessions of physiotherapy. I tried to be as open-minded as possible but knew deep down that it wasn&#39;t going to give any lasting benefits.&lt;/p&gt;
&lt;p&gt;I was given a large rubber band to pull on for resistance training. Tennis balls for rolling out the affected area. And the same stretches I had been doing from YouTube.&lt;/p&gt;
&lt;p&gt;I got the impression that many of the doctors I met weren&#39;t well equipped to deal with someone of my age and fitness. Carpal tunnel usually affects pregnant women and the elderly.&lt;/p&gt;
&lt;p&gt;After all of this, my initial impression had been correct. It gave me some benefit, but nothing lasting.&lt;/p&gt;
&lt;p&gt;It wasn&#39;t all in vain, however. One particular doctor did figure out that I had the median nerve trapped in both my wrist and back (near my shoulder blade). This gave me a formal diagnosis of carpal tunnel syndrome.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Try to keep an open mind to people even if you think they can&#39;t solve the problem at hand.&lt;/p&gt;
&lt;h2&gt;Osteopathy&lt;/h2&gt;
&lt;p&gt;During the time I was getting physio, my mum had been going to an osteopath to help with her back. She recommended that I go and visit him and see what he could do. I&#39;m skeptical of alternative medicines but had seen the impact he had on my mum so I decided to give it a go.&lt;/p&gt;
&lt;p&gt;A single session was £50 for 30 minutes. That&#39;s a lot of money for me even today. But back then it felt like a huge chunk of my wage was being spent on these sessions. Nonetheless, I was willing to pay for just some relief and the glimmer of hope that it would work long-term.&lt;/p&gt;
&lt;p&gt;Despite numerous different techniques, nothing worked. However, after a massage (not just of the affected area), I did come out feeling incredible and was able to put away my painkillers for at least 2 days.&lt;/p&gt;
&lt;h2&gt;Surgery&lt;/h2&gt;
&lt;p&gt;After deciding osteopath treatments were a waste of time and physio had yielded no results, I decided to push my doctor to get me seen by someone else.&lt;/p&gt;
&lt;p&gt;Thankfully my GP referred me right away to a &amp;quot;hand specialist&amp;quot;.&lt;/p&gt;
&lt;p&gt;She also gave me a generous supply of co-codamol (30mg codeine + 500mg paracetamol). A drug I&#39;d been taking out of necessity after my body began to resist the effects of paracetamol and then Paramol (7mg codeine + 500mg paracetamol).&lt;/p&gt;
&lt;p&gt;The result of this cocktail of pills was that my work became extremely difficult to do. I became aware of a lack of focus and tiredness. Almost as if I was slightly tipsy and hadn&#39;t slept the night before. Initially, I had to time the drug carefully. If I took it too soon before or after eating, I felt like vomiting. And if I took 2 pills (the recommended dose) at the same time, I felt like my brain wanted to float out of my skull. Eventually, I figured out that I could take 1 tablet, wash it down with a coffee and then take another 20 minutes later. This would have minimal effects on my focus and not make me high, whilst also giving me pain relief for the day.&lt;/p&gt;
&lt;p&gt;Around this time, I came home one day to my mum pointing out that my skin had started to discolour. My liver&#39;s protest at having to deal with the painkillers.&lt;/p&gt;
&lt;p&gt;She prescribed a mountain of spinach and other iron-rich foods, which helped alleviate this.&lt;/p&gt;
&lt;p&gt;But the message was clear, &lt;strong&gt;I needed surgery.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When I saw the hand specialist, they immediately put me on the list for surgery. That was the good news. The bad news was that I&#39;d need to wait 4 months. He did offer that he had availability the next Tuesday at his private clinic. I forget how much the operation would have cost, but my family and I were unwilling to pay. I could wait it out.&lt;/p&gt;
&lt;p&gt;Whilst I waited for D-Day, I tried a laundry list of treatments that many with RSI recommend. With mixed results.&lt;/p&gt;
&lt;h2&gt;What did work&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Using a keyboard with low key travel (not mechanical).&lt;/strong&gt; This helped immensely as the amount of force needed from my forearms to my fingers was minimised. As much as I loved my Cherry Red switches, it wasn&#39;t worth it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rest.&lt;/strong&gt; As much as this helped, it wasn&#39;t practical to never use my wrists. I&#39;d recommend resting as much as possible, as soon as possible. I foolishly continued with hobbies based on a computer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stretching.&lt;/strong&gt; My forearms had tendons so tight, I could have played Beethoven&#39;s 5th using them as a violin. Stretching helped me loosen up.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exercise.&lt;/strong&gt; In small, light doses exercise did help. Flexibility workouts and lifting weights to strengthen my back aided in maintaining good posture.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hand warmers.&lt;/strong&gt; One side effect of carpal tunnel is that it affects blood flow into your fingers. Cold fingers being forced to type quickly feels like coaxing an unoiled engine to life after years of idle standstill. A &lt;a href=&quot;https://www.amazon.co.uk/Trongle-Warmers-Rechargeable-Arthritic-Sufferers/dp/B08DRHKMJM/ref=as_li_ss_tl?ie=UTF8&amp;amp;linkCode=ll1&amp;amp;tag=yoursarticle219-21&amp;amp;linkId=503a745072cba73a36bf74b60714cf3f&amp;amp;language=en_GB&quot;&gt;USB-C hand warmer&lt;/a&gt; is a life saver for the price.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What didn&#39;t work&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Believing it wasn&#39;t there.&lt;/strong&gt; I don&#39;t know what voodoo scientist came up with this idea, or how it became so popular in the tech community, but believing the pain wasn&#39;t there (quelle surprise) didn&#39;t work. The book &amp;quot;Healing Back Pain: The Mind-Body Connection&amp;quot; came highly recommended by both Hacker News and other programmers who had faced RSI. I read a portion online and quickly decided that it stank of bad science. If it works for you... great. You clearly have access to a realm far beyond my reach.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exercise.&lt;/strong&gt; As previously stated, doing small, light exercises did help. But the wisdom of lifting lots of heavy weights did not. I think this would have worked had I started sooner. But by the time I attempted it, my wrists felt like they would snap.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Wrist splints.&lt;/strong&gt; I used wrist supports every single night. I even took them to social events sometimes if my pain was particularly bad. Although in theory, they keep the pressure off your wrists by tilting them upward. The reality is I felt no material difference in pain when I did and didn&#39;t use them.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Surgery&lt;/h2&gt;
&lt;p&gt;When the day came for the surgery, I felt relief, but also a little anxiety. I had never had surgery before and was unsure of the &amp;quot;setup&amp;quot;. From movies where people have bullets removed and organs transplanted, we are familiar with the look of an oval operating theatre with dozens of doctors milling about.&lt;/p&gt;
&lt;p&gt;But my operation was different. I even, &lt;em&gt;sort of&lt;/em&gt;, knew how to do it.&lt;/p&gt;
&lt;p&gt;Dr. Youtube had provided me with a real-life glimpse into what my procedure would look like. It involved slicing the tendon that goes over the carpal tunnel. Simple as that. When it heals, you pray that it doesn&#39;t compress the nerve again.&lt;/p&gt;
&lt;p&gt;The surgeon himself was very friendly. There was soft classical music humming in the background and tubed lighting flickering above me. It was a small rectangular room with the operating table in the middle, two swivel stools at opposite corners and an ancient Windows XP tower whirring in one corner.&lt;/p&gt;
&lt;p&gt;A sort of screen that resembled a thick paper towel was erected to obscure my view from the operation that was taking place.&lt;/p&gt;
&lt;p&gt;After the surgeon ran through his checklist, confirming the operation I was having and my details, it started. First, I was injected with a local anaesthetic near the incision site. And then the surgery was underway. Due to the numbness, it felt like knives scraping at a brick. It felt odd. The pressure was familiar, but the lack of pain was something my brain couldn&#39;t quite fathom.&lt;/p&gt;
&lt;p&gt;In about 15 minutes, the surgery was done.&lt;/p&gt;
&lt;p&gt;I was helped to sit up and the doctor started asking me questions. About 30 seconds after, I passed out. It was the first time I&#39;d passed out (and, I hope, the last). I saw what resembled a wormhole, with walls of a metallic jelly herding me towards the infinite centre. I tumbled down. And then suddenly woke up in a different room with my then-girlfriend at my side. She asked how I was doing as I came round, bleary-eyed as if after a long night&#39;s sleep.&lt;/p&gt;
&lt;p&gt;The nurse assured me this was quite common, but I felt rather feeble having fainted at such a minor operation. I hadn&#39;t even seen the blood or the wound. I had simply passed out as the blood rushed back to my head.&lt;/p&gt;
&lt;p&gt;Day by day, my hand recovered. I regained mobility within a few days, but my hand was stiff from the stitches for a couple of weeks until they were removed.&lt;/p&gt;
&lt;p&gt;After 2 weeks, I returned to work. My hand was quite stiff and the stitches itched but my pain was gone!&lt;/p&gt;
&lt;p&gt;The fluid that had built up around my forearm remained, so I was a bit concerned the procedure hadn&#39;t worked completely. But about a month afterwards, it was clear that it had. The fluid, though, was not going to go away.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;After all these treatments, this is how my hand now looks.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/hand.jpeg&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;You can see the small scar just above the ridge of the wrist where the incision was.&lt;/p&gt;
&lt;p&gt;Overall, surgery was the best thing I could have done. My only regret is not acting sooner. Here are my main takeaways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Start the chain of care early.&lt;/strong&gt; It can take time to get referred to different clinics.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Listen to your body. Don&#39;t ignore it.&lt;/strong&gt; Get things resolved now. Like right now. Health is the most important thing you can have.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Surgery is not scary and does work.&lt;/strong&gt; Seriously. Look at the stats. You can&#39;t believe the pain away. If you feel you need surgery, and doctors agree, then go for it. It means you don&#39;t need to worry about having pain again.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have any questions about surgery or anything else then hit me up on &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;twitter&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Deeply Remove a Key from an Object</title>
    <link href="/deep-remove-key-from-object/"/>
    <updated>2022-06-14T00:00:00Z</updated>
    <id>/deep-remove-key-from-object/</id>
    <content type="html">&lt;p&gt;Recently I had the problem of removing a specific key from an object. Normally I would use &lt;code&gt;omit&lt;/code&gt; from the Lodash or Ramda library. But, there was a catch - I also needed to remove the key from nested structures within the object.&lt;/p&gt;
&lt;p&gt;Here is a code snippet of how I solved it in NodeJS with Lodash.&lt;/p&gt;
&lt;p&gt;I&#39;m sure you could write this in vanilla JS, but &lt;code&gt;transform&lt;/code&gt; is a handy function to use.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import { transform } from &#39;lodash&#39;;

// Check whether a value is an object or function (but not an array or null)
const isObject = (value) =&amp;gt; {
  const type = typeof value;
  return !!value &amp;amp;&amp;amp; (type === &#39;object&#39; || type === &#39;function&#39;) &amp;amp;&amp;amp; !Array.isArray(value);
};

// Deeply remove keys from an object
// @param - obj: Object - the object to remove the keys from
// @param - keysToOmit: Array|String - a key, or array of keys, to remove
const deepOmit = (obj, keysToOmit) =&amp;gt; {
  const keysToOmitIndex = Array.isArray(keysToOmit) ? keysToOmit : [keysToOmit];

  function omitFromObject(o) { // the inner function which will be called recursively
    return transform(o, (result, value, key) =&amp;gt; { // transform to a new object
      if (keysToOmitIndex.indexOf(key) !== -1) { // if the key is in the index, skip it
        return;
      }

      // if the value is an object, run it through the inner function recursively
      result[key] = isObject(value) ? omitFromObject(value) : value;
    });
  }

  return omitFromObject(obj); // return the inner function result
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Usage&lt;/h3&gt;
&lt;p&gt;You can use the function like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const obj = {
  _id: 1,
  name: &amp;quot;Josh Ghent&amp;quot;,
  title: &amp;quot;Software Engineer&amp;quot;,
  metadata: { _id: 2, company: &amp;quot;Turbo Technologies&amp;quot; },
};

const result = deepOmit(obj, [&amp;quot;_id&amp;quot;, &amp;quot;name&amp;quot;]);
// =&amp;gt;
// {
//   title: &#39;Software Engineer&#39;,
//   metadata: { company: &#39;Turbo Technologies&#39; }
// }
&lt;/code&gt;&lt;/pre&gt;
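For comparison, here is a dependency-free sketch of the same idea. It is only a sketch, assuming the input is built from plain objects, arrays and primitives:

```javascript
// Dependency-free sketch of deepOmit: recursively rebuilds the structure,
// dropping any key found in keysToOmit along the way.
function deepOmit(obj, keysToOmit) {
  const keys = Array.isArray(keysToOmit) ? keysToOmit : [keysToOmit];
  if (obj === null || typeof obj !== 'object') {
    return obj; // primitives pass through untouched
  }
  if (Array.isArray(obj)) {
    return obj.map((item) => deepOmit(item, keys)); // recurse into array items
  }
  return Object.fromEntries(
    Object.entries(obj)
      .filter(([key]) => !keys.includes(key)) // drop omitted keys
      .map(([key, value]) => [key, deepOmit(value, keys)]) // recurse into values
  );
}
```

Note that unlike the Lodash version above (whose isObject excludes arrays), this one also walks into arrays, which may or may not be what you want.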
</content>
  </entry>
  
  <entry>
    <title>Creating legacy code is ok</title>
    <link href="/creating-legacy/"/>
    <updated>2022-06-08T00:00:00Z</updated>
    <id>/creating-legacy/</id>
    <content type="html">&lt;p&gt;&amp;quot;Legacy code&amp;quot;. Words that will strike fear into the hearts of most developers. We hate to work on legacy code, it&#39;s complicated, untested and often larger than the observable universe.&lt;/p&gt;
&lt;p&gt;Worse still, we shudder at the thought that any of our code would be considered &amp;quot;legacy&amp;quot;. And so, as developers, we try hard to create &amp;quot;clean&amp;quot; code, choose the &amp;quot;right&amp;quot; framework, and implement solid principles.&lt;/p&gt;
&lt;p&gt;But I&#39;m going to argue that you shouldn&#39;t worry about creating legacy code.&lt;/p&gt;
&lt;p&gt;Now, this isn&#39;t a post to bang on about how you should forget programming principles and write spaghetti code. Rather, it&#39;s to say that you shouldn&#39;t worry about creating legacy systems[^1].&lt;/p&gt;
&lt;p&gt;Why? Because most[^2] systems become legacy on a long enough timeline.&lt;/p&gt;
&lt;p&gt;Think about it. How many systems have you built, or contributed to, professionally that are still running more than 5 years later? I&#39;d wager the figure is below 3[^3].&lt;/p&gt;
&lt;p&gt;And this is ok.&lt;/p&gt;
&lt;h4&gt;But, why does this happen?&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Requirements change.&lt;/strong&gt; Probably the biggest reason for system change is because requirements do. If a product is made a certain way and it&#39;s cheaper to rebuild it with new requirements, then it will be. Requirements are one of the few things that I have never seen done flexibly. So, there is a tendency to commit the &amp;quot;not invented here&amp;quot; bias.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;People change.&lt;/strong&gt; As teams change, so does the knowledge that is inside the team. In my experience, knowledge transfers between developers can take place twice before the link to the original ideas is broken. If a system is complex enough, and the team has long gone, then it might be easier to rebuild the application. Often teams justify this with the &amp;quot;better the devil you know&amp;quot; argument.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Technologies change.&lt;/strong&gt; New frameworks, libraries and patterns come up all the time. What was once considered cutting edge soon becomes cumbersome. The length of time varies depending on the language. But, in Javascript-land, large changes happen yearly. As teams evolve, so will the preferences and ideas that they bring to it. So, it&#39;s natural to assume that the coding style will change as individuals change and the ecosystem matures.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Where does this leave us?&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Don&#39;t sweat choosing the &amp;quot;right&amp;quot; thing.&lt;/strong&gt; I remember sitting through lots of meetings where we would debate the merits of various frameworks. Although this felt like important work at the time, it was largely pointless. Developers often forget that technology will improve. We tend to focus on the inputs rather than the outputs. Because that&#39;s where we live - the code, the framework, the deployment system. Rather than the customer who lives with the outputs - the website, app or game. Ultimately, technology dies and gets replaced over time. And this is a good thing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Write tests, but not too many.&lt;/strong&gt; There are two polar situations I have seen with tests. Either they have tonnes that break when you indent a line, or there are absolutely none. Aim for enough to cover your main happy paths, the complicated parts that no one likes to change and customer critical functions. But don&#39;t sweat creating a large test suite, it won&#39;t reduce the likelihood that the system will be replaced.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Write documentation, but not too much.&lt;/strong&gt; Documentation is difficult to keep up to date. So, aim to write documentation that answers logical questions and covers the most common use cases in a sensible order. This will assist a new team when they come along. This is a piece of work that may, principally, stand the test of time. Because it could be used by a new team to develop a compatible system in a new framework or language.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Overall, remember that technology moves fast. Be accepting of change. And don&#39;t try to make stuff that will outlive you. Solve the problem the customer has, and keep the system tidy whilst doing it. Write tests that save you time and documentation that saves your customers time. Legacy code is not something to be feared. It&#39;s to be embraced.&lt;/p&gt;
&lt;p&gt;[^1]: By &amp;quot;system&amp;quot; I mean a module or discrete unit of code. For example, an email system, an authentication system, a customer API etc.
[^2]: I say most because there are obviously exceptions to the rule. The Curl code base for example is 24 years old at the time of writing.
[^3]: This is based on my own personal experience. Having had some contact with developers at businesses I previously worked at.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Questions for Developers to ask at interviews</title>
    <link href="/interview-questions/"/>
    <updated>2022-05-24T00:00:00Z</updated>
    <id>/interview-questions/</id>
    <content type="html">&lt;p&gt;Interviews are for both the interviewer and interviewee. Largely though, they are designed to ask questions to the interviewee.&lt;/p&gt;
&lt;p&gt;Recently, I&#39;ve had a few interviews and was searching for questions to ask the company I was applying to. I realised that there are lots of lists of questions for interviewers but not the other way around.&lt;/p&gt;
&lt;p&gt;The goal of your questions should be to clarify:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;What&#39;s it like to work there?&lt;/li&gt;
&lt;li&gt;Can I perform the role and grow?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;With that in mind, my general approach is to prepare 2-4 questions to ask my interviewers. Usually a mix of a couple of general questions (that can be asked of any software business), and some specific ones (bespoke to that company).&lt;/p&gt;
&lt;p&gt;Here are my general questions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;What is your software development life cycle like?&lt;/li&gt;
&lt;li&gt;How big are teams?&lt;/li&gt;
&lt;li&gt;What are the opportunities to grow in this role?&lt;/li&gt;
&lt;li&gt;What metrics are being used to measure success in this role?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Besides general questions, here are three pieces of advice to craft bespoke questions. Doing so will mean you stand out and show you&#39;ve done your homework about the business.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Look at their online presence.&lt;/strong&gt; Reading the company&#39;s Twitter, LinkedIn and blog can allow you to glean what their culture is like. Notice the &amp;quot;voice&amp;quot; they use. What do they share? If there are photos of the office does that look like the place you want to work? A developer may have a blog where they share stories of software they&#39;ve built or bugs overcome. Understand the process they used, and if the tech they used is of interest to you. I usually spend 20 minutes on this at most. You don&#39;t need to read everything in depth, just skim the details.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Research their latest projects.&lt;/strong&gt; Using the research you gathered in step 1, you should be able to find their latest project. Build questions around this. If they used some new technologies, ask them why they chose their technologies. Perhaps it&#39;s in an unusual industry, ask about the unique challenges of that industry. Maybe it&#39;s a regular CRUD app or Shopify store, you can ask about how they approach adding specific business logic into those applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Learn about who started the company.&lt;/strong&gt; It&#39;s essential when interviewing to establish who you&#39;re working for. You can ask questions about the history of the company and its future growth plans. This will paint a complete picture of how the business runs. You can also ask about leadership. The CEO won&#39;t have a huge impact on your day-to-day, but they will govern the overarching strategy of the company. So understand where their focus is - sales or technology.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Your questions should be things you genuinely want the answer to. Don&#39;t ask for asking&#39;s sake. Remember, you are trying to establish whether the company is a good fit for you.&lt;/p&gt;
&lt;p&gt;Interviewing can be a daunting task because you feel like you&#39;re under a microscope. But it&#39;s a two-way street. Judge them too! Use interesting questions to gather information and stand out as a candidate.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Bookshelf</title>
    <link href="/bookshelf/"/>
    <updated>2022-05-22T00:00:00Z</updated>
    <id>/bookshelf/</id>
    <content type="html">&lt;p&gt;I believe reading and travel are the best ways to expose yourself to new ideas. Travelling can be expensive. So, largely I &amp;quot;travel&amp;quot; (through time and space) through the books I read.&lt;/p&gt;
&lt;p&gt;Here is a semi-complete list of the books I own. I have lent many to friends and family and donated others. I prefer to have at least 80% of my books unread. I only keep a book I&#39;ve read if I am going to re-read it or use it for reference later.&lt;/p&gt;
&lt;p&gt;I&#39;ve highlighted good books in &lt;span style=&quot;color:blue&quot;&gt;blue&lt;/span&gt;, and great books in &lt;span style=&quot;color:orange&quot;&gt;orange&lt;/span&gt;. These recommendations are contextual though. At one time, a certain book might be life changing, but at another time be meaningless.&lt;/p&gt;
&lt;p&gt;This page will be updated as I get new books. Send me recommendations!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Four hour work week&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:orange&quot;&gt;Dealers of Lightning&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Courage to be disliked&lt;/li&gt;
&lt;li&gt;Skunk works&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:orange&quot;&gt;Soul of the new machine&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Jobs&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:blue&quot;&gt;Stalingrad&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Creativity Inc&lt;/li&gt;
&lt;li&gt;Infinite Jest&lt;/li&gt;
&lt;li&gt;Crying in H Mart&lt;/li&gt;
&lt;li&gt;The Sun also rises&lt;/li&gt;
&lt;li&gt;The old man and the sea&lt;/li&gt;
&lt;li&gt;The Game&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;li&gt;Klara and the sun&lt;/li&gt;
&lt;li&gt;Guernica&lt;/li&gt;
&lt;li&gt;Thursday Murder Club&lt;/li&gt;
&lt;li&gt;Thursday Murder Club 2&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:blue&quot;&gt;Prey&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Girl A&lt;/li&gt;
&lt;li&gt;Groupthink - A study in self-delusion&lt;/li&gt;
&lt;li&gt;Surface Detail&lt;/li&gt;
&lt;li&gt;The Great Indoors&lt;/li&gt;
&lt;li&gt;Walkable City&lt;/li&gt;
&lt;li&gt;Do androids dream of electric sheep&lt;/li&gt;
&lt;li&gt;Meditations&lt;/li&gt;
&lt;li&gt;The Island&lt;/li&gt;
&lt;li&gt;The picture of Dorian Gray&lt;/li&gt;
&lt;li&gt;Talking to strangers&lt;/li&gt;
&lt;li&gt;Walden&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.amazon.co.uk/Ralph-Leighton-Feynman-Adventures-Character/dp/B00I63O9OQ/ref=sr_1_3?crid=1BK1BCXKBU7EC&amp;amp;keywords=surely+youre+joking+mr+feynman&amp;amp;qid=1653210148&amp;amp;sprefix=surely%2Caps%2C101&amp;amp;sr=8-3&quot;&gt;Surely you&#39;re joking Mr Feynman&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.amazon.co.uk/Cats-Cradle-Penguin-Modern-Classics/dp/0141189347/ref=sr_1_1?crid=10IYJJ7IE6E9G&amp;amp;keywords=cats+cradle+kurt+vonnegut&amp;amp;qid=1653210180&amp;amp;s=books&amp;amp;sprefix=cats+cradle+ku%2Cstripbooks%2C117&amp;amp;sr=1-1&quot;&gt;Cats Cradle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Zen and the art of motorcycle maintenance&lt;/li&gt;
&lt;li&gt;All quiet on the western front&lt;/li&gt;
&lt;li&gt;The handmaid&#39;s tale&lt;/li&gt;
&lt;li&gt;The rum diary&lt;/li&gt;
&lt;li&gt;Man&#39;s search for meaning&lt;/li&gt;
&lt;li&gt;Stuff matters&lt;/li&gt;
&lt;li&gt;Checklist manifesto&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:orange&quot;&gt;Bad blood&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;The man who mistook his wife for a hat&lt;/li&gt;
&lt;li&gt;Perfume&lt;/li&gt;
&lt;li&gt;Bridge over the river kwai&lt;/li&gt;
&lt;li&gt;Principles&lt;/li&gt;
&lt;li&gt;Spy the lie&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:orange&quot;&gt;Bad science&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:blue&quot;&gt;Why we sleep&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Chekov Plays&lt;/li&gt;
&lt;li&gt;Shoe dog&lt;/li&gt;
&lt;li&gt;Business for punks&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:blue&quot;&gt;Flash boys&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Zero to one&lt;/li&gt;
&lt;li&gt;Trust me, I&#39;m lying&lt;/li&gt;
&lt;li&gt;Masters of doom&lt;/li&gt;
&lt;li&gt;Animal farm&lt;/li&gt;
&lt;li&gt;Undercover economist&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:blue&quot;&gt;Blitzed&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>There is always more</title>
    <link href="/more/"/>
    <updated>2022-04-26T00:00:00Z</updated>
    <id>/more/</id>
    <content type="html">&lt;p&gt;Life can often seem like we don&#39;t have enough hours in the day. The media glorifies many who work 80+ hours a week. And there is an attitude that if you&#39;re not launching a startup and earning 5k MRR, then you&#39;re not doing it right.&lt;/p&gt;
&lt;p&gt;I wrestle with the thought that I&#39;m not doing &amp;quot;enough&amp;quot;. I should be building another business, growing my client base, writing, or creating videos.&lt;/p&gt;
&lt;p&gt;Like many, I have read countless articles and watched numerous videos on &amp;quot;productivity&amp;quot;. With the goal of getting more tasks done.&lt;/p&gt;
&lt;p&gt;But, I&#39;m trying to talk myself around to the idea, that there will &lt;strong&gt;always be more to do.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;No matter how much you prioritize, refine or cut, there will always be another feature to add, bug to squash or email to reply to. &lt;strong&gt;And that&#39;s ok.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That isn&#39;t to say prioritization and refinement are worthless. But it&#39;s a fallacy to think this will leave you feeling like your work can ever be &amp;quot;complete&amp;quot;.&lt;/p&gt;
&lt;p&gt;Instead, try to live slowly. Accept that there will be stuff that doesn&#39;t get done.&lt;/p&gt;
&lt;p&gt;But in the moment, be happy about what you&#39;re doing, and live so that at the end of each day you&#39;re pleased with what you&#39;ve done (not ticked off a list).&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Advancing from a Mid-Level to Senior Developer</title>
    <link href="/mid-to-senior-dev/"/>
    <updated>2022-04-12T00:00:00Z</updated>
    <id>/mid-to-senior-dev/</id>
    <content type="html">&lt;p&gt;&amp;quot;Senior&amp;quot; developer is a coveted title amongst software engineers. Many you work with will be promoted seemingly overnight, despite you thinking you&#39;re a better engineer. While you stay firmly as a mid-level developer.&lt;/p&gt;
&lt;p&gt;This is the situation I found myself in. I was curious about what would make me a senior developer. I figured that the title was when I would have finally &amp;quot;made it&amp;quot;. I could tackle interesting problems, make decisions and work on more areas of the product.&lt;/p&gt;
&lt;p&gt;But I soon realised I had an incorrect view of what a senior developer was.&lt;/p&gt;
&lt;p&gt;I thought a senior developer was someone who&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Had 5+ years of experience.&lt;/li&gt;
&lt;li&gt;Wrote great code (which I understood to mean never receiving a PR comment).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But, the reality is much different.&lt;/p&gt;
&lt;p&gt;I&#39;m going to discuss what it means to be a senior developer and what you should do to become one. &lt;em&gt;Hint: It&#39;s nothing to do with your pull requests!&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;What it takes to become a senior developer&lt;/h2&gt;
&lt;h3&gt;1. Don&#39;t chase the title&lt;/h3&gt;
&lt;p&gt;The best things come in life when you don&#39;t search for them.&lt;/p&gt;
&lt;p&gt;In the same way, one of the best ways to become a senior developer is not to strive after the title. Because the title is meaningless without a solid foundation. I could call myself an architect. But do I know how to design a house? Not at all. In the same way, you have to have the knowledge and attitude of a senior developer to be called one.&lt;/p&gt;
&lt;p&gt;But titles are not the be-all and end-all. Reflecting on it, I now consider titles meaningless. I&#39;ve met plenty of &amp;quot;senior&amp;quot; developers who have that title by being at a job the longest.&lt;/p&gt;
&lt;p&gt;Be driven by the knowledge you will gain at the end of the process and not some words on your company&#39;s &amp;quot;about&amp;quot; page. You don&#39;t need a title to be a good engineer. Stay humble and be a junior.&lt;/p&gt;
&lt;h3&gt;2. Develop T shaped understanding&lt;/h3&gt;
&lt;p&gt;&amp;quot;T&amp;quot; is a great shape. It&#39;s got the bottom stick and the broad top. We can liken this to our knowledge.&lt;/p&gt;
&lt;p&gt;Let&#39;s first look at the bottom of the &amp;quot;T&amp;quot; - depth of understanding.&lt;/p&gt;
&lt;p&gt;By now, you will likely have at least 1 programming language you use more than others. Develop further understanding of this language. How? Learn how the compiler for the language works, its flaws and what situations it&#39;s good in.&lt;/p&gt;
&lt;p&gt;Developing a deep understanding (the bottom of the T) of a particular topic will enable you to debug the difficult problems and tell others about it. You can speak from a position of knowledge, rather than repeating blog posts you read verbatim.&lt;/p&gt;
&lt;p&gt;There is a joke that touches on this topic:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A priest is giving a nun a ride home.&lt;/p&gt;
&lt;p&gt;As they&#39;re in the car, each time the priest goes to switch gears, he rests his hand on the nun&#39;s knee.&lt;/p&gt;
&lt;p&gt;The nun looks up at the priest and says &amp;quot;Father, remember Luke 14:10.&amp;quot;&lt;/p&gt;
&lt;p&gt;The priest moves his hand away, embarrassed. The next time they stop at a light, he places his hand a little higher on her leg.&lt;/p&gt;
&lt;p&gt;Once again, the nun says &amp;quot;Remember Luke 14:10, father.&amp;quot;&lt;/p&gt;
&lt;p&gt;The priest apologizes, &amp;quot;The flesh is weak&amp;quot; he says.&lt;/p&gt;
&lt;p&gt;The priest drops the nun off, and when he gets home, he reaches for his bible and flips to Luke 14:10, which says:&lt;/p&gt;
&lt;p&gt;&amp;quot;Friend, come up higher. Then shalt thou have glory.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The lesson? Know your subject.&lt;/p&gt;
&lt;p&gt;The other part of the &amp;quot;T&amp;quot; is the top - breadth of knowledge.&lt;/p&gt;
&lt;p&gt;It&#39;s all well and good being an expert in one topic. But, without a wide breadth of knowledge, you will struggle to apply that knowledge. You need to have a surface understanding of a bunch of topics.&lt;/p&gt;
&lt;p&gt;For example, if you&#39;re a genius at NodeJS, then you can learn React, HTML and CSS. Continue to skill yourself on new topics, even if they are only slightly related to your field of expertise. You might choose to learn Algebra, Game theory or Design psychology. Learn what interests you and supports you.&lt;/p&gt;
&lt;h3&gt;3. Apply principles and previous experience&lt;/h3&gt;
&lt;p&gt;A key feature of a senior developer is that they&#39;ve been around the block. They&#39;ve seen the destructive effects of forgetting tests and have seen the successes of teams that foster psychological safety.&lt;/p&gt;
&lt;p&gt;Using these experiences, their principles, and their &amp;quot;T&amp;quot; based knowledge, they can apply these to future scenarios. When a decision is being made, reflect on your past and use these to make comments and suggestions.&lt;/p&gt;
&lt;p&gt;Additionally, when doing your day-to-day programming, apply programming principles consistently, such as KISS, documenting code, and handling edge cases and performance issues proactively. To illustrate this, consider a new carpenter and an experienced one. Likely, they can both construct a chair. But the experienced carpenter will be able to apply his experience to discern that the chair needs extra lumbar support and so requires a particular technique of joining the wood. In the end, the experienced carpenter&#39;s chair will stand the test of time.&lt;/p&gt;
&lt;p&gt;In the same way, consistently applying your principles will enable your code to be robust and stand the test of time.&lt;/p&gt;
&lt;p&gt;Of course, there are many other ways to develop from a mid-level to a senior developer but these three stand out in particular. And, in my case, were the deciding factor. Enjoy the process of continuous learning and growth. Although cliché, it&#39;s about the journey, not the destination.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Quarterly planning for life</title>
    <link href="/quarterly-plan/"/>
    <updated>2022-03-29T00:00:00Z</updated>
    <id>/quarterly-plan/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;Most people overestimate what they can do in one year - Bill Gates&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;By now, as we enter the fourth month of the year, many of us will have abandoned our New Year&#39;s resolutions or lost sight of our goals. I did this for many, many years.&lt;/p&gt;
&lt;p&gt;I would start January, hopeful that this year was going to be different. I was confident that I could get a 6 pack, run a marathon and wake up at 5am. Quelle surprise, I only ended up with a 6 pack... of beer.&lt;/p&gt;
&lt;p&gt;What happened to all that time?&lt;/p&gt;
&lt;p&gt;Like most, I&#39;d bitten off more than I could chew. And I found my motivation waned after a month or two. I would constantly reason that I had X many months left so I had plenty of time to get that 6 pack or run that marathon.&lt;/p&gt;
&lt;p&gt;What helped me was breaking the year down into &lt;strong&gt;quarters&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Commonly, businesses break down their financial year into 3-month blocks named &amp;quot;Q1&amp;quot;, &amp;quot;Q2&amp;quot; etcetera. How their key metrics change between these quarters affects their share price.&lt;/p&gt;
&lt;p&gt;But you can use this same system in your life.&lt;/p&gt;
&lt;h3&gt;Why use quarters?&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Benefits can be seen quickly.&lt;/li&gt;
&lt;li&gt;Helps to prune the scope of your goals.&lt;/li&gt;
&lt;li&gt;Able to continuously re-evaluate your objectives as circumstances change.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;How to do a quarterly plan&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Using a notebook, or your project planning tool of choice, write down what your future self would be proud of accomplishing.&lt;/li&gt;
&lt;li&gt;Next, reduce that down to a maximum of four goals. Ideally, these should centre around different areas of your life (financial, health, relationships, work, spiritual).&lt;/li&gt;
&lt;li&gt;Now, for the key part, divide the outcome of that goal into 4 segments. For example, if your goal is to read 40 books. Then each segment would be to read 10 books.&lt;/li&gt;
&lt;li&gt;Now spread those segments for each goal across the four quarters. The result will be that every 3 months, you are completing a quarter for each of your goals.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If after looking at the quarters it seems like that&#39;s too much to tackle, &lt;strong&gt;reduce the scope&lt;/strong&gt;. It&#39;s as simple as that. A goal that you overachieve is much better than the one you underachieve. The latter will lead to disappointment, whilst the former will lead to joy.&lt;/p&gt;
&lt;h3&gt;What if I can&#39;t separate my goal down?&lt;/h3&gt;
&lt;p&gt;Sometimes this can be a challenge. It appears that the goal is just to &amp;quot;do the thing&amp;quot;. But, in nearly all cases there is some kind of planning or preparation that goes into things. For example, if your goal is to do one month of no alcohol. A segment of that goal might be to stock up on alcoholic replacements. Or, it could be to make an effort to turn down social settings where you may drink alcohol for that month.&lt;/p&gt;
&lt;p&gt;Even though we are partway through the year, we still have three quarters of the year left! Use it!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Should I split my monolith into microservices?</title>
    <link href="/monolith-to-microservice/"/>
    <updated>2022-03-22T00:00:00Z</updated>
    <id>/monolith-to-microservice/</id>
    <content type="html">&lt;p&gt;Likely you have clicked on this article to find the answer to the aforementioned question. The quick answer is - &lt;strong&gt;it depends&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;Rephrase your question&lt;/h3&gt;
&lt;p&gt;Before I explain why, try to rephrase your question. &lt;strong&gt;What problem are you trying to solve by splitting up your monolith?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Answering this will help you to clarify if the approach you take is going to solve those problems.&lt;/p&gt;
&lt;p&gt;If you simply want to split it up because you feel microservices are &amp;quot;better&amp;quot;, then you should think again. It will create more problems than it solves.&lt;/p&gt;
&lt;p&gt;But, if you have a burgeoning team who are spending hours resolving merge conflicts, then microservices are something to consider.&lt;/p&gt;
&lt;h3&gt;Write down your problems&lt;/h3&gt;
&lt;p&gt;Using the list of &amp;quot;problems&amp;quot; your team has, write down next to each of them how microservices will solve or alleviate that issue.&lt;/p&gt;
&lt;p&gt;But do the same with other solutions. Consider refactoring parts of the system, removing them, or restructuring the folders.&lt;/p&gt;
&lt;p&gt;This can sometimes be a deceptive process, however. Often, people consider microservices because of one or both of these factors:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The monolith is buggy and/or slow&lt;/li&gt;
&lt;li&gt;Developing new features is a nightmare&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Unfortunately, on a long enough timeline, you are going to face those problems with microservices too. Except this time, instead of having one buggy and slow place to fix, you have fifty. It&#39;s worth doing your research and thinking about whether microservices will &lt;em&gt;actually&lt;/em&gt; solve your problems or just mask them.&lt;/p&gt;
&lt;h3&gt;Consider the downsides&lt;/h3&gt;
&lt;p&gt;Microservice advocates often gloss over the downsides of microservice architectures. But they are important to consider.&lt;/p&gt;
&lt;p&gt;As a guiding principle, if you don&#39;t have a dedicated DevOps team or your engineering headcount is below 25, then I&#39;d strongly recommend keeping things as simple as humanly possible. Remember that value comes in the form of features and fixes, not in restructuring the application.&lt;/p&gt;
&lt;p&gt;Everything has a &amp;quot;cost&amp;quot;, both upfront and ongoing (in technology as well as life). Make sure you know what these two figures are. For example, it might require a 4-week project upfront by 3 engineers and 1 day of engineering time per week ongoing. Weigh this up against the alternatives to tell you if it&#39;s truly valuable.&lt;/p&gt;
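&lt;p&gt;As a sketch with illustrative numbers (the 4-week, 3-engineer figures above, plus an assumed 48-working-week year), the comparison might look like this:&lt;/p&gt;

```javascript
// Rough cost model for a proposed migration - numbers are illustrative.
// Upfront: 3 engineers working full-time for 4 weeks (5-day weeks).
const upfrontEngineerDays = 3 * 4 * 5; // 60 engineer-days

// Ongoing: 1 engineer-day per week, over an assumed 48-working-week year.
const ongoingEngineerDaysPerYear = 1 * 48; // 48 engineer-days

// First-year total to weigh against the alternatives.
const firstYearEngineerDays = upfrontEngineerDays + ongoingEngineerDaysPerYear;
console.log(firstYearEngineerDays); // 108 engineer-days
```

&lt;p&gt;Run the same sums for each alternative (refactoring, restructuring, doing nothing) and compare the totals.&lt;/p&gt;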
&lt;p&gt;Some other downsides include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The learning curve for the new project structure&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bug fixes are harder to track&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Monitoring is more challenging&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;End-to-end testing is difficult&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A large surface area to secure, and more infrastructure and release pipelines to configure&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The learning curve for infrastructure deployment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;New problems such as latency and load balancing&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In light of these downsides, maybe microservices are less appealing. What&#39;s the alternative?&lt;/p&gt;
&lt;h3&gt;The alternatives&lt;/h3&gt;
&lt;p&gt;Depending on the problems you&#39;re trying to solve, I&#39;d suggest some alternatives. You can combine several of these approaches.&lt;/p&gt;
&lt;p&gt;It&#39;s worth noting that just because moving to microservices isn&#39;t the best choice right now, that doesn&#39;t mean it won&#39;t be in the future. These suggestions will make that migration easier if/when you make it.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Write tests.&lt;/strong&gt; Focus primarily on integration tests, and aim to cover your entire application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove the cruft.&lt;/strong&gt; Over time, components of an app are retired but the code lives on. Remove these parts to pare your codebase down to only what is used. If they can&#39;t be removed, then make sure they&#39;re well documented.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configure monitoring.&lt;/strong&gt; Monitor the components of the system extensively. For example, the email notification system, user creation and back-office reports should all notify you if they go wrong. This should be separate from the code itself, using a monitoring solution like Sentry or Datadog.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Upgrade your tools.&lt;/strong&gt; Make development fast by investing time in your tools. If your app takes 15 seconds to recompile, make it so it compiles in less than a second. The payoff for this time is ten-fold and will make your team rejoice.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Sometimes microservices are the right approach. Sometimes they aren&#39;t. Mindfully consider the situation and make the migration easy.&lt;/p&gt;
&lt;p&gt;I&#39;ve talked only about the decision of &amp;quot;monolith to microservices&amp;quot;. But, lots of this advice applies to any technical decision your team is making. It depends.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to make changes as a Junior Developer</title>
    <link href="/junior-developer/"/>
    <updated>2022-03-15T00:00:00Z</updated>
    <id>/junior-developer/</id>
    <content type="html">&lt;p&gt;When you start your software development career, you come into a new job with excitement filling your eyes. Think of the cutting edge technologies you will use, collaborating with other amazing engineers and building a fantastic product. Sadly, it doesn&#39;t always go like this. It rarely does. And that&#39;s ok! Ultimately products have budgets and deadlines.&lt;/p&gt;
&lt;p&gt;Perhaps many of the expectations you had of the job have gone unfulfilled. Testing is a secondary concern. And principles have been abandoned in favour of &amp;quot;throwing something together because it needs to be done by Friday&amp;quot;.&lt;/p&gt;
&lt;p&gt;At this stage, it is easy to become disenfranchised. After all, you do not have the experience or the position of a manager to make widespread changes. But you feel like you&#39;re stomping over the software engineer&#39;s credo every time you ship a pull request. What can you do in this position?&lt;/p&gt;
&lt;p&gt;First of all, especially if you&#39;re new and/or it&#39;s your first job, don&#39;t make testing, security updates, plain text passwords or anything else a hill you&#39;re willing to die on. I&#39;ve been there, done that, and got the t-shirt (many times). And I can tell you it never works.&lt;/p&gt;
&lt;p&gt;The decisions have already been made, often by your manager&#39;s manager - who is equally saddened by the product&#39;s form. It takes time to build credibility to make a change in a company.&lt;/p&gt;
&lt;p&gt;And sometimes, you might be wrong. A decision may be the wrong one by the &amp;quot;gold standard&amp;quot;, but the correct one for that company.&lt;/p&gt;
&lt;p&gt;Take time to digest a company, ask questions and be receptive to the answers. If you find yourself saying &amp;quot;so why don&#39;t you just...&amp;quot;, &lt;strong&gt;stop&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;So, what can you do? Make a personal change. After all, you&#39;re at the start of your career, there is still plenty to learn.&lt;/p&gt;
&lt;h3&gt;1. Write tests for yourself&lt;/h3&gt;
&lt;p&gt;Even if you&#39;re not &amp;quot;supposed&amp;quot; to write tests due to time constraints, do it anyway, but don&#39;t take ages with it. Block out 10% of the total time it took you to create a feature to write a test or two. It doesn&#39;t need to be a suite with 100% coverage, just to cover your little corner of work.&lt;/p&gt;
&lt;p&gt;Use Puppeteer to create your own end-to-end (E2E) tests in a folder separate from the codebase. And use the de facto testing framework of choice (Pytest for Python, Jest for Javascript etc.) for writing integration tests. Again, put these in a folder that can be added to &lt;code&gt;.gitignore&lt;/code&gt; so it&#39;s not in source control for the rest of the team.&lt;/p&gt;
&lt;h3&gt;2. Review your code&lt;/h3&gt;
&lt;p&gt;Reviewing your code is a vital step to correct your work - like proofreading an essay.&lt;/p&gt;
&lt;p&gt;Before you create a pull request, look through the green part (the code you&#39;ve added) and ask the following questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Can I explain what this code does to another (imaginary) person with little knowledge of this system?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can I explain why I&#39;ve implemented this solution?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Are the answers to the above questions clear to others reading the codebase? (i.e., are they in code comments)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Following this process encourages you to keep pull requests small and helps you to justify your technical decisions.&lt;/p&gt;
&lt;h3&gt;3. Do a personal post-mortem&lt;/h3&gt;
&lt;p&gt;After an incident occurs, many companies do a &amp;quot;post-mortem&amp;quot; - a review of the problem, its root cause and ways to prevent it in the future.&lt;/p&gt;
&lt;p&gt;When you make a mistake like taking down the app, shipping some broken code, or deleting the production database (check, check and check), write your post-mortem. There are many templates online that can help you formalize this document. Review these once a quarter and reinforce the learnings from each incident.&lt;/p&gt;
&lt;p&gt;It can be frustrating if you want to make decisions but can&#39;t. Everyone goes through this process. Don&#39;t dwell on it. Take the time to learn, understand and ask good questions. Be an agent for change in your work and then use your expertise to help others. Doing so will build a solid foundation of knowledge and connections throughout your career.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Be friendly and don&#39;t ignore Recruiters</title>
    <link href="/recruiters/"/>
    <updated>2022-03-10T00:00:00Z</updated>
    <id>/recruiters/</id>
    <content type="html">&lt;p&gt;Increasingly, I&#39;ve noticed an increased level of resistance to recruiters among engineers. There has always been a love/hate relationship between the two parties. From the beginning of my career, I always heard &amp;quot;recruiters are bad&amp;quot;. And, to be honest, I accepted that as truth for the longest time. But, reflecting on it now, I don&#39;t understand it at all.&lt;/p&gt;
&lt;p&gt;I&#39;m writing this article to encourage fellow software engineers not to accept the rhetoric of others - see for yourself. In a nutshell, you and I are not &amp;quot;better&amp;quot; because we are engineers. Recruiting is another role that is necessary for the sector to function.&lt;/p&gt;
&lt;p&gt;Here&#39;s my advice for software engineers dealing with recruiters.&lt;/p&gt;
&lt;h3&gt;1. Use canned responses&lt;/h3&gt;
&lt;p&gt;One of the main aversions to recruiters is the countless emails about irrelevant jobs. This is understandably annoying. Especially when you are specific about not looking for a job, or about the types of jobs you want. But consider for a second why this happens. Recruiters are paid for successful placements. Because there is a shortage of talented engineers, it figures that most engineers are already employed. If recruiters took the fact that someone was already employed as a reason not to approach them, it would be impossible to hire. For myself, I&#39;ve always been employed when approached about another job.&lt;/p&gt;
&lt;p&gt;So, we can&#39;t avoid the InMails and emails. But what&#39;s the solution? &lt;strong&gt;Canned responses.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Canned responses save you time by being clear about your expectations for a job and what kind of work you might be open to. You can find many templates for this online, but they are mostly full of snark.&lt;/p&gt;
&lt;p&gt;Here is one I use:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Hi X,

Thanks for your message! This looks like a fantastic opportunity.

Unfortunately, I have recently accepted a position at a new company and am not currently seeking new opportunities.

I’ll be sure to bear you in mind for future roles.

Kind regards,
Josh
https://joshghent.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Short, friendly, clear. Simple as that.&lt;/p&gt;
&lt;p&gt;Or, if you are open to work. Here&#39;s another template I use for that:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Hey X,

Thanks for your message! This looks like a fantastic opportunity.

Although this job doesn&#39;t fit me, I am currently looking for a Remote Senior Developer position working with NodeJS, React, and AWS. I have worked with these technologies for over 5 years across an array of projects. Most recently, I have been architecting and building a greenfield project for a large eCommerce company.

I have attached my CV in case you have anything that fits the bill.

Kind regards,
Josh
https://joshghent.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, it&#39;s short and sweet but gets across the message.&lt;/p&gt;
&lt;h3&gt;2. Recruiters are great at building communities&lt;/h3&gt;
&lt;p&gt;Because they are involved with a large network of developers, recruiters are incredible at creating communities. Many events that I have run have been well supported, in large part due to recruiters leveraging their networks. Meeting people at these events can then further your own career.&lt;/p&gt;
&lt;p&gt;Additionally, they have more access to the commercial side of businesses. If you have a technical event you&#39;re running, recruitment agencies are usually among the first to sponsor it. They get the exposure and you get the finances to run a kick-ass event. Win, win. Leverage the resources that they have access to.&lt;/p&gt;
&lt;h3&gt;3. Don&#39;t let a bad egg put you off the whole batch&lt;/h3&gt;
&lt;p&gt;Now, I know what you&#39;re thinking. You&#39;ve read this article so far and said to yourself &amp;quot;that&#39;s all well and good. But this recruiter was &lt;em&gt;truly&lt;/em&gt; awful&amp;quot;.&lt;/p&gt;
&lt;p&gt;I agree.&lt;/p&gt;
&lt;p&gt;There are bad recruiters out there. Terrible ones. But there are lots of bad engineers too. There are bad healthcare workers, builders, architects, designers, painters. There is a &amp;quot;bad&amp;quot; version of anything and everything. And that&#39;s ok.&lt;/p&gt;
&lt;p&gt;At worst, a &amp;quot;bad&amp;quot; recruiter might spam you with some emails or calls. You can easily block these. A bad engineer might leave you a nightmarish tangle of yarn that you have to unpick over the next year. The effects of the two are vastly different in size.&lt;/p&gt;
&lt;p&gt;Just as we don&#39;t assume all engineers are bad when we come across one, we should extend the same thinking to recruiters. Don&#39;t let one bad egg spoil the whole batch. There are good eggs out there. This brings me nicely onto my next point...&lt;/p&gt;
&lt;h3&gt;4. Work with individuals, not companies&lt;/h3&gt;
&lt;p&gt;A recruitment agency, like any other company, is a faceless, emotionless entity. Inside each company, there will be some great people and some not-so-great people. Find the individuals in those businesses who you get on with and who place you into jobs you enjoy. Then work with them throughout your career. Personally, I&#39;ve held 5 jobs found for me by 2 recruiters with whom I&#39;ve built up trust over time.&lt;/p&gt;
&lt;p&gt;You can build trust with them by running events with them and referring people in your own network to them.&lt;/p&gt;
&lt;h3&gt;5. Accept them as part of the process&lt;/h3&gt;
&lt;p&gt;Many believe we should live without recruitment agencies entirely. This is an understandable viewpoint. But it betrays a fundamental misunderstanding about business - stuff costs money. And if that &amp;quot;stuff&amp;quot; is hiring people, then it costs a lot. Why? Reviewing CVs/resumés, interviewing and technical skill tests all take time. Time from someone who is paid by the company. A recruitment agency&#39;s main value lies in sourcing high-quality candidates from a large network and handling all the marketing associated with advertising a job.&lt;/p&gt;
&lt;p&gt;Just as many developers &amp;quot;outsource&amp;quot; their code by using third party libraries to save time, businesses do the same with recruiters. It saves time, and there is no point in reinventing the wheel. It allows them to unlock access to resources that would have taken a considerable amount of time to develop otherwise.&lt;/p&gt;
&lt;p&gt;For better or worse, recruiters are part of the process of getting a job and are here to stay.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Hopefully, this post has helped soften your attitude toward recruiters. I&#39;m not trying to win favours with recruiters by writing this. It&#39;s a response to several snarky posts about the recruitment industry and tech. At the end of the day, these are people trying to do their jobs - like you and me. Sure, there are some bad apples, but where aren&#39;t there? Default to truth and follow the advice outlined above; it will work out in your best interests to do so.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Mistakes I made as a self-taught developer</title>
    <link href="/self-taught-mistakes/"/>
    <updated>2022-02-15T00:00:00Z</updated>
    <id>/self-taught-mistakes/</id>
    <content type="html">&lt;p&gt;Learning to become a software developer is not a trivial task. There is a plethora of guides, tutorials and courses to take. And of course, there is the question of self-teaching or going to university.&lt;/p&gt;
&lt;p&gt;But no matter which path you choose, you will make mistakes.&lt;/p&gt;
&lt;p&gt;I made a tonne of mistakes. And I want to share them with people who are learning software development. Here are 5 mistakes I made whilst learning to code.&lt;/p&gt;
&lt;h3&gt;1. Analysis paralysis of resources&lt;/h3&gt;
&lt;p&gt;I wasted a lot of time analysing the &amp;quot;best&amp;quot; place to learn to code. Early on, I often found myself reading articles about how &amp;quot;good&amp;quot; Codecademy was vs. a collection of &amp;quot;Head-first&amp;quot; books. This time would have been better spent doing the work. It&#39;s easy to enter this state of panic, thinking you&#39;re committed to a certain learning path. But the truth is, you aren&#39;t. Mix and match, try different things and go with what works for you.&lt;/p&gt;
&lt;p&gt;I also did this same analysis when choosing a language to learn.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Is Python the best? What about Javascript? My favourite sites use Rails, maybe I should learn that?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That was the list of questions that plagued me. It was exhausting, and I shouldn&#39;t have spent so much time overthinking it. My advice for new programmers is to learn Javascript and web technologies. Yes, you might want to build games, but Javascript is so widely used (&lt;a href=&quot;https://www.infoq.com/news/2020/06/javascript-spacex-dragon/&quot;&gt;now even in space&lt;/a&gt;!) that it will allow you to go down any path you want later on.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: Just learn Javascript. Mix and match resources that work for you to build a consistent practice of learning.&lt;/p&gt;
&lt;h3&gt;2. Not building early and often&lt;/h3&gt;
&lt;p&gt;What do I mean by &amp;quot;building&amp;quot;? I mean creating libraries, APIs, demo sites and more. To begin with, I thought the idea of &amp;quot;building&amp;quot; something was too complex to handle. But translating syntax into something practical would have made me familiar with solving problems. When I did start building projects, I found it challenging to shift from syntax to finished products. To liken it to spoken languages, it&#39;s the difference between knowing the word for &amp;quot;apple&amp;quot; and knowing how to say &amp;quot;Would you like an apple?&amp;quot;.&lt;/p&gt;
&lt;p&gt;This fixation on learning the syntax also had the downside that I didn&#39;t have anything to show for it at the end. To a potential employer, I was just someone who said they had learnt to code and could do some simple problems. By having a portfolio of projects that I could show off, I would have proven to myself and others that I had the skills to work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: Apply syntax to real-world style projects after learning it.&lt;/p&gt;
&lt;h3&gt;3. Being tied to tutorials rather than problems&lt;/h3&gt;
&lt;p&gt;In line with the above, I spent too much time on specific tutorials. I should have learnt to break a project down into small problems and seek solutions. This practice of breaking larger projects into small problems would have been valuable when I started working. It would have also helped me train my &amp;quot;google-fu&amp;quot; to search out error codes, and problems I needed to solve.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: Learn to break projects down into small solvable problems.&lt;/p&gt;
&lt;h3&gt;4. Sweating &amp;quot;interview&amp;quot; preparation&lt;/h3&gt;
&lt;p&gt;Before I began to interview, I spent hours doing code &amp;quot;katas&amp;quot; - small challenges solved without libraries in your language of choice. I did this because they are supposedly common interview questions. But I have never been asked to do specific coding challenges like this, on a whiteboard or otherwise (having interviewed for 7 jobs in total). What I have had is take-home projects and general technical questions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: If you are interviewing for the FAANG companies you are likely to get these questions. If you are not, then skip this. Focus on projects instead.&lt;/p&gt;
&lt;h3&gt;5. No SQL Exposure&lt;/h3&gt;
&lt;p&gt;In self-taught land, starting a new project is quite simple. Install NodeJS, install React, launch Chrome - job done. But installing and using SQL? That was far too scary for me to tackle. There were ports to configure, connections to NodeJS to set up, and table structures to design. In part, MongoDB gained popularity because it&#39;s so simple to set up. By not having exposure to SQL, I struggled when I got a job.&lt;/p&gt;
&lt;p&gt;It also meant that my coding style tended to lean on parsing data within the language. Over time, I&#39;ve trained myself (for software performance reasons) to use the database to crunch and mould the data for me.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: My advice on this point is to set up a free account with Render.com or Heroku and add a MySQL or PostgreSQL instance.&lt;/p&gt;
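&lt;p&gt;To illustrate the difference - with made-up order data and a hypothetical &lt;code&gt;orders&lt;/code&gt; table - here is the in-language version next to the query that could replace it:&lt;/p&gt;

```javascript
// Crunching data in the language vs. in the database (illustrative data).
const orders = [
  { status: "shipped", total: 20 },
  { status: "pending", total: 15 },
  { status: "shipped", total: 5 },
];

// In-app version: loop over every row and accumulate per status.
function totalsByStatus(rows) {
  const totals = {};
  for (const row of rows) {
    totals[row.status] = (totals[row.status] || 0) + row.total;
  }
  return totals;
}

console.log(totalsByStatus(orders)); // { shipped: 25, pending: 15 }

// The database can do the same work in one aggregate query:
//   SELECT status, SUM(total) FROM orders GROUP BY status;
```

&lt;p&gt;On small arrays the two are equivalent; on millions of rows, letting the database group and sum avoids pulling all the data into your process.&lt;/p&gt;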
&lt;p&gt;If you&#39;re learning to become a software engineer, don&#39;t give up! It is difficult no matter which path you choose. I hope that by listing my failures you can avoid them. You will make other mistakes on your journey, and I implore you to write about them and learn from them.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Building Collaboration with Remote Teams</title>
    <link href="/remote-collaboration/"/>
    <updated>2022-02-08T00:00:00Z</updated>
    <id>/remote-collaboration/</id>
    <content type="html">&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Provide the tools, empower people to use them and embrace remote work for what it is - remote.&lt;/p&gt;
&lt;p&gt;Steve Jobs designed Pixar&#39;s headquarters to maximise the length of time it would take people to get to the bathroom. He did this to increase the collaboration that stems from running into others in the corridor.&lt;/p&gt;
&lt;p&gt;Since the pandemic, remote working has skyrocketed. &lt;a href=&quot;https://resources.owllabs.com/state-of-remote-work&quot;&gt;Owl Labs recorded in 2021&lt;/a&gt; that almost 70% of full-time workers in the US were working from home. And based on the stats from job boards such as &lt;a href=&quot;https://remoteok.com/open&quot;&gt;RemoteOk.com&lt;/a&gt;, you can see a massive uptick in remote job opportunities that is not dying down.&lt;/p&gt;
&lt;p&gt;But the question is, how can you &amp;quot;bump into&amp;quot; people in the corridor in a remote-first working world? Many say that weekly meetings to keep people aligned are the answer. But I disagree - meetings are viewed with disdain. The same Owl Labs study found that 80% of remote workers wanted at least one day per week without any meetings at all.&lt;/p&gt;
&lt;p&gt;Instead, I&#39;d suggest a different approach.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Make communication public by default.&lt;/strong&gt; The main rebuff of remote work is that communication is too &amp;quot;formalized&amp;quot;, recorded and preserved. People use that as an excuse to communicate only in direct messages. But putting messages in public channels allows everyone to be informed and to take part. Additionally, making individuals comfortable with public communication will open the doorway to asynchronous work. It will discourage people from calling meetings simply to gather consensus or input.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Don&#39;t recreate the watercooler.&lt;/strong&gt; Commonly, I&#39;ve seen teams create a #watercooler chat in Slack, an informal space to post memes and have a chit-chat. Although in ultra-large businesses I&#39;ve seen these spaces improve individual relationships, I have not seen them successfully increase collaboration. These spaces are attempting to recreate an in-person space. Whilst well-intentioned, these spaces do not translate to the online realm. Embrace remote work for what it is, remote. It will naturally take time for an in-person organisation to change into the &amp;quot;remote&amp;quot; frame of mind.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Avoid hybrid working.&lt;/strong&gt; Where possible, avoid having some of the team in an office and the rest remote. I&#39;ve been on both sides of the office divider for this one. And it doesn&#39;t work well on either. On the office side, you can often be blocked by a remote worker not working the same hours and not having the tools to perform a task asynchronously. On the remote side, you have constant FOMO about all the undocumented decisions that have been made. Pushing all this communication asynchronous and online alleviates these issues and keeps everyone up to date.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use the tools.&lt;/strong&gt; We are all using Zoom and Slack. But that&#39;s not the end. There is a myriad of tools to help you unite a remote team. Use Google Drive for documents and presentations (share them without a meeting!), Tuple for pair programming and GitHub for tickets and code. Directing people toward these channels (asynchronous) and away from meetings (synchronous) will accelerate collaboration for your team by accomplishing meaningful objectives.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Build a documentation culture.&lt;/strong&gt; In an office, decisions can be made and passed around in small conversations. In remote teams, that vanishes. So, make sure that every decision is documented in a consistent and precise way. Encourage people to write about problems they&#39;ve solved and how they solved them. Help people to write up accurate guides on getting up and running with a system. And document various approaches to a particular ticket. Make sure you lead by example here. Don&#39;t rely on Slack to store a decision. If you ever hear the phrase &amp;quot;I forget exactly what was said&amp;quot;, it is a chance to write it down. Further, address this at its source by making sure to hire individuals with good writing skills.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Not all these techniques will work with your team. I encourage you to experiment and see what works.&lt;/p&gt;
&lt;p&gt;Overall, embrace the work situation you are in and capitalize on its advantages. If your business can be in-person, capitalize on the fact that it will likely be more collaborative. If you have a remote business, embrace a diverse global workforce, lower costs and asynchronous work for increased productivity.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Facing the Legacy Code Monster</title>
    <link href="/scary-legacy/"/>
    <updated>2022-01-25T00:00:00Z</updated>
    <id>/scary-legacy/</id>
    <content type="html">&lt;p&gt;I start new jobs like a spelunking caver, exploring all the systems, code and pipelines. I time myself to see how long I can ask questions about a system before I get a dreaded response:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;quot;Oh, that&#39;s a critical system that handles our core business. A developer wrote it years ago, who has since left. Now we reboot it when it breaks. You don&#39;t want to mess with that.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I&#39;m sure we&#39;ve all heard words like that.&lt;/p&gt;
&lt;p&gt;It seems almost a law at this point that every mature software company has at least one system that &amp;quot;works&amp;quot; - but has a few horrendous bugs that mean it needs rebooting, and workarounds for its inflexibility that have built up over time.&lt;/p&gt;
&lt;p&gt;It&#39;s software that has no tests. It runs on a very specifically configured server (likely in a closet in the office). Its documentation has been passed down as folklore. And the person who originally wrote it now resides on the moon, where they are unreachable.&lt;/p&gt;
&lt;p&gt;Oftentimes, this software is left to rot because it&#39;s so critical that it&#39;s safer to have the workarounds and reboots than to fix it for good.&lt;/p&gt;
&lt;p&gt;But you shouldn&#39;t be deterred by this rhetoric from current employees. Why not?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No one deliberately writes broken or bad software&lt;/strong&gt;. I say no one - there are, of course, exceptions. But by and large, software is written with good intentions in mind. And as it is running in production, it means it solved the original problem. Adopting this ethos will help dissolve the image of an evil unknown developer spiting you.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Current employees might be misinformed&lt;/strong&gt;. Depending on how long ago the software was written, swaths of developers may have come and gone, simply repeating what they were told on their first day. What started as a healthy respect for a critical system soon becomes the subject of fear and anxiety.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Resisting the urge to buy into the anti-legacy cult will mean you can actually deal with legacy code. It will be dirty work, but gaining a deep understanding of these systems will make you a hero among your team and an invaluable employee.&lt;/p&gt;
&lt;p&gt;How can you gain insight into these systems?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Understand why it became legacy in the first place&lt;/strong&gt;. Finding the &amp;quot;why&amp;quot; of these systems is vital because it can help clarify your priorities. In some cases, it might be that the system was impossible to run in a sandbox. If so, that should be your first port of call. Focus on first principles and get to the core of the problem you&#39;re trying to solve.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Get it running in a sandboxed environment&lt;/strong&gt;. Provide safety for yourself and others by setting the system up to run in a sandbox. This might need buy-in from someone on the DevOps team. It is a critical step in working with a legacy system, as without it you do not have the psychological safety to make changes. This process might take a long time and should be considered just as important as writing tests.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Follow the code, and document it&lt;/strong&gt;. Target your reading of the codebase by following the various paths it takes. Review the entry points and build a map of the various &amp;quot;journeys&amp;quot;. Then, pretending you are one of those calls, follow the path that the code takes. It&#39;s sometimes helpful to draw these paths on paper for further reference. After reading it through once, read it again, but document the functions as you go. In some places, you might not know &amp;quot;why&amp;quot; something has been written a certain way; mark these areas with a quick &lt;code&gt;TODO&lt;/code&gt; comment for later review.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Write integration tests&lt;/strong&gt;. Not unit, functional or anything else. Focus on integration for the moment. This maximises test coverage whilst minimising the level of understanding you need. Early in this investigation, you likely don&#39;t know all the ins, outs and gotchas of the system. So, to avoid getting overwhelmed, it&#39;s best to write a suite of integration tests that mirror the &amp;quot;journeys&amp;quot; you discovered in the previous step.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Share the knowledge&lt;/strong&gt;. Don&#39;t let yourself become another bus factor. Be quick to share the knowledge, even if there are gaps. This also encourages other developers to get involved, helping write tests and documentation.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This process is by no means perfect. But, be sure not to skip any of the steps, as they are all as critical as each other. It will likely not mean you are a complete expert in this system. Nor will it mean you can rewrite it in a more modern tech stack with the confidence of a solid test suite behind you. But, it does mean you have shed some light on an otherwise dark scary legacy code monster.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Ship Software Faster</title>
    <link href="/ship-faster/"/>
    <updated>2022-01-18T00:00:00Z</updated>
    <id>/ship-faster/</id>
    <content type="html">&lt;p&gt;Remember when software came on a physical medium like discs, USB sticks or &lt;a href=&quot;https://www.smithsonianmag.com/smithsonian-institution/margaret-hamilton-led-nasa-software-team-landed-astronauts-moon-180971575/&quot;&gt;punch cards&lt;/a&gt;? Me either. Software release lifecycles used to be lengthy - years-long in most cases.&lt;/p&gt;
&lt;p&gt;As software flourished on the web, we grew accustomed to &amp;quot;moving fast and breaking things&amp;quot;. This approach has a lot of drawbacks. Not least because some customer bases are more sensitive to problems than others.&lt;/p&gt;
&lt;p&gt;Teams still wanted to &amp;quot;move fast&amp;quot;, but not &amp;quot;break things&amp;quot;. The speed of the web, with the safety of physical releases.&lt;/p&gt;
&lt;p&gt;The solution many teams landed on was to batch up lots of work and release it every so often. This takes the bad of both release systems: the lack of safety of a &amp;quot;move fast&amp;quot; release and the slow speed of physical releases.&lt;/p&gt;
&lt;p&gt;It&#39;s likely that you work in a place like this, or have worked there in the past. Organisations where DevOps is a secondary concern to the application itself. Places where &amp;quot;continuous delivery&amp;quot; is considered voodoo reserved for the FAANGs.&lt;/p&gt;
&lt;p&gt;I have found myself at these organisations all my working life. I quickly noticed the following patterns:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Developers have little to no confidence that a new release will not break something.&lt;/li&gt;
&lt;li&gt;That low confidence means there is anxiety when it comes time to release.&lt;/li&gt;
&lt;li&gt;The long gap between releases means upstream work causes conflicts.&lt;/li&gt;
&lt;li&gt;Manual testing cycles have to be done to establish any confidence.&lt;/li&gt;
&lt;li&gt;Bugs upon release cause finger pointing, with &lt;a href=&quot;https://www.pageittothelimit.com/psy-safety-with-tom-geraghty/&quot;&gt;psychological safety&lt;/a&gt; diminishing as a result.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;What can you do if you notice these patterns?&lt;/p&gt;
&lt;h3&gt;The solution is to ship faster. How?&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Set expectations of delivery time&lt;/strong&gt;. Start by opening a discussion, with stakeholders, about the expected time to ship new versions. Establishing these rough boundaries governs how you set up the processes used to ship software. Generally speaking, stakeholders will want features as soon as possible. But, if you are currently releasing once a month, aim to start releasing bi-weekly. Get a bit further along before promising to &lt;a href=&quot;https://instagram-engineering.com/continuous-deployment-at-instagram-1e18548f01d1&quot;&gt;deploy 30-50 times a day like Instagram&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Make systems observable&lt;/strong&gt;. Low confidence in releases often originates from systems with little observability. This means that if something does go wrong it&#39;s a nightmare to figure out why. Before starting to increase deployment frequency, you need a system you trust. Focus on the fundamentals - searchable logging, automatic monitoring of key website pages and API endpoints (using &lt;a href=&quot;https://uptimerobot.com&quot;&gt;UptimeRobot&lt;/a&gt;) and automatic tests (integration and unit at least).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Set small concise deliverables&lt;/strong&gt;. Doing manual releases requires an immense amount of cognitive overhead. Having a small number of tickets and clear deliverables in each release reduces this cognitive load. There is less to remember to test and check. And other areas of the system are less likely to be affected. If releases are simple to do, it&#39;s more likely they&#39;ll get done.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Invest in your DevOps&lt;/strong&gt;. This is the crucial technical step. There are many other articles about having top quality development tools to aid deployment, so I won&#39;t add to them. But principally, look at the areas that take the most time or have the least confidence, and automate them. For example, if a bash script written two years ago to bundle the app is unreliable, take the time to address it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use Feature flags&lt;/strong&gt;. Often releases get delayed because stakeholders don&#39;t want to reveal new features to customers. Using feature flags allows you to ship unfinished features without breaking things for everyone. A further selling point for stakeholders is that feedback can be gathered from select customers before a full rollout is done.&lt;/p&gt;
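&lt;p&gt;A minimal sketch of the kind of flag check this implies, assuming a hypothetical percentage rollout per flag (the flag names, percentages and hashing scheme are all illustrative, not from any particular flag service):&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Illustrative flag table: flag name mapped to the percentage of
// users (0-100) who should see the feature.
const flags = { "new-checkout": 25, "beta-reports": 100, "legacy-off": 0 };

// Hash a key into a stable bucket from 0 to 99, so the same user
// always gets the same answer for a given flag.
function bucket(key: string) {
  const digest = createHash("sha256").update(key).digest();
  return digest.readUInt32BE(0) % 100;
}

export function isEnabled(flag: keyof typeof flags, userId: string) {
  // Hash flag and user together so different flags don't always
  // select the same slice of the user base.
  return flags[flag] > bucket(flag + ":" + userId);
}
```

&lt;p&gt;A real implementation would load the flag table from a database or flag service, so stakeholders can flip a flag without a deploy.&lt;/p&gt;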
&lt;p&gt;&lt;strong&gt;Make mistakes a non-issue&lt;/strong&gt;. If the risk of a new release causing a bug is on par with starting a nuclear war, people will shy away from it. By making it easy for developers to roll back to the last known stable release (or better yet, automating it with a blue-green system), you will break down the fear around releasing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use checklists&lt;/strong&gt;. Anything that cannot be automated (or that you don&#39;t have time to automate) should be made as programmatic as possible. Using checklists takes the guesswork out of manual tasks. It means reliable releases are simple to do.&lt;/p&gt;
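&lt;p&gt;One way to make a checklist programmatic is to encode each manual step as a named check and refuse to proceed unless every check passes. This is a hypothetical sketch; the step names and checks are placeholders for whatever your release actually requires:&lt;/p&gt;

```typescript
// Each manual release step becomes a named check that either passes
// or shows up in the "failed" list for the releaser to act on.
type Step = { name: string; check: () => boolean };

export function runChecklist(steps: Step[]) {
  const failed = steps.filter((step) => !step.check()).map((step) => step.name);
  return { ok: failed.length === 0, failed };
}

// Illustrative checklist; real checks would inspect the release artifacts.
export const releaseChecklist: Step[] = [
  { name: "changelog updated", check: () => true },
  { name: "migrations applied", check: () => true },
];
```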
&lt;hr /&gt;
&lt;p&gt;Shipping software faster is a mix of both cultural and technical aspects of an organisation. Both are equally difficult. Work towards the &amp;quot;release nirvana&amp;quot; that awaits once these systems are set up. Your team will be rewarded with lower blood pressure and your business will be rewarded by getting and retaining more customers.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Cache Auth0 M2M Tokens</title>
    <link href="/auth0-cache/"/>
    <updated>2022-01-11T00:00:00Z</updated>
    <id>/auth0-cache/</id>
    <content type="html">&lt;p&gt;&lt;a href=&quot;https://auth0.com&quot;&gt;Auth0&lt;/a&gt; is an easy to integrate service that handles all your applications authentication needs. But, if you&#39;ve worked with it before, you&#39;ll know it&#39;s downfalls.&lt;/p&gt;
&lt;p&gt;One of them is Machine-to-Machine (M2M) tokens, which are used to authenticate between your services.&lt;/p&gt;
&lt;p&gt;The limits are restrictive for serverless infrastructures. On the free plan you only get 1000 per month. And even on a paid plan, it would be expensive to get the number of tokens you might need in a given month.&lt;/p&gt;
&lt;p&gt;The solution is to &lt;strong&gt;cache Machine-to-Machine tokens&lt;/strong&gt; so we don&#39;t need to request new ones until they expire.&lt;/p&gt;
&lt;p&gt;In traditional infrastructure, this would be trivial. Save the token globally somewhere and done.&lt;/p&gt;
&lt;p&gt;Serverless architectures are trickier because there is no persistence between invocations.&lt;/p&gt;
&lt;p&gt;Here&#39;s how to handle caching Auth0 tokens for AWS Lambda microservices. The same principles apply for other cloud providers.&lt;/p&gt;
&lt;h3&gt;Create the DynamoDB Table&lt;/h3&gt;
&lt;p&gt;(or equivalent serverless DB table in other cloud providers)&lt;/p&gt;
&lt;p&gt;Set your own name for the table, and set the partition key to &lt;code&gt;token&lt;/code&gt; as a &lt;em&gt;String&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/dynamodb-creation.png&quot; alt=&quot;Screenshot 2022-01-11 at 15.44.50&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Add the name of the table as an environment variable &lt;code&gt;TOKEN_CACHE_DB&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Retrieve and store tokens&lt;/h3&gt;
&lt;p&gt;First let&#39;s add a method to store new M2M tokens&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// ===
// cacheToken.ts
// ===
import AWS from &amp;quot;aws-sdk&amp;quot;;

const storeNewToken = async (token: string) =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const response = await docClient
    .put({ TableName: `${process.env.TOKEN_CACHE_DB}`, Item: { token } })
    .promise();
  return response;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code is simple enough and fairly self explanatory.&lt;/p&gt;
&lt;p&gt;So, let&#39;s move on and add a method that we can use in our Lambda handler to retrieve an M2M token.&lt;/p&gt;
&lt;p&gt;There are two paths for this method&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;There is an existing unexpired token in DynamoDB, so we use that.&lt;/li&gt;
&lt;li&gt;There is no token or only expired ones, so we generate a new one, store it in DynamoDB and use that.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We will design this system to only store one token at a time. This means we do not have to worry about old tokens and filtering them out on each initialization.&lt;/p&gt;
&lt;p&gt;So let&#39;s write our method!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// ===
// cacheToken.ts
// ===
import request from &amp;quot;request-promise&amp;quot;;

export const getAuthToken = async (): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  const token = await getExistingToken();
  if (token !== &amp;quot;&amp;quot; &amp;amp;&amp;amp; hasTokenExpired(token) === false) {
    return token;
  }

  const params = {
    method: &amp;quot;POST&amp;quot;,
    url: `https://${process.env.AUTH0_NAME}.auth0.com/oauth/token`,
    headers: { &amp;quot;content-type&amp;quot;: &amp;quot;application/json&amp;quot; },
    body: `{&amp;quot;client_id&amp;quot;:&amp;quot;${process.env.AUTH0_CLIENT_ID}&amp;quot;,&amp;quot;client_secret&amp;quot;:&amp;quot;${process.env.AUTH0_CLIENT_SECRET}&amp;quot;,&amp;quot;audience&amp;quot;:&amp;quot;${process.env.AUTH0_AUDIENCE}&amp;quot;,&amp;quot;grant_type&amp;quot;:&amp;quot;client_credentials&amp;quot;}`,
  };

  const result = JSON.parse(await request(params));
  if (!result[&amp;quot;access_token&amp;quot;]) {
    throw new Error(&amp;quot;No Access Token returned&amp;quot;);
  }

  await deletePreviousTokens(token);
  await storeNewToken(result[&amp;quot;access_token&amp;quot;]);

  return result[&amp;quot;access_token&amp;quot;];
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&#39;s break this down a little&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We first get the &lt;strong&gt;existing token in DynamoDB&lt;/strong&gt;. It returns the token or an empty string.&lt;/li&gt;
&lt;li&gt;If it returns a token, we check it&#39;s not expired and then return that token.&lt;/li&gt;
&lt;li&gt;If it is expired, or there is no token, we go ahead and &lt;strong&gt;generate one from Auth0&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;We then &lt;strong&gt;delete the old token in DynamoDB, and store the new one&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Because DynamoDB is non-locking, this flow could mean that multiple instances of your service save a token at the same time. But that cost is minor compared to how much you&#39;re able to save by caching in the first place.&lt;/p&gt;
&lt;p&gt;Let&#39;s now create the methods we referenced in the &lt;code&gt;getAuthToken&lt;/code&gt; function that handle token storage and validation&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;// ===
// cacheToken.ts
// ===
import jwt_decode from &amp;quot;jwt-decode&amp;quot;;

const deletePreviousTokens = async (token: string) =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const tokenRecords = await getAllTokens();

  // Clear down the table. Use a plain loop rather than forEach with an
  // async callback, so each delete is actually awaited before we return.
  if (tokenRecords.Items) {
    for (const row of tokenRecords.Items) {
      await docClient
        .delete({
          TableName: `${process.env.TOKEN_CACHE_DB}`,
          Key: { token: row.token },
        })
        .promise();
    }
  }
};

const hasTokenExpired = (token: string) =&amp;gt; {
  const decoded = jwt_decode(token) as { exp: number; iat: number };
  if (decoded) {
    return decoded.exp &amp;lt; new Date().getTime() / 1000;
  }

  return false;
};

const getAllTokens = async () =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const response = await docClient
    .scan({
      TableName: `${process.env.TOKEN_CACHE_DB}`,
    })
    .promise();

  return response;
};

const getExistingToken = async () =&amp;gt; {
  const response = await getAllTokens();

  if (response.Items &amp;amp;&amp;amp; response.Items.length &amp;gt; 0) {
    return response.Items[0][&amp;quot;token&amp;quot;];
  }

  return &amp;quot;&amp;quot;;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, let&#39;s break this down&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In &lt;code&gt;deletePreviousTokens&lt;/code&gt; we scan for the existing tokens and delete them one by one, keyed by token value. Deleting by key, rather than truncating the table, avoids a concurrency issue where we would wipe out a newer token that another instance has just written.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;hasTokenExpired&lt;/code&gt; we do a basic JWT expiry check. This could be improved by not using a token that only has 1ms left, but it has worked so far for me.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;getExistingToken&lt;/code&gt; we get all rows in the table and return the first token or an empty string if none is found.&lt;/li&gt;
&lt;/ul&gt;
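&lt;p&gt;As noted above, &lt;code&gt;hasTokenExpired&lt;/code&gt; will happily use a token with only a millisecond of validity left. A sketch of a variant with a safety buffer is below. The 60-second buffer is an arbitrary assumption, and the payload is decoded by hand purely to keep the sketch dependency-free, where the post uses &lt;code&gt;jwt-decode&lt;/code&gt;.&lt;/p&gt;

```typescript
// Variant of hasTokenExpired with a safety buffer, so a token about to
// expire mid-request is treated as already expired. The manual base64url
// decode stands in for jwt-decode to keep this example self-contained.
export function hasTokenExpired(token: string, bufferSeconds = 60) {
  const parts = token.split(".");
  if (parts.length !== 3) {
    return true; // not a JWT; treat it as unusable
  }
  const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
  if (typeof payload.exp !== "number") {
    return true; // no expiry claim; safer to fetch a fresh token
  }
  const now = Date.now() / 1000;
  // Expired if fewer than bufferSeconds of validity remain.
  return now > payload.exp - bufferSeconds;
}
```

&lt;p&gt;Swap the decode back to &lt;code&gt;jwt-decode&lt;/code&gt; as in the post; the buffer is the only substantive change.&lt;/p&gt;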
&lt;h3&gt;Usage in the handler&lt;/h3&gt;
&lt;p&gt;Now all that&#39;s left to do is to add it to your Lambda functions handler method.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;export const handler = async (event: any, context: any) =&amp;gt; {
  const token = await getAuthToken();

  // Do something with the token
  await sendResultsToService(token, event.Results);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Hopefully you found this interesting and saved some money on your Auth0 bill!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How You Work</title>
    <link href="/how-you-work/"/>
    <updated>2022-01-07T00:00:00Z</updated>
    <id>/how-you-work/</id>
    <content type="html">&lt;p&gt;Learning how you work best is a superpower. Imagine, creating and seeking environments where you succeed best. Likely you remember times where you got into a flow state and produced magic, but you can&#39;t pinpoint why.&lt;/p&gt;
&lt;p&gt;I was in this position and so took some time to figure out how I work best.&lt;/p&gt;
&lt;p&gt;Product user manuals have a section dedicated to &amp;quot;ideal working conditions&amp;quot; - to get the best out of the machine. This includes the maintenance, the temperature and location and how it should be operated. In the same way, you can develop a &amp;quot;personal user manual&amp;quot; documenting how you work best and where.&lt;/p&gt;
&lt;p&gt;In my user manual, it&#39;s broken down into five sections.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The kind of work I do best&lt;/li&gt;
&lt;li&gt;The environment for doing that work&lt;/li&gt;
&lt;li&gt;How I enjoy doing that work&lt;/li&gt;
&lt;li&gt;How I receive feedback&lt;/li&gt;
&lt;li&gt;How I get motivated&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For example, my user manual is included below.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The kind of work I do best&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating things that I know will benefit people.&lt;/li&gt;
&lt;li&gt;Automation and saving time.&lt;/li&gt;
&lt;li&gt;Turning rough specs into tangible products.&lt;/li&gt;
&lt;li&gt;Thinking of edge cases and being able to dream of scenarios where bugs will occur at scale.&lt;/li&gt;
&lt;li&gt;Building clear APIs that are secure and scalable.&lt;/li&gt;
&lt;li&gt;Writing technical documentation.&lt;/li&gt;
&lt;li&gt;Architecting and building systems that can handle millions of customers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;The environment for doing that work&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I&#39;m informed of the bigger picture and the impact of my work.&lt;/li&gt;
&lt;li&gt;I like a small to-do list.&lt;/li&gt;
&lt;li&gt;Requirements stay reasonably consistent. But I have the autonomy to figure out how to meet those requirements.&lt;/li&gt;
&lt;li&gt;The benefit of my work is somewhat measurable.&lt;/li&gt;
&lt;li&gt;There is a culture of gathering and reviewing data to make decisions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How I enjoy doing that work&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I prefer to work asynchronously and reserve synchronous work for solving specific problems.&lt;/li&gt;
&lt;li&gt;In line with the above, I like to keep meetings to an absolute minimum. Including many of the sprint reviews, standups and retros that are commonplace in most software businesses.&lt;/li&gt;
&lt;li&gt;I like to have the flexibility to work inconsistent hours. Often, I find I solve problems better away from the computer rather than bashing my head against the wall.&lt;/li&gt;
&lt;li&gt;I believe in transparency and equality, I prefer to work with organisations that foster that same ethos.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How I receive feedback&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I love to improve my work and constantly ask myself how to do so. But, I need to understand the reason why something is better or important.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Benefit&lt;/h2&gt;
&lt;p&gt;By preparing a user manual, you can quickly establish a rapport with your co-workers and effectively communicate your &amp;quot;ideal working conditions&amp;quot; to your manager. Having led teams, I ended up assembling a mental &amp;quot;user manual&amp;quot; for everyone in my team. A written document from each person would have proved vital.&lt;/p&gt;
&lt;p&gt;The advantages are that both yourself and the people around you can understand each other and make everyone happy by aiming to keep work within those ideal parameters. Of course, this is not always possible. Tough, stupid things sometimes need doing. But a keen-eyed manager will always be aiming to balance these tasks with work you love.&lt;/p&gt;
&lt;p&gt;I&#39;ve found my manual allows me to reaffirm my ideal work. Whenever I am thinking &amp;quot;I hate this work&amp;quot;, I review this manual and figure out &lt;em&gt;why&lt;/em&gt; I dislike it.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Maybe don&#39;t hire</title>
    <link href="/hiring/"/>
    <updated>2022-01-06T00:00:00Z</updated>
    <id>/hiring/</id>
    <content type="html">&lt;p&gt;Increasingly, I&#39;ve become sceptical of businesses looking to &amp;quot;rapidly scale&amp;quot; their technical teams. All businesses seem to get a certain amount in their backlog, or an urgent request from a large would-be customer, or just too much money and decided that the only solution is to hire at a tremendous scale.&lt;/p&gt;
&lt;p&gt;I&#39;m going to argue that perhaps you don&#39;t need to hire at all. And doing so would be counter productive.&lt;/p&gt;
&lt;h3&gt;Why startups need to hire&lt;/h3&gt;
&lt;p&gt;Hiring is driven by demand for product features. Stakeholders within the business get requests for certain things. &amp;quot;Can we add this quickly?&amp;quot;, &amp;quot;I need this urgently for X&amp;quot;, &amp;quot;Our competitor has Y so we need Y&amp;quot;. These are all common phrases that I&#39;m sure we&#39;ve all heard. These requests are understandable. These stakeholders are trying to do their job by making the company (and by proxy the product) more profitable. I don&#39;t blame them for these requests. But, they stem from an environment of their own making.&lt;/p&gt;
&lt;p&gt;These businesses start, like many, with few resources. They build an MVP, sell it to a few customers and start to make money.&lt;/p&gt;
&lt;p&gt;But they have a problem - they want to grow, and don&#39;t have the money to hire a full time team. So, they get funding. That funding puts them into 6th gear.&lt;/p&gt;
&lt;p&gt;Because they are now at the mercy of investors, they need to get more customers, which means more features, which means more developers.&lt;/p&gt;
&lt;p&gt;And more developers means they need more money. So, they raise more money and agree to add more features to get customers. And on and on.&lt;/p&gt;
&lt;p&gt;Unfortunately, this cycle never stops.&lt;/p&gt;
&lt;h3&gt;So what&#39;s so bad about this?&lt;/h3&gt;
&lt;p&gt;The cycle I&#39;ve described above is what some would call &amp;quot;growing a business&amp;quot;. Whilst that might be largely true, it&#39;s not the only solution. One problem with this approach is &lt;strong&gt;hiring&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Hiring is viewed as a task that can simply be checked off. Recruitment agencies can help you feel like this. But the fact is that hiring is extremely difficult and time consuming. From initial resume review, to interview, to code challenge, whatever your recruitment process, it takes time on your part.&lt;/p&gt;
&lt;p&gt;And after the contracts are signed, the process isn&#39;t over. You&#39;ve got to onboard, educate and give a lead time before those developers get productive (in my experience, even the most senior developers aren&#39;t truly productive until at least 2 months after starting).&lt;/p&gt;
&lt;h4&gt;The silent killer&lt;/h4&gt;
&lt;p&gt;The silent killer of many of these businesses is that they start creating work for the sake of it.&lt;/p&gt;
&lt;p&gt;When a carpenter creates a chair, they skilfully sculpt the wood, attach the parts together and sand, oil and paint it before declaring it finally finished. Software is similar, but we miss the finish. How many pieces of software have you worked on where it was declared &amp;quot;finished&amp;quot;?&lt;/p&gt;
&lt;p&gt;In hiring a huge army of developers, eventually the feature requests from customers peter out. What then fills the void is an endless game of optimizations, minor pivots of existing features and &amp;quot;reckons&amp;quot; about things people think are good ideas (without validating them). Evernote and Dropbox are classic examples of this at play. They created great pieces of software, but continue to annoy their customers with meaningless changes that get killed. Without the pressure of continuous growth, it would be more acceptable to put software into &amp;quot;maintenance mode&amp;quot;.&lt;/p&gt;
&lt;h3&gt;The Alternative(s)&lt;/h3&gt;
&lt;p&gt;Increasingly, &amp;quot;indie makers&amp;quot; are building out products in their spare time and waiting for them to become profitable before making the leap full time (or hiring staff). This means there is not the huge impetus to continue growing at all costs.&lt;/p&gt;
&lt;p&gt;Others, in the wake of the pandemic, are building out remote, part-time teams. &lt;a href=&quot;https://sahillavingia.com/work&quot;&gt;Gumroad&lt;/a&gt; has done this to great success after suffering from many of the failings described above.&lt;/p&gt;
&lt;p&gt;In another case, I had a non-technical founder approach me to build out an MVP for him. He was a reasonable chap, had a savings pot to pay me, a specific scope and a clear market. But, rather than investing all that money, I advised him to build out a prototype using Nocode tools like Bubble and Webflow. Even if you are not technical, the barriers to entry for building products are lower than ever.&lt;/p&gt;
&lt;p&gt;These approaches are not foolproof; they have their own issues. For example, for many products, it isn&#39;t possible to &amp;quot;build it in your spare time&amp;quot;. Still others may not have spare time to build a product in. But, don&#39;t be afraid to break the mould and do things the non-traditional way. Think of creative solutions to the problems you have. Resist the rush to build lots of new features, the urge to please customers at all costs, and the compromises to your product that follow. Your product and customers will thank you.&lt;/p&gt;
&lt;p&gt;In summary:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Relax your requirements about the pace of work&lt;/strong&gt;. It will lead to a better quality product as a result.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Don&#39;t get sucked into the investment strategy grind&lt;/strong&gt;. You&#39;ll eventually have an army of developers with nothing to do.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Seek alternative solutions&lt;/strong&gt;. The barriers to create are lower than ever, use Nocode tools or build the project on the side.&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Software Beauty</title>
    <link href="/software-beauty/"/>
    <updated>2022-01-05T00:00:00Z</updated>
    <id>/software-beauty/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;Design is a funny word. Some people think design means how it looks. But of course, if you dig deeper, it&#39;s really how it works.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Those famous words, spoken by Steve Jobs, were the cornerstone of Apple&#39;s great success in building beautiful products. But these words can be applied not just to hardware, but to software too. Code, at the end of the day, is simply words - albeit in a wonky form. Asimov could write beautiful works in English. Why can’t we do the same in code? The answer is, we can. But why should we as developers be concerned with the “beauty” of our code? In this post, I’m going to answer this question, and also drill down into specifics on how (posts like this can often be a bit too romantic and theoretical).&lt;/p&gt;
&lt;h2&gt;Why Beauty is Important&lt;/h2&gt;
&lt;p&gt;So, why is software beauty important? After all, we are engineers, not artists! By way of example, let’s say you buy a new car. It’s the fastest one on the market. When you get it, you’re all excited - until you open the driver’s door. Before your eyes you see the pedals on the dashboard, the steering wheel on the roof and a literal bucket for a seat. Safe to say, you wouldn’t be impressed.
What’s the connection to software? Although something may be usable, it may not be pleasant to use.&lt;/p&gt;
&lt;p&gt;If you work on a system with an API of any kind, whether internal-only or integrated with third-parties, beauty should be a concern. Bad design breeds bugs, extra cognitive load, increased time to develop new features and more.&lt;/p&gt;
&lt;p&gt;Gliding past the obvious examples of having a nice REST API like Stripe’s (which we will cover later), let’s say you have a large React frontend and an Express backend. If your code bases are not “beautiful”, onboarding new developers becomes an increasing hassle. You may try to fight this by having lots of documentation. But there comes a point in software, as with jokes, where if you have to explain it, it’s bad.
Further, when someone comes to add something to the code base, because it wasn’t designed in an extensible manner, it becomes brittle and wonky. If you’ve ever played Tetris you’ll know this feeling. You keep having to deal with these fast falling blocks and before you know it, you’ve lost the game.&lt;/p&gt;
&lt;h2&gt;Where has beauty gone?&lt;/h2&gt;
&lt;p&gt;If beauty in software is important, why is it seldom considered? In your organization, there is some care and thought given to the general architecture. But the specifics of how the software plugs together are handled ad-hoc - without any guide rails to support developers. This leads to individuals writing software for themselves, not for others. It&#39;s natural - we are all a little bit selfish.&lt;/p&gt;
&lt;p&gt;A further reason is that rarely are companies &amp;quot;dogfooding&amp;quot; their software. Sometimes, it&#39;s not possible because the developer is not the customer. That&#39;s fine. But failing to have a deep understanding of the software and how it will be used leads to false assumptions and clumsy design.&lt;/p&gt;
&lt;h2&gt;Principles&lt;/h2&gt;
&lt;p&gt;What&#39;s the solution? Design has no hard and fast rules. And generally, people have a good sense of something working &amp;quot;well&amp;quot; or not simply by using the thing. Software design is no different. Therefore, principles make sense in order to guide developers to design great software. I’ve attempted to distil the principles of beautiful software into three key attributes.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;It should be a joy to work with&lt;/strong&gt;. Using your system, or working on it, to accomplish a task should be joyous - no headaches or screaming at the computer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple&lt;/strong&gt;. “If you can’t explain it simply, you don’t understand it” - Einstein. Beautiful systems need to be simple by definition. That doesn’t mean they cannot be complex. Rather, the complexity should be presented simply. If quantum mechanics can be explained then so can your system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensible&lt;/strong&gt;. Doubtless, requirements change and you often need to modify or add functionality to meet a use case. In the case of beautiful software, this should be easy to implement, document and test.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;How to Use These Principles&lt;/h2&gt;
&lt;p&gt;On a practical level, I&#39;ve found it best to codify these principles into a short checklist.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Are all successful use cases of the API’s methods documented?&lt;/li&gt;
&lt;li&gt;Are error codes and edge cases documented for each method of the API? - i.e., if you pass X with a value of Y, you also need to provide Z.&lt;/li&gt;
&lt;li&gt;Does the API have side effects?&lt;/li&gt;
&lt;li&gt;Do all the methods do “what they say on the tin”? - in other words, does the API method in question do what is described by the method itself? For example, if we have a REST API method &lt;code&gt;GET /packages&lt;/code&gt; - does this return a list of packages for a customer? At this stage, if you’re experienced with the system, it’s always good to ask someone who has never seen it - even if they are non-technical. Just ask: “if I asked you to get packages, what would you expect me to answer with?”&lt;/li&gt;
&lt;li&gt;Is it possible to run the API with 2 commands or less? - if the answer is no, then we can look into creating a setup script.&lt;/li&gt;
&lt;li&gt;Are testing patterns already established to test the API, including mocking data or dependent systems?&lt;/li&gt;
&lt;li&gt;Can I quickly tell what version of the API I’m using?&lt;/li&gt;
&lt;li&gt;Can I quickly resolve any errors myself? - does the API return error messages that are actionable and concise.&lt;/li&gt;
&lt;li&gt;If it is not possible to rectify an issue myself, can I provide a means of recreating an issue to the API author? - requestId’s, trace logs and the like are all helpful here and need to be accessible to the consumer.&lt;/li&gt;
&lt;li&gt;Are there significant efforts to mitigate issues? - does it handle retries and other complexity that should not be a concern for the end user?&lt;/li&gt;
&lt;/ol&gt;
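&lt;p&gt;Items 8 and 9 of the checklist are easiest to see with an example. Below is a hypothetical sketch of the same failure from an imagined &lt;code&gt;GET /packages&lt;/code&gt; call, once as a vague error and once as an actionable one; every field name and value is illustrative, not from any real API.&lt;/p&gt;

```typescript
// Hard to act on: the consumer can neither fix this themselves nor
// report it usefully to the API author.
export const vagueError = { error: "Bad request" };

// Actionable (item 8) and traceable (item 9): it says what was wrong,
// hints at the fix, and carries a requestId the consumer can hand back
// to the API author to recreate the issue.
export const actionableError = {
  error: {
    code: "INVALID_CUSTOMER_ID",
    message: "customerId must be a UUID, received 'abc'",
    hint: "Use the id field returned by GET /customers",
    requestId: "req-123",
  },
};
```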
&lt;p&gt;There is a lot of overlap here between sound software development practices and beauty. As the famous design adage goes, &amp;quot;form follows function&amp;quot;. By creating good development practices, you end up creating beautiful software.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Keep an eye on beauty, no one else will. Even in your little corner, strive for beautiful design. If you’re stuck making something beautiful, ask coworkers, friends, customers or join the #softwarebeauty irc channel on freenode.&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Continuous Delivery to ECS with Terraform</title>
    <link href="/terraform-ecs-cicd/"/>
    <updated>2021-08-11T00:00:00Z</updated>
    <id>/terraform-ecs-cicd/</id>
    <content type="html">&lt;p&gt;Continuous delivery is something that we&#39;re all striving for. I was doing the same, but there was a snag:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My Terraform code and API code were in separate projects&lt;/li&gt;
&lt;li&gt;I wanted to make updates to the API code and have it build and update the ECS service&lt;/li&gt;
&lt;li&gt;I didn&#39;t want to manage the container definition separately as it had too many dependent resources (Datadog sidecar etc.)&lt;/li&gt;
&lt;li&gt;I had multiple environments and didn&#39;t want to use a separate git branch for the API code&lt;/li&gt;
&lt;li&gt;I had a branch for each environment of the infrastructure, each deployed independently&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Maybe you have this problem too. Because Terraform bakes a specific ECR image tag into the container definition, how do we update it automatically on each new build?&lt;/p&gt;
&lt;p&gt;There are several ways to solve this problem, many of which are discussed in &lt;a href=&quot;https://github.com/hashicorp/terraform-provider-aws/issues/632&quot;&gt;this thread&lt;/a&gt;. But, today, I&#39;m going to show you how I resolved this issue.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Setup your Docker build/deploy pipeline&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;First things first, we need the API Docker image inside ECR. This will vary according to what CI system you use - in our case we use Azure DevOps.&lt;/p&gt;
&lt;p&gt;When the image is built, we want to tag it uniquely. Ideally, you want something sequential. In our case, we chose Azure&#39;s built-in &amp;quot;BuildId&amp;quot; variable to tag the images.&lt;/p&gt;
&lt;p&gt;Below you can see the build steps we take in the CI pipeline. After the image is built, it creates a text file with the BuildId in it and ships that as an &amp;quot;Artifact&amp;quot;. This will become important later. But the main thing is you need to trigger a further pipeline for your environments based on that parameter changing.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;- task: Docker@2
  inputs:
    command: build
    DockerFile: &amp;quot;$(Build.SourcesDirectory)/Dockerfile&amp;quot;
    repository: $
    tags: |
      $(Build.BuildId)

- task: ECRPushImage@1
  inputs:
    imageSource: &amp;quot;imagename&amp;quot;
    sourceImageName: $
    sourceImageTag: &amp;quot;$(Build.BuildId)&amp;quot;
    repositoryName: $
    pushTag: &amp;quot;$(Build.BuildId)&amp;quot;

- task: Bash@3
  displayName: &amp;quot;Upload Build Artifact of the Docker image Id&amp;quot;
  inputs:
    targetType: &amp;quot;inline&amp;quot;
    script: |
      # Add the build Id to a new file that will then be published as an artifact
      echo $(Build.BuildId) &amp;gt; .buildId
      cat .buildId

- task: CopyFiles@2
  displayName: &amp;quot;Copy BuildId file&amp;quot;
  inputs:
    Contents: &amp;quot;.buildId&amp;quot;
    TargetFolder: &amp;quot;$(Build.ArtifactStagingDirectory)&amp;quot;

- task: PublishBuildArtifacts@1
  displayName: &amp;quot;Publish Artifact&amp;quot;
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure to run this pipeline now that you&#39;ve created it.&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Setup an SSM (Systems Manager) parameter&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;SSM is an AWS service I had previously never really used. Its parameter store feature will allow us to store a variable that we can update later - in this case, the docker image tag.&lt;/p&gt;
&lt;p&gt;Create a new parameter by going to AWS Systems Manager &amp;gt; Application Management &amp;gt; Parameter Store. Name the parameter something like &lt;code&gt;/my-api/${env}/docker-image-tag&lt;/code&gt; (where &lt;code&gt;env&lt;/code&gt; is the environment; you&#39;ll need to duplicate this parameter for each environment you have). It should be a &amp;quot;String&amp;quot; parameter whose value is the unique tag generated by your CI build pipeline - in my case, the BuildId.&lt;/p&gt;
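&lt;p&gt;If you prefer the command line, the same parameter can be created with the AWS CLI - the name and value below are examples for a &lt;code&gt;development&lt;/code&gt; environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create the parameter (add --overwrite to update it later)
aws ssm put-parameter \
  --name &amp;quot;/my-api/development/docker-image-tag&amp;quot; \
  --type String \
  --value &amp;quot;12345&amp;quot; # the BuildId from your CI pipeline
&lt;/code&gt;&lt;/pre&gt;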
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Create your deployment pipeline&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now, we need to define a way to update the image just in a certain environment (e.g., just &lt;code&gt;development&lt;/code&gt;). How can we do that?
Because of our setup, the duplication effort is fairly minimal. We already have our image build (which is consistent across all environments). We just need to update that SSM parameter to use the unique tag (BuildId) that the build pipeline generated.&lt;/p&gt;
&lt;p&gt;In Azure, you can trigger a pipeline based on an artifact as we generated in step 1. I then configured 3 tasks based on this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Get the BuildId from the file and add it to the runners environment&lt;/li&gt;
&lt;li&gt;Update the SSM parameter for that environment to the new BuildID&lt;/li&gt;
&lt;li&gt;Trigger the Infrastructure/Terraform pipeline for that environment - this is where the new SSM parameter value will get picked up and used in a container definition.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/azure-update-ssm-pipeline.png&quot; alt=&quot;Sample of the update SSM job&quot; /&gt;
&lt;/div&gt;
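&lt;p&gt;As a rough sketch, the first two tasks could look like this in YAML - the parameter name and variable names here are illustrative, and the second task assumes the AWS Toolkit for Azure DevOps is installed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;- task: Bash@3
  displayName: &amp;quot;Read BuildId from the artifact&amp;quot;
  inputs:
    targetType: &amp;quot;inline&amp;quot;
    script: |
      # Expose the BuildId from the artifact file to later steps
      echo &amp;quot;##vso[task.setvariable variable=imageTag]$(cat .buildId)&amp;quot;

- task: AWSShellScript@1
  displayName: &amp;quot;Update the SSM parameter&amp;quot;
  inputs:
    scriptType: &amp;quot;inline&amp;quot;
    inlineScript: |
      aws ssm put-parameter \
        --name &amp;quot;/my-api/development/docker-image-tag&amp;quot; \
        --type String --overwrite \
        --value &amp;quot;$(imageTag)&amp;quot;
&lt;/code&gt;&lt;/pre&gt;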
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Update Terraform to use the SSM parameter&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now that the SSM parameter is updated each time there is a new build, we need to set up Terraform to use it as part of the image name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;
// Import the SSM parameter
// This can be done on a module level because it depends on the environment
data &amp;quot;aws_ssm_parameter&amp;quot; &amp;quot;docker_image_id&amp;quot; {
  name = &amp;quot;/my-api/${var.environment}/docker-image-tag&amp;quot;
}

// Use it later on...
container_image = &amp;quot;&amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;REGION&amp;gt;.amazonaws.com/my-api:${data.aws_ssm_parameter.docker_image_id.value}&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;All done!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now you&#39;ve set up a complete continuous deployment pipeline with Terraform and ECS. To review, here&#39;s how the system works:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The CI pipeline builds and pushes a new ECR image with a unique tag&lt;/li&gt;
&lt;li&gt;The build pipeline notifies the release pipeline of this new image tag in some way (in Azure&#39;s case, a build artifact)&lt;/li&gt;
&lt;li&gt;The release pipeline updates the SSM parameter based on the image tag and triggers the Terraform deployment pipeline for that environment.&lt;/li&gt;
&lt;li&gt;Terraform picks up the new SSM value and implements it&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There are two obvious downsides to this approach:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Multiple updates to multiple APIs could cause state-locking issues, forcing you to re-run the Terraform pipeline manually.&lt;/li&gt;
&lt;li&gt;It&#39;s a bit slower than other approaches, but it was the best one I could find. And if your Terraform repo deploys in under 2 minutes like ours, it&#39;s not a big problem.&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>Web Performance for Developers on a Deadline</title>
    <link href="/webperf-on-deadline/"/>
    <updated>2021-04-29T00:00:00Z</updated>
    <id>/webperf-on-deadline/</id>
    <content type="html">&lt;p&gt;Web performance is a vital part of keeping customers coming back to your business. &lt;a href=&quot;https://wpostats.com/2019/01/08/carousell-traffic-ctr.html&quot;&gt;One retailer found that reducing their page load time by 65% led to a 63% increase in organic traffic&lt;/a&gt;. Google is also beginning to use the &lt;a href=&quot;https://web.dev/defining-core-web-vitals-thresholds/&quot;&gt;&amp;quot;core web vitals&amp;quot;&lt;/a&gt; to rank your page in search results. Despite this, it can be difficult to get time allocated for performance work, and teams often de-prioritize it for the sake of more features.&lt;/p&gt;
&lt;p&gt;Although the specifics of improving site performance depend on your technology stack, server, customer requirements and more, here&#39;s a guide to improving your website&#39;s performance when you&#39;re on a deadline - regardless of those factors. Got no time to read the article? Simply read the headings of each section.&lt;/p&gt;
&lt;h2&gt;1. Gzip and Brotli&lt;/h2&gt;
&lt;p&gt;Gzip and Brotli are two compression algorithms used to compress resources on your server before they are sent to the customer&#39;s browser. This makes each file smaller and therefore quicker to transfer; the browser then unzips the files before using them. All the major browsers support at least one of these compression types.
This is an easy change with a large impact: a 2016 report by Akamai measured a file size reduction of 63% for JS files with Gzip and 68% with Brotli.&lt;/p&gt;
&lt;p&gt;Here are links to a few guides on how to do it for your setup&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html&quot;&gt;AWS S3 + Cloudfront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/iis/extensions/iis-compression/iis-compression-overview&quot;&gt;IIS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://computingforgeeks.com/how-to-enable-gzip-brotli-compression-for-nginx-on-linux/&quot;&gt;Nginx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://bash-prompt.net/guides/apache-brotoli/&quot;&gt;Apache&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
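&lt;p&gt;To give a flavour, here is roughly what this looks like in Nginx - the Brotli directives assume the &lt;code&gt;ngx_brotli&lt;/code&gt; module is installed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nginx&quot;&gt;# Compress text-based assets before sending them
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;

# Brotli equivalents (ngx_brotli module)
brotli on;
brotli_types text/css application/javascript application/json image/svg+xml;
&lt;/code&gt;&lt;/pre&gt;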
&lt;h2&gt;2. Cache-control on all resources / Use Cloudflare&lt;/h2&gt;
&lt;p&gt;Caching can hugely speed up subsequent page loads by storing resources on the customer&#39;s computer, meaning you only need to deliver dynamic content such as API responses. Add a long cache-control header so that resources are cached for a month or more. Will this cause bugs? No! If you use a build tool like &lt;a href=&quot;https://webpack.js.org&quot;&gt;Webpack&lt;/a&gt;, it generates new file names on each release, which forces the browser to re-download the changed assets.
This very blog is cached heavily, so try the following to see the impact of caching on page speed.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open up chrome dev tools (F12)&lt;/li&gt;
&lt;li&gt;Go to the Network Tab&lt;/li&gt;
&lt;li&gt;Notice the red &amp;quot;Load&amp;quot; time.&lt;/li&gt;
&lt;li&gt;Now click and hold on the Refresh button - a dropdown menu will appear. Click, &amp;quot;Empty Cache and Hard Reload&amp;quot;&lt;/li&gt;
&lt;li&gt;Check the &amp;quot;Load&amp;quot; time - when I did this test it was 482ms.&lt;/li&gt;
&lt;li&gt;Click the Refresh button normally&lt;/li&gt;
&lt;li&gt;Check the time again - now it&#39;s only 154ms! A 68% improvement!&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;Screenshot of my website with chrome developer tools showing the impact of caching assets&quot; src=&quot;../../assets/images/website-caching.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Here are some links on how to do it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html&quot;&gt;AWS S3 + Cloudfront&lt;/a&gt; - Scroll to the bottom&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/iis/configuration/system.webserver/caching/&quot;&gt;IIS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.nginx.com/blog/nginx-caching-guide/&quot;&gt;Nginx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-configure-content-caching-using-apache-modules-on-a-vps&quot;&gt;Apache&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
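&lt;p&gt;As an illustration, a long-lived cache header in Nginx might look like this - a year is safe here only because the build tool fingerprints file names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nginx&quot;&gt;# Cache fingerprinted static assets for a year
location ~* \.(js|css|png|jpg|svg|woff2)$ {
  add_header Cache-Control &amp;quot;public, max-age=31536000, immutable&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;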
&lt;h2&gt;3. Implement a build process for front-end resources (JS and CSS)&lt;/h2&gt;
&lt;p&gt;If you haven&#39;t already, implement a build process for your JavaScript and CSS files. Why? Because a build process can make the files smaller and compatible with the browsers you need to support. If you have tested your site on &lt;a href=&quot;https://developers.google.com/speed/pagespeed/insights/&quot;&gt;Google&#39;s Page Speed Insights&lt;/a&gt; and have seen the dreaded &amp;quot;Remove unused JavaScript&amp;quot;, then this step is for you. A build process will significantly reduce your initial bundle size, thereby reducing load times.&lt;/p&gt;
&lt;p&gt;Here are a few links on how to do this. Webpack is a little complex to setup but is worth the effort in the long run. You can also expand its usage to optimize images and other funky things.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.blog.duomly.com/what-is-webpack-and-how-to-setup-webpack/&quot;&gt;https://www.blog.duomly.com/what-is-webpack-and-how-to-setup-webpack/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://torquemag.io/2019/06/optimize-javascript-css/&quot;&gt;https://torquemag.io/2019/06/optimize-javascript-css/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;4. Defer and Asynchronously load 3rd party resources&lt;/h2&gt;
&lt;p&gt;&amp;quot;You&#39;ve got a massive head&amp;quot; - that&#39;s what I say when I visit most websites. But seriously, they have a huge amount of resources in their &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; tag! Why is this a problem? Because resources such as JS and CSS placed at the top of the page are the first things the browser loads, blocking it from loading any of the content further down. You might say you &lt;em&gt;need&lt;/em&gt; to load Google Analytics, Intercom and a myriad of other trackers right away, but the answer is simply that you don&#39;t. By deferring and loading resources asynchronously, you will drastically improve load times for your customers, and the resources will still be loaded quickly anyway.&lt;/p&gt;
&lt;p&gt;How can you do this? I&#39;ll share this directly because it&#39;s a simple change.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;!-- This --&amp;gt;

&amp;lt;link
  href=&amp;quot;https://fonts.googleapis.com/css2?family=Ubuntu:wght@400;700&amp;amp;display=swap&amp;quot;
  rel=&amp;quot;stylesheet&amp;quot;
/&amp;gt;
&amp;lt;script src=&amp;quot;https://mysite.com/script.js&amp;quot;&amp;gt;&amp;lt;/script&amp;gt;

&amp;lt;!-- Becomes This --&amp;gt;

&amp;lt;link
  href=&amp;quot;https://fonts.googleapis.com/css2?family=Ubuntu:wght@400;700&amp;amp;display=swap&amp;quot;
  rel=&amp;quot;stylesheet&amp;quot;
  media=&amp;quot;print&amp;quot;
  onload=&amp;quot;this.onload=null;this.removeAttribute(&#39;media&#39;);&amp;quot;
/&amp;gt;

&amp;lt;script defer async src=&amp;quot;https://mysite.com/script.js&amp;quot;&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can move these tags to before the closing &lt;code&gt;&amp;lt;/body&amp;gt;&lt;/code&gt; tag at the bottom of the page. Job done! You should see a huge improvement in your &lt;a href=&quot;https://web.dev/fcp/&quot;&gt;First Contentful Paint&lt;/a&gt; Times.&lt;/p&gt;
&lt;h2&gt;5. Load polyfills only when needed&lt;/h2&gt;
&lt;p&gt;Polyfills are small snippets of code that allow developers to use modern JavaScript features on older browsers. The problem is: how do you load polyfills only for the customers who need them? Otherwise, you compromise the experience for modern browsers, which likely make up the majority of your customer base. Here is a great write-up by Philip Walton on this very topic - &lt;a href=&quot;https://philipwalton.com/articles/loading-polyfills-only-when-needed/&quot;&gt;https://philipwalton.com/articles/loading-polyfills-only-when-needed/&lt;/a&gt;&lt;/p&gt;
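&lt;p&gt;One approach from that article is to feature-detect in a tiny inline script and only fetch a polyfill bundle when something is missing. A minimal sketch - the &lt;code&gt;/polyfills.js&lt;/code&gt; path and the features checked are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;script&amp;gt;
  // Only load the polyfill bundle on browsers that lack a required feature
  if (!(&#39;fetch&#39; in window &amp;amp;&amp;amp; &#39;Promise&#39; in window)) {
    var s = document.createElement(&#39;script&#39;);
    s.src = &#39;/polyfills.js&#39;;
    document.head.appendChild(s);
  }
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;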
&lt;h2&gt;Bonus 6. Set Performance budgets&lt;/h2&gt;
&lt;p&gt;Bonus round now. Hopefully, you&#39;ve made some performance improvements and you can see a difference in page load times and your Lighthouse score. But so often, without monitoring something, it&#39;s likely to drift. For example, if you are putting up shelves in your house, you need to constantly keep an eye on the level using a spirit level. You cannot just put it there once and then eyeball it the rest of the way. In the same way, you need to monitor your key performance metrics to make sure they don&#39;t return to how they were.&lt;/p&gt;
&lt;p&gt;You can do this with tools like &lt;a href=&quot;https://github.com/siddharthkp/bundlesize&quot;&gt;bundlesize&lt;/a&gt;, &lt;a href=&quot;https://www.npmjs.com/package/webpack-dashboard&quot;&gt;webpack dashboard&lt;/a&gt;, &lt;a href=&quot;https://speedcurve.com&quot;&gt;speedcurve&lt;/a&gt; and more.&lt;/p&gt;
&lt;p&gt;Now that you can monitor performance, you need to set a budget. In other words, what is the limit the business can tolerate (e.g., what is the slowest average load time we can accept without affecting revenue)? I will write an article on this in the future, but I&#39;d recommend taking the worst result of each metric in the past week and setting that as the budget. This can then be enforced by tools such as &lt;a href=&quot;https://github.com/GoogleChrome/lighthouse-ci&quot;&gt;LighthouseCI&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;Overall, I hope you&#39;ve learnt some quick ways to improve your web performance. As mentioned, there are a myriad of factors to consider when improving web performance, so the methods above are not one-size-fits-all. If you&#39;d like to know why your site is slow, I can help you. I&#39;m a software performance consultant who works with organizations of all sizes to analyse and fix site speed issues.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Run Sequelize Migrations in Azure Pipelines</title>
    <link href="/azure-pipelines-sequelize/"/>
    <updated>2021-04-07T00:00:00Z</updated>
    <id>/azure-pipelines-sequelize/</id>
    <content type="html">&lt;p&gt;Database migrations are the concept of managing your database schema via reversible, version-controlled files. A program is then used to run these &amp;quot;migrations&amp;quot; and keep track of which ones have been run on your database. Migrations are immutable, meaning if you want to change a column name, type or anything else, you have to create a new &amp;quot;migration&amp;quot;.
Handling your database programmatically gives you many benefits. Namely, providing a consistent schema across all your environments and portability if something happens to your DB. Further, with these files being committed to source control, migrations can be reviewed by others on your team. If you&#39;re not already using a tool to do this, I&#39;d encourage you to do so.&lt;/p&gt;
&lt;p&gt;Throughout this article, I&#39;ll refer to two terms&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Migrations - meaning files that change the schema of the database but not the underlying data&lt;/li&gt;
&lt;li&gt;Seeders - files that insert anonymous data into our staging environments for testing purposes&lt;/li&gt;
&lt;/ul&gt;
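&lt;p&gt;For context, a Sequelize migration is just a file exporting an &lt;code&gt;up&lt;/code&gt; and a &lt;code&gt;down&lt;/code&gt; function - the file name, table and column below are made up for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// migrations/20210401000000-add-nickname-to-users.js
module.exports = {
  // Apply the schema change
  up: async (queryInterface, Sequelize) =&amp;gt; {
    await queryInterface.addColumn(&#39;Users&#39;, &#39;nickname&#39;, {
      type: Sequelize.STRING,
      allowNull: true,
    });
  },
  // Reverse it
  down: async (queryInterface) =&amp;gt; {
    await queryInterface.removeColumn(&#39;Users&#39;, &#39;nickname&#39;);
  },
};
&lt;/code&gt;&lt;/pre&gt;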
&lt;p&gt;Normally, these migration and seed files would have to be run manually from a developer&#39;s computer against the different databases. But, with my obsession with automation, this wouldn&#39;t fly, so I decided to create an Azure Pipeline runner to handle it for us. It runs automatically whenever new commits land on our development or master branch.
It also reduced stress for me, as I know that I will make mistakes, whereas a computer, configured correctly, won&#39;t! 😅&lt;/p&gt;
&lt;p&gt;Although this article is built around Azure Pipelines for &lt;a href=&quot;https://sequelize.org&quot;&gt;Sequelize&lt;/a&gt; migrations, the process can be adapted to other ORMs such as &lt;a href=&quot;https://knexjs.org&quot;&gt;Knex&lt;/a&gt; and &lt;a href=&quot;https://typeorm.io/#/&quot;&gt;TypeORM&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Create Your Artifacts&lt;/h2&gt;
&lt;p&gt;If you&#39;re not familiar with Azure, it has a concept of &lt;a href=&quot;https://azure.microsoft.com/en-us/services/devops/artifacts/&quot;&gt;&amp;quot;Artifacts&amp;quot;&lt;/a&gt;. These are a collection of files that can then be used by other pipelines.
We need to create two artifacts, one for our migrations pipeline and the other for our seeding pipeline.
In your source control create the following two files - you can copy-paste the code below!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# azure-migrate.yml
pool:
  name: azure-pipeline-runner
pr: none

steps:
  - task: CopyFiles@2
    displayName: &amp;quot;Copy migration scripts&amp;quot;
    inputs:
      contents: &amp;quot;$(Build.SourcesDirectory)/migrations/**&amp;quot;
      targetFolder: $(Build.ArtifactStagingDirectory)

  - task: PublishBuildArtifacts@1
    displayName: &amp;quot;Publish Artifact&amp;quot;
    inputs:
      pathToPublish: $(Build.ArtifactStagingDirectory)
      artifactName: migrate
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# azure-seed.yml
pool:
  name: azure-pipeline-runner # the name of your azure pipeline runner
pr: none # Don&#39;t run this pipeline for pull requests

steps:
  - task: CopyFiles@2
    displayName: &amp;quot;Publish SequelizeRC&amp;quot;
    inputs:
      Contents: .sequelizerc
      FlattenFolders: true
      TargetFolder: &amp;quot;$(Build.ArtifactStagingDirectory)&amp;quot;

  - task: PublishBuildArtifacts@1
    displayName: &amp;quot;Publish Seed&amp;quot;
    inputs:
      PathtoPublish: seeders
      TargetPath: &amp;quot;$(Build.ArtifactStagingDirectory)&amp;quot;
      ArtifactName: seeders

  - task: PublishBuildArtifacts@1
    displayName: &amp;quot;Publish Sequelize Config Folder&amp;quot;
    inputs:
      PathtoPublish: config
      TargetPath: &amp;quot;$(Build.ArtifactStagingDirectory)&amp;quot;
      ArtifactName: config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the code above, the pipeline first copies the &lt;code&gt;.sequelizerc&lt;/code&gt; file, which tells Sequelize where to find its configuration. Next, it creates &amp;quot;build artifacts&amp;quot; for the seeding and migration folders, where all the files related to setting up the database are stored. Finally, it publishes them to the Azure artifacts library.&lt;/p&gt;
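&lt;p&gt;For reference, a minimal &lt;code&gt;.sequelizerc&lt;/code&gt; simply points the CLI at those folders - the paths below assume the layout described in this article:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// .sequelizerc
const path = require(&#39;path&#39;);

module.exports = {
  config: path.resolve(&#39;config&#39;, &#39;config.js&#39;),
  &#39;migrations-path&#39;: path.resolve(&#39;migrations&#39;),
  &#39;seeders-path&#39;: path.resolve(&#39;seeders&#39;),
};
&lt;/code&gt;&lt;/pre&gt;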
&lt;h2&gt;Configure Your Runner (Optional)&lt;/h2&gt;
&lt;p&gt;This step is optional because it depends on your existing setup.
All you need to make sure is that your Azure runner (whether self-hosted or not) can access the Database.&lt;/p&gt;
&lt;p&gt;We use MySQL Aurora to host our database which sits in a VPC. Our &lt;code&gt;azure-pipeline-runner&lt;/code&gt; (defined in the &amp;quot;pool&amp;quot; parameter) is hosted inside the same VPC but a different security group. So, we needed to allow access from the runners&#39; security group to the RDS&#39; security group. This is called &amp;quot;ingress&amp;quot; in AWS. The port you need to allow access to may vary - in our case, it&#39;s 3306 which is the default for MySQL.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-sg-config.png&quot; alt=&quot;Allowing ingress from one security group to another on Port 3306 - the default for MySQL&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Getting this setup is a simple process. Check &lt;a href=&quot;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule&quot;&gt;this guide for more info&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Create Your Pipeline&lt;/h2&gt;
&lt;p&gt;Now we&#39;ve got our runner configured and our build artifacts published, we can move onto creating the actual pipeline.
Go to &lt;strong&gt;Pipelines&lt;/strong&gt; and &lt;strong&gt;Releases&lt;/strong&gt; and click &amp;quot;+ New&amp;quot; and select &amp;quot;Create Release Pipeline&amp;quot; from the dropdown.&lt;/p&gt;
&lt;p&gt;You&#39;ll be prompted to select a template but we can click &amp;quot;Empty Job&amp;quot;&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-1.png&quot; alt=&quot;Empty Azure Pipeline job with an interface to select a template. We chose to start from scratch with an empty job.&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Next, click the Artifacts box on the left and then find your artifact by searching for it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You&#39;ll be able to find the name of the artifact under &amp;quot;Pipelines &amp;gt; Pipelines&amp;quot;. You should see your migration or seeding pipeline that you created earlier. Clicking into one of the runs of this job will reveal the artifact name that the job created.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now, configure a stage. Click on the &amp;quot;Tasks&amp;quot; tab at the top of the page. This will take you to the list of &amp;quot;tasks&amp;quot; that will run for each stage.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-2.png&quot; alt=&quot;Viewing the first default stage of the azure pipeline job&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Click to add a new task and search for &amp;quot;npm&amp;quot;. We want to first install Sequelize globally on the command line so that it can be used to run the migrations or seeding process. Because we are using MySQL, we also need to install the &lt;code&gt;mysql2&lt;/code&gt; package.&lt;/p&gt;
&lt;p&gt;The job should end up looking something like this:&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-3.png&quot; alt=&quot;An azure pipeline task with a configured job to install sequelize and other dependant packages required to run the migrations on the command line. The dependencies are installed globally with NPM.&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Now we need to add the stage that runs the migrations or seeding process. Click the plus button again and select the &amp;quot;Command Line&amp;quot; job. This will allow us to run the Sequelize commands.&lt;/p&gt;
&lt;p&gt;The command we want to run for migrations is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sequelize-cli db:migrate --url ${DB_URL}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the seeding process it is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sequelize-cli db:seed:all --url ${DB_URL}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The documentation for these commands can be found &lt;a href=&quot;https://github.com/sequelize/cli#documentation&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We aren&#39;t doing anything fancy aside from passing our database URL. Since this won&#39;t be stored in our Git repo, we need to provide it here as an environment variable.&lt;/p&gt;
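&lt;p&gt;The value of &lt;code&gt;DB_URL&lt;/code&gt; is a standard connection string - the credentials and host below are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Set as a secret pipeline variable, never committed to the repo
export DB_URL=&amp;quot;mysql://app_user:secret@my-db.example.com:3306/my_database&amp;quot;
&lt;/code&gt;&lt;/pre&gt;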
&lt;blockquote&gt;
&lt;p&gt;If something goes wrong. You may need to check that the runner is running files in the correct directory. Ensure that the directory is the root. It should contain a folder called &amp;quot;seeders&amp;quot; or &amp;quot;migrations&amp;quot;. These folders should contain the migration and seed files.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Below is how our job ended up&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-4.png&quot; alt=&quot;An azure pipeline task with a configured job running the sequelize command to migrate the database. It shows the working directory and an environment variable of DB_URL.&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Now that you&#39;ve configured one stage, you can clone it for the others! Go back to the Pipelines view and click the clone button beneath the stage card.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/runner-5.png&quot; alt=&quot;The Azure pipeline stage card under the Pipelines view. It demonstrates how to click the clone button&quot; /&gt;
&lt;/div&gt;
&lt;h2&gt;Wrapping Up&lt;/h2&gt;
&lt;p&gt;I hope this helped you configure your database migrations! Here is a quick summary of what we have learnt.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Database migrations and seeding processes are important to codify for consistency, portability and review purposes.&lt;/li&gt;
&lt;li&gt;How to configure Azure runners to allow ingress to RDS&lt;/li&gt;
&lt;li&gt;How to create build artifacts in Azure Pipelines&lt;/li&gt;
&lt;li&gt;How to write basic azure pipeline jobs&lt;/li&gt;
&lt;li&gt;How to migrate and seed your database using Azure pipelines.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I&#39;m glad to have this work sorted as it was a bit of a hassle to configure. But, we got there in the end and this is now a durable process that will scale with the team.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Improve Your Typing Speed</title>
    <link href="/typing-practise/"/>
    <updated>2021-03-30T00:00:00Z</updated>
    <id>/typing-practise/</id>
    <content type="html">&lt;p&gt;Recently, I&#39;ve started practising my typing each day - for around 5-10 minutes. So far, according to 10fastfingers, I&#39;ve increased my typing speed from 70WPM to 80WPM. Not much, but I can feel myself getting more &amp;quot;familiar&amp;quot; with the keyboard. In this post, I wanted to dig into why I&#39;ve started this daily habit and how I&#39;m practising.&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;#why&quot;&gt;Why?&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;This all started with trying to adopt a &amp;quot;keyboard-first&amp;quot; mentality. If you&#39;re anything like me, you mix and match between operations performed by your mouse and keyboard. But, when increasing my usage of Vim, using the mouse isn&#39;t possible. All operations have to be performed via the keyboard. Not only does this make actions faster but also keeps your hands on your keyboard, enabling you to get back to typing as soon as possible. I decided I could translate this mentality to other programs by using their keyboard shortcuts. Learning simple shortcuts like selecting the URL bar of your browser (&lt;code&gt;CMD+L&lt;/code&gt;) and cycling through tabs (&lt;code&gt;CMD+SHIFT+]&lt;/code&gt; or &lt;code&gt;CMD+SHIFT+[&lt;/code&gt;) has saved me a bunch of time.&lt;/p&gt;
&lt;p&gt;The keyboard is the primary interface between my brain and the computer. So increasing the &amp;quot;bandwidth&amp;quot; of this connection is super valuable. As shortcuts improve the speed of actions, typing faster will help me get my thoughts out quicker and code written faster and with fewer errors. I always seem to get in a flow of programming, only to be interrupted by making a typing mistake - reducing these errors was a priority.&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;#how&quot;&gt;How?&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Principally, I&#39;m doing this through a daily typing test on 10fastfingers.com and practise on keybr.com.
Both of these tools differ in key ways (no pun intended).&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#10fastfingers&quot;&gt;10FastFingers&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;10FF, gives you a 200-word typing test and a time limit of a minute. The challenge is to type as many words as you can within that period. The words chosen are the most used in English.
I like taking this test regularly because it builds my muscle memory for common words and phrases. Given that it&#39;s often said the 800 most common words cover around 75% of everyday speech, it pays to be able to type those words quickly. I&#39;m OK with taking a couple of seconds to type verisimilitude - however much I love that word.
There are a bunch of social features with 10FF, but I don&#39;t care for these. My only opponent is myself. Often, I catch myself in this test in a sort of &amp;quot;flow&amp;quot; where I magically type all the correct keys without even thinking about it. This momentary realisation makes me smile, but soon turns into scowls when I mistype &amp;quot;soccer&amp;quot; as a result.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#keybr.com&quot;&gt;Keybr.com&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Keybr is a beautifully designed tool that first takes you through the entire keyboard to see where your weaknesses lie. Most of the words are &amp;quot;riffs&amp;quot; on familiar words - &amp;quot;influencecapa&amp;quot;, &amp;quot;comprom&amp;quot; and &amp;quot;discuse&amp;quot;. In doing so, it breaks your muscle memory and forces you to think carefully about where you place your fingers.
This tool makes up the bulk of my practice as it &amp;quot;moves&amp;quot; you around the keyboard in a way that typing common words won&#39;t.&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;#wrapping-up&quot;&gt;Wrapping up&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Maybe you&#39;re sold on improving your typing speed, maybe not. In any case, I encourage you to sharpen the saw and look at the fundamentals rather than chasing productivity through &amp;quot;tools&amp;quot;.
Now the sun is out, and I&#39;ve typed enough today so cya!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Super Fast React/Node App Testing with GitHub Actions</title>
    <link href="/github-actions-perf/"/>
    <updated>2021-03-25T00:00:00Z</updated>
    <id>/github-actions-perf/</id>
    <content type="html">&lt;p&gt;A seldom thought of component of performance is that of continuous integration performance. Here at &lt;a href=&quot;https://york-e.com&quot;&gt;York Press&lt;/a&gt;, we are big users of both Azure Pipelines and GitHub Actions. Due to us hosting our Azure pipeline runners, &amp;quot;job minute&amp;quot; restrictions were never a concern from a billing perspective. Although, the long running jobs did frustrate the team. Having moved some key processes over to GitHub Actions, I decided it was time that we looked at improving the performance of one of our core repositories. Not only would this mean developers had quicker feedback, but it would also mean we burned through our actions minutes a lot slower. Here&#39;s how I did it.&lt;/p&gt;
&lt;h2&gt;Initial Investigation&lt;/h2&gt;
&lt;p&gt;GitHub actions (and I promise this isn&#39;t an ad) has a handy feature whereby you can see how many seconds each stage of the job took.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/github-actions-timing.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;The first thing I spotted was how long installing npm modules took - nearly 3 minutes! Because of this, I chose to combine the Test and Lint pipelines so that we would not need to duplicate the module installation.&lt;/p&gt;
&lt;p&gt;Secondly, I swapped out my normal &lt;code&gt;npm ci&lt;/code&gt; command for another action &lt;code&gt;bahmutov/npm-install@v1&lt;/code&gt;. This action handles all the cache invalidation and storage of node modules across builds so you can save time with installing them. After those changes, here is what the timings looked like...&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/github-actions-timing-2.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Half the time gone! That&#39;s a good start but still not far enough. I found the modules were taking ages to install due to a Webpack plugin responsible for optimizing images - something we didn&#39;t need in CI. I moved this out into &lt;code&gt;optionalDependencies&lt;/code&gt; and then set the install command to &lt;code&gt;npm ci --no-optional&lt;/code&gt;.&lt;/p&gt;
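&lt;p&gt;Sketched as a workflow step, the cached, optional-free install looks something like this (illustrative only - check the action&#39;s README for the exact input names, but &lt;code&gt;install-command&lt;/code&gt; is how &lt;code&gt;bahmutov/npm-install&lt;/code&gt; lets you override the default &lt;code&gt;npm ci&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# Cached node_modules install that skips optionalDependencies
- uses: bahmutov/npm-install@v1
  with:
    install-command: npm ci --no-optional
&lt;/code&gt;&lt;/pre&gt;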
&lt;h2&gt;ESLint&lt;/h2&gt;
&lt;p&gt;The other big fish to fry was ESLint, it took nearly 2:30 minutes to run. I tried to debug this locally using an environment variable &lt;code&gt;TIMING=1&lt;/code&gt;. This gives you a table view of how long each ESLint rule took to check.&lt;/p&gt;
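&lt;p&gt;To try this yourself, prefix the environment variable onto your usual lint command (the path here is illustrative - point it at whatever your project lints):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ TIMING=1 npx eslint src/
&lt;/code&gt;&lt;/pre&gt;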
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/eslint-timing.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Interestingly, it was the &lt;code&gt;import/&lt;/code&gt; rules that were taking the longest. After some google-fu, &lt;a href=&quot;https://github.com/benmosher/eslint-plugin-import/issues/1793&quot;&gt;I discovered that it was due to having to build a dependency graph&lt;/a&gt; across the codebase. Our codebase is fairly large so it was understandable why it would take this long. I didn&#39;t want to remove the rule entirely as it was useful, but surely there was a way around it...&lt;/p&gt;
&lt;p&gt;Actions to the rescue! Fortunately, a kind internet person has created a github action that will run ESLint only on the files that have changed. I swapped this out like so&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- uses: tinovyatkin/action-eslint@v1
  with:
    repo-token: ${{ secrets.GITHUB_TOKEN }}
    check-name: eslint
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This completely eliminated the lint time when no changed files matched the scanning glob.&lt;/p&gt;
&lt;p&gt;From there, I spent more time than I care to admit trying to trim the time down. The main blockers were the dependency install (1:20s average) and the Jest test suite (50s average). Although there are ways to &lt;a href=&quot;https://imhoff.blog/posts/parallelizing-jest-with-github-actions&quot;&gt;run the Jest suite in parallel&lt;/a&gt; it sort of seems redundant at this stage. The install is the big job, but the unfortunate battle is that we have Webpack image-loader as a &lt;code&gt;devDependency&lt;/code&gt;. This then installs a whole host of binary packages that then get built from source - every. single. time. Anywho, I&#39;m pleased with reducing it by 76%.&lt;/p&gt;
&lt;p&gt;Here are my main takeaways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Spend time speeding up your CI - fast developer feedback is important and saves you money (if you&#39;re restricted on pipeline minutes)&lt;/li&gt;
&lt;li&gt;Use the pre-built actions - there is a huge marketplace of actions that solve a bunch of problems and have smart defaults. GitHub Actions is great (and I promise this isn&#39;t an ad), in part, because it&#39;s like code lego.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I hope this helps you with your journey in speeding up your CI.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Solve Your Problems. Not Others.</title>
    <link href="/your-business/"/>
    <updated>2021-03-24T00:00:00Z</updated>
    <id>/your-business/</id>
    <content type="html">&lt;p&gt;What is your business? What is your product? What is your core mission? These are questions I ask myself and my clients, continuously. They help delve into &lt;em&gt;what&lt;/em&gt; the customers are paying for, and in turn, what your staff are paid for. In the world of flowing VC money, it&#39;s often never questioned what practical value that £X is giving you. But businesses of all sizes are prone to this. Losing sight of these questions often evolves complex service criteria that leads teams to create a bespoke solution rather than something off the shelf. I&#39;m willing to wager that, if you&#39;re a software developer, you&#39;ve worked on a service or feature that could have been handled by a third party in the past month. Or, worse, you&#39;ve resolved a bug that wouldn&#39;t exist if you had used a third-party solution. I know I have - on both counts.&lt;/p&gt;
&lt;p&gt;Through my career and starting my own business, I&#39;ve developed my understanding of what I&#39;ll dub &amp;quot;business thinking&amp;quot;. Business thinking is putting your numbers hat on and practically evaluating options - not just &amp;quot;can I do this or not&amp;quot; but what it will cost the business in both money and time. As an example, at York-E we needed a contact form with some complex routing to different support teams based on the locale a customer specified (an Egyptian customer&#39;s request would go to the Egyptian support team, and so on). I began to scope this out before realising we could likely use something off the shelf. After some DuckDuckGo&#39;ing, I found Formspree could handle this. The catch? It&#39;s $40/mo. That&#39;s quite steep for a contact form. Any developer worth their salt could bash one out in a few days. Right? At this stage, many teams would dismiss the idea of handing such a trivial task to a third party, but we chose to keep it as an option. Afterwards, we calculated that it would take 2 years before building our own contact form system would pay for itself - not factoring in maintenance.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The lesson? Don&#39;t ignore third-party solutions even for trivial tasks. Contact forms are not York-E&#39;s business. Providing an online education experience is.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Boilerplate&lt;/h2&gt;
&lt;p&gt;When creating new products, I&#39;ve reviewed these questions as well. I poured a great deal of time into creating &lt;a href=&quot;https://turboapi.dev&quot;&gt;TurboAPI&lt;/a&gt; from scratch. The only &amp;quot;boilerplate&amp;quot; I used was create-react-app. The Express API, lambdas, serverless configs, GitHub Actions CI/CD pipelines - all written from the ground up. In retrospect, this was a mistake; I should have taken more time to find a boilerplate and used that. The parts that took me the longest with TurboAPI were fixing bugs with TypeORM queries, Stripe billing and user authentication - not the actual &amp;quot;app&amp;quot; itself. The core technology of TurboAPI was written between 1am and 3am on our hallway floor. This pales in comparison to the overall time I spent on the TurboAPI MVP which, according to Toggl, was about 45 hours. A boilerplate that saved me 20 hours would have been a lifesaver. Now, when developing new products, I use a boilerplate and am familiar with chopping and changing it according to my needs. Sure, I still need to build some bespoke but basic elements, but this is small by comparison.&lt;/p&gt;
&lt;p&gt;There are lots of free ones that offer a basic CRUD API with Mongo and React. And recently, there has been an explosion of paid boilerplates, offering either a one-time payment or a monthly subscription. Although it seems like a great deal of money to part with, getting up and running quickly is invaluable. I&#39;m in the process of creating a niche boilerplate that I&#39;ll share soon.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There is another lesson here. Solve the problems that are not solved. Displaying tables of data, handling authentication, and billing are problems that have already been solved and many boilerplates are stitching these together.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Some argue that boilerplates mean that you won&#39;t have system understanding and will make it more difficult to debug. This might be the case. But, the focus here is to get your MVP out and start delivering value. Even if this is the MVP of a feature inside a company, the same principle applies.&lt;/p&gt;
&lt;p&gt;Here are the key takeaways for all software developers - regardless of experience:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Know how much your time is worth - this is important in evaluating if it&#39;s cheaper and faster to use a third-party service&lt;/li&gt;
&lt;li&gt;Know what problems you&#39;re solving - is it part of your core business? If not, then try to outsource it in some way&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>SpellcheckCI</title>
    <link href="/spellcheck-ci/"/>
    <updated>2021-03-04T00:00:00Z</updated>
    <id>/spellcheck-ci/</id>
    <content type="html">&lt;p&gt;Making sure you have correct spelling on your blog posts is vital to keep readers attention. Unfortunately, it&#39;s a laborious process and sometimes things fall through the cracks.
Being the nerd I am, I decided I needed a shell script to solve this problem.&lt;/p&gt;
&lt;p&gt;Thankfully, someone has created an open-source, Node-based markdown spellcheck module - &lt;a href=&quot;https://github.com/lukeapage/node-markdown-spellcheck&quot;&gt;mdspell&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since I&#39;m using Gatsby, my posts can be found under &lt;code&gt;content/blog/*/index.md&lt;/code&gt; - where &lt;code&gt;*&lt;/code&gt; is the name of the blog post. The command to run the spell check was then&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ npm i -g node-markdown-spellcheck &amp;amp;&amp;amp; mdspell -a -n &amp;quot;content/blog/**/*.md&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This would go through each of my posts and then validate the spelling is correct. When it comes across an incorrect spelling, it notifies me and asks me if I want to correct it, or add it to a local dictionary.&lt;/p&gt;
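&lt;p&gt;If memory serves, that local dictionary lives in a &lt;code&gt;.spelling&lt;/code&gt; file in the repository root, one word per line, so you can also maintain it by hand (the example entries below are mine, not part of the tool):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# .spelling - words mdspell should ignore
Gatsby
mdspell
npx
&lt;/code&gt;&lt;/pre&gt;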
&lt;p&gt;But, because I often blog from my iPad, where I don&#39;t have a terminal, I wanted this feedback to be visible on the CI for the new blog posts.
My workflow for creating new posts is: create a new git branch, create the file and write the post, push to GitHub and open a pull request. You can find this exact blog post&#39;s pull request &lt;a href=&quot;https://github.com/joshghent/blog/pull/165&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Time to Automate&lt;/h2&gt;
&lt;p&gt;I&#39;m a big user of GitHub Actions so I went with that to setup this process.&lt;/p&gt;
&lt;p&gt;Initially, I went down the road of installing all the node dependencies, then installing mdspell and then running the spellcheck. However, I found that it took over a minute to download all the node modules! It turns out, I could have used &lt;code&gt;npx&lt;/code&gt; to use mdspell without having to install the project.&lt;/p&gt;
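&lt;p&gt;In other words, the whole check collapses into one command with no global install - something like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ npx markdown-spellcheck -a -n -r &amp;quot;content/blog/**/*.md&amp;quot;
&lt;/code&gt;&lt;/pre&gt;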
&lt;p&gt;Here is the complete GitHub Actions workflow - which, across over 50 blog posts, takes around 10 seconds to run!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;# ./.github/workflows/spellcheck.yml
name: Spellcheck

on: [pull_request]

jobs:
  spellcheck:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [14.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm i markdown-spellcheck -g
      - run: mdspell -a -n -r &amp;quot;content/blog/**/*.md&amp;quot;
        name: Spellcheck
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I hope this proves useful to you for your own blog. If you don&#39;t have one already, I&#39;d highly recommend creating one!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Setting up LightHouse CI for React in GitHub Actions</title>
    <link href="/lighthouse-ci-react/"/>
    <updated>2021-02-16T00:00:00Z</updated>
    <id>/lighthouse-ci-react/</id>
    <content type="html">&lt;p&gt;At &lt;a href=&quot;https://york-e.com&quot;&gt;York Press&lt;/a&gt;, we noticed that our pages were gaining weight. In some cases, pages were loading over 1MB of resources before showing for the customer. This was unacceptable considering the modal broadband speed is around 1MB/s. So, we decided we needed stricter checks. This would ensure that pages are lighter than an ants leg made of clouds. And, faster load times would mean customers could get to studying faster - which I trust they yearn for.&lt;/p&gt;
&lt;h2&gt;Lighthouse to the Rescue!&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/GoogleChrome/lighthouse-ci&quot;&gt;Lighthouse&lt;/a&gt; is a tool developed by Google. It analyses a page and gives it a score, out of 100, on SEO, Performance, Accessibility, PWA and Best Practices. Although these are arbitrary numbers, they give a rough guide to how your website is doing. These scores are also used to rank your page in Google search results. So they are vital to maintain for business reasons, not technical prowess.&lt;/p&gt;
&lt;p&gt;The challenge is how to get this tool setup as there are lots of outdated articles and guides. Furthermore, none of these seem to cover a regular use case - setting up Lighthouse for your React app.&lt;/p&gt;
&lt;p&gt;Here&#39;s a definitive guide on how to setup LighthouseCI for your React app - and have it tracked in Github Actions.&lt;/p&gt;
&lt;h2&gt;Setup Lighthouse CI&lt;/h2&gt;
&lt;p&gt;First, you will want to install LighthouseCI and http-server locally for testing purposes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ npm i -g @lhci/cli http-server
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The former is the LighthouseCI tool. The latter is a small module to run the React app after it has been built.&lt;/p&gt;
&lt;p&gt;Next you can create a file called &lt;code&gt;lighthouserc.json&lt;/code&gt;. This should have the following contents&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;ci&amp;quot;: {
    &amp;quot;collect&amp;quot;: {
      &amp;quot;url&amp;quot;: [&amp;quot;http://127.0.0.1:4000&amp;quot;],
      &amp;quot;startServerCommand&amp;quot;: &amp;quot;http-server ./build/client -p 4000 -g&amp;quot;,
      &amp;quot;startServerReadyPattern&amp;quot;: &amp;quot;Available on&amp;quot;,
      &amp;quot;numberOfRuns&amp;quot;: 1
    },
    &amp;quot;upload&amp;quot;: {
      &amp;quot;target&amp;quot;: &amp;quot;temporary-public-storage&amp;quot;
    },
    &amp;quot;assert&amp;quot;: {
      &amp;quot;preset&amp;quot;: &amp;quot;lighthouse:recommended&amp;quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The section under &amp;quot;collect&amp;quot; is where the server that runs the React app is defined. The interesting properties are &lt;code&gt;startServerCommand&lt;/code&gt; and &lt;code&gt;startServerReadyPattern&lt;/code&gt;. The first tells Lighthouse how to start your application. The second tells Lighthouse what text to look for to know that the server is running and the test can begin. In this case, it starts the server via &lt;code&gt;http-server&lt;/code&gt; and then listens for the text &lt;code&gt;Available on&lt;/code&gt;. Run the command shown above for yourself and see what text it displays in your terminal.
You may need to change &lt;code&gt;./build/client&lt;/code&gt; to the directory where your application gets built.&lt;/p&gt;
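&lt;p&gt;For example, a stock create-react-app project builds into &lt;code&gt;./build&lt;/code&gt;, so the &amp;quot;collect&amp;quot; section would instead look like this (port and paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&amp;quot;collect&amp;quot;: {
  &amp;quot;url&amp;quot;: [&amp;quot;http://127.0.0.1:4000&amp;quot;],
  &amp;quot;startServerCommand&amp;quot;: &amp;quot;http-server ./build -p 4000 -g&amp;quot;,
  &amp;quot;startServerReadyPattern&amp;quot;: &amp;quot;Available on&amp;quot;,
  &amp;quot;numberOfRuns&amp;quot;: 1
}
&lt;/code&gt;&lt;/pre&gt;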
&lt;p&gt;Now you can give your LighthouseCI a whirl! Build your application (if you used create-react-app then run &lt;code&gt;npm run build&lt;/code&gt;), then run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ npm run build
$ lhci autorun
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should then see an output like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
Healthcheck passed!

Started a web server with &amp;quot;http-server ./build/client -p 4000 -g&amp;quot;...
Running Lighthouse 1 time(s) on http://127.0.0.1:4000
Run #1...done.
Done running Lighthouse!

Checking assertions against 1 URL(s), 1 total run(s)

33 result(s) for http://127.0.0.1:4000/ :
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Setting up GitHub Actions CI&lt;/h2&gt;
&lt;p&gt;Now, let&#39;s automate that. The best way to enforce these sorts of checks is to make them part of your pull request workflow. This means preventing merge on requests that fail to meet these standards.&lt;/p&gt;
&lt;p&gt;All we need to do with GitHub Actions is imitate the commands we did in the setup process. Paste the following into a new file called &lt;code&gt;/.github/workflows/lighthouse.yml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;# ./.github/workflows/lighthouse.yml
name: LighthouseCI

on:
  pull_request:

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup node
        uses: actions/setup-node@v1
        with:
          node-version: &amp;quot;14.x&amp;quot;

      - name: Install
        run: npm ci &amp;amp;&amp;amp; npm i -g http-server @lhci/cli

      - name: Build
        run: npm run build

      - name: LighthouseCI
        run: lhci autorun
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, push up your changes and create a new pull request. You should see your Action running at the bottom of the pull request.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/lighthouseci-pr.png&quot; alt=&quot;Pull Request Feedback for LighthouseCI Github Action&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;And that&#39;s that! I hope that has saved you a lot of time if you were struggling to get your React app to play nice with GitHub Actions.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>So, You&#39;ve Messed Up</title>
    <link href="/you-messed-up/"/>
    <updated>2021-02-15T00:00:00Z</updated>
    <id>/you-messed-up/</id>
    <content type="html">&lt;p&gt;So you&#39;ve messed up... big time. You&#39;ve dropped the production database, pushed a broken update to production, throttled the API and literally set the internet on fire. Cold sweat gathers on your brow, you stare longingly at your alcoholic beverage of choice and you perhaps start to update your resume. You are convinced you&#39;re going to be fired.&lt;/p&gt;
&lt;p&gt;I&#39;m sure that if you&#39;ve been a software developer for more than 6 months, you&#39;ve been in this situation. It can be a manic time and you might be unsure what you are supposed to do.&lt;/p&gt;
&lt;p&gt;Here is my guide for when you inevitably make a catastrophic mistake.&lt;/p&gt;
&lt;h2&gt;Things to Remember&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;You &lt;em&gt;probably&lt;/em&gt; won&#39;t get fired&lt;/em&gt; - I say probably as I&#39;m sure there are some instances where a person may get let go, but in 99.99% of situations you won&#39;t. Employers know you will make mistakes; they make mistakes themselves. What matters is how you respond to your mistakes.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Don&#39;t Rush&lt;/em&gt; - The natural instinct when things go wrong is to try to resolve them - quickly. Try to slow yourself down and don&#39;t act irrationally.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Be Radically Transparent&lt;/em&gt; - Even if you feel you work in a company that thrives on concealing mistakes, it&#39;s best to be radically transparent with your mistake. If you literally did &lt;code&gt;DELETE * FROM *&lt;/code&gt;, then admit that. &lt;a href=&quot;https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/&quot;&gt;Even GitLab admitted they did an &lt;code&gt;rm -rf&lt;/code&gt; on a production server that caused an outage&lt;/a&gt;. It&#39;s best to be upfront about your mistakes and learn from them. You could even write your own personal post mortem.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Keep a Cool Head&lt;/em&gt; - It&#39;s easy when you&#39;ve made a mistake to lash out and blame others. Instead, keep a cool head. Remind yourself that A) nothing will be accomplished by getting angry. And, B) the situation has most likely not resulted in direct physical harm to anyone, so it&#39;s not that bad in the grand scheme of things.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What&#39;s Next&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Assess the Impact&lt;/em&gt; - who is affected? Is it actually as bad as you think? Can you resolve the situation quickly? For example, if you have just done a deployment that has gone wrong, can you roll back? In any case, continue to the next step.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Start a Log&lt;/em&gt; - You&#39;ll thank yourself later for this one. Open up a new text file on your computer and record the current time and date. Write down exactly what the impact is, and what has happened. It can seem like a waste of time to do this in the middle of an incident, but it is important for reflection later on.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Report the Problem&lt;/em&gt; - Next, you need to report the problem to someone. You should know who this is - likely it&#39;s your line manager, the technical project manager or lead developer. Speak to them about the problem, your findings and ask if they have any suggestions. In some cases, you may need to send an email to your support team to inform them that there is an ongoing problem and updates will be given every 30 minutes - this gives them something concrete to relay to angry customers. Make sure to follow through with this, even if you haven&#39;t found the &amp;quot;solution&amp;quot;.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Debug the Problem&lt;/em&gt; - If you have made a direct mistake, then hopefully this step will not take too long. But, work to debug the problem and find the root cause. Use logs, graphs in your APM and anything else you can get your hands on. The focus here is the root cause. Often, it&#39;s the second order problems that are the surfaced issues. As an example, you may find a particular API endpoint that stops working and returns a 500 - problem 1. Upon further investigation, it appears that the request is timing out at the gateway - problem 2. After careful examination of the &lt;code&gt;top&lt;/code&gt; logs, you see that the CPU usage spikes when that endpoint is called and crashes the server - problem 3. But why does it spike the CPU? You spot that Jerry has added a random &lt;code&gt;for...loop&lt;/code&gt; calculating the digits of Pi to 10,000 places - the root cause.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Resolve the Problem&lt;/em&gt; - Now you know the cause, you can resolve it. Work with people on your team and be extra communicative during this entire process - you&#39;ll build trust and prevent work being done twice.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Review Your Log&lt;/em&gt; - Look into each stage of the process and analyse whether you used your time well and what mistakes were made along the way. Root out any efficiencies you can implement personally, as well as changes to make at the organisational and team level.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But what if you&#39;re in a position where others come to you with their catastrophic problems? Here are some things to remember.&lt;/p&gt;
&lt;h2&gt;What to do when someone comes to you with an incident&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Ask them to Keep a log&lt;/em&gt; - This will be a saviour when it comes to implementing preventative change and help you learn the mindset of those on your team.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Be empathetic&lt;/em&gt; - It&#39;s not the time to point fingers, or say I told you so (to anyone, not just the person who reported the incident). Focus on resolving the issue and learning.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Be a force for change to prevent future issues&lt;/em&gt; - As someone with some authority, you have the power to implement changes to prevent future issues. Otherwise, these things will happen over and over again and customers will get angry. You might think these things get sorted on their own, but you&#39;d be amazed at how many teams spend large swathes of time simply putting out fires. Implement changes such as mandatory testing, code reviews by experts and restricted access to prevent these incidents in the future.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I&#39;d like to say this won&#39;t happen, but it will. Overall, learn from your mistakes and communicate abundantly. You&#39;ll learn valuable skills in the process and be an asset whenever more incidents come down the line.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Shutdown Routine</title>
    <link href="/shutdown-routine/"/>
    <updated>2021-02-04T00:00:00Z</updated>
    <id>/shutdown-routine/</id>
    <content type="html">&lt;p&gt;In the current pandemic, it&#39;s even more challenging to switch off from work. With many working from home, it can seem impossible to &amp;quot;leave&amp;quot; work in a physical and mental sense.&lt;/p&gt;
&lt;p&gt;I introduced a shutdown routine to help combat this. It&#39;s a practise touted by many &amp;quot;productivity gurus&amp;quot;, which made me sceptical at first. But the basic idea is to create a short checklist of things to do before you &amp;quot;finish&amp;quot; your day. The goals are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Realise how much you&#39;ve accomplished that day&lt;/li&gt;
&lt;li&gt;Be clear about what you need to do tomorrow&lt;/li&gt;
&lt;li&gt;Tie any loose ends&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I usually allot anywhere from 10-20 minutes to do this process. Yours may vary but here is my list and the reasons why:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Close any open tabs&lt;/li&gt;
&lt;li&gt;Write down tomorrow&#39;s time-block plan&lt;/li&gt;
&lt;li&gt;Reply to any messages (Signal, Text, Slack) - I&#39;ve uninstalled Slack from my phone, and I leave the phone on my desk in Do Not Disturb mode after 5pm. This task helps me get rid of the niggle to check my phone afterwards.&lt;/li&gt;
&lt;li&gt;Zero my inbox - Even if it means snoozing mail to the next day, I don&#39;t want it in my inbox. It all needs to be dealt with.&lt;/li&gt;
&lt;li&gt;Clear my todo list - After I&#39;ve finished work, I&#39;ve finished and I want to spend time with my family. This helps me to stop feeling productivity guilt about things I haven&#39;t done. It also saves all the tasks rolling over to the next day.&lt;/li&gt;
&lt;li&gt;Clean my desk - Sounds obvious but it&#39;s nice to not return to a stale coffee cup in the morning.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These steps may seem reductive. But, I have found the process mindfully winds me down from work and helps me get into the &amp;quot;relax&amp;quot; flow for the evening. I hope you too can leave your stress at work for the next day. There is always more work.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Resumé Red Flags</title>
    <link href="/resumes/"/>
    <updated>2021-02-03T00:00:00Z</updated>
    <id>/resumes/</id>
    <content type="html">&lt;p&gt;After looking through tonnes of resumés, they all begin to blend into one. But, I wanted to share some red flags I see that are unique to software developers. It&#39;s a mine field to prepare an eye catching, yet informative CV. According to the undercover recruiter, hiring managers &lt;a href=&quot;https://theundercoverrecruiter.com/infographic-recruiters-spend-5-7-seconds-reading-your-cv/&quot;&gt;spend 5-7 seconds looking at a CV&lt;/a&gt;. I can attest to the fact that this is unfortunately true.
It&#39;s another discussion about whether CV&#39;s are an effective way of conveying a persons ability (spoiler: it&#39;s not). But, that&#39;s by the by. It&#39;s a reality we all have to face. So, here&#39;s some things to avoid to help you gain that edge.&lt;/p&gt;
&lt;h2&gt;1. Don&#39;t include a picture of yourself&lt;/h2&gt;
&lt;p&gt;This is not so prevalent in the UK and US but is often the case in other countries. I would recommend erring on the side of caution and getting rid of the photo. It is (or at least should be) irrelevant to someone&#39;s decision to hire you.&lt;/p&gt;
&lt;h2&gt;2. Put Your Work Experience At the Top&lt;/h2&gt;
&lt;p&gt;This is what I, and other hiring managers, look for when hiring someone. Why put this above your education? You have to think about the purpose of a CV: it&#39;s for someone to see whether you can do a job. If you have work experience relevant to the job, you want to draw people&#39;s eyes there.&lt;/p&gt;
&lt;p&gt;If you don&#39;t have any work experience, include your personal projects, freelance work or open source contributions.&lt;/p&gt;
&lt;h2&gt;3. Don&#39;t use Skill Meters&lt;/h2&gt;
&lt;p&gt;More and more, I have seen CVs that have bars indicating skill at various technologies. Although many argue that they show a &amp;quot;high-level&amp;quot; overview of someone&#39;s ability, I find they lose all meaning. After all, what is 80% Javascript and 50% CSS?
&lt;a href=&quot;https://en.wikipedia.org/wiki/Illusory_superiority&quot;&gt;Illusory superiority&lt;/a&gt; exists because many think they are above average. In a 1981 survey asking people to rate their own driving ability, 93% of US participants ranked themselves in the top 50%. Mathematically, this cannot be correct. So evaluations of one&#39;s own skill level are flawed, and should not be included on a CV. Speak with your experience, not fancy formatting.&lt;/p&gt;
&lt;h2&gt;4. List the Tech You Used in Various Roles&lt;/h2&gt;
&lt;p&gt;When recruiters look at your CV, they are looking to see if the technology you&#39;ve used most recently is relevant to the job they have.&lt;/p&gt;
&lt;p&gt;For the sake of easy parsing, I include a sentence at the end of each experience section that says &amp;quot;In this role, I used the following technologies...&amp;quot;.&lt;/p&gt;
&lt;h2&gt;5. Use Short Concise Sentences&lt;/h2&gt;
&lt;p&gt;As we have already covered, people aren&#39;t reading your CV. And, if they do, you don&#39;t want to bombard them with information. Try to keep things information-dense but concise. I recommend using &lt;a href=&quot;https://hemingwayapp.com&quot;&gt;hemingwayapp.com&lt;/a&gt; to help do this.&lt;/p&gt;
&lt;h2&gt;6. Don&#39;t Bother with Interpersonal Skills&lt;/h2&gt;
&lt;p&gt;No one is going to say they don&#39;t work well in a team. Or, that they&#39;re irritable without an intravenous supply of caffeine. So, remove it and speak about how you worked amongst teams in your work experience.&lt;/p&gt;
&lt;h2&gt;7. Try to Avoid Using &amp;quot;We&amp;quot;&lt;/h2&gt;
&lt;p&gt;This is a challenging balance. Rarely, as software developers, do we work alone. But, it&#39;s prudent to speak about your individual contributions rather than ones made as a group. Using the phrase &amp;quot;we&amp;quot; makes people think you didn&#39;t actually contribute much. You can embellish but don&#39;t lie.&lt;/p&gt;
&lt;h2&gt;8. Don&#39;t Name Individual Libraries You&#39;ve Used&lt;/h2&gt;
&lt;p&gt;Unless the library or framework is something that someone would hire for, don&#39;t include it. It doesn&#39;t matter if you&#39;ve used RxJS or Lodash. It does matter if you&#39;ve used Typescript or React though.&lt;/p&gt;
&lt;h2&gt;9. Write what role you want in your personal profile&lt;/h2&gt;
&lt;p&gt;Your personal profile is a great chance to speak about what you want from a role. Remember your audience. If you&#39;re applying for a role as a full stack engineer, and you&#39;ve previously worked as a frontend engineer - then write about why you want to move into such a role. It&#39;s wise to add why you would be a good fit. For example, a friend was moving into cyber security engineering after being an electrician. They spoke about how their problem solving skills applied across industries.&lt;/p&gt;
&lt;h2&gt;10. Your GitHub and Website say far more than a CV&lt;/h2&gt;
&lt;p&gt;If you get past the first filter, then it&#39;s likely that the person interviewing you will want more information. Including your website or GitHub will give them great insight into what drives you, your experience and determination.&lt;/p&gt;
&lt;p&gt;I hope these 10 tips help you as you create your resumé. If you need any help with your CV then there are many friendly communities (such as &lt;a href=&quot;https://midlandsjs.org&quot;&gt;MidlandsJS&lt;/a&gt;) that will help you.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>My Advice on Becoming a Software Developer</title>
    <link href="/learning-software/"/>
    <updated>2021-01-31T00:00:00Z</updated>
    <id>/learning-software/</id>
    <content type="html">&lt;p&gt;Whilst running the &lt;a href=&quot;https://midlandsjs.org&quot;&gt;MidlandsJS&lt;/a&gt; meetup, I&#39;ve been asked a number of times how to &amp;quot;get into&amp;quot; software development and the industry at large. So that I can clarify my own thoughts (and update them), here is my advice.&lt;/p&gt;
&lt;p&gt;Before the practical steps, here are two points of advice that I preface everything with:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Don&#39;t take anyone&#39;s advice as gospel&lt;/em&gt; - All advice is given from that person&#39;s perspective. They are not to be blamed for this, but rather use their advice and experience to plot a course for your own path.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Don&#39;t pay too much attention to how people got into the industry&lt;/em&gt; - There are a million and one ways to learn software and get a job in the industry. I personally fell into a little &amp;quot;analysis-paralysis&amp;quot; looking at what the &amp;quot;best&amp;quot; way to get into the industry was. The truth is, it doesn&#39;t matter. Do great work and work hard and you&#39;ll succeed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;With that said, here are the steps I would take if I were getting into software development.&lt;/p&gt;
&lt;h2&gt;1. &lt;a href=&quot;https://freecodecamp.com&quot;&gt;FreeCodeCamp&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;It shouldn&#39;t be required that you need to pay for loads of courses to get into software development. FreeCodeCamp is an amazing free resource where I learnt to code myself. They&#39;ve changed the course since I took it, but the principles are the same. As with all learning, it&#39;s easy to power through all the lessons by googling the answers and asking on the forum. I&#39;d strongly advise against this. You&#39;ll ultimately learn nothing and only be fooling yourself later on in the process. If you do need to learn quickly then I&#39;d advise skipping over lessons and then coming back at a later date. Try to learn slowly, take notes, build the projects and have fun doing it.&lt;/p&gt;
&lt;h2&gt;2. Basic SQL and NoSQL querying&lt;/h2&gt;
&lt;p&gt;A large portion of learning to code focuses on the code part, but in reality the critical part is the data. Once you&#39;ve got through FreeCodeCamp and have a couple of apps under your belt, start to incorporate more SQL and NoSQL querying and integrate them into your apps. For example, if you&#39;ve created a Todo app that stores the data in state, try to port it so it can store data in both MySQL and MongoDB.&lt;/p&gt;
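&lt;p&gt;To make that port concrete, here is a minimal sketch of the two write paths. The &lt;code&gt;todos&lt;/code&gt; table, fields and collection are made up for illustration, and the driver calls (mysql2&#39;s &lt;code&gt;execute&lt;/code&gt;, the MongoDB driver&#39;s &lt;code&gt;insertOne&lt;/code&gt;) are referenced in comments rather than invoked, so the snippet runs standalone:&lt;/p&gt;

```javascript
// Illustrative sketch - the table/collection shape is an assumption,
// not something prescribed by FreeCodeCamp or SQLZoo.

// Build a parameterised INSERT for MySQL (what you'd pass to
// mysql2's connection.execute(sql, params)).
function buildSqlInsert(todo) {
  return {
    sql: "INSERT INTO todos (title, done) VALUES (?, ?)",
    params: [todo.title, todo.done],
  };
}

// Build the document you'd pass to the MongoDB driver's
// collection.insertOne(doc).
function buildMongoDoc(todo) {
  return { title: todo.title, done: todo.done, createdAt: new Date() };
}

const todo = { title: "Learn SQL", done: false };
console.log(buildSqlInsert(todo));
console.log(buildMongoDoc(todo));
```

&lt;p&gt;The same todo ends up as a flat row in MySQL and a document in MongoDB - porting one app between the two is a great way to feel the difference between the models.&lt;/p&gt;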
&lt;p&gt;If you get great at SQL, you can become an indispensable asset to any company. I recommend setting it up on your own PC and reading &lt;a href=&quot;https://sqlzoo.net&quot;&gt;sqlzoo&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;3. Get a Job&lt;/h2&gt;
&lt;p&gt;If you&#39;ve learnt to drive, swim or ride a bike, you know that it&#39;s only through hours of practice that you fully master something. Looking back, I&#39;m sure you remember many times where you fell off your bike, got a bit out of your depth in the sea or crashed your car (I did, twice 🙈). The initial &amp;quot;learning&amp;quot; stage is where you form a basic understanding. In the same way, once you&#39;ve gone through FreeCodeCamp and SQLZoo, you won&#39;t really know how to code, but you&#39;ll know the basics and that&#39;s all you need to get a junior programming job. So, don&#39;t be afraid to just dive in. Don&#39;t wait to &amp;quot;know&amp;quot; a technology or software at large - it&#39;s an endless journey.&lt;/p&gt;
&lt;p&gt;It&#39;s a whole other article to discuss how to pick out a junior job, and how to apply for them. But principally, try to get your foot in the door somewhere, anywhere. It will be 10 times more challenging to find your first job than your second. Wherever you land, be humble, learn something from everyone and keep up with building new projects and contributing to open source.&lt;/p&gt;
&lt;p&gt;That was the advice I wish I had been able to tell my 17-year-old self. I hope you are able to take something away from it.
Feel free to contact me via twitter - &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt; and I&#39;d be happy to help with any questions you have.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Cut to the Chase</title>
    <link href="/cut-to-the-chase/"/>
    <updated>2021-01-25T00:00:00Z</updated>
    <id>/cut-to-the-chase/</id>
    <content type="html">&lt;p&gt;Scheduling time with friends and family is even more critical nowadays. But I’ve noticed a trend that goes something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;“We should get together some time”
“Yeah when is good for you”
“I’m easy, when works for you”
“I’m easy too, could we do a weekday afternoon”
“Urrmm, I can’t do Tuesday and Thursday afternoons”
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You get the idea...
I&#39;m sure you&#39;ve all had conversations like that. We&#39;re all guilty of doing this. And what results? Nothing. Arrangements don&#39;t get made. In 6 months&#39; time, you might send or receive a message that says &amp;quot;oh, we never did this? Are you free soon?&amp;quot; and thus the cycle continues.&lt;/p&gt;
&lt;p&gt;But how could we make it better?&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;“Hey we should get together to play some games, how does Friday at 7 sound?”
“Sounds great but can’t do then, shall we say Saturday at 7?”
“Done deal”
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So much better. Why?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;📆 You aren’t left guessing about the other&#39;s calendar&lt;/li&gt;
&lt;li&gt;🙊 Fewer messages mean you’re more likely to actually book something in.&lt;/li&gt;
&lt;li&gt;🤝 You are both assured of the other&#39;s commitment to schedule something&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you’ve been sent to this website by someone, don’t take it personally! They love you and want to spend time with you! ❤️&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Redesigning my Site - Accessibility, Privacy and 100 PSI Scores</title>
    <link href="/redesign/"/>
    <updated>2021-01-11T00:00:00Z</updated>
    <id>/redesign/</id>
    <content type="html">&lt;p&gt;So, things look a little different around here. I took the time to overhaul the previous design of my site into something more simple. But why? This post is meta but bear with me.&lt;/p&gt;
&lt;p&gt;The three reasons are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accessibility&lt;/li&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;Privacy&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Accessibility&lt;/h2&gt;
&lt;p&gt;After hearing a talk by &lt;a href=&quot;https://twitter.com/hellojadene&quot;&gt;Jadene Aderonmu&lt;/a&gt; about the &lt;a href=&quot;https://www.w3.org/WAI/standards-guidelines/wcag/&quot;&gt;WCAG guidelines&lt;/a&gt;, I began to treat accessibility as a critical part of a website. Previously, I had pushed accessibility to the side, because it was difficult to test and seldom noticed.
But accessibility is vital. Recently, an &amp;quot;Ask Hacker News&amp;quot; post asked, &lt;a href=&quot;https://news.ycombinator.com/item?id=22918980&quot;&gt;&amp;quot;How do I prepare as a developer going blind?&amp;quot;&lt;/a&gt;. It made me realise how little notice I had taken of a large community. My website is only a drop in the digital sea. But, as with the &lt;a href=&quot;https://www.peoplehr.com/blog/2015/11/20/the-story-of-the-boy-and-the-starfish/&quot;&gt;boy and the starfish&lt;/a&gt;, &amp;quot;I made a difference for that one&amp;quot;. I don&#39;t have the power to change the web, but I do for my tiny corner of it.&lt;/p&gt;
&lt;p&gt;I found my accessibility had many glaring errors - one being the colours I used. The orange (&lt;a href=&quot;https://www.color-hex.com/color/ff6c2f&quot;&gt;#ff6c2f&lt;/a&gt;), used for links and other highlights, didn&#39;t meet the contrast guidelines of the WCAG. And, the dark theme didn&#39;t meet the mark either. I ripped these out in favour of a simple blue and white theme.&lt;/p&gt;
&lt;p&gt;The markup of the site was also lacking. Under the hood, my navigation bar wasn&#39;t being identified as a navigation bar by the browser. I cleaned this up and have now made the site navigable via a keyboard.&lt;/p&gt;
&lt;p&gt;I plan to improve my site&#39;s accessibility further by scanning all articles for basic metadata like &lt;code&gt;alt&lt;/code&gt; attributes.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img alt=&quot;Screenshot of an accessibility scanner showing a score of 92%&quot; src=&quot;./../../assets/images/after-access.png&quot; /&gt;
  &lt;em&gt;The results so far! No violations - up from a score of 80%!&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;Performance&lt;/h2&gt;
&lt;p&gt;My blog had become bloated - over 300KB. With most modern websites being well over 1MB, 300KB seems like small fry. Who cares, right? I took a step back and analysed what this site was - a collection of static HTML pages. On that basis alone, the homepage should not be 300KB - it&#39;s text on a page.&lt;/p&gt;
&lt;p&gt;I started trimming the fat.&lt;/p&gt;
&lt;p&gt;First, I pruned Google Analytics and Disqus (which I&#39;ll talk about later). Next, Google Fonts and the Twitter embed went. Finally, I redesigned the site to remove the dark mode toggle, unused CSS and images. This produced a simple design that focused on the reading experience only.&lt;/p&gt;
&lt;p&gt;After all this pruning, I was down to 140KB. There is still room for improvement I am sure.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img alt=&quot;Screenshot of Solarwinds tools, showing the page weight reduced from 300KB to 140KB&quot; src=&quot;./../../assets/images/after-perf.png&quot; /&gt;
  &lt;em&gt;Screenshot of Solarwinds tools, showing the page weight reduced from 300KB to 140KB&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;Privacy&lt;/h2&gt;
&lt;p&gt;The genesis of this redesign was simple. I had decided I wasn&#39;t getting any benefit out of Google Analytics and Disqus. And the readers of this blog were having their privacy invaded. I decided to remove them, so now my blog is 100% my content.&lt;/p&gt;
&lt;p&gt;For comments, readers can now contact me via email. And analytics I&#39;ve let go of altogether. Ultimately, I&#39;m writing this blog for me. I&#39;d love it if someone else got benefit out of this, but if they don&#39;t then that&#39;s ok too. I&#39;ve let go of the idea of being some famous super blogger - it wasn&#39;t a conscious hope but something in the back of my mind. I&#39;m not interested in optimising my content for clicks or ads. I&#39;m interested in improving my writing craft, whilst building a personal knowledge bank for myself.&lt;/p&gt;
&lt;h2&gt;Upcoming&lt;/h2&gt;
&lt;p&gt;This simple design comes along with a principle that I&#39;m now practising in my life - simplicity. I&#39;ve realised I want to embrace the things I value and discard those I don&#39;t. I&#39;m speaking not just of possessions here, but words, apps, health and more. I wouldn&#39;t describe myself as a minimalist - but it&#39;s along a similar vein. Soon, I&#39;ll be speaking more about this practice of &amp;quot;simplicity&amp;quot; and how it relates to me as a software engineer.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Downloading your Favorite YouTube Playlist Automatically</title>
    <link href="/youtube-playlist-downloader/"/>
    <updated>2020-12-10T00:00:00Z</updated>
    <id>/youtube-playlist-downloader/</id>
    <content type="html">&lt;p&gt;In light of Youtube-DL being taken down from GitHub, I decided to give it a go with a use case I happened to have.&lt;/p&gt;
&lt;p&gt;Lately, I&#39;ve been listening to lots of concerts/festival sets that are not available for traditional purchase. Although I have listened to them on YouTube, I didn&#39;t want to have the web page open, and the auto-play/queuing features are not as fleshed out as a proper music player&#39;s.&lt;/p&gt;
&lt;p&gt;I decided to write a quick script that I could run in a cron to pull down the latest version of the playlist I maintain. To keep an eye on it, I included a Slack webhook notification that lets me know when the playlist has been downloaded.&lt;/p&gt;
&lt;p&gt;My first version worked but I quickly realised that the script did not handle updates, only a complete re-download. To resolve this, youtube-dl has a &lt;code&gt;--download-archive&lt;/code&gt; flag that keeps track of downloaded videos.&lt;/p&gt;
&lt;p&gt;Check out the code below and hopefully you find it useful.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;#!/bin/bash

# The playlist to mirror (ID redacted)
playlist_url=&amp;quot;https://www.youtube.com/playlist?list=ZZZ&amp;quot;

cd ~/Projects/music || exit 1

# Download any new items as MP3s, skipping anything already listed
# in downloaded.txt (the --download-archive file)
youtube-dl --embed-thumbnail --download-archive downloaded.txt --no-post-overwrites --extract-audio --audio-quality 0 --format bestaudio --audio-format mp3 --yes-playlist --output &amp;quot;%(title)s-%(id)s.%(ext)s&amp;quot; &amp;quot;$playlist_url&amp;quot;

# Notify Slack that the run has finished (webhook path redacted)
curl -X POST --data-urlencode &amp;quot;payload={&#92;&amp;quot;username&#92;&amp;quot;: &#92;&amp;quot;youtubeb0t&#92;&amp;quot;, &#92;&amp;quot;text&#92;&amp;quot;: &#92;&amp;quot;Completed download of all Music playlist.&#92;&amp;quot;, &#92;&amp;quot;icon_emoji&#92;&amp;quot;: &#92;&amp;quot;:tv:&#92;&amp;quot;}&amp;quot; https://hooks.slack.com/services/ZZZ/YYY
&lt;/code&gt;&lt;/pre&gt;
</content>
  </entry>
  
  <entry>
    <title>Using RDS Snapshots</title>
    <link href="/using-rds-snapshots/"/>
    <updated>2020-09-21T00:00:00Z</updated>
    <id>/using-rds-snapshots/</id>
    <content type="html">&lt;p&gt;Recently, I had a case where I needed to gain access to an RDS Instance that I had long since deleted. To add insult to injury, the Bastion host to gain access to the Database server had also been deleted and it&#39;s VPC, Security Groups and all the other architecture components! Yikes.&lt;/p&gt;
&lt;p&gt;Fortunately, I had taken a final snapshot before it was deleted. Phew!&lt;/p&gt;
&lt;p&gt;I thought it was as simple as downloading the snapshot (which I assumed was just an SQL dump), importing it into a local database, and then Bob&#39;s your uncle. Not so fast... RDS snapshots aren&#39;t just database dumps - after all, that would be too easy! Instead, they are in a bespoke AWS format that cannot be parsed and imported with a tool.&lt;/p&gt;
&lt;p&gt;The documentation that does exist is fairly lacking, so I thought I&#39;d write a guide on how I gained access to the data within an RDS snapshot.&lt;/p&gt;
&lt;h2&gt;1. Create a new VPC&lt;/h2&gt;
&lt;p&gt;The first step is to create a new VPC. Name it what you like and give it any old CIDR block. Leave everything else standard.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;Screenshot of creating a VPC&quot; src=&quot;../../assets/images/create-vpc.png&quot; /&gt;
&lt;/div&gt;
&lt;h2&gt;2. Create a Security Group for the VPC&lt;/h2&gt;
&lt;p&gt;Next, you need to create a security group for the VPC you just created.
Simply add a name, then select the VPC you created in step 1.&lt;/p&gt;
&lt;p&gt;Next, in the Inbound Rules pane, create a new inbound rule to allow traffic on port 5432 (the PostgreSQL default port). The source should be listed as &amp;quot;Your IP&amp;quot;, which will prefill it with the IP address you are currently accessing the AWS Console from.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;Creating a AWS Security Group&quot; src=&quot;../../assets/images/create-sg.png&quot; /&gt;
&lt;/div&gt;
&lt;h2&gt;3. Restore the RDS Instance&lt;/h2&gt;
&lt;p&gt;Now you&#39;ve done that, go to RDS and then Snapshots.
Find the snapshot you&#39;re looking to get data from and click &amp;quot;Restore snapshot&amp;quot;&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
&lt;img alt=&quot;Select to restore the RDS Snapshot&quot; src=&quot;../../assets/images/restore-rds.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;This will send you to a configuration page for the instance you are about to boot.
First, change the instance type to something cheaper, because the default is more powerful than Deep Blue and costs about as much as a mission to the moon.&lt;/p&gt;
&lt;p&gt;Afterwards, change the VPC that the instance is in to the one created earlier, and likewise change the security group - in my case &amp;quot;snapshot-access-vpc&amp;quot; and &amp;quot;rds-snapshot-sg&amp;quot;. Additionally, select &amp;quot;Public Access&amp;quot;.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
  &lt;img alt=&quot;Restore RDS Options&quot; src=&quot;../../assets/images/restore-rds-options.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Now hit Create and your instance will launch with the snapshot data.&lt;/p&gt;
&lt;h2&gt;Hey Presto!&lt;/h2&gt;
&lt;p&gt;You should now be able to connect to your RDS instance using the details in the Connectivity pane shown when clicking on your RDS instance in the list. Hopefully you found this helpful!&lt;/p&gt;
&lt;p&gt;Make sure to tear down this access after you finish using it as it is by no means secure. It should purely be used for grabbing data retrospectively if a client requests it.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Improving Koru&#39;s API Performance</title>
    <link href="/api-performance/"/>
    <updated>2020-09-21T00:00:00Z</updated>
    <id>/api-performance/</id>
    <content type="html">&lt;p&gt;If you don&#39;t know already - I love performance. It solves a genuine frustration for users and provides meaty problems to sink your teeth into. Koru&#39;s API performance was a little lacking since we had added a lot of features, rewritten large swaths of code and generally not thought about it, since our system is not dependant on returning results quickly. Never the less, we chose to do some performance improvement primarily aimed at reducing database load. This work was in preparation for moving to a multi-tenant architecture. When the move to a central database cluster is complete, reducing database load for each endpoint will be critical to ensure that we continue to have acceptable response times.&lt;/p&gt;
&lt;p&gt;This work is still largely ongoing, but here I&#39;ll discuss some of the foundational work we have done.&lt;/p&gt;
&lt;h2&gt;Gathering Data&lt;/h2&gt;
&lt;p&gt;Firstly, we started by collecting data on query performance in our system. We don&#39;t use an application performance monitoring tool, but this is simple enough by attaching to the &lt;code&gt;receive&lt;/code&gt; &lt;a href=&quot;http://vitaly-t.github.io/pg-promise/global.html#event:receive&quot;&gt;event on the PG Promise constructor&lt;/a&gt;. We then want to report queries that take over 500ms to Slack so that we can focus our attention on them.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const pg = pgPromise({
  // {... other options }
  receive: async (data, result, e) =&amp;gt; {
    winston.debug(
      `${e.query} received ${result.rowCount} row(s) in ${result.duration}ms`
    );

    if (result.duration &amp;amp;&amp;amp; result.duration &amp;gt;= 500) {
      await Slack.reportSlowQuery(e.query, result.rowCount, result.duration);
    }
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This sends the query, the amount of rows it returned and the duration of the query to a Slack webhook which then produces a message like this:&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img alt=&quot;slow query slack message sample&quot; src=&quot;../../assets/images/slackmessage.png&quot; /&gt;
&lt;/div&gt;
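&lt;p&gt;The post doesn&#39;t show &lt;code&gt;Slack.reportSlowQuery&lt;/code&gt; itself; as a hedged sketch, it might look something like this, assuming Node 18+&#39;s global &lt;code&gt;fetch&lt;/code&gt; and a placeholder webhook URL:&lt;/p&gt;

```javascript
// Hypothetical sketch of Slack.reportSlowQuery - the real implementation
// isn't shown in the post, and the webhook URL is a placeholder.
const WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY";

// Build the Slack payload for a slow query report.
function formatSlowQuery(query, rowCount, duration) {
  return {
    username: "slow-query-bot",
    icon_emoji: ":turtle:",
    text: `Slow query (${duration}ms, ${rowCount} row(s)):\n${query}`,
  };
}

// POST the payload to the Slack incoming webhook.
async function reportSlowQuery(query, rowCount, duration) {
  await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(formatSlowQuery(query, rowCount, duration)),
  });
}

console.log(formatSlowQuery("SELECT * FROM responses", 10, 885).text);
```

&lt;p&gt;Keeping the formatting separate from the network call makes the message easy to tweak (and test) without spamming a real channel.&lt;/p&gt;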
&lt;p&gt;Ok, so now we&#39;ve got a steady stream of slow queries coming into our system! 🎉&lt;/p&gt;
&lt;h2&gt;Optimization&lt;/h2&gt;
&lt;p&gt;Quickly, we began to see a pattern. Most of the queries that were reported as slow contained two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A &lt;code&gt;WHERE&lt;/code&gt; clause on the client&#39;s ID - added to prevent access to other clients&#39; resources&lt;/li&gt;
&lt;li&gt;An &lt;code&gt;ORDER BY&lt;/code&gt; of a property within a &lt;code&gt;JSONB&lt;/code&gt; object&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The first was a fairly easy change: simply add an index on the client ID column.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;CREATE UNIQUE INDEX IF NOT EXISTS client_id_key ON clients USING BTREE (client_id);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I&#39;m still unsure why we hadn&#39;t got an index here before, but I put it down to the fact that there was very little in the way of cross-client security. It was as if the system said &amp;quot;well you&#39;re authenticated so have what you want!&amp;quot;. Thankfully, we have long since added these protections.&lt;/p&gt;
&lt;p&gt;The second was a little more of a challenge. Initially, I began by looking up whether it was possible to create an index on a certain property of a JSON structure, or whether you instead had to index the entire column.&lt;/p&gt;
&lt;p&gt;Soon I discovered it was possible! But there was a lot of conflicting advice as to whether it actually improved query performance, and whether building the index was a net performance negative.&lt;/p&gt;
&lt;p&gt;Anyway, I decided to create the index like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;CREATE INDEX candidate_response_score_indx ON responses((data-&amp;gt;&amp;gt;&#39;score&#39;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This took down the average query time from around 885ms to 7ms 🚀&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;There are still many more queries whose performance we can improve, but this was a great start. It taught me a lot about the internals of Postgres and about experimentation.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Learn but don&#39;t be afraid to experiment - there comes a point in learning where the best way to see if something will be a positive change is to just try it out&lt;/li&gt;
&lt;li&gt;&amp;quot;Reckons&amp;quot; don&#39;t always reflect reality - I believed there were lots of performance improvements to be made in a number of common operations, but after trying the queries out and collecting data, they were already very performant. Therefore, be data driven rather than &amp;quot;reckon&amp;quot; driven, even if those assumptions point you in the general direction.&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>Writing Useful Error Messages</title>
    <link href="/error-messages/"/>
    <updated>2020-09-04T00:00:00Z</updated>
    <id>/error-messages/</id>
    <content type="html">&lt;p&gt;Server-side developers often overlook the fact that they are not immune to considering user experience in the design of their systems. Part of this is consideration around error messages. As I spoke about in my article about crafting quality health checks, people don&#39;t get angry when something doesn&#39;t work, but rather when something doesn&#39;t work for some inexplicable reason. In server sided systems, we explain those reasons through the form of error messages returned via an API, or some other mechanism to get a piece of text explaining the issue back to the originator of the request. But how can we write useful error messages? Here are my 5 tips.&lt;/p&gt;
&lt;h2&gt;Make them actionable&lt;/h2&gt;
&lt;p&gt;Actionable advice will always be helpful for your users. But don&#39;t just tell them - provide a means for them to take the action you are suggesting. As an example, let&#39;s say your application requires the email address to be verified via a unique link sent to said email address. In the case where the user logs in without verifying the email address, you could return an error message like this: &amp;quot;Please verify your email address joe.bloggs@example.net to continue&amp;quot;. This might seem perfectly sensible and even actionable advice. But to go a step further and make it &#39;great&#39;, we could also include a button to &amp;quot;Resend the verification email&amp;quot;, because chances are the verification email you sent originally is long gone down the rabbit hole of that person&#39;s inbox. By providing a button to, in part, perform the action you require, the user will be more likely to follow through with it.&lt;/p&gt;
&lt;p&gt;In other cases, it might not be possible to partially perform an action that your system requires. In these cases, be as clear as possible about what the system would need to work - even if it requires multiple steps. For example, you might have an API that requires parameters &amp;quot;A&amp;quot;, &amp;quot;B&amp;quot; and &amp;quot;C&amp;quot;. But the caller only passes in parameter &amp;quot;A&amp;quot;. What should the error message be?
In most cases, it would probably be:
&lt;code&gt;Please specify parameter &amp;quot;B&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The user would then dutifully add parameter &amp;quot;B&amp;quot; and rerun the request. Only to be given &lt;em&gt;another&lt;/em&gt; error:
&lt;code&gt;Please specify parameter &amp;quot;C&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;At this point, keyboards will be broken, fists will be clenched and Irish coffees will be set to brew - not the response you want from a user of your service.&lt;/p&gt;
&lt;p&gt;Let&#39;s rewind and see if we can make the original error more actionable - the user has not passed in two required parameters for our API&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;errors&amp;quot;: {
    &amp;quot;B&amp;quot;: {&amp;quot;type&amp;quot;: &amp;quot;number&amp;quot;, &amp;quot;required&amp;quot;: true},
    &amp;quot;C&amp;quot;: {&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;, &amp;quot;required&amp;quot;: true}
  },
  &amp;quot;documentation&amp;quot;: &amp;quot;https://kryptoking.com/v2/bitcoin#parameters&amp;quot;,
  &amp;quot;message&amp;quot;: &amp;quot;Please specify parameter &#92;&amp;quot;B&#92;&amp;quot; of type Number and parameter &#92;&amp;quot;C&#92;&amp;quot; of type String. Please see the documentation at https://kryptoking.com/v2/bitcoin#parameters for more details&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok, so this is a little more than a message, but it is still part of the overall error response. What we have here is a concise breakdown of exactly what the caller of the API needs to do to perform a valid request, along with a link to further documentation if required. Additionally, the &lt;code&gt;errors&lt;/code&gt; object contains a payload that is easily parsable by a computer, which could even correct errors on the fly without the developer&#39;s intervention.&lt;/p&gt;
&lt;p&gt;In both examples, it was clear what the user needed to do next and that&#39;s what you should aim to achieve.&lt;/p&gt;
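&lt;p&gt;A response like this doesn&#39;t have to be hand-written per endpoint. As an illustrative sketch (the schema shape and documentation URL are made up, not from a real API), a small helper can aggregate every missing parameter into one response:&lt;/p&gt;

```javascript
// Hypothetical helper: build an aggregated validation error response
// from a parameter schema, so the caller learns about every missing
// parameter at once instead of one per request.
function buildValidationError(schema, params, docsUrl) {
  const errors = {};
  for (const [name, spec] of Object.entries(schema)) {
    if (spec.required && !(name in params)) {
      errors[name] = { type: spec.type, required: true };
    }
  }
  if (Object.keys(errors).length === 0) return null; // nothing missing

  // Human-readable summary alongside the machine-readable payload.
  const parts = Object.entries(errors).map(
    ([name, spec]) => `parameter "${name}" of type ${spec.type}`
  );
  return {
    errors,
    documentation: docsUrl,
    message: `Please specify ${parts.join(" and ")}. Please see the documentation at ${docsUrl} for more details`,
  };
}

const schema = {
  A: { type: "string", required: true },
  B: { type: "number", required: true },
  C: { type: "string", required: true },
};
console.log(buildValidationError(schema, { A: "btc" }, "https://example.com/docs"));
```

&lt;p&gt;Because the helper reports all missing parameters in one pass, the caller never hits the &amp;quot;fix B, then discover C&amp;quot; loop described above.&lt;/p&gt;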
&lt;h2&gt;Remove internal/tech lingo&lt;/h2&gt;
&lt;p&gt;This one is particularly for services consumed by non-technical audiences, who may be unfamiliar with terms we as developers use every day. We need to use language that, as Lego would put it, can be used by ages 4+. For example, if a user&#39;s authentication token is invalid, rather than displaying an error like &amp;quot;Error: JWT has expired&amp;quot;, we should sign the user out automatically and display a friendly message such as, &amp;quot;We signed you out because we weren&#39;t sure you were still there! Please log in again using your account details below&amp;quot;. Same error, different angle.&lt;/p&gt;
&lt;p&gt;I am always on a crusade with whatever I work on, to remove terms from systems that would require a dictionary to understand. You should aim to do this in any messages that get displayed to users as well. Otherwise, further to point number 1, it would not be as actionable as it could have been.&lt;/p&gt;
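&lt;p&gt;One way to enforce this is to translate internal error codes into friendly copy at the edge, so the lingo never escapes. A minimal sketch - the codes and wording here are invented for illustration:&lt;/p&gt;

```javascript
// Hypothetical map from internal error codes to user-friendly copy.
const FRIENDLY = {
  JWT_EXPIRED:
    "We signed you out because we weren't sure you were still there! Please log in again using your account details.",
  EMAIL_UNVERIFIED:
    "Please verify your email address to continue - we've sent you a new link.",
};

// Fall back to a generic message rather than leaking internals.
function friendlyMessage(code) {
  return FRIENDLY[code] ?? "Something went wrong on our side - please try again.";
}

console.log(friendlyMessage("JWT_EXPIRED"));
```

&lt;p&gt;The fallback matters: an unmapped code should never surface raw to the user.&lt;/p&gt;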
&lt;h2&gt;Get rid of them!&lt;/h2&gt;
&lt;p&gt;The best error messages are the ones that don&#39;t exist at all. Although you don&#39;t want to burden your system with translating a million different data input types to the ones you want, I&#39;m almost sure that whatever system you work on, there is a way you can automatically correct an error without the user knowing. Maybe it&#39;s a default value you can assign, maybe it&#39;s a common data type that users mix up; whatever the case, take a look at calls to your system that return errors and find patterns in them, to seek out ways you can improve the system or the documentation.&lt;/p&gt;
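&lt;p&gt;What can be silently corrected depends entirely on your system; as an illustrative sketch (the field names and default values are made up), a normaliser can fix common, unambiguous mistakes before validation ever fires:&lt;/p&gt;

```javascript
// Hypothetical input normaliser: silently fix common, unambiguous
// mistakes so the user never sees an error at all.
function normalise(input) {
  const out = { ...input };
  if (typeof out.email === "string") {
    out.email = out.email.trim().toLowerCase(); // stray whitespace / caps
  }
  if (typeof out.limit === "string" && /^\d+$/.test(out.limit)) {
    out.limit = Number(out.limit); // "10" sent instead of 10
  }
  if (out.limit === undefined) {
    out.limit = 20; // sensible default instead of a "missing limit" error
  }
  return out;
}

console.log(normalise({ email: "  Joe.Bloggs@Example.NET ", limit: "10" }));
```

&lt;p&gt;Each of these would otherwise have been a validation error; after normalising, none of them is.&lt;/p&gt;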
&lt;h2&gt;Don&#39;t blame the user&lt;/h2&gt;
&lt;p&gt;One of the worst things you can do in error messages is pin the blame on the user. Avoid using terms such as &amp;quot;Your&amp;quot; or &amp;quot;You&amp;quot;. In an example we used earlier, we spoke of a system that automatically logged you out if you were idle for too long. This could be blamed on the user - e.g., &amp;quot;You were idle for too long so you were logged out. Please log in again to continue&amp;quot;. It contains actionable steps, it&#39;s not robotic and there is no tech lingo. But it makes the user feel like an idiot because of a constraint the system imposed. Don&#39;t blame your users - blame yourself, if anyone. I&#39;ve seen no end of error messages, especially for unknown errors, that say words to the effect of: &amp;quot;Something went wrong in our system. Our engineers are working hard to fix it. Please cite reference number 12345 to our support channels if the issue persists&amp;quot;.&lt;/p&gt;
&lt;p&gt;Be empathetic to your users and use it as an opportunity to improve your service, maybe your documentation needs clarifying or maybe this is an easy mistake to make (misspelling of a parameter perhaps). Attack these problems, not your users.&lt;/p&gt;
&lt;h2&gt;Be direct but not robotic&lt;/h2&gt;
&lt;p&gt;&amp;quot;Invalid Password Provided&amp;quot;, &amp;quot;This field has an error&amp;quot; and &amp;quot;Bad Request: Unexpected end of input&amp;quot; are all error messages I&#39;ve received in the past; maybe you can remember some equally ridiculous ones yourselves. What&#39;s wrong with these messages? Well, aside from the fact that they are not actionable, they also use a tone that might be preferred by a Terminator T-1000 but probably not by the &amp;quot;humans&amp;quot;. Robotic language is rife in all kinds of messaging, but especially errors - the idea being that robotic phrasing is clear and direct for the person to understand. In reality, it comes off as distant, cold and frankly useless. Inject some humanity and a bit of fun into your error messages. Psychologists have found that a great way to instantly lift your mood is to simply smile, and smiling is contagious - if we add fun and humanness (if that&#39;s a word) into our tone of writing, then even though a user has an issue and may be slightly annoyed, it will be hard for them to maintain that position in the face of such friendliness. Kill them with kindness, so to speak (but maybe avoid actually killing your users, because that would make you a Terminator).&lt;/p&gt;
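&lt;p&gt;One low-effort way to apply this is a small translation layer between raw errors and what the user sees. A sketch - the error codes and the copy here are made up for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Hypothetical mapping from internal error codes to human-friendly copy
const friendlyMessages = {
  JWT_EXPIRED: &amp;quot;We signed you out because we weren&#39;t sure you were still there! Please log in again.&amp;quot;,
  RATE_LIMITED: &amp;quot;Whoa, that was fast! Give us a second to catch up and try again.&amp;quot;,
};

function toFriendlyMessage(errorCode) {
  // fall back to an honest, blame-free default for anything unmapped
  return friendlyMessages[errorCode] ||
    &amp;quot;Something went wrong on our end. We&#39;re looking into it - please try again shortly.&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The raw code still goes to your logs; only the friendly copy reaches the screen.&lt;/p&gt;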
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Hopefully you&#39;ve learnt some new tricks to help you craft great error messages. As an additional hint, try to include this as part of your code review process where you scan any external (or heck, even internal) facing messages and analyse them for quality. If possible, get someone unfamiliar with the system to review the messages so that they can be understood by all.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Building Awesome Application Health Checks</title>
    <link href="/health-checks/"/>
    <updated>2020-08-28T00:00:00Z</updated>
    <id>/health-checks/</id>
<content type="html">&lt;p&gt;For many, having a health check in your application may be somewhat of an afterthought. Maybe your application does have a health check, but you&#39;ve got no idea if it actually works. I didn&#39;t give it much thought until I truly understood that one of the key pillars of good development is monitoring. And to be able to monitor a service, you need to know if it&#39;s up, down or fallen down the stairs.&lt;/p&gt;
&lt;p&gt;Application health checks can be useful not only for internal usage but also external and can aid with supporting customers. Take a look at many large companies and they all have public status pages (especially developer focused ones). Aren&#39;t they embarrassed whenever they go down? Perhaps, but it&#39;s a natural part of all systems, and the benefit to them is that customers can easily see that they are aware of the problem and are actively working to fix it. Otherwise, they&#39;d be crowding support channels and generally getting frustrated.&lt;/p&gt;
&lt;h2&gt;So health checks are important, but how can we create one?&lt;/h2&gt;
&lt;p&gt;If you have a health check in your application, chances are it might look something like this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import express from &amp;quot;express&amp;quot;;

const router = express.Router();

router.all(&amp;quot;/health&amp;quot;, (req, res) =&amp;gt; {
  res.sendStatus(200); // res.status(200) alone only sets the status - sendStatus actually sends the response
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok! Good start. In this basic example, this Express API is telling the caller of the &lt;code&gt;/health&lt;/code&gt; endpoint that it is up. That can be important information to know, but it&#39;s fairly redundant in some cases. For example, if you host your application using AWS Lambda, Azure Functions, or whatever the GCP one is, then in reality all this endpoint is doing is telling you that the cloud provider has not gone down - which is far less likely than your application being unhealthy.&lt;/p&gt;
&lt;h2&gt;Define &amp;quot;unhealthy&amp;quot;&lt;/h2&gt;
&lt;p&gt;A good place to start is to define what unhealthy means for your service. In other words, at what stage will your core application become unusable to the point where your customers will react something like this.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://media0.giphy.com/media/M11UVCRrc0LUk/giphy.gif&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Nowadays, systems are rarely completely &amp;quot;down&amp;quot; but rather have failure points across a range of services. For example, if you run a microservice architecture then perhaps one piece of functionality will not work while the core of the application will. Your health checks and your messaging to customers need to reflect this.&lt;/p&gt;
&lt;p&gt;At Cappfinity, we depend on a number of services to run a pipeline for customers and their candidates alike. Our &amp;quot;unhealthy&amp;quot; state would be when our scoring provider is down, our API and database are unreachable, or our authentication provider is down. If any of these three components go down then we would not be able to deliver a core service to our customers. Depending on your business, you may have more or fewer components that you critically depend on.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As an aside, try to look into each one of these components to see if you can deliver a reduced experience if they are unavailable. Netflix do this to great success and you can read more about that here: &lt;a href=&quot;https://netflixtechblog.com/making-the-netflix-api-more-resilient-a8ec62159c2d&quot;&gt;https://netflixtechblog.com/making-the-netflix-api-more-resilient-a8ec62159c2d&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Improving the health check&lt;/h2&gt;
&lt;p&gt;Ok so we&#39;ve defined what &amp;quot;unhealthy&amp;quot; is, so now we need to build the health check itself.&lt;/p&gt;
&lt;p&gt;First, let&#39;s dive into each of our critical components and add an endpoint or monitoring around if they can connect to all of their dependant third parties.
For example, if our API depends on a Database and an authentication provider then we might have the following&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import express from &amp;quot;express&amp;quot;;
import Database from &amp;quot;./lib/database&amp;quot;;
import Auth from &amp;quot;./lib/auth&amp;quot;;

const router = express.Router();

router.all(&amp;quot;/health&amp;quot;, async (req, res) =&amp;gt; {
  // each check resolves to false instead of throwing, so a down dependency can&#39;t crash the endpoint
  const db = await Database.connect().catch(() =&amp;gt; false); // check we can connect to the database
  const auth = await Auth.getUserById(1).catch(() =&amp;gt; false); // check we can get a known user from our auth provider

  return res.json({
    db: db ? &amp;quot;healthy&amp;quot; : &amp;quot;unhealthy&amp;quot;,
    auth: auth ? &amp;quot;healthy&amp;quot; : &amp;quot;unhealthy&amp;quot;,
  });
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Great, so now we&#39;re able to tell any caller of this endpoint that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The API is up&lt;/li&gt;
&lt;li&gt;The Database is up&lt;/li&gt;
&lt;li&gt;The authentication provider is up&lt;/li&gt;
&lt;li&gt;The API can connect to the database&lt;/li&gt;
&lt;li&gt;The API can connect to the authentication provider&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The last two points are key, because although you might be able to create a new service that pings these dependencies to see if they are up, that does not tell the caller whether those services can be reached. Often, firewalls, static IPs, bad credentials, and other application issues can prevent one service from connecting to another. So it&#39;s important to test the connectivity from whichever service relies on it.&lt;/p&gt;
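&lt;p&gt;One caveat when checking connectivity this way: a dependency that hangs will make the health check itself hang. A small timeout wrapper keeps the endpoint responsive - this helper is my own sketch, not something the checks above require:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Sketch: treat any check that takes longer than ms milliseconds as unhealthy
function withTimeout(promise, ms) {
  const timeout = new Promise((resolve) =&amp;gt; setTimeout(() =&amp;gt; resolve(false), ms));
  // whichever settles first wins; a slow or failing dependency resolves to false
  return Promise.race([promise.catch(() =&amp;gt; false), timeout]);
}

// usage: const db = await withTimeout(Database.connect(), 2000);
&lt;/code&gt;&lt;/pre&gt;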
&lt;h2&gt;Internal vs External&lt;/h2&gt;
&lt;p&gt;That health check endpoint is exactly what developers need to assist with debugging a service. However, telling a customer that your database is down is not really relevant to them and may even confuse them. In reality, if you are building a customer-facing experience for your service status then you need to think about how to communicate what the impact is for them.&lt;/p&gt;
&lt;p&gt;For example, take a look at Slack&#39;s status page&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/slackstatus.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;You can see that it doesn&#39;t contain detailed breakdowns of information, but rather individual components of the system that a customer will be familiar with - messaging, login, notifications etc. You should follow a similar pattern. If your system is an API that delivers Woody Harrelson&#39;s face as placeholder images for frontend developers, then perhaps you&#39;d have the following services you can report on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The API&lt;/li&gt;
&lt;li&gt;Harrelson import system - for importing a continuous feed of Woody Harrelson&#39;s and cropping out everything but the face&lt;/li&gt;
&lt;li&gt;Harrelson&#39;s calculator service - the service for getting the correct Woody Harrelson picture based on the size requested by the caller&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can then report those data points to a frontend status page whilst keeping your in-depth health checks that are only accessible by your developers internally.&lt;/p&gt;
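&lt;p&gt;In practice, that can be as simple as deriving a second, coarser endpoint from the same checks. A sketch - the component grouping here is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Sketch: collapse detailed internal checks into customer-facing components
function toPublicStatus(internal) {
  // customers see &amp;quot;api&amp;quot;, not &amp;quot;db&amp;quot; or &amp;quot;auth&amp;quot; - group the internals behind it
  const apiHealthy = internal.db === &amp;quot;healthy&amp;quot; &amp;amp;&amp;amp; internal.auth === &amp;quot;healthy&amp;quot;;
  return {
    api: apiHealthy ? &amp;quot;operational&amp;quot; : &amp;quot;degraded&amp;quot;,
  };
}

// the internal /health stays developer-only; a public /status serves toPublicStatus(...)
&lt;/code&gt;&lt;/pre&gt;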
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/seriouswork.png&quot; /&gt;
  &lt;em&gt;There we go! Ready for production!&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;Resistance&lt;/h2&gt;
&lt;p&gt;In the past, I&#39;ve seen some companies be hesitant about revealing in-depth information about the health of their systems. If you face this kind of opposition then you can reason that the company will benefit for the following reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The company is transparent about its successes and shortcomings, which appeals to people and businesses&lt;/li&gt;
&lt;li&gt;The company is proud of great engineers debugging systems and fulfilling any contractual SLAs&lt;/li&gt;
&lt;li&gt;Developers who consume your application don&#39;t have to ring your support line to get help if the system is down&lt;/li&gt;
&lt;li&gt;You&#39;re following in the footsteps of some of the largest companies in the world - Netflix, Slack, Stripe, Apple etc.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;I hope you&#39;ve learnt something and can go out and improve your system&#39;s health checks! Please feel free to share with me any of your tips for creating great health checks in the comments below. That&#39;s right! My poky little blog has comments. It&#39;s right posh round &#39;ere now.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>ATS Resiliency</title>
    <link href="/ats-resiliency/"/>
    <updated>2020-08-24T00:00:00Z</updated>
    <id>/ats-resiliency/</id>
<content type="html">&lt;p&gt;As with all modern enterprise SaaS platforms, Koru has a mechanism to send data about a candidate&#39;s results back to a client&#39;s ATS (Applicant Tracking System) or CRM (Customer Relationship Management) system. Since Koru is event driven and the traffic is not consistent, the decision (which pre-dates me) was made to make these &amp;quot;ATS integrations&amp;quot; Lambdas triggered from DynamoDB.&lt;/p&gt;
&lt;p&gt;So the setup is this: when a candidate completes an assessment on our platform, it goes through all the pipes and tubes, past Gerald the field mouse that generates the scores, and then finally back into our API, where the completion of the scoring is recorded. Now we want to get this completed candidate data to our client&#39;s system, so our API creates a new record in the DynamoDB table for that client&#39;s ATS with all the information it needs. This then triggers the Lambda that sends the data to the ATS of choice. Sounds easy, right?&lt;/p&gt;
&lt;p&gt;I thought as much, but found there were a number of holes that the original developers fell into when developing this software, causing huge resiliency issues. I thought I would share my findings with all those trying to do the very common task of &amp;quot;submit data via HTTP to a 3rd party&amp;quot;. In our case, the third-party API is the Cappfinity external scoring mechanism, which is built by another team at the company, which allowed us to work closely with them. However, most fixes were addressed on the Koru side.&lt;/p&gt;
&lt;h3&gt;1. Lambda was suffering from &amp;quot;socket exhaustion&amp;quot;&lt;/h3&gt;
&lt;p&gt;This was the biggest problem we had: &amp;quot;socket exhaustion&amp;quot;. I hadn&#39;t stumbled across the term apart from a joke I&#39;d seen on IRC, so it took me a while to discover what the problem actually was and why it was happening in our system.
What we observed was that the Lambda function would fire a request for each record that we hadn&#39;t marked as &amp;quot;processed&amp;quot; in our system (a record is considered &amp;quot;processed&amp;quot; when we get a 200 response from the 3rd party API). Often this could be just one request, which would succeed. But sometimes, if records had failed over time, we would build up a backlog of pending records to be sent to the third party. In these cases, the first few would succeed but quickly we&#39;d get the error &amp;quot;ETIMEDOUT&amp;quot; or &amp;quot;ESOCKETTIMEDOUT&amp;quot;.&lt;/p&gt;
&lt;p&gt;A quick google of these errors led me to discover that Node performs a DNS lookup for each request before making the request to the URL you&#39;ve specified. If you have many requests trying to process in parallel and the DNS lookups for a few of them take a bit too long, the program can&#39;t keep up with issuing new sockets to make requests on. Uber talks more about why this occurs over &lt;a href=&quot;https://eng.uber.com/denial-by-dns/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So we knew the problem was that it was trying to process too many requests in parallel. To the code!
Immediately, I spotted the problem.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;return new Promise((resolve) =&amp;gt; {
  Promise.all(
    records.map(async (record) =&amp;gt; {
      const payload = buildPayload(record);
      try {
        const response = await processPayload(payload);
      } catch (e) {
        await markRecordAsFailed(record);
      }
      return Promise.resolve(payload);
    })
  ).then((result) =&amp;gt; resolve(result));
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See the problem? Well there are a couple...&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;processPayload&lt;/code&gt; is an async function, and &lt;code&gt;Array.map&lt;/code&gt; with an async callback kicks off every call immediately - &lt;code&gt;Promise.all&lt;/code&gt; waits for them all to settle, but it does nothing to limit how many run at once&lt;/li&gt;
&lt;li&gt;Because every record fires in parallel, a large backlog means hundreds of simultaneous requests, which maxes out the sockets&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It&#39;s worth noting that this code ran in production for many months before these issues appeared in response to increased load.&lt;/p&gt;
&lt;p&gt;What was the solution here? It turns out to be fairly simple: rewrite it as a regular &lt;code&gt;for&lt;/code&gt; loop.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;for (let i = 0; i &amp;lt; records.length; i += 1) {
  const payload = buildPayload(records[i]);
  try {
    const response = await processPayload(payload);
  } catch (e) {
    await markRecordAsFailed(records[i]);
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code above means that we submit the requests to the 3rd party one at a time, each awaited before the next begins.&lt;/p&gt;
&lt;h3&gt;2. The 3rd party was getting overwhelmed with requests&lt;/h3&gt;
&lt;p&gt;After we deployed the code above, we found that the 3rd party API was getting hammered by the number of requests we had to process (even though they were sent one at a time). Although this sort of seems like &amp;quot;not my problem&amp;quot;, I chose to issue a fix for this.
The solution was to simply limit the number of requests that we sent to 100 per &amp;quot;run&amp;quot; of the Lambda (i.e., we would only process a maximum of 100 records each time the Lambda got triggered, which was for all DynamoDB inserts into the table).&lt;/p&gt;
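&lt;p&gt;Capping a run like that needs only a couple of lines before the processing loop. The 100 here was our number - pick whatever your third party can tolerate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const MAX_RECORDS_PER_RUN = 100;

// take at most 100 records this run; the rest stay pending for the next trigger
function takeBatch(records) {
  return records.slice(0, MAX_RECORDS_PER_RUN);
}
&lt;/code&gt;&lt;/pre&gt;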
&lt;h3&gt;3. Use Https agent with a pool&lt;/h3&gt;
&lt;p&gt;We added a further fix to the problem above by creating our own HTTPS agent and passing it into the &lt;code&gt;request-promise&lt;/code&gt; library&#39;s options.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const https = require(&amp;quot;https&amp;quot;);
const request = require(&amp;quot;request-promise&amp;quot;);

// cap concurrent sockets and reuse connections across requests
const agent = new https.Agent({ maxSockets: 25, keepAlive: true });

await request({
  uri: url, // the 3rd party endpoint
  agent, // use the shared agent rather than a new one per request
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This meant that we would share the agent across all requests that were made from the program rather than creating a new one per request. This again prevented socket timeout issues as well as slowing the rate that we sent requests to the API.&lt;/p&gt;
&lt;h3&gt;4. Cache JWT tokens (or any other authentication tokens you are using) for their lifetime&lt;/h3&gt;
&lt;p&gt;A few months after we deployed the service, the Cappfinity team called us up to say that we had maxed out their M2M Auth0 tokens. Oops.
When digging into the code, I found that a token was generated not only each time we processed records, but for each individual record itself. Double oops.
I was hesitant to cache JWT tokens because it just feels a bit wrong and a cached token has the potential to become invalid in the case of rotated secrets and so forth, but we went ahead with it anyway. We cached the token by storing it in another DynamoDB table - the mix of cheap storage and quick, easy access was hard to compete with. Each time the program boots, we check the table for a token and then check whether that token is still valid. If it is, we proceed as normal. If it&#39;s not valid, we regenerate it and update it in the table.&lt;/p&gt;
&lt;h3&gt;5. Don&#39;t submit data to the 3rd party when it is down&lt;/h3&gt;
&lt;p&gt;Systems go down, that&#39;s a fact. Or maybe a deployment breaks everything. Either way, we should have some mechanism so that if we mark a candidate record as failed to process, we can be sure this is a data issue rather than an issue with any part of the underlying system. There are many ways to do this, but without working too heavily with the third-party API, we were able to implement a basic sanity check. I asked if they had a health check mechanism, which they did. I asked to look at the code and this is (roughly) what I saw&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const router = express.Router();
router.all(&amp;quot;/health&amp;quot;, (req, res, next) =&amp;gt; {
  return res.send(200);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Yup, that&#39;s it! I&#39;ll dive into how to build great health check mechanisms in another post, but suffice it to say, this doesn&#39;t give an accurate representation of a system&#39;s health. Nonetheless, we implemented a check that would first ping the health check endpoint to see if we got a good response back. We created tickets to get this health check improved so that we can have more confidence in it going forward, but it is a good first start. Additionally, this basic check would have saved us during a two-day outage of the Cappfinity API. In that time, Koru was firing requests at the Cappfinity API, getting no response, and failing the records accordingly. This is correct behaviour, but it was such a headache to get them submitted again when the API was back up, so this health check will prevent those sorts of situations.&lt;/p&gt;
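&lt;p&gt;The gate itself is a few lines at the top of the run: bail out early and leave the records pending rather than marking them failed. A sketch, with the request function injected so it works with &lt;code&gt;request-promise&lt;/code&gt; or anything else:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Sketch: skip the whole run if the 3rd party looks down
async function thirdPartyIsUp(request, healthUrl) {
  try {
    await request({ uri: healthUrl, timeout: 5000 });
    return true;
  } catch (e) {
    return false; // down or unreachable - records stay pending, not failed
  }
}
&lt;/code&gt;&lt;/pre&gt;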
&lt;h3&gt;6. Filter out DynamoDB modify and remove triggers to reduce retries&lt;/h3&gt;
&lt;p&gt;I said earlier that the Lambda was triggered for DynamoDB inserts, but as of August 2020, there is no way within AWS itself to be this specific. So how did we do it, and why?
We implemented this by looking at the &lt;code&gt;eventName&lt;/code&gt; property on the &lt;code&gt;event&lt;/code&gt; that is passed as the first argument to the Lambda function. For DynamoDB events, this property signifies what type of event occurred. When a record is updated, it&#39;s &lt;code&gt;MODIFY&lt;/code&gt;; when a record is deleted, it&#39;s &lt;code&gt;REMOVE&lt;/code&gt;; and so on. The one we are interested in is &lt;code&gt;INSERT&lt;/code&gt;, which occurs, logically, when a record is inserted.
It was then a matter of checking the first record&#39;s &lt;code&gt;eventName&lt;/code&gt; like this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const triggerType = event.Records[0].eventName;
const acceptedTriggers = [&amp;quot;INSERT&amp;quot;];
if (!acceptedTriggers.includes(triggerType)) {
  return {
    message: &amp;quot;Invalid trigger for this lambda&amp;quot;,
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But why did we do this?&lt;/p&gt;
&lt;p&gt;As part of our system, we update the DynamoDB table to indicate how many &amp;quot;retries&amp;quot; a record has attempted and whether it has been processed or not. This updating meant that each time we processed a record it also triggered the Lambda, so it just spun in a circle, retrying over and over again. Pretty hilarious really.
By cutting the Lambda down to trigger only on inserts, we not only saved money but also made it more efficient. We only wanted the Lambda to trigger for new records (retrying old ones as part of the same run), and this bit of code achieved that purpose.&lt;/p&gt;
&lt;h3&gt;7. Special error cases&lt;/h3&gt;
&lt;p&gt;We worked closely with the Cappfinity team on this one. We found that in some cases, somehow, records that shouldn&#39;t have been there were sitting in the database ready to submit back to the ATS. These were the cases we were trying to handle:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Straight up invalid records&lt;/li&gt;
&lt;li&gt;The candidate hadn&#39;t actually completed the assessment&lt;/li&gt;
&lt;li&gt;We had already sent data but had not marked it as processed in our system.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To solve this problem, we needed to introduce some new status codes so that we could categorize the errors we got back. Previously, if we sent an invalid request of any kind, it would return a 500 error to us. This categorization of errors meant we could handle each one uniquely. For completely invalid records, this meant just deleting them, whilst for duplicate submissions, we marked them as &amp;quot;processed&amp;quot;.&lt;/p&gt;
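&lt;p&gt;The handling then becomes a simple branch on the status code. The specific codes below are illustrative, not Cappfinity&#39;s actual contract:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Sketch: pick an action per error category instead of retrying everything
function actionForStatus(statusCode) {
  switch (statusCode) {
    case 422: // straight up invalid record
      return &amp;quot;delete&amp;quot;;
    case 409: // duplicate - we already sent this one
      return &amp;quot;mark-processed&amp;quot;;
    default: // anything else is worth retrying later
      return &amp;quot;retry&amp;quot;;
  }
}
&lt;/code&gt;&lt;/pre&gt;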
&lt;p&gt;After we deployed this change, we let it chew through a huge backlog of records that were marked as unprocessed and soon we were down to 0 records marked as &amp;quot;unprocessed&amp;quot;. Great success!&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;I&#39;m really happy with how solid and robust the system is now. There are still improvements that could be made, but that&#39;s the way with all things. I hope to have this system taken off my list of &amp;quot;most modified&amp;quot; and into the pile of systems that &amp;quot;just work&amp;quot;(tm). None of these techniques are new, but the key is to take the time to first identify why things happen. This resiliency work took me to places that felt out of my comfort zone and put the gaps in my knowledge under the microscope - namely, how NodeJS actually works under the hood and networking principles. I&#39;ve since taken a couple of courses and watched a number of conference talks on these very topics.&lt;/p&gt;
&lt;p&gt;Takeaways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use your work to identify gaps in your knowledge&lt;/li&gt;
&lt;li&gt;Don&#39;t assume - look for the whys and measure to test the hypothesis&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Rebuilding a Monolith</title>
    <link href="/rebuilding-a-monolith/"/>
    <updated>2020-08-20T00:00:00Z</updated>
    <id>/rebuilding-a-monolith/</id>
<content type="html">&lt;p&gt;It&#39;s no secret that microservices are the hotness right now (I won&#39;t say new at this point because they&#39;re fairly well established). But you only have to Google the phrase &amp;quot;Why not to use Microservices&amp;quot; to see a treasure trove of articles about why to be sceptical of this pattern. As with all things, in some instances it&#39;s the &amp;quot;correct&amp;quot; approach and in others it&#39;s not. I&#39;m not entering into a holy war over which is better, nor am I a Basecamp hipster who despises frameworks. This is a story about why and how we collapsed a few &amp;quot;microservices&amp;quot; into our central API.&lt;/p&gt;
&lt;h2&gt;The Setup&lt;/h2&gt;
&lt;p&gt;Koru is a pre-interview assessment filter. That sounds wordy, but basically it&#39;s designed for large corporations who get thousands of applications for jobs, and the tool helps companies see what different strengths a candidate has based on an assessment. The product is fairly simple infrastructure-wise: an event-driven microservices architecture with a central API, a React frontend for customers and an admin interface for employees. In addition, we have systems to generate PDFs and send emails as part of the candidate lifecycle through our system.&lt;/p&gt;
&lt;p&gt;What we wanted to combine was the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Our admin interface and its API&lt;/li&gt;
&lt;li&gt;Our internal tooling API&lt;/li&gt;
&lt;li&gt;Our internal API documentation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;...into our main API that serves customers and their candidates.&lt;/p&gt;
&lt;p&gt;It&#39;s worth noting that these were all separate repositories.&lt;/p&gt;
&lt;h2&gt;Why?&lt;/h2&gt;
&lt;p&gt;The reasons for doing this were simple&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Our admin portal was incredibly buggy, and since we had moved over to a new assessment provider, our API no longer provided us with all the functionality that we needed. Additionally, it had no test suite or documentation. The biggest reason, however, was that Koru (currently) runs as a single-tenant model. In other words, each client gets their own API and DB instance. The problem with this is that the admin API then needs to connect to each customer&#39;s database. And because we&#39;re not insane, that means we have to create a VPC peer between the admin API and each individual customer database. Not only is this complex to maintain and debug, it is also a pain to update. Furthermore, it was hosted on its own EC2 instance, which meant that it was costing us a reasonable amount of money. This would be fine if it saw high usage, but it was barely ever used.&lt;/li&gt;
&lt;li&gt;Our internal tooling API had the same issue as the admin API, we wanted to keep things simple from a &amp;quot;what has access to the DB&amp;quot; point of view so we chose to collapse this one as well.&lt;/li&gt;
&lt;li&gt;The API documentation was something that we wanted constantly updated along with new changes to the API. Making developers go to a whole new repo and raise a whole new PR meant that it seldom got updated. We chose to combine it so it can be updated as part of any pull request that changes the code.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How we did it&lt;/h2&gt;
&lt;p&gt;As all things should be, we tackled these one at a time and released each one separately, so there were no confused boundaries about where large swaths of code got introduced.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The documentation
The documentation site was written using Slate, which is a Ruby-based project. Adding this to our &lt;em&gt;NodeJS/Express&lt;/em&gt; API sounds painful, but all we did was put it in a new folder, &#39;devdocs&#39;, and treat it as its own separate project. Next we added a GitHub Action (our CI/CD tool of choice) to build and deploy the &#39;devdocs&#39; to the S3 bucket they are served from.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://blogs.psychcentral.com/hidden-disabilities/files/2020/02/easy-button-300x300.png&quot; alt=&quot;That was easy&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Well that was easy&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Right, next...&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Internal Tooling API
Since this was a new service written only a few months prior, it was already similar to our main API in terms of the patterns used. Additionally, it was written in Typescript, so the code could quite literally be copied across. We took the time to fix up some tools that were broken and created new ones. We also added a few simple sanity tests for each one so that we could verify they worked.&lt;/li&gt;
&lt;/ol&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Admin Portal
This is where things became a bit tricky. As stated previously, one of the main reasons we wanted to merge these API&#39;s was because the current API didn&#39;t really serve the needs of the system, we were sort of bending it to our needs and in a number of cases, it snapped under the pressure. We embarked first on a mission of discovery, what were all the things that it did, why did it do them, and how did we want to improve them.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The API was written in Python using Flask, as this is what the previous team had mostly used. But since there was little Python expertise within the team, we decided to rewrite it in Typescript. One of the main improvements we wanted to make was to merge the creation of different records in various tables into one easy-to-use endpoint. Previously, each individual section of anything being created (like a study, survey etc.) had to be saved individually. This led to confusion for users and confusion when debugging issues where some data was relied on by another part of the system. By combining them, we made the save process one simple API call and, more importantly, one set of errors to deal with.
Along with this merge, we added new tests, both unit and integration, so that we could verify the integrity of this process. We also made plans to improve the service further by adding tools for Koru employees to &amp;quot;copy&amp;quot; existing studies to make them easier to set up.&lt;/p&gt;
&lt;p&gt;The difficult part of this merge was authentication. Our main API had no mechanism to authenticate requests as coming only from Koru employees. Thankfully this was a breeze, because we use Auth0 to handle authentication. We simply wrote a new middleware handler that looks up the user in Auth0 based on the JWT&#39;s &lt;code&gt;req.user.sub&lt;/code&gt; field and checks certain properties of that user in Auth0&#39;s system.&lt;/p&gt;
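&lt;p&gt;The shape of that middleware is roughly this. The &lt;code&gt;app_metadata&lt;/code&gt; flag and the injected lookup are illustrative - check your own Auth0 setup for the real field names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Sketch: only let requests through when Auth0 says the user is an employee
function requireEmployee(getUser) {
  return async (req, res, next) =&amp;gt; {
    const user = await getUser(req.user.sub); // e.g. via the Auth0 Management API
    if (user &amp;amp;&amp;amp; user.app_metadata &amp;amp;&amp;amp; user.app_metadata.employee) {
      return next();
    }
    return res.status(403).json({ message: &amp;quot;Employees only&amp;quot; });
  };
}
&lt;/code&gt;&lt;/pre&gt;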
&lt;p&gt;The interface was built in React and, although not inherently bad, it was clearly coded in a bit of a rush. Things had been hacked in over time as new features came along, so we took the time to clean it up (as the boy scouts that we are) before merging it into our central tooling interface. Again we added tests to make sure that the interface loaded the data correctly and behaved the way we expected.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Overall, I&#39;m really happy with &amp;quot;the big merge&amp;quot;, because it&#39;s meant that we&#39;ve created a useful resource for Koru employees and they can self-serve for whatever customers ask of them. Additionally, it will be easier to add documentation for the API and consolidate that back to a single work ticket. On top of this, the main Koru API is now a complete CRUD API rather than having bits of &amp;quot;R&amp;quot; and &amp;quot;U&amp;quot; with &amp;quot;C&amp;quot; and &amp;quot;D&amp;quot; being all the way over in a different service! Hooray!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Re-architecting our PDF Generation</title>
    <link href="/pdf-generation/"/>
    <updated>2020-07-31T14:06:03Z</updated>
    <id>/pdf-generation/</id>
    <content type="html">&lt;p&gt;When I joined the Koru team, one of the biggest issues with our pipeline was the PDF generation system.
We have PDFs in the first place as a nice report that customers get about each of their candidates who takes an assessment on our system.
These PDFs are saved to S3, and we then send the customer a signed link to download them.&lt;/p&gt;
&lt;p&gt;The major problem, however, was that the PDF system ran on two big, expensive &lt;code&gt;m4.large&lt;/code&gt; boxes (~$74 a month each!), plus another one for our QA environment.
You wouldn&#39;t mind paying a premium for the machines if the service worked, but it constantly failed to generate PDFs for unknown reasons. It was one of those services that was just a black hole to us.
I don&#39;t like having any black holes in systems that I&#39;m charged with running and maintaining. It pays to be extremely knowledgeable about the system (within reason, of course).&lt;/p&gt;
&lt;p&gt;So we&#39;ve got the classic combination: too expensive, doesn&#39;t work, and a &amp;quot;known unknown&amp;quot;. What would you do?&lt;/p&gt;
&lt;p&gt;Historically, the PDF system worked by opening the web page in an instance of Chrome and then &amp;quot;printing&amp;quot; it to a PDF. This was extremely memory intensive, and the box could only handle a couple of PDFs at a time.
However, there was no mechanism for it to rate limit itself. Therefore, if we fired five requests at it, the box crashed and had to be manually rebooted.
Although we tried a number of ways to fix it, what we really wanted was something event-driven, so it could fail safely and manage its own processing rate with ease. Since the application was hosted on EC2, we couldn&#39;t configure it to trigger from SQS without additional overhead, so we decided to rewrite.&lt;/p&gt;
&lt;p&gt;The first MVP was a Node Lambda that called a service called Api2Pdf - a simple API-as-a-service platform that, given a URL and some parameters, creates a PDF for you.
We then saved this PDF buffer to our own S3 bucket and deleted the one that they had just generated.
Unfortunately, we had a number of bugs with this system. Because the web page we took a PDF of often took up to 700ms to load, the PDF was sometimes generated before any of the content had loaded. It was seemingly only an edge case, though.
The obvious fix was to add some kind of CSS selector that the API would wait for &lt;em&gt;before&lt;/em&gt; generating the PDF - but this was not possible at the time with Api2Pdf&#39;s API.&lt;/p&gt;
&lt;p&gt;So the next step was to add some protective validation that checked whether the PDF had any content before it was accepted. If it was blank, an error was thrown so it could be retried.&lt;/p&gt;
&lt;p&gt;This worked for a couple of weeks, and then we got some more errors... This time, the PDF had content, but the underlying data hadn&#39;t yet loaded. This meant it passed the &amp;quot;blank&amp;quot; check but ultimately hadn&#39;t rendered correctly.
So I got working on another fix. My first attempt was to take a sample PDF and compare the buffers to see if they were the same. However, there were edge cases this could not handle.
I then found PDF.js from Mozilla, which has a handy feature for extracting the text of a PDF. I decided to compare the generated PDF&#39;s text with a sample I had taken.&lt;/p&gt;
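&lt;p&gt;The validation logic boiled down to something like the sketch below. The helper names are illustrative, and the PDF.js text extraction itself (&lt;code&gt;page.getTextContent()&lt;/code&gt; in &lt;code&gt;pdfjs-dist&lt;/code&gt;) is omitted:&lt;/p&gt;

```javascript
// Hedged sketch of the two validation passes described above: first the
// "blank" check, then the comparison against a known-good sample's text.
function isBlank(extractedText) {
  return extractedText.trim().length === 0;
}

function matchesSample(extractedText, sampleText) {
  // Normalise whitespace so layout differences alone do not fail the check.
  const normalise = function (s) { return s.replace(/\s+/g, " ").trim(); };
  return normalise(extractedText) === normalise(sampleText);
}

function validatePdfText(extractedText, sampleText) {
  if (isBlank(extractedText)) {
    throw new Error("PDF was blank - queue it for retry");
  }
  if (!matchesSample(extractedText, sampleText)) {
    throw new Error("PDF content had not fully loaded - queue it for retry");
  }
}
```

&lt;p&gt;Throwing on failure lets the surrounding Lambda rely on its normal retry behaviour rather than inventing a second retry path.&lt;/p&gt;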
&lt;p&gt;Again, this appeared to work for a few weeks - perhaps too well. It still ultimately had the downside that if a PDF failed validation, we had to regenerate it and were charged for doing so.&lt;/p&gt;
&lt;p&gt;However, after an API release, it appeared the page rendering the PDF had slowed down enough that the number of PDFs being generated incorrectly increased.&lt;/p&gt;
&lt;p&gt;At this point, I decided I was done with hacky fixes and working with a million libraries that hadn&#39;t been updated in ages - seriously, with the number of NPM packages out there, I would have thought there was something decent for PDFs.
It was time for the big guns - it was time for Puppeteer.&lt;/p&gt;
&lt;p&gt;As you may or may not know, Puppeteer is a beautifully designed API for working with Chromium. It has specific functions for taking screenshots, modifying the page, and so on. But most importantly for us, it can generate PDFs.&lt;/p&gt;
&lt;p&gt;Porting the API over was relatively easy and didn&#39;t require a whole lot of modification. What did require work was getting it deployed to AWS Lambda via Serverless.&lt;/p&gt;
&lt;p&gt;Because Puppeteer bundles the entire Chromium browser in its install, it vastly exceeds the size limits for Lambdas (50MB zipped, 250MB unzipped).
The solution was to use a Lambda layer to contain the Chromium binary, with just the application code in the Lambda itself. I used a Lambda layer taken from &lt;a href=&quot;https://github.com/shelfio/chrome-aws-lambda-layer&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I had to modify my serverless config like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;layers:
  HeadlessChrome:
    name: HeadlessChrome
    compatibleRuntimes:
      - nodejs12.x
    description: Required for headless chrome
    package:
      artifact: layers/chrome_aws_lambda.zip

functions:
  generate-pdf:
    provisionedConcurrency: 1
    description: Generates PDF from HTTP calls or SQS messages.
    handler: dist/handler.handler
    layers:
      - { Ref: HeadlessChromeLambdaLayer }
    events: ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, I had an issue where the PDF worked fine but didn&#39;t display any background colors. At first I tried calling the &lt;code&gt;emulateMediaType&lt;/code&gt; function and passing null so that it would not use print-based CSS, but to no avail. Finally, I managed to find a helpful fix on StackOverflow &lt;a href=&quot;https://stackoverflow.com/questions/60736354/puppeteer-not-rendering-color-background-color-when-i-try-to-save-pdf-on-disk&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;html {
  -webkit-print-color-adjust: exact;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I slumped back in my chair after this and breathed a sigh of relief. That was it... or so I thought.&lt;/p&gt;
&lt;p&gt;Turns out, in my haste I had set the option &lt;code&gt;waitUntil: networkidle2&lt;/code&gt; in Puppeteer&#39;s &lt;code&gt;.goto()&lt;/code&gt; function.
This function tells the headless browser to navigate to the page provided. It returns a Promise that resolves when the page has loaded, so it can be awaited until navigation has finished. But what is considered &amp;quot;loaded&amp;quot; anyway? Thankfully, Puppeteer takes special strings that tell it what to consider &amp;quot;loaded&amp;quot;. According to the documentation, &lt;code&gt;networkidle2&lt;/code&gt; will &lt;code&gt;consider navigation to be finished when there are no more than 2 network connections for at least 500 ms&lt;/code&gt;.
I&#39;m not sure why I chose this, because in hindsight it was bound to go wrong. Because &lt;code&gt;networkidle2&lt;/code&gt; tolerates up to two in-flight connections, the request that loads the report data can still be outstanding when navigation is considered finished - the page is &amp;quot;loaded&amp;quot; before the content is present. Therefore, we changed it to wait for &lt;code&gt;networkidle0&lt;/code&gt;, which is the same as &lt;code&gt;networkidle2&lt;/code&gt; but waits until there are 0 network connections for 500ms.
Additionally, for added safety, we added a 5 second &amp;quot;sleep&amp;quot; before it takes the PDF. This probably isn&#39;t needed any more, but we wanted to be extra cautious because of all the customer problems this system had caused.&lt;/p&gt;
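&lt;p&gt;Putting those pieces together, the final flow can be sketched roughly like this. &lt;code&gt;goto&lt;/code&gt;, &lt;code&gt;pdf&lt;/code&gt;, &lt;code&gt;newPage&lt;/code&gt; and &lt;code&gt;close&lt;/code&gt; are real Puppeteer APIs; the browser instance would come from the Lambda layer&#39;s Chromium build, and the options shown are illustrative rather than our exact configuration:&lt;/p&gt;

```javascript
// Hedged sketch of the final Puppeteer flow described above. The browser
// is passed in (in the Lambda it comes from the chrome-aws-lambda layer)
// so the sketch stays self-contained and easy to exercise with a mock.
async function generatePdf(browser, url, extraWaitMs) {
  const page = await browser.newPage();
  // Navigation resolves only once there have been 0 network connections
  // for at least 500ms.
  await page.goto(url, { waitUntil: "networkidle0" });
  // Belt-and-braces pause before printing (5 seconds in our setup).
  await new Promise(function (resolve) { setTimeout(resolve, extraWaitMs); });
  const buffer = await page.pdf({ format: "A4", printBackground: true });
  await page.close();
  return buffer;
}
```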
&lt;p&gt;This was the final piece in the puzzle. For real this time! It was finally stable. It was quite a journey, and it forced me to conquer some interesting problems with tonnes of partial answers that had worked for some individuals. Additionally, we expanded our internal tooling with this project, which will help us resolve customer issues going forward. So overall, despite the chaos, it was a win-win.
On reflection, I wish we had gone straight to Puppeteer, but hindsight is 20/20. In all honesty, I was afraid of it. I saw it as an absolutely mammoth task, having had nightmares trying to implement it years ago at a web dev job, but it was shockingly simple to implement. It&#39;s taught me to take a fresh look at the options and not hold onto opinions from times gone by. Technology moves too fast for that.&lt;/p&gt;
&lt;p&gt;I&#39;ll conclude this article with a quote that I believe applies here:
&lt;code&gt;It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.&lt;/code&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Preserving Links whilst Migrating Domains with S3</title>
    <link href="/domain-migration/"/>
    <updated>2020-07-31T00:00:00Z</updated>
    <id>/domain-migration/</id>
    <content type="html">&lt;p&gt;Domain migrations can be fairly simple things: change the CNAME and Bob&#39;s your uncle. Difficulties arise when you have two different website systems, existing paths you want to preserve, and a site that is served statically with no web server. That was the situation we found ourselves in.&lt;/p&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;The product I work on, Koru, was purchased by a company called Cappfinity. Koru had a website, https://joinkoru.com, and Cappfinity had https://cappfinity.com. To consolidate the offerings, the websites were effectively merged.&lt;/p&gt;
&lt;p&gt;The setup for the Koru site was as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A static web app hosted on S3 on a subdomain of joinkoru.com&lt;/li&gt;
&lt;li&gt;Route53 for DNS&lt;/li&gt;
&lt;li&gt;A WordPress site hosted externally for the &amp;quot;main&amp;quot; website&lt;/li&gt;
&lt;li&gt;CloudFront for all SSL and edge distribution&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We wanted to redirect &lt;code&gt;*.joinkoru.com&lt;/code&gt; to &lt;code&gt;cappfinity.com/koru&lt;/code&gt; whilst preserving the paths for the user. Ordinarily, we would have just used an Nginx or Apache rewrite rule. That would have given us much more granular control than the solution we arrived at, but we didn&#39;t want to host a whole new web server for the sake of some redirects.&lt;/p&gt;
&lt;h2&gt;S3 To the Rescue!&lt;/h2&gt;
&lt;p&gt;We eventually discovered that you can have &amp;quot;Routing Rules&amp;quot; in S3 buckets. I quickly got to work creating a new S3 bucket.
Then, under &lt;code&gt;Properties &amp;gt; Static Web Hosting&lt;/code&gt;, I set the index document to &lt;code&gt;index.html&lt;/code&gt; - the file doesn&#39;t need to exist; the form field just needs to have a value.&lt;/p&gt;
&lt;p&gt;The magic happens in &lt;code&gt;Redirection Rules&lt;/code&gt;, which takes a horrendous XML-style configuration. We set it up as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;lt;RoutingRules&amp;gt;
  &amp;lt;RoutingRule&amp;gt;
    &amp;lt;Condition&amp;gt;
      &amp;lt;KeyPrefixEquals/&amp;gt;
    &amp;lt;/Condition&amp;gt;
    &amp;lt;Redirect&amp;gt;
      &amp;lt;HostName&amp;gt;koru.cappfinity.com&amp;lt;/HostName&amp;gt;
    &amp;lt;/Redirect&amp;gt;
  &amp;lt;/RoutingRule&amp;gt;
  &amp;lt;RoutingRule&amp;gt;
    &amp;lt;Condition&amp;gt;
      &amp;lt;HttpErrorCodeReturnedEquals&amp;gt;403&amp;lt;/HttpErrorCodeReturnedEquals&amp;gt;
    &amp;lt;/Condition&amp;gt;
    &amp;lt;Redirect&amp;gt;
      &amp;lt;HostName&amp;gt;koru.cappfinity.com&amp;lt;/HostName&amp;gt;
      &amp;lt;ReplaceKeyPrefixWith&amp;gt;404&amp;lt;/ReplaceKeyPrefixWith&amp;gt;
    &amp;lt;/Redirect&amp;gt;
  &amp;lt;/RoutingRule&amp;gt;
&amp;lt;/RoutingRules&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;So what&#39;s happening here?&lt;/h2&gt;
&lt;p&gt;The first part is the routing condition. For us, &lt;code&gt;KeyPrefixEquals&lt;/code&gt; is left blank, which matches everything. This handles all the main redirects for existing content. In other cases, you may want &lt;code&gt;https://olddomain.com/blog&lt;/code&gt; to go to &lt;code&gt;https://newdomain.com/newblog&lt;/code&gt;. In that case, you would set the condition to &lt;code&gt;&amp;lt;KeyPrefixEquals&amp;gt;blog&amp;lt;/KeyPrefixEquals&amp;gt;&lt;/code&gt; (note that S3 keys have no leading slash) and add &lt;code&gt;&amp;lt;ReplaceKeyPrefixWith&amp;gt;newblog&amp;lt;/ReplaceKeyPrefixWith&amp;gt;&lt;/code&gt; to the redirect.&lt;/p&gt;
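&lt;p&gt;For that hypothetical blog move, the full rule would look something like this (the hostname and paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;lt;RoutingRule&amp;gt;
  &amp;lt;Condition&amp;gt;
    &amp;lt;KeyPrefixEquals&amp;gt;blog/&amp;lt;/KeyPrefixEquals&amp;gt;
  &amp;lt;/Condition&amp;gt;
  &amp;lt;Redirect&amp;gt;
    &amp;lt;HostName&amp;gt;newdomain.com&amp;lt;/HostName&amp;gt;
    &amp;lt;ReplaceKeyPrefixWith&amp;gt;newblog/&amp;lt;/ReplaceKeyPrefixWith&amp;gt;
  &amp;lt;/Redirect&amp;gt;
&amp;lt;/RoutingRule&amp;gt;
&lt;/code&gt;&lt;/pre&gt;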
&lt;p&gt;The second routing rule states that if a request returns a 403 (meaning forbidden), we redirect to the same host but replace the key prefix with &lt;code&gt;404&lt;/code&gt;, so the user lands on the Not Found page correctly. This 403 was happening for a subset of requests because the content wasn&#39;t present in the new system. We left it in for historical reasons.&lt;/p&gt;
&lt;h2&gt;Subdomains to Suffixes&lt;/h2&gt;
&lt;p&gt;You&#39;ll notice in the rules above that the redirect is not to &lt;code&gt;cappfinity.com/koru&lt;/code&gt; but rather to &lt;code&gt;koru.cappfinity.com&lt;/code&gt;. I forget exactly why, because we could have used the &lt;code&gt;&amp;lt;ReplaceKeyPrefixWith&amp;gt;&lt;/code&gt; tag to redirect to &lt;code&gt;/koru&lt;/code&gt;. Nonetheless, we didn&#39;t, for whatever reason (I hunted through my emails for an answer but couldn&#39;t find one). Instead, on the Cappfinity website side, which was newly hosted on Netlify, &lt;code&gt;koru.cappfinity.com&lt;/code&gt; was set up as a domain alias and a CNAME was configured to point that subdomain at the Netlify application.&lt;/p&gt;
&lt;p&gt;And hey presto! We were done. Overall, it was a fairly simple migration but used a solution I didn&#39;t even know was possible with S3!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why Backwards Compatibility is Critical</title>
    <link href="/backwards-compatible/"/>
    <updated>2020-07-03T00:00:00Z</updated>
    <id>/backwards-compatible/</id>
    <content type="html">&lt;p&gt;Backwards compatibility is not something I see discussed much in tech circles. It&#39;s all new-new-new, fast-fast-fast: piling features on top of one another and tightly coupling releases between services. Facebook, the &lt;a href=&quot;https://www.alexa.com/topsites&quot;&gt;4th most popular site on the internet&lt;/a&gt; no less, previously had the mantra of &amp;quot;move fast and break things&amp;quot;.
These are the kinds of sentiments I see all around me, particularly from startup and SaaS companies. I&#39;ve always felt that coupling releases too closely was insanity-inducing, and I&#39;ve seen first-hand how corrosive it is to a customer&#39;s experience of the product.&lt;/p&gt;
&lt;p&gt;But the web hasn&#39;t always been like this. The core backbone of the internet - TCP/IP, DNS, HTTP, and even HTML and CSS - has been unchanged for many years, or at least has changed in a manner that doesn&#39;t break previous versions. As an example, &lt;a href=&quot;http://spacejam.com&quot;&gt;Space Jam&#39;s website&lt;/a&gt; and the &lt;a href=&quot;http://www.milliondollarhomepage.com/&quot;&gt;Million Dollar Homepage&lt;/a&gt; both still function on modern browsers, having been created in 1996 and 2005 respectively.
So what happened? There isn&#39;t one conclusive answer, more a prevailing zeitgeist amongst developers and product managers. In my view, it&#39;s due to the large investment technology has seen over the past 20 years; it&#39;s grown exponentially.
With that, we have seen bad business practices, ill-thought-out ideas, and customers that are keeping a company afloat. These things existed before, but now they manifest themselves in the technology these organisations build.
Additionally, products are now architected around small services, each given a single responsibility. Previously, the web was simple - throw a LAMP stack on a server somewhere and Bob&#39;s your uncle. There were far fewer moving parts.&lt;/p&gt;
&lt;p&gt;Now, this isn&#39;t going to be a nostalgic post where we reminisce about the days of the &amp;quot;good ol&#39; web&amp;quot;, because I find all that a bit petty. I want to discuss how we need to build things to last, and practical ways to do that in the face of &amp;quot;moving fast&amp;quot; (side note: watch Bryan Cantrill&#39;s fantastic talk on the principles of tech leadership &lt;a href=&quot;https://www.youtube.com/watch?v=9QMGAtxUlAc&quot;&gt;here&lt;/a&gt;).&lt;/p&gt;
&lt;h1&gt;But why do developers avoid making things backwards compatible?&lt;/h1&gt;
&lt;p&gt;This article wouldn&#39;t exist if there wasn&#39;t at least one answer to this question.
Primarily, it boils down to &amp;quot;it&#39;s more effort&amp;quot;. If you&#39;re making a major change to a service, and every team that consumes it is going to make the corresponding change anyway, it&#39;s tempting to assume you don&#39;t really need the old behaviour - and that maintaining it is just extra work.
Your team may not even have a versioning strategy in place. I&#39;ve sat through well over five hours of meetings about how to version services with no outcome. A lot of people have opinions about this, and often developers seem more intent on arguing against each other&#39;s points than accomplishing the objective behind the change.
Furthermore, there have arguably been a number of failures in attempting to preserve backwards compatibility, such as with Java and SQLite3.
These challenges can be major roadblocks to creating stability and backwards compatibility in your product&#39;s services.&lt;/p&gt;
&lt;h1&gt;Why is it important then?&lt;/h1&gt;
&lt;p&gt;First, we need to clarify that preserving backwards compatibility is not about holding onto legacy. If something is old, busted, broken or unused, then by all means pave over it and start afresh. There&#39;s no need to shackle yourself to absolutely every single use case of your service. Things change as software changes; it&#39;s natural.
On the other hand, backwards compatibility is about not creating unnecessary work for hundreds of your users every time you make a change. Or coupling releases so tightly that everything has to be deployed at exactly the same time and caches flushed in sync. Or having no deprecation plan and changing external interfaces constantly.&lt;/p&gt;
&lt;p&gt;Stripe treads this line very carefully (though I can&#39;t speak to the overall experience of being a developer there). Being a payments processor, it has to make certain guarantees about how things will be handled.
To accomplish this, Stripe uses a date-versioned API. You are assigned the latest API version when you create an account and can easily update it if you wish - or leave it alone completely. In fact, there are still websites I built a few years ago with now-old Stripe integrations that tick along fine. They have a great post about their versioning mechanism &lt;a href=&quot;https://stripe.com/blog/api-versioning&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
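&lt;p&gt;The mechanics of date-based pinning are simple enough to sketch. Everything here - the release dates, the fallback rule - is illustrative rather than Stripe&#39;s actual implementation:&lt;/p&gt;

```javascript
// Hedged sketch of date-based API version pinning in the Stripe style:
// a request is served by the newest version released on or before the
// date it is pinned to. The dates below are illustrative.
const VERSIONS = ["2019-03-14", "2020-03-02", "2020-08-27"]; // release dates

function resolveVersion(pinnedDate, accountDefault) {
  const pin = pinnedDate || accountDefault; // fall back to the account's pin
  // ISO dates compare correctly as strings; walk newest-first.
  const newestFirst = VERSIONS.slice().sort().reverse();
  for (const v of newestFirst) {
    if (pin >= v) return v;
  }
  // Pinned before the first release: serve the oldest version we have.
  return newestFirst[newestFirst.length - 1];
}
```

&lt;p&gt;Because the pin is just a date, a customer who never touches their integration keeps getting the behaviour they signed up for, while new accounts automatically start on the latest version.&lt;/p&gt;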
&lt;h1&gt;How to do it&lt;/h1&gt;
&lt;p&gt;You might assume that because you use /v1 and /v2 in your endpoints you&#39;re all set, right? Not so fast. What constitutes a major version bump for some may not for others, and reasonable people will disagree.
So how can you do it?&lt;/p&gt;
&lt;h2&gt;1. General coding practice&lt;/h2&gt;
&lt;p&gt;If you&#39;re changing something minor, like the name or type of a parameter on an interface, you can often still support the old form by casting it to the new type and so forth. There are many general coding practices that let your code be hardened against breakage without introducing lots of spaghetti.
Additionally, a good starting point for all backwards-compatible changes is to mark the &amp;quot;old&amp;quot; code with a &amp;quot;deprecation&amp;quot; warning of some kind, so that other developers in your team know not to use that code any more.&lt;/p&gt;
&lt;h2&gt;2. Documentation and Deprecations&lt;/h2&gt;
&lt;p&gt;If you&#39;re going to make a breaking change, you need a way to communicate it to the consumers of your product, and you need to tell them how to update if they absolutely cannot be kept on the previous version for some reason.
You can do this by giving the customer plenty of warning via email, an account manager, or a deprecation warning in the response. You could have a system whereby, when a deprecated API method is called, it logs the call to a table. Each day, the table is scanned and you can tell the customer: &amp;quot;You called X route, which has been deprecated and will no longer receive updates; please see N website for documentation on how to update&amp;quot;.
Hand-in-hand with this goes a clear policy on how long you will support deprecated routes. Depending on the market you&#39;re in, that could be a few months or a few years. Either way, be clear with your customers. Again, Stripe does a pretty good job of saying &amp;quot;you can use this near-indefinitely&amp;quot; and including that as part of its marketing to developers.&lt;/p&gt;
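&lt;p&gt;As a sketch of that logging idea - with an in-memory map standing in for the database table, and illustrative route names:&lt;/p&gt;

```javascript
// Hedged sketch of the deprecation log described above. In production the
// Map would be a database table scanned by a daily job; the route names
// are illustrative.
const DEPRECATED_ROUTES = new Set(["/v1/reports"]);
const deprecationLog = new Map(); // route -> Map(customerId -> hit count)

function recordIfDeprecated(route, customerId) {
  if (!DEPRECATED_ROUTES.has(route)) return false;
  if (!deprecationLog.has(route)) deprecationLog.set(route, new Map());
  const byCustomer = deprecationLog.get(route);
  byCustomer.set(customerId, (byCustomer.get(customerId) || 0) + 1);
  return true; // the caller could also attach a deprecation warning header
}

// The daily scan: one message per customer per deprecated route they hit.
function dailyDigest() {
  const messages = [];
  for (const [route, byCustomer] of deprecationLog) {
    for (const [customerId, hits] of byCustomer) {
      messages.push(
        customerId + " called deprecated route " + route + " " + hits +
        " time(s); see the docs for the supported replacement."
      );
    }
  }
  return messages;
}
```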
&lt;h2&gt;3. Pivoting&lt;/h2&gt;
&lt;p&gt;If making something backwards compatible has become so incredibly painful that you&#39;d rather play hopscotch on a floor of hot coals, you need to ask whether the service has pivoted to the point where it&#39;s a whole new thing.
As an example, I created a service for a messaging application that kept track of when a customer had last read a particular message. It then needed functionality to track whether the customer had left a group message, whether they had muted the messages, and so on. Before you knew it, it was no longer an API for managing whether the customer had read a message, but a fully fledged notifications API.
In retrospect, I should have seen this inevitability coming, but the service had drifted to a point where it was nothing like the original. Although it was an internal-only service, looking back I should have rebuilt it and gradually migrated over to the new service.
Although this may not be preserving backwards compatibility in the truest sense, as long as you provide a sensible upgrade path and don&#39;t immediately shut down the old service, it&#39;s OK in my book.&lt;/p&gt;
&lt;h2&gt;4. Versioning&lt;/h2&gt;
&lt;p&gt;We&#39;ve touched on this a few times already, and arguments about versioning strategies have raged since time began.
I&#39;m not going to offer guidance on which one is best or which one you should choose - simply decide on one as a team and agree on clear definitions of what constitutes a version upgrade. Then include this as part of your release strategy.
If you practise continuous deployment, then perhaps look at something similar to Stripe, or a SemVer strategy that goes beyond a &amp;quot;/v1&amp;quot; and &amp;quot;/v2&amp;quot; route structure (although it may include one).
It will depend on a few factors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Expectations of the market you are in - how long do your customers expect to use your product and forget about the implementation? Hint: it&#39;s often longer than you think&lt;/li&gt;
&lt;li&gt;What is your release cycle like? If it&#39;s daily, you need something to automate the process; if it&#39;s each decade, where you pack your software onto a disc, it can be something manual&lt;/li&gt;
&lt;li&gt;Do you have a lot of third-party consumers? If there are consumers of your service beyond your company, you will have different requirements about deprecation etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;TL;DR - pick one and go with it. You can even pick different mechanisms for different services!&lt;/p&gt;
&lt;h2&gt;5. Limit dependencies&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Mo&#39; Dependencies, mo&#39; problems - Notorious D.E.V&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;By limiting the number of dependencies you use, versioning becomes easier, because you are no longer chasing constant security updates or checking that every last version of your software still works with each dependency.
The Node community is particularly bad at this one, and often doesn&#39;t provide security and bug fixes downstream, instead just forcing everyone to upgrade to the latest version. We can do better than this - and we make our lives a lot easier by reducing dependencies.&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;There are lots of arguments against backwards compatibility, and I can understand why. Personally speaking, I want to build to last. I&#39;d like to think that in 10 years&#39; time I could still use my products without having to change the integration.
Something about seeing the Space Jam website still up fills me with a warm glow - a moment in time that remains accessible at any point.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Sharpening the Saw</title>
    <link href="/sharpen-the-saw/"/>
    <updated>2020-06-26T10:17:00Z</updated>
    <id>/sharpen-the-saw/</id>
    <content type="html">&lt;p&gt;Sharpening the saw is Habit 7 in the cringe-inducing book &amp;quot;7 Habits of Highly Effective People&amp;quot;. This post isn&#39;t yet another book review, but rather about the work we do to make the rest of our work better, faster and more consistent. The label Covey gave to this work was &amp;quot;sharpening the saw&amp;quot;.
It conveys the idea that, given a new saw, one cannot just continuously cut wood all day, every day. Time needs to be given to sharpening the saw, literally, so that the wood-cutting rate remains consistent.&lt;/p&gt;
&lt;p&gt;As a developer, I interact with hundreds of tools, websites and systems a day. Each one has its own cognitive overhead, such as keyboard shortcuts and the like. Furthermore, as technology evolves so rapidly, there is seldom a reason not to iterate on your approach to problems. My quest to sharpen the saw started, like most new things, with a question: &amp;quot;How can I make my life easier and less stressful?&amp;quot;&lt;/p&gt;
&lt;h2&gt;1. Keeping better personal notes&lt;/h2&gt;
&lt;p&gt;Most of the people I admire, across a broad spectrum of industries, love to keep notes. Learning from your mistakes is very easy to say but not so easy to actually do - and, more importantly, to implement a system for.
I solved this by doing the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating a new GitHub repo called Notes; each note is then a simple text or markdown file. I chose GitHub for storage so I do not need anything aside from my YubiKey and a terminal - simple and low friction when moving across machines. Also, text will never die; platforms will, eventually - even the best&lt;/li&gt;
&lt;li&gt;Treating my Todoist Inbox as a brain dump - I run my life on Todoist, so it&#39;s a super easy place to dump a bunch of ideas. Since this is my task manager, I can process them as if they were tasks. If they need action, I expand them and sort them into the appropriate project. If the idea is a new project entirely (more than two actions), I either do it or look at where I can schedule it&lt;/li&gt;
&lt;li&gt;Keeping a &lt;code&gt;currently.txt&lt;/code&gt; file open in one terminal window that I update before I context switch to something else. For example, if my wife calls me to help her with something round the house or someone comes to the door, I quickly write what I was &amp;quot;currently&amp;quot; doing. When I then come back, rather than trying to rack my brain to figure out where I was, I just read that document.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;2. Improving my Desktop Productivity Setup&lt;/h2&gt;
&lt;p&gt;I want to expand on this section a bit more in a personal infrastructure post but here is a brief summary of how I got my PC to work for me rather than me working for it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stored my dotfiles in GitHub and had them auto-sync to any of my machines when edited; this keeps the aliases, functions and programs in sync across machines&lt;/li&gt;
&lt;li&gt;Switched my shell from Bash to ZSH and installed a bunch of plugins to make hopping around the shell easier&lt;/li&gt;
&lt;li&gt;Got a USB switcher so I could clamshell my MacBooks (personal and work) and switch between them easily&lt;/li&gt;
&lt;li&gt;Got well acquainted with the *nix terminal and keyboard shortcuts&lt;/li&gt;
&lt;li&gt;Purchased a solid keyboard, as I try to avoid using the mouse at all costs&lt;/li&gt;
&lt;li&gt;Analysed all my software deeply, attempting to speed up its performance as much as possible&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;3. Dedicated learning time&lt;/h2&gt;
&lt;p&gt;Each week, I spend at least 20 minutes learning something new. But it can&#39;t just be anything. It has to be something that will further my knowledge in a new paradigm. Learning a new JS framework will not extend my knowledge in a new paradigm. Learning how the TCP/IP stack works in *nix or how a certain part of the V8 engine works will.
Personally, I enjoy teaching others, so part of my &amp;quot;learning&amp;quot; time can involve teaching things I already know. The effect this has on the brain and knowledge retention is fairly well documented, and I&#39;m not a neuroscientist (yet), so I&#39;ll leave it to them to explain why it is so effective.&lt;/p&gt;
&lt;p&gt;Additionally, I find attending meetups (such as LeicesterJS, one I help run) and speaking with other developers fascinating. I like to hear their problems and throw myself in. It helps improve my questioning ability - something I did not cultivate enough early in my career. In the past, I was very much an &amp;quot;open the bonnet and have a rummage&amp;quot; kind of debugger. That approach works for a lot of problems, but there are others, such as those &lt;a href=&quot;https://digest.bps.org.uk/2018/05/04/learning-by-teaching-others-is-extremely-effective-a-new-study-tested-a-key-reason-why/&quot;&gt;documented here&lt;/a&gt;, where the strategy does not cut it. To combat this, I have invested time in getting better at asking questions. As children we are prone to simply asking &amp;quot;why&amp;quot; all the time, but asking good questions is more than just &amp;quot;why&amp;quot;, as things often boil down to more than one problem. It&#39;s about asking questions that carve away at the ultimate nugget of truth based on absolutes.
These questions also help me when presenting new ideas. If I present an idea after having merely read about it on a large company&#39;s engineering blog, I have no foundation on which to base my reasoning. Instead, by gathering data and arriving at a solution that way, it has a basis to stand on. Don&#39;t try to wedge someone else&#39;s solution into a problem you are experiencing; it mostly won&#39;t fit.&lt;/p&gt;
&lt;h2&gt;4. Post mortems and reviews&lt;/h2&gt;
&lt;p&gt;Part of learning from your mistakes involves reviewing what happened, what went wrong and, most importantly, why. Although at my workplace we do not have an engineering blog or a culture of writing post-mortems for absolutely every issue, I&#39;ve found it personally beneficial to keep a &amp;quot;war log&amp;quot; of all the big issues I&#39;ve faced, why they came about, and how we solved them. I keep these write-ups in my personal notes.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;I&#39;m happy I spent time doing this, and it&#39;s a process I will continue to follow. Hopefully this helped you in a practical sense, as there is a lot of wishy-washy advice in this sector. If you have any more suggestions, let me know on &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;twitter&lt;/a&gt;. You can also follow the progress of my startup &lt;a href=&quot;https://turboapi.dev&quot;&gt;TurboAPI&lt;/a&gt; &lt;a href=&quot;https://www.indiehackers.com/product/turboapi&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Lightning Fast ZSH Performance</title>
    <link href="/zsh-speed/"/>
    <updated>2020-06-19T12:31:03Z</updated>
    <id>/zsh-speed/</id>
    <content type="html">&lt;p&gt;As part of my work to &amp;quot;sharpen the saw&amp;quot;, I decided to spend some time improving the performance of various components in my setup. The first target of my attention was ZSH.&lt;/p&gt;
&lt;h2&gt;Why?&lt;/h2&gt;
&lt;p&gt;I open new shell instances constantly and try to live in the terminal as much as possible (including for writing this blog - in vim to be precise :wave:), so each second spent loading new shell windows or tabs is a second wasted. Additionally, when things take time to load, you&#39;re distracted and frustrated by the wait rather than focused on the outcome you were trying to achieve. In my eyes, speed is one of the 3 major pillars of user experience - along with findability and accessibility.
To start on this performance journey, I first needed a quantifiable metric to track my progress as I made changes.&lt;/p&gt;
&lt;p&gt;To do this I ran the following command&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;for i in $(seq 1 10); do /usr/bin/time $SHELL -i -c exit; done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Try running it in your shell of choice now, and you&#39;ll get a performance breakdown of 10 runs of initiating your shell! For me the response was this&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;1.5 seconds 0.8 user 0.7 sys
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;1.5 seconds may not seem like a lot, but that is more than noticeable to any human. Time perception is a whole other topic for far more intelligent people than I. But after some digging, the consensus seems to be that reaction times are around 150ms (input-to-action) and 13ms for perceptive time (from visual stimuli).
Therefore, the benchmark I was aiming for was 0.13 seconds. This is a number I now seek out across all software I use and consume.&lt;/p&gt;
&lt;p&gt;Ok so, we&#39;ve got a benchmark of 0.13 seconds meaning we need to reduce the load times by 1.37 seconds - let&#39;s get to it!&lt;/p&gt;
&lt;h2&gt;Phase one - the big boys&lt;/h2&gt;
&lt;p&gt;I had a rough idea of what might be taking the time in my zshrc file.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;NVM - this is a node version manager that I&#39;ve (unfortunately) used for years. It allows you to quickly swap between node versions. Handy, but performance intensive: it initialises on every shell instance so that you have access to the &lt;code&gt;nvm&lt;/code&gt; command. I don&#39;t need this all the time, so it can be removed. Note that removing it still lets me use the NVM-installed node - I just can&#39;t change versions.&lt;/li&gt;
&lt;li&gt;ZSH Sourcing - The standard oh-my-zsh config contains the line &lt;code&gt;source $ZSH/oh-my-zsh.sh&lt;/code&gt;. We don&#39;t need this as I was using Antigen as a plugin manager, which automatically includes oh-my-zsh, so in reality I was sourcing it twice!&lt;/li&gt;
&lt;/ol&gt;
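&lt;p&gt;If you want to keep &lt;code&gt;nvm&lt;/code&gt; available without paying its cost on every shell start, one option is to lazy-load it. This is a minimal sketch (assuming the default &lt;code&gt;~/.nvm&lt;/code&gt; install location, not the config I actually used) - the real command is only sourced the first time you call it:&lt;/p&gt;

```shell
# Hypothetical lazy-loader for ~/.zshrc: defer sourcing nvm until first use.
# Assumes the default install location ~/.nvm.
export NVM_DIR="$HOME/.nvm"
nvm() {
  unset -f nvm                  # drop this stub
  if [ -s "$NVM_DIR/nvm.sh" ]; then
    . "$NVM_DIR/nvm.sh"         # load the real nvm
  fi
  nvm "$@"                      # replay the original invocation
}
```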
&lt;p&gt;After these changes, I ran the test script from earlier and got the following result&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0.61 real 0.32 user 0.27 sys
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&#39;s nearly a second removed just by dropping NVM and the duplicate oh-my-zsh sourcing! A huge difference, so I&#39;m happy with that.&lt;/p&gt;
&lt;p&gt;The next suspect on my list was my prompt: I was using a heavy, emojified Spaceship theme installed via NPM. Most of the time, if something looks good, it takes ages to load.
I switched over to the &lt;a href=&quot;https://github.com/sindresorhus/pure&quot;&gt;Pure prompt&lt;/a&gt; and remeasured the results&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0.54 real 0.29 user 0.24 sys
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok, so not a massive change, but the overall experience of loading a new shell felt instantly snappier. It could have been a placebo but it worked for me, so I plowed on ahead for more culprits. But I needed help...&lt;/p&gt;
&lt;h2&gt;Phase two - profiling&lt;/h2&gt;
&lt;p&gt;Now that I had removed some &amp;quot;big hitters&amp;quot; to performance, it was time to dive into the nitty gritty. We can do this by profiling our ZSH config so we can see what is taking the time.
We can do this with a tool called &lt;a href=&quot;http://zsh.sourceforge.net/Doc/Release/Zsh-Modules.html&quot;&gt;zprof&lt;/a&gt;, which is bundled with ZSH by default.
We can add it by putting &lt;code&gt;zmodload zsh/zprof&lt;/code&gt; at the top of our &lt;code&gt;~/.zshrc&lt;/code&gt; config and then putting &lt;code&gt;zprof&lt;/code&gt; at the very bottom of the config.&lt;/p&gt;
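&lt;p&gt;For clarity, the placement in &lt;code&gt;~/.zshrc&lt;/code&gt; looks like this:&lt;/p&gt;

```shell
# ~/.zshrc
zmodload zsh/zprof    # very first line: start collecting timings

# ... plugins, prompt, aliases and the rest of the config ...

zprof                 # very last line: print the per-function breakdown
```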
&lt;p&gt;Now we simply need to reload our shell and we get a nice breakdown of the time taken for each part of our config.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/zprof.png&quot; alt=&quot;zprof profile&quot; /&gt;&lt;/p&gt;
&lt;p&gt;We can clearly see from the above screenshot that a plugin called set_iterm_tab_color accounted for a lot of the load time. The plugin didn&#39;t really do what I wanted anyway (I wanted something like Peacock for VSCode).
I removed it and a few other Antigen plugins and re-ran the test script again&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0.30 real 0.17 user 0.14 sys
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&#39;s the load time halved! I was pretty happy to stop at this stage but I kept pursuing that prized 0.13 seconds.&lt;/p&gt;
&lt;h2&gt;Phase Three - Antigen to Antibody&lt;/h2&gt;
&lt;p&gt;Here&#39;s where things went wrong. As you could see from the screenshot, my second biggest load time item was Antigen itself. Antigen is known as a bit of a &lt;a href=&quot;https://github.com/zsh-users/antigen/issues/116&quot;&gt;big beast&lt;/a&gt;. So I started to look for alternatives.
Antibody was advertised as an answer to Antigen&#39;s woeful performance. I spent around 2 hours diligently porting my config over to Antibody.
However, when I &lt;em&gt;eventually&lt;/em&gt; got it working and battled through the lackluster documentation, I found it was actually slower than Antigen! A bit annoying to say the least. I didn&#39;t write down the test results from then but they were somewhere in the region of 0.8 seconds.&lt;/p&gt;
&lt;h2&gt;Phase Four - Remove Pyenv&lt;/h2&gt;
&lt;p&gt;The last phase, for now, was to remove pyenv and compinit. Loading pyenv had escaped my notice when I originally combed through my config. I barely do any Python development nowadays, so it got removed hastily.
I reran the tests and...&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0.19 real 0.10 user 0.08 sys
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok, so it&#39;s not quite the 0.13 seconds I had hoped for, but not bad.&lt;/p&gt;
&lt;p&gt;By this stage, the main load time for opening new tabs was iTerm2 itself (my terminal application of choice). Going forward I am going to switch over to Linux and most likely use Kitty, a blazing fast terminal program. That should dwarf any gains I&#39;d get from shaving off the remaining 0.05 seconds here.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This was a really good use of a few hours of time. It&#39;s going to make me happier and more productive at work, which, in my mind, is time well spent. I still have plans to speed up ZSH further (aside from the Kitty switch), but they will have to wait (converting over to Zplugin and reducing plugins even more).
For now, I&#39;ll move onto other programs I use regularly like VSCode and Chrome for more performance gains there.&lt;/p&gt;
&lt;p&gt;I hope this post helps you speed up your ZSH - let me know the speeds you get on twitter - @joshghent&lt;/p&gt;
&lt;p&gt;Speaking of performance, I&#39;m currently working on a side project called TurboAPI. It&#39;s a super simple tool designed to monitor your API and Webhook&#39;s performance without installing a thing. I&#39;d love if you could check it out &lt;a href=&quot;https://turboapi.dev&quot;&gt;here&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Personal Infrastructure</title>
    <link href="/personal-infra/"/>
    <updated>2020-06-16T12:01:03Z</updated>
    <id>/personal-infra/</id>
    <content type="html">&lt;p&gt;After seeing the amazing posts by both Stephan Wolfram and Jess Frazelle, I wanted to chime in on my &amp;quot;personal infrastructure&amp;quot;. I&#39;ve always found stories about how people work, their little scripts and hacks they use and the machines they operate on, to be incredibly compelling - &lt;a href=&quot;https://usesthis.com/&quot;&gt;usesthis&lt;/a&gt; is a great site dedicated to that very subject.&lt;/p&gt;
&lt;h2&gt;Principles&lt;/h2&gt;
&lt;p&gt;I have two principles that I keep close to mind when looking to change or add to my setup.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Productivity - although I dislike the word (it sort of leaves a plastic taste in my mouth), productivity or rather getting the right things done is something that I&#39;m always cognisant of. Because time is always a constraint, I look for ways to reduce the time &lt;em&gt;I&lt;/em&gt; actually have to be doing something.&lt;/li&gt;
&lt;li&gt;Automation - this sort of links with the above but automation is a guiding principle in my life because the enjoyment I get from having a computer do something I did previously is overwhelming. I am fascinated by small scripts, cron jobs, and simple bots. I utilize all sorts of services to automate tasks. If there is a recurring activity I perform, in all likelihood, I will automate it.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The Daily Grind&lt;/h2&gt;
&lt;p&gt;Currently, I work as a Senior Software Engineer for &lt;a href=&quot;https://www.cappfinity.com/&quot;&gt;Cappfinity&lt;/a&gt;. My role is heading up development on a product called Koru, which was purchased as a startup. The job, rather generously, came with a 2019 MacBook Pro with 32GB RAM for development usage. It works great aside from the occasional sluggishness. I&#39;m fortunate that I use a MacBook Pro for personal work as well, so the context switching between the two devices is minimal.&lt;/p&gt;
&lt;p&gt;I keep this Macbook primarily &lt;a href=&quot;https://cdn.osxdaily.com/wp-content/uploads/2012/06/clamshell-macbook-in-bookarc.jpg&quot;&gt;&amp;quot;clamshelled&amp;quot;&lt;/a&gt; on my desk in a &lt;a href=&quot;https://www.amazon.co.uk/gp/product/B07HKH2QGD/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;black metal stand I got on Amazon&lt;/a&gt;. It is connected via a DisplayLink dock to two 24 Inch Monitors.&lt;/p&gt;
&lt;p&gt;I use a &lt;a href=&quot;https://www.logitech.com/en-gb/product/multi-device-keyboard-k380&quot;&gt;Logitech K380&lt;/a&gt; for typing. As much as I would love a mechanical keyboard, my catastrophically frail wrists cannot bear the strain of the key travel. The build quality of the keyboard is not great - it is more plastic than an American news presenter&#39;s face - but it has a great 3-device bluetooth switcher which allows me to toggle between my personal and work laptops as well as my gaming PC.&lt;/p&gt;
&lt;p&gt;I put an &lt;a href=&quot;https://www.logitech.com/en-gb/product/mx-master-3?crid=7&quot;&gt;MX Master 3&lt;/a&gt; on the right hand side of my keyboard - again it has the same great 3 device bluetooth switcher. And a Magic Trackpad on the left. I occasionally swap the two but generally it stays in this configuration. It sounds odd, but it&#39;s nice to have the options and reduce the repetitive movements I do.&lt;/p&gt;
&lt;p&gt;During lockdown, my wife was able to spend some time redoing both her art studio and my office. She painted it a subtle green and chose an &amp;quot;urban jungle&amp;quot; theme. I&#39;ve always loved bringing nature indoors and having lots of plants around really helps me focus and not feel like I&#39;m in an office. Makes it look a more bright and vibrant environment, rather than a dull dreary office.&lt;/p&gt;
&lt;h2&gt;Health&lt;/h2&gt;
&lt;p&gt;My wife, kindly, bought me a &lt;a href=&quot;https://www.hermanmiller.com/en_gb/products/seating/office-chairs/aeron-chairs/&quot;&gt;Herman Miller Aeron&lt;/a&gt; for my desk after I &amp;quot;relentlessly moaned&amp;quot; (according to her - what does she know) about my previous chair - which was as comfortable as a church pew. The Aeron is like being cradled by 4 angels and has helped my posture and fatigue massively. Yes, that&#39;s right, fatigue. Sitting down all day can be tiring in a weird way, due to the lack of lower body movement. I&#39;ve tried to alleviate this by taking regular breaks and working in different environments. It is the best present she&#39;s got me to this day and makes a huge difference to my daily life.&lt;/p&gt;
&lt;p&gt;Furthermore, I&#39;ve found the amount of water I drink to be a big factor in my mood. I now attempt to drink around 4L of pure water per day, as well as a coffee or two. I try to avoid caffeine after midday as I find it affects my sleep measurably.&lt;/p&gt;
&lt;h2&gt;Personal Work&lt;/h2&gt;
&lt;p&gt;Aside from my day job, I am working on a SaaS business, &lt;a href=&quot;https://turboapi.dev&quot;&gt;TurboAPI&lt;/a&gt;, and a charity, Alex&#39;s Wonderland Puzzles (which is still undergoing heavy development).&lt;/p&gt;
&lt;p&gt;In the case of the former, I develop the application using Typescript and host on AWS and DigitalOcean with Docker - because ECS is more hassle than it&#39;s worth. The frontend is hosted on Netlify and is built using React. I am looking to dabble a lot more in Golang and Rust, so I can work closer to the metal. Golang in particular introduces some new paradigms that are interesting to me, and I believe would benefit me. However, my current spare time is dedicated to TurboAPI and Alex&#39;s Puzzles, so I have that learning on hold for the time being.&lt;/p&gt;
&lt;p&gt;For the puzzle company, my role is a little different. I&#39;m in charge of sorting manufacturing, marketing, website development/maintenance and design. For this, I take detailed notes in Notion as it allows me to create tables, lists, charts and anything else I want. The website was built very simply using React and will soon have a store hosted with Shopify. I do designs in whatever notebook I have to hand - usually a Moleskine.&lt;/p&gt;
&lt;p&gt;On both my work and personal setup, I take notes in a private repo on GitHub. It allows me to jot my thoughts quickly and frictionlessly. I also write post mortems in there for any projects I&#39;m working on. I&#39;ve never been in an organisation that does public post mortems, but I find them good practice for really nailing down the issue at the heart of a problem.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I&#39;ve got tonnes more to talk about here, but I want to keep this post short so I&#39;ll bid you all adieu and goodbye. In the next post I&#39;ll discuss my personal slack hub and automations I&#39;ve been working on. Stay Tuned!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to use Private GitHub Packages on TravisCI</title>
    <link href="/private-github-packages-travis/"/>
    <updated>2020-03-09T09:22:03Z</updated>
    <id>/private-github-packages-travis/</id>
    <content type="html">&lt;blockquote&gt;
&lt;p&gt;The Problem: It&#39;s fairly well documented how to use private NPM packages in a project that uses TravisCI, but what about the GitHub Package Registry?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This was the issue I was facing. I was googling all over the net and finally landed on a solution to solve this problem.&lt;/p&gt;
&lt;p&gt;In practice, TravisCI just boots a VM or container that then runs the scripts you have defined. We can leverage the &lt;code&gt;before_install&lt;/code&gt; script to set Travis up as a new GitHub Package user. Here is how to do it...&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If you&#39;re using a team, you will want to create a new user account and add it to your team. This is so we can safely generate a GitHub Token without fear of the person leaving the business in the future etc. Add this new bot user account to your GitHub Team&lt;/li&gt;
&lt;li&gt;Generate a new personal access token on the new bot user &lt;a href=&quot;https://github.com/settings/tokens/new&quot;&gt;here&lt;/a&gt;. It will need access to &lt;code&gt;write:packages&lt;/code&gt; and &lt;code&gt;read:packages&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add the newly generated token to TravisCI&#39;s environment variables as &lt;code&gt;GITHUB_ACCESS_TOKEN&lt;/code&gt; - you will need to do this for each project that requires usage of the private package&lt;/li&gt;
&lt;li&gt;Add the following to your &lt;code&gt;before_install&lt;/code&gt; section of your &lt;code&gt;.travis.yml&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;before_install:
  - echo &amp;quot;//npm.pkg.github.com/:_authToken=${GITHUB_ACCESS_TOKEN}&amp;quot; &amp;gt; .npmrc
  - npm config --global set &amp;lt;YOUR ORG&amp;gt;:registry https://npm.pkg.github.com
  - cp .npmrc ~/
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Swap the &lt;code&gt;&amp;lt;YOUR ORG&amp;gt;&lt;/code&gt; with your GitHub Organization name without the &lt;code&gt;@&lt;/code&gt; - in our case the line would read &lt;code&gt;npm config --global set k0ru:registry https://npm.pkg.github.com&lt;/code&gt; - &lt;strong&gt;this org name should also be contained within the package name that you publish&lt;/strong&gt;, so in the &lt;code&gt;package.json&lt;/code&gt; file for your &lt;strong&gt;package&lt;/strong&gt; the name should be &lt;code&gt;@&amp;lt;YOUR ORG&amp;gt;/package-name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Push up your new &lt;code&gt;.travis.yml&lt;/code&gt; file and kick off a build!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Hey presto that should all fit together and download correctly. Short article but it&#39;s the article I wish existed when I was searching for an answer!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Create a Pinned Gist Bot in 10 minutes with GitHub Actions</title>
    <link href="/how-to-create-gist-bot/"/>
    <updated>2019-11-14T14:58:00Z</updated>
    <id>/how-to-create-gist-bot/</id>
    <content type="html">&lt;p&gt;Recently I stumbled upon an awesome page I hadn&#39;t seen before &lt;a href=&quot;https://github.com/matchai/awesome-pinned-gists&quot;&gt;awesome pinned gists&lt;/a&gt;. The premise of the list is small apps that run GitHub actions on a schedule to update a gist that is then pinned to your profile.
There are ones for monitoring your &lt;a href=&quot;https://github.com/matchai/waka-box&quot;&gt;Wakatime&lt;/a&gt;, your &lt;a href=&quot;https://github.com/matchai/bird-box&quot;&gt;last tweet&lt;/a&gt; or even your &lt;a href=&quot;https://github.com/JohnPhamous/strava-box&quot;&gt;Strava Metrics&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It inspired me to create my own using RescueTime data - called &lt;a href=&quot;https://github.com/joshghent/rescue-box&quot;&gt;rescue-box&lt;/a&gt; (if you notice, all the apps are appended with &lt;code&gt;-box&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Here is how I created rescue-box and &lt;em&gt;how you can create your own Pinned Gist Bot!&lt;/em&gt; This whole process took me around 10 minutes and is a fairly easy process.&lt;/p&gt;
&lt;h2&gt;1. Pick an Idea&lt;/h2&gt;
&lt;p&gt;The first and most important step is to have an idea for what data you want to display in your pinned gist. In my case, I chose RescueTime productivity data, but below are some other ideas if you&#39;re a bit stuck&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FitBit Step Count&lt;/li&gt;
&lt;li&gt;JIRA ticket last complete by yourself&lt;/li&gt;
&lt;li&gt;Todoist Tasks completed today&lt;/li&gt;
&lt;li&gt;Reddit upvotes for your user&lt;/li&gt;
&lt;li&gt;PocketCast listening time&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;2. Get the data&lt;/h2&gt;
&lt;p&gt;Now once you have chosen your idea you need to find where you can get the data for your app.
Most services have a public API, so try Googling &lt;code&gt;&amp;lt;APP NAME&amp;gt; API documentation&lt;/code&gt;. In my case, I found the RescueTime documentation &lt;a href=&quot;https://www.rescuetime.com/apidoc&quot;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Inside the documentation, you will be able to find the endpoint you need to call to get the data as well as information about how to authenticate the request.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;./../../assets/images/rescuetime-api.png&quot; /&gt;
  &lt;em&gt;The RescueTime API Docs I used&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Once you have the request, mock it up in a tool such as Postman or Insomnia so you can see how the data will come back from the API.&lt;/p&gt;
&lt;p&gt;This is the trickiest part of the process: your idea may not have a public API, or the authorization may be too difficult to implement. In that case, try another idea until you find one that works.&lt;/p&gt;
&lt;h2&gt;3. Get the foundations&lt;/h2&gt;
&lt;p&gt;To bypass the boring setup, clone my rescue-box repo. This will give you a great starting point - especially if you are building something in the same format as I did.
You can do this by running&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git clone https://github.com/joshghent/rescue-box
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Change all the references to rescue-box to your own app name, and replace the &lt;code&gt;RESCUETIME_API_KEY&lt;/code&gt; references with your own service&#39;s name (e.g., &lt;code&gt;FITBIT_API_KEY&lt;/code&gt;). More importantly, update the documentation on how to set up the application - include information and links on where to get the API key for the service you are integrating with.&lt;/p&gt;
&lt;p&gt;Now run &lt;code&gt;npm install&lt;/code&gt; in your terminal to install the dependencies&lt;/p&gt;
&lt;h2&gt;4. Modify the API call&lt;/h2&gt;
&lt;p&gt;Inside the &lt;code&gt;index.js&lt;/code&gt; file in the &lt;code&gt;main()&lt;/code&gt; function, change the API call URL to be the one for your service. Additionally, change the environment variables loaded at the top with the ones for your service.&lt;/p&gt;
&lt;p&gt;Next, inside the &lt;code&gt;updateGist&lt;/code&gt; function, change the code from &lt;code&gt;line 30&lt;/code&gt; onwards to be what you want to inject into the pinned gist.&lt;/p&gt;
&lt;p&gt;Rescue-box injects a line for each type of &amp;quot;productivity&amp;quot; metric and then has a bar next to each of them out of 100%. However, in the example of a FitBit step tracker, there is no &amp;quot;percentage&amp;quot; for the number of steps you take per day so this can be removed and replaced with other information if you wish.&lt;/p&gt;
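&lt;p&gt;To illustrate the bar idea (this is a hypothetical helper, not the actual rescue-box code, which is JavaScript), a percentage can be rendered as a fixed-width text bar like so:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Hypothetical helper: render a percentage as a 20-character bar.
bar() {
  pct=$1; width=20; out=""; i=0
  filled=$(( pct * width / 100 ))
  while [ "$i" -lt "$width" ]; do
    # fill the first `filled` slots, pad the rest
    if [ "$i" -lt "$filled" ]; then out="${out}#"; else out="${out}-"; fi
    i=$(( i + 1 ))
  done
  printf '%s %d%%\n' "$out" "$pct"
}
bar 75   # → ###############----- 75%
```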
&lt;p&gt;It is worth noting that you can only have 5 lines displayed in a gist on your profile at a time. In the case of Rescue-box we use the first line to render the date the stats were taken from, leaving 4 lines for the information.&lt;/p&gt;
&lt;h2&gt;5. Test it locally&lt;/h2&gt;
&lt;p&gt;By this point, you should have followed the instructions in the setup guide and set yourself up with an API key, a gist and a GitHub token. If you haven&#39;t already, now is the time to do that.
Now rename the &lt;code&gt;sample.env&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; and add your application secrets.&lt;/p&gt;
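&lt;p&gt;As an example, a filled-in &lt;code&gt;.env&lt;/code&gt; might look like the following (variable names taken from the workflow used later in this post; the values are placeholders only):&lt;/p&gt;

```shell
# .env - placeholder values only
GH_TOKEN=your-github-token
GIST_ID=your-gist-id
RESCUETIME_API_KEY=your-rescuetime-key
```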
&lt;p&gt;Next, run the following command to run the application and update your gist!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ node index.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you go and view your Gist in the browser, this should have successfully updated it with the information you want. If it has worked, then proceed to the next step, otherwise - time for bug fixes! 🐛&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;./../../assets/images/rescuebox.png&quot; /&gt;
  &lt;em&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;6. Publish it to GitHub Actions&lt;/h2&gt;
&lt;p&gt;Now the automation bit, getting the app to run on GitHub actions.&lt;/p&gt;
&lt;p&gt;If you&#39;re not familiar with GitHub actions, it&#39;s a task runner, similar to Jenkins and can do all sorts of things like publishing to NPM or Docker Hub, building apps and running test suites. For our application, we will be using it to run our app on a schedule every 10 minutes.&lt;/p&gt;
&lt;p&gt;The prerequisite to this is to set up your repo (as documented in the rescue-box instructions). If you haven&#39;t already created a repo - &lt;a href=&quot;https://repo.new&quot;&gt;create one&lt;/a&gt;. You will need to add your API key, gist Id and GitHub token into &lt;code&gt;Settings &amp;gt; Secrets&lt;/code&gt; for the repo.&lt;/p&gt;
&lt;p&gt;Next, modify the &lt;code&gt;.github/workflows/schedule.yml&lt;/code&gt; file with the name of the app - this is how the job will display in GitHub actions.&lt;/p&gt;
&lt;p&gt;Now update the &lt;code&gt;Update gist&lt;/code&gt; action &lt;code&gt;uses:&lt;/code&gt; repo. Currently, this should be pointing to &lt;code&gt;joshghent/rescue-box@master&lt;/code&gt; but change this to your user name and repo (e.g., &lt;code&gt;joe-bloggs/fit-box@master&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;Below, in the &lt;code&gt;env:&lt;/code&gt; section, replace the &lt;code&gt;RESCUETIME_API_KEY&lt;/code&gt; with whatever it is named in your application (e.g., &lt;code&gt;FITBIT_API_KEY&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;Your finished GitHub workflow should look similar to this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;name: Update gist with FitBit Stats
on:
  schedule:
    - cron: &amp;quot;*/10 * * * *&amp;quot;
jobs:
  update-gist:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Update gist
        uses: joe-bloggs/fit-box@master
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
          GIST_ID: ${{ secrets.GIST_ID }}
          FITBIT_API_KEY: ${{ secrets.FITBIT_API_KEY }}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;7. Push your app to your master branch&lt;/h2&gt;
&lt;p&gt;The final step is to push all your code from your machine to the master branch on GitHub!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git add -A &amp;amp;&amp;amp; git commit -m &amp;quot;:tada: Initial commit&amp;quot; &amp;amp;&amp;amp; git push origin master
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;✨ All done! Now you&#39;ve got an awesome automated pinned gist application running on GitHub Actions&lt;/h2&gt;
&lt;h3&gt;[OPTIONAL] 8. Add it to Awesome Pinned Gists&lt;/h3&gt;
&lt;p&gt;The final step is to add it to &lt;a href=&quot;https://github.com/matchai/awesome-pinned-gists&quot;&gt;awesome pinned gists&lt;/a&gt; so everyone can appreciate your gist-based genius!&lt;/p&gt;
&lt;p&gt;Happy hacking - reach out to me &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt; if you get stuck at all!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Managing Application Secrets for Terraform across Teams</title>
    <link href="/terraform-secrets/"/>
    <updated>2019-11-11T15:08:03Z</updated>
    <id>/terraform-secrets/</id>
    <content type="html">&lt;h3&gt;TL;DR&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Terraform stack is deployed via Travis using a script (below)&lt;/li&gt;
&lt;li&gt;Secrets are shared by storing an encrypted tar file in Git&lt;/li&gt;
&lt;li&gt;The tar is decrypted by TravisCI and any other team member using secret keys stored elsewhere&lt;/li&gt;
&lt;li&gt;Variable files are generated dynamically based on the &lt;code&gt;.env&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ok, that&#39;s a bit of a wordy title, but that&#39;s exactly the challenge I was tasked with solving recently - and it proved to be more of an issue than I expected.
The objective was to be able to use application secrets for ECS task definitions inside Terraform and have those be easily shared across the team.&lt;/p&gt;
&lt;h2&gt;🔐 Encrypting the Configuration&lt;/h2&gt;
&lt;p&gt;My first approach was simply to use environment variables and then put the environment file in secure storage (e.g., Keybase) rather than committing it to source control. Simple, right?
So that works, but in our case, we have TravisCI do the deployments for our stack and so that approach would have been lengthy whenever we wanted to add a new secret parameter.&lt;/p&gt;
&lt;p&gt;To solve this, I created a script that encrypts and decrypts a config with OpenSSL encryption. These encryption keys could then be shared across the CI and the rest of the team. It would be ideal to use a GPG key for this but I do not believe this is possible in CI. Plus it&#39;s a pain to onboard a new team member as you need to re-encrypt the secrets with their public key - less than ideal.&lt;/p&gt;
&lt;p&gt;We have a file called &lt;code&gt;.env&lt;/code&gt; in the root of the Terraform repo with the environment variables as you&#39;d usually find them. This is ignored from source control.
We then have one script to decrypt and untar the config using OpenSSL, and another to encrypt it into a &lt;code&gt;tar&lt;/code&gt;. The encrypted tar is committed to source control, where it can then be decrypted by Travis and any other team member with access.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# encrypt-config.sh
#!/bin/bash

echo &amp;quot;Compressing config...&amp;quot;
tar czf config.tar.gz .env

echo &amp;quot;Encrypting config tarball...&amp;quot;
openssl enc -aes-256-cbc &#92;
  -in ./config.tar.gz &#92;
  -out ./config.tar.gz.enc &#92;
  -K ${CI_ENC_KEY} &#92;
  -iv ${CI_ENC_IV}

rm config.tar.gz

# decrypt-config.sh
#!/bin/bash

echo &amp;quot;Decrypting config...&amp;quot;
openssl enc -aes-256-cbc -d &#92;
  -in config.tar.gz.enc &#92;
  -out config.tar.gz &#92;
  -K ${CI_ENC_KEY} &#92;
  -iv ${CI_ENC_IV}

echo &amp;quot;Extracting config...&amp;quot;
tar xzf config.tar.gz
rm config.tar.gz
&lt;/code&gt;&lt;/pre&gt;
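&lt;p&gt;For completeness, the values behind &lt;code&gt;CI_ENC_KEY&lt;/code&gt; and &lt;code&gt;CI_ENC_IV&lt;/code&gt; can be generated once and stored in Travis and your team&#39;s secret store. A sketch of how - &lt;code&gt;openssl enc&lt;/code&gt;&#39;s &lt;code&gt;-K&lt;/code&gt; and &lt;code&gt;-iv&lt;/code&gt; flags expect hex strings, 64 and 32 characters respectively for AES-256-CBC:&lt;/p&gt;

```shell
# Generate a 256-bit key (64 hex chars) and a 128-bit IV (32 hex chars)
# suitable for openssl enc -aes-256-cbc -K / -iv.
CI_ENC_KEY=$(openssl rand -hex 32)
CI_ENC_IV=$(openssl rand -hex 16)
echo "${#CI_ENC_KEY} ${#CI_ENC_IV}"   # → 64 32
```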
&lt;h2&gt;☝ Loading the variables&lt;/h2&gt;
&lt;p&gt;Terraform environment variables must be prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt;, since I didn&#39;t want the laborious process of adding this to each variable, I wrote a script to prefix them and then load the variables into the shell environment. The script can be found below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# loadenv.sh
export $(egrep -v &#39;^#&#39; .env | while read line; do echo &amp;quot;TF_VAR_$line&amp;quot;; done | xargs)
&lt;/code&gt;&lt;/pre&gt;
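&lt;p&gt;To see what the one-liner produces, here is the same transformation run against some hypothetical &lt;code&gt;.env&lt;/code&gt; contents:&lt;/p&gt;

```shell
# Demonstration of the TF_VAR_ prefixing on hypothetical .env contents:
# comment lines are dropped, every other line gains the prefix.
printf 'FOO=bar\n# comment\nBAZ=qux\n' | grep -v '^#' | while read line; do
  echo "TF_VAR_$line"
done
# → TF_VAR_FOO=bar
#   TF_VAR_BAZ=qux
```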
&lt;p&gt;This then meant that my variables could be referenced in terraform by adding them to the root &lt;code&gt;variables.tf&lt;/code&gt; like this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-terraform&quot;&gt;variable &amp;quot;REACT_APP_ENVIRONMENT_VAR&amp;quot; {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;...and then passed into the module like this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-terraform&quot;&gt;// main.tf
module &amp;quot;Sample&amp;quot; {
  source                                = &amp;quot;./modules/general-cluster&amp;quot;
  REACT_APP_ENVIRONMENT_VAR                = var.REACT_APP_ENVIRONMENT_VAR
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In our shell the variable would be loaded as &lt;code&gt;TF_VAR_REACT_APP_ENVIRONMENT_VAR&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;🤖 Automating&lt;/h2&gt;
&lt;p&gt;But I wanted to go a step further, because adding to the root &lt;code&gt;variables.tf&lt;/code&gt; each time is a pain. Me thinks, time for a script... so I did some digging and added this to the end of the &lt;code&gt;loadenv.sh&lt;/code&gt; from earlier&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clear the old variables.tf file out
&amp;gt; variables.tf

# Loop through each line of our .env file
egrep -v &#39;^#&#39; .env | while read line;
do
  # Get the first part (before the =) of the line for the variable name
  var_name=$( cut -d &#39;=&#39; -f 1 &amp;lt;&amp;lt;&amp;lt; &amp;quot;$line&amp;quot; )

  # Write it to the variables.tf file
  cat &amp;gt;&amp;gt; variables.tf &amp;lt;&amp;lt;EOL
variable &amp;quot;${var_name}&amp;quot; {
  type = string
}
EOL
done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This now automagically generates a &lt;code&gt;variables.tf&lt;/code&gt; file at the root of the Terraform folder. Travis can run this &lt;code&gt;loadenv.sh&lt;/code&gt; script based on the encryption keys it has stored in its own environment. I created yet another script called &lt;code&gt;deploy.sh&lt;/code&gt; that Travis runs only on the master branch. As the name implies, it handles the deployment and the notifications around it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
# deploy.sh

set -e

if [[ $TRAVIS_BRANCH == &#39;master&#39; ]]
then
  errorstatus() {
    echo &amp;quot;Error when deploying Terraform config&amp;quot;
    # Slack Webhook Message
    curl -X POST --data-urlencode &amp;quot;payload={&#92;&amp;quot;channel&#92;&amp;quot;: &#92;&amp;quot;#deployments&#92;&amp;quot;, &#92;&amp;quot;username&#92;&amp;quot;: &#92;&amp;quot;Deploy Bot&#92;&amp;quot;, &#92;&amp;quot;text&#92;&amp;quot;: &#92;&amp;quot;:poop: Build #$TRAVIS_BUILD_NUMBER Failed when deploying Terraform Stack. Error log $TRAVIS_BUILD_WEB_URL&#92;&amp;quot;, &#92;&amp;quot;icon_emoji&#92;&amp;quot;: &#92;&amp;quot;:rocket:&#92;&amp;quot;}&amp;quot; &amp;quot;$SLACK_WEBHOOK_URL&amp;quot;
  }

  # When exiting due to an error, run the error status
  trap errorstatus ERR

  . ./decrypt-config.sh

  # shellcheck disable=SC1091
  source ./loadenv.sh
  terraform init
  terraform validate
  terraform apply -auto-approve
  curl -X POST --data-urlencode &amp;quot;payload={&#92;&amp;quot;channel&#92;&amp;quot;: &#92;&amp;quot;#deployments&#92;&amp;quot;, &#92;&amp;quot;username&#92;&amp;quot;: &#92;&amp;quot;Deploy Bot&#92;&amp;quot;, &#92;&amp;quot;text&#92;&amp;quot;: &#92;&amp;quot;:tada: Build $TRAVIS_BUILD_NUMBER successfully deployed Terraform Stack&#92;&amp;quot;, &#92;&amp;quot;icon_emoji&#92;&amp;quot;: &#92;&amp;quot;:rocket:&#92;&amp;quot;}&amp;quot; &amp;quot;$SLACK_WEBHOOK_URL&amp;quot;
  echo &amp;quot;Deployment Completed Successfully&amp;quot;
else
  echo &amp;quot;Branch is not master so skipping deployment&amp;quot;
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After all this has run, I end up with a nice deployment message in Slack!&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;./../../assets/images/deployment-success.png&quot; /&gt;
  &lt;em&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;And that&#39;s all! We are still iterating on our approach to Terraform and working to get it running against our entire stack rather than just parts of it. I am enjoying learning it so far, and even if it&#39;s a bit rough around the edges, I&#39;m excited to see where Terraform goes!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Monitoring Git Leaks in Travis</title>
    <link href="/gitleaks-travis/"/>
    <updated>2019-11-08T12:11:00Z</updated>
    <id>/gitleaks-travis/</id>
    <content type="html">&lt;p&gt;Recently, we&#39;ve wanted to add Gitleaks scanning into our repos to keep on top of any potential security issues.
I checked out a number of tools such as &lt;a href=&quot;https://github.com/Yelp/detect-secrets&quot;&gt;detect-secrets&lt;/a&gt; and &lt;a href=&quot;https://github.com/dxa4481/truffleHog&quot;&gt;trufflehog&lt;/a&gt; but eventually I decided to use &lt;a href=&quot;https://github.com/zricethezav/gitleaks&quot;&gt;Gitleaks&lt;/a&gt; as the format was fairly CI friendly.&lt;/p&gt;
&lt;p&gt;There is already a CI version of &lt;a href=&quot;https://github.com/zricethezav/gitleaks-ci&quot;&gt;Gitleaks&lt;/a&gt; but it uses a stripped down version of &lt;a href=&quot;https://github.com/zricethezav/gitleaks&quot;&gt;Gitleaks&lt;/a&gt; with basic regex.
I wanted to use the fully fledged version that was updated a bit more regularly. Additionally, with the CI version you had to configure a few environment variables which I didn&#39;t want to do with every single repository.&lt;/p&gt;
&lt;p&gt;Since there was not much documentation on how to use it in CI, I decided to write this post.&lt;/p&gt;
&lt;p&gt;Simply add this script at &lt;code&gt;/.ci/leaks.sh&lt;/code&gt;.
It will only audit the current commit in the local repo.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash

if [ -n &amp;quot;$TRAVIS_PULL_REQUEST&amp;quot; ]; then
    REPO_SLUG=&amp;quot;/${TRAVIS_REPO_SLUG}&amp;quot;

    # Audit the current commit for secrets
    docker run --rm --name=gitleaks -v &amp;quot;$PWD:$REPO_SLUG&amp;quot; zricethezav/gitleaks -v --repo-path=&amp;quot;$REPO_SLUG&amp;quot; --commit=&amp;quot;$TRAVIS_COMMIT&amp;quot;
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, add this to your &lt;code&gt;.travis.yml&lt;/code&gt;. Alternatively, just add an additional &amp;quot;script&amp;quot; entry if you don&#39;t want separate stages.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- stage: Leaks
  language: generic
  script:
    - &amp;quot;./.ci/leaks.sh&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Additionally, add &lt;code&gt;docker&lt;/code&gt; as a new service in the &lt;code&gt;.travis.yml&lt;/code&gt;.&lt;/p&gt;
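&lt;p&gt;For reference, that amounts to a small addition to the &lt;code&gt;.travis.yml&lt;/code&gt; (a sketch to merge into your existing config):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;services:
  - docker
&lt;/code&gt;&lt;/pre&gt;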
&lt;p&gt;That&#39;s it! Tweet me &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt; if you have any problems.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Signal vs Noise - Staying Up to Date</title>
    <link href="/signal-vs-noise/"/>
    <updated>2019-10-16T11:40:03Z</updated>
    <id>/signal-vs-noise/</id>
    <content type="html">&lt;p&gt;Technology is so fast paced that to stay up to date, you &lt;em&gt;need&lt;/em&gt; to be learning on a daily basis. However, the internet is so awash with vast swaths of information of varying accuracy and importance that it&#39;s difficult to filter the signal from the noise and only consume that which will be of lasting importance.&lt;/p&gt;
&lt;p&gt;This is a challenge I&#39;ve faced in my own career: staying up to date and learning about new technologies in an 80/20 fashion. In other words, what is the 20% of content I can &amp;quot;consume&amp;quot; for 80% of the impact?&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why is &amp;quot;staying up to date&amp;quot; important though?&lt;/h2&gt;
&lt;p&gt;Recently, Corey Quinn, writer of Last Week in AWS, had an interesting point when answering the question of &lt;a href=&quot;https://www.techrepublic.com/article/aws-billing-is-broken-and-kubernetes-wont-last-says-irreverent-economist-corey-quinn/?ck_subscriber_id=559247293&quot;&gt;&lt;em&gt;&amp;quot;What&#39;s the most consistently wrong thing you see AWS users do?&amp;quot;&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Quinn&lt;/em&gt;: The most consistent mistake that everyone makes when using AWS—this extends to life as well—is once people learn something, &lt;em&gt;they stop keeping current on that thing&lt;/em&gt;. There is an entire ecosystem of people who know something about AWS, with a certainty. That is simply no longer true, because capabilities change. Restrictions get relaxed. Constraints stop applying. If you learned a few years ago that there are only 10 tags permitted per resource, you aren&#39;t necessarily keeping current to understand that that limit is now 50.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I&#39;ve found this to be true in my own career. Often, there will be a workaround I have to do for a particular project, but then a new strategy is released that means there is no need for that workaround any more. Without &amp;quot;staying up to date&amp;quot; I would keep doing this workaround!&lt;/p&gt;
&lt;blockquote&gt;
&lt;h2&gt;The challenge remains however, that you cannot consume everything that is published, even about a specific technology. So how can you filter the signal vs noise?&lt;/h2&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Subscribing to newsletters&lt;/h2&gt;
&lt;p&gt;First and foremost, subscribe to these with your work email. Learning is a part of your work, and reading up on a work-relevant topic should be treated as such. Previously, I subscribed to newsletters on my personal email and found myself with a mountain of emails to read through on my way home from work - not fun, and I didn&#39;t get any benefit from the content.&lt;/p&gt;
&lt;p&gt;I find newsletters a great source of learning because they only include articles that stood out as particularly important. Additionally, each article usually has a short summary that helps you decide whether to read it or not.&lt;/p&gt;
&lt;p&gt;There is even a newsletter for &lt;a href=&quot;https://www.hackernewsletter.com/&quot;&gt;HackerNews&lt;/a&gt; so you can have it boiled down as a sort of &amp;quot;Greatest Hits&amp;quot; of the week. This helps you feel connected to the community without having to spend all day scrolling.&lt;/p&gt;
&lt;h2&gt;Go to Meetups&lt;/h2&gt;
&lt;p&gt;I&#39;m a big advocate of meetups for learning. Why? Because it&#39;s dedicated time to focus on one thing. You&#39;re not (or at least shouldn&#39;t be) checking Twitter at the same time; you&#39;re just listening and learning.&lt;/p&gt;
&lt;p&gt;Furthermore, it&#39;s a great way to surround yourself with like-minded people and build on your ideas. You can gain real insight into how people learned a topic or even got into the industry in the first place.&lt;/p&gt;
&lt;h2&gt;Join Slack/Discord/Forum Communities focused around a technology or topic&lt;/h2&gt;
&lt;p&gt;If you go to meetups, they will often have a Slack or Discord community that you can join to talk with other attendees. I&#39;d highly recommend joining these, but be selective rather than joining and participating in absolutely every single one. Encourage conversation by asking questions, even if you think they are dumb - people will help you out.
Slack is particularly good for meetups because it allows you to build relationships with people in the local area. This makes it easier to look for new opportunities, to collaborate and perhaps to secure funding for your next great idea.&lt;/p&gt;
&lt;p&gt;Additionally, if you have an interest in a particular technology or area of expertise, these will likely have a community around them also. Personally, I really love the Chaos Engineering Slack as there is always lively discussion and a lot of links being shared on the topic. Again this allows you to filter down articles and gain insights on the topic that you otherwise would not know.&lt;/p&gt;
&lt;h2&gt;Follow relevant people on Twitter (+ setup Twitter mute keywords)&lt;/h2&gt;
&lt;p&gt;Despite the hate, I find Twitter a great asset in my learning and &amp;quot;staying up to date&amp;quot;. But it carries a couple of warnings.
The first is to set up Twitter mute keywords. I have around a hundred or so of these (unfortunately they cannot be imported via CSV or other formats), which filter out everything I don&#39;t want to see (basically anything aside from tech). Additionally, I steer clear of politics and other &amp;quot;drama&amp;quot; that springs up in the community. Some may find this to be &amp;quot;putting up the shutters&amp;quot; so to speak, but I get my news elsewhere and my life is a lot happier not reading about other people&#39;s personal lives.&lt;/p&gt;
&lt;p&gt;The second is to be selective about who you follow in the community. Go to your favourite &amp;quot;tweeter&amp;quot;, look at who they follow and follow them all; if you then find someone&#39;s content off topic, you can simply unfollow them. Over time you can build a highly curated list of people who give you great content.&lt;/p&gt;
&lt;h2&gt;Listen to Podcasts (selectively) and Books&lt;/h2&gt;
&lt;p&gt;Listening to content whilst commuting is a great way to learn whilst on-the-go. There are a large array of podcasts that I subscribe to, both technical and non-technical as well as an Audible subscription for a new book each month.&lt;/p&gt;
&lt;p&gt;Podcasts used to be a major source of stress for me as they would stack up in my &amp;quot;To Listen&amp;quot; playlist waiting to be listened to. Now I am much more rigorous about which podcasts I subscribe to and listen to. If I am subscribed to a weekly show, I don&#39;t try to listen to every episode and just dive in on whichever one I fancy.
PocketCasts still tells me, however, that since July 28th 2016 I&#39;ve listened to a staggering 84 days and 3 hours of content. Whilst I listen to this alongside doing other things, such as shopping or commuting, it makes me realise two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I barely remember a single one of those podcasts I&#39;ve listened to&lt;/li&gt;
&lt;li&gt;I could have read/listened to a lot of books in those 84 days that would have had far greater impact.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It was for those two reasons that I have now switched almost exclusively to listening to audiobooks. They give you great insights into the world, whether technical or non-technical. Recently, I&#39;ve been listening to a number of psychology and history books, which has tangentially given me new perspectives on technology.&lt;/p&gt;
&lt;p&gt;I don&#39;t really have time to read physical paperbacks (despite loving the tactility), so audiobooks are a great way to read without reading. I find my comprehension and memory better if anything when compared to regular reading as I lean towards auditory learning.&lt;/p&gt;
&lt;h3&gt;Relax&lt;/h3&gt;
&lt;p&gt;... you don&#39;t need to be learning all the time. Although important, you should not overwhelm yourself with media to consume on a near constant basis. That is why I chose to not include tools like Pocket in this article. For myself, they led to massive FOMO and anxiety around how many articles I had stacking up. I read hundreds of articles previously, and truth be told, I remember almost none of them. What I do remember however, is books. For me, they provide much deeper insights and introduce new world views that allow me to learn in new ways.&lt;/p&gt;
&lt;p&gt;Additionally, although it&#39;s good to use these different channels, make sure not to be distracted by them; instead, give each its own block of time to focus on individually.&lt;/p&gt;
&lt;p&gt;I have set myself the rule that if a tab has been open for more than 10 days, then even if I think it might be the greatest piece of literature ever written, I close it. You have to reason that if it&#39;s that good, you&#39;ll probably hear about it by some other means.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Pentest Aftermath</title>
    <link href="/pentest/"/>
    <updated>2019-10-09T14:06:00Z</updated>
    <id>/pentest/</id>
    <content type="html">&lt;p&gt;Recently, &lt;a href=&quot;https://joinkoru.com&quot;&gt;Koru&lt;/a&gt; had a penetration test done by an independent third party. The actual test took place a little before I had joined but the results only came in afterwards. Having never read through a pentest report, I was curious to see what they would find and more importantly how. Having been listening to copious amounts of DarkNet Diaries, I thought we were in for something along the lines of &amp;quot;we have all your data and locked you out&amp;quot; level of intrusion.&lt;/p&gt;
&lt;p&gt;Thankfully, that wasn&#39;t the case. Instead we got a 6 page report outlining the findings. Being fairly unfamiliar with the system at the time (it was literally my first week), it was interesting to gain the insights that the document provided.
Reflecting on it now, the results aren&#39;t incredibly surprising: we use a set of very tight VPCs to separate each client&#39;s data, and the only frontend exposure is on a static website, so there is no scope for XSS attacks.&lt;/p&gt;
&lt;p&gt;The first discovery, which was considered the number one priority, turned out to be rather humorous. It was discovered that the API returned an &amp;quot;X-Powered-By&amp;quot; header (which tells the client what library or language the service is written in) of &amp;quot;PHP 5.1&amp;quot;.
I was a bit puzzled by this as the entire stack was JavaScript or Python. After some digging I discovered the following code in the Express server declaration.&lt;/p&gt;
&lt;br /&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/header-code.png&quot; /&gt;
	&lt;em&gt;The express X-Powered-By header code forgery&lt;/em&gt;
&lt;/div&gt;
&lt;br /&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/illusion-100.jpeg&quot; /&gt;
	&lt;em&gt;Sneaky beaky&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;This was a sneaky tactic... and very funny. We removed it so that no X-Powered-By header is returned at all.&lt;/p&gt;
&lt;p&gt;Additionally, the pentest pointed out that we did not have password reset functionality inside the application. This didn&#39;t seem like a security issue, but the reasoning was that if an account were compromised, the user could take protective action and change their password right there and then. An edge case certainly, but interesting nonetheless.
This feature has been more of a pain than we anticipated, since Auth0 helpfully doesn&#39;t return the &amp;quot;reason&amp;quot; why a password was too weak when changing it via the API. This makes for poor UX, so although the feature has been built, it has not been released.&lt;/p&gt;
&lt;p&gt;In connection with this, they pointed out that our password policy was incredibly lenient. Since we use Auth0 as our identity provider, this was a simple matter of upgrading the password requirements, turning on their common password list protection (to prevent highly used passwords like &lt;a href=&quot;http://bash.org/?244321&quot;&gt;&amp;quot;hunter2&amp;quot;&lt;/a&gt;) as well as configuring a bespoke list of blacklisted words such as our company name and some industry specific terms.&lt;/p&gt;
&lt;p&gt;Another feature we lacked was multi-factor authentication. This was an important feature to have, as enterprise customers increasingly require it from their suppliers. Again, since we use Auth0 this was fairly trivial to add, despite bending the UX flow they recommend (which isn&#39;t that great).&lt;/p&gt;
&lt;p&gt;After this came a number of minor points around the Content Security Policy as well as the headers that were being sent with requests. In the case of the latter, this was solved by the defaults of helmetjs. For the CSP, this required a lot of back and forth configuring all the URLs we let through. In doing so, it made me realise just how much of the web is powered by third parties. Our site has an Intercom integration, Auth0, a few JS and CSS libraries and some Google Analytics for good measure, but removing even one of those components had ripple effects right across the site. The web is incredibly fragile and it&#39;s made me wonder whether we need all of it.&lt;/p&gt;
&lt;p&gt;I personally have Google Analytics configured on my blog, but I can&#39;t tell you the last time I looked at it or did anything with the data. It has simply been hoovering up data for the big Google machine. The case is similar with the Intercom button: I&#39;ve personally never once used one and it seems, as this recent tweet states, that no one else does either...&lt;/p&gt;
&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Here&amp;#39;s why these auto-messages with the same text for every visitor simply don&amp;#39;t work (over 14K sent, 88% opened, only 3% replied and thus 85% just distracted and probably annoyed): &lt;a href=&quot;https://t.co/GOAVQAp4BJ&quot;&gt;pic.twitter.com/GOAVQAp4BJ&lt;/a&gt;&lt;/p&gt;&amp;mdash; Yury Smykalov (@ysmykalov) &lt;a href=&quot;https://twitter.com/ysmykalov/status/1182194935967211520?ref_src=twsrc%5Etfw&quot;&gt;October 10, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;
&lt;p&gt;Are these things a bad thing? Sometimes yes, sometimes no. I know a number of companies, and even people&#39;s jobs, literally revolve around Google Analytics, and that&#39;s super cool. But for a lot of websites and apps, it&#39;s just not needed. It&#39;s another case of someone saying &amp;quot;oh we should have this so we can track X&amp;quot;. Then &amp;quot;X&amp;quot; is forgotten and the person who wanted it in the first place leaves, and it&#39;s just left. Silently slurping data in the shadows.&lt;/p&gt;
&lt;p&gt;That was a little tangent, but you get my point. Overall, my experience of having a pentest done was very positive and I&#39;d recommend it to any organisation. Security is something that is always &lt;em&gt;said&lt;/em&gt; to be priority number one on people&#39;s lists, but no action is taken; ergo it&#39;s not actually priority number one. These kinds of insights from a third party can quickly and easily give you industry-level knowledge on how to prevent issues proactively - which is a hell of a lot cheaper than fixing them retroactively.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Resiliency</title>
    <link href="/resiliency/"/>
    <updated>2019-10-08T11:43:03Z</updated>
    <id>/resiliency/</id>
    <content type="html">&lt;p&gt;At my previous post at &lt;a href=&quot;https://cloudcall.com&quot;&gt;CloudCall&lt;/a&gt;, I was responsible for the SMS/IM backend. Whilst it was being developed, we made the classic mistake of not worrying about resiliency or testing since we were so stacked with features and had a manual QA department to act as a big bug dragnet.&lt;/p&gt;
&lt;p&gt;Once things settled down, however, one of my first priorities (for my own sanity and peace of mind) was to focus on resiliency. Here are the steps I took, which contain principles that translate to almost any system. In my case, I was working with a suite of NodeJS APIs and consumer services hosted on ECS or Lambda.&lt;/p&gt;
&lt;h2&gt;Goals&lt;/h2&gt;
&lt;p&gt;Let&#39;s start off with some high level goals, because often resiliency is used to mean &amp;quot;stopping a service from ever crashing&amp;quot;. However, the goal of resiliency should be to accept that failures happen and focus instead on handling them gracefully.&lt;/p&gt;
&lt;p&gt;I had the objective of making 99% of errors auto-recover without my ever being told about them, while the remaining 1% I wanted to know about immediately. This covered not just &amp;quot;is service X healthy?&amp;quot;, but also whether the queue it listens to is being consumed at a rate lower than messages are being added to it, and so on.&lt;/p&gt;
&lt;p&gt;Making the web more resilient is a definite goal of mine, as I (like, I&#39;m sure, you, the reader) have been burned by going through a form submission, only for it to go wrong and have to do the entire thing again. Frustrating, right? One example stands out in my mind, where I submitted an application for a bank account. I left the confirmation screen open when I got distracted by something else. By the time I went back to the page, Chrome had decided to reload it, and the result was that I had no way to log in, a credit check done on me, and no bank account at the end of it! Funny in retrospect, perhaps, but it pointed to serious resiliency issues. If you start looking, you&#39;ll see them everywhere.&lt;/p&gt;
&lt;p&gt;Here&#39;s what I ended up doing to make our SMS/IM systems more resilient.&lt;/p&gt;
&lt;h2&gt;Backing Off&lt;/h2&gt;
&lt;p&gt;When the primary consumer service (lovingly named the Short Message Event Gatherer, or &amp;quot;SMEG&amp;quot; for short) hit an error, we originally had a retry mechanism that just requeued the message to be re-processed. It was then consumed immediately by another instance, which undoubtedly generated the same error (due to 3rd party outages etc.).&lt;/p&gt;
&lt;p&gt;The correct behaviour is to have a backing-off mechanism. This means that in the event of 3rd party failures (databases, external APIs etc.) they are given a chance to recover before the process is retried.&lt;/p&gt;
&lt;p&gt;We implemented a gradual back-off procedure in which the delay increased 3x with each attempt. So after the first failure it would retry after 1 minute, after the second it would wait 3 minutes, then 9 minutes, and so on.&lt;/p&gt;
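&lt;p&gt;The schedule is easy to sketch in a few lines of shell (illustrative only; our real implementation lived inside the consumer service):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Back-off schedule: the delay triples after every failed attempt
delay=1 # minutes
for attempt in 1 2 3 4; do
  echo &amp;quot;attempt $attempt failed, retrying in $delay minute(s)&amp;quot;
  delay=$((delay * 3))
done
# attempt 1 failed, retrying in 1 minute(s)
# attempt 2 failed, retrying in 3 minute(s)
# attempt 3 failed, retrying in 9 minute(s)
# attempt 4 failed, retrying in 27 minute(s)
&lt;/code&gt;&lt;/pre&gt;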
&lt;h2&gt;Die!&lt;/h2&gt;
&lt;p&gt;Hosting the application with ECS (Docker) forced us to think in a &amp;quot;cattle&amp;quot; rather than &amp;quot;pets&amp;quot; way of working with servers.&lt;/p&gt;
&lt;p&gt;When the application&#39;s health check failed or its heartbeat to RabbitMQ failed, it would immediately kill itself. This would then trigger ECS to boot a new task to take its place.&lt;/p&gt;
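&lt;p&gt;The pattern boils down to &amp;quot;fail loudly and exit&amp;quot;. A minimal sketch (the check here is a stub; the real service pinged its RabbitMQ heartbeat):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Stubbed dependency check, standing in for the RabbitMQ heartbeat
check_heartbeat() { return 1; }

if ! check_heartbeat; then
  echo &amp;quot;heartbeat failed, exiting so the orchestrator replaces this task&amp;quot;
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt;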
&lt;h2&gt;Health Check on Start&lt;/h2&gt;
&lt;p&gt;Linked with the above, when the application booted, we did a systems check to make sure it could connect to the database, RabbitMQ and a couple of third parties that we used.&lt;/p&gt;
&lt;p&gt;If this process failed on startup, it meant that ECS could try to boot the task in another availability zone, increasing its chance of success.&lt;/p&gt;
&lt;p&gt;We configured alerting on the back of this for when AWS could not boot a container, which added to our overall picture of the system&#39;s health.&lt;/p&gt;
&lt;h2&gt;Automated end-to-end monitoring&lt;/h2&gt;
&lt;p&gt;The SMS project was built as an event-driven microservices architecture, with lots of APIs and consumer services along the way. And although we had unit and integration tests for each of these services, the tests were still isolated to the scope of that particular service. There needed to be a way to guarantee that the whole pipe was flowing, not just the individual parts and their surroundings.&lt;/p&gt;
&lt;p&gt;To do this, we set about creating Node (and sometimes Python) scripts that would simulate sending and receiving SMS text messages. These scripts admittedly often broke due to the configuration being a bit hacky, but other than that they worked rather well. They were triggered on a CRON basis and gave us a lot of peace of mind that the entire product was ticking over well.&lt;/p&gt;
&lt;p&gt;It also covered for failings in the monitoring of the individual services: although they had health endpoints, these would often just &lt;code&gt;return res.status(200)&lt;/code&gt; rather than abiding by the advice above. This meant that when one of those services genuinely did go down (which they did in the beginning), the end-to-end monitoring notified us immediately.&lt;/p&gt;
&lt;h2&gt;CorrelationIds&lt;/h2&gt;
&lt;p&gt;Due to the microservice architecture patterns, tracing logs through the system quickly became a nightmare. To resolve this we implemented correlationIds that were passed from one service to the next. In our case, they were generated by our API gateway - Kong, so we could trace the call into our infrastructure right from the source.&lt;/p&gt;
&lt;p&gt;This is not a unique idea but, looking back, it would be one of the first things I&#39;d do on a new system. As we had it set up, I would again configure these IDs to be generated at the gateway level, as I often ran into issues where requests from a third party to us would simply &amp;quot;disappear&amp;quot;.&lt;/p&gt;
&lt;h2&gt;Fail early&lt;/h2&gt;
&lt;p&gt;Throughout any system, there will be a number of &amp;quot;failure scenarios&amp;quot; that you have to handle. Previously, we handled all failures the same way: we just requeued the message until it eventually failed completely.
Soon though, we found a number of issues that should not have been retried, such as a lack of credit on the account. For these cases, we created a way to categorise the errors: some were critical and others were just warnings. This meant more messages went all the way through (maintaining a minimum viable service level) and users got feedback sooner on issues that were their fault (sorry!)&lt;/p&gt;
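&lt;p&gt;A rough sketch of the idea in shell (the error names here are hypothetical; ours came from the third-party APIs):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Categorise an error code: some failures are the user&#39;s fault and
# should fail fast; everything else is requeued with back-off
handle_error() {
  case &amp;quot;$1&amp;quot; in
    NO_CREDIT|INVALID_RECIPIENT)
      echo &amp;quot;fail fast: $1&amp;quot; ;; # surface to the user immediately
    *)
      echo &amp;quot;requeue: $1&amp;quot; ;; # transient, retry with back-off
  esac
}

handle_error NO_CREDIT # fail fast: NO_CREDIT
handle_error TIMEOUT   # requeue: TIMEOUT
&lt;/code&gt;&lt;/pre&gt;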
&lt;h2&gt;Aggressive feature flagging&lt;/h2&gt;
&lt;p&gt;Although we did not utilise a feature flagging tool such as LaunchDarkly (in retrospect we should have, but we didn&#39;t know about it at the time), we still aggressively feature flagged everything in the backend. I had a number of features land well in advance of when they were actually &amp;quot;turned on&amp;quot;, since my pace of work exceeded that of the frontend team. Often, I would create a feature, release it to our development environment, test it and sign it off. Then we would get a bug report or another, more important feature; I would build that on top of the previous feature, which made it a pain to release one without the other. Could that be solved with better release cycles? Perhaps. But often I would not know whether Feature B needed to ship before Feature A, or when Feature A&#39;s frontend would be done.&lt;/p&gt;
&lt;p&gt;Anyway, the solution was to add feature flags that we then toggled within the applications config that it pulled down from ASM. Easy-peasy. But this simple mechanism allowed code to be released and tested well in advance of when it was actually needed.&lt;/p&gt;
&lt;h2&gt;Alerting&lt;/h2&gt;
&lt;p&gt;Being notified of errors is critical in any good system, but in our case we had additional considerations around handling third parties. And since the system was driven by a consumer service, we needed to make sure that the queue was not backing up too far and that auto-scaling policies were working correctly.&lt;/p&gt;
&lt;p&gt;Fortunately, we were able to set up RabbitMQ to post its stats to Grafana, so we could create alerts in Grafana based on the different metrics coming from RabbitMQ - these could trigger auto-scaling policies and/or emails and Slack messages to the relevant parties.&lt;/p&gt;
&lt;p&gt;On one occasion we had an AZ outage where, although our service spun up correctly in another AZ, some other components that our service required didn&#39;t. We soon knew about this thanks to the reporting we had inside the service on its health in relation to the other services it depended on.&lt;/p&gt;
&lt;p&gt;The end-to-end monitoring also had an alerting component, since it notified us if the whole process of sending an SMS message took longer than a certain number of milliseconds.&lt;/p&gt;
&lt;h2&gt;Future Plans&lt;/h2&gt;
&lt;p&gt;Since I moved to a new company, I did not get the chance to execute on all of my resiliency plans.&lt;/p&gt;
&lt;p&gt;I had planned to add a new feature to our frontend that would allow people to manually retry messages (like on iMessage or WhatsApp). This would trigger a new Lambda that would requeue the message based on the data in the DynamoDB table.&lt;/p&gt;
&lt;p&gt;Additionally, I was looking to implement a feature whereby messages that failed completely (after retries) would be moved to a separate &amp;quot;Failed&amp;quot; DLX queue. If the app then managed to process a message successfully, it would start consuming from the &amp;quot;Failed&amp;quot; DLX queue as well.&lt;/p&gt;
&lt;p&gt;Furthermore, implementing some kind of chaos engineering in production would have properly and automatically tested all the work I had done around handling sudden outages. Although we tested these scenarios manually, either by firewalling or taking down the service, we did not re-test them with each new release, only when the work was originally done, so there is a chance they could break in the future. Automating tests like this makes it much harder to run into these types of issues and uncovers additional flaws in the system.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Each of these points could have been a blog post in its own right, but I believe it&#39;s often good to have a rough overview from a specific viewpoint and then research the implementation separately. I hope you enjoyed reading about this resiliency work and that it has provided some ideas for how you can build a more resilient and robust web!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Gatsby or Bust!</title>
    <link href="/gatsby-or-bust/"/>
    <updated>2019-08-29T11:37:00Z</updated>
    <id>/gatsby-or-bust/</id>
    <content type="html">&lt;p&gt;Recently, I moved my website from a static HTML file on GitHub pages (yes &lt;em&gt;actually&lt;/em&gt; static) and my blog from Medium. I decided to combine them both and move over to a Gatsby website.&lt;/p&gt;
&lt;h2&gt;Why?&lt;/h2&gt;
&lt;p&gt;I have been wanting to move my blog from Medium for a long time. Since the platform was built, they have struggled to find a viable business model and have resorted to increasingly user-hostile ways of attempting to get people to pay for content. I have thoughts on what they &lt;em&gt;should&lt;/em&gt; do but that&#39;s a topic for another time.
Anyway, I wanted people to actually read my blog and didn&#39;t like the mobile experience in particular.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://miro.medium.com/max/2560/1*6Mu_U4dUXP5uzebamoUYaw.png&quot; /&gt;
	&lt;em&gt;Source: &lt;a href=&quot;https://medium.com/@nikitonsky/medium-is-a-poor-choice-for-blogging-bb0048d19133&quot;&gt;https://medium.com/@nikitonsky/medium-is-a-poor-choice-for-blogging-bb0048d19133&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Additionally, I wanted to regain control of my content. I didn&#39;t like the idea that a platform could be gaining revenue from content that wasn&#39;t theirs. It wasn&#39;t as if I was using the platform for free; I had paid $70 or so to get it pointed to my own subdomain (a feature that they later dropped).&lt;/p&gt;
&lt;h2&gt;The Move&lt;/h2&gt;
&lt;p&gt;I chose Gatsby for two reasons: it seemed pretty quick, and it was easy to deploy and add new blog posts to. I could also keep everything inside git and the tools that I was already using for development work.&lt;/p&gt;
&lt;h3&gt;Deployments&lt;/h3&gt;
&lt;p&gt;I chose to host the site on Netlify and configured auto deployments from new commits on the master branch. I also configured my DNS provider with a CNAME from the root of my domain to the Netlify application.&lt;/p&gt;
&lt;p&gt;Along with this, I configured TravisCI to run a spell check on all my blog posts, as well as deployment previews for new PRs. This allows me to see new posts before they get merged into the live site.&lt;/p&gt;
&lt;h3&gt;Development&lt;/h3&gt;
&lt;p&gt;I started, like most people, with the &lt;a href=&quot;https://github.com/gatsbyjs/gatsby-starter-blog&quot;&gt;Gatsby starter blog&lt;/a&gt;. I didn&#39;t like some of the coding styles, but didn&#39;t really care all that much.
On top of the boilerplate, I made some additional changes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Added a light/dark mode toggle in the Navbar - heavily inspired by the Overreacted.io blog as well as a myriad of others with the same feature. This was easy enough to do&lt;/li&gt;
&lt;li&gt;Changed the color scheme&lt;/li&gt;
&lt;li&gt;Added my &lt;a href=&quot;https://keybase.io/joshghent&quot;&gt;keybase&lt;/a&gt; GPG key for verification of my identity&lt;/li&gt;
&lt;li&gt;Changed the style of headings for the blog posts, I found the default headings to be larger than writing on a charity cheque&lt;/li&gt;
&lt;li&gt;Added a /now page as inspired by Derek Sivers and documented the apps and tools I currently use. This needs some improving&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Moving my old posts&lt;/h3&gt;
&lt;p&gt;Setting up the site was relatively easy. The difficult and laborious part was going to be moving my old posts.
Since Gatsby runs on Markdown, I found a neat NPM app called &lt;a href=&quot;https://www.npmjs.com/package/medium-2-md&quot;&gt;medium-2-md&lt;/a&gt;.
I went through all 30 or so of my old posts, copied the URLs, and ran the command
&lt;code&gt;medium-2-md convertUrl https://blog.joshghent.com/sample-post -f -o index.md&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I could have potentially written a script to automate it. But...&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgs.xkcd.com/comics/automation.png&quot; alt=&quot;xkcd&quot; /&gt;&lt;/p&gt;
&lt;p&gt;... it wasn&#39;t really worth it.&lt;/p&gt;
&lt;p&gt;After moving them all to markdown, I then ran the site locally to compare the markdown posts to the Medium posts to make sure they matched and all the content worked.&lt;/p&gt;
&lt;p&gt;There were a couple of recurring problems I found:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Words often had the &lt;code&gt;*&lt;/code&gt; in the wrong place&lt;/li&gt;
&lt;li&gt;Paragraphs didn&#39;t have enough &lt;code&gt;&#92;n&lt;/code&gt; so rendered all as one paragraph&lt;/li&gt;
&lt;li&gt;Embed content such as Gists and Tweets didn&#39;t work - I had to find Gatsby plugins for this and port over the references to use their format&lt;/li&gt;
&lt;li&gt;Image captions did not work - I had to move these over to use &lt;code&gt;&amp;lt;em&amp;gt;&lt;/code&gt; tags&lt;/li&gt;
&lt;li&gt;Images had to be moved over manually - yup, saving each one and importing it correctly. There is now a &lt;code&gt;-i&lt;/code&gt; flag you can pass to &lt;code&gt;medium-2-md&lt;/code&gt; but this wasn&#39;t available when I did the original port (a long time before this site actually launched!)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Review&lt;/h2&gt;
&lt;p&gt;Overall, I&#39;m really happy with the result. There are some things I need to change but for now, it&#39;ll do. It was super quick and easy to get up and running with Gatsby.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>A Guide to Leaving Your Job</title>
    <link href="/leaving-work/"/>
    <updated>2019-08-16T12:31:03Z</updated>
    <id>/leaving-work/</id>
    <content type="html">&lt;p&gt;Recently, I handed my notice in to my previous job at CloudCall after receiving an new offer at Capp&amp;amp;Co. I won&#39;t go into &lt;em&gt;why&lt;/em&gt; I chose to leave, but handing my notice in did leave me with the challenge of how to uncouple myself as a Developer from the services that I managed. Jamie Tanna has suggested using blogs as a form of documentation, which is exactly what this is, everything you need to do before you leave your job - broken down by time. In my case I had just under 30 days to get everything ready.&lt;/p&gt;
&lt;h2&gt;30 Days to Go&lt;/h2&gt;
&lt;h3&gt;Take Stock&lt;/h3&gt;
&lt;p&gt;What products do you own? What does everyone always come to you for? Think about these things and begin to write documentation around all the gotchas, plus an FAQ for common issues. Playbooks are often a great resource for the next developer that maintains your systems. I&#39;d suggest writing a playbook on the core functionalities of a system. In my case, I looked after the SMS/IM backend systems, so I documented things such as messages not sending or messages not being sent to our sync service.&lt;/p&gt;
&lt;h3&gt;Holiday&lt;/h3&gt;
&lt;p&gt;Take a look at how much annual leave/holiday/vacation time you have accrued throughout the year. Confirm whether your employer will pay you for any remaining unused annual leave with your last paycheck. If they will not pay you, then best get booking it! Nonetheless, even if they will, now is a good time to take a break so you can prepare for your next role and get other errands done that you&#39;d been putting off.&lt;/p&gt;
&lt;h3&gt;Organise hand over&lt;/h3&gt;
&lt;p&gt;Speak with your manager about who will take over the services you manage and make sure to schedule a release of those services (if you do not do CI/CD). Additionally, schedule meetings with these new individuals so you can spend time discussing the systems going through the documentation. I would recommend getting the new developer to read through the documentation and trying to solve any problems that come up whilst you&#39;re still there. It &amp;quot;tests&amp;quot; the documentation and means you are still there as a backup if that test fails!&lt;/p&gt;
&lt;h3&gt;Write no code&lt;/h3&gt;
&lt;p&gt;Ok, maybe not &amp;quot;no code&amp;quot;, but spend the remaining time working on anything that will make the systems more resilient, as well as tests and &amp;quot;overkill&amp;quot; levels of documentation (better to have more than less - just make sure it&#39;s relevant and useful!). If things don&#39;t go wrong then it will make the new developer&#39;s life a lot easier and mean they won&#39;t need to suddenly dive in and fix something.&lt;/p&gt;
&lt;h2&gt;10 Days to Go&lt;/h2&gt;
&lt;h3&gt;Payslips&lt;/h3&gt;
&lt;p&gt;Download an archive of your payslips if you do not receive paper copies. These are useful for your records and may be needed for tax reasons in the future.&lt;/p&gt;
&lt;h3&gt;Final release&lt;/h3&gt;
&lt;p&gt;If you don&#39;t do continuous delivery then deploy all services you manage alongside the person(s) who are taking those systems over. This will enable them to discover any issues and also it gets all the code you&#39;ve written out in the wild before you go.&lt;/p&gt;
&lt;h3&gt;Uncouple Accounts&lt;/h3&gt;
&lt;p&gt;Over time, as much as you keep it separate, &amp;quot;work&amp;quot; logins and things of that nature seep into your personal devices. Now is the time to sign out of all those services as there will be ones you have forgotten!&lt;/p&gt;
&lt;p&gt;In my case I had a list of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Github&lt;/li&gt;
&lt;li&gt;Gitlab&lt;/li&gt;
&lt;li&gt;Npm&lt;/li&gt;
&lt;li&gt;Jenkins/CI&lt;/li&gt;
&lt;li&gt;AWS&lt;/li&gt;
&lt;li&gt;Removing Email from phone&lt;/li&gt;
&lt;li&gt;Removing iCloud from Macbook&lt;/li&gt;
&lt;li&gt;Deactivate any work-related IFTTT rules&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Logging out&lt;/h3&gt;
&lt;p&gt;View all the passwords stored in Chrome and check if there is anything you need, then begin using a &amp;quot;Guest&amp;quot; login to Chrome. Because this won&#39;t have any of the passwords, you&#39;ll find accounts you need to recover or sign out of on your main Chrome login.&lt;/p&gt;
&lt;h2&gt;Last Day&lt;/h2&gt;
&lt;h3&gt;Take home personal belongings&lt;/h3&gt;
&lt;p&gt;Remember your mug in the cupboard, the cables that are yours etc.&lt;/p&gt;
&lt;h3&gt;Security&lt;/h3&gt;
&lt;p&gt;Remove any SSH/GPG keys from the laptop itself as well as removing them from your Github/GitLab accounts. Additionally, if you use Keybase and your GPG key contains your work email address then remove it and re-push the key to Keybase.&lt;/p&gt;
&lt;h3&gt;Log out&lt;/h3&gt;
&lt;p&gt;Along the lines of the previous point, make sure to nuke your browsers cookies/history so you are not signed into any services such as GitHub or Stackoverflow which may have been personal logins but are inherently corporate in nature.&lt;/p&gt;
&lt;h3&gt;Remove apps or Containers&lt;/h3&gt;
&lt;p&gt;It&#39;s worth removing any apps that have your logins, such as Spotify, as well as any running docker containers that may contain sensitive data. For example, I ran my LastFm2Slack bot on my work MacBook, which contained an API key for my LastFM account.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;I hope this article can help you in the future if you&#39;re moving on from a job. It can be a stressful time and the last thing you want to worry about is becoming a blocker to the rest of your team. Planning ahead, as I have outlined here, allows you to make a clean exit and not have future developers cursing your name for lack of documentation. You want to leave with positive opinions, on both sides of the table - tech is a small world.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Lessons from Battling with Elasticsearch</title>
    <link href="/battling-with-elasticsearch/"/>
    <updated>2019-07-05T12:31:03Z</updated>
    <id>/battling-with-elasticsearch/</id>
    <content type="html">&lt;p&gt;This is a story of changing requirements being impeded by architecture and software. It’s an age-old adage but I thought it was worth telling this story as a lesson in foresight and blame.&lt;/p&gt;
&lt;p&gt;The bug goes as follows: CloudCall developed an instant messaging and SMS application that plugged into their existing application, which previously only handled phone calls. When planning this new messaging system, since we had the requirement to do full-text searches on the message contents, we decided to use Elasticsearch as a data store. No-one had used it before, but we knew the problems it solved and were happy to start development with it.&lt;/p&gt;
&lt;p&gt;Later down the line, we began to get bug reports of messages that failed to sync into CRM’s (a primary USP of the product), as well as channels that were no longer visible after refreshing our application. It was a whole host of different bugs that ultimately lay at the doorstep of one simple truth — Elasticsearch is not strongly consistent. You can do your damnedest to try though, which is exactly what we did.&lt;/p&gt;
&lt;p&gt;The bugs arose because a user would go to create a channel or send a new message, and this would create the record in Elasticsearch. If the user refreshed immediately, when our API went to look up the channels that belonged to that user, it returned an out-of-date list — since the new channel was not yet indexed on the shard. Additionally, when syncing messages into CRM’s, we first queried for that message across the shards in Elasticsearch (shards of messages are segmented by calendar month). Again, the document was not available in the shard at the time of syncing — as this process is kicked off right after the Elasticsearch insert.&lt;/p&gt;
&lt;p&gt;The solution was threefold. First, where possible we queried for the document on the shard we expected it to be on, and then had a fallback mechanism to do a wildcard search.&lt;/p&gt;
&lt;p&gt;In the instance of searching for a specific message, we first query the latest calendar month’s shard, since we assume that messages will only be queried for within a calendar month. If that returns no data, we do a wildcard search across all message shards, where the message will always be available — since we do not insert into previous months’ message shards, the document will already have been indexed if it is present.&lt;/p&gt;
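&lt;p&gt;In code, that lookup strategy is roughly the following (the index naming scheme and search function are assumptions for illustration, not our actual implementation):&lt;/p&gt;

```typescript
// Sketch of the "query the expected shard first, then fall back to a
// wildcard search" strategy. Index names are illustrative.
type SearchFn = (index: string, messageId: string) => string | undefined;

// Build the index name for a given month, e.g. "messages-2019-07".
function monthIndex(date: Date): string {
  const month = String(date.getMonth() + 1).padStart(2, "0");
  return `messages-${date.getFullYear()}-${month}`;
}

function findMessage(search: SearchFn, messageId: string, now: Date) {
  // First, query only the current month's shard...
  const hit = search(monthIndex(now), messageId);
  if (hit !== undefined) return hit;
  // ...and only if that misses, do the more expensive wildcard
  // search across all message shards.
  return search("messages-*", messageId);
}
```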
&lt;p&gt;Secondly, we tuned the refresh interval for indexing documents from 30 seconds down to 1 second. This is a simple configuration option within Elasticsearch itself, and it means that a shard’s index is refreshed every second, making new documents available on it almost immediately. This is a lot more intensive on the boxes we host Elasticsearch on, but it’s worth it for the benefits it gives us.&lt;/p&gt;
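&lt;p&gt;The setting itself is a one-liner against the index settings API (the index name here is illustrative):&lt;/p&gt;

```
PUT /messages-2019-07/_settings
{
  "index": {
    "refresh_interval": "1s"
  }
}
```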
&lt;p&gt;Lastly, there is an option you can pass called “&lt;a href=&quot;https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-refresh.html&quot;&gt;refresh&lt;/a&gt;” when doing an insert into Elasticsearch. This tells ES to immediately refresh the index and make the new document available on the shard once it has been created. This again prevents issues around going to get documents that are not yet indexed.&lt;/p&gt;
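&lt;p&gt;For example, passing “refresh=true” on an index request makes the document searchable as soon as the call returns (the index, id and field names here are illustrative):&lt;/p&gt;

```
PUT /messages-2019-07/_doc/1?refresh=true
{
  "channelId": "abc",
  "body": "hello"
}
```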
&lt;p&gt;At the outset, I said this was a story about foresight and blame. Picking up the backend services for our messaging application after they had been architected could have led me to curse my predecessors for not thinking about future functionality. And for a time, it did. However, I quickly came to realize that this is a repeated pattern in software development. You can never foresee the future by gazing into a crystal ball made up of 1’s and 0’s.&lt;/p&gt;
&lt;p&gt;This thought was inspired by reading &lt;a href=&quot;http://boringtechnology.club/&quot;&gt;http://boringtechnology.club/&lt;/a&gt; and the story of how Etsy built the activity feed and their battles with Memcache’s ephemeral nature. But you know what, after they fixed it, it worked later down the line even after scaling.&lt;/p&gt;
&lt;p&gt;I felt the same way about the work I have done on our Messaging backend. I hope that, as Etsy did, they can leave those API’s and consumers to whir away and hum quietly in the background. It is a testament to how good your code is if, at 20x the scale, it is still humming along nicely, and that is a pseudo-metric I aim for with everything I write.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://mk0osnewswb2dmu4h0a.kinstacdn.com/images/comics/wtfm.jpg&quot; /&gt;
	&lt;em&gt;Credit: &lt;a href=&quot;https://www.osnews.com/story/19266/wtfsm/&quot;&gt;https://www.osnews.com/story/19266/wtfsm/&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;It has taught me a lesson not to march in and say “why didn’t they think of this when designing the system??? Time to tear it out and start again”. Instead, take a look at why the design was chosen in the first place, and to work with the design, rather than against it.&lt;/p&gt;
&lt;p&gt;Legacy code often gets equated with bad code. But this is seldom truly the case. Legacy code contains bug fixes, resiliency, hundreds of hours of review and many different sets of eyes tweaking and refining. Every line of code written has a reason for being present — even the bad stuff.&lt;/p&gt;
&lt;p&gt;Nonetheless, on the contrary, it has taught me vital lessons about thinking at scale and really diving into a technology before using it. Perhaps simply googling “what is [TECHNOLOGY] bad at” or something similar. This can help you discover the pain points that others have run into. In retrospect, I would have used MySQL as a data store for our messaging service, since we have a vast array of in-house experience with it. We know that MySQL is not good for full-text search, but we could have used Elasticsearch or something similar at a later point to provide just the search functionality, rather than storing all the data in it. Hindsight is 20:20 though, and who’s to say we would not have had issues with MySQL?&lt;/p&gt;
&lt;p&gt;There is no real definitive conclusion to this article but here are the main points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Don’t blame your predecessors for architecture designs or code that was written at the time, have respect for the code and leave it a little bit better than you found it&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Go with technologies that people in-house understand and have experience with, you will run into issues you cannot anticipate if not&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hacking around can feel like, well a hack, but there is always a hack in every large bit of software. So don’t be afraid to work with the tools you’ve got to make them work, it’s a lot easier than ripping everything out&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Structure your code in a way that makes it easy to rip out one data source and use another&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Remember that if a technology has an advantage, there is usually a disadvantage. Especially databases, where there are hardware limitations at play. For example, Elasticsearch has great search functionality, but consistency can be a problem.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
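&lt;p&gt;A minimal sketch of that last structural point, with illustrative names throughout: hide the store behind an interface so that swapping Elasticsearch for MySQL (or anything else) only touches one class.&lt;/p&gt;

```typescript
// Sketch: hide the data store behind an interface so the backing
// technology can be swapped without touching callers. Names are
// illustrative, not from the real codebase.
interface MessageStore {
  save(id: string, body: string): void;
  find(id: string): string | undefined;
}

// One implementation; an ElasticsearchMessageStore or a
// MySqlMessageStore would satisfy the same contract.
class InMemoryMessageStore implements MessageStore {
  private data: { [id: string]: string } = {};
  save(id: string, body: string): void {
    this.data[id] = body;
  }
  find(id: string): string | undefined {
    return this.data[id];
  }
}

// Callers depend only on the interface, never the concrete store.
function handleIncoming(store: MessageStore, id: string, body: string): void {
  store.save(id, body);
}
```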
</content>
  </entry>
  
  <entry>
    <title>Using Grafana for Monitoring your NodeJS Apps</title>
    <link href="/grafana-node/"/>
    <updated>2019-03-06T22:12:03Z</updated>
    <id>/grafana-node/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/grfana.png&quot; /&gt;
	&lt;em&gt;This guide assumes you already have a basic NodeJS API and a Graphite instance configured.&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Graphs are a great way to monitor your services, and as an added bonus — they look cool.&lt;/p&gt;
&lt;p&gt;I always looked at companies with giant flat screen monitors with pages of various graphs and thought that was all way over my head. Turns out, it’s surprisingly easy.&lt;/p&gt;
&lt;p&gt;At CloudCall, we are heavy users of Grafana, backed by Graphite and fed by StatsD. We use it to monitor throughput on services, as well as general tracking of whether a certain service is being used. Here is a guide on getting your shiny API monitored using Graphite.&lt;/p&gt;
&lt;p&gt;First thing you need is a Grafana instance. It’s easy to get setup with Grafana — the &lt;a href=&quot;http://docs.grafana.org/installation/&quot;&gt;installation docs&lt;/a&gt; are all you will need. You could spin this up locally or run it on a VPS or similar.&lt;/p&gt;
&lt;p&gt;After you have picked the route you want to start with, you can run&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;npm i -S node-statsd-client&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the statsd client we will be using. StatsD runs on a UDP port and this library simply pushes a string of data via UDP. There are a number of other statsd libraries out there, but I have found this one to be the most reliable — your mileage may vary.&lt;/p&gt;
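&lt;p&gt;Under the hood there is no magic: a StatsD counter is just a plain-text datagram like “my-api.GetUsers:1|c” pushed over UDP. A stripped-down sketch of what such a client does (metric names are illustrative):&lt;/p&gt;

```typescript
import * as dgram from "dgram";

// Format a StatsD counter increment, e.g. "my-api.GetUsers:1|c".
function counterLine(name: string, value: number = 1): string {
  return `${name}:${value}|c`;
}

// Fire-and-forget the datagram; UDP means no connection handshake
// and no error even if nothing is listening on the other end.
function sendCounter(host: string, port: number, name: string): void {
  const socket = dgram.createSocket("udp4");
  const payload = Buffer.from(counterLine(name));
  socket.send(payload, 0, payload.length, port, host, () => socket.close());
}
```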
&lt;p&gt;Next up is to create some kind of wrapper around the statsD client as I found it difficult to use (and also not typescript ready).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { Client } from &amp;quot;node-statsd-client&amp;quot;;
import { IGraphiteController } from &amp;quot;../interfaces&amp;quot;;

const PREFIX = &amp;quot;MY-API-NAME&amp;quot;;

export default class GraphiteController implements IGraphiteController {
  private _client: any;
  // the graphite port will always be this for every environment
  private _port: number = 8125;
  private static _instance: GraphiteController;

  // Check if we are running tests; if so, deactivate the graphite logging
  private testing: boolean = process.env.NODE_ENV === &amp;quot;testing&amp;quot;;

  constructor(config: any) {
    if (!this.testing) {
      try {
        this._client = new Client(config.statsd, this._port);
      } catch (err) {
        throw new Error(`There was an error connecting to Graphite: ${err}`);
      }
    }
  }

  public static getInstance(config: any): GraphiteController {
    if (!this._instance) {
      this._instance = new GraphiteController(config);
    }
    return this._instance;
  }

  public write(activityType: string, error: boolean = false): void {
    if (!this.testing) {
      this._client.increment(
        `${PREFIX}.${activityType}${error ? &amp;quot;.error&amp;quot; : &amp;quot;&amp;quot;}`
      );
    }
  }

  /**
   * Writes a graphite timing
   * This is used to measure the time a function or piece of logic takes
   * @param  {string} activityType
   * @param  {Date} startDate - a new Date() object. Create this as a variable at the top of the function and then pass it in
   * @returns void
   */
  public writeTiming(activityType: string, startDate: Date): void {
    if (!this.testing) {
      this._client.timing(
        `${PREFIX}.${activityType}`,
        new Date().getTime() - startDate.getTime()
      );
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Most importantly, we have a static &lt;code&gt;getInstance()&lt;/code&gt; method. But why? We found that long-lived services (anything not serverless) would create a massive number of UDP connections over time and eventually make it so the service could not create any new connections. We use this getInstance method to make sure we use a single connection throughout the app.&lt;/p&gt;
&lt;p&gt;So how do we implement it?&lt;/p&gt;
&lt;p&gt;First we need to import it into our route file&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;import GraphiteController from &amp;quot;./graphite&amp;quot;;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Next, at the top of the file let’s get an instance of the graphite controller that we can reuse in this file. We create it with the configuration that contains the url for the statsd instance.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;const graphiteController = GraphiteController.getInstance(config);&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Ok, now we’ve got an instance of the controller we can now record some data.&lt;/p&gt;
&lt;p&gt;The first piece of data to record is a simple counter to see how many times a certain route has been used. This is useful because it might be that you need to pull out certain routes into new services so that they can be independently scaled. It may also be that a route was written but never used, so this can be factored out. Let’s add this counter — make sure to add it as the last thing in your router, right before you return the response. That way, if there are any errors along the way then we will not get a count that didn’t succeed.&lt;/p&gt;
&lt;p&gt;Here is our router file now&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { Router } from &amp;quot;express&amp;quot;;
import { Config } from &amp;quot;./configuration&amp;quot;;
// The controller and error types used below (paths illustrative)
import { UserController } from &amp;quot;./controllers&amp;quot;;
import { InternalServerError } from &amp;quot;./errors&amp;quot;;

// Import or require the graphite controller and activity labels
import { GraphiteController, GraphiteLabel } from &amp;quot;../graphite&amp;quot;;

const router = Router() as Router;

// Load your config
const config = Config.getConfig();

// Import the graphite controller
const graphiteController = GraphiteController.getInstance(config);

router.get(&amp;quot;/&amp;quot;, async (req, res, next) =&amp;gt; {

   try {
       const record = await UserController.get(req.query, req.jwt.accountId);

       if (record) {
           // Just before the response is sent, we log the route being called
           graphiteController.write(&amp;quot;GetUsers&amp;quot;);
           res.json({ success: true, data: record });
       } else {
           res.status(404).send();
       }
   } catch (err) {
       next(new InternalServerError(&amp;quot;There was an error getting data from the database&amp;quot;));
   }
});

export default router;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok, now we’ve setup a log counting the number of successful calls the route has, we should also track any unsuccessful calls to the router. This way, if you see a sudden spike in errors, you can see exactly where that error is occurring.&lt;/p&gt;
&lt;p&gt;Add this to the catch block in the router. The second argument defaults to false but when true, signals that this call was an error. In the background, this will append .error to the UDP message, meaning you can filter those calls separately.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;graphiteController.write(&amp;quot;GetUsers&amp;quot;, true);&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Next we will add time tracking to the route. This way we can see how long a certain route took to execute. This is a great piece of information to have as you can see if you perhaps need to upgrade the machine your API is running on, or where to focus optimization efforts.&lt;/p&gt;
&lt;p&gt;To do this you need to record the start time that the route was called. You can do this simply by&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;const startTime = new Date();&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Next, underneath the .write method you added earlier you can add a new call to the write timings method.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;graphiteController.writeTiming(&amp;quot;GetUsers&amp;quot;, startTime);&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here we have passed “GetUsers” as the activity or label for graphite and the start time of the route call. In the background, the wrapper calculates the time between the start time and the current time.&lt;/p&gt;
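&lt;p&gt;The count-plus-timing pattern above can be factored into a small helper so each route does not repeat it. This wrapper is my own illustration rather than part of the original codebase:&lt;/p&gt;

```typescript
// Illustrative helper: run a handler, then report a success or
// error count plus a timing, mirroring the write/writeTiming calls.
type Metrics = {
  write(label: string, error?: boolean): void;
  writeTiming(label: string, start: Date): void;
};

async function withMetrics(metrics: Metrics, label: string, handler: () => any) {
  const start = new Date();
  try {
    const result = await handler();
    metrics.write(label); // success counter
    return result;
  } catch (err) {
    metrics.write(label, true); // error counter
    throw err;
  } finally {
    metrics.writeTiming(label, start); // timing in both cases
  }
}
```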
&lt;p&gt;And hey presto. You will now be pushing the route timings to StatsD!&lt;/p&gt;
&lt;p&gt;At this point, try calling your API endpoints a few times so we have some data to work with. Next, we can move on to creating the graphs.&lt;/p&gt;
&lt;p&gt;Let’s first create the “count” graph that tracks how many times and when the route was called.&lt;/p&gt;
&lt;p&gt;Create a new dashboard and you should find yourself on a screen like this. I would recommend creating a new “dashboard” for each API/Service&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/3200/0*GUwQEkXskLsaZXhZ&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Move your mouse over to the left and click “Add Panel” in the little menu that pops out. Then click “Graph”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/3200/0*b291v5I_59xKDkj6&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now, click the graph and click “Edit”. Now we can add a data source for our graph.&lt;/p&gt;
&lt;p&gt;Set your data source as your Graphite DB so you can now perform queries for your data. You will need to build up a query like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*OtAVsLVruQ7s6xqi&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Let’s break this down&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;*&lt;/code&gt; — This is a wildcard query as we do not need to narrow it down just yet&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“MYPREFIX” — this our prefix that is configured in the GraphiteController at the top of the file&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“GetUsers” — the graphite activity or label, you should use a descriptive name depending on which route you are working on&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Count” — this is the counter, there should also be one for “timer”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“consolidateBy(sum)” — This controls how Graphite consolidates data points when there are more points than can be drawn, summing them together rather than averaging them&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“alias(Get Users Count)” — the alias label for the query, this applies when you show the averages, min and max values.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You should now have a graph like this&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/3200/0*NTK1NC0F3jwXLTNh&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For our “error” graph, simply repeat the steps and change the query to add an extra “error” metric — so the full query will be built from the same segments plus “error” (* * MYPREFIX GetUsers count error).&lt;/p&gt;
&lt;p&gt;A similar process applies for the timing graph: simply change the query from “count” to “rate” and this will show the timings for the route.&lt;/p&gt;
&lt;p&gt;And there we go! Done and dusted! Now you’ve got it working for a single route, you can follow the steps again for all your other routes!&lt;/p&gt;
&lt;p&gt;Did you find this guide useful? Would you like to see similar content in the future? Let me know on twitter &lt;a href=&quot;http://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt; or in the comments below.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why does NTP Exist?</title>
    <link href="/ntp/"/>
    <updated>2019-03-05T22:12:03Z</updated>
    <id>/ntp/</id>
    <content type="html">&lt;p&gt;NTP is one of the most essential and complex systems that never gets spoken about. But why? And what even are they? And why do we need them? If you’re like me, you might have known about NTP servers and known they were important to keep clocks in sync. But don’t computers have clocks already? It was when I visited the Greenwich observatory recently that I realized how complex time was and with a fascination in both computer systems and horology, I decided to dive into the backbone of our lives, time.&lt;/p&gt;
&lt;h2&gt;What is NTP?&lt;/h2&gt;
&lt;p&gt;At a high level, NTP or the Network Time Protocol is a network-based protocol and standard by which distributed computer systems can synchronize their time with UTC. The protocol also defines a means by which these systems can passively listen to updates for upcoming leap second adjustments (more on that later).&lt;/p&gt;
&lt;h2&gt;How do they work?&lt;/h2&gt;
&lt;p&gt;NTP defines different “stratum” levels or tiers, from GPS or atomic clocks (Stratum 0) down to computers synchronized to within a microsecond of them (Stratum 1), and so on until you reach Stratum 16, which NTP defines as unsynchronized.&lt;/p&gt;
&lt;p&gt;The stratum number is used to measure the distance between a given device and the “ultimate” time source Stratum 0. This number means NTP can prevent cyclical dependencies too.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*UMh6Wu8Mg-55mHR3NaOzCg.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;As you go down the chain, each “tier” is configured to synchronize with the tier above it. A given device in a tier may sanity-check other computers in the same stratum (aside from Stratum 0). Furthermore, a single computer may query multiple computers from the tier above to gain even more accuracy.&lt;/p&gt;
&lt;p&gt;You may think your service needs to be ultra-accurate and that you should therefore synchronize against Stratum 0. Alas, unless you are working at Goldman Sachs, this precision is probably unnecessary. Stratum 1 is used for all primary time servers, e.g., time.google.com.&lt;/p&gt;
&lt;p&gt;Now, how do we actually calculate the time? The client queries multiple computers for the time, then calculates the offset between its own clock and each time it received, taking into account the round-trip delay. And hey presto, you’ve got the time! A computer usually performs this sync around once every 10 minutes by dispatching a UDP packet to the desired synchronization servers.&lt;/p&gt;
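&lt;p&gt;The offset calculation can be sketched with the standard NTP formulas. Assuming four timestamps (client send, server receive, server send and client receive), the round-trip delay and clock offset fall out of simple arithmetic:&lt;/p&gt;

```javascript
// Standard NTP offset/delay arithmetic (as defined in RFC 5905), in plain JS.
// t0: client transmit, t1: server receive, t2: server transmit,
// t3: client receive. All times in milliseconds.
function ntpOffset(t0, t1, t2, t3) {
  // Round-trip delay, excluding the time the server spent processing.
  const delay = (t3 - t0) - (t2 - t1);
  // Clock offset, assuming the network delay is symmetric in each direction.
  const offset = ((t1 - t0) + (t2 - t3)) / 2;
  return { delay, offset };
}

// A client whose clock is 500ms slow, over a 200ms round trip:
// ntpOffset(1000, 1600, 1610, 1210) gives { delay: 200, offset: 500 }
```

&lt;p&gt;A real client repeats this against several servers, combines the samples and discards outliers before adjusting its clock.&lt;/p&gt;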
&lt;h2&gt;Why do they exist?&lt;/h2&gt;
&lt;p&gt;This was the big question I was attempting to answer with this research. Why do we need them? Can’t computers keep their own time?&lt;/p&gt;
&lt;p&gt;Keeping time is notoriously difficult on a device that is not always running, like a computer. Furthermore, computers get hot, experience high loads and suffer other factors that can stop them keeping the correct time even when they are on. Additionally, time is relative, literally. Satellites orbit the globe and therefore experience time at a slightly different rate than us Earth dwellers, so their clocks need constant correction. There needs to be something keeping everything in agreement.&lt;/p&gt;
&lt;p&gt;On top of all this, the Earth’s orbit is a bit awkward. An Earth “year” is not exactly 365 days; it’s 365 days, 5 hours, 48 minutes and 45 seconds. Since we can’t just round off those extra hours, we have to put them somewhere. The Gregorian calendar does this by introducing a 24-hour day every 4 years at the end of February. OK, simple enough.&lt;/p&gt;
&lt;p&gt;But now you learn that not only does an orbit of the sun not take exactly 365 days, but the Earth’s spin slows and speeds up whenever it damn well feels like it. There are a lot of factors that can contribute to the Earth speeding up or slowing down, but the main one is tidal friction. Other events, such as changes in the convection currents within the mantle and earthquakes, have also slowed the Earth down.&lt;/p&gt;
&lt;p&gt;Either way, if we just left clocks to it, they would be off by a noticeable margin even after a single year. NTP helps solve this problem by being able to broadcast changes in time and have everything synchronize. Google’s and Amazon’s NTP servers both use a leap-smear strategy, where the leap second is broken into small chunks and distributed between noon and the following noon UTC. Without this strategy, a double second or skipped second is introduced, depending on whether the time is being increased or decreased.&lt;/p&gt;
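&lt;p&gt;A linear smear is simple to picture: spread the extra second evenly across the smear window so no client ever sees a jump. A rough sketch, with the window length matching the noon-to-noon approach described above:&lt;/p&gt;

```javascript
// Sketch of a linear leap smear: one extra second is spread evenly
// across a 24-hour window instead of being inserted all at once.
const SMEAR_WINDOW_MS = 24 * 60 * 60 * 1000;

// How many milliseconds of smear to apply `elapsedMs` into the window.
function smearAdjustmentMs(elapsedMs) {
  const fraction = Math.min(Math.max(elapsedMs / SMEAR_WINDOW_MS, 0), 1);
  return fraction * 1000; // the full extra second once the window completes
}

// Halfway through the window, clocks have absorbed half the leap second:
// smearAdjustmentMs(12 * 60 * 60 * 1000) gives 500
```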
&lt;p&gt;But why is keeping computer clocks up to date so essential? To name a few reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stock market&lt;/strong&gt; — trades need to be nanosecond accurate to guarantee that a certain seller got a certain price from a specific buyer. Exploiting time is how high-frequency trading became so profitable (if you haven’t read about it I highly recommend Michael Lewis’ book Flash Boys).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Filesystems&lt;/strong&gt; — accuracy about when a file was written or read aids in knowing which version is most up to date and allows the filesystem to set recovery points in the cache. This means when you hit CTRL+Z to undo, it always goes back to the last thing! It also enables things like table and row locking in databases by knowing if a request has completed or not.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Logs&lt;/strong&gt; — you need them in order, and computers run very fast. Accurate timestamps on these logs are important for tracing through the exact order of operations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Update times&lt;/strong&gt; — how many sites have you seen with something along the lines of “Last Updated 2 hours ago”? Well, if it weren’t for NTP, you wouldn’t have that, because if you were in the US and the update came from Japan it could say “Last Updated 5 hours from now”.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are of course many other use cases for NTP and time synchronization, as it underpins the clock of every computer out there. Java brags about running on over 3 billion devices. Well, NTP and all its implementations run on every single computer produced since 1985. So take that, Oracle.&lt;/p&gt;
&lt;p&gt;In any case, NTP sparked my curiosity in low-level systems and protocols and helped me appreciate the foundations we build upon today. Understanding them may not improve your code in an obvious way, but knowing the systems your code runs on is vital for debugging and improving performance.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Starting with Why as a Software Developer</title>
    <link href="/why-as-a-dev/"/>
    <updated>2019-02-26T22:12:03Z</updated>
    <id>/why-as-a-dev/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/jackdaw.png&quot; /&gt;
	&lt;em&gt;Developers are like jackdaws — “oh shiny!”&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;As you progress as a software developer, you should begin to build an innate sense of when something should be done a certain way, perhaps to future proof it in some way or make it more resilient. It’s an odd feeling, but you just get a sense for something.&lt;/p&gt;
&lt;p&gt;I found this happened more frequently as I advanced and learned. But I never stopped to consider &lt;em&gt;why&lt;/em&gt; I came up with the solutions I did, or the reason &lt;em&gt;why&lt;/em&gt; I made a certain decision over another.&lt;/p&gt;
&lt;p&gt;I’m a big fan of first-principles thinking, and that concept really brought it home to me that to truly progress in my career, I need to start with the why: understanding the reasons for my decisions and solutions, and being able to communicate them in a concise manner.&lt;/p&gt;
&lt;p&gt;Let’s take an example: we had a few resiliency issues with our SMS system. At the outset, coming up with a solution to this problem appeared to be a black-and-white exercise. But I decided to dig a little deeper: why did we want to address these resiliency issues? Was it simply so we would have developer “cred” because our systems are robust enough to handle failures? Partially, but not entirely. The reason we wanted to improve the resiliency of our systems was so that the people using our product would get faster feedback about the state of their message. If a message failed somewhere along the pipe, we needed to make sure we shouted back down that pipe to the customer so they knew we failed. That is a much better user experience than a message escaping to the ether. Furthermore, we wanted to allow users to retry sending messages, so that was another consideration in our solution.&lt;/p&gt;
&lt;p&gt;Perhaps you might be a product-customer-centric genius and think this does not apply to you, but I bet there are solutions you have come up with where time has not been taken to weigh up other options and meaningfully justify the reasoning behind them.&lt;/p&gt;
&lt;p&gt;A common example arises when choosing technologies. Developers are like jackdaws; they love new shiny things. Sometimes the new shiny things are amazing: serverless technologies, for example, would be like a gold ring. But other times, you might encounter things that are like figurative bits of foil. It looks great and has its place, but you already have some nice shiny foil, so is this better? Maybe not. This is not to say that new technologies have no merit, but they have to be seriously weighed against the current approach and other options out there. Instead of saying “X will revolutionize how we work, it has Y and Z feature”, go out and investigate what advantages it has over similar tech, what the learning curve is like, and so on. Perhaps even create a PoC if you feel confident. All of these things will help you discover the why. Why do you want to use this technology? Why do you need to change technology in the first place? And so on…&lt;/p&gt;
&lt;p&gt;As an example of this process in action, I considered porting a serverless API written in Node to Golang. I wanted to use Golang because it seemed cool, Docker was written in it, and I had read it was incredibly fast — that was it. However, when I investigated, I in fact found that Python, Go and Node have similar cold-start times, so there would be little advantage in porting. In terms of the processing we were doing, again, Node was just as capable as Go for that particular task, and we had the advantage of all our libraries already being written in Node. I decided against the port and instead poured time into optimizing the existing Node API. This was a far better use of time than porting to some new language I barely knew.&lt;/p&gt;
&lt;p&gt;When you’re looking at something new, or dreaming something big — start with the why. You will grow far more because of it.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Architecting the Next Generation of Communication</title>
    <link href="/next-gen-communication/"/>
    <updated>2019-01-23T22:12:03Z</updated>
    <id>/next-gen-communication/</id>
    <content type="html">&lt;p&gt;With the shift to mobile and the statistics of the “younger” generation (hi there) not using phone calls as a means of communication, there is a constant push towards reaching people in a platform agnostic way — via email, LinkedIn, twitter DM, you name it. The challenge arises when you need to create a platform that is scalable demands and flexible enough to hack in any other new communication streams later down the line — maybe we suddenly want support for MySpace messaging.&lt;/p&gt;
&lt;p&gt;The architecture I discuss below comes out of experience with this problem first hand, and the solutions we came up with — all to be delivered for a deadline.&lt;/p&gt;
&lt;h2&gt;Way of Websockets!&lt;/h2&gt;
&lt;p&gt;Since this needed to be real time in the case of IM, or near-real-time in the case of SMS, websockets were the best way to go, being the de facto standard for real-time operations on the web. For this, PubNub is a great choice, since it already has a lot of the functionality baked in, such as different channels and mechanisms to subscribe, send and receive on those channels.&lt;/p&gt;
&lt;p&gt;PubNub also has a mechanism called “PubNub Functions”, whereby any new websocket message on a channel matching a certain pattern is handled by a function written in plain ol’ javascript. This means you can fire SMS messages off to other systems that handle the actual sending, and have another route that sends WhatsApp messages to Twilio’s API, for example. It provides immense flexibility, especially as you expand to different communication methods and channel types.&lt;/p&gt;
&lt;p&gt;Although PubNub has its own data store in the background, you can only query it through their API, making it difficult to just dive into the DB and find, say, all channels containing accountId 123. Additionally, you can only bring back 99 records at a time with PubNub, which makes producing accurate reporting a challenge. The solution was to introduce a second data source. This potentially opens up the problem of having many sources of truth, but that can be avoided by having an API in front of a non-relational database (I would recommend ElasticSearch) which all read operations go through.&lt;/p&gt;
&lt;h2&gt;Typescript Time&lt;/h2&gt;
&lt;p&gt;With a project spanning many different APIs and services, Typescript proved invaluable because it allowed us to reuse a lot of code whilst increasing developer productivity and reducing bugs. Sounds too good to be true, right? Well, there were still bugs, sure, and productivity only took an upward swing after all the developers got comfortable with it, but overall it was a fantastic move. One of the first things you should do is create a common “types” library that you can share across all of the services and systems that need it. Into this types library go all the interfaces and enums used throughout the system. You can store everything there, from error codes to channel types to an interface for how a message is structured. You can then include this library in all your services to ensure consistency.&lt;/p&gt;
&lt;h2&gt;Different channels&lt;/h2&gt;
&lt;p&gt;To differentiate the communication types you have (group instant message, direct message, SMS message, mass SMS message, carrier pigeon, etc.) you can build the type into the channel name. Again, PubNub (who I promise aren’t sponsoring this post) gives great flexibility by allowing channel names to be whatever you want them to be. I would recommend they are built up from the platform, channel type and then a unique identifier, e.g., &lt;code&gt;production.sms.123456&lt;/code&gt;. In your PubNub function, you can then check the channel type within the channel name, using a regular expression, and handle the message accordingly.&lt;/p&gt;
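&lt;p&gt;As a sketch of that routing check (the pattern and field names here are hypothetical, not part of PubNub’s API):&lt;/p&gt;

```javascript
// Hypothetical parser for channel names of the form
// platform.channelType.uniqueId, e.g. "production.sms.123456".
const CHANNEL_PATTERN = /^(\w+)\.(\w+)\.(\w+)$/;

function parseChannel(name) {
  const match = CHANNEL_PATTERN.exec(name);
  if (!match) return null; // not a channel we recognise
  const [, platform, type, id] = match;
  return { platform, type, id };
}

// parseChannel('production.sms.123456')
//   gives { platform: 'production', type: 'sms', id: '123456' }
```

&lt;p&gt;A PubNub function can then switch on the parsed type to decide whether the message goes to the SMS pipeline, to Twilio, and so on.&lt;/p&gt;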
&lt;p&gt;Channels should be created per group of participants, per channel type. For example, creating a new SMS to a contact creates a new channel; sending an SMS to the same contact again will not. But creating a group with Bob, June and Sally called “Sales Call”, and then another with the same people called “Another sales call”, would create two different channels. This is how many other chat applications are built, which, in accordance with &lt;a href=&quot;https://lawsofux.com/jakobs-law.html&quot;&gt;Jakob’s Law&lt;/a&gt;, is what you want to do.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2808/0*uy2HVNILokIO_fsG&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now that we have a basic instant message and SMS system, we had a new problem to solve: how do we get notifications to the user? Emitting a message of a different type on the existing channels sounds like an obvious solution, but it assumes the user is subscribed to the channel. Fortunately, you can solve this with a “notification” channel. Each account is assigned a notification channel, and every time a message is sent, it is also sent to each participant’s notification channel.&lt;/p&gt;
&lt;p&gt;For example, if Bob creates a new group chat with June and Sally, a message is sent on June’s and Sally’s notification channels informing our application, “hey, there is a new channel you need to subscribe to!”. This triggers a process in the app to subscribe to that channel in the background. When Bob then sends a message on that channel, another message goes out on both participants’ (June’s and Sally’s) notification channels. When this message is received by the application, you can pop a desktop or mobile notification depending on the platform.&lt;/p&gt;
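&lt;p&gt;The fan-out itself is a small piece of pure logic. A sketch, assuming a hypothetical &lt;code&gt;notifications.{accountId}&lt;/code&gt; naming scheme:&lt;/p&gt;

```javascript
// Hypothetical fan-out helper: every participant except the sender
// gets a copy of the event on their personal notification channel.
function notificationFanout(senderId, participantIds) {
  return participantIds
    .filter((id) => id !== senderId) // the sender already knows
    .map((id) => `notifications.${id}`);
}

// notificationFanout('bob', ['bob', 'june', 'sally'])
//   gives ['notifications.june', 'notifications.sally']
```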
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*DilygOA_B31jva_5&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Additionally, you can use this notification channel to send other kinds of messages, such as when a channel has been read, or when the user mutes, leaves or hides a channel. Utilizing a PubNub function again, these notifications can be captured and forwarded on to a CRUD API, which saves them in DynamoDB. This allows us to provide a consistent experience across every device the account uses.&lt;/p&gt;
&lt;p&gt;Some may be wondering why we don’t just call the API directly instead of going through PubNub. This is to cater for the case where a user has both our mobile and desktop applications open at the same time. Sending the message via the notification channel means that if you hide a channel on the desktop, it will immediately be hidden on the mobile app.&lt;/p&gt;
&lt;h2&gt;Authentication&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*b4LSzT0YAF4jZf6O&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Authentication can be a big hurdle when breaking up a monolithic architecture into microservices, and this is the situation my company found itself in. Prior to developing these SMS/IM systems, users were authenticated to our backend using a username, password and license key, and all requests to the API used these parameters. This was not an option when authenticating with PubNub: firstly, we did not want to give them access to our accounts database, and secondly, it’s not an option on their system. A token-based system was the only way. We considered a number of different options for token-based authentication but eventually settled on JWT because of its flexibility, ease of implementation and security. Combined with this, we found Kong, along with its JWT plugin, to be fantastic at handling all the traffic we threw at it.&lt;/p&gt;
&lt;p&gt;An enormous amount of work went into not only overhauling the API to accept JWT authentication but also changing all our apps to handle JWTs. Additionally, we required a refresh strategy for these tokens. For example, if a person remains logged in for a number of days, the JWT we have cached may now be expired, which means we need to refresh it. On any request from our application, we check how long the JWT has until it expires; if that is a day or less, we refresh the token first.&lt;/p&gt;
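&lt;p&gt;The expiry check is a one-liner against the JWT’s standard &lt;code&gt;exp&lt;/code&gt; claim (seconds since the epoch). A sketch of the rule described above:&lt;/p&gt;

```javascript
// Refresh the token when it expires within a day (or already has).
// `payload` is the decoded JWT body; `exp` is seconds since the epoch.
const ONE_DAY_SECONDS = 24 * 60 * 60;

function shouldRefresh(payload, nowSeconds) {
  return ONE_DAY_SECONDS >= payload.exp - nowSeconds;
}
```

&lt;p&gt;Run this check before each API call; if it returns true, hit the refresh endpoint first and swap in the new token.&lt;/p&gt;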
&lt;p&gt;We also leverage the JWT to store information we need for requests. For example, when a request comes into an API, we will most likely need the accountId; we can find this in the JWT without having to pass anything through in the body of the request.&lt;/p&gt;
&lt;p&gt;There is more to tell about the architecture of this deceptively simple system. If you have ever had to architect your own communication platform, how did you do it? I’d be interested to find out and build up a knowledge base.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Resiliency By Design</title>
    <link href="/resiliency-by-design/"/>
    <updated>2019-01-19T22:12:03Z</updated>
    <id>/resiliency-by-design/</id>
    <content type="html">&lt;p&gt;Resiliency by design in your products architecture is a challenging problem that is rarely tested. Building robust platforms are becoming increasingly important as large server providers such as AWS start to &lt;a href=&quot;http://nymag.com/intelligencer/2018/03/when-amazon-web-services-goes-down-so-does-a-lot-of-the-web.html&quot;&gt;show their cracks&lt;/a&gt; in addition to good old fashion human error (we had an engineer take down a server by knocking it with his ass). &lt;a href=&quot;https://github.com/Netflix/chaosmonkey&quot;&gt;Chaos monkey&lt;/a&gt; and other tools have sprung up to pursue down resiliency issues, but despite this, they can still persist. Here are a few things to look out for when designing a new system or analysing existing ones.&lt;/p&gt;
&lt;h2&gt;Backing Off&lt;/h2&gt;
&lt;p&gt;Your app tries to contact a critical external service, maybe it’s your database, or perhaps a 3rd-party API — whatever the case, it &lt;strong&gt;will&lt;/strong&gt; fail on you. A common way to handle this is by setting a timer to retry the call to the service. At &lt;a href=&quot;https://www.cloudcall.com/&quot;&gt;CloudCall&lt;/a&gt;, we have a stateless service to handle sending SMS messages; whenever it cannot save the message to the DB or send it to our SMS provider (or they return an error), we automatically requeue the message for a set time in the future. If we get the same failure the next time around, we requeue it again, this time for a bit longer, and so on, until we get a success.&lt;/p&gt;
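&lt;p&gt;The requeue delay can be computed with a classic exponential back-off. A minimal sketch (the base and cap values here are illustrative, not the ones we actually use):&lt;/p&gt;

```javascript
// Exponential back-off: double the delay on every failed attempt,
// capped so a long outage does not push retries out indefinitely.
function nextRetryDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// attempt 0 -> 1000ms, attempt 3 -> 8000ms, attempt 10 -> capped at 60000ms
```

&lt;p&gt;Adding a little random jitter to each delay also stops a fleet of failing workers all retrying at the same instant.&lt;/p&gt;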
&lt;p&gt;You may not think this is possible in the world of serverless — but it is! In AWS Lambda, if you throw an exception or use &lt;code&gt;context.fail()&lt;/code&gt; then the &lt;a href=&quot;http://nymag.com/intelligencer/2018/03/when-amazon-web-services-goes-down-so-does-a-lot-of-the-web.html&quot;&gt;lambda will retry&lt;/a&gt; up to 3 times before giving up. Although this setup does not give you a gradual back-off, you still get the beauty of the retry. However, if you set up the Lambda with &lt;a href=&quot;https://aws.amazon.com/sqs/&quot;&gt;SQS&lt;/a&gt;, you can also configure it to &lt;a href=&quot;https://www.rabbitmq.com/dlx.html&quot;&gt;dead-letter&lt;/a&gt; the message, which can be set to requeue messages after any delay you choose.&lt;/p&gt;
&lt;h2&gt;Reconnection Logic&lt;/h2&gt;
&lt;p&gt;If a service loses its connection to something it requires persistent access to, we need some logic to reconnect. We can reuse the same back-off principles we discussed at the outset: if we cannot connect, try again after a delay, then try again after a slightly longer one, and so on. Simple, right?&lt;/p&gt;
&lt;p&gt;But when your app boots fresh for the first time, it also needs logic to establish anything it needs in those services. For example, if you have a queue consumer service that maintains a connection to &lt;a href=&quot;https://www.rabbitmq.com/&quot;&gt;RabbitMQ&lt;/a&gt;, then when it boots, it needs logic to assert all the queues and exchanges it requires. Often, because a queue publisher service has been written previously, &lt;strong&gt;that&lt;/strong&gt; service contains all the assert logic. Then, when it comes to deploying the queue consumer service, you hit errors, because the publisher service has not been deployed first and therefore has not asserted the exchanges and queues the consumer needs. This creates deployment dependencies, which, trust me, you don’t want.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*7veIJLnOp4w_LV2m&quot; /&gt;
	&lt;em&gt;Example structure of an app with fail over&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;Infrastructure Failover&lt;/h2&gt;
&lt;p&gt;Of course, no amount of code written to cope with services being down will have any bearing if your whole server goes down. With the advent of &lt;a href=&quot;https://aws.amazon.com/&quot;&gt;AWS&lt;/a&gt;, &lt;a href=&quot;https://azure.microsoft.com/en-us/&quot;&gt;Azure&lt;/a&gt; and &lt;a href=&quot;https://cloud.google.com/&quot;&gt;GCP&lt;/a&gt;, many consider this a thing of the past (99.99% is basically 100%, right?). Despite this, these services &lt;em&gt;will&lt;/em&gt; go down. It is essential, then, to configure automatic failover, unless you enjoy getting woken up at 4am to redeploy an entire environment to another region.&lt;/p&gt;
&lt;p&gt;Maybe the entire server doesn’t go down, though; it could be that the app is just crashing, and you need to restart the container, or maybe even the entire server, to get it to start up again. In these cases, auto-heal mechanisms should be put in place. These mechanisms can restart the service or, in some cases, redeploy it elsewhere should it go down in the primary zone.&lt;/p&gt;
&lt;h2&gt;Be wary of distributed monoliths&lt;/h2&gt;
&lt;p&gt;The world of microservices is taking over. The potential it creates in terms of flexibility and reusability is incredible — hence why it is so widely used. Nonetheless, microservices come with their own trade-offs, namely in how they are structured.&lt;/p&gt;
&lt;p&gt;One of the main arguments you hear in favour of microservices is that you no longer have one monolith you are dependent on — like the Death Star for the Sith. But many teams, when designing their microservices architecture, just daisy-chain the services together so they are completely reliant on each other. To avoid this, make sure your microservices are exactly that: microscopic. Be wary of clusters of microservices that share a data store, or cases where a change to one service requires a redeployment of another. Most importantly of all, ensure that the services can scale independently of one another.&lt;/p&gt;
&lt;p&gt;Building resilient services can be a challenge, and it does take time. Even just configuring automatic availability-zone failover in AWS took a long time and careful consideration by some talented engineers. As with anything, there are quick wins and acceptable known faults in the system. If you don’t have time to configure auto-heal and failover, make sure you have a written process so anyone can do it manually. All of these measures aid in delivering an optimal system and a reliable user experience; most importantly, you can sleep soundly without getting called up, safe in the knowledge that all your servers are humming along nicely.&lt;/p&gt;
&lt;p&gt;Let me know any other tips you have for creating resilience in systems!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Run a Successful Tech Meetup — even if you’re forgetful</title>
    <link href="/successful-meetup/"/>
    <updated>2018-12-22T22:12:03Z</updated>
    <id>/successful-meetup/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/meetup.jpeg&quot; /&gt;
	&lt;em&gt;A picture taken from the November 2018 LeicesterJS meetup&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href=&quot;https://www.meetup.com/leicesterjs/&quot;&gt;LeicesterJS&lt;/a&gt; was born out of the rise of Javascript as the de facto programming language for a majority of developers. Additionally, we aimed to bring together a tech community in Leicester.&lt;/p&gt;
&lt;p&gt;I have been running LeicesterJS now for over 4 months and this is just the start. Before the first meetup, I was very nervous about getting everything together and although it went off without any dramatic hitches, I have learned a great deal and continue to do so with each and every event. Here is some of the advice I wish I had before planning my first meetup.&lt;/p&gt;
&lt;h2&gt;Start with the essentials&lt;/h2&gt;
&lt;p&gt;The barebones requirements for a meetup are a venue and catering. For LeicesterJS, I was able to partner with my current workplace to host the event and foot the bill for the food and drinks. In exchange, we give them a plug at the start of each event and they receive exposure to potential hires. Meetups are often arranged in this manner, as it makes organizing the event a lot easier: you, the organizer, are at the venue, ready to prepare anything that is required. If you are looking to arrange your own meetup, kill two birds with one stone and see if your employer wants to help.&lt;/p&gt;
&lt;p&gt;Next up are the speakers. For this, I turned to the developers in my team and asked around to see if anyone wanted to give a talk. You can also reach out on Twitter or to other developers you know in the area. If worst comes to worst, do the talk yourself!&lt;/p&gt;
&lt;p&gt;And with that, we had a meetup! Except for one key thing…&lt;/p&gt;
&lt;h2&gt;….People&lt;/h2&gt;
&lt;p&gt;You want your meetup to be a success, and part of that success can be measured by the attendance rate. Here is how I approached building a community of people from the ground up:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Get support from co-workers&lt;/strong&gt; — since they know you, it will be easier to break the ice than with a room full of strangers&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Link up with other meetups in the area via Twitter&lt;/strong&gt; — I was very fortunate to have the generous support of &lt;a href=&quot;https://phpem.uk/&quot;&gt;PHP East Midlands&lt;/a&gt; who cancelled their own meetup to join in with the first LeicesterJS (thanks again guys!). It was great to have them along and again, a few more familiar faces&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Connect with a tech Slack group&lt;/strong&gt; — in my case, &lt;a href=&quot;https://www.technottingham.com/&quot;&gt;TechNottingham&lt;/a&gt; has a thriving Slack community, I advertised the meetup in their #javascript channel&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Personal social media&lt;/strong&gt; — in addition to using branded &lt;a href=&quot;https://twitter.com/leicesterjs?lang=en&quot;&gt;“LeicesterJS” twitter&lt;/a&gt; account. I further advertised the meetup with my own personal Twitter and LinkedIn. Additionally, you can encourage your co-workers to put the word out too&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Word of mouth&lt;/strong&gt; — you’d be surprised how many people just come to events because they saw it on &lt;a href=&quot;https://www.meetup.com/&quot;&gt;meetup.com&lt;/a&gt;. From my purely anecdotal data gathering, it seems that people mostly came because they saw the event on meetup.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;RSVPs matter&lt;/strong&gt; — a day before the meetup, it is good to encourage people to update their RSVP status if they are no longer planning on attending. Oftentimes, only 60–70% of the RSVPs you have on meetup.com will actually turn up. Make sure you keep track of this KPI and encourage the “regulars” at your meetup&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*5t6kUM-EnedEnCmD&quot; alt=&quot;Download the full thing here: https://gist.github.com/joshghent/2bb9c6e2ce616e29aaa3c7a2895cb17d&quot; /&gt;&lt;em&gt;Download the full thing here: &lt;a href=&quot;https://gist.github.com/joshghent/2bb9c6e2ce616e29aaa3c7a2895cb17d&quot;&gt;https://gist.github.com/joshghent/2bb9c6e2ce616e29aaa3c7a2895cb17d&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Processes&lt;/h2&gt;
&lt;p&gt;After the first meetup, I had a number of people come and ask me for the slides to the talks that had been given. I hadn’t even thought about that! There were a number of these sorts of things that I had to get a process nailed down for.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Talk slides&lt;/strong&gt; — stealing an idea from NottsJS, I decided to create a Github organization and put the slides in a repo. It was the lowest-friction way to distribute them. I considered using Google Drive or some other file-sharing service, but this would produce unwieldy URLs that audience members may have a hard time finding&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Talk ideas&lt;/strong&gt; — after having a number of inquiries about giving talks, I again turned to GitHub. The process is now to submit a talk idea as a Github issue on a repo. I have a set template for the issue so that no information I may need at the last minute is left out&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Somewhere to talk&lt;/strong&gt; — despite the bounty of Slack groups out there, many people requested a Slack group to discuss the meetup and the tech community in Leicester at large. This ticks along nicely. I used &lt;a href=&quot;https://github.com/rauchg/slackin&quot;&gt;Slackin&lt;/a&gt; on a free Heroku box to create a site where people can get instant invites&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Introductions&lt;/strong&gt; — the first meet had some very basic introduction slides, but I’ve gradually fleshed these out to make sure everyone feels at home. Basic things like pointing out the bathrooms and telling people what time the food will arrive and when the meetup is expected to end can be very useful. One additional tip: I get people to raise their hand if they work with me (and therefore at the office we are in), and I let other audience members know they can go to those people if they need any help&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Other Pointers&lt;/h2&gt;
&lt;p&gt;After getting through the above, it can seem as if there is nothing more to do — just grow the meetup, right? I fell into that trap too. But a further refinement came first by introducing a &lt;a href=&quot;https://www.meetup.com/leicesterjs/pages/27888143/Code_of_Conduct/&quot;&gt;code of conduct&lt;/a&gt;. This document outlines how not only speakers, but all those in attendance, should behave, and sets out clear processes for people who do not. It’s worth grabbing a document that has already been written, because in all likelihood you are not a lawyer (but if you are, fill your boots!). Additionally, it’s good to have a point of contact (an email address and a named person) who should be contacted if someone wants to report something. Although you hope that nothing like that will ever happen, there have been far too many cases in the tech industry alone highlighting that intolerance and other bad behaviour towards others is unfortunately not uncommon — and often it goes unreported. I hope that never happens at LeicesterJS. Nonetheless, a code of conduct is essential to have.&lt;/p&gt;
&lt;p&gt;Additionally, you can offer yourself as a resource if any potential speakers need help preparing a talk. Since its beginning, I’ve encouraged first-time speakers to give talks. This can be a nerve-racking experience, so offer generous support with slides, talk topics and the like. The &lt;a href=&quot;http://slack.leicesterjs.org&quot;&gt;meetup group Slack channel&lt;/a&gt; can be a great place to do this.&lt;/p&gt;
&lt;p&gt;A lesson I learned early on was to encourage audience participation. It makes the meetup more active and motivates people to keep discussing afterwards too. Perhaps recommend that your speakers open the floor, so to speak, with a discussion: asking if anyone uses the technology being discussed, or whether they use an alternative.&lt;/p&gt;
&lt;p&gt;Starting your own tech meetup is an incredibly rewarding pursuit, but don’t underestimate the work involved. It’s also worth noting that, if you are in a smaller city as I am, it is better to support any existing meetups than to create your own. Ask the organizers if you can assist in any way, perhaps by arranging new speakers or handling promotion of the event — they would welcome anyone willing to lend a hand.&lt;/p&gt;
&lt;p&gt;If you are in the Leicester area, come along to LeicesterJS — the meetup is every 3rd Thursday of the month!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>I don’t know what to say…</title>
    <link href="/dont-know/"/>
    <updated>2018-12-04T22:12:03Z</updated>
    <id>/dont-know/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/githubscreenshot.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;The issue raised for the event-stream breach. It’s a grisly flame war that I would not recommend reading.&lt;/p&gt;
&lt;p&gt;I’m a little late to the party here but after having a couple of conversations at work and with others I wanted to document my thoughts on the recent security issues around the &lt;a href=&quot;https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident&quot;&gt;event stream npm package&lt;/a&gt; which was used by lots of popular packages such as nodemon.&lt;/p&gt;
&lt;p&gt;Surrounding this controversy were many questions about vetting packages more carefully, as well as the tendency for Node developers to reach for a package for basic functionality they could implement themselves (looking at you, left-pad).&lt;/p&gt;
&lt;p&gt;Whilst all these questions are valid and worth discussing, this is not what I wanted to talk about in this blog post.&lt;/p&gt;
&lt;p&gt;I want to talk about giving back to open source.&lt;/p&gt;
&lt;p&gt;Most notable was the discussion around why the original author handed over ownership to “some random”. People questioned how the author “dare” hand over ownership. Others even suggested a conspiracy between the package author and the perpetrator.&lt;/p&gt;
&lt;p&gt;I had co-workers approach me saying they couldn’t believe someone did that and how it was “stupid” to hand over permissions to another person.&lt;/p&gt;
&lt;p&gt;But was it “stupid” to do so? Well no.&lt;/p&gt;
&lt;p&gt;This is open source software that is completely free. It was relied on by hundreds of packages and it alone had over &lt;a href=&quot;https://npm-stat.com/charts.html?package=event-stream&quot;&gt;76M downloads&lt;/a&gt;. Yet, despite being depended on so much, it was not financially backed and was a labour of love — as so many open source projects are.&lt;/p&gt;
&lt;p&gt;When you are a maintainer of a project, especially a popular one, there may be many issues but not enough time to fix them. People often simply complain that the free tool they are using (often for enterprise software) is not working. I myself have handed over ownership of a project and given the new maintainer full write access. I still take a look at PRs every now and again, but for all intents and purposes, it is their package. I was originally contacted when someone &lt;a href=&quot;https://github.com/OTRChat/NodeChat/issues/31&quot;&gt;posted an issue&lt;/a&gt; about wanting to work on it for a class project, and I was delighted! Someone wanted to actually spend their time working on a project I originally authored. It never even crossed my mind that they would do anything malicious, and even if it had, in the politest way possible, it’s not my problem.&lt;/p&gt;
&lt;p&gt;Rather than looking at this malicious package and thinking you need to either abandon Node.js or reject the whole concept of open source, &lt;strong&gt;show appreciation for the packages you use.&lt;/strong&gt; Maybe get your company to give back by donating development time or financial resources. All other assets in a business are paid for — so why shouldn’t the underlying pieces of your code base be? It is as black and white as this: if the npm registry disappeared, a lot of companies would be in serious trouble, yet only a small fraction of libraries receive support.&lt;/p&gt;
&lt;p&gt;It is comforting, in a way, to know that this issue is not isolated to the Node ecosystem. Just last year, Equifax &lt;a href=&quot;https://www.theregister.co.uk/2017/10/02/equifax_ceo_richard_smith_congressional_testimony/&quot;&gt;blamed Apache Struts&lt;/a&gt; for a breach affecting their entire customer base. Apache had fixed the issue, but Equifax had failed to update their servers — yet blame was still pinned on that project. From my research, I can find no record of them donating to or sponsoring the Apache Foundation — yet their &lt;a href=&quot;https://www.marketwatch.com/investing/stock/efx/financials&quot;&gt;net income last year was $587M&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The message is this: go and check out some packages you use a lot across your projects, and see if any of their issues need help. If you do not have development time, check out how you can donate to packages with “&lt;a href=&quot;https://github.com/feross/thanks&quot;&gt;npx thanks&lt;/a&gt;”. If you cannot do either of those things, leave them a star; &lt;a href=&quot;https://www.npmjs.com/package/appreciate&quot;&gt;there is even an npm package to do this&lt;/a&gt;. Just don’t complain about an issue in software you make money off, which you got for free.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>10 Things I wish I knew before giving my First Tech Talk</title>
    <link href="/tech-talk-tips/"/>
    <updated>2018-11-13T22:12:03Z</updated>
    <id>/tech-talk-tips/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/me.jpeg&quot; /&gt;
	&lt;em&gt;Giving the talk — credit https://twitter.com/JamieTanna/status/1029428095223320576&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Glossophobia, or the fear of public speaking, is cited as being amongst &lt;a href=&quot;https://www.washingtonpost.com/news/wonk/wp/2014/10/30/clowns-are-twice-as-scary-to-democrats-as-they-are-to-republicans/?noredirect=on&amp;amp;utm_term=.a61b1b9d11bc&quot;&gt;mankind’s top 10 fears&lt;/a&gt;. It relates to our inherent fear of failure. Although I have never been afraid of speaking publicly, it can be a bit nerve-racking at times, even for the most experienced speakers.&lt;/p&gt;
&lt;p&gt;Why did I choose to give a talk then? For one, I wanted the experience, ever since &lt;a href=&quot;https://blog.joshghent.com/how-to-attend-your-first-programming-meetup-835b74f6556f&quot;&gt;going to my first meetup&lt;/a&gt; I thought “that’s really cool to speak about stuff you’re excited about”. In connection with this, I enjoy teaching people, whether that be 1-on-1 or to a group — it’s one of the reasons I contribute to open source, and write blogs. It’s a creative outlet. Overall, my primary objective was simply to share something I’m passionate about and also &lt;em&gt;try&lt;/em&gt; and make them laugh — emphasis on the word “try” there.&lt;/p&gt;
&lt;p&gt;My first talk was at the &lt;a href=&quot;https://nottsjs.org/&quot;&gt;NottinghamJS meetup&lt;/a&gt; and was titled “&lt;a href=&quot;https://github.com/nottsjs/speakers/issues/46&quot;&gt;Lightning Node Performance&lt;/a&gt;”. I’m hugely grateful to the organizers for giving me a platform. Previously they had hosted people from Amazon’s Alexa division, Microsoft&#39;s Machine Learning team and more — so it seemed as if I had big shoes to fill.&lt;/p&gt;
&lt;p&gt;But giving the talk is the ending, let’s start at the beginning with things I wish I had known when preparing my first talk.&lt;/p&gt;
&lt;h2&gt;Preparation took longer than expected&lt;/h2&gt;
&lt;p&gt;First and foremost, the preparation took a long time. A long time. Initially, I had expected creating the slides and writing the talk to take around 2 days. It actually took over a week — plus all the additions I made late at night and the changes to the content on the day the talk was due to take place. If there is any mistake I made, it’s that I severely underestimated the time it would take. It gave me a newfound appreciation for any content I consume, whether that be talks, videos or podcasts. It takes a lot of time to prepare these things. Perhaps that’s why criticism can hurt so much.&lt;/p&gt;
&lt;p&gt;Part of the reason the preparation took a long time was that I wanted to be 100% concrete on every last word I said — in case someone picked me up on it and tore the entire talk to shreds. For example, part of my talk was about the &lt;a href=&quot;https://medium.com/the-node-js-collection/what-you-should-know-to-really-understand-the-node-js-event-loop-and-its-metrics-c4907b19da4c&quot;&gt;NodeJS event loop&lt;/a&gt;. Although I know roughly how the event loop works, there were still some questions I could not answer. I thought that someone might ask me about it, and so I set off down the rabbit hole to explore. This kind of pattern occurred at least 6–7 times when creating the talk and accounted for a large proportion of the time I spent.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*qO8ucAj7rpUXP_tD3W1A9Q.png&quot; /&gt;
	&lt;em&gt;The image I created for Node.js clusters&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Moreover, I wanted to keep the slides almost completely visual. I kept words off the slides because I have observed that people read those rather than listening to you. Finding images for NodeJS clustering is harder than it looks, though, and so another time-consuming task was poring over pages of gifs and images to find one that perfectly encapsulated the subject matter. Oftentimes, I created my own in Photoshop, which again took a large portion of time — primarily due to my appalling photo editing skills.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Choosing a topic is tricky&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In connection with preparation time, it also took a long time to come up with a topic. Since I was not representing a company, I wasn’t presenting any one particular “thing”. I therefore went with a more general topic: application performance. This proved difficult because it’s so broad and had so many subtopics I wanted to cover. For example, I wanted to speak about lambda cold starts, network resilience, asynchronous code in Node and much more. Each one could have been a talk in its own right. A balance therefore had to be struck between covering lots of topics briefly and covering a few topics in depth. I hope I eventually got that balance right, but it’s hard to tell. In future, I would suggest coming up with a concrete outline in parallel with thinking up a topic.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Not all points are equal&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is a lesson I learned after presenting the talk: not every point deserves the same amount of time. Spend more time on the difficult-to-understand topics and breeze through the minor points. There is often a sunk cost fallacy at play here, whereby, because you took lots of time to prepare every slide, you feel each deserves its own ceremony. Get rid of this thinking and instead prioritize the points covered. Ordering your points carefully can help with this: no one wants to be bombarded with lots of heavy topics in one go, so spread them out and interleave them with smaller, lighter points.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*8H9OLV-pu8qsvO4e&quot; /&gt;
	&lt;em&gt;Your delivery can get a bit wooden!&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;&lt;strong&gt;Practice, practice… but not too much&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Practising your talk is essential, of course, but you can practise it too much. At a certain point, your delivery becomes scripted and wooden. Rather than attempting to memorize a script, remember the points you are covering. Then just speak. If you have the subject knowledge, this will produce results. Furthermore, speaking from within rather than from notes will vary your talk in useful ways. I found that when practising my talk, I would do it a different way each time, adding some anecdotes and talking points and cutting others. This happened at an unconscious level and would not have been achieved if I were reciting verbatim.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2416/0*p-iyI1WhYQGJUMQ2&quot; /&gt;
	&lt;em&gt;There’s an NPM module for that — &lt;a href=&quot;https://twitter.com/iamdevloper/status/487606612757315584&quot;&gt;https://twitter.com/iamdevloper/status/487606612757315584&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;&lt;strong&gt;Don’t fear questions&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Questions are a fantastic way for people to get further insight into what you spoke about, and they can often reveal places where the talk should have explained a point further or provided a different angle. I didn’t so much fear these questions as expect the worst. But in the event, the questions were about the talk, mostly asking me to expand on certain stories I had told about how &lt;a href=&quot;https://www.cloudcall.com/&quot;&gt;CloudCall&lt;/a&gt; was doing this performance improvement work. I did get a couple about technologies I hadn’t heard of, but I can hardly be blamed for that — especially in the JS world.&lt;/p&gt;
&lt;p&gt;I learned a lot from the whole experience; briefly, here are my takeaways.&lt;/p&gt;
&lt;h3&gt;Upload your slides to &lt;a href=&quot;https://github.com/joshghent/talks&quot;&gt;GitHub&lt;/a&gt; and &lt;a href=&quot;https://www.slidedeck.com/&quot;&gt;Slidedeck&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;One thing people always ask after a talk is where they can get the slides, so make them easily available. Create a repo called “talks” and upload the file there, and also upload the slides to Slidedeck for those who may not have PowerPoint/Keynote.&lt;/p&gt;
&lt;h3&gt;Visual slides worked well&lt;/h3&gt;
&lt;p&gt;A picture says a thousand words. Words on slides should be avoided at all costs unless they are used to re-emphasize a point. You can explain much more with visuals. For example, rather than putting a slide with the conclusion from a study, put a nice chart up there with the numbers behind the study.&lt;/p&gt;
&lt;h3&gt;Avoid lots of code on slides&lt;/h3&gt;
&lt;p&gt;Code on slides is similar to words on slides: it should be used to make a specific point. Try to keep the code as short as possible, using an extract where you can. It’s not essential that the audience has the complete context of a program.&lt;/p&gt;
&lt;h3&gt;Slow down delivery&lt;/h3&gt;
&lt;p&gt;When I gave the talk, I think I rushed a little. It’s a nerves thing I suppose. My advice is to just count in your head 1–5 between points and 1–10 between slides. It will seem like a lifetime from your point of view, but it makes the delivery far more fluid.&lt;/p&gt;
&lt;h3&gt;Engage with the audience rather than speak to them&lt;/h3&gt;
&lt;p&gt;My talk was that. A talk. I hope the visuals were enough to keep people engaged, but in future I will make an effort to ask the audience questions and engage with them further. For example, I may ask the audience if they have any experience dealing with X after explaining how I did it.&lt;/p&gt;
&lt;p&gt;Since my first talk, I have given a couple of others and want to do more. It’s a good experience but takes a lot of time. Be kind to those who give talks and give constructive feedback, as they have sacrificed a lot of time to deliver this. And if you are interested in giving a talk — do so! Ask the organizers of the event and I’m sure they will be happy to pen you in. If you are in the Leicester, UK area and would like to give a talk, post an issue on the &lt;a href=&quot;https://github.com/leicesterjs/speakers&quot;&gt;LeicesterJS speaker’s repo&lt;/a&gt; and I will get it in the diary — we want to encourage first-time speakers. If you have given a talk, share your experience — it’s good to break down some of the fears people may have.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Networking at Tech Meetups</title>
    <link href="/meetup-networking/"/>
    <updated>2018-08-11T22:12:03Z</updated>
    <id>/meetup-networking/</id>
    <content type="html">&lt;p&gt;Tech meetups and talks are a great way to get to know fellow developers in your locality. But it can be challenging if you are introverted by your nature. Although you may not be introverted, some find it challenging to approach people when they first attended a meetup. Networking is a core part of why many attend meetups — whether to find a project to work on, a new job or just a friend. This article is motivated by knowing my past-self and others would benefit from how to network at meetups.&lt;/p&gt;
&lt;p&gt;First of all, know who attends meetups. Developers! Rather than being a source of discomfort, you can look around the room at the Gitlab hoodies, beards and sticker-covered laptops and breathe a sigh of relief. These are your people. Developers are like spiders: they are more afraid of you than you are of them. So just approach them! They won’t hurt you.&lt;/p&gt;
&lt;p&gt;A primary source of fear is not knowing what to talk about once you approach someone. At a meetup, beyond the “hey, what’s your name” basics, I ask “What project are you most passionate about at the moment — whether at work or in your spare time?”. Or perhaps “Are you going to use any of the tech that was mentioned in today’s talk?”. A safe starter question is just to ask what they do day to day at work. Be interested in people. People are interesting and, especially developers, always excited about something. Find what they are excited about and then drill down on it.&lt;/p&gt;
&lt;p&gt;Avoid questions that can be answered with “yeah it’s alright” or “yeah good”. These are Boolean questions. Instead of asking “Did you enjoy the talk?”, ask “Is there anything from the talk you’re going to apply in your work? I’ve not used X technology before, but the principles carry over into Y project I’m doing”. The main objective of these questions is to get the person talking. People will latch onto you if they do most of the talking.&lt;/p&gt;
&lt;p&gt;Often groups form, and it can seem like you’re butting in on the conversation, but don’t fear this! Either look around for a couple of people sitting down or ease your way into the group. Don’t feel awkward; just start listening to the conversation. Be careful not to ask a question that has already been asked. Latch onto some new information that is mentioned and ask about that.&lt;/p&gt;
&lt;p&gt;Don’t be afraid to end a conversation, you can’t stay at the meetup all night and you want to speak to more than one person. Simply say “It’s been great chatting with you, could we continue this? I have some interesting questions for you. What’s your email and/or LinkedIn?”. Then hand them your phone to put your details in. Keep it professional so try to stick to professional forms of contact (usually email, LinkedIn or Twitter).&lt;/p&gt;
&lt;p&gt;After the meetup, try to follow up with an email with a question based on what you spoke about. Keep a template handy that you can fill in for speed.&lt;/p&gt;
&lt;p&gt;By even your second meetup, you’ll find it a lot easier to begin speaking to people and the initial barrier will be cut down. Becoming a regular, you’ll also get to know friendly faces so you can check in with them to see how their project is going or their job hunt.&lt;/p&gt;
&lt;p&gt;Although these tips will help you, there will be times when you say the wrong thing or blurt something out at the wrong moment. But don’t be afraid, we all do it. Even the queen wears underwear, as they say. Networking shouldn’t be shied away from for fear of social embarrassment or of looking like a yuppie; it’s a critical part of your career’s development and could potentially open many doors for you — it has for me.&lt;/p&gt;
&lt;p&gt;If you would like to read more, I discuss similar topics as well as more in-depth technical posts &lt;a href=&quot;https://blog.joshghent.com/&quot;&gt;here on my blog&lt;/a&gt;. I also tweet &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Tracking Goals in Todoist</title>
    <link href="/goals-in-todoist/"/>
    <updated>2018-07-27T22:12:03Z</updated>
    <id>/goals-in-todoist/</id>
    <content type="html">&lt;p&gt;“There is always an app for that” is a phrase I heard repeatedly when I was looking at something to keep tabs on my goals, both short and long-term. But you know what, I don’t want an app! I began to consider using &lt;a href=&quot;https://en.todoist.com/&quot;&gt;Todoist&lt;/a&gt;, my task management app for doing this. After all, it’s one of my most used applications so there is less likelihood of me forgetting about it on the back page of a folder. Instead, it would be right front and center. Whilst I’m sure that all those goal management apps have some useful feature, you might be interested to see how you could use Todoist (or indeed any task management app) to keep track of your goals.&lt;/p&gt;
&lt;p&gt;The idea of “task management” may begin to make your goal seem more like a chore but, on the other hand, it makes you think of the goal in more segmented achievable chunks. This really aided me as it helped me apply the age-old advice of being specific and actionable with my goals. Rather than go to the gym, I create tasks to go to the gym on certain days. Additionally, tasks have completion dates so you can set each of those segments against a date to check in with the goal you are attempting to achieve.&lt;/p&gt;
&lt;p&gt;Now I’m going to dig into the specifics of how you can set your task manager up to work with your goals. Although this article is geared around Todoist, most task management and productivity apps support the same features. Other apps you could do this with are &lt;a href=&quot;https://www.omnigroup.com/omnifocus&quot;&gt;Omnifocus&lt;/a&gt;, &lt;a href=&quot;https://www.notion.so/&quot;&gt;Notion&lt;/a&gt;, &lt;a href=&quot;https://culturedcode.com/&quot;&gt;Things&lt;/a&gt;, Any.do and Microsoft To Do (formerly &lt;a href=&quot;https://www.wunderlist.com/&quot;&gt;Wunderlist&lt;/a&gt;).&lt;/p&gt;
&lt;h2&gt;Create a goal project&lt;/h2&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*yEtGLCX52onL5Kle&quot; /&gt;
	&lt;em&gt;The championship is so close!&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;The first step is to create a new project to separate out your goal-based tasks. In Todoist, you can create nested projects, so I created a parent project of “Goals” and then child projects for each goal I had.&lt;/p&gt;
&lt;h2&gt;Create Recurring Tasks&lt;/h2&gt;
&lt;p&gt;Using Todoist’s powerful recurring task functionality, you can now create small tasks within each of these goal projects that are set to recur. For example, here is how I set up the “Gym” project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*rA7JHrKQWXY5IHAV&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Notice I have taken full advantage of recurring tasks by creating individual workout tasks. I’ve also made sure to have a task that can be checked off every day, so I’m always chipping away at my goal.&lt;/p&gt;
&lt;p&gt;Incorporating some &lt;a href=&quot;https://gettingthingsdone.com/&quot;&gt;Getting Things Done&lt;/a&gt; principles, I’ve created a recurring task to review the progress of this goal. This could be in the form of a journal or a comment I put against the task. It keeps me honed into the goal’s original purpose and focused on the week ahead.&lt;/p&gt;
&lt;p&gt;By having this all planned out in advance, you will never say “Oh no, I forgot to pack my gym bag so I can’t go to the gym” or “I had to get McDonald’s today because I hadn’t made lunch!”. It’s already in your task management app, ready for you. You don’t even need to think about it. This lowers the barrier to entry and reduces the number of “excuse vectors” — the number of different ways you could make excuses for not achieving your goal.&lt;/p&gt;
&lt;h2&gt;Use Labels and Reminders aggressively to provide context&lt;/h2&gt;
&lt;p&gt;In Getting Things Done, David Allen discusses context around certain tasks. In our example above, we can’t pack our gym bag at the office, because at that point it’s too late, we need to do it before we leave for work.&lt;/p&gt;
&lt;p&gt;Context provides that assistance to highlight the tasks you can and should do right now. Todoist enables this by providing time-based reminders and labels. I would suggest creating reminders for the pack bag task for right after you wake up so you do not forget. Additionally, trigger the press-ups tasks when you arrive at your home location. Location-based reminders are a premium feature but you can just as easily set it for the time you get home from work.&lt;/p&gt;
&lt;p&gt;Although not applicable in the example above, labels can provide extraordinary power. If we were learning to code, you could create a task to read a programming article you found and tag it with @out_and_about — this is a label I use for when I’m not physically at home, these are the sorts of tasks I can do on my lunch break. Other tasks such as “Add new API endpoint for getting users” would be a task I label with @home and @deep_work as it will require me to be at home and concentrate on that task for an extended period of time. When looking at what I should do next, labels like these help me to weigh up my motivation and available time.&lt;/p&gt;
&lt;h2&gt;Start Doing!&lt;/h2&gt;
&lt;p&gt;As much as all these fancy tools assist in visualizing your goals and the tasks that make them up, there is no replacement for real solid action. There is a vast multitude of facets to forming habits and achieving goals from the practical steps right through to the neuroscience research; that all goes well beyond the scope of this article. All in all, however, just do and be realistic about the time these tasks will take. Now, what are you waiting for? Get cracking!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Lessons from Open Source</title>
    <link href="/open-source-lessons/"/>
    <updated>2018-07-13T22:12:03Z</updated>
    <id>/open-source-lessons/</id>
    <content type="html">&lt;p&gt;Contributing to open source is often touted as a great way to be recognized in the software development community, with many heralding their &lt;a href=&quot;https://github.com/&quot;&gt;Github&lt;/a&gt; profiles as a resumé of sorts. Additionally, open source software developers find their programming abilities enhanced and motivations for their day-jobs recharged. Beyond these, however, there are further lessons that can be learnt from contributing to open source.&lt;/p&gt;
&lt;h2&gt;Code Ownership&lt;/h2&gt;
&lt;p&gt;When I first took over as maintainer of an open source project, I found myself with a strong sense of how the code base &lt;em&gt;should&lt;/em&gt; be. It was my baby that I had cared for and was trying to improve. When I began to encourage the community to add to the project, I began to see strange new solutions that I would not have chosen. Moreover, I was cautious about appointing anyone else as a maintainer of the organisation I had created.&lt;/p&gt;
&lt;p&gt;I was not focused on delivering features and bug fixes but instead on how I perceived the codebase &lt;em&gt;should&lt;/em&gt; be.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*KzBNzDDzzdTtEh32.&quot; /&gt;
	&lt;em&gt;“No, it should look like this!”&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;This highlights an interesting point: you can’t be precious about the approach. Problems can be solved in many different ways. There is more than one way to skin a cat, as they say — and it could not be truer when it comes to software development.&lt;/p&gt;
&lt;p&gt;Your goal as an open source maintainer should be to enable and encourage people to solve these problems however they like. The specifics of the code matter less than how well documented and how well tested it is.&lt;/p&gt;
&lt;p&gt;Of course, sometimes a certain approach is more convoluted than necessary. In these cases, &lt;strong&gt;discuss&lt;/strong&gt; the reasons why the person went for that approach. Perhaps they tried the approach you were thinking of, but it did not work for whatever reason. Don’t go in all guns blazing; as the code changes over time, it may come to favour one solution over another.&lt;/p&gt;
&lt;h2&gt;Communication&lt;/h2&gt;
&lt;p&gt;Open source is, by its nature, open to basically anyone with a Github account. Therefore, people who stumble across your project and want to contribute may be in a different time zone or speak English as a second language. This can often lead to miscommunication, so it is best to make a concerted effort to ensure there is no ambiguity in what you are saying. Furthermore, different nations have customs in their language that, whilst they might offend you, are thought nothing of by others. This can be the case in reverse too, so be mindful of any language that could offend others unnecessarily.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*2nwpaiTXHO_QQXR1.&quot; /&gt;
	&lt;em&gt;Emotion doesn’t always travel well on the internet&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;An important thing to bear in mind when communicating with either maintainers or contributors of open source projects is that they are doing this in their free time, unpaid. With that in mind, be wary of pressuring people into timeframe commitments or being overly critical in merge request comments. Make sure you are kind and considerate throughout. This principle applies to your day-to-day work too: sure, there might be times when ruffling some feathers is needed, but by and large it pays to be positive and encouraging.&lt;/p&gt;
&lt;h2&gt;Writing&lt;/h2&gt;
&lt;p&gt;Beyond writing code, there is an even more important, yet seldom thought of, form of writing — documentation. Critically in open source, if you want people to use your thing — you gotta tell them how to use that thing!&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*0Ijnny5zhcXA1nUM.&quot; /&gt;
	&lt;em&gt;Where is the “any” key?&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;This presents an interesting challenge: you need to write your documentation in a manner that suits your target audience. If your application is aimed at beginner programmers, gear your documentation towards that. Don’t assume someone already has &lt;a href=&quot;https://github.com/nodejs/node/wiki/Installation&quot;&gt;Node&lt;/a&gt; or &lt;a href=&quot;https://docs.mongodb.com/tutorials/install-mongodb-on-windows/&quot;&gt;MongoDB&lt;/a&gt; installed; show them how, or point them to further guides where they can learn. A good way to hone your documentation writing skills is to discover a new API and write documentation for it. Dig through the source code and find out the usage of each endpoint and what it outputs. Since you’re approaching that project as an outsider, your documentation will naturally lend itself to that audience and will provide an outstanding benchmark for the documentation you write in future.&lt;/p&gt;
&lt;p&gt;As with all writing, the point is to be clear, concise and easy to understand.&lt;/p&gt;
&lt;p&gt;Open source software is a lot more than just adding features and fixing bugs; it’s about the people. These lessons are important to embrace for use in a real-world environment and will prove invaluable. If you’re looking for a place to start contributing, &lt;a href=&quot;https://up-for-grabs.net/&quot;&gt;Up-For-Grabs&lt;/a&gt; is great. Alternatively, find some software you already use and, if it is open source, try and tackle an issue on their GitHub page. Open source has opened many doors for me and furthered my career more than anything else I’ve done, and it can for you too.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Art of Good Code Review</title>
    <link href="/good-code-review/"/>
    <updated>2018-05-30T22:12:03Z</updated>
    <id>/good-code-review/</id>
    <content type="html">&lt;p&gt;Code review is a critical part of any software development process. In theory, it is designed to broaden system knowledge amongst the team and ensure that the code is maintainable and easy to read. Perfecting code reviews can be something of an art: it requires a balance between being picky and not sweating the small stuff.&lt;/p&gt;
&lt;p&gt;Before we dive into the principles around good code review, we should first define what it &lt;strong&gt;isn’t.&lt;/strong&gt; Many believe that one goal of a code review is to address any styling issues. With the rise of &lt;a href=&quot;https://prettier.io/&quot;&gt;Prettier&lt;/a&gt; and &lt;a href=&quot;https://eslint.org/&quot;&gt;ESLint&lt;/a&gt;, arguments about code formatting have become moot. Automating formatting with the use of tools removes the personal opinion and sway from any one person. Additionally, this also has the benefit of formatting the code more consistently across all areas of the code base.&lt;/p&gt;
&lt;p&gt;Others believe that code reviews should catch bugs. Whilst this can be true if you spot something glaringly obvious, it is often challenging to understand the code in its wider context. Bug catching should generally be left to a set of watertight unit and integration tests, as well as your top-notch QA department.&lt;/p&gt;
&lt;p&gt;Now we know what code review &lt;strong&gt;isn’t&lt;/strong&gt;, what makes a good code review?&lt;/p&gt;
&lt;h2&gt;Know when to say “it’s good enough”&lt;/h2&gt;
&lt;p&gt;Although the aim of all software companies should be to deliver A-class code, there is such a thing as being too nitpicky. There is no need to find fault with every little niggle of code; this will only end up demoralizing the developer. It’s a fine balance between picking at the fine details and not going overboard. Only you can be the judge of that, and time will hone your perception of done. I often find that overly nitpicky comments come down to the personal preference of the reviewer. These types of comments should be avoided; instead, focus your attention on &lt;em&gt;actual&lt;/em&gt; issues in the code.&lt;/p&gt;
&lt;h2&gt;Stay Positive&lt;/h2&gt;
&lt;p&gt;Code review can often seem like the reviewer has marched in and torn to shreds the priceless artwork you’ve digitally sculpted. Instead of focusing only on the issues, it’s often good to step back and reflect on the positive attributes of a merge request. Maybe someone has used a kick-ass language feature you had no idea about. Or perhaps they’ve just written an informative function header. Commend them for this type of work. It lifts the spirits of the developer and means they are more receptive to the issues you do find.&lt;/p&gt;
&lt;h2&gt;Look at the output, not the approach&lt;/h2&gt;
&lt;p&gt;A key part of code review is analysing the output, not the approach they have taken. There are many different ways of solving a problem, and it’s important to avoid enforcing one solution over another.&lt;/p&gt;
&lt;p&gt;In some cases, a developer might not be aware of another approach. For example, let’s say they have written a method that loops through an array to find a matching element. They might not be aware that &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes&quot;&gt;Array.includes&lt;/a&gt; (in JavaScript) could solve their problem! In such cases, it’s best to go and talk to the developer in person; it might be the case that they cannot use Array.includes. You are most likely tackling this code base as an outsider, so it is best to assume that the author is the expert.&lt;/p&gt;
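&lt;p&gt;To make that concrete, here is a hypothetical sketch (the function and array names are made up for illustration): the hand-rolled loop and the built-in Array.includes return the same answer.&lt;/p&gt;

```javascript
// A hand-rolled search a developer might write before learning
// about Array.includes. Names here are illustrative.
function containsLoop(items, target) {
  for (const item of items) {
    if (item === target) {
      return true;
    }
  }
  return false;
}

// The built-in does the same in one call:
const fruits = ['apple', 'banana', 'cherry'];
console.log(containsLoop(fruits, 'banana')); // true
console.log(fruits.includes('banana'));      // true
```

&lt;p&gt;If you must support much older engines, checking Array.prototype.indexOf against -1 is the longer-standing equivalent.&lt;/p&gt;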
&lt;h2&gt;Be constructive&lt;/h2&gt;
&lt;p&gt;When you find an issue in a merge request, which comment do you think would be better?&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;No. Don’t do this.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;OR&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Can we change this to not include X as this may cause Z and perhaps do Y instead? What do you think?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In the first approach, the comment is not constructive: not only does it not tell the author the reason for the issue, it also doesn’t suggest an alternative. On the other hand, the second approach suggested using Y instead of X because Z might happen. You might have enough experience on a code base to know certain ins and outs that the author is not aware of; sharing that context means the developer benefits from your wisdom. Additionally, the second approach asked what the author thought. Rather than ruling with an iron fist, it opened up a discussion.&lt;/p&gt;
&lt;p&gt;Code review is a fantastic way for developers of all experience levels to get their heads stuck into different parts of your company’s code base. Furthermore, it’s a great way for developers to gain experience of new approaches to solving problems. If you don’t already, try to include code review in your process and involve the whole team.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Principles of Performance</title>
    <link href="/performance-principles/"/>
    <updated>2018-05-16T22:12:03Z</updated>
    <id>/performance-principles/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/cheetah.jpeg&quot; /&gt;
	&lt;em&gt;Photo by Cara Fuller on Unsplash&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;On the web, speed is everything. But you knew that, right? Rather than throwing percentages and statistics at you about site retention rates, let’s take a look at some key principles to bear in mind when looking to improve your app or website’s performance.&lt;/p&gt;
&lt;p&gt;These principles apply no matter what technology you use and are broader in scope. The aim is to make this a small handbook, not a manual. Just as, once you learn to drive one car, you can in theory drive any other, this article aims to teach the principles and not the implementation.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*-PJFFL6w3b2sbjF5.&quot; /&gt;
	&lt;em&gt;This is your website&lt;/em&gt;
&lt;/div&gt;
&lt;h2&gt;More Network Round-Trips = Slower Load Times&lt;/h2&gt;
&lt;p&gt;Without a doubt, one of the main contributors to slow page load times is network round trips. The more assets there are to download (JavaScript libraries, CSS modules, images), the more network connections the end-user has to establish. Regardless of network speed, this will have negative repercussions on page load times, and the effect will be more noticeable at slower network speeds.&lt;/p&gt;
&lt;p&gt;The first way you can solve this is by bundling your assets. You can try something like Webpack to compile various stylesheets and scripts into bundles, meaning you have a single CSS and JS file that you use across the board. Further, if you have a lot of icons or sprites on your page, it may also be worth putting these onto a single image and referencing them like a sprite sheet; this is a trick commonly used by game developers but can be utilized on the web too.&lt;/p&gt;
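&lt;p&gt;As a rough sketch, a minimal Webpack configuration that rolls everything imported from a single entry point into one bundle could look like this (the file paths are placeholders, not from any real project):&lt;/p&gt;

```javascript
// webpack.config.js: minimal single-bundle sketch.
// The entry and output paths are placeholder assumptions.
const path = require('path');

module.exports = {
  entry: './src/index.js',   // one entry that imports all your JS and CSS
  output: {
    filename: 'bundle.js',   // the single file the browser downloads
    path: path.resolve(__dirname, 'dist'),
  },
};
```

&lt;p&gt;With a setup like this, the browser makes one request for scripts instead of one per file.&lt;/p&gt;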
&lt;p&gt;Another approach is to reduce the assets you are using; question whether you really need that library. Lots of sites have sprung up, like &lt;a href=&quot;http://youmightnotneedjquery.com/&quot;&gt;You Might Not Need jQuery&lt;/a&gt;, illustrating why you might not need to include a specific library. Additionally, if you do decide you need that library or framework, then often you can import only what you need. In the case of &lt;a href=&quot;https://lodash.com/&quot;&gt;Lodash&lt;/a&gt;, you can specify a singular function to save to your dependencies (improving ‘npm install’ time for new contributors) as well as importing only that function, thereby not bloating your application with unused code. I find myself keenly aware of this with Bootstrap. Often Bootstrap is hastily imported and used for nothing more than its easy-to-use grid layout. Truth be told, Bootstrap includes a &lt;strong&gt;lot&lt;/strong&gt; of CSS modules for jumbotrons, icons, wells, breadcrumbs and anything else you could think of for building a site. But in 99% of cases, you just don’t need all of its features. With &lt;a href=&quot;https://getbootstrap.com/docs/4.0/getting-started/webpack/&quot;&gt;Bootstrap 4 you can use Webpack&lt;/a&gt; to import specific plugins. With Bootstrap 3.3 you can get even more granular and create your own &lt;a href=&quot;https://getbootstrap.com/docs/3.3/customize/&quot;&gt;customized version of Bootstrap&lt;/a&gt;, including only what you need.&lt;/p&gt;
&lt;p&gt;Note that this tip applies only to HTTP/1.1. With HTTP/2, it is actually &lt;strong&gt;faster&lt;/strong&gt; to have lots of small assets rather than one single bundle. However, HTTP/2 has yet to see widespread adoption, so bundling remains the safer default for now.&lt;/p&gt;
&lt;h2&gt;Larger Assets = Slow&lt;/h2&gt;
&lt;p&gt;A &lt;a href=&quot;https://www.youtube.com/watch?v=FEs2jgZBaQA&quot;&gt;fantastic talk by Addy Osmani at CSSConf&lt;/a&gt; demonstrated the detrimental effect of having large image assets on your page (especially in the visible viewport). To have a fast running app or website, you have to shed the things that slow you down.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*weuFQ40LFR1eJQyR.&quot; /&gt;
	&lt;em&gt;I couldn’t find a photo of Usain Bolt with a bag of sand but here’s the next best thing!&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Usain Bolt can run the 100m in 9.58 seconds. That’s very fast. But if he was carrying a bag of sand, he would be a lot slower. It sounds silly but the bag of sand illustrates all those cumbersome libraries and images you are using to try and make the website look sleeker but end up slowing it down. Sure, your image pops in and does a little twirl, but people will have left the site long before that animation library even loads in.&lt;/p&gt;
&lt;p&gt;If you decide that you really do need that 10Mb 4K image to load on first paint, then look to deliver it via a CDN (such as &lt;a href=&quot;https://www.cloudflare.com/&quot;&gt;CloudFlare&lt;/a&gt; or &lt;a href=&quot;https://cloudinary.com/&quot;&gt;Cloudinary&lt;/a&gt;) and cache it aggressively at both the server and client level. This will benefit recurring users and can reduce page load times by a factor of 10 on the second load. Customers will thank you for respecting their data plans; excess mobile data charges can quickly rack up, and if your app is a major culprit, you may find users dropping off the service.&lt;/p&gt;
&lt;h2&gt;Feels fast = Is Fast&lt;/h2&gt;
&lt;p&gt;When loading something, if it “feels” fast then it will be fast. But what does it mean to “feel” fast? Well, when loading a website, for example, prioritise the part that the user can see first — this is called the initially visible viewport. It will vary on a per-device basis but using tools such as &lt;a href=&quot;https://github.com/pocketjoso/penthouse&quot;&gt;Penthouse&lt;/a&gt; and &lt;a href=&quot;https://github.com/addyosmani/critical&quot;&gt;CriticalCSS&lt;/a&gt; you can bundle and inline the styling that renders the top of your website.&lt;/p&gt;
&lt;p&gt;It also means being interactive in the shortest time possible. You want a person to scroll down your website and not hit what I will call the “Tasmanian scrollbar devil”. I’m sure you’ve had it yourself: scrolling down a website when an image above the visible viewport loads and pushes the content you were trying to look at further down. It’s incredibly annoying UX and takes up valuable CPU time.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*sOAULjFJsJE0_kYf.&quot; /&gt;
	&lt;em&gt;You don’t want this guy on your site&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;To combat this, place invisible divs with the width and height of images before they load; this will prevent the scrollbar devil from ever arising. You can perform the same trick with large areas of text and the like; for example, you may have seen Facebook and Jira using background masks. &lt;a href=&quot;https://cloudcannon.com/deconstructions/2014/11/15/facebook-content-placeholder-deconstruction.html&quot;&gt;Here is a great article&lt;/a&gt; on how they work.&lt;/p&gt;
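&lt;p&gt;As a minimal sketch (the helper name and dimensions are made up for illustration), the placeholder only needs to occupy the image’s final footprint, so computing its inline style is a one-liner:&lt;/p&gt;

```javascript
// Build the inline style for an invisible placeholder that reserves an
// image's final dimensions, so the image popping in later cannot push
// content down. The helper name and sizes are illustrative.
function placeholderStyle(width, height) {
  return 'width: ' + width + 'px; height: ' + height + 'px; visibility: hidden;';
}

console.log(placeholderStyle(640, 360));
// width: 640px; height: 360px; visibility: hidden;
```

&lt;p&gt;Apply that style to a div where the image will eventually sit, then swap the loaded image in; the surrounding content never moves.&lt;/p&gt;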
&lt;p&gt;But what about the rest of the page’s content below the initially visible viewport? That can be lazy loaded. Simple. Utilize async scripts and other methods to ensure your assets are delivered swiftly, and try to defer any background tasks or other assets for later on down the line. For example, if I had a to-do app, I would render the main menu bar critically inline, as well as placeholder masks for tasks. I’d then prioritize loading the JS that requests the to-dos from the database. Anything else, such as account settings, the user’s profile image and so on, can be saved for later. The goal of the app is to display to-dos. Make that the fastest thing, and forget everything else.&lt;/p&gt;
&lt;h2&gt;Hosting&lt;/h2&gt;
&lt;p&gt;Consider your hosting provider as a possible bottleneck for performance. Although we have spoken a lot in this article about initial loading times, it’s worth considering the performance of certain interactions.&lt;/p&gt;
&lt;p&gt;Using the to-do app example above, the most important interaction is marking to-do items as completed. It might be the case that you are triggering a serverless function hosted on AWS Lambda when the user marks the item complete. If you find this action slow, investigate the bottlenecks. Is it the database connection time? Is the memory assigned to the Lambda function enough? With serverless, perhaps the function is going cold and so has a slow startup time, in which case it may be better to host it on a long-running server instance. The point is, there are many considerations and possible bottlenecks in even the simplest action.&lt;/p&gt;
&lt;p&gt;If you are using a relational database (such as MySQL or PostgreSQL) it’s worth taking a look at your table architecture. Bad database design can necessitate more JOINs than would otherwise be needed. Further, joining on un-indexed columns in large datasets is detrimental to query performance, so it’s advisable to take a look at what queries you are performing and optimize them. You may even want to consider using Redis or Memcached to cache common query responses.&lt;/p&gt;
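&lt;p&gt;A minimal sketch of the query-caching idea (the queryDatabase argument is a stand-in for your real data layer, not a real API): identical queries hit the database once and are served from memory afterwards.&lt;/p&gt;

```javascript
// In-process cache for query results, keyed by the SQL text.
// queryDatabase is a placeholder for whatever actually talks to the DB.
const queryCache = new Map();

async function cachedQuery(sql, queryDatabase) {
  if (queryCache.has(sql)) {
    return queryCache.get(sql); // served from memory, no round trip
  }
  const rows = await queryDatabase(sql);
  queryCache.set(sql, rows);
  return rows;
}
```

&lt;p&gt;A real setup would add expiry and move the cache into Redis or Memcached so it is shared between processes and survives restarts.&lt;/p&gt;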
&lt;h2&gt;Set budgets and targets&lt;/h2&gt;
&lt;p&gt;Chances are, if you are the developer then you will be fairly intolerant of anything slow. In addition, you will have a good idea of how fast things &lt;em&gt;should&lt;/em&gt; perform, as well as the device that people use most often.&lt;/p&gt;
&lt;p&gt;Now you have a clear picture of your most common use case, the next step is to create a performance budget. In other words, how fast should it load? Having a clear target will give you something to aim for and to keep a close eye on with each new code change that is made. Be vigilant about sticking to that sub-1-second load time, and don’t accept any new code that pushes it over the limit.&lt;/p&gt;
&lt;p&gt;Hopefully, you can utilize these principles in the future and apply them to your own website or application. They should be transferable no matter what technology you are using. Let me know any performance principles you have at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; or comment below! I’m also on twitter &lt;a href=&quot;https://twitter.com/joshghent?lang=en&quot;&gt;@joshghent&lt;/a&gt; where I tweet about web performance and more.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>LinkedIn For Developers</title>
    <link href="/linkedin-for-developers/"/>
    <updated>2018-04-19T22:12:03Z</updated>
    <id>/linkedin-for-developers/</id>
    <content type="html">&lt;p&gt;“Oh, not another recruiter!” – my co-worker said, lazily chucking their phone down. “They just spam!”.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/10368/1*-FpmhaWSMn5ieGS-yYUPFA@2x.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This is an all too common phrase I hear from developers. I disagree with this sentiment because recruiters can get you good jobs and negotiate on your behalf – it’s in their best interest to do so. If you are looking for a new opportunity, LinkedIn can be a great way to connect with people who will start the hunt for you. Here I will break down not only how to optimize yourself for a new job but also hopefully how to remove a lot of the pain points of searching for one.&lt;/p&gt;
&lt;h3&gt;Clarify Why You Are Looking For a New Position&lt;/h3&gt;
&lt;p&gt;Before you set out on your voyage to find a new job, you need to know what you want. Further, it is helpful to reflect on your career trajectory and ultimately, to borrow a cliche, consider where you want to be in two years’ time.&lt;/p&gt;
&lt;p&gt;After establishing these points you will have a clear goal to aim towards. The tricky part, however, is sticking to that goal. If you want a job that has a foosball table and bean bag chairs, then don’t settle for a company that only has hammocks. It’s a silly example but illustrates the point. In my case, I wanted to cut my commute down and spend less time in traffic in a car. Therefore, finding a company situated right outside a train station was extremely convenient and fulfilled that criteria.&lt;/p&gt;
&lt;p&gt;I’d further recommend thinking about what sort of company you would like to work for. That sounds like a vague thing to contemplate, and it is, but this one is not meant to be specific; think in general terms. For example, some would view a workplace like Google as the God-tier level job. With its plethora of perks and benefits such as free meals, on-site gyms and of course, &lt;a href=&quot;https://www.decoist.com/nap-pods-office/&quot;&gt;nap pods&lt;/a&gt;, it presents a view of how you want to work. Whilst that level of benefits is not available in the vast majority of cases, it’s good to recognize that you hold those kinds of perks in high regard and therefore want to optimize your next job for those sorts of things. Personally, I was looking for a company that, whilst having a forward-looking growth culture, also recognized that people have lives and families they want to be with. Yours will be different, so have a think about what you want.&lt;/p&gt;
&lt;p&gt;Now you have those key details, you can include those in your messages and calls with recruiters who can then look for opportunities that fit.&lt;/p&gt;
&lt;h3&gt;Automate InMail Replies&lt;/h3&gt;
&lt;p&gt;As I outlined earlier, a big problem is that people receive a lot of “InMail”. These are messages that are blasted out to a wide range of people en masse. Often, the mail is not applicable because the skills for the job being offered are outside of your knowledge base; other times it may be that the job is not near you, or perhaps it’s simply not appealing. In any case, it is easy to see why these messages would be considered spam.&lt;/p&gt;
&lt;p&gt;Nevertheless, InMail can provide a doorway to connect with a recruiter in a meaningful way. I’d suggest composing a few messages that you can simply copy-paste to reply to the recruiter. They will appreciate you having taken the time to reply as it means the recruiter will get back the “InMail Credits” that it costs to send them in the first place. In 90% of cases, I have found that the recruiter would ask me what type of positions I am looking for. In which case, I can reply and suggest they keep in touch with me if they come across a role that fits that criteria.&lt;/p&gt;
&lt;p&gt;I have 3 different types of replies to InMail&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I’m not interested&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I’m not looking&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I am interested&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Below is the letter I send when I am not looking for a new position.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hi X,
Hope you’re well and thank you for reaching out to me with this opportunity.
Unfortunately, I am no longer looking for new positions at this time as I have just accepted a new offer at Z Corp.
Thank you for your consideration and best of luck with the business. I will bear you in mind when I look for future opportunities.
Kind regards,
Y&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;By having these responses pre-prepared, I spend less time writing and more time gaining useful connections that I can utilize in the future. Furthermore, the friendly reply will set you apart from others they may have sent this InMail to. People like people, and if you’re a person who is nice, people will be drawn to you. Recruiters are people like you and me, just trying to do their job.&lt;/p&gt;
&lt;h3&gt;Update your Work History and Skills Sections&lt;/h3&gt;
&lt;p&gt;Additionally, the number of InMails you get in the first place can be cut down by updating your work history with a detailed description of what you did at each company, what you learned and what technologies you used. The latter is especially useful, as recruiters will often target you with jobs related to technologies you used most recently.&lt;/p&gt;
&lt;p&gt;Furthermore, the skills section is important as this is one of the factors that recruiters use when sending the InMail campaigns. By pruning out technologies you don’t want to work with, you should receive less mail concerning jobs using those technologies. Put in skills you have and want to use going forward.&lt;/p&gt;
&lt;p&gt;To truly go above and beyond and put yourself in the spotlight as a potential candidate, be sure to include any articles you have written and any open source projects you contribute to or maintain. These are all things that good recruiters will look out for, as they indicate that you are a more capable candidate.&lt;/p&gt;
&lt;p&gt;Overall these are just a few tips to help you be that little bit extra special (I know you are) to potential companies. My dad always used to say: “There are lots of baked beans on the shelf, but why do people go for the same brand each time? – Because they believe they are the best. Be the best can of baked beans”. The point is that you have to set yourself apart from the crowd. There are lots of developers out there and lots of demand, but it doesn’t take much to differentiate yourself by applying the points above.&lt;/p&gt;
&lt;p&gt;Do you have any more tips to share on LinkedIn or perhaps a grievance or two? Discuss it down below or email me at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; or comment below! I’m also on twitter &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt; where I tweet about web performance and more.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Solve 90% of Google Pagespeed Insights Issues in 30 Minutes</title>
    <link href="/google-psi-problems/"/>
    <updated>2018-03-31T22:12:03Z</updated>
    <id>/google-psi-problems/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/horserace.jpg&quot; /&gt;
	&lt;em&gt;Source: &lt;a href=&quot;https://unsplash.com/photos/fxAo3DiMICI&quot;&gt;https://unsplash.com/photos/fxAo3DiMICI&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Performance is a critical factor in site retention rates. Time is money, and there is a laundry list of examples that prove &lt;a href=&quot;http://loadstorm.com/2014/04/infographic-web-performance-impacts-conversion-rates/&quot;&gt;people expect near-instant loading&lt;/a&gt; and will navigate off a web page if it &lt;a href=&quot;https://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all&quot;&gt;does not load in under 3 seconds&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Although the &lt;a href=&quot;https://developers.google.com/speed/pagespeed/insights/&quot;&gt;Google PageSpeed Insights&lt;/a&gt; score is not a guaranteed stamp that a site will be fast, it does give some indication. Additionally, it is now one of the hundreds of factors that Google and other search engines use in their SEO ranking algorithms. Truly then, web performance is something to care about from a business standpoint.&lt;/p&gt;
&lt;p&gt;Since developers and businesses alike aim for the best bang for their buck, here I’m going to walk you through three simple steps you can action which will bump your score by at least 20 points (if not more). This is not a silver bullet but, applying the &lt;a href=&quot;https://betterexplained.com/articles/understanding-the-pareto-principle-the-8020-rule/&quot;&gt;Pareto principle&lt;/a&gt;, it is better to spend your time doing the 20% of tasks which will give you 80% of the benefit.&lt;/p&gt;
&lt;h2&gt;Minify your assets&lt;/h2&gt;
&lt;p&gt;Large page sizes are often due to bloated JavaScript and CSS assets. An easy way to reduce the sizes of these is to minify them. This can be done via a task runner like &lt;a href=&quot;https://gulpjs.com/&quot;&gt;Gulp&lt;/a&gt; or &lt;a href=&quot;https://gruntjs.com/&quot;&gt;Grunt&lt;/a&gt;. If you are in a hurry then you can use an online tool such as &lt;a href=&quot;https://www.minifier.org/&quot;&gt;Minifier&lt;/a&gt;; however, this means you will need to re-run it every time the JavaScript changes, so I’d recommend setting up an automated task.&lt;/p&gt;
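&lt;p&gt;For reference, a minimal Gulp task for this could look like the sketch below, assuming the gulp and gulp-uglify packages are installed; the source and destination paths are placeholders.&lt;/p&gt;

```javascript
// gulpfile.js: minify every script under src/js into dist/js.
// Assumes gulp and gulp-uglify are installed; paths are placeholders.
const gulp = require('gulp');
const uglify = require('gulp-uglify');

gulp.task('minify-js', function () {
  return gulp.src('src/js/*.js')
    .pipe(uglify())
    .pipe(gulp.dest('dist/js'));
});
```

&lt;p&gt;Running this as part of your build means minification happens automatically on every change rather than by hand.&lt;/p&gt;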
&lt;p&gt;If you are feeling adventurous then you can go further with this optimization and use Webpack with &lt;a href=&quot;https://webpack.js.org/guides/tree-shaking/&quot;&gt;tree shaking&lt;/a&gt;. This will prune any unused code from your Javascript and therefore reduce the size of the underlying modules that you are minifying.&lt;/p&gt;
&lt;p&gt;Delve deeper into minification and you will quickly realize that the best way to reduce asset size is to &lt;strong&gt;just have less stuff&lt;/strong&gt; in them. Therefore, try and reduce the number of &lt;a href=&quot;https://lodash.com/&quot;&gt;Lodash&lt;/a&gt; modules you are importing, or &lt;a href=&quot;https://momentjs.com/docs/&quot;&gt;Moment.js locales&lt;/a&gt;, or perhaps you are importing the entirety of &lt;a href=&quot;https://getbootstrap.com/&quot;&gt;Bootstrap&lt;/a&gt; just to use the row and container system.&lt;/p&gt;
&lt;p&gt;Images are one of the biggest culprits when it comes to large files. According to the &lt;a href=&quot;http://httparchive.org/interesting.php?a=All&amp;amp;l=Mar%2015%202018&quot;&gt;HTTP Archive, as of the 15th March 2018&lt;/a&gt;, &lt;strong&gt;over half&lt;/strong&gt; of an average site’s payload is in images. Therefore it’s crucial to focus your efforts on reducing the number of images you use, but also on optimizing them. Before putting an image on your page, make sure to compress it first. If you are in a hurry then try using a tool such as &lt;a href=&quot;http://optimizilla.com/&quot;&gt;Optimizilla&lt;/a&gt;. The longer-term solution is to automate this process using your task runner (Gulp, Grunt or Webpack) along with a plugin such as &lt;a href=&quot;https://www.imagemagick.org/script/index.php&quot;&gt;ImageMagick&lt;/a&gt; or the like. There is even a &lt;a href=&quot;https://en-gb.wordpress.org/plugins/ewww-image-optimizer/&quot;&gt;WordPress plugin&lt;/a&gt; if you are publishing images via a blog.&lt;/p&gt;
&lt;h2&gt;Cache Assets&lt;/h2&gt;
&lt;p&gt;Huge savings in speed will come from the client’s browser not having to download the assets in the first place. This will not only drastically improve page load times but also reduce the bandwidth needed for your server (which depending on your hosting provider may reduce the bill). Additionally, if the client is viewing the site on their mobile then they will be able to load your site, safe in the knowledge that they don’t need to be concerned about their data plan.&lt;/p&gt;
&lt;p&gt;You can cache your assets in a few ways, the easiest is to set a cache control header on your requests. On an Apache server, you can do that as follows.&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2152/0*SsZKehPZW3P9knAC.&quot; /&gt;
	&lt;em&gt;Code here: &lt;a href=&quot;https://gist.github.com/joshghent/fcca761d006ae34a1a2aaa0406a9e0f1&quot;&gt;https://gist.github.com/joshghent/fcca761d006ae34a1a2aaa0406a9e0f1&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Application caches can also come in very useful. A major issue facing many sites is long-running queries. An easy solution is to cache the query response in the application, provided that the query is worth caching and its response does not change often. &lt;a href=&quot;https://laravel.com/docs/5.6/cache&quot;&gt;Laravel has this built in&lt;/a&gt; and &lt;a href=&quot;https://www.sohamkamani.com/blog/2016/10/14/make-your-node-server-faster-with-redis-cache/&quot;&gt;Express can be extended to do this also&lt;/a&gt;. This will reduce the server response time and therefore lead to quicker page loads for clients.&lt;/p&gt;
&lt;h2&gt;Enable Compression&lt;/h2&gt;
&lt;p&gt;Lastly, after reducing and minifying your site’s payload, you can enable compression, which will ensure assets are transferred in the smallest possible form. There are two popular algorithms, &lt;a href=&quot;http://www.gzip.org/&quot;&gt;Gzip&lt;/a&gt; and &lt;a href=&quot;https://github.com/google/brotli&quot;&gt;Brotli&lt;/a&gt;. The latter is a more recent arrival and actually has better compression rates than the long-heralded Gzip. Nevertheless, I would still recommend using Gzip, as Brotli takes more CPU power (and therefore time) to decompress on the client side.&lt;/p&gt;
&lt;p&gt;You can find guides on how to do this around the web that will be up to date long after this blog post is published but here is a &lt;a href=&quot;https://varvy.com/pagespeed/enable-compression.html&quot;&gt;good one for Apache&lt;/a&gt; (which I assume will stay the same!).&lt;/p&gt;
&lt;p&gt;I hasten to add that if your site must support &lt;a href=&quot;http://schroepl.net/projekte/mod_gzip/browser.htm&quot;&gt;Netscape 3 and below&lt;/a&gt; then compression will be redundant, as those browsers only support HTTP/1.0, which does not send the Accept-Encoding header. Nonetheless, with &lt;a href=&quot;http://gs.statcounter.com/browser-market-share&quot;&gt;less than 1% of people worldwide&lt;/a&gt; using anything other than “the big 5” browsers, I think you’ll be in the clear.&lt;/p&gt;
&lt;p&gt;Performance should be considered a feature, and whilst it would be great to spend lots of time on it, companies tend to prioritize other tasks ahead of it. Whilst that is not ideal, using the three tips above (each of which should take less than 30 minutes) you can quickly improve the performance of your site or application, and gain leverage for more time to be allocated to performance work in the future.&lt;/p&gt;
&lt;p&gt;Do you have any other performance quick tips? Let me know at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; or comment below! I’m also on &lt;a href=&quot;https://twitter.com/joshghent?lang=en&quot;&gt;twitter @joshghent&lt;/a&gt; where I tweet about web performance and more.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>📱 Zen iPhone</title>
    <link href="/zen-iphone/"/>
    <updated>2018-03-26T00:00:00Z</updated>
    <id>/zen-iphone/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/zeniphone.jpg&quot; /&gt;
	&lt;em&gt;Photo credit: &lt;a href=&quot;https://unsplash.com/photos/Dl6jeyfihLk&quot;&gt;https://unsplash.com/photos/Dl6jeyfihLk&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;Ever since the smartphone arrived in our hands, people everywhere have been utterly entranced by them, never spending more than a moment without being bathed in white digital light.&lt;/p&gt;
&lt;p&gt;I consider myself in that crowd. But recently I have been more inclined to try and spend time on my phone in a more purposeful manner. I found that I was just scrolling endlessly on various apps just to avoid doing other things — as a form of procrastination. I’m not a maverick to the extent that I will abandon a smartphone entirely and resort to a &lt;a href=&quot;https://en.wikipedia.org/wiki/Nokia_3310&quot;&gt;Nokia 3310&lt;/a&gt; or carrier pigeon but I did “refactor” my phone. And in doing so found a few helpful points which may help others do the same.&lt;/p&gt;
&lt;p&gt;I treat my phone as my virtual desk and although it has long been touted that &lt;a href=&quot;https://www.inc.com/geoffrey-james/a-messy-desk-is-a-sign-of-genius-according-to-scie.html&quot;&gt;geniuses generally prefer their desks messy&lt;/a&gt;, the vast majority of people prefer order and tidiness. In this way, be mindful about organising your home screen in a way that is conducive to the things you want to accomplish and deters you from the things you don’t want to get distracted by.&lt;/p&gt;
&lt;p&gt;Many will say to completely delete distracting apps. Generally, social media and/or games are a way to relax on the train home or perhaps to scroll through whilst waiting in line and so deleting them seems like a step too far. As a solution to this, you can put “distracting” apps on the second page of a folder.&lt;/p&gt;
&lt;p&gt;For example, have a social media folder that on the first page has just LinkedIn. On the second page of that folder is Instagram, Twitter, Snapchat and more. Not only does this mean it takes an extra action to access these distracting apps, but it also keeps the screen less busy; every app prefers a different color theme: Twitter is blue, Snapchat is yellow and Instagram, like a child trying to pick which ice cream they want, went for a loud three-color gradient. Keeping these out of sight keeps your phone a more serene place.&lt;/p&gt;
&lt;p&gt;In line with this “less busy” theme, I have found simple two-color linear gradient backgrounds particularly helpful. I found single colors too boring, but a gradient gave it that little extra twist. Further, the menu bar (or whatever you call the bar at the top of the screen) was a particular source of chaos. I solved this by removing the battery percentage so that only icons remain on my menu bar — again giving it an air of peace and serenity.&lt;/p&gt;
&lt;p&gt;The plethora of large red badges that just beg you to click on them are seldom left untouched. I removed all of these with two exceptions, my unread email count and the Todoist tasks I have due for the day.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Out of sight, out of mind — proverb&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Without a doubt, one of the most useful features you can activate on your phone is automated Do Not Disturb. As a programmer, I like things automated, so I have it configured to turn on between 10 pm and 6 am. This means I am not distracted by incoming emails or other messages as I am winding down for bed.&lt;/p&gt;
&lt;p&gt;Before bed, it is optimal to ensure your phone has a blue-light filter enabled, such as Night Shift (iOS) or Twilight (Android). The science on its benefits to sleep is &lt;a href=&quot;https://www.health.harvard.edu/staying-healthy/blue-light-has-a-dark-side&quot;&gt;sketchy&lt;/a&gt; &lt;a href=&quot;https://cliradex.com/7-myths-facts-blue-light-eyes/&quot;&gt;at&lt;/a&gt; &lt;a href=&quot;https://medicalxpress.com/news/2016-04-debunking-digital-eyestrain-blue-myths.html&quot;&gt;best&lt;/a&gt;, but from personal experience, it does reduce strain on my eyes compared to staring at a blue screen. When you’re used to a bright blue display, you don’t know any different. But trying to go back is impossible for me now.&lt;/p&gt;
&lt;p&gt;In my quest for a more peaceful digital experience, I stumbled across a tool called &lt;a href=&quot;https://inthemoment.io/&quot;&gt;Moment&lt;/a&gt;. It’s an app that tracks your phone usage. It works by looking at your battery usage, which you are asked to screenshot each week. The photos are then automatically scanned by the app.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2484/1*pgMkULYbNKEtX9L84MQWTw@2x.jpeg&quot; alt=&quot;My Homescreen&quot; /&gt;&lt;em&gt;My Homescreen&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As you may already know, I’m a huge fan of making data-driven decisions and believed this was a great way to gain some insight into how I am using my phone. Because I have only been using this for the past month, the data set is fairly limited and I think I have some false positives from leaving my phone unlocked while listening to audio from YouTube (I have yet to configure the excluded apps list). Nonetheless, over time this will build into a substantial data set I can draw sensible conclusions from. It will unlock answers to questions such as &lt;em&gt;“What apps do I go to first when I unlock my phone?”&lt;/em&gt; and &lt;em&gt;“What percentage of my time is taken up by social media in my 8-hour workday?”&lt;/em&gt;. Questions like these lead to inherently biased answers when asking oneself — because we all want to seem like a good person. Consider the “&lt;a href=&quot;https://en.wikipedia.org/wiki/Illusory_superiority#Driving_ability&quot;&gt;I am a better driver than you&lt;/a&gt;” experiment and it quickly becomes apparent that we are not good judges of our own character. Automated applications allow you to quickly gain insights and spot patterns you would never otherwise see.&lt;/p&gt;
&lt;p&gt;The final conclusion I came to with all of these apps and settings is that ultimately &lt;strong&gt;I needed to be more mindful&lt;/strong&gt; of the purpose I had each time I went for my phone. I never considered my phone usage an issue; more that I could be using the time more effectively. Thinking about the task you want to accomplish will enable you to have a clearer vision of how to achieve that objective. It doesn’t always have to be productive. Sometimes, the purpose of unlocking your phone will be to have a mindless scroll down the DailyMail app (a guilty pleasure of mine that is so mind-numbing they have started using it as an anaesthetic in some hospitals). Other times it will be to add a new task to your todo list. In any case, it’s fine. &lt;strong&gt;Just have something&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Do you have any other tips for achieving zen in the busy world of devices we now have? Comment below or tweet me &lt;a href=&quot;https://twitter.com/joshghent&quot;&gt;@joshghent&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Understanding PHP hatred</title>
    <link href="/php-hatred/"/>
    <updated>2018-03-05T22:12:03Z</updated>
    <id>/php-hatred/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/php.png&quot; /&gt;
	&lt;em&gt;Pictured: The PHP developer in their natural state of silent contempt&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;It’s an age-old joke to hate on &lt;a href=&quot;https://secure.php.net/&quot;&gt;PHP&lt;/a&gt;. But why do people dislike it so much? After all, &lt;a href=&quot;https://w3techs.com/technologies/details/pl-php/all/all&quot;&gt;PHP powers 80% of the web&lt;/a&gt; (a large majority of that is credited to &lt;a href=&quot;https://wordpress.com/&quot;&gt;Wordpress&lt;/a&gt;, but still). In this article I break down the main gripes of PHP development and share advice on language and system design.&lt;/p&gt;
&lt;h3&gt;Inconsistent method naming&lt;/h3&gt;
&lt;p&gt;The biggest problem people see when they first look at PHP is the inconsistency of the standard language methods. When &lt;a href=&quot;https://secure.php.net/manual/en/history.php.php&quot;&gt;PHP was first released in 1994&lt;/a&gt; it did not have &lt;a href=&quot;https://en.wikipedia.org/wiki/Namespace&quot;&gt;namespacing&lt;/a&gt;, which meant all methods had to exist globally at the &lt;strong&gt;root&lt;/strong&gt; level. When &lt;a href=&quot;https://secure.php.net/manual/en/language.namespaces.basics.php&quot;&gt;namespaces were finally introduced in PHP 5&lt;/a&gt;, the damage had already been done. Methods that would ordinarily have been &lt;a href=&quot;https://en.wikipedia.org/wiki/Namespace&quot;&gt;namespaced&lt;/a&gt; under their particular category (such as &lt;strong&gt;String&lt;/strong&gt; or &lt;strong&gt;Array&lt;/strong&gt;) were just plonked in globally and prefixed with the category instead.&lt;/p&gt;
&lt;p&gt;This led to names such as &lt;a href=&quot;https://secure.php.net/manual/en/function.array-map.php&quot;&gt;array_map&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.str-repeat.php&quot;&gt;str_repeat&lt;/a&gt;. Now that’s all well and good, but the problem is that the prefix + underscore method was not always used. Soon, there was a whole host of methods named things like &lt;a href=&quot;https://secure.php.net/manual/en/function.strtolower.php&quot;&gt;strtolower&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.ucfirst.php&quot;&gt;ucfirst&lt;/a&gt; that broke those rules.&lt;/p&gt;
&lt;p&gt;Additionally, these method names had inconsistent usage of &lt;a href=&quot;https://en.wikipedia.org/wiki/Snake_case&quot;&gt;snake_case&lt;/a&gt; — as is the case across most of the string methods. You have functions such as &lt;a href=&quot;https://secure.php.net/manual/en/function.strtotime.php&quot;&gt;strtotime&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.str-split.php&quot;&gt;str_split&lt;/a&gt; — why is it not str_to_time? Who knows.&lt;/p&gt;
&lt;p&gt;Furthermore, another minor inconsistency that had escaped my notice until studying the &lt;a href=&quot;https://secure.php.net/manual/en/indexes.functions.php&quot;&gt;list of PHP methods&lt;/a&gt; is the usage of &lt;strong&gt;‘to’&lt;/strong&gt; and &lt;strong&gt;‘2’&lt;/strong&gt;. In some cases &lt;strong&gt;‘2’&lt;/strong&gt; was substituted into method names, presumably to look like a teenager texting on a &lt;a href=&quot;https://en.wikipedia.org/wiki/Nokia_3310&quot;&gt;Nokia 3310&lt;/a&gt; in the early 2000s.&lt;/p&gt;
&lt;p&gt;As a result, we now have methods such as ‘&lt;a href=&quot;https://secure.php.net/manual/en/function.bin2hex.php&quot;&gt;bin&lt;strong&gt;2&lt;/strong&gt;hex&lt;/a&gt;’ and ‘&lt;a href=&quot;https://secure.php.net/manual/en/function.deg2rad.php&quot;&gt;deg&lt;strong&gt;2&lt;/strong&gt;rad&lt;/a&gt;’ as well as &lt;a href=&quot;https://secure.php.net/manual/en/function.strtotime.php&quot;&gt;str&lt;strong&gt;to&lt;/strong&gt;time&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.strtolower.php&quot;&gt;str&lt;strong&gt;to&lt;/strong&gt;lower&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;With all that said, what’s the takeaway for you and me? In software design, as in most things, consistency is key. By having a consistent interface for programs to use, both programmatically (with consistent and logical API endpoints and parameters) and visually (with easy-to-use but functional UIs), we enable more logical and clear usage for people using our UIs and for developers integrating with our APIs.&lt;/p&gt;
&lt;p&gt;Not to be too hard on the PHP developers, but it is clear that little thought went into planning the language or its future scope. Don’t make the same mistake. Drill down into all the little features and quirks of your system, as well as ones you may add in the future. It is impossible to gear up for every eventual outcome, but it is worth having that forward-thinking view so you are less likely to get tunnel-visioned into “the” product. Software always changes.&lt;/p&gt;
&lt;h3&gt;Inconsistent argument orders&lt;/h3&gt;
&lt;p&gt;Another inconsistency is that of argument ordering. Arrays, dictionaries, hashes: whatever you call them, they are an integral part of any language, used by developers on a daily basis, and form a core part of storing and manipulating data on any system.&lt;/p&gt;
&lt;p&gt;You’d think that, being such an important part of the language, they at least would be consistent. Unfortunately, you’d be wrong.&lt;/p&gt;
&lt;p&gt;If you’ve ever done PHP development you may have run into this issue. You’ve got an array of numbers that you want to double and then return in a new array. “No problem!” you say.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“I’ll use &lt;a href=&quot;https://secure.php.net/manual/en/function.array-map.php&quot;&gt;array_map&lt;/a&gt;!”.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So you write the code and then run it and then…&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/0*iKeBG9ial1LFcExP.&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;“What?” you say. After much debugging, thinking there may be an issue with your method, you finally resort to the PHP docs.&lt;/p&gt;
&lt;p&gt;There you discover…&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2432/1*WdQmqjGHTWJD8Avo29rMjw.png&quot; /&gt;
	&lt;em&gt;Documentation for array_map &lt;a href=&quot;https://secure.php.net/manual/en/function.array-map.php&quot;&gt;https://secure.php.net/manual/en/function.array-map.php&lt;/a&gt;&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;It’s callback first. Not callback last, like you had just done with array_filter.&lt;/strong&gt;&lt;/p&gt;
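&lt;p&gt;Side by side, the inconsistency is hard to miss. A minimal sketch of the two calls:&lt;/p&gt;

```php
$nums = [1, 2, 3];

// array_map takes the callback FIRST, then the array...
$doubled = array_map(function ($n) { return $n * 2; }, $nums);   // [2, 4, 6]

// ...while array_filter takes the array first and the callback LAST.
$evens = array_filter($nums, function ($n) { return $n % 2 === 0; });
```

&lt;p&gt;Two of the most commonly paired functions in the language, and their signatures are mirror images of each other.&lt;/p&gt;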
&lt;p&gt;I don’t know how many times I’ve done this but each time you can’t help but slightly curse the name &lt;a href=&quot;https://en.wikipedia.org/wiki/Rasmus_Lerdorf&quot;&gt;Rasmus Lerdorf&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Besides the importance of consistency, which we have already spoken about, what else can we learn? Well, for my money, it’s helpful error messages. Rather than spitting out something vague and meaningless as in this case, write something helpful and actionable. I read a great article about this very topic — you can find it &lt;a href=&quot;https://uxplanet.org/how-to-write-a-perfect-error-message-da1ca65a8f36&quot;&gt;here&lt;/a&gt;. I’d highly recommend reading it. Ideally you want to make your UI (both visual and programmatic) as intuitive as possible, but account for cases where someone makes a mistake (we all do) and handle it gracefully by guiding the user to the correct course.&lt;/p&gt;
&lt;h3&gt;Frustrating usage&lt;/h3&gt;
&lt;p&gt;Aside from being poorly named, these methods are also frustrating to use.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://secure.php.net/manual/en/function.explode.php&quot;&gt;Explode&lt;/a&gt; is a method that takes a string and a delimiter and breaks up that string on the delimiter into an array. Simple, right? You’ve probably seen this in Javascript with &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split&quot;&gt;String.split()&lt;/a&gt; and in other languages. The quirk here is that passing an empty string (“”) or null as the delimiter causes the method to return false. Rather than treating it like every other language (an empty string returns every character as an element of the array, and null returns the string in its entirety), PHP treats it as an error condition. But because it does not throw an error, you are forced to check for it manually.&lt;/p&gt;
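&lt;p&gt;A short sketch of the quirk (this is the behaviour up to PHP 7; PHP 8 finally throws a &lt;code&gt;ValueError&lt;/code&gt; for an empty delimiter instead):&lt;/p&gt;

```php
// JavaScript's ''.split('') gives every character; PHP just bails out.
$parts = explode('', 'abc');   // false, not ['a', 'b', 'c']

// Because no error is thrown, callers must guard manually:
if ($parts === false) {
    // handle the "bad delimiter" case yourself
}
```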
&lt;p&gt;Another aggravating case is manipulating arrays. &lt;a href=&quot;https://secure.php.net/manual/en/function.sort.php&quot;&gt;Sort()&lt;/a&gt; and all the other array sort methods (there are a lot, each more confusingly named than the last) operate on the array in place and do not return a new manipulated array. They simply return true or false. This prevents method chaining and makes array-manipulation code that bit more verbose than it would otherwise be. Further, &lt;a href=&quot;https://secure.php.net/manual/en/function.array-reverse.php&quot;&gt;array_reverse&lt;/a&gt; (in the same category of array manipulation) does return a new array, which again means more inconsistency (even though in this case, the inconsistency is good).&lt;/p&gt;
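&lt;p&gt;The in-place behaviour in a nutshell:&lt;/p&gt;

```php
$nums = [3, 1, 2];

sort($nums);                      // mutates $nums; returns true, not an array
// $nums is now [1, 2, 3], but there is nothing to method-chain on.

$reversed = array_reverse($nums); // this one DOES return a new array
// $nums is untouched; $reversed is [3, 2, 1].
```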
&lt;p&gt;Without doubt however, the trifecta of annoyance comes from finding a string within a string. This could not be simpler. Every language has a method like this, and they all work the same. A needle (the string you want to find) and a haystack (the string you want to find it in) are accepted; the method then returns the haystack index at which the needle was found, or -1 if it wasn’t found. This is the case for Javascript, C and most other languages. PHP however, being the language hipster that it is, decided that this wasn’t good enough and broke the status quo by returning false if the needle wasn’t found. That doesn’t sound so bad (although inconsistent with every other language in existence), but under PHP’s loose comparison, false compares equal to 0. Now this is an issue with the following code&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2976/1*AtvD1m_29mSkdMJw_2IQhw.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Unfortunately, because the developer was expecting a response of anything but -1 from the &lt;a href=&quot;https://secure.php.net/manual/en/function.strpos.php&quot;&gt;strpos&lt;/a&gt; method, this code will return true even though the needle is evidently not in the haystack. I find this one of the most glaring oversights in PHP, because it’s so easy to get wrong in any code that depends on it, as well as being, again, inconsistent with other languages.&lt;/p&gt;
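&lt;p&gt;The safe idiom is a strict comparison, since a needle found at index 0 loosely compares equal to false:&lt;/p&gt;

```php
$pos = strpos('hello world', 'hello');   // 0: found at the very start

if ($pos == false) {
    // Reached! 0 == false is true, so this wrongly reports "not found".
}

if ($pos !== false) {
    // Correct: the strict check distinguishes index 0 from false.
}
```

&lt;p&gt;The PHP manual itself carries a warning box about exactly this comparison, which says a lot about how often it bites.&lt;/p&gt;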
&lt;h3&gt;Bad error messages&lt;/h3&gt;
&lt;p&gt;Error messages are a major problem with PHP. I distinctly remember my first gripe with PHP — debugging. Being unfamiliar with PHP at the time, I did not think to use a &lt;a href=&quot;https://xdebug.org/&quot;&gt;3rd party tool to debug my code&lt;/a&gt;; that should be built in — right? Surprisingly (and unfortunately) not. I spent a while googling around to find out I &lt;a href=&quot;https://stackoverflow.com/questions/1053424/how-do-i-get-php-errors-to-display&quot;&gt;had to turn on errors with some specific variables and debug levels&lt;/a&gt;. If you look at the search results for “&lt;a href=&quot;https://www.google.co.uk/search?q=How+to+turn+on+php+errors&amp;amp;oq=How+to+turn+on+php+errors&amp;amp;aqs=chrome..69i57&amp;amp;sourceid=chrome&amp;amp;ie=UTF-8&quot;&gt;How to turn on php errors&lt;/a&gt;” or “&lt;a href=&quot;https://www.google.co.uk/search?q=PHP+blank+screen%2C+no+error&amp;amp;oq=PHP+blank+screen%2C+no+error&amp;amp;aqs=chrome..69i57j69i64&amp;amp;sourceid=chrome&amp;amp;ie=UTF-8&quot;&gt;PHP blank screen, no error&lt;/a&gt;”, the issue quickly becomes apparent.&lt;/p&gt;
&lt;p&gt;Now, you may have got error messages &lt;em&gt;actually&lt;/em&gt; working, but sooner or later you come across this gem.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;PHP&lt;/strong&gt;: &lt;a href=&quot;http://phpsadness.com/sad/1&quot;&gt;Parse error: syntax error, unexpected T_PAAMAYIM_NEKUDOTAYIM&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;You&lt;/strong&gt;: A what?!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PHP&lt;/strong&gt;: A Paamayim Nekudatayim of course…&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For the uninitiated, &lt;a href=&quot;https://en.wiktionary.org/wiki/%D7%A4%D7%A2%D7%9E%D7%99%D7%99%D7%9D_%D7%A0%D7%A7%D7%95%D7%93%D7%AA%D7%99%D7%99%D7%9D#Hebrew&quot;&gt;paamayim nekudatayim&lt;/a&gt; is a romanized version of the Hebrew word for “twice colon” which is referring to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Scope_resolution_operator#PHP&quot;&gt;scope resolution operator&lt;/a&gt; (::). The kind you would use to call a static method such as this&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2976/1*p1H01HBwqUr8OhBQf-556w.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This was originally introduced by the Israeli developers behind the &lt;a href=&quot;http://www.zend.com/en/community/php&quot;&gt;Zend Engine&lt;/a&gt; back in PHP 3. Now that’s fine for people who speak Hebrew, but English is widely accepted as the lingua franca of programming and the internet at large. Again, it all relates back to ease of use. After finding out the meaning, in a way I kind of like it as a fun quirk of PHP with an interesting backstory, but it is very confusing to new PHP developers (or developers full stop).&lt;/p&gt;
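&lt;p&gt;For anyone who has not met the operator, a tiny sketch (the class and method names are made up for illustration):&lt;/p&gt;

```php
class Text {
    public static function upper($s) {
        return strtoupper($s);
    }
}

// The '::' here is the infamous paamayim nekudotayim:
echo Text::upper('php');   // PHP

// Chain one '::' too many, though, and older PHP greets you with
// "Parse error: syntax error, unexpected T_PAAMAYIM_NEKUDOTAYIM":
// echo Text::upper::length;
```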
&lt;p&gt;This error message still lives on today in PHP 7.&lt;/p&gt;
&lt;p&gt;The main issue with PHP error messages is their lack of detail and specificity. At some point, because you’re not a robot (or maybe you are — if so, please fill out this captcha before continuing), you’ll mistype something, perhaps missing a bracket or quotation mark. Maybe your code looks something like this.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2976/1*AODwsrc4MDA1ZfPh4les3w.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*zQTkfZ1biXDZ8r_rflvaaw.png&quot; /&gt;
	&lt;em&gt;Result of the code above&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;If you’re a battle-hardened PHP developer then you may have spotted that the closing quotation mark on line 5 is missing. Yet PHP considers it helpful to report the error from line 8.&lt;/p&gt;
&lt;p&gt;Fast debugging is hugely important for programming languages. Beyond the frustration for the developer, ambiguous error messages mean the developer spends more time debugging, which costs the company and/or client. With the above example it may seem that a mountain is being made out of a molehill, but imagine this in a large application and the issue quickly becomes worse and more aggravating.&lt;/p&gt;
&lt;h3&gt;Method duplication&lt;/h3&gt;
&lt;p&gt;Last but not least is the issue of method duplication — as in the case with &lt;a href=&quot;https://secure.php.net/manual/en/function.die.php&quot;&gt;die&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.exit.php&quot;&gt;exit&lt;/a&gt; as well as &lt;a href=&quot;https://secure.php.net/manual/en/function.implode.php&quot;&gt;implode&lt;/a&gt; and &lt;a href=&quot;https://secure.php.net/manual/en/function.join.php&quot;&gt;join&lt;/a&gt;. Now, this may not seem like the biggest sin. After all, die came from Perl and will therefore be easier for programmers with that background to use, and exit came from C, again, allowing them to have an easier transition.&lt;/p&gt;
&lt;p&gt;The problem is that for new programmers, or programmers without a C/Perl background, it doesn’t become easier, just more confusing. You end up questioning which to use. Is one better than the other? Should one be enforced over the other in a style guide? All valid questions that send the developer down a rabbit hole of syntax quirks rather than actually working on the task at hand.&lt;/p&gt;
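&lt;p&gt;Both pairs really are plain aliases:&lt;/p&gt;

```php
// implode and join are aliases; the two calls below are interchangeable.
$a = implode(', ', ['red', 'green', 'blue']);   // 'red, green, blue'
$b = join(', ', ['red', 'green', 'blue']);      // identical result

// Likewise, exit('bye') and die('bye') do exactly the same thing.
```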
&lt;p&gt;Helpfully the PHP manual clears up the differences&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*EBTgMqTKHbUjL5mIEQQyDg.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Yeah… perhaps not. As a takeaway from this point, it is important not to pollute the documentation of a codebase with legacy stories about the reasoning behind one decision or another (as has been done here). Better still, question whether those stories should exist in the first place.&lt;/p&gt;
&lt;p&gt;Although there are many more quirks, from interesting to downright insane, I have found PHP very useful, as have thousands of developers worldwide. If you are a PHP developer with a startup idea, don’t wait to learn a new language or framework to build it. Just do it in PHP. Software can always be iterated on, and worrying about “What programming language should I do it in?” is even more insane and redundant than some of PHP’s quirks. All languages are capable of doing almost anything (aside from system-specific native apps, of course, but you get the point) — some are just different from others, which is why we have so many. Any speed gains you would get from writing the app in a different language will be wiped out many times over by the time it takes to learn that language.&lt;/p&gt;
&lt;p&gt;The age-old adage is “Only a poor workman blames his tools”. Now write the damn thing in PHP.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>What Tracking My Expenses for a Year Taught Me About Personal Finance</title>
    <link href="/tracking-personal-finance/"/>
    <updated>2018-02-26T00:00:00Z</updated>
    <id>/tracking-personal-finance/</id>
    <content type="html">&lt;p&gt;2017 was the year I tracked my finances. I started doing this because I wanted to remove the mystery of where my money was going. I wasn’t overspending per se, but found it challenging to know exactly what I had spent across credit cards, debit cards, cash, etc.&lt;/p&gt;
&lt;p&gt;Although there is a lot of personal expense tracking software out there, uncharacteristically, I chose a low-tech option — a simple &lt;a href=&quot;https://www.google.com/sheets/about/&quot;&gt;Google Sheet&lt;/a&gt;. I have a tab for each month and columns to track the Date, Shop, Amount, Notes and a category for the expense (with the options of fuel, entertainment, food, shopping, gifts and other).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*xHO8TCWzWqaSueqOiEELQw.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now that we’re in 2018, I have tracked my specific expenses for over a year. Here is what I’ve learnt.&lt;/p&gt;
&lt;h2&gt;Money isn’t going where you think&lt;/h2&gt;
&lt;p&gt;Ok sure, you might have a rough idea. But how much did you spend on lunches last month? For me, it was &lt;strong&gt;way&lt;/strong&gt; more than I expected, whilst how much I spent on fuel was a lot less. This enabled me to adjust my budget accordingly and be more mindful of spending in those areas. It’s all well and good to &lt;em&gt;say&lt;/em&gt; you have a budget, but are you really sticking to it?&lt;/p&gt;
&lt;p&gt;In my case, I wasn’t, but that’s not a problem. Now I have the data, I can make informed decisions about where my money is going.&lt;/p&gt;
&lt;p&gt;It also enables me to plan around larger spending that may happen on an annual basis. Car tax for example is something I pay yearly, however, in my budget it is included as a monthly expense. I then put this money into a specific account that I can later use on that date. I apply the same principle for clothing as again, it is something I do bi-monthly rather than specifically each month.&lt;/p&gt;
&lt;h2&gt;The bank should work for me&lt;/h2&gt;
&lt;p&gt;A common misconception, at least one that I believed, is that banks do not really owe you anything. They do! The bank I had been with for the previous 8 years paid me very low interest and didn’t offer any other benefits. I then started researching other bank accounts and found many that paid 5% interest (&lt;a href=&quot;https://www.nationwide.co.uk/products/current-accounts/flexdirect/features-and-benefits&quot;&gt;Nationwide Flexdirect&lt;/a&gt;) or offered store points when you use them (&lt;a href=&quot;https://www.tescobank.com/current-accounts/&quot;&gt;Tesco Current account&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;The other misconception I had was that it would be difficult to change accounts. Most banks in the UK will be registered with the &lt;a href=&quot;https://www.currentaccountswitch.co.uk/Pages/Home.aspx&quot;&gt;current account switching service&lt;/a&gt; which means they handle transferring direct debits, standing orders and forward any incoming funds to your old account to the new account. This process usually takes 7 days.&lt;/p&gt;
&lt;p&gt;I personally have it set up so I get paid into one bank account and then have standing orders out to all my other accounts (saving, spending etc.). This enables me to switch bank accounts often with relatively low friction. Changing regularly does negatively affect my credit score, but since I am not applying for a mortgage or loan any time soon, this is not an issue (and by the time I want to, it will have improved).&lt;/p&gt;
&lt;p&gt;The point is, don’t just stay with your current bank. &lt;a href=&quot;https://www.moneysavingexpert.com/banking/compare-best-bank-accounts&quot;&gt;Martin Lewis keeps a great updated list&lt;/a&gt; of the best bank accounts and breaks down their details as well as all the ins-and-outs of qualifying for them.&lt;/p&gt;
&lt;h2&gt;Automation is key&lt;/h2&gt;
&lt;p&gt;Without doubt, the best thing I did to optimize my finances this year was putting them on autopilot. I find finances and money interesting but, all in all, a drab subject. By automating everything, I am free of any ambiguity about where money is going or will go, and I don’t need to spend my own time dealing with it manually.&lt;/p&gt;
&lt;p&gt;Here’s what happens when my paycheck comes in:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Credit cards are paid&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bills are paid&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Standing orders send money to my spending account, saving(s) accounts and ISA — I would recommend setting up different “pots” as I spoke about earlier for different financial jobs. For example I have an account for my emergency fund, one for car fuel, one for annual car expenses (tax and insurance) and another for general saving (I use a bank that allows me to create “pots” within this account for holidays and any other items I am saving for).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Money left in my primary bank account (the one that gets the paycheck) is left until the end of the month and then moved into my savings account.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;My investments are also all automated as I use a service called Nutmeg to balance my portfolio according to my risk profile.&lt;/p&gt;
&lt;p&gt;If you struggle with setting aside money to save, try a service such as &lt;a href=&quot;https://www.moneyboxapp.com/&quot;&gt;Moneybox&lt;/a&gt; which takes weekly deposits from your account as well as rounding up change from purchases.&lt;/p&gt;
&lt;h2&gt;Any other tips that you use?&lt;/h2&gt;
&lt;p&gt;Comment below or tweet me with any other personal finance advice you’ve applied. I’m not a qualified finance professional by any stretch of the imagination, so your circumstances may vary, but the principles of automation and tracking your money can be applied no matter what your situation. By applying the advice above, I have been able to stick strictly to my spending budget, increase my savings and pay off my debts — and you can do that too. All it takes is a free afternoon and a memory of all those internet banking passwords.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Beginners Database Design Primer</title>
    <link href="/database-design/"/>
    <updated>2018-02-14T00:00:00Z</updated>
    <id>/database-design/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/database.png&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;Your boss has just got off the phone with a client who wants a bespoke social network site targeting a niche market. And they want you to head up the project. You’ve never built a social network. Your mind goes to Facebook, Twitter, and Instagram. They’re built by thousands of people with genius-level IQs and degrees. How could you compete?&lt;/p&gt;
&lt;p&gt;Well that’s at least the situation I found myself in.&lt;/p&gt;
&lt;p&gt;My first task was to design a database for this social network client. This is the advice and help I would have wanted to read. Please note: this applies generally to all database design; a social network is merely used as an example.&lt;/p&gt;
&lt;h2&gt;But why did I feel that the database design was so critical?&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Bad database design leads to laborious and slow queries&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Duplicate data and incorrect column data types occupy more disk space&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Think about your features and the relationship between them&lt;/h2&gt;
&lt;p&gt;What are the features of the site? &lt;strong&gt;Really&lt;/strong&gt; &lt;strong&gt;break it down&lt;/strong&gt;. The more you break it down, the less “&lt;em&gt;Oh yeah, let’s add a table for that&lt;/em&gt;” you’ll get later down the line. I suggest using a tool like &lt;a href=&quot;https://www.draw.io/&quot;&gt;draw.io&lt;/a&gt; to map out your tables before putting them into the actual database.&lt;/p&gt;
&lt;p&gt;As we are building a social network, the leading feature is letting users post and allowing other (authorized) users to comment. Here we have a one-to-many relationship, because a user can have many posts and many comments.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2740/0*FyqSEsID4A_2jE_n.&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
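&lt;p&gt;As a minimal sketch (the table and column names here are hypothetical), the one-to-many relationship can be modelled like this: each post carries a userId foreign key pointing back at its author.&lt;/p&gt;

```javascript
// Hypothetical sketch: a one-to-many relationship in plain data. Each post
// carries a userId foreign key pointing back at the users table.
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
];

const posts = [
  { id: 10, userId: 1, body: 'First post' },
  { id: 11, userId: 1, body: 'Second post' },
  { id: 12, userId: 2, body: 'Hello' },
];

// The "many" side is recovered by filtering on the foreign key, much as a
// SQL query would join on users.id = posts.user_id.
function postsForUser(userId) {
  return posts.filter(function (post) {
    return post.userId === userId;
  });
}

console.log(postsForUser(1).length); // 2: Alice has two posts
```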
&lt;p&gt;We will have other instances, such as user “likes” (liking a page or group, for example), that require a many-to-many relationship, because a user can like many pages or groups, and each page or group can be liked by many users.&lt;/p&gt;
&lt;p&gt;One of the primary objectives of good database design is to remove redundant data and increase the integrity of that data. Often, people combine tables that have one-to-one relationships. For example, storing a user’s data alongside their address. Let’s see why:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2808/0*JVxQ1FSGDaEupI9Z.&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The example on the left combines the users table with the address for that person. On the right we have separated it into different tables. But which approach is optimal?&lt;/p&gt;
&lt;p&gt;Well, as with most things, &lt;strong&gt;it depends&lt;/strong&gt;. When I was building this particular application, the address was an optional field, therefore it was most appropriate to store these locations in another table. Furthermore, it was under future consideration to allow for many addresses for a single user. Therefore, in my case, storing them in the same table would restrict me to a 1-to-1 relationship only, thereby limiting me on future features.&lt;/p&gt;
&lt;p&gt;Performance may be a factor here as the more “relationship” tables you have, the more joins you need to make, thereby slowing query time. This could have repercussions as you scale.&lt;/p&gt;
&lt;h2&gt;Think about what data you will need&lt;/h2&gt;
&lt;p&gt;That leads on to my next point: think through what your application will display. Think about the query to get that data in your head. If, to extract that information, you have to make a tangle of joins and subqueries, then perhaps you need to rethink your design.&lt;/p&gt;
&lt;p&gt;I’m most keenly aware of this principle when thinking about analytics. For example, in this social network application, we want analytics for where the application is used. If we are to do this purely via the address fields we store for users (without any IP location wizardry), then we may choose to store the countries list in a separate table and then link that with the main addresses table. This would enable us to quickly query for a certain country_id and ascertain how many users are registered there. We &lt;em&gt;could&lt;/em&gt; leave the country as a free-text input for the user, but this may lead to misspelled country names and other duplicate data that would produce false statistics.&lt;/p&gt;
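&lt;p&gt;To illustrate the point (with hypothetical table names): because the country is a foreign key into a lookup table rather than free text, counting registrations per country becomes a simple group-by with no risk of spelling drift.&lt;/p&gt;

```javascript
// Hypothetical sketch: addresses normalised against a countries lookup table.
const countries = [
  { id: 1, name: 'United Kingdom' },
  { id: 2, name: 'Germany' },
];

const addresses = [
  { userId: 1, countryId: 1 },
  { userId: 2, countryId: 1 },
  { userId: 3, countryId: 2 },
];

// Counting users per country is a simple group-by over the foreign key;
// no misspelled "Untied Kingdom" rows can sneak into the statistics.
function usersPerCountry() {
  const counts = {};
  addresses.forEach(function (address) {
    const country = countries.find(function (c) {
      return c.id === address.countryId;
    });
    counts[country.name] = (counts[country.name] || 0) + 1;
  });
  return counts;
}

console.log(usersPerCountry()); // { 'United Kingdom': 2, Germany: 1 }
```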
&lt;p&gt;Overall, understand what your application requires and design around that whilst still being flexible for future expansion.&lt;/p&gt;
&lt;h2&gt;Think about the data stored in the columns&lt;/h2&gt;
&lt;p&gt;An underestimated aspect of database design is column data types. Often columns are given simple &lt;strong&gt;VARCHAR&lt;/strong&gt; or &lt;strong&gt;INT&lt;/strong&gt; data types, but this is not always the most performant or memory-efficient choice, and changing the data type after the fact can lead to corrupted data in those columns.&lt;/p&gt;
&lt;p&gt;Become familiar with the different data types; they are almost universal across programming languages and databases, and they will allow you to think at a lower level of abstraction, closer to the data you are handling.&lt;/p&gt;
&lt;p&gt;As mentioned previously, one of the main problems with inefficient data types is that they occupy too much space on disk. For example, let’s say we are storing a flag checking if a user’s post has been deleted or not. Because the only 2 values of this column should be 0 or 1 (the former for undeleted, and the latter for deleted), we do not need the 4 bytes it takes to store a whole integer (which can be any number from -2147483648 to 2147483647). We merely need the &lt;strong&gt;TINYINT&lt;/strong&gt; data type (or &lt;strong&gt;BOOLEAN&lt;/strong&gt; in MySQL), which occupies a mere 1 byte. That’s a whole 4 times smaller! Now 4 times smaller than nothing is still nothing, so this may seem like a needless reduction that will save a fraction of disk space. And in &lt;em&gt;most&lt;/em&gt; cases, you are &lt;strong&gt;probably right&lt;/strong&gt;. But if the service were to scale to hundreds of thousands or even millions of rows, then your boss would be thanking you for saving them a lot of money in drive space by going for the &lt;strong&gt;TINYINT&lt;/strong&gt; option. Take the time to think of the most performant and lean design for your database. Act as if you are designing for Facebook-level scale — it will pay off.&lt;/p&gt;
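&lt;p&gt;To put a rough number on that saving, here is a quick back-of-the-envelope calculation (the row count is purely an illustrative assumption):&lt;/p&gt;

```javascript
// Back-of-the-envelope: disk saved by storing a deleted flag as TINYINT
// (1 byte) instead of INT (4 bytes) across ten million rows.
const rows = 10000000; // illustrative assumption
const intBytes = 4;
const tinyintBytes = 1;

const savedBytes = rows * (intBytes - tinyintBytes);
const savedMegabytes = savedBytes / (1024 * 1024);

console.log(savedMegabytes.toFixed(1) + ' MB saved per flag column'); // 28.6 MB
```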
&lt;h2&gt;Avoid storing calculated columns&lt;/h2&gt;
&lt;p&gt;A common mistake when designing a database is to store redundant information that can be calculated by your application. Using my example, we want to display the user’s age on their profile. It may be possible to store the user’s age directly in a column, but if we have their birth date then we can simply calculate their age on the fly. If speed becomes an issue and the calculation cannot be done on request, then it would be best to cache the value in your application or in a key-value database such as Redis before finally resorting to storing the calculated value. Other examples where storing calculated information might be suggested are exam grade averages or the number of orders a person has placed.&lt;/p&gt;
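&lt;p&gt;As a minimal sketch, calculating the age on the fly might look like this (passing “today” in as an argument is a choice made here to keep the function testable):&lt;/p&gt;

```javascript
// Minimal sketch: derive age on the fly from a stored birth date instead of
// persisting an "age" column that goes stale every year.
function ageFromBirthDate(birthDate, today) {
  const years = today.getFullYear() - birthDate.getFullYear();
  // Encode month and day as a single comparable number, e.g. 15 June is 615.
  const todayMD = (today.getMonth() + 1) * 100 + today.getDate();
  const birthMD = (birthDate.getMonth() + 1) * 100 + birthDate.getDate();
  // A positive difference means the birthday is still ahead this year.
  const birthdayPending = Math.sign(birthMD - todayMD) === 1;
  return birthdayPending ? years - 1 : years;
}

// Someone born 15 June 1990 is 27 on 14 February 2018 (months are 0-indexed).
console.log(ageFromBirthDate(new Date(1990, 5, 15), new Date(2018, 1, 14))); // 27
```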
&lt;h2&gt;What other tips do you have for database design?&lt;/h2&gt;
&lt;p&gt;I tried to target the “&lt;a href=&quot;https://en.wikipedia.org/wiki/Pareto_principle&quot;&gt;Pareto problems&lt;/a&gt;” here — meaning that, in my view, these 20% of tips will solve 80% of basic database design problems. That being said, there is a &lt;strong&gt;lot&lt;/strong&gt; more to learn, and if you’re interested check out &lt;a href=&quot;https://en.wikipedia.org/wiki/Database_normalization&quot;&gt;database normalization rules (NF)&lt;/a&gt; once you have fully taken in the information from this article.&lt;/p&gt;
&lt;p&gt;👋 I am available for hire as a freelance application consultant/developer. Contact me at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; if you would like to discuss any projects you have in mind.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Attend Your First Programming Meetup</title>
    <link href="/attending-meetups/"/>
    <updated>2018-02-06T00:00:00Z</updated>
    <id>/attending-meetups/</id>
    <content type="html">&lt;p&gt;Attending your first programming meetup can leave you a little apprehensive. I felt the same! So, I thought it may be useful to break down my first meetup experience and how you can start attending meetups yourself.&lt;/p&gt;
&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;First meetup &lt;a href=&quot;https://twitter.com/hashtag/nottsjs?src=hash&amp;amp;ref_src=twsrc%5Etfw&quot;&gt;#nottsjs&lt;/a&gt; &lt;a href=&quot;https://t.co/2IQ0vImjxW&quot;&gt;pic.twitter.com/2IQ0vImjxW&lt;/a&gt;&lt;/p&gt;&amp;mdash; Josh Ghent (@joshghent) &lt;a href=&quot;https://twitter.com/joshghent/status/874685729179357184?ref_src=twsrc%5Etfw&quot;&gt;June 13, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;h2&gt;Why did I want to go to a programming meetup in the first place?&lt;/h2&gt;
&lt;p&gt;The primary reason was &lt;strong&gt;the people&lt;/strong&gt;. I think it’s always a good idea to network with people who you could potentially be working with and learn from them. There is always something you can learn from someone, regardless of their ability — and I embrace this principle heavily at these meetups by attempting to talk to as many people as possible.&lt;/p&gt;
&lt;p&gt;The most fascinating aspect, in my view, is hearing about how companies are working on a structural level and the technologies they are utilising. For example, I spoke with someone who worked at a bank and was rewriting their entire stack using Node. I found this intriguing because Node is a relatively new technology. I quizzed him on how they were handling the key concerns for banking software: scale and security. Since I myself had never used Node in such a critical environment, it was fascinating to gain such insights.&lt;/p&gt;
&lt;p&gt;Talks are also a great way to learn about new technologies. My first meetup had a talk entitled “Machine Learning for Muggles”, which walked through how to use Azure’s machine learning capabilities and gave a broad overview of how machine learning works. Machine learning being on the very bleeding edge of technology, it was amazing to get industry-leading teaching that far outweighed any content I had found online. Furthermore, having an expert on the subject deliver the talk allows you to ask questions that may be challenging to find answers to online.&lt;/p&gt;
&lt;p&gt;The third and final reason, is that any good meetup will have free pizza! 🍕&lt;/p&gt;
&lt;h2&gt;How did you find your meetup?&lt;/h2&gt;
&lt;p&gt;There is an easy answer to this: &lt;a href=&quot;https://www.meetup.com/&quot;&gt;meetup.com&lt;/a&gt;. I looked up all programming meetups in the area and found one that looked active and had a history of talks I was intrigued by. I signed up for a few, and I would recommend any “first time meetup” folk do the same — spread yourself out, you never know what you may like. Despite being a PHP programmer at my present job, I ended up attending a JavaScript meetup! Don’t limit yourself to a meetup targeted at a language you currently use. Take a look at the talks and see if they interest you. As I say, my first meetup was the Nottingham JavaScript group, but the talk itself was about machine learning, barely related to JavaScript.&lt;/p&gt;
&lt;p&gt;Generally, talks will be geared around a certain technology, let’s say Docker for example — with tips targeted at that meetup’s language. As an example, a talk about Docker at a PHP-based meetup may be titled “Setting up a PHP development environment in Docker”. Even if you don’t know PHP (or whatever language the meetup aims at), you’ll find the talks valuable, and you’ll find lots of people who, like you, don’t use that particular language.&lt;/p&gt;
&lt;h2&gt;What should I do at a meetup?&lt;/h2&gt;
&lt;p&gt;You can, of course, go just for the talk, but that’s only half of the experience. I would highly recommend (after grabbing some pizza, of course) simply approaching people and introducing yourself. Programmers are generally a shy bunch, but ultimately we all share a common interest, so there will be plenty to talk about.&lt;/p&gt;
&lt;p&gt;Initially I found that everyone seemed to be in their own little circle talking amongst one another; if that is the case, just go up to them and say “&lt;em&gt;hi, how ya doing? I’m Josh&lt;/em&gt;” (obviously substitute your name for mine but you get the idea). Ask about people’s jobs and what they are doing there, what exciting technologies they are using, and if they found the talk interesting. Almost every programmer you will talk to will have some kind of side project they are working on — talk about that! That can lead to some of the most exciting discussion as usually people are experimenting with cutting edge technology that they would not be able to use day-to-day.&lt;/p&gt;
&lt;p&gt;Be thinking all the while about what questions you can ask them based on what they are saying; it shows you are listening to them and are genuinely engaged in the conversation.&lt;/p&gt;
&lt;p&gt;Other talking points may include asking about the company they work for: how big it is, where they are based and so on.&lt;/p&gt;
&lt;p&gt;When wrapping up a conversation, don’t make excuses about going to the bathroom (unless you actually need to); simply say “&lt;em&gt;I’m going to introduce myself to some other people now but I’d love to continue this conversation, perhaps I can take your email and we can talk&lt;/em&gt;”. Since this is work related, people may get a little cagey about handing over their phone number, so opt for business-related avenues of communication: email, Twitter and LinkedIn.&lt;/p&gt;
&lt;p&gt;All in all, don’t be scared. These are your people, just as shy and nerdy as you are. So put yourself out there and see who you meet. There are lots of interesting people out there! If all else fails, see if you can convince a coworker or friend to come along.&lt;/p&gt;
&lt;p&gt;👋 I am available for hire as a freelance web and application developer. Contact me at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; if you would like to discuss any projects you have in mind.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Bulletproof Node — Security Best Practises</title>
    <link href="/bulletproof-node/"/>
    <updated>2018-01-23T22:12:03Z</updated>
    <id>/bulletproof-node/</id>
    <content type="html">&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;../../assets/images/node.jpeg&quot; /&gt;
	&lt;em&gt;Make your Node app like this guy&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;System breaches are now commonplace. &lt;a href=&quot;https://www.iotforall.com/5-worst-iot-hacking-vulnerabilities/&quot;&gt;Stories of IoT devices being compromised&lt;/a&gt;, &lt;a href=&quot;http://www.bbc.co.uk/news/business-41575188&quot;&gt;an entire country’s credit history leaking online&lt;/a&gt;, as well as thousands of other systems &lt;a href=&quot;https://www.theverge.com/2013/11/7/5078560/over-150-million-breached-records-from-adobe-hack-surface-online&quot;&gt;compromised&lt;/a&gt;, &lt;a href=&quot;https://www.theguardian.com/technology/2016/dec/14/yahoo-hack-security-of-one-billion-accounts-breached&quot;&gt;hacked&lt;/a&gt;, &lt;a href=&quot;https://en.wikipedia.org/wiki/2012_LinkedIn_hack&quot;&gt;infiltrated&lt;/a&gt; and destroyed.&lt;/p&gt;
&lt;p&gt;Now it may seem from all these stories that &lt;strong&gt;&lt;em&gt;any&lt;/em&gt;&lt;/strong&gt; attempt to improve system security is fighting a losing battle. And in a way, &lt;strong&gt;you’re right&lt;/strong&gt;. But think about it this way: your house (or apartment) is not impenetrable, yet you still have a lock on your door and make sure to secure the premises before you leave. Security measures such as locks, alarms and perhaps even CCTV cameras are preventative — &lt;strong&gt;not guarantees of complete security. Web application security is the same&lt;/strong&gt;: the more barriers we put up, the harder it is for attackers to exploit different &lt;a href=&quot;https://www.techopedia.com/definition/15793/attack-vector&quot;&gt;“vectors”&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is a quick guide on changes you can make to your application right now without large code changes.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Use &lt;a href=&quot;https://snyk.io/&quot;&gt;Snyk&lt;/a&gt; to monitor security vulnerabilities&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Nowadays, modern web applications use many dependencies, and those dependencies in turn use even &lt;strong&gt;&lt;em&gt;more&lt;/em&gt;&lt;/strong&gt; dependencies. &lt;a href=&quot;https://en.wikipedia.org/wiki/Turtles_all_the_way_down&quot;&gt;It’s dependencies all the way down&lt;/a&gt;. Either way, it’s infeasible to know every single dependency and keep up to date with security news. &lt;a href=&quot;https://snyk.io/&quot;&gt;Snyk&lt;/a&gt; is a handy tool that automatically scans your web applications for security vulnerabilities; it supports a wide range of languages, including NodeJS, Python, PHP and Ruby, as well as many others. Additionally, if you have a NodeJS application, &lt;a href=&quot;https://github.com/blog/2470-introducing-security-alerts-on-github&quot;&gt;GitHub now comes with automated, integrated CVE security alerts too.&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Add &lt;a href=&quot;https://helmetjs.github.io/&quot;&gt;Helmet&lt;/a&gt; for all requests run through Express&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;A chain is only as strong as its weakest link, so make sure &lt;strong&gt;all&lt;/strong&gt; API routes are secured. Additionally, make sure that every route you expose is actually used! By reducing the surface area, there is less chance of an exploit being found.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://helmetjs.github.io/&quot;&gt;Helmet is a NodeJS tool&lt;/a&gt; that bolts onto Express and acts as middleware. It takes outgoing responses and adds various headers that help keep them secure.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Keep NodeJS and all dependencies up to date&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Although you don’t want, and often don’t need, to update to the latest major version of NodeJS, it is important to apply any minor version that includes security updates. The same applies to project dependencies. The main pushback on this has always been that you can’t trust &lt;a href=&quot;https://semver.org/&quot;&gt;semver&lt;/a&gt;. I wholly agree, but with a handy tool called &lt;a href=&quot;https://github.com/bahmutov/next-update&quot;&gt;next-update&lt;/a&gt;, you can run your test suite against new dependency versions automatically. This is not a guarantee that new versions of dependencies will work, as that varies with how broad and thorough your tests are; but it does automate a large portion of the work. In the same spirit of automation, you can configure &lt;a href=&quot;https://greenkeeper.io/&quot;&gt;Greenkeeper&lt;/a&gt; to submit a pull request for each new version of a dependency your app uses. Submitting a pull request should flag up any problems, as it runs your test suite.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Monitor for multiple invalid requests, and any other potentially malicious traffic&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Your routes could be as secure as &lt;a href=&quot;https://en.wikipedia.org/wiki/Fort_Knox&quot;&gt;Fort Knox&lt;/a&gt;, but attackers could still potentially bring down your site by DDoSing it or brute-forcing login forms. You can configure your site to log out to &lt;a href=&quot;https://papertrailapp.com/&quot;&gt;Papertrail&lt;/a&gt; or &lt;a href=&quot;https://www.elastic.co/products/logstash&quot;&gt;Logstash&lt;/a&gt;, which can then notify you directly (via SMS or email, for example) whenever a certain type of log appears (I recommend having a “malicious traffic” category).&lt;/p&gt;
&lt;p&gt;Pair this with running your server with &lt;a href=&quot;https://github.com/foreverjs/forever&quot;&gt;foreverjs&lt;/a&gt;, which will automatically restart the server if it crashes or times out.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This is, in my opinion, the most important aspect of them all. By implementing monitoring of your applications usage, you can potentially pick out malicious activity. Here are a few recommendations of what you can monitor:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Multiple failed login attempts for both the application and the server itself (FTP, SSH etc.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Logins from a new IP address — many services send automated emails to the user if this event occurs. They can then click through and report malicious activity themselves.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Attempt to access application resources directly (e.g., environment variable files)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Changes to user details (email, password, etc.) — this covers the case where someone has access to the person’s computer and wants to hijack the account.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Attempts to log in with breached credentials — a common attack (“credential stuffing”) is to take details from other breached services and try them elsewhere, because most people reuse the same password across services. This ties in with multiple failed login attempts but adds a new angle on what a potential attacker is trying to do.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Attempts at SQL injection or XSS — if you see a particular user attempting these sorts of attacks, most likely no action will be necessary, as your app should be secure and the likelihood is that they are just messing about. Nonetheless, it may be worth keeping track of these users and their IP addresses as a sort of “black book”.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
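&lt;p&gt;The first bullet above can be sketched in a few lines of Node. This is a minimal illustration with an in-memory counter and a hypothetical alert; a real system would persist the counts and route the alert to a logging service:&lt;/p&gt;

```javascript
// Minimal sketch: count failed logins per IP and flag once a threshold is hit.
// The threshold and the console alert are illustrative assumptions only.
const THRESHOLD = 5;
const failedAttempts = {};
const flagged = [];

function recordFailedLogin(ip) {
  failedAttempts[ip] = (failedAttempts[ip] || 0) + 1;
  // Fire exactly once, at the moment the threshold is reached.
  if (failedAttempts[ip] === THRESHOLD) {
    flagged.push(ip);
    console.log('ALERT: repeated failed logins from ' + ip);
  }
}

// Simulate six failed attempts from the same address.
for (let i = 0; i !== 6; i += 1) {
  recordFailedLogin('203.0.113.9');
}
console.log(flagged); // the offending IP has been flagged once
```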
&lt;div class=&quot;image&quot;&gt;
	&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*TOb464uqspF5k7dG81YyNg.gif&quot; /&gt;
	&lt;em&gt;Me talking to my API routes&lt;/em&gt;
&lt;/div&gt;
&lt;p&gt;You may have noticed the general theme here — &lt;strong&gt;automation&lt;/strong&gt;. I had a plethora of other tips for this article that I cut, as &lt;strong&gt;a)&lt;/strong&gt; you can find them in articles elsewhere and &lt;strong&gt;b)&lt;/strong&gt; data is the only way you will be able to find weak points. A chain is only as strong as its weakest link. For example, perhaps your application (targeted at a less-than-tech-savvy audience who don’t use high-entropy pass-phrases with a password manager) has a password policy so strict that many people end up writing their passwords on post-its and sticking them on their desks. This may lead to someone spotting a password and using it. Without data and monitoring, you would never see that a user’s account was accessed from a new IP. The point is, there is no “one-size-fits-all” solution to security. Take a look at how your app is being used and prioritize the security methods that help those use cases first.&lt;/p&gt;
&lt;p&gt;And that’s a wrap. &lt;strong&gt;Let me know which tip you found most useful or implemented yourself!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;👋 I am available for hire as a freelance web and application developer. Contact me at &lt;a href=&quot;mailto:hola@joshghent.com&quot;&gt;hola@joshghent.com&lt;/a&gt; if you would like to discuss any projects you have in mind.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How to Learn a Programming Language in Record Time</title>
    <link href="/how-to-learn-programming/"/>
    <updated>2017-06-28T22:12:03Z</updated>
    <id>/how-to-learn-programming/</id>
    <content type="html">&lt;p&gt;&lt;img src=&quot;../../assets/images/1_8CZLKCJ926_bhBSmSJj2ww.png&quot; alt=&quot;image&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: This article is aimed primarily at beginners who perhaps know a single language but are looking to start learning another.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When picking up a new programming language your first port of call might be the &lt;a href=&quot;https://developer.mozilla.org/bm/docs/Web/JavaScript&quot;&gt;doc&lt;/a&gt;&lt;a href=&quot;http://php.net/docs.php&quot;&gt;ument&lt;/a&gt;&lt;a href=&quot;http://guides.rubyonrails.org/&quot;&gt;ation&lt;/a&gt;, maybe it’s reading through some code on a project you admire, or perhaps you learn most effectively &lt;a href=&quot;https://github.com/karan/Projects&quot;&gt;by building&lt;/a&gt;. Whatever the case, we can apply the &lt;a href=&quot;https://en.wikipedia.org/wiki/Pareto_principle&quot;&gt;Pareto principle&lt;/a&gt; to &lt;strong&gt;learn 80% of the language from 20% of its features&lt;/strong&gt;. If you’re coming from a background where you know &lt;a href=&quot;https://en.wikipedia.org/wiki/Creational_pattern&quot;&gt;design patterns&lt;/a&gt; and common programming features (&lt;a href=&quot;https://en.wikipedia.org/wiki/Control_flow&quot;&gt;control flow&lt;/a&gt;, loops &lt;em&gt;et cetera&lt;/em&gt;) then this is more than possible.&lt;/p&gt;
&lt;p&gt;When I initially thought of this idea I didn’t think it would be possible to boil down a language to such a degree. But then again, when was the last time you used &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/clz32&quot;&gt;clz32&lt;/a&gt; or &lt;a href=&quot;http://php.net/manual/en/function.bzflush.php&quot;&gt;bzflush&lt;/a&gt;? Programming languages have grown over time to implement features that, in day-to-day development, you mostly won’t need. Learning a new programming language can therefore seem a daunting prospect — but it need not be.&lt;/p&gt;
&lt;p&gt;I applied this exact pattern when learning Java, and it worked relatively well. There were things I didn’t learn this way, such as the exact patterns of inheritance, but at a very basic level I could hold my own — and that’s the purpose. As you dive deeper into your new programming language of choice, you will get to know the nuances, why and how it solves specific problems, and what best practice is. This will, at the very least, give you a good grounding in a language in an efficient manner.&lt;/p&gt;
&lt;p&gt;Here’s my list of things to prioritise so you can pick up a new language in record time:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Variable creation&lt;/strong&gt; — if it’s a strongly or statically typed language, then this extends to how to declare variables of different types (integer, string, object, array). If the language has the feature then we can learn how to create a constant too.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Loop ’n’ number of times&lt;/strong&gt; — In Javascript this would be achieved by — &lt;code&gt;for(var i = 0; i &amp;lt; n; i++) {}&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Loop over a key:value store&lt;/strong&gt; — &lt;em&gt;Key:value&lt;/em&gt; stores are called Objects in JavaScript; in other languages they are called Hashes (Ruby) or Dictionaries (Python). Nonetheless, they are all much the same, and usually there is a particular method for iterating over them, because they are referenced by “keys” and not index numbers (as with arrays).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Referencing items in an array&lt;/strong&gt; — In JavaScript you can reference &lt;code&gt;arr[1]&lt;/code&gt; for the second item of an array. In addition to basic referencing, there may be special functions, like &lt;code&gt;end()&lt;/code&gt; in PHP, to get the final element of an array.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Functions&lt;/strong&gt; — How to create them, with or without arguments.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add to an array&lt;/strong&gt; — How can we add an element to an array?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Remove from an array&lt;/strong&gt; — Likewise, how can we remove a particular item (of index ’n’) from an array?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Class creation and constructors&lt;/strong&gt; — This is where I find languages differ wildly in particulars of the syntax. PHP, for example, has a special __construct function that you must use to construct the class.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;StdOut method&lt;/strong&gt; — In JavaScript this is &lt;code&gt;console.log&lt;/code&gt;; in PHP it’s &lt;code&gt;print&lt;/code&gt;. This is probably my most-used method when debugging.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Comparison operators&lt;/strong&gt; — How do you check if a variable is false or true? How do you compare a larger number against a smaller number?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Length of a string&lt;/strong&gt; — A must have for any language. I find myself using this all the time but a common use case is checking whether we should truncate a string before displaying it to a user.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Length of an array&lt;/strong&gt; — Crucial when working with loops as 99% of the time, you will be iterating over an array for however long the array is.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Public and Private methods&lt;/strong&gt; — Most languages (especially those with classes) have this, and it is essential when you want to disallow access to functions from outside the class.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Try…catch blocks&lt;/strong&gt; — I only ever use these when integrating Stripe payments but they can be handy other times, perhaps when you are testing for a bug and want to capture it for your bug tracking software.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Returning from functions&lt;/strong&gt; — Not all languages use &lt;code&gt;return&lt;/code&gt;! (&lt;a href=&quot;https://rustbyexample.com/fn.html&quot;&gt;See Rust&lt;/a&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
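&lt;p&gt;To make the checklist concrete, here is most of it answered for JavaScript (the language used for the examples in this article); the answers for your target language will differ, so treat this as a reference template:&lt;/p&gt;

```javascript
// The checklist above, answered for JavaScript.

// 1. Variable creation (and constants)
let counter = 0;
const GREETING = 'hello';

// 2. Loop n times
for (let i = 0; i !== 3; i += 1) {
  counter += 1;
}

// 3. Loop over a key:value store
const scores = { alice: 3, bob: 5 };
for (const name of Object.keys(scores)) {
  console.log(name, scores[name]);
}

// 4, 6 and 7. Referencing, adding to and removing from an array
const fruits = ['apple', 'banana'];
console.log(fruits[1]); // 'banana'
fruits.push('cherry');  // add to the end
fruits.splice(0, 1);    // remove the item at index 0

// 5 and 15. Functions, arguments and return values
function double(n) {
  return n * 2;
}

// 8 and 13. Classes, constructors and private members
// (a hash prefix marks a field as private in modern JavaScript)
class User {
  #password;
  constructor(name, password) {
    this.name = name;
    this.#password = password;
  }
  checkPassword(guess) {
    return guess === this.#password;
  }
}

// 9. StdOut
console.log(double(4)); // 8

// 10. Comparison operators
console.log(2 === 2, Math.max(2, 9) === 9); // true true

// 11 and 12. Length of a string and of an array
console.log(GREETING.length, fruits.length); // 5 2

// 14. Try...catch blocks
try {
  JSON.parse('not json');
} catch (err) {
  console.log('caught a parse error');
}

const user = new User('Josh', 'secret');
console.log(user.checkPassword('secret')); // true
```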
&lt;p&gt;And that’s it! This will by no means teach you a language &lt;em&gt;per se&lt;/em&gt; but it will provide a good base level to become familiar with the syntax.&lt;/p&gt;
&lt;p&gt;This learning pattern relies on knowing concepts and design patterns; it goes to show how your learning can transfer from one language to another!&lt;/p&gt;
&lt;p&gt;Remember the core concepts and features of a language are what matters.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Additional Reading/Resources&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://tim.blog/2009/01/20/learning-language/&quot;&gt;https://tim.blog/2009/01/20/learning-language/&lt;/a&gt; — The original article this is inspired by.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://learnxinyminutes.com/&quot;&gt;https://learnxinyminutes.com/&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>What programming language should I learn: or Why it doesn’t matter</title>
    <link href="/what-lang-to-learn/"/>
    <updated>2017-06-23T22:12:03Z</updated>
    <id>/what-lang-to-learn/</id>
    <content type="html">&lt;p&gt;Searching &lt;a href=&quot;https://www.google.com/search?q=what%20programming%20language%20should%20I%20learn&quot;&gt;‘What programming language should I learn’&lt;/a&gt; will return you over 7 million results. The first one says Javascript, the next PHP, another extols the virtues of Java and statically typed languages. What even is a statically typed language you might ask? I just want to make apps for my phone!&lt;/p&gt;
&lt;p&gt;This is the struggle of beginners learning to program — and even myself, when I was looking for a second language to learn. There is so much noise and no clear path. Now I’m not about to add to all those articles and recommend you &lt;em&gt;another&lt;/em&gt; language.&lt;/p&gt;
&lt;p&gt;I’m instead going to promote a new idea — &lt;strong&gt;IT DOESN’T MATTER.&lt;/strong&gt; No really! Hear me out.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;There is no one programming language to rule them all.&lt;/strong&gt; Many people vouch that Javascript can be used anywhere and is therefore the best: React Native for native applications on both mobile and desktop, Node JS on the server, and regular plain ol’ Javascript baked right into the browser. It’s a compelling argument, and I think Javascript is a great language for anyone to learn. Having said that, let’s look at the history of programming languages and frameworks.&lt;/p&gt;
&lt;p&gt;According to the &lt;a href=&quot;https://www.tiobe.com/tiobe-index/&quot;&gt;TIOBE index&lt;/a&gt;, Java is firmly positioned in the number 1 spot. Javascript is on track to eclipse it, but it wasn’t that long ago that the coolest language on the market was &lt;a href=&quot;https://en.wikipedia.org/wiki/Perl&quot;&gt;Perl&lt;/a&gt; or even &lt;a href=&quot;https://en.wikipedia.org/wiki/Fortran&quot;&gt;Fortran&lt;/a&gt;. My point is that even though a language seems popular right now, technology is a vicious landscape, ever changing as preferences ebb and flow and newer technologies are favoured over older ones.&lt;/p&gt;
&lt;p&gt;Even within the Javascript ecosystem specifically, only a year ago Angular was the thing. Then React came along, and everyone jumped ship. Angular 2 was later released with little interest — Angular 1’s usage continued to dwindle. It can flip in an instant. A number of people were rightfully aggravated that they had spent all this time learning a framework that was now as good as dead.&lt;/p&gt;
&lt;p&gt;My point here is this…&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Languages and frameworks, come and go. Concepts are here to stay.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let’s take &lt;a href=&quot;https://facebook.github.io/react/docs/thinking-in-react.html&quot;&gt;React&lt;/a&gt; as an example. Although React itself may not be here to stay, I am of the opinion that the concepts and underlying principles behind it definitely are. The way React promotes a hierarchical component structure and flows data through that structure, and the way it re-renders only the parts of components whose state has changed, is amazing.&lt;/p&gt;
&lt;p&gt;Now whether you choose to learn React or not is your choice, but the concepts behind it will soon be applied in other languages. And at that point, if you do understand React, you will be able to adopt that way of thinking when building your application in that language’s version of React.&lt;/p&gt;
&lt;p&gt;In the wildly successful game &lt;a href=&quot;https://en.wikipedia.org/wiki/Portal_(video_game)&quot;&gt;Portal&lt;/a&gt;, the narrator comments “&lt;a href=&quot;https://www.youtube.com/watch?v=TluRVBhmf8w&quot;&gt;Now you’re thinking with Portals&lt;/a&gt;”. The idea is that you’re now thinking with the technology you have: in this case the Portal gun, which the player uses to create two linked ends of a portal.&lt;/p&gt;
&lt;p&gt;We can apply this saying to the technology we work with. When you finally crack the concept of how data can travel from your models through to your views, you can say “Now I’m thinking with &lt;a href=&quot;https://medium.freecodecamp.com/model-view-controller-mvc-explained-through-ordering-drinks-at-the-bar-efcba6255053&quot;&gt;MVC&lt;/a&gt;”.&lt;/p&gt;
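&lt;p&gt;A toy sketch of that model-to-view flow, with purely illustrative names (&lt;code&gt;model&lt;/code&gt;, &lt;code&gt;view&lt;/code&gt;, &lt;code&gt;controller&lt;/code&gt; here are assumptions, not from any framework):&lt;/p&gt;

```javascript
// Toy MVC: the controller updates the model, then asks the view
// to render the model's current state.
const model = {
  drinks: [],
  addDrink(name) { this.drinks.push(name); },
};

const view = {
  render(drinks) { return "Order: " + drinks.join(", "); },
};

const controller = {
  orderDrink(name) {
    model.addDrink(name);              // update state in the model
    return view.render(model.drinks);  // re-render from the model
  },
};

console.log(controller.orderDrink("Cola")); // Order: Cola
console.log(controller.orderDrink("Tea"));  // Order: Cola, Tea
```

&lt;p&gt;The point isn’t the code itself but the direction of travel: user input goes to the controller, the controller mutates the model, and the view only ever reads from the model.&lt;/p&gt;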
&lt;p&gt;&lt;img src=&quot;https://cdn-images-1.medium.com/max/2000/1*6O2fiWWJy7Ban_oF5AJejg.png&quot; alt=&quot;You’re not a REAL programmer if you’re not doing this #obvi (source: https://xkcd.com/378/)&quot; /&gt;&lt;em&gt;You’re not a REAL programmer if you’re not doing this #obvi (source: &lt;a href=&quot;https://xkcd.com/378/&quot;&gt;https://xkcd.com/378/&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Trying to get into this space and become a developer can be scary, but fear not! Programming is about the problem-solving skills you develop, not about what shiny syntax you happen to use. Many people end up using a language at work that they had never used before (myself included). When learning to program, you will naturally acquire these skills. It’s something I was told when I first started learning to code. Back then I thought that to get a job you had to be a NASA-level genius who had made 10 apps and half a dozen websites (hint: you don’t need that, though it certainly helps!), and I didn’t believe what they had to say. I simply couldn’t fathom this indescribable skill you just ‘acquired’ as if by osmosis. Well, it is true. Trust me.&lt;/p&gt;
&lt;p&gt;My piece of specific advice would be to &lt;strong&gt;start building things&lt;/strong&gt; as soon as possible. They don’t have to be amazing, they don’t have to be original. They just have to be something you’d like to try out. &lt;a href=&quot;https://www.freecodecamp.com/&quot;&gt;Freecodecamp&lt;/a&gt; does this excellently in their curriculum. One of my first projects was a calculator. I came to that project because I know how calculators work and I know how add, subtract, divide and multiply work. From my learning of programming, I knew that to keep track of the buttons the user had pushed I could store them in a variable. I knew I would need to hook up the buttons to trigger a function that would do various things. I had almost no idea of the inner workings of the application, but &lt;strong&gt;overarchingly&lt;/strong&gt; I knew the concepts of how it should work. Apply this same principle to the things you build and you’ll go far. It’s also great because you get to solve &lt;strong&gt;actual&lt;/strong&gt; problems and do real debugging, just as you would on a job. 90% of programming is debugging what you’ve written, so get used to it! Further, doing a project yourself will give you an understanding of what the code does. Many tutorials adopt a “type this, do that” mentality, with little explanation of what the thing does or how it works. Take the time to read documentation and truly understand what you are writing. &lt;strong&gt;If you don’t right away, that’s ok&lt;/strong&gt;. Take some time, chew on it, and come back to it later. That’s part of learning these concepts.&lt;/p&gt;
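&lt;p&gt;The calculator idea above could be sketched like this (the names are illustrative, not from any tutorial, and the logic is deliberately naive):&lt;/p&gt;

```javascript
// Button presses are stored in a variable, and a function is
// triggered to compute the result, just as described above.
let pressed = []; // keeps track of the buttons the user has pushed

function press(token) {
  pressed.push(token);
}

function equals() {
  // Very naive: expects exactly [number, operator, number]
  const a = Number(pressed[0]);
  const op = pressed[1];
  const b = Number(pressed[2]);
  pressed = []; // clear for the next calculation
  if (op === "+") return a + b;
  if (op === "-") return a - b;
  if (op === "*") return a * b;
  if (op === "/") return a / b;
  throw new Error("Unknown operator: " + op);
}

press("6"); press("*"); press("7");
console.log(equals()); // 42
```

&lt;p&gt;It only handles two operands and no precedence, but that’s the point: you can get a working first version from the concepts alone, then improve it as you learn.&lt;/p&gt;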
&lt;p&gt;As a bonus to all this project building, at the end, you get something cool you can show your Mum, Dad, Cat or beloved family member/pet.&lt;/p&gt;
&lt;p&gt;Keep learning. Don’t get discouraged. And have fun, make some cool stuff, programming is like digital wizardry, use your powers!&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Additional reading&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.reddit.com/r/learnprogramming/wiki/faq#wiki_which_programming_language_should_i_start_with.3F&quot;&gt;https://www.reddit.com/r/learnprogramming/wiki/faq#wiki_which_programming_language_should_i_start_with.3F&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
</feed>
