<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Shane Osbourne on Medium]]></title>
        <description><![CDATA[Stories by Shane Osbourne on Medium]]></description>
        <link>https://medium.com/@shakyShane?source=rss-6daf98a660a4------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*7lnv-2je7h3oHPxg5y_Zxg.jpeg</url>
            <title>Stories by Shane Osbourne on Medium</title>
            <link>https://medium.com/@shakyShane?source=rss-6daf98a660a4------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 02:12:21 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@shakyShane/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Finally, the TypeScript + Redux/Hooks/Events blog you were looking for.]]></title>
            <link>https://medium.com/hackernoon/finally-the-typescript-redux-hooks-events-blog-you-were-looking-for-c4663d823b01?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/c4663d823b01</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[redux]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Wed, 08 May 2019 23:31:05 GMT</pubDate>
            <atom:updated>2021-06-13T19:51:52.715Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Just want the code? <a href="https://codesandbox.io/s/jpj18xoo85">https://codesandbox.io/s/jpj18xoo85</a></blockquote><p>We write event-driven applications, even if we don’t realise it at first; any application of decent size will tend towards being powered by events. It’s inevitable. Whether it’s a large-scale Redux architecture, React hooks like ‘useReducer’, NgRx in the Angular world, web sockets, custom-rolled systems etc, etc, etc. The list could go on forever.</p><p>Whilst an event-based architecture can lead to your programs having amazing properties in the form of ultra de-coupling &amp; composition, it also comes with its own hidden cost — events by their nature are abstract and therefore we don’t typically get the best level of help from our IDEs when it comes to creating, consuming &amp; filtering streams of them.</p><h3>So, What’s the problem?</h3><p>To be productive in a large codebase that has any part of it powered by events, you need `type` information. There’s nothing more infuriating than listening for an event by using a string identifier, only to find out 3 hours later that there was a typo in the event name, or that an expected payload from an event didn’t match what you can actually see in your debugger! 
😖</p><p>We need to start thinking of ways, especially in these dynamic languages, of mapping out every single event that our application can produce so that we can handle them safely.</p><p>This is crucially important: you absolutely need to be able to verify that events are created correctly, with valid identifiers, and if they take one — a valid payload.</p><h3>Prior Art, events as individual interfaces.</h3><p>I’ve tried a number of different ways, and I’ve seen a million others, but the most common amongst them seems to be a variation on the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/932/1*yT3AMUQe8GQ-NX8VUZW0Xw.png" /></figure><p>This approach lends itself nicely to being tucked away in its own file, a nice little ‘events.ts’ if you will. I agree it seems elegant at first, but since the types are written separately from the code that uses/consumes them, it requires a human to keep it up to date and make sure typos don’t exist.</p><p>Also, how would you use these when dealing with an event stream? For example, if you had a switch statement, how could you know inside each ‘case’ statement which payload you’re dealing with? With individual interfaces like this, the only option is to export a separate type that’s a union of all possible event shapes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/614/1*Y3xX0ubMMf8Ga1IA7WIz5g.png" /><figcaption>Such niceness… so tempting…</figcaption></figure><p>But now you’ve just created another thing to maintain! GRRR! And when there are 20 events and you’re scrolling down the file to add your next event to this list, you might just be ready to throw your monitor out of the window. 
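</p><p>Since the screenshots above aren’t copyable, here’s roughly what that prior-art approach looks like in code. This is a hypothetical reconstruction; the interface and event names are mine, not taken from the post.</p>

```typescript
// One interface per event, each carrying its identifier and payload shape.
interface SignInEvent {
  type: "SignIn";
  payload: { username: string; password: string };
}

interface SignOutEvent {
  type: "SignOut";
}

// The separately-maintained union of every event shape.
type AppEvent = SignInEvent | SignOutEvent;

function describeEvent(evt: AppEvent): string {
  switch (evt.type) {
    case "SignIn":
      // narrowed: payload is available here
      return "signing in " + evt.payload.username;
    case "SignOut":
      return "signing out";
  }
}
```

<p>Every new event means touching both an interface and the union, which is exactly the maintenance burden being described here.</p><p>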
📺</p><p>There’s also so much duplication — the fact that `SignIn` &amp; `SignOut` are duplicated in some fashion across the interface naming scheme and the required ‘type’ field seems excessive — surely there has to be a better way?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/932/1*-nYXXHnfoumFZtBJX91tsA.png" /><figcaption>Such cry :(</figcaption></figure><p>But it gets worse: because these are <em>just</em> ‘types’, we need either to create functions for each as a type-safe way of raising these events, or to annotate objects at the point of emitting the event. The following shows the former; I’m omitting the latter for my own sanity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qSsgxOnxnTJ8holzbM0STA.png" /></figure><p>This example is good in the type-safety sense. You have a guarantee on the function parameters, and since the function has a return-type annotation the type-checker will ensure the ‘payload’ here not only has the correct members but also that the parameters were valid to be used in that context (e.g. a string where a string is expected).</p><p>But it’s far from good: we’ve now also duplicated the parameter names!</p><p>We have to remove some of this noise. Type-safety is nice, but this is coming at the cost of developer sanity! Also I believe examples like this are a prime reason that certain seasoned JS devs are so vocal about being anti-types — who can blame them?</p><h3>A Step Forward, inferring the interface from the return type.</h3><p>After looking at the code above, you’d be correct in thinking that you don’t even need the interface at all…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/966/1*bxaibc4zdqptfRg2lJnjug.png" /><figcaption>2.5 billion times better. 
But still not good</figcaption></figure><p>Thanks to conditional types being added to TypeScript back in 2.8, we now have useful helpers like ReturnType&lt;T&gt;, which does exactly what it says on the tin — it infers the return type of a function if it can.</p><p>If you’re interested in type systems (you’re not? How have you gotten this far?) then it’s worth a quick look at ReturnType&lt;T&gt;.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CBVo912BNa2R2rRMBMF8rg.png" /></figure><p>It’s just a ternary operator, but for types. How neat! With a bit of practice you get good at being able to visually ‘parse’ this type of code :)</p><p>So, this <em>is</em> better than having a separate interface for the event + a function to create it. Now the type is driven by the implementation, not the other way around — this means you can change the function parameters &amp; the return payload, and everywhere you’ve used that type will get the new type-checking information.</p><p>A major downside, though, is that you’d still need to maintain a list of them and union them together to get the full feature set of type-safe events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/966/1*71H_Ij5QZwi6G5RbjHTx0g.png" /><figcaption>Getting better, but you still need to union the inferred types together.</figcaption></figure><p>The keen-eyed amongst you may notice I’ve just used string literals in these screenshots and that the type narrowing may not actually work correctly — this is for simplicity &amp; to prevent bringing in enums/consts in these examples that I’m actually encouraging you not to use. 😂</p><h3>On the right track, getting Typescript to do more, with less.</h3><p>So I’ve played around with numerous variations of the previous example in different-sized applications, and overall it was OK. 
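</p><p>As a sketch of this inference-driven approach (function names are illustrative, and I’m using ‘as const’ to keep the literal types, which the screenshots gloss over):</p>

```typescript
// The implementation is now the single source of truth for each event shape.
const signIn = (username: string, password: string) => ({
  type: "SignIn" as const,
  payload: { username, password },
});

const signOut = () => ({ type: "SignOut" as const });

// ReturnType<T> infers each shape from its creator...
// ...but you still have to union the inferred types by hand.
type AppEvent = ReturnType<typeof signIn> | ReturnType<typeof signOut>;

const evt: AppEvent = signIn("ada", "hunter2");
```

<p>Change a creator’s parameters or payload and the union updates automatically; the manual part that remains is the union itself.</p><p>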
What pushed me to demand more, though, is the work I’ve done in other languages like Rust &amp; Elm.</p><p>It hit me like a ton of bricks one day: the answer for a better way to type events in Typescript lies in its ability to model ADTs (algebraic data types).</p><p>Let’s look at how you’d provide type information for the SignIn + SignOut events in Elm &amp; how you’d create a union of them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*m03-TCTQxQOiLCRnOn5YjQ.png" /><figcaption>Erm, is that it?</figcaption></figure><p>Seriously, how cool is this? It’s just single identifiers SignIn &amp; SignOut (they are type constructors in Elm) and some associated data — in the case of SignIn it’s 2 strings to represent the username and password (there are other, more descriptive ways, but this is not an Elm tutorial 😝).</p><p>In Rust it’s a similar story: there’s a little more syntax noise, but it’s ultimately the same thing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/882/1*K0GDLZ-hpnktv0amcLaYpg.png" /><figcaption>Rust enums are the biz</figcaption></figure><p>It’s perfection. It’s just identifiers, and optional associated data. No more, no less. I needed to have this in TypeScript.</p><h3>Challenge accepted</h3><p>The goal is clear: a way to model event identifiers + optional payloads, with as small a footprint as possible. 
It should be like code-golfing, but with the goal being greater type-safety &amp; developer productivity.</p><p>Looking at the Rust implementation (since it has curly braces, it’s a fairer comparison 🤣), I wondered just how close I could get to it.</p><p>The first few attempts included using an actual JS object and having the ‘event creator’ function as the value for each identifier.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9eT-UYtdZv_f7-JYCwr-UQ.png" /><figcaption>decided it was an anti-pattern after all</figcaption></figure><p>I was convinced for the longest time that this was a great approach. The idea would be that you’d define this object, and then call another function with it that would augment the return values to ensure they fulfilled the type + payload contract (so that they returned {type: …, payload: …}).</p><p>I put a considerable amount of effort into making this approach work with safety guarantees at all points, but I had another realisation when reviewing some of the large codebases I’ve worked on that were full of events — over 90% of the functions I was using just forwarded on the params to the payload — they served no other purpose. In fact, in the few cases I found myself doing ‘work’ in those functions, it could easily be extracted out.</p><h4>Moving to a types-only approach</h4><p>Since Typescript supports object literal types, we can get even closer to that beautiful world of Rust &amp; Elm with the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/966/1*GwK9PeuzNU0TAtfJAb8GOA.png" /><figcaption>Pretty damn close to the Rust equivalent</figcaption></figure><p>If you squint, you’ll see we’ve reduced everything down to just the essentials — the identifier &amp; the associated data. 
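</p><p>In text form, the definition in the screenshot is along these lines (a sketch; string keys for now, since the enum version comes in the next step):</p>

```typescript
// A types-only event map: identifier on the left, associated data on the right.
type Messages = {
  SignIn: { username: string; password: string };
  SignOut: undefined;
};

// Indexed access recovers the payload type for any identifier:
const creds: Messages["SignIn"] = { username: "ada", password: "hunter2" };
```

<p>No functions, no separate interfaces: just a single type describing every identifier and its data.</p><p>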
Now it did take me an embarrassingly long time to come to this solution, mainly because I was completely hindered by this notion that the events needed a ‘creation’ function.</p><p>Even though this does absolutely nothing yet, I knew this was an amazing idea.</p><h4>Making it usable</h4><p>Of course, that’s just a type definition right now; it’s not exactly ready to be deployed in a giant Redux application now, is it?</p><p>But just wait: with a few fancy Typescript features and a single JS function, you’re about to witness the holy grail of working with events in Typescript.</p><h4>#1 Switch to an enum for the identifiers</h4><p>This is a no-brainer, and was only omitted from previous examples to reduce concepts — but since we want a <strong>single </strong>way to create and consume events, we’ll want to switch out the string keys for an enum — this is also a great way to namespace events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1016/1*Ep5IEqHPguX1-vgW4mFI3g.png" /><figcaption>Starting to look like something usable</figcaption></figure><p>Using string enums here is the perfect solution to our ‘identifier’ problem, since we can use the enum members as the discriminant (common property), but it also gives us the flexibility to implement pseudo-namespaces in the string values if we wish (or suffixes/prefixes etc).</p><p>There’s also a weirdly nice knock-on effect of this approach in that it will ensure the string values don’t collide (which can happen in enums), although only when the types on the ‘rhs’ differ, so it can’t be relied upon fully 😀</p><p>The whole point here, though, is that throughout the application, we would only ever use the enum to reference this event — no extra functions or separate interfaces in sight.</p><h4>#2 A way to create type-safe events</h4><p>So we have the identifier &amp; associated data part nailed; now we just need a single wrapper function so that we can apply some Typescript magic. 
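</p><p>As a hedged sketch of that wrapper (the real helper in the linked codesandbox may differ in its details, e.g. how it handles events with no payload):</p>

```typescript
// Namespaced identifiers via a string enum.
enum User {
  SignIn = "User/SignIn",
  SignOut = "User/SignOut",
  Token = "User/Token",
}

// The types-only event map, now keyed by enum members.
type Messages = {
  [User.SignIn]: { username: string; password: string };
  [User.SignOut]: undefined;
  [User.Token]: string;
};

// One generic creator for every event: enum member first, data second.
function msg<K extends keyof Messages>(type: K, payload: Messages[K]) {
  return { type, payload };
}

const signIn = msg(User.SignIn, { username: "ada", password: "hunter2" });
const token = msg(User.Token, "abc123");
```

<p>The payload’s type is looked up from the enum member, so an invalid pairing fails to compile; there are no per-event function names to dream up.</p><p>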
We are essentially going to create a function that takes the enum member as the first param, and the associated data as the second. Beautifully simple.</p><p>Let’s look back at a Rust example for a second. This snippet will create the SignIn variant with its required data (I’ve skipped some Rust specifics here for brevity). Notice how the creation of this is almost identical to its definition.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*--mNbGucu2sQihSBtZJf5g.png" /><figcaption>this.is.the.goal</figcaption></figure><p>Can we do this in Typescript? With full type safety? This is what I came up with… 😂</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z-J9EV8-sYwGRhBPXpq7bQ.png" /><figcaption>Hidden away in your types folder</figcaption></figure><p>Erm, now you’re thinking, where are the beautiful or simple bits… Well, this is actually just taking a handful of the newer Typescript features in order to create this Elm/Rust clone. This code would be written once and then hidden away in a file that you never touch — the point is to enable the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aWc7WRBTeCwy1TAiDb1VnA.png" /><figcaption>elm/rust/ts 💜</figcaption></figure><p>The last 2 lines show how to create events in this system; the beauty comes from the fact that you don’t need to dream up function names for each event, but instead you just use the identifier and its data.</p><p>Msg here will only accept events from the User namespace (given as the first param, like a type constructor), and the second param would be type-safe based on that enum member’s value in the type object.</p><h4>#3 Benefits of Discriminated Unions/Algebraic data types</h4><p>All of this type stuff is completely useless if you cannot also narrow the type of incoming events based on the common property. 
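</p><p>Jumping ahead slightly, the narrowing we’re after looks like this (a sketch reusing the enum-keyed map from earlier; the mapped type that makes it work is what the rest of this section explains):</p>

```typescript
enum User {
  SignIn = "User/SignIn",
  Token = "User/Token",
}

type Messages = {
  [User.SignIn]: { username: string; password: string };
  [User.Token]: string;
};

// Map {identifier: value} to {identifier: {type, payload}},
// then index with keyof to union all the pairs together.
type Actions = {
  [K in keyof Messages]: { type: K; payload: Messages[K] };
}[keyof Messages];

function reducer(action: Actions): string {
  switch (action.type) {
    case User.SignIn:
      // payload narrowed to {username, password}
      return action.payload.username;
    case User.Token:
      // payload narrowed to string
      return action.payload;
  }
}
```

<p>Inside each case the compiler knows exactly which payload you’re holding.</p><p>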
In our case we’re using enum members as the ‘type’ property, so this should be simple enough — we want full type-safety in case blocks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EwMYcJgye9Co0TDQb9gb3w.png" /></figure><p>In this example we need to know that when a User.Token event is raised its payload is a string. Likewise we need to know when payloads are absent, or complex types, etc.</p><p>This is where Typescript’s discriminated unions come into play — since we already have type safety at event creation, if we can nail getting type guards like this working, we’ll be onto something huge.</p><p>In the screenshot above, we wouldn’t get any type help. If you remember back to the type definitions for our events, we didn’t include the type or payload property — but also this reducer function doesn’t have a type annotation for the action — we’ll fix that now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y5ukaDjfMaok3sIck8vA9Q.png" /><figcaption>total type nerdery</figcaption></figure><p>The above, whilst hard to read for those unfamiliar, is just known as a Mapped Type — this is a way of creating a new type that is based on a transformation of the original type. In our case we had {identifier: value}, so this mapped type will do nothing more than create a new type for each key:</p><pre>// before<br>{identifier: value}</pre><pre>// after<br>{identifier: {type: identifier, payload: value}}</pre><p>Why do we do this? 
Because it allows us to create a union type of all possible enum + value combinations, so that when given to something like a switch statement the expected type narrowing occurs.</p><p>You’d add the following single line, where Messages here is just that object type that has the enum + associated value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZHO_FGwfsEM1uA6AlGAJoQ.png" /><figcaption>Type magic — indexing into an object literal type with a union of its own keys</figcaption></figure><p>The square brackets give it away: you’re indexing into a type, but since keyof produces a union of all the keys, you get a union back. You are essentially indexing an object literal type with a union of its own keys. I know, it’s a bit of a mind bender, but it’s awesome.</p><p>The result is that Actions above now has exactly the correct type that enables the super-powerful type narrowing — if you hovered over it in your editor, the type would look like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/564/1*KOvDQXDlz-wjxqnb9bjAQQ.png" /></figure><p>And that’s pretty much it — if you can produce this type, you’re going to get awesome type safety throughout your application.</p><h3>Conclusion</h3><p>This has incredible type-safety, a nice API that matches languages with type constructors, has no duplication, uses a single identifier for event definition/creation/consumption and is basically just all-round better for all of your event-based needs.</p><p>If there’s any interest in a deep-dive into the more advanced Typescript stuff, please request it and I may just write a post for type nerds like me :)</p><p>Check out the examples here <a href="https://codesandbox.io/embed/jpj18xoo85">https://codesandbox.io/embed/jpj18xoo85</a> — it should make things a lot clearer :)</p><hr><p>Like this? 
If you did, and you find yourself doing any front-end work, perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, Typescript, RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c4663d823b01" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hackernoon/finally-the-typescript-redux-hooks-events-blog-you-were-looking-for-c4663d823b01">Finally, the TypeScript + Redux/Hooks/Events blog you were looking for.</a> was originally published in <a href="https://medium.com/hackernoon">HackerNoon.com</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Redux & Flow-type — getting the maximum benefit from the fewest key strokes]]></title>
            <link>https://medium.com/hackernoon/redux-flow-type-getting-the-maximum-benefit-from-the-fewest-key-strokes-5c006c54ec87?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/5c006c54ec87</guid>
            <category><![CDATA[react-native]]></category>
            <category><![CDATA[flowtype]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[redux]]></category>
            <category><![CDATA[typescript]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Mon, 11 Sep 2017 21:03:03 GMT</pubDate>
            <atom:updated>2017-09-12T16:03:20.599Z</atom:updated>
            <content:encoded><![CDATA[<p>Being a frequent Typescript user I’m fully on board with the benefits of type systems. Especially in the arena of UI development with React &amp; React-Native, I don’t think I could ever imagine going back to a time where your props &amp; state updates are not verified by the compiler as you type…</p><p>Typescript is great (no complaints from me), but one thing that really caught my eye recently about Flow was this ability to alias &amp; re-use the inferred type of a function’s return value.</p><p>That means, given a function that has its arguments annotated, and performs no side-effects, Flow will be able to work out the ‘shape’ or the ‘type’ of a return value, and allow you to use that elsewhere.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YLPbvr5iZpOKcfoM67YkoA.png" /><figcaption>With only a single annotation, we’ve provided a lot of information for Flow to use when verifying our programs.</figcaption></figure><h3>How to re-use the inferred return type of a function</h3><p>I know it’s a mouthful, but bear with me. Here’s a snippet I found whilst digging through about 2000 GitHub issue threads. 
I’ve seen others, but this seems to work well, so…</p><pre><em>type </em>_ExtractReturn&lt;B, F: (...args: <em>any</em>[]) =&gt; B&gt; = B;<br><em>export type </em><strong>ExtractReturn</strong>&lt;F&gt; = _ExtractReturn&lt;*, F&gt;;</pre><p>We’re exporting a generic type called <strong>ExtractReturn</strong> that accepts another type (in this case, the ‘typeof’ a function), and extracts the inferred return type — to use it with the function above, you’d use typeof setName like this:</p><pre>import {<strong>setName</strong>} from &#39;./actions&#39;;<br>import {ExtractReturn} from &#39;./types&#39;;</pre><pre>type <strong>ReturnValue</strong> = ExtractReturn&lt;typeof <strong>setName</strong>&gt;</pre><p>Don’t worry if you don’t understand exactly what’s going on here; just copy/paste the snippet and move on to the next bit — I’m being deliberately quite hand-wavey here as I want to focus on the point of this blog (and the fact I’m no type-system expert myself)! But anyway, onto the good stuff!</p><h3>How this can help in a Redux Project</h3><p>The most common approach I’ve seen to providing type-safety in a Redux application is to have the shape of each action defined separately from your actual function implementations. So you’d typically define all of the action objects your application is ‘allowed’ to use, and then you’d re-use those types in action-creators and reducers. 
It normally looks something like this:</p><p><strong>Before:</strong></p><pre><em>export const </em><strong>SET_NAME</strong> = &#39;SET_NAME&#39;;<br><em>export const </em><strong>SET_AGE</strong> = &#39;SET_AGE&#39;;<br><br><em>export type </em><strong>SetName</strong> = {type: &#39;SET_NAME&#39;, payload: <strong>string</strong>}<br><em>export type </em><strong>SetAge</strong> = {type: &#39;SET_AGE&#39;, payload: <strong>number</strong>}<br><br><em>const </em>setName = (name: <strong>string</strong>): <strong>SetName</strong> =&gt; {<br>    <em>return </em>{type: <strong>SET_NAME</strong>, payload: name}<br>}</pre><pre><em>const </em>setAge = (age: number): <strong>SetAge</strong> =&gt; {<br>    <em>return </em>{type: <strong>SET_AGE</strong>, payload: age}<br>}<br><br><em>export type </em><strong>Actions</strong> = <strong>SetName</strong> | <strong>SetAge</strong>;</pre><p>Of course this example is tiny with just 2 action creators and I’ve condensed it down into 1 imaginary file for the sake of space, but it’s enough to explain the concepts here.</p><p>Defining each action ‘shape’ as its own type, as seen on lines 3 and 4, means that you can be very specific about an object/action that a function should return. Given that you’ve had to declare them all up front like this, it gives very strong guarantees that you’re not going to be dispatching unexpected object shapes into your Redux store. Awesome.</p><p>Crucially though, this technique also allows you to create what’s known as a ‘tagged union’ as seen on the last line. 
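</p><p>To make the payoff concrete, here’s a small reducer using that union (TypeScript syntax for illustration; the Flow version reads almost identically):</p>

```typescript
type SetName = { type: "SET_NAME"; payload: string };
type SetAge = { type: "SET_AGE"; payload: number };
type Actions = SetName | SetAge;

type State = { name: string; age: number };

function reducer(state: State, action: Actions): State {
  switch (action.type) {
    case "SET_NAME":
      // payload narrowed to string here
      return { ...state, name: action.payload };
    case "SET_AGE":
      // payload narrowed to number here
      return { ...state, age: action.payload };
    default:
      return state;
  }
}
```

<p>The checker knows the payload type inside each case purely from the ‘type’ field.</p><p>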
It’s known as a ‘discriminated union’ in Typescript land, and is basically a way of narrowing type-checking based on the value of a particular field — in our case it’s the ‘type’ field from the object.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sr-I3gYWV2OWyMKWG0cr-A.png" /><figcaption>Flow calls it a ‘tagged union’, whilst in TS land it’s known as a ‘discriminated union’ — very powerful.</figcaption></figure><p>So with this technique of defining all actions as separate types, we can both enforce the return-types of action-creators whilst also narrowing our type-checking in reducers based on which action was fired. All sounds great, right?</p><p>It is great actually — I’ve used this technique before without any real <em>issues,</em> but I still find it interesting to experiment with these type systems in an attempt to extract as much value from them with as little typing as possible.</p><p>So looking back at the previous code, my personal opinion is that having to come up with new names for each and every Action, duplicating the action name, and having to annotate the return value of every action-creator are all things that can be avoided.</p><p>So with that in mind, if we just make use of the ExtractReturn helper from before, the previous example could be reduced to the following:</p><p><strong>After:</strong></p><pre><em>export const </em><strong>SET_NAME</strong> = &#39;SET_NAME&#39;;<br><em>export const </em><strong>SET_AGE</strong> = &#39;SET_AGE&#39;;</pre><pre><em>const </em>setName = (name: string) =&gt; {<br>    <em>return </em>{type: <strong>SET_NAME</strong>, payload: name}<br>}<br><br><em>const </em>setAge = (age: number) =&gt; {<br>    <em>return </em>{type: <strong>SET_AGE</strong>, payload: age}<br>}</pre><pre><em>export type </em><strong>Actions</strong> =<br>    ExtractReturn&lt;<em>typeof </em><strong>setName</strong>&gt; |<br>    ExtractReturn&lt;<em>typeof </em><strong>setAge</strong>&gt;</pre><p>The fact 
that we’ve reduced the duplication of the action names themselves is great, but we’ve also removed the need to come up with separate names for our actions — we all know just how hard naming things can be, right?!</p><p>The exported type Actions on the last line will also still generate the ‘tagged union’ as described previously, meaning we’re not losing out on any of the (awesome) narrowed type-checking that we were getting before — yay!</p><p>Perhaps most interesting, though, is that we’ve now inverted the source of truth to be our actual code, rather than the separate types. This means any changes to the return values in any action-creators are automatically propagated throughout the system — there’s no need to update any separate types!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3LiSAQ6A5OesxOOzUHcWmQ.png" /><figcaption>More type-safety per keystroke, and less duplication!</figcaption></figure><p>This is a trade-off that some people will not be comfortable with, and I would totally understand that — I can see the benefit of having all action object types explicitly defined separately from the code (as I said, I’ve used this approach before without issue).</p><p>But, having worked on large typed projects (both in Typescript &amp; Flow) I’ve ended up with a few opinions about this type of stuff:</p><ul><li>Types are extremely important, and the larger the project, the larger the importance &amp; the larger the payoff.</li><li>If you’re going to use a type-checker, go all in. It doesn’t work nearly as well as you might imagine if only a subset of a project is typed.</li><li>Type annotations are noisy though, and ‘noise’ in code is always something we should aim to reduce.</li><li>If you can use the type-checker to actually create types for you dynamically based on your code, do it! 
This blog is an example of that and I hope we see much more of this.</li></ul><h4>One final note</h4><p>When I use phrases such as ‘less typing’ and ‘fewer key strokes’ I’m NOT talking about creating ‘shorter’ code in terms of actual characters — I’m talking directly about having fewer *things* to create, import, export, name &amp; maintain. This blog post has shown a way of producing very similar type-safety with dramatically less code (when you consider a real code-base).</p><p>Hope you enjoyed this experiment!</p><h4>Links:</h4><ul><li>Gist — <a href="https://gist.github.com/anonymous/9ffb548a38b6c24114d4bad360bfe8f8">https://gist.github.com/anonymous/9ffb548a38b6c24114d4bad360bfe8f8</a></li><li><a href="https://flow.org/try/#0PTAEAEDMBsHsHcBQJQAkCm0AO6BOoAXWUdADwNwEMBjA0ASwDtI9d0ATUNggV10cIBPHKFiRQlUJB6Na9WI0QFh6UAH0AouSq0ASul78APACEANKABiALlAAKAHRPKuAOYBnW5UaCA2gF0ASlAAXgA+UBMIkMiAbiUVUC0KGgJ9Q0YjS2j1ZJ00gz5MgCoLbPjkMABVd1VqBXcCbwJ3CVbGWFwAW0poRHrGRtAAQQBxDVDQAHIxjSn4gaGAOWGAWQmYqZX1+cRK0UZoQVBGdA5CYixcWAA3enZVZRxWyE6JNx4u9EYWhgFU+SMAC01DYlCIuHc+24RSEIng9Gg0FAACNVEwWLg2OxENJZARAaBagRhq50HZKGTbIxPmjcMEAN6IUAsriFfigBlw9C2WYWLCUQRwSjsLxk0AAX0QUtxMjkCiJBiWlC+dkYKp5J1peEZzNZMI5XKemu2Gn5guFopOGsl0r2xtAAGUmgRVDEuZTNTSunSLOqvrZGrgmK5bfsAMJg10SUAyQkOgZNJghiRIwgAC1UBoExtaYn2YlAglgfFA1HTsFq-3lwNB6HBnXcDlAABVM2z3DxoHRqN5UaoeLVOJRWpImq4yTiUHGFQR0+DU3B4C8lxdrViEBnVAKhbARdzUSPzgqpsapqArrAcLhlAkRMMa60QnqWXlUukikZjYXiaT0BEAB8XySbR33ZTJv3EYllS+MI9jxGs2XYHhqDwOxGnBTVnUwyYPSpUAAAY-Q1WwpnPCULABBReUfQJbGw6MmVZIkEQIcsKRrBxjV1ZjmN7WoRnGWwmN40SUEzNgLBgTcAGsOmXCROJ3S0GFHLUfTwYDRJZbNOS07TmKcBwMNdMx9IMllPS8JSLT3dgzIs1kpQM5ztP41RTWE8zePEvB0Ck1c5IQUcbN3fd6DUoMQ285jdJExzDKcEz-Ji7T-U1KjGAcZS7K42AABkEDwcMjzsQIHIS1zRKq3iHkgSguwIWxdOS+JmKlCUgA">Flow Try</a></li></ul><p>Like this? 
If you did then perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, Typescript (soon Flow?), RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5c006c54ec87" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hackernoon/redux-flow-type-getting-the-maximum-benefit-from-the-fewest-key-strokes-5c006c54ec87">Redux &amp; Flow-type — getting the maximum benefit from the fewest key strokes</a> was originally published in <a href="https://medium.com/hackernoon">HackerNoon.com</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Create React App + Docker — multi-stage build example. Let’s talk about artifacts!]]></title>
            <link>https://medium.com/@shakyShane/lets-talk-about-docker-artifacts-27454560384f?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/27454560384f</guid>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Mon, 07 Aug 2017 08:54:09 GMT</pubDate>
            <atom:updated>2017-08-07T13:24:23.366Z</atom:updated>
            <content:encoded><![CDATA[<blockquote><a href="https://en.wikipedia.org/wiki/Artifact_(software_development)">Artifact (software development)</a>, one of many kinds of tangible byproducts produced during the development of software</blockquote><p>Have you ever been part of a project that required some kind of ‘build’ or ‘compilation’ step? Perhaps a project where you manually edit your ‘source files’ first, then you run some command to produce the final ‘production-ready’ assets?</p><blockquote>Just want to see the code? You can skip straight to the <a href="https://github.com/shakyShane/cra-docker">repo</a> I set up for this post</blockquote><p><strong>Terminology: </strong>Every time you see the word ‘artifact’ in this post, I’m referring to something like an executable, directory or anything else that is produced as part of your workflow — it’s typically the thing you want to run on a server &amp; it’s almost always excluded from version-control.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7KBGbk5IiX5JPoL1rmNcjw.png" /><figcaption>Example from create-react-app — node_modules contains about 22676 files here, none of which are required to serve the application in production.</figcaption></figure><h3><strong>Build tools have extremely different requirements from those of your production app.</strong></h3><p>This is a big problem. 
Whether it’s a simple blog generated from Markdown files, a fully-fledged SPA written in something like Angular or React, or any other type of project that uses tooling — the dependencies required to ‘build’ your project — that is, take your source files and produce the actual thing you want to throw on a server — are vastly different from those needed to run it in production.</p><p>Just take a look inside your `node_modules` directory (or equivalent in your chosen lang/env) — I bet all (or most) of those packages will <em>only</em> be required for the build process — and if that’s true, they have no place on a production server, ever!</p><h3><strong>Current solutions.</strong></h3><p>Of course, no one right now would admit to building their projects on the same server that runs their App, but we all know it happens… too often.</p><p>To get around this problem, the more responsible projects out there will tend to do one of the following:</p><ul><li>1) Have developers run the ‘build’ command locally, producing an artifact that is then ‘uploaded’ somewhere, or added to a docker image etc…</li><li>2) Have a separate CI service sitting in between Github &amp; the production server — the artifact can be produced there instead and then deployed to a server…</li><li>3) Run the build process on the same server as that which will run the app in production…</li></ul><p>But now, for those using Docker, there’s a better way: a technique that allows you to consolidate your build + production setup into a single Dockerfile.
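</p><p>In skeleton form, such a Dockerfile contains two stages: one that builds the artifact, and one that only receives it (this is a condensed preview of the exact file built up step by step in this post):</p>

```dockerfile
# Stage 1: the build environment; everything needed to produce the artifact
FROM node:7.10 as build-deps
WORKDIR /usr/src/app
COPY . ./
RUN yarn && yarn build

# Stage 2: the production environment; only the artifact is copied across
FROM nginx:1.12-alpine
COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html
```

<p>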
This has huge implications for the future as it allows things such as auto-builds/deployments often without the need for <em>yet another </em>3rd party service</p><h3>Docker multi-stage builds</h3><p>No need for jargon here, the concept is so simple it’s brilliant.</p><ul><li>1) Create the environment needed for your build process</li><li>2) Run that build process to produce your artifact</li><li>3) Create your production environment</li><li>4) Copy the artifact into your production environment</li><li>5) Discard EVERYTHING ELSE from the build environment.</li><li>6) profit?…</li></ul><p>The fact that Docker handles all of this complexity is amazing — now let’s run through a real-world example to fully understand it.</p><p>I’m going to use the popular create-react-app CLI tool in this example, but you can take the concept and apply it to <em>any</em> similar situation.</p><h3>Tutorial using &#39;create-react-app&#39;</h3><h4>Step 1: Install create-react-app</h4><pre>yarn global add create-react-app</pre><h4>Step 2: Create a new project</h4><pre>create-react-app docker-build</pre><p><strong>Notes:</strong></p><ul><li>After creating a new project, you’ll notice you have a ‘src’ directory containing the files you should edit in development.</li></ul><h4><strong>Step 4: Add build process to Dockerfile</strong></h4><p>We’ll build upon the latest official NodeJS Docker image, which comes with yarn pre-installed.</p><pre><strong>FROM</strong> node:7.10 as build-deps<em><br></em><strong>WORKDIR</strong> /usr/src/app<em><br></em><strong>COPY</strong> package.json yarn.lock ./<em><br></em><strong>RUN</strong> yarn<br><strong>COPY</strong> . ./<br><strong>RUN</strong> yarn build</pre><p><strong>Notes:</strong></p><ul><li>On line 1, we’re using the FROM &lt;image:tag&gt; as &lt;name&gt; format which is new to <strong>Docker 17.05.</strong></li><li>The as <strong>build-deps</strong> part allows us to <em>name</em> this part of the build process. 
That name can then be referred to when configuring the production environment later.</li><li>On lines <strong>4</strong> &amp; <strong>5 </strong>we copy package.json and yarn.lock into the image and then run yarn — this separates the dependency installation from the edits to our actual source files. This allows Docker to cache these steps so that subsequent builds — ones in which we only edit source files and don’t install any new dependencies — will be faster.</li><li>Next on lines <strong>6</strong> &amp; <strong>7 </strong>we copy everything else into the image and then run the build command. This will produce the ‘artifact’ inside of the build directory — just as it would if you were to run this command locally!</li><li>Be careful: COPY . ./ can be quite dangerous as it will copy the entire current directory into the build context, which may be huge! Add a .dockerignore file to combat this; mine would look something like <a href="https://gist.github.com/anonymous/f1b3e2cc530900338a0c38bce5e0e4c1">https://gist.github.com/anonymous/f1b3e2cc530900338a0c38bce5e0e4c1</a></li></ul><h4>Step 5: Add production environment to the SAME Dockerfile</h4><p>This is where things start to get seriously interesting! In the exact same Dockerfile we can add the setup for our production environment, right below the setup for the build process!</p><p>When Docker sees a second <strong>FROM</strong> statement, it will begin an entirely new ‘build stage’ — which includes NOTHING from the first step. That’s right, the whole thing is discarded… kind of. Crucially, it does allow you access to the previous build’s file system. So, this is where everything starts to come together and make sense, because Docker allows you to selectively copy anything you like from the first build step, into the second one!</p><p>This means we can grab hold of the build directory, which is our ‘artifact’, and discard <em>everything </em>else from the first step.
So everything about the base NodeJS Docker image is discarded, along with all the files we don’t choose to copy over into the new build step.</p><p>In this example, with create-react-app , that means we get to wave goodbye to the 22,676 files required to build the project — none of those are needed to serve the application, so we don’t want them lingering around!</p><p>Let’s see it in action</p><pre><strong>FROM</strong> nginx:1.12-alpine<br><strong>COPY</strong> --from=build-deps /usr/src/app/build /usr/share/nginx/html<br><strong>EXPOSE</strong> 80<br><strong>CMD</strong> [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]</pre><p><strong>Notes:</strong></p><ul><li>We’re using one of the official nginx images here to serve our application, but this could be any other type of server — I only chose nginx as it’s popular &amp; I know how to configure it :)</li><li>On line <strong>2 </strong>is the shiny new stuff. The <strong>COPY </strong>statement has been around since Docker first hit the scenes, but so far it’s been limited to copying files from a context (like a host) into an image. 
The new part is the flag --from=build-deps — if you remember back to the first stage, build-deps is the name we gave that stage, and this is how we can refer to it here.</li><li>Again on line 2, we know that create-react-app creates a build directory as an artifact, so we add that path to the working directory and end up with /usr/src/app/build; this is the absolute path of our artifact inside the first stage.</li><li>So, we know how to access the artifact from the first stage; now we just need to copy it into the correct place in our production environment — and because we’re using stock nginx, that directory is /usr/share/nginx/html</li><li>The final 2 lines are just the regular docker commands to expose a port and run the server when a container starts.</li></ul><h4>Step 6: Build the image!</h4><p>Now we get to put it all together — we have both our development build process &amp; production environment specified in a single Dockerfile — it should look something like:</p><pre><strong>FROM</strong> node:7.10 as build-deps<br><strong>WORKDIR</strong> /usr/src/app<br><strong>COPY</strong> package.json yarn.lock ./<br><strong>RUN</strong> yarn<br><strong>COPY</strong> . ./<br><strong>RUN</strong> yarn build<br><br><strong>FROM</strong> nginx:1.12-alpine<br><strong>COPY</strong> --from=build-deps /usr/src/app/build /usr/share/nginx/html<br><strong>EXPOSE</strong> 80<br><strong>CMD</strong> [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]</pre><p>Now we can instruct Docker to create an image from this:</p><pre>docker build . -t shakyshane/cra-docker</pre><p><strong>Notes:</strong></p><ul><li>docker build . instructs Docker to use the current directory as its build context</li><li>-t shakyshane/cra-docker instructs Docker to ‘tag’ this particular build.
In this case I’m naming the image as it would appear on my Docker Hub account, but you can use any tag name you want.</li></ul><h4>Step 7: Run it locally to test it works!</h4><p>After running the previous build command, you can now use the tag name to start a container from this image.</p><pre>docker run -p 8080:80 shakyshane/cra-docker</pre><p><strong>Notes:</strong></p><ul><li>-p 8080:80 allows us to map the port 8080 on our local dev machine to port 80 inside the container — you can omit the first part if you’re happy for Docker to assign a random port for you, eg: -p 80 will result in something like http://localhost:32888 — which will change each time you run it.</li><li>If it all worked well you should now be able to see the following in your browser:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/984/1*UrGPYsEfbok4U1BXrwl7wA.png" /><figcaption>That’s it! A fully Docker-ised create-react-app.</figcaption></figure><h3>Next steps</h3><p>Now that you’ve seen how to build <em>and </em>serve your project with Docker, you can go ahead and take advantage of everything that containers have to offer, some examples are:</p><ul><li>100% consistent builds across any machine that can run Docker</li><li>Fully automated deployments via a service like Docker Cloud (<a href="https://github.com/shakyShane/cra-docker/blob/master/docker-cloud-example.yml">check this example which includes fully-automated SSL certs</a>)</li><li>Run your EXACT production setup locally before deploying</li><li>Combine with other services, eg: for a frontend App you might want to add something like a CouchDB instance — this is a trivial task with Docker.</li><li>etc etc</li></ul><p>Docker is taking over the world, and with every release it’s getting easier for regular devs to utilise its power!</p><h3><strong>Final Notes:</strong></h3><p>The example given here does <em>not </em>include the production-ready configuration for the nginx server — I didn’t want the details of such 
a thing to cloud the content of this post — if there’s demand I can follow up with a post detailing that!</p><p><strong>Links</strong>:</p><p><a href="https://docs.docker.com/engine/userguide/eng-image/multistage-build/">Use multi-stage builds</a></p><p>Like this? If you did, and you find yourself doing any front-end work, perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, TypeScript, RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=27454560384f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[JavaScript Framework Battle: ‘Hello World’ in each Command-line interface]]></title>
            <link>https://medium.com/dailyjs/javascript-framework-battle-hello-world-in-each-cli-cfdba8bf5e4b?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/cfdba8bf5e4b</guid>
            <category><![CDATA[angularjs]]></category>
            <category><![CDATA[preact]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[vuejs]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Sat, 25 Mar 2017 23:15:24 GMT</pubDate>
            <atom:updated>2017-04-08T12:34:07.312Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1mWW6XHjlpAGHiK1fUwmDw.jpeg" /></figure><h3>JavaScript Framework Battle: ‘Hello World’ in each CLI</h3><p>I was just wondering, given that most of the big JavaScript frameworks offer a Command-line interface (CLI) tool nowadays — to automate the creation of new projects and the building of production assets — <strong>how do they actually compare to each other</strong>? I mean surely they must all be hitting the same ‘ballpark’ when it comes to bundle size/render perf, right? <em>Maybe it’s not as close as you might think.</em></p><p>I decided to test this out by installing 6 popular CLI tools — <em>Create React App, Angular CLI, Ember CLI (for both Ember &amp; Glimmer), Vue CLI, Create Inferno App </em>and<em> Create Preact App</em> — globally onto my laptop, and then following the official documentation for each.</p><p>I was only interested in the ‘out of the box’ project generation — so there was literally <strong>zero</strong> application code added by me.
I just ran the relevant command to scaffold a project, then I immediately ran the production build…</p><p>I think this is an interesting test, because although each framework can give you more or fewer features by default, the point here is that the authors of the framework must deem this default scaffold+build process to be <em>‘the best you’re going to get’</em> out of the tool, and that’s what I found fascinating in the results.</p><p>So let’s start in this first post by looking at just two easy metrics: JavaScript bundle size &amp; JS first render time.</p><blockquote>Note: see <a href="https://github.com/shakyShane/arewereadyformobileyet">https://github.com/shakyShane/arewereadyformobileyet</a> for the automated results.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZrJKJqBsksWd-8uKM9OvgA.png" /></figure><h3>JavaScript Bundle Size</h3><p>For each CLI, I ran the ‘as-documented’ command for producing a production-ready build. <del>Then I manually gzipped the output (using default macOS gzip compression settings) to get these results.</del></p><p>Then I spun up a lightweight server pointing at the resulting ‘build’ or ‘dist’ directory with Gzip enabled. Finally, I pumped the URL from this server into <a href="https://developers.google.com/web/tools/lighthouse/">Lighthouse</a> &amp; measured the size of all the JavaScript combined.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kBjmBbFAHoNxcygcWSSD_g.png" /><figcaption>Total amount of JavaScript in KB (as served by the browser)</figcaption></figure><p><strong>Notes</strong></p><ul><li>Preact (and other micro libs, which I left out to avoid cluttering the chart) obviously comes out on top, given such libs are simply a thin layer on top of the DOM.
You’ll naturally get more features out of something like Angular &amp; Ember, given they are <strong>full</strong> frameworks — but I’ve left a micro lib in here because, with PWAs becoming so popular/necessary, it’s often the case that Apps are built by composing many of these micro libs together, and given that the starting point is 8kb, you’d need to add a LOT of them before your code is anywhere near the size of what Ember/Angular are shipping by default.</li><li>Interesting how close React &amp; Vue are nowadays — although I believe Vue can be trimmed further depending on which options you select in the CLI scaffold stage (I just went with whatever was presented as a ‘default’).</li><li>Angular seems to have shed about 50kb since I last tried a few months ago, so they are certainly making progress.</li><li>Ember is off the chart (size-wise) and cannot be considered suitable for mobile at this point (to be fair, I don’t think they claim to be).</li><li>Glimmer (by the Ember team) is the newest, and coming in at 34kb is impressive to say the least. Great work.</li></ul><h3>First JavaScript Render</h3><p>Next, I throttled the connection speed in the network panel of Chrome Dev Tools <em>(down to regular 3G)</em> and applied the 5x CPU slowdown in the timeline section. Then I hit ‘reload’ with screenshots enabled and scrubbed through until I spotted something appear on screen that was the result of the framework running.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*K8r0QUfv4nWhqZ-QYttfjQ.png" /><figcaption>Totally un-sciency way of measuring how quickly frameworks ‘boot’</figcaption></figure><p>There are of course many other ways of measuring this more accurately — depending on what type of answer you want. Not to mention the difference that may be made via SSR etc, but still, it’s pretty obvious that more JS === more time.
Even if a page were rendered by the server and picked up on the client side by the JS framework — the render times in that chart above would simply become the ‘time to interactive’, as the JS still needs to be downloaded/parsed/executed, whether there’s static HTML there or not. A user ‘seeing something’ on the screen is simply not enough if the core product includes JS interactivity.</p><h4>Interactive in 2 seconds…</h4><p>I’ve often heard the mythical goal is ‘interactive in 2s’ (with the 3G throttle and CPU slowdown), and recently at my day-job I set a target on a very large project of keeping the initial bundle (enough JS for rendering, interactivity and Ajax) to less than 50kb. Safe to say that hitting this target required me to choose a micro view lib like Preact, which, because of its tiny size, allowed me ‘space’ to bring in a few essential libraries whilst still keeping things below 50kb.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZrJKJqBsksWd-8uKM9OvgA.png" /></figure><h3>How I came up with the results</h3><h4>Create React App</h4><p><strong>Commands:</strong></p><ul><li><strong>Install: </strong>yarn global add create-react-app</li><li><strong>Scaffold:</strong> create-react-app cra-test</li><li><strong>Production Build: </strong>yarn run build</li><li><strong>JS Bundle Size: </strong>46kb</li><li><strong>First render</strong>: 770ms</li></ul><h4>Angular 2 CLI</h4><p><strong>Note:</strong> Why is aot not automatically part of the target=production configuration?
It’s extremely easy to miss and I only knew about it because of a Podcast addiction :p</p><p><strong>Commands:</strong></p><ul><li><strong>Install: </strong>yarn global add @angular/cli</li><li><strong>Scaffold:</strong> ng new ng-test</li><li><strong>Production Build: </strong>ng build --aot --target=production</li><li><strong>JS Bundle Size: </strong>92kb (4 files)</li><li><strong>First render</strong>: 1500ms</li></ul><h4>Ember CLI (Ember app)</h4><p><strong>Note:</strong> The Ember scaffold process does NOT render any JS by default, which leads to a false ‘first render’ score — so I followed the instructions and added an outlet as directed. <em>This is the only CLI tool I had to do this for.</em></p><p><strong>Commands:</strong></p><ul><li><strong>Install:</strong></li><li>- yarn global add ember-cli</li><li>- yarn global add bower</li><li><strong>Scaffold:</strong> ember new ember-test</li><li><strong>Production Build: </strong>ember build --target=production</li><li><strong>JS Bundle Size: </strong>198.4kb (4 files)</li><li><strong>First render</strong>: 4200ms</li></ul><h4>Ember CLI (Glimmer app)</h4><p><strong>Note: </strong>There also appears to be a --suppress-sizes flag, but when I used that, the JS bundle it produced appeared to be the same size and it caused an error in the browser.</p><p><strong>Commands:</strong></p><ul><li><strong>Install: </strong>yarn global add ember-cli/ember-cli</li><li><strong>Scaffold:</strong> ember new glimmer-app -b <a href="http://twitter.com/glimmer/blueprint">@glimmer/blueprint</a></li><li><strong>Production Build: </strong>ember build --target=production</li><li><strong>JS Bundle Size: </strong>34kb</li><li><strong>First render</strong>: 1000ms</li></ul><h4>Vue CLI</h4><p><strong>Note:</strong> Vue was by far the most confusing CLI experience — the need to specify a template + lots of questions about tooling — it feels very off-putting to newcomers; where’s the
‘default’?</p><p><strong>Commands:</strong></p><ul><li><strong>Install:</strong> yarn global add vue-cli</li><li><strong>Scaffold:</strong> vue init webpack vue-cli (followed by <strong>lots</strong> of questions)</li><li><strong>Production Build: </strong>npm run build</li><li><strong>JS Bundle Size: </strong>43.48kb (4 files)</li><li><strong>First render</strong>: 840ms</li></ul><h4>Create Preact App</h4><p><strong>Note:</strong> I’m not entirely sure this is well supported, but it’s the first/only CLI tool I could find for a ‘micro’ lib.</p><p><strong>Commands:</strong></p><ul><li><strong>Install:</strong> yarn global add create-preact-app</li><li><strong>Scaffold:</strong> create-preact-app preact-test</li><li><strong>Production Build: </strong>npm run build</li><li><strong>JS Bundle Size: </strong>8.8kb</li><li><strong>First render</strong>: 412ms</li></ul><h4>Create Inferno App</h4><p><strong>Commands:</strong></p><ul><li><strong>Install:</strong> yarn global add create-inferno-app</li><li><strong>Scaffold:</strong> create-inferno-app inferno-test</li><li><strong>Production Build: </strong>yarn run build</li><li><strong>JS Bundle Size: </strong>70kb</li><li><strong>First render</strong>: 737ms</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZrJKJqBsksWd-8uKM9OvgA.png" /></figure><blockquote>Did you enjoy this?
<em>Perhaps you’d enjoy some of my lessons on </em><a href="https://egghead.io/instructors/shane-osbourne"><em>https://egghead.io/instructors/shane-osbourne</em></a><em> — many are free and I cover Vanilla JS, TypeScript, RxJS and more.</em></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cfdba8bf5e4b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/dailyjs/javascript-framework-battle-hello-world-in-each-cli-cfdba8bf5e4b">JavaScript Framework Battle: ‘Hello World’ in each Command-line interface</a> was originally published in <a href="https://medium.com/dailyjs">DailyJS</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Laravel + Docker Part 2 — preparing for production]]></title>
            <link>https://medium.com/@shakyShane/laravel-docker-part-2-preparing-for-production-9c6a024e9797?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/9c6a024e9797</guid>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[nginx]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Fri, 13 Jan 2017 22:17:46 GMT</pubDate>
            <atom:updated>2017-01-13T22:17:46.354Z</atom:updated>
            <content:encoded><![CDATA[<p>This is part 2 of 2 in which we’ll cover how to run a Laravel application <em>in production mode </em>with Docker. Feel free to catch up on <a href="https://medium.com/@shakyShane/laravel-docker-part-1-setup-for-development-e3daaefaf3c">Part 1</a> before diving into this.</p><p>The goal of this second post is to highlight the differences in workflow between the development environment that we use to build our apps and the production environment in which our apps actually run.</p><h3>What’s different in production?</h3><p>This post will only cover a single-server setup. We’ll do it all the manual way so you can see all the bits and pieces that are needed.</p><p><strong>Source files:</strong></p><p>In development, we usually mount the current working directory into a container so that changes can be made to the source code and will be immediately reflected in the running application. In production, however, we’ll copy our source files directly into images so that they can be run independent of the host (ie: on a server somewhere).</p><p><strong>Exposed ports:</strong></p><p>In development we’ll sometimes bind ports on the host to containers to enable easier debugging. This is not required in production as services can communicate through the network docker-compose creates automatically for us.</p><p><strong>Data persistence:</strong></p><p>In development we set up MySql with a named volume to allow the data written to it to persist (even when our containers stop and restart) — but we didn’t handle situations where the app may want to write to disk (in the case of Laravel, this happens with views) — so, in production we’ll need to create a volume which will allow the data that’s written to disk to persist.</p><p><strong>Environment Variables</strong></p><p>In development we just had a .env file that was available to the container along with all the rest of our source code.
In production we’ll load a different one dynamically.</p><p><strong>SSL &amp; Nginx config</strong></p><p>We’ll need to configure our web server to enable secure connections, but we’ll want to load the path to the certs dynamically at run time so that we can swap live ones for self-signed when testing the production setup locally.</p><h3>Step 1 — Prepare the ‘app’ image</h3><p>In the first post we created a PHP-FPM based image that was suitable for running a Laravel application. Then all we needed to do was mount our current directory into that container and the app worked. For production we need to think about it differently though; the steps are:</p><ol><li>Use a Laravel-ready PHP image as a base</li><li>Copy our source files into the image (not including the vendor directory)</li><li>Download &amp; Install composer</li><li>Use composer within the image to install our dependencies</li><li>Change the owner of certain directories that the application needs to write to</li><li>Run php artisan optimize to create the class map needed by the framework.</li></ol><p>So let’s dive in. First up you’ll want to create the app.dockerfile that we’ll use to accomplish all of this.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/72426f74bd94759c753dfb06f768dfc5/href">https://medium.com/media/72426f74bd94759c753dfb06f768dfc5/href</a></iframe><p><strong>Notes:</strong></p><ul><li>On line <strong>1 </strong>we’re<strong> </strong>building from an image I’ve published to the Docker hub that contains just enough to run a Laravel Application.</li><li>Because on lines 3 &amp; 5 we use the COPY command, Docker will check the contents of what’s copied each time we attempt a build. If nothing has changed, it can use a cached version of that layer — but if something does change, even a single byte in any file, the entire cache is discarded and all following commands will execute again.
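</li><li>Putting the six steps listed above together, app.dockerfile ends up along these lines. To be clear, this is a sketch only: the real file is in the embedded gist above, and the base image name below is a placeholder rather than the actual image used in this post.

```dockerfile
# Sketch only: "your-laravel-php-base" is a placeholder base image
FROM your-laravel-php-base

# Copy the dependency manifests first, so the composer install layer stays
# cached until composer.json / composer.lock actually change
COPY composer.json composer.lock /var/www/
WORKDIR /var/www

# Download & install composer, then install dependencies
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
 && composer install --no-scripts

# Copy the rest of the source, fix ownership of the writable directories,
# and build the class map
COPY . /var/www
RUN chown -R www-data:www-data storage \
 && php artisan optimize
```

</li><li>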
In our case that means every time we attempt to build this image, if our dependencies have not changed (because the composer.lock file is identical) then Docker will not execute the RUN command that contains composer install and our builds will be nice and fast!</li></ul><h3>Step 2 — create .dockerignore</h3><p>In the previous snippet we saw the line COPY . /var/www. This alone would include every single file (including hidden directories like .git) resulting in a HUGE image. To combat this we can include a .dockerignore file in the project root.</p><p>It works just like the .gitignore you’re used to, and should include the following as a minimum.</p><pre><em>.git<br></em>.idea<br>.env<br><em>node_modules<br></em>vendor<br><em>storage/framework/cache/**<br>storage/framework/sessions/**<br>storage/framework/views/**</em></pre><p><strong>Notes:</strong></p><ul><li>The last three lines are there to ensure any <em>files</em> written to disk by Laravel in development are not included, but we do need the directory structures to remain.</li></ul><h3>Step 3 — Build the ‘app’ image</h3><p>With the files app.dockerfile and .dockerignore created, we can go ahead and build our custom image.</p><pre>docker build -t shakyshane/laravel-app .</pre><p><strong>Notes:</strong></p><ul><li>This may take a few minutes whilst the dependencies are downloaded by composer.</li><li>-t here instructs Docker to ‘tag’ this image. You can use whatever naming convention you like, but you need to consider where this image is going to end up when deciding.
In this example I’m publishing this image under my username on docker hub (shakyshane) and want the repo to be named laravel-app You can see what I mean here <a href="https://hub.docker.com/r/shakyshane/laravel-app/">https://hub.docker.com/r/shakyshane/laravel-app/</a> &amp; for the more visual amongst us:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/786/1*LsNo3YjNuJxK8-sYabEocA.png" /></figure><p>When the image has built successfully, you can run docker images to verify the image is tagged correctly.</p><h3>Step 4— Add the ‘app’ service to docker-compose.prod.yml</h3><p>Now we can begin to build up the file docker-compose.prod.yml starting with our app service. (as in the first post, these are all under the ‘services’ key, but don’t worry as there will be full snippets later).</p><pre><em>#  The Application<br></em><strong>app:<br>  image: </strong>shakyshane/laravel-app<br>  <strong>volumes:<br>    </strong>- /var/www/storage<br>  <strong>env_file: </strong>&#39;.env.prod&#39;<br>  <strong>environment:<br>    </strong>- &quot;DB_HOST=database&quot;<br>    - &quot;REDIS_HOST=cache&quot;</pre><p><strong>Notes:</strong></p><ul><li>We use <strong>image: </strong>shakyshane/laravel-app here to point to the image we just built in the last step. Remember this has all of our source code inside it, so we do not need to mount any directories from our host.</li><li>We need a way for files written to disk by the application to persist in the environment in which it runs. Using a volume definition in this manner /var/www/storage will cause Docker to create a persistent volume on the host that will survive any container stop/starts.</li><li>We’re going to set up Redis as the session and cache driver, but I’ve yet to find a way to stop Laravel writing view caching to disk, that’s why this single volume is required.</li><li>We use <strong>env_file: </strong>‘.env.prod’ to mount a Laravel environment file into the container. 
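</li><li>As a sketch, the parts of .env.prod that matter most for this setup are the driver settings (the values here are illustrative; the rest of the file starts life as a copy of Laravel’s .env.example, as described later in this post):

```
APP_ENV=production
APP_DEBUG=false
SESSION_DRIVER=redis
CACHE_DRIVER=redis
```

</li><li>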
This part is something that can be improved by using a dedicated secret-handling solution, but I’m not dev-opsy enough to do that, and in cases where we’re just using a single-server setup, I think this approach is ok. (please, any security experts out there, correct/point me in the right direction)</li></ul><h3><strong>Step 5</strong> — Prepare the ‘web’ image</h3><p>So, now that we’re building a custom image that contains a PHP environment, all of our source code and all application dependencies, it’s time to do the same for the web server.</p><p>This one is much simpler. We’re just going to build Nginx, copy a vhost.conf into place (suitable for a Laravel app), and then copy in the entire public directory. This will allow nginx to serve our static files that do not require processing by the application (such as images, css, js etc).</p><p>So go ahead now and create web.dockerfile. It only needs the following:</p><pre>FROM nginx:1.10-alpine<br><br>ADD vhost.conf /etc/nginx/conf.d/default.conf<br><br>COPY public /var/www/public</pre><p><strong>Notes:</strong></p><ul><li>This time we’re using nginx:1.10-alpine — the -alpine bit on the end means this base image was built on a teeny tiny Linux base, which will shave 100s of MB from our final image size.</li><li>The alpine build also supports HTTP2 out of the box, so, bonus.</li></ul><h3>Step 6 — Create the NGINX config</h3><p>Save the following as vhost.conf — that’s the name we used above in web.dockerfile</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8b839629e4da3f0a24057c4694fffed6/href">https://medium.com/media/8b839629e4da3f0a24057c4694fffed6/href</a></iframe><p><strong>Notes:</strong></p><ul><li>Where you see paths like /etc/letsencrypt/live/example.com in that file above, you can change example.com for your own domain if you have one, but either way, go and download some self-signed certs from <a
href="http://www.selfsignedcertificate.com/">http://www.selfsignedcertificate.com/</a> (or generate your own if you know how) and stick them in the following directory certs/live/example.com — it should look like this when done.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/450/1*0Ig-RqenxtkTRSvnzSfgmA.png" /></figure><ul><li>The idea is that when testing locally, you can pass in something like LE_DIR=certs — referring to your local directory, but then on the server you can pass LE_DIR=/etc/letsencrypt — which is where <a href="https://certbot.eff.org/">certbot</a> will dump your certs. This will create security warnings locally due to the self-signed certificates, but you can click past the warning to fully test HTTP/2 + all your links over an HTTPS connection.</li></ul><h3>Step 7 — Build the ‘web’ image</h3><p>With the files web.dockerfile and vhost.conf created, we can go ahead and build our second custom image.</p><pre>docker build -t shakyshane/laravel-web .</pre><p><strong>Notes:</strong></p><ul><li>Just as with the previous image, we tag this one to match the repo name under which it will live later.</li></ul><h3>Step 8 — Add the ‘web’ service to docker-compose.prod.yml</h3><pre><em># The Web Server<br></em><strong>web:<br>  image: </strong>shakyshane/laravel-web<br>  <strong>volumes:<br>    </strong>- &quot;${LE_DIR}:/etc/letsencrypt&quot;<br>  <strong>ports:<br>    </strong>- 80:80<br>    - 443:443</pre><p><strong>Notes:</strong></p><ul><li>We mount the directory containing our certificates using an environment variable. Docker-compose will replace ${LE_DIR} with the value we provide at run time, which will allow us to swap between live/local certs.</li><li>We bind both ports 80 &amp; 443 from host-&gt;container. 
This is so that we can handle both insecure and secure traffic — you can see this in the second server block in the vhost.conf file above.</li></ul><h3>Step 9 — Add MySQL and Redis services to docker-compose.prod.yml</h3><p>Finally, the configuration for MySQL and Redis needs to be placed in our docker-compose.prod.yml file — but for these we do <em>not </em>need to build any custom images.</p><pre><strong>version: </strong>&#39;2&#39;<br><strong>services:<br><br>  </strong><em># The Database<br>  </em><strong>database:<br>    image: </strong>mysql:5.6<br>    <strong>volumes:<br>      </strong>- dbdata:/var/lib/mysql<br>    <strong>environment:<br>      </strong>- &quot;MYSQL_DATABASE=homestead&quot;<br>      - &quot;MYSQL_USER=homestead&quot;<br>      - &quot;MYSQL_PASSWORD=secret&quot;<br>      - &quot;MYSQL_ROOT_PASSWORD=secret&quot;<br>  <br>  <em># redis<br>  </em><strong>cache:<br>    image: </strong>redis:3.0-alpine<br><br><strong>volumes:<br>  dbdata:</strong></pre><p><strong>Notes:</strong></p><ul><li>We use a named volume for MySQL to ensure data will persist on the host, but for Redis we don’t need to do this as the image is configured to handle this for us.</li></ul><h3>Step 10 — Create .env.prod</h3><p>As with any Laravel application, you’re going to need a file containing your app’s secrets, a file that is usually different for each environment. Now, because we want to run this application in ‘production’ mode on our local machine, we can just copy/paste the default Laravel .env.example file and rename it to .env.prod — then when this application ends up on a server somewhere we can create the correct environment file and use that instead.</p><h3><strong>Step 11 — Test in production mode, on your machine!</strong></h3><p>This is where things start to get seriously cool. 
We’ve built our source &amp; dependencies directly into images in a way that allows them to run on any host that has Docker installed, but the best bit is that this includes your local development machine! Say goodbye to testing locally, pushing, and hoping for the best!</p><p>At this point we just have a final command to run, but it’s worth recapping what you should’ve done up to this point.</p><ol><li>Created an app.dockerfile, built an image from it &amp; configured it in docker-compose.prod.yml</li><li>^ same again for web.dockerfile</li><li>Created a .dockerignore file to exclude files and directories from COPY commands</li><li>Created the vhost.conf file with NGINX configuration, created self-signed certs for local testing &amp; added the docker-compose.prod.yml config for it</li><li>Added the Redis &amp; MySQL configurations</li><li>Created a .env.prod configuration file</li></ol><p>Your docker-compose.prod.yml should look something like the following:</p><pre><strong>version: </strong>&#39;2&#39;<br><strong>services:<br><br>  </strong><em>#  The Application<br>  </em><strong>app:<br>    image: </strong>shakyshane/laravel-app<br>    <strong>working_dir: </strong>/var/www<br>    <strong>volumes:<br>      </strong>- /var/www/storage<br>    <strong>env_file: </strong>&#39;.env.prod&#39;<br>    <strong>environment:<br>      </strong>- &quot;DB_HOST=database&quot;<br>      - &quot;REDIS_HOST=cache&quot;<br><br>  <em># The Web Server<br>  </em><strong>web:<br>    image: </strong>shakyshane/laravel-web<br>    <strong>volumes:<br>      </strong>- &quot;${LE_DIR}:/etc/letsencrypt&quot;<br>    <strong>ports:<br>      </strong>- 80:80<br>      - 443:443<br><br>  <em># The Database<br>  </em><strong>database:<br>    image: </strong>mysql:5.6<br>    <strong>volumes:<br>      </strong>- dbdata:/var/lib/mysql<br>    <strong>environment:<br>      </strong>- &quot;MYSQL_DATABASE=homestead&quot;<br>      - &quot;MYSQL_USER=homestead&quot;<br>      - 
&quot;MYSQL_PASSWORD=secret&quot;<br>      - &quot;MYSQL_ROOT_PASSWORD=secret&quot;<br><br>  <em># redis<br>  </em><strong>cache:<br>    image: </strong>redis:3.0-alpine<br><br><strong>volumes:<br>  dbdata:</strong></pre><p>Once that’s in place, all that’s left to do is run a single command…</p><pre>LE_DIR=./certs docker-compose -f docker-compose.prod.yml up</pre><p>And in a few moments your application will be accessible at https://0.0.0.0</p><p>Congratulations, you’re now running your application in a way which is 100% reproducible on a production server — no kidding! Once the 2 images we built are published to a registry like Docker Hub, all you’d need to do is place the docker-compose.prod.yml &amp; an .env file on a server and your application will be running using the exact system you already tested on your local machine.</p><p>This is developer bliss.</p><h3>Next Steps</h3><p>I can only cover so much here, and the main focus of this blog was to fill in the pieces I found to be lacking from other posts in terms of the <em>differences </em>between using Docker in Development and then in Production.</p><p>As I mentioned at the beginning, this setup has served me well in a single-server environment where I did the following steps to get it running in production:</p><ol><li>Created a Digital Ocean Droplet with Docker pre-installed.</li><li><a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-14-04">Followed one of their guides</a> for installing docker-compose (it doesn’t come pre-installed on Linux like it does on Docker for Mac/Windows)</li><li>Set up auto-building in Docker Hub, so that when I push to GitHub the two images are automatically built.</li><li>Used <a href="https://certbot.eff.org/">certbot</a> to generate SSL certs on the server</li><li>Ran the same docker-compose command seen above, but this time swapping the LE_DIR part for the path on the server to the certs. 
eg: LE_DIR=/etc/letsencrypt docker-compose -f docker-compose.prod.yml up</li></ol><p>There’s so much more to Docker, however; I’m just showing you the first steps here so you can grasp some concepts about what it means to containerize your application.</p><p>Here’s the <a href="https://github.com/shakyShane/laravel-docker/tree/blog/part-2">REPO</a>; you can find all the files in there, have fun!</p><h3><strong>Next things to look at:</strong></h3><ul><li><strong>CI:</strong> Lots of the steps in this post can be automated; for example, you may use a CI service to build and publish your images to a registry, along with running tests etc. I would recommend that you get familiar with building/pushing images manually first however, just so that you fully understand the workflow bits and pieces before you move into a completely automated flow.</li><li><strong>Secret &amp; certs management: </strong>Having an entire application along with its running environment neatly packaged into containers just feels like the <em>correct</em> thing to do, but due to my lack of dev ops/sysadmin skills, I honestly don’t know how to remove these 2 items from being mounted at run time. I hear that Docker is currently working on its own solution for secrets management, which will be so cool to have integrated. Container services such as <a href="https://kubernetes.io/">Kubernetes</a> already provide solutions for it out of the box however, and then there are also dedicated tools such as <a href="https://www.vaultproject.io/">Vault</a>… So much to learn… :)</li><li><strong>Swarm mode, scaling etc: </strong>Docker has native support for scaling applications across multiple hosts using simple CLI commands. Very much looking forward to using this in the future. 
I’ve tried using docker-machine to boot up cloud servers on Digital Ocean and I can tell you right now that it’s a mind-blowing experience — especially when you realise that all of your regular docker commands, including things such as docker-compose, work over the network… Crazy cool.</li></ul><p>Like this? If you did, and you find yourself doing any front-end work, perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, Typescript, RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9c6a024e9797" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Laravel + Docker Part 1 — setup for Development]]></title>
            <link>https://medium.com/@shakyShane/laravel-docker-part-1-setup-for-development-e3daaefaf3c?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/e3daaefaf3c</guid>
            <category><![CDATA[docker-compose]]></category>
            <category><![CDATA[laravel-5]]></category>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Mon, 02 Jan 2017 12:26:18 GMT</pubDate>
            <atom:updated>2017-01-06T18:34:53.959Z</atom:updated>
<content:encoded><![CDATA[<p>This is part 1 of 2 in which we’ll cover how to run a Laravel application <em>locally</em> with Docker. Part 2 will then complete the tutorial by showing how to run the same application in Production.</p><p>The goal in this first post is to create a reproducible development environment that is lightweight, fast &amp; does not depend on anything being globally installed on our local machine (other than Docker itself).</p><p>So, we’re talking about achieving the following goals.</p><ul><li><strong>No</strong> MAMP or similar programs</li><li><strong>No</strong> Vagrant or similar VM setups</li><li><strong>No</strong> Globally installed PHP</li><li><strong>No</strong> Globally installed Composer</li></ul><h3>Step 1 — Grab the latest Laravel release</h3><p>I’m using curl here to grab the latest release from GitHub, but feel free to obtain a copy of the source however you like — you could just do a git clone — but if you do, don’t forget to wipe the .git directory immediately after.</p><blockquote>We’re not following the official guide for setting up Laravel here as we don’t want the hassle of installing PHP/Composer globally on our dev machine</blockquote><pre>curl -L <a href="https://github.com/laravel/laravel/archive/v5.3.16.tar.gz">https://github.com/laravel/laravel/archive/v5.3.16.tar.gz</a> | tar xz</pre><p>That would create a directory called laravel-5.3.16 — you should rename it to whatever you want your project to be. 
eg mv laravel-5.3.16 my-site and then cd into it.</p><h3>Step 2 — Install dependencies</h3><p>We need to run composer install to pull in all of the libraries that make up Laravel — we can use the composer/composer image from the <a href="https://hub.docker.com/r/composer/composer/">Docker Hub</a> to handle this for us.</p><p>We’ll create a throw-away container by executing the following command.</p><pre>docker run --rm -v $(pwd):/app composer/composer install</pre><p><strong>Notes:</strong></p><ul><li>We use the --rm flag to ensure this container does not linger around following the install.</li><li>-v $(pwd):/app is used to mount the current directory on the host (your machine) to /app in the container — this is where composer running inside the container expects to find a composer.json.</li><li>-v $(pwd):/app will also ensure that the vendor folder created by composer inside the container is also visible on our machine.</li></ul><h3>Step 3 — Create the development docker-compose.yml file</h3><p>We’ll be using 2 separate files to define how our environments should run: 1 for development, and another for production. Now, Docker-compose <em>does</em> actually support using multiple input files, allowing you to override specific keys — but because of the way it <em>merges</em> arrays, it will not be suitable for our particular use case. We’ll just have to put up with a bit of duplication in both files.</p><p>Anyway, you’ll need to create the following file:</p><ul><li>docker-compose.yml</li></ul><p>It should begin with…</p><pre><strong>version: </strong>&#39;2&#39;<br><strong>services:<br>   ... our services will go here</strong></pre><p>… into which we can begin to add our services.</p><h4>PHP-FPM</h4><p>This will handle executing the code within the application. We’ll also use this service to execute arbitrary PHP scripts — such as running artisan, the CLI tool that ships with Laravel. 
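</p><p>In case the embedded snippet that follows doesn’t render in your feed reader, here’s a rough sketch of what this service looks like, reconstructed from the notes underneath it; the exact keys and values shown (building from app.dockerfile, DB_HOST=database, DB_PORT=3306) are assumptions based on those notes, so prefer the embedded version where you can see it:</p><pre><em># The Application (sketch; see the embedded snippet for the real thing)<br></em><strong>app:<br>  build:<br>    context: </strong>.<br>    <strong>dockerfile: </strong>app.dockerfile<br>  <strong>working_dir: </strong>/var/www<br>  <strong>volumes:<br>    </strong>- ./:/var/www<br>  <strong>environment:<br>    </strong>- &quot;DB_HOST=database&quot;<br>    - &quot;DB_PORT=3306&quot;</pre><p>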
(remember this block is under the ‘services’ key from above — don’t worry, there’ll be full snippets later)</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9c9b65140c7e678db651a18e4dcfa608/href">https://medium.com/media/9c9b65140c7e678db651a18e4dcfa608/href</a></iframe><p><strong>Notes:</strong></p><ul><li>We’re going to use a separate app.dockerfile <em>(line </em><strong><em>4</em></strong><em>)</em> to build our image as we want to control exactly what modules PHP is using.</li><li>We set the working directory to /var/www — that’s where the app code will be inside the container, so it’ll save us having to cd into that folder should we ever attach to the container. It will also make exec commands shorter (we’ll see an example of this shortly).</li><li>We use a single volume definition ./:/var/www to mount everything in the current directory on the host into /var/www in the container. This will allow us to make changes to our source code and have them reflected in the running application immediately. 
This is going to make your app feel sluggish in the browser — a few hundred ms lag (especially on OSX), but fear not — in part 2 when we transition to a production setup the speed issues will no longer be there.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/796/1*YkSyi6BL3TQSqTrX_AZKKg.png" /><figcaption>host:container volume mapping explained</figcaption></figure><ul><li>The environment variables DB_PORT &amp; DB_HOST are set here to match up with the database container we’ll create shortly.</li></ul><p>Now we need to create the app.dockerfile that we referenced in the build setting above.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/026cab9263750c1b15b80f6a17ea7f23/href">https://medium.com/media/026cab9263750c1b15b80f6a17ea7f23/href</a></iframe><p><strong>Notes:</strong></p><ul><li>php:7.0.4-fpm is used here, but could be anything of your choosing.</li><li>The rest is just the basics needed for a typical Laravel CRUD app, with imagick thrown in for good measure (I have a few photo apps :))</li></ul><h4>NGINX</h4><p>Next, we need a web server configured to handle both the serving of static files, and pass-through of requests that need to be handled by the Laravel application. We’ll follow the same pattern as before, this time naming the service web and its accompanying file web.dockerfile.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e8b33854e101603a6df8cb4babfeebdd/href">https://medium.com/media/e8b33854e101603a6df8cb4babfeebdd/href</a></iframe><p><strong>Notes:</strong></p><ul><li>We use volumes_from here to re-use what we defined in the PHP-FPM service above. This means that this nginx container will inherit the /var/www directory (which is in turn mounted to our development machine).</li><li>We map port 8080 on the host to 80 in the container. 
This is so that we can access 0.0.0.0:8080 whilst in development and won’t need to mess around with host names.</li></ul><p>Next, we’ll create the web.dockerfile mentioned above:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5c4b0f42fc278e0a95d97333279bfe21/href">https://medium.com/media/5c4b0f42fc278e0a95d97333279bfe21/href</a></iframe><p><strong>Notes:</strong></p><ul><li>As you can see, it’s just the standard nginx official image, with the vhost.conf file from our local directory (created next) added to configure the server.</li></ul><p>And here’s that file, vhost.conf:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9f7169e7cf3d532f87454de418831400/href">https://medium.com/media/9f7169e7cf3d532f87454de418831400/href</a></iframe><p><strong>Notes:</strong></p><ul><li>Notice how on <em>line </em><strong><em>12</em></strong> we hand off PHP requests to app:9000 — this works because docker-compose will automatically link our services in a way that allows them to talk to each other via these simple hostnames.</li><li>The rest is just very basic nginx config — it’s not tuned in any way for performance or security — we’ll handle those in another post!</li></ul><h4>MySQL</h4><p>Next, we’ll configure the database, but we need to handle this one slightly differently from the previous services. With PHP-FPM &amp; Nginx, we want files from our <em>local</em> directory to be accessible <em>inside</em> containers, to help speed up the development process. But this is not the case with a database — instead we want files created in the container to persist, allowing us to stop &amp; restart services without losing data. 
This can also be achieved with a volume, only this time there’s no need for it to be synchronised with our host files.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/11cee1e69dfa8c08690be7959d30d1c7/href">https://medium.com/media/11cee1e69dfa8c08690be7959d30d1c7/href</a></iframe><p><strong>Notes:</strong></p><ul><li>On line <strong>16 </strong>we create a named volume, dbdata (the trailing : is deliberate, a limitation of YAML that you don’t need to worry about)</li><li>Then, on line <strong>7</strong> we reference that volume by using the format &lt;name&gt;:&lt;dir&gt;. So this is saying “mount the directory /var/lib/mysql from the volume named dbdata”</li><li>We set the required environment variables on lines <strong>9, 10, 11 &amp; 12 </strong>as required by the MySQL Docker image.</li><li>We used homestead as the database/user name &amp; secret as the password, as these values match what can be found in the default .env that ships with Laravel, meaning we won’t have to change it there.</li><li>Finally, on line <strong>13</strong> we create an additional port mapping of 33061 on the host to the regular 3306 inside the container. This is done solely to allow external tools easier access to the database whilst in development — it will not be needed in the production setup.</li></ul><h4><strong>All services together</strong></h4><p>Let’s look at the final docker-compose.yml now to see how these pieces fit together.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/731955fa8dda66695aacc12b4cdb2f3a/href">https://medium.com/media/731955fa8dda66695aacc12b4cdb2f3a/href</a></iframe><h3>Starting the services</h3><p>If you’re following along, we’re almost there! 
But before we boot up the environment, we need to ensure 1) you ran composer install as instructed earlier in the post &amp; 2) you have created the following files in the root of the project (all detailed above).</p><ul><li>docker-compose.yml</li><li>app.dockerfile</li><li>web.dockerfile</li><li>vhost.conf</li></ul><p><a href="https://gist.github.com/anonymous/a13cf604981726c8e8b0bb05a35664e2">Here’s a gist</a> showing all the files together, for your reference/copy&amp;paste needs.</p><p>Once you have all of those in place, you can go ahead and execute the following command, which will start all 3 services.</p><pre>docker-compose up</pre><p><strong>Notes:</strong></p><ul><li>The very first time you run this, it’s going to take <em>minutes </em>to start as it will need to download the images for all 3 services. Subsequent start times will be in the region of a second or two, so don’t be put off by that initial download time.</li></ul><h3>Final steps: prepare the Laravel application</h3><p>The last steps involve a couple of things that would’ve occurred automatically had we been using the official installer.</p><p><strong>Environment configuration file</strong></p><p>We first need to copy the .env.example file into our own .env file. This file will not be checked into version control &amp; we’ll have separate ones for development &amp; production. For now, just copy .env.example -&gt; .env</p><p><strong>Application key &amp; optimize</strong></p><p>Next we’ll need to set the application key &amp; run the optimize command. 
Both of these are handled by artisan, but because we have PHP and the entire Laravel app running <em>inside</em> of a container, we can’t just run php artisan key:generate on our local machine like you normally would — we need to be issuing these commands directly into the container instead.</p><p>Luckily docker-compose has a really nice abstraction for handling this; the two commands needed would look like:</p><pre>docker-compose exec app php artisan key:generate<br>docker-compose exec app php artisan optimize</pre><p>In plain English, the first line is saying:</p><blockquote>Execute the command ‘php artisan key:generate’ inside the container used by the service ‘app’</blockquote><p>And just to make it crystal clear, here’s a visual showing how those commands break down:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XtLoc1ZwMLm1XRVY1ifQuA.png" /></figure><p>You’ll need this pattern any time you want to use artisan — remember the whole point of using Docker is to avoid the headaches of installing PHP versions on your local machine &amp; this is how we get around it — by <em>sending </em>commands into containers, rather than running them directly.</p><p>Some other commands you’ll be running often in a Laravel project:</p><pre>docker-compose exec app php artisan migrate --seed<br>docker-compose exec app php artisan make:controller MyController</pre><p>I think you get the point by now.</p><blockquote>Tip: create an alias like phpd that removes the need to type the full command, eg: phpd artisan migrate --seed</blockquote><p>Once you’ve executed the two commands mentioned before (artisan key:generate &amp; artisan optimize), the application will be ready to use — go ahead and hit <a href="http://0.0.0.0:8080">http://0.0.0.0:8080</a> in your browser and you’ll be presented with this lovely screen.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4MleKBjK5aU_rL4fs1ro8g.png" /><figcaption>Look ma, no PHP installed on my 
machine.</figcaption></figure><p>Happy developing!</p><h3><strong>Next up, modifying this setup for production.</strong></h3><p>There are already a few blogs out there covering development environments like this for Laravel, but I’ve found <strong>none </strong>so far showing how to complete the next important step — taking this environment and getting it ready for production use. I look forward to sharing that one very soon.</p><p><strong>Resources:</strong></p><ul><li><a href="https://github.com/shakyShane/laravel-docker">Git repo</a></li><li>Git <a href="https://github.com/shakyShane/laravel-docker/commit/432fefd2a030475aaa972ba45131b79938b21e17">commit</a> showing the files created in this post</li></ul><p>Like this? If you did, and you find yourself doing any front-end work, perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, Typescript, RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e3daaefaf3c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hello World with Preact, JSX & Typescript]]></title>
            <link>https://medium.com/@shakyShane/hello-world-with-preact-jsx-typescript-6d70cf2ebf01?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/6d70cf2ebf01</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[preact]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Thu, 01 Dec 2016 21:52:39 GMT</pubDate>
            <atom:updated>2017-01-10T18:17:01.235Z</atom:updated>
<content:encoded><![CDATA[<blockquote>Update: Jan 10th 2017: If you’re using Typescript 2.1 or above, you can swap any mention of typescript@next with typescript — the option required (jsxFactory) is now in the stable release :)</blockquote><p>In this tutorial we’re going to set up the <em>bare minimum</em> needed to use <a href="https://preactjs.com/">Preact</a> with <a href="https://www.typescriptlang.org/">Typescript</a>. No Hot-Module-Reloading or complicated Webpack configurations (we will use Webpack, but in its simplest form), just the actual bits you need to learn how to use both of these amazing tools in your projects today.</p><p>Now, Typescript already works extremely well with <strong>React (a &gt;40kb Preact alternative)</strong> — static typing across both props &amp; state objects makes authoring JSX an absolute pleasure. Having the compiler or your editor warn you about typos or mis-matched types as you work is something you just get used to and will be reluctant to give up when switching between libraries.</p><p>This is the problem I faced recently — I want my 3KB view renderer (Preact), but I also want my static types and awesome developer experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*csQO9mOYxbfxrAyclT-Ujg.png" /><figcaption>The goal — let Typescript guide us through development</figcaption></figure><p>The good news is that it <em>can</em> be done; it just requires a bleeding-edge Typescript version and the correct configuration. 
Here, let me walk you through it.</p><h4>Step 1 — Install dependencies</h4><pre>yarn add typescript@next preact webpack ts-loader</pre><pre># or, for npm users</pre><pre>npm i typescript@next preact webpack ts-loader</pre><p>Note: We need to use <strong>typescript@next</strong> as this allows us to use the upcoming compiler option jsxFactory which we’ll see shortly.</p><h4>Step 2 — Create a tsconfig.json file</h4><p>This will configure the Typescript compiler with the options needed for Preact.</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;sourceMap&quot;: <strong>true</strong>,<br>    &quot;module&quot;: &quot;commonjs&quot;,<br>    &quot;jsx&quot;: &quot;react&quot;,<br>    &quot;jsxFactory&quot;: &quot;h&quot;,<br>    &quot;target&quot;: &quot;es5&quot;<br>  },<br>  &quot;include&quot;: [<br>    &quot;src/*.ts&quot;,<br>    &quot;src/*.tsx&quot;<br>  ]<br>}</pre><p><strong>Notes:</strong></p><ul><li>We set &quot;jsx&quot;: &quot;react&quot; to instruct Typescript to transpile JSX like &lt;Hello /&gt; into regular JS… But, this alone would result in something like React.createElement(...) calls which isn’t going to work for us, as Preact uses h instead.</li><li>This is why we set &quot;jsxFactory&quot;: &quot;h&quot; so that when Typescript encounters some JSX like &lt;HelloWorld /&gt; it will instead compile that into h(HelloWorld) which is what Preact can then use.</li></ul><h4>Step 3 — Create a webpack.config.js file</h4><p>This is the minimum code needed to get this setup working. 
You can go crazy and do some awesome things with Webpack, but I would suggest you start here and build up piece by piece as you need it — you’ll learn more this way as opposed to copy/pasting a 200-line config that someone else set up.</p><pre>module.exports = {<br>    devtool: &#39;source-map&#39;,<br>    entry: [&#39;./src/app&#39;],<br>    output: {<br>        path: &#39;dist&#39;,<br>        filename: &#39;app.js&#39;<br>    },<br>    resolve: {<br>        extensions: [&#39;&#39;, &#39;.ts&#39;, &#39;.tsx&#39;]<br>    },<br>    module: {<br>        loaders: [<br>            {<br>                test: /\.tsx?$/,<br>                exclude: /node_modules/,<br>                loaders: [&#39;ts-loader&#39;]<br>            }<br>        ]<br>    }<br>};</pre><p><strong>Notes:</strong></p><ul><li>We set the extensions array on the resolve property so that we can ‘import’ or ‘require’ both .ts &amp; .tsx files.</li><li>The single item in the loaders array will instruct Webpack to process any .ts or .tsx files with the ts-loader plugin that we installed before. The cool thing about ts-loader is that it will use your locally installed version of Typescript by default, so there’s nothing extra we have to configure. :)</li></ul><p><strong>Step 4 — Create the entry file</strong> src/app.tsx</p><pre><strong>import </strong>{ h, render } <strong>from </strong>&#39;preact&#39;;<br><strong>import </strong>HelloWorld <strong>from </strong>&#39;./HelloWorld&#39;;<br><br>render(&lt;HelloWorld name=&quot;World&quot; /&gt;, document.querySelector(&#39;#app&#39;));</pre><p><strong>Notes:</strong></p><ul><li>On line 1 we import the render function from preact, but also the h function. 
We don’t need to manually call this function, but remember that Typescript will convert any JSX into h() calls, so we need it to be available in any file that contains JSX, otherwise we’ll see an error when trying to compile.</li><li>On line 3 we’re mounting the HelloWorld component into the DOM, and passing a single prop called name. We’re doing this as a simple example of where Typescript can help — by validating your props!</li></ul><p><strong>Step 5 — Create the HelloWorld Component</strong> src/HelloWorld.tsx</p><pre><strong>import </strong>{h, Component} <strong>from </strong>&#39;preact&#39;;<br><br><strong>export interface </strong>HelloWorldProps {<br>    name: <strong>string<br></strong>}<br><br><strong>export default class </strong>HelloWorld <strong>extends </strong>Component&lt;HelloWorldProps, <strong>any</strong>&gt; {<br>    render (props) {<br>        <strong>return </strong>&lt;p&gt;Hello {props.name}!&lt;/p&gt;<br>    }<br>}</pre><p><strong>Notes:</strong></p><ul><li>I’ve declared the interface HelloWorldProps with a single name property here just to show how you can utilise generics when creating Components in Preact — just think of it as <em>better</em> propTypes. This is not a Typescript tutorial however, so we’ll leave the TS details there — just know that by providing Component&lt;HelloWorldProps... we’re associating the interface HelloWorldProps with this HelloWorld class, and this is what allows Typescript to provide type checking both inside this component and at any point at which this component is used elsewhere.</li><li>Notice that in the render function here we get access to the component&#39;s props (as well as state) — this is an area in which Preact improves upon React :). Again though, this is not a how-to-use Preact tutorial either, so we’ll keep this bit brief.</li></ul><h3>Recap</h3><p>If you’re following along, your directory should now look like the following (although if you used NPM to install your dependencies, you won’t have a yarn.lock file).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/510/1*n1v9qyGQlMjRTgkcHZinPQ.png" /><figcaption>The minimum files needed for this setup</figcaption></figure><p><strong>Step 6 — Run Webpack to create the bundle</strong></p><p>Because we installed Webpack locally, we can run it in the following way.</p><pre>./node_modules/.bin/webpack</pre><p>Webpack will use the config file that we created earlier to load the ‘entry’ point for the app. It will then create a bundle that includes each file we’ve referenced. So in our case the entry was ./src/app — so that will be included along with the Preact library and our HelloWorld component.</p><p>If it all worked correctly, you should see something along the following lines:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sfygLKJRGIRBd3IDNJ-2kQ.png" /></figure><p><strong>Notes:</strong></p><ul><li>The file sizes in that screenshot are un-minified &amp; un-gzipped as I didn’t want to include any extra config in this tutorial — but you can rest assured that when in production Preact really <em>is</em> just over 3.5kb :)</li></ul><p><strong>Step 7 — Use the bundle in a web page</strong> index.html</p><p>Time to load it up in your browser of choice and marvel at the amazing HelloWorld component!</p><pre>&lt;!doctype html&gt;<br>&lt;html lang=&quot;en&quot;&gt;<br>&lt;head&gt;<br>    &lt;meta charset=&quot;UTF-8&quot;&gt;<br>    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0&quot;&gt;<br>    &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;ie=edge&quot;&gt;<br>    
&lt;title&gt;Document&lt;/title&gt;<br>&lt;/head&gt;<br>&lt;body&gt;<br>&lt;div id=&quot;app&quot;&gt;&lt;/div&gt;<br>&lt;script src=&quot;dist/app.js&quot;&gt;&lt;/script&gt;<br>&lt;/body&gt;<br>&lt;/html&gt;</pre><h4>Conclusion</h4><p>Sometimes you have to do a little bit of extra work to get your favourite tools to play nicely together, but in my eyes it’s almost always worth it — especially if it boosts your productivity! In this performance-obsessed front-end world we live in, I just cannot ignore a library like Preact that clocks in at only 3.5kb — but at the same time I was not willing to give up the developer experience I’ve become accustomed to with Typescript. This post shows that you can indeed have your cake and eat it.</p><p>Like this? Perhaps you’d enjoy some of my lessons on <a href="https://egghead.io/instructors/shane-osbourne">https://egghead.io/instructors/shane-osbourne</a> — many are free and I cover Vanilla JS, Typescript, RxJS and more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6d70cf2ebf01" width="1" height="1" alt="">]]></content:encoded>
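To make the type-checking payoff from the tutorial concrete without a JSX toolchain, here is a minimal TypeScript sketch (not from the post): a plain function stands in for the component, and the same `HelloWorldProps` interface lets `tsc` reject a typo'd prop name or a wrong prop type at compile time — exactly the guarantee the `Component<HelloWorldProps, any>` generic gives the class version.

```typescript
// Minimal stand-in for the typed component above: the props shape is
// enforced by the interface, just as Component<HelloWorldProps, any> does.
interface HelloWorldProps {
  name: string;
}

// A function "component" returning markup as a string, purely for illustration.
function helloWorld(props: HelloWorldProps): string {
  return `<p>Hello ${props.name}!</p>`;
}

// OK — the compiler verifies both the prop name and its type:
console.log(helloWorld({ name: "World" })); // <p>Hello World!</p>

// Both of these fail at compile time, not three hours into a debugging session:
// helloWorld({ nmae: "World" });  // compile error: typo in prop name
// helloWorld({ name: 42 });       // compile error: number is not a string
```

The same idea scales up: every place the component is used gets checked, which is why the tutorial calls the interface "better propTypes".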
        </item>
        <item>
            <title><![CDATA[Docker Private Beta (OSX) first look]]></title>
            <link>https://medium.com/@shakyShane/docker-private-beta-osx-first-look-24a5306561a4?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/24a5306561a4</guid>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Tue, 19 Apr 2016 15:50:12 GMT</pubDate>
            <atom:updated>2016-04-19T15:50:12.976Z</atom:updated>
            <content:encoded><![CDATA[<p>Initial improvements:</p><h3>Every terminal session is now auto-configured</h3><p>Before: every single time you started a new terminal session, you needed to run the following:</p><pre>eval "$(docker-machine env)"</pre><p>Now: Nothing — it just works every time, in every session :)</p><h3>`docker ps` will now report the docker IP when listing any addresses</h3><p>Before: You needed to know your docker-machine IP address &amp; then append the port to it. (via `docker-machine ip`, or `env` etc)</p><p>Now: The correct IP is right there in the `docker ps` output. :heart: — this means access to your services is a copy/paste away</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/516/1*tOQ0NLZy0sU39E9GbdWZKw.png" /><figcaption>Previously, we would have seen 0.0.0.0:8000 here</figcaption></figure><p>Updating as I discover more. (looking for fixes re: permissions, volumes etc)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=24a5306561a4" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Your Let’s Encrypt cert is expired.]]></title>
            <link>https://medium.com/@shakyShane/your-let-s-encrypt-cert-is-expired-17965eea0cda?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/17965eea0cda</guid>
            <category><![CDATA[nginx]]></category>
            <category><![CDATA[lets-encrypt]]></category>
            <category><![CDATA[ssl-certificate]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Fri, 11 Mar 2016 17:06:09 GMT</pubDate>
            <atom:updated>2016-03-11T17:29:43.724Z</atom:updated>
            <content:encoded><![CDATA[<p>Mine too! Well, more like the cert I’m using for browsersync.io has expired.</p><p>If you’re anything like me, you rushed onto your server the day that Let’s Encrypt entered public beta. Then, because the auto service wasn’t yet available for nginx, you probably used the `certonly` command and wired up the SSL certs manually yourself…</p><p>Great, although now you’re in the situation where the cert is expired and you need to renew it. Must be the `renew` command, right?</p><p>Not so easy — because your site is still online, and Let’s Encrypt needs to bind to port 80 to verify you’re the owner of the domain, you might get a horrible error stating that you should quit nginx for a moment whilst Let’s Encrypt does its thing…</p><p>Well, that wouldn’t work either, as the moment you disable nginx you’ll get another error stating the domain name cannot be verified…</p><p>It’s well confusing, especially if you’re not very dev-opsy like me — but after about an hour of searching, I learned about the bundled plugin called ‘webroot’ that seemed to solve this exact problem.</p><p>So I went through all of the painful searching and came out the other side with not only my freshly renewed certs, but also this post that contains the copy/paste style solution you were looking for.</p><h4>Step 1 — update Let’s Encrypt</h4><p>You probably cloned it into your home directory, right? Cool, so just move into that directory and do a git pull to grab all the latest goodies.</p><pre>cd letsencrypt<br>git pull</pre><h4>Step 2 — Generate the certs again</h4><p>This time though, you need to pass the `--webroot` and `-w` flags that point to the directory on your server that holds your files (aka your webroot, see what they did there?). 
Something like this:</p><pre>./letsencrypt-auto certonly --webroot -w &lt;your-web-root&gt; --email &lt;your-email&gt; -d &lt;your-domain&gt; -d &lt;your-other-domain&gt;</pre><p>So, for me, with my files located inside `/usr/share/nginx/browsersync/`, it looked something like this.</p><pre>./letsencrypt-auto certonly --webroot -w /usr/share/nginx/browsersync/ --email shane.osbourne8@gmail.com -d browsersync.io -d www.browsersync.io</pre><h4>Step 3 — restart nginx</h4><p>Let’s Encrypt will dump the freshly generated files into the same directory as it did on the first run, so all you need to do now is run something like:</p><pre>service nginx restart</pre><p>(You can probably just reload the config into nginx if you know how to do that, to avoid restarting…)</p><p>That’s it! Follow the steps and you now have another 90 days of free SSL.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=17965eea0cda" width="1" height="1" alt="">]]></content:encoded>
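Since Let’s Encrypt certs only last 90 days, Steps 2 and 3 above are worth wrapping in one small re-runnable script. This is a sketch against the post’s setup: the email address is a placeholder, the webroot and domain come from the post’s own example, and the leading `echo` is a dry-run guard so nothing executes until you remove it.

```shell
#!/bin/sh
# Sketch of the renewal flow from the post, parameterised.
# EMAIL is a placeholder; WEBROOT and DOMAIN mirror the post's example.
WEBROOT="/usr/share/nginx/browsersync/"
EMAIL="you@example.com"
DOMAIN="browsersync.io"

# Step 2: re-issue the cert via the webroot plugin (nginx stays online).
CMD="./letsencrypt-auto certonly --webroot -w $WEBROOT --email $EMAIL -d $DOMAIN -d www.$DOMAIN"
echo "$CMD"   # dry run: prints the command; remove the quotes-and-echo to run it

# Step 3: reload nginx so it picks up the fresh cert without a full restart.
echo "service nginx reload"
```

Drop it into cron every couple of months and the quarterly panic goes away.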
        </item>
        <item>
            <title><![CDATA[Macbook pro retina 15" 2012 battery replacement for £60.00]]></title>
            <link>https://medium.com/@shakyShane/macbook-pro-retina-15-2012-battery-replacement-6592fbb138d4?source=rss-6daf98a660a4------2</link>
            <guid isPermaLink="false">https://medium.com/p/6592fbb138d4</guid>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[tech]]></category>
            <dc:creator><![CDATA[Shane Osbourne]]></dc:creator>
            <pubDate>Mon, 18 Jan 2016 08:49:56 GMT</pubDate>
            <atom:updated>2016-01-18T09:15:32.340Z</atom:updated>
            <content:encoded><![CDATA[<p>All six batteries are *glued* to the inside of the case (I know), so this is ridiculous, but possible. You’ll need</p><p><a href="http://www.amazon.co.uk/gp/product/B013FJPZGQ?psc=1&amp;redirect=true&amp;ref_=oh_aui_detailpage_o00_s00">New Battery</a></p><p><a href="http://www.ebay.co.uk/itm/like/251576390547?adgroupid=13585920426&amp;hlpht=true&amp;hlpv=2&amp;rlsatarget=pla-131843266386&amp;adtype=pla&amp;ff3=1&amp;lpid=122&amp;poi=&amp;ul_noapp=true&amp;limghlpsr=true&amp;ff19=0&amp;googleloc=1006965&amp;device=c&amp;chn=ps&amp;campaignid=207297426&amp;crdt=0&amp;ff12=67&amp;ff11=ICEP3.0.0-L&amp;ff14=122&amp;viphx=1&amp;ops=true&amp;ff13=80">Sticky Stuff Remover</a></p><p><a href="http://www.diy.com/screwdrivers-keys/stanley-slotted-screwdriver-25-x-50mm/174476_BQ.prd">Some kind of flat screwdriver</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3bs8bpCw5-vrH8rLxw-pbA.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tuFpwUfXRmhQwvgkd_C2KQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vePwmR49BiXGWF00yumXxw.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yE1a0s3g3IQ1esmrFEQtPQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EhGXcvdQDf5pXVIagnrMDw.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TNjWntvJBa5axCHzcD8FEQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*otKMAT7Ry7DLt04l6aY_VA.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aZAAGw2Xx3YoRbbNerR5yQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0coQ0cEnO1RR5B4hkBDacA.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dCYqNyjWO7wgA00cXViUvA.jpeg" /></figure><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*F_Dn7r1XeBQdmMv4j7BmMA.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a3C7h-7Q58yz9Qe2gXQv4A.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nckHBpjgFU1sGhBChVbNSg.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6XzZQ0zr8sBdQ-4NhRXg0g.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RdFxZSAHBr5PicWJ3mFutQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WpMAYAP2mjpfOO1vysnNKQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e7Ziqt23GXQ98EiJtNe6sQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YnOWBznGuiNWpApHfsMaxg.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*73UaAowBLgc3eyUNMPjWFQ.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EsJxuf9eLm0yn-gDH8dSpA.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0b9v1gc50f02jaiBBZ5tFw.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QWOOsE4EkXeLI6v0R5-new.jpeg" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6592fbb138d4" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>