<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Rafael Nunes on Medium]]></title>
        <description><![CDATA[Stories by Rafael Nunes on Medium]]></description>
        <link>https://medium.com/@peaonunes?source=rss-11295a0a71b8------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*i5WiFgi7mU-iK6On.jpeg</url>
            <title>Stories by Rafael Nunes on Medium</title>
            <link>https://medium.com/@peaonunes?source=rss-11295a0a71b8------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 10 Apr 2026 13:05:11 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@peaonunes/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Improving Time To First Byte and Web Vitals]]></title>
            <link>https://peaonunes.medium.com/improving-time-to-first-byte-and-web-vitals-e06638f7dd03?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/e06638f7dd03</guid>
            <category><![CDATA[cache]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[web-vitals]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Mon, 27 Sep 2021 11:58:10 GMT</pubDate>
            <atom:updated>2021-09-27T11:58:10.989Z</atom:updated>
            <content:encoded><![CDATA[<p>In this post we will cover quite a few different concepts I recently explored that influence page speed, how they relate to Core Web Vitals, and how to improve them!</p><p>Let’s start by agreeing on some terminology and concepts that will often be referred to in this post!</p><p><strong>Time To First Byte (TTFB)</strong></p><blockquote><em>TTFB measures the duration from the user or client making an HTTP request to the first byte of the page being received by the client’s browser.<br></em><a href="https://en.wikipedia.org/wiki/Time_to_first_byte"><em>https://en.wikipedia.org/wiki/Time_to_first_byte</em></a></blockquote><p>This measure is used to indicate the responsiveness of the resource, our websites, our servers and so forth. This is often displayed in milliseconds (ms) in the tools, and the <a href="https://dictionary.cambridge.org/dictionary/english/rule-of-thumb">rule of thumb</a> recommended by <a href="https://developers.google.com/speed/docs/insights/Server#overview">several players in the industry</a> is 200ms!</p><p>This concept alone is important enough to look for improvements that will impact our customers’ experience. However, it becomes better when we correlate it with another customer-focused metric, the Largest Contentful Paint.</p><p><strong>Web Vitals</strong></p><p>The <a href="https://web.dev/vitals/">Core Web Vitals (CWV)</a> initiative is meant to help us quantify the experience of our sites and find improvements that will result in a better customer experience. Besides providing metrics to look after and improve, these factors are now considered a <a href="https://developers.google.com/search/blog/2020/11/timing-for-page-experience">ranking signal for the Google Search algorithm</a>.</p><p>From the CWV metrics, we will be focusing on Largest Contentful Paint (LCP). 
If you are interested in knowing more about these metrics, check the <a href="https://web.dev/vitals/">Web Vitals</a> page.</p><blockquote><em>LCP metric reports the render time of the largest image or text block visible within the viewport relative to when the page started loading.<br></em><a href="https://web.dev/lcp/"><em>https://web.dev/lcp/</em></a></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*cPtvIfoShnVrwBYe.png" /><figcaption>LCP as per <a href="https://web.dev/lcp/"><em>https://web.dev/lcp/</em></a></figcaption></figure><p>The time it takes to render our website’s largest image or text block depends on how fast we deliver our pages and how fast they download any additional assets that make it up.</p><p>Knowing that TTFB measures the responsiveness of our websites, LCP is probably the CWV metric we can influence the most through it. And that is why we are going to focus on improving TTFB in this post.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/0*fQFLwledSxg_735z.png" /><figcaption><em>Screenshot of the element associated with LCP on </em><a href="http://spotify.com"><em>spotify.com</em></a><em>.</em></figcaption></figure><p>Now that we know what these concepts are and how to interpret them, let’s see how to measure them!</p><p><strong>Measuring where the time is spent</strong></p><p>Before jumping on ways to improve metrics, we need to understand the current state of our applications and where the bottlenecks are.</p><p>Knowing how to measure changes is the most important step to gain confidence in our initiatives.</p><p>It is possible to track TTFB with:</p><ul><li>Devtools, <a href="https://developer.chrome.com/docs/devtools/network/reference/#timing-preview">on previewing time breakdown</a> that highlights the value for every resource requested by the browser, including the website itself. 
That is present in every modern browser.</li><li><a href="https://curl.se/">cURL</a>, on your terminal, can tell you the TTFB of any request. <a href="https://www.notion.so/1f5fbdbdd89551ba7925abe2645f92b5">There are plenty of gists on how to do it</a>.</li><li>Using other tools/sites like Bytecheck or KeyCDN.</li><li>Application Performance Monitoring (APMs) can also help us track this from within our clients (CSR) and servers (SSR).</li></ul><p>There are also a few ways you can track LCP:</p><ul><li><a href="https://developers.google.com/web/tools/lighthouse/">Lighthouse</a> is available on Chrome or as a standalone app on <a href="http://web.dev">web.dev</a> and generates a performance report that tells you the LCP of the inspected page.</li><li>Other websites like <a href="https://webpagetest.org">WebPageTest</a> will review your website and provide useful and detailed reports on areas of improvement.</li><li>Some tools, like <a href="https://calibreapp.com/">Calibre</a>, help us automate and track progress over time.</li><li>Application Performance Monitoring (APMs) can also help us track this from our clients and servers, too!</li></ul><p>The problem can be anywhere between our routing infrastructure and the application code! Thankfully these tools help us understand better where the issues lie.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*7Du23ealMphYltV0" /></figure><p>My advice here is to start small and start early. Pick the tool we are currently most familiar with, or the one we find easiest to start with, and then move on once we reach its limits!</p><p>Let’s talk improvements now!</p><p><strong>Improving TTFB for websites</strong></p><p><a href="https://en.wikipedia.org/wiki/Content_delivery_network">CDNs</a> are an excellent way to speed up the responsiveness of your pages, assets, etc. That is especially true when serving assets that rarely or never change. 
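To make the measuring idea concrete, here is a rough, self-contained Ruby sketch of what TTFB captures. It spins up a tiny local HTTP server that sleeps for 200ms before answering (simulating slow server-side work) and times the request against it; for real measurements, use the tools listed above.

```ruby
require "net/http"
require "socket"

# A minimal local HTTP server that deliberately delays its response.
server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick any free port
port = server.addr[1]

Thread.new do
  client = server.accept
  while (line = client.gets) && line != "\r\n"; end  # consume request headers
  sleep 0.2                                          # simulate 200ms of server work
  client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
  client.close
end

# Rough TTFB: time from initiating the request until the response arrives.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
res = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/"))
ttfb = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

puts format("TTFB-ish: %.0fms", ttfb * 1000)  # roughly 200ms plus local overhead
```

This is a simplification (it times the whole tiny response rather than strictly the first byte, and includes connection setup), but it shows why slow server work translates directly into a higher TTFB.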
We should aim to have CDN caches on top of our fonts, images, data payloads, and entire pages (when possible).</p><p>This directly impacts several customer experience factors, most evidently LCP, as the customer will be downloading our pages much faster than if they had to reach the server.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*ZiW9QdzFOfN6aoaY.png" /><figcaption><em>Photo by NASA on Unsplash</em></figcaption></figure><p>Next: bring the data source closer to the server and the server closer to the customer!</p><p>Caching strategies are ineffective when the requests are unique or too distributed, to the point that CDNs will not get many hits. This scenario increases the importance of:</p><ol><li>Having our server as close to the customer as possible, distributing our sites globally when possible.</li><li>Having our data stores as close as possible to the servers. If our pages fetch data from databases or APIs to render (<a href="https://developers.google.com/web/updates/2019/02/rendering-on-the-web">CSR or SSR</a>), then let’s ensure these resources are in the same region as our servers.</li></ol><p>Both of these strategies avoid round-trips between regions, which would add a lot of latency to the requests.</p><p><strong>Improving TTFB of the assets in your websites</strong></p><p>Occasionally we can also observe a good amount of time spent on the “pre-transfer” phase. 
The DNS resolution, TCP handshake, and SSL negotiation are part of the initial setup of a request lifecycle, and they can take a considerable portion of the request time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*6fbhKMSUqRPx2RnO.png" /><figcaption>Drawing of web request time breakdown considering HTTP + TCP + TLS as per <a href="https://blog.cloudflare.com/a-question-of-timing/">Timing web requests</a>.</figcaption></figure><p>Anecdotally I often observe around 200ms spent on this phase on various sites and their respective resources.</p><p>The following rel values for link tags are good ways to speed up TTFB on our websites.</p><ul><li><a href="https://developer.mozilla.org/en-US/docs/Web/Performance/dns-prefetch">DNS prefetching</a>; adding this rel to a link tag pointing to the domain you will download the resource from will make the browser attempt to resolve the domain before that resource is requested on the page, effectively saving time when you actually need the resource. Example: &lt;link rel=&quot;dns-prefetch&quot; href=&quot;https://fonts.googleapis.com/&quot;&gt;.</li><li><a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preconnect">Preconnect</a>; adding this rel to a link tag results in the DNS resolution plus the TCP handshake and, on HTTPS, the TLS connection establishment. Example: &lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.googleapis.com/&quot; crossorigin&gt;.</li><li><a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload">Preload</a>; adding this rel to a link tag makes the browser fetch the asset as soon as it sees that tag in the head of our documents. This will make the resources available sooner and avoid blocking or delaying the rendering! 
Example: &lt;link rel=&quot;preload&quot; href=&quot;style.css&quot; as=&quot;style&quot;&gt;.</li></ul><p>⚠️ Utilising DNS-prefetch or preconnect against the same website domain is ineffective because that would already be resolved and cached by the browser. So target other domains!</p><p>⚠️ Because these are all tags included in the head of our documents, if we are already preloading assets under a DNS, we are less likely to have the compounding effect of using preload+prefetch+preconnect.</p><p>⚠️ Do not preload too many assets; otherwise, we can make things worse than before! Any preloaded asset will compete for bandwidth with other resources of our pages.</p><p>💡 Consider using dns-prefetch and preconnect together: dns-prefetch acts as a fallback in browsers without preconnect support, while preconnect saves time on the handshake as well as the resolution.</p><p>💡 Consider using preload for assets above the fold only to optimise LCP, for example, hero images or fonts. Additionally, consider using prefetch and preconnect for resources that live in other domains and will be requested later in the page lifecycle.</p><p><strong>Improving TTFB on server</strong></p><p>Reviewing the connections between the servers and other data sources (databases, APIs, …) is important because the pre-transfer phase can take a long time there too!</p><p>This can positively impact all requests on the servers and not only initial page loads.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*iIfps8PwFjch1BLn.png" /><figcaption>Drawing of a request breakdown when fetching extra resources from a data source on the server</figcaption></figure><p>The most impactful advice regarding TTFB is to utilise <a href="https://en.wikipedia.org/wiki/HTTP_persistent_connection">keep-alive</a> when possible.</p><ul><li>Keep-alive is a property of an HTTP connection that keeps the connection open after it has been established and used for the first time. 
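The same connection reuse can be demonstrated in Ruby (shown here instead of the Node.js agent, as a sketch): Net::HTTP.start opens one connection and reuses it for every request made inside the block, which we can verify with a tiny local server that counts how many connections it accepts.

```ruby
require "net/http"
require "socket"

# A minimal local HTTP server that counts accepted connections and
# serves two requests per connection before closing it.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
connections = 0

Thread.new do
  loop do
    client = server.accept
    connections += 1
    2.times do
      while (line = client.gets) && line != "\r\n"; end  # consume request headers
      client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
    end
    client.close
  end
end

# Two requests inside one start block share a single keep-alive connection.
responses = Net::HTTP.start("127.0.0.1", port) do |http|
  [http.get("/a"), http.get("/b")]
end

puts connections  # => 1: the TCP setup cost was paid only once
```

Had we called Net::HTTP.get_response twice instead, each call would have opened (and torn down) its own connection, paying the pre-transfer cost every time.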
Subsequent requests to the same destination will reuse that connection as opposed to creating new ones every time.</li><li>This is commonly supported by HTTP clients in the vast majority of frameworks and languages. For instance, in Node.js, we could do it with const agent = new https.Agent({ keepAlive: true });.</li></ul><p>I hope we can see now how we can save time spent on pre-transfer protocols on <strong>every request</strong> when utilising this.</p><p>⚠️ Worth pointing out that maintaining keep-alive connections on the server can impact load balancing and memory consumption, so <a href="https://www.quora.com/Are-there-any-disadvantages-of-enabling-Keep-Alive-on-WebServer">there are valid reasons to keep it disabled</a>. It requires testing!</p><p>When using <a href="https://en.wikipedia.org/wiki/HTTP/2">HTTP/2</a>, this will probably be handled for us when utilizing its clients, and it is even more powerful.</p><p><strong>The impact</strong></p><p>TL;DR: the higher your TTFB, the higher the LCP will be! However, I could not find a linear correlation between TTFB and LCP in my endeavours on page performance. For instance, in some experiments, I noticed:</p><ul><li>A small delay in the request time, 50ms to 200ms, did not clearly affect the LCP.</li><li>A longer delay, 1s to 2s, correlated with an increase in LCP time, though not by the same amount, maybe 0.5 to 1 second?</li></ul><p>My personal conclusion is that chasing improvements of &lt; 200ms is less likely to improve LCP scores individually, but if that is an improvement on the TTFB of your website, then it is awesome!</p><p>The point is not to get fixated on the metrics! 
Depending on your website and infrastructure, different initiatives can yield many different results!</p><p>Ensuring we review our websites and APIs from <a href="https://en.wikipedia.org/wiki/First_principle">first principles</a> is important to identify potential improvements and deliver better customer experiences!</p><p>I hope this was useful, and I will see you next time 👋</p><p><strong>Related readings</strong></p><ul><li><a href="https://en.wikipedia.org/wiki/Time_to_first_byte">Time To First Byte</a>, <a href="https://developers.google.com/speed/docs/insights/Server#overview">Improve server response time</a></li><li><a href="https://web.dev/vitals/">Core Web Vitals</a></li><li><a href="https://developers.google.com/search/blog/2020/11/timing-for-page-experience">Timing for bringing page experience to Google Search</a></li><li><a href="https://web.dev/lcp/">Largest Contentful Paint (LCP)</a></li><li><a href="https://developer.chrome.com/docs/devtools/network/reference/#timing-preview">Previewing time breakdown</a>, <a href="https://curl.se/">cURL</a></li><li><a href="https://blog.cloudflare.com/a-question-of-timing/">Timing web requests</a></li><li><a href="https://en.wikipedia.org/wiki/Content_delivery_network">Content Delivery Network</a>, <a href="https://developers.google.com/web/updates/2019/02/rendering-on-the-web">Page Rendering</a></li><li><a href="https://developer.mozilla.org/en-US/docs/Web/Performance/dns-prefetch">Using dns-prefetch</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preconnect">Using preconnect</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload">Link preload</a></li><li><a href="https://en.wikipedia.org/wiki/HTTP_persistent_connection">HTTP Keep-alive</a>, <a href="https://www.quora.com/Are-there-any-disadvantages-of-enabling-Keep-Alive-on-WebServer">Disadvantages of keep-alive</a></li></ul><p><em>Originally published at </em><a href="https://peaonunes.com/blog/improving-time-to-first-byte-and-web-vitals-44hc/"><em>https://peaonunes.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e06638f7dd03" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to store raw values with Rails.cache]]></title>
            <link>https://peaonunes.medium.com/how-to-store-raw-values-with-rails-cache-1d60592a6d14?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/1d60592a6d14</guid>
            <category><![CDATA[redis]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[cache]]></category>
            <category><![CDATA[english]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Sun, 19 Sep 2021 07:13:03 GMT</pubDate>
            <atom:updated>2021-09-19T07:13:03.816Z</atom:updated>
            <content:encoded><![CDATA[<p>I was using the Rails cache with Redis and I quickly overflowed the memory storage, so I went on a small quest to better understand the Rails cache implementation. I thought it was worth writing a bit about it.</p><blockquote><em>TL;DR: use your cache client directly or pass the raw option as true to the Rails.cache methods.</em></blockquote><p>Rails provides a comprehensive and easy-to-use interface for caching: the <a href="https://guides.rubyonrails.org/caching_with_rails.html#cache-stores">Cache-Store</a>. It provides a common interface to any of the standard cache implementations that Rails provides out of the box, from the in-memory cache to file, <a href="https://www.memcached.org/">Memcached</a> and <a href="https://redis.io/">Redis</a>.</p><p>The cache implementation is very convenient because it allows us to store anything from HTML partials to Models and complex classes. The best part is that it abstracts the whole serialization so you always end up with workable entities without needing to worry about a thing.</p><pre>&gt; game = Game.last<br> =&gt; #&lt;Game id: 1, name: &quot;Pokemon&quot;, created_at: &quot;2021-01-14 12:10:59.872271000 +0000&quot;, updated_at: &quot;2021-01-14 12:10:59.872271000 +0000&quot;&gt;<br>&gt; Rails.cache.write(&#39;pokemon&#39;, game)<br> =&gt; &quot;OK&quot;<br>&gt; pokemon = Rails.cache.read(&#39;pokemon&#39;)<br> =&gt; #&lt;Game id: 1, name: &quot;Pokemon&quot;, created_at: &quot;2021-01-14 12:10:59.872271000 +0000&quot;, updated_at: &quot;2021-01-14 12:10:59.872271000 +0000&quot;&gt;<br>&gt; pokemon.name<br> =&gt; &quot;Pokemon&quot;</pre><p>In the example above we load a record from the Games table, then we cache that entity using the Rails.cache.write method. When retrieving the cache entry with its key we end up with the same model class we were using before, and we can even call its methods and attributes as expected. That&#39;s super cool, isn&#39;t it!? 
But how does Rails do it?</p><pre># https://github.com/rails/rails/blob/291a3d2ef29a3842d1156ada7526f4ee60dd2b59/activesupport/lib/active_support/cache.rb#L598-L600<br>def serialize_entry(entry)<br>  @coder.dump(entry)<br>end</pre><p>The answer is in the snippet above from the cache-store implementation and in what the @coder instance holds: an instance of the Marshal library.</p><blockquote><em>The marshaling library converts collections of Ruby objects into a byte stream, allowing them to be stored outside the currently active script. This data may subsequently be read and the original objects reconstituted.</em></blockquote><p>Before reading or writing any record the cache-store will serialize the entry by default, and it will use the Marshal library to do so. In that way, the magic is done for us and we can read and write any Ruby object 🥳!</p><h3>Simple objects storage cost</h3><p>Let’s now set this learning aside for a moment and analyze another example. Imagine we want to store a boolean.</p><pre>&gt; Rails.cache.write(&#39;yes&#39;, true)<br> =&gt; &quot;OK&quot;<br>&gt; Rails.cache.fetch(&#39;yes&#39;)<br> =&gt; true</pre><p>Rails is able to store and retrieve it without any issues.</p><p>That said, we would expect the value stored in the cache to be a stringified version of the boolean, right? 
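As an aside, the Marshal round-trip described above can be reproduced in a few lines of plain Ruby, with no Rails involved (the hash here is just an illustrative stand-in for whatever you might cache):

```ruby
# Marshal turns arbitrary Ruby objects into a byte String and back,
# which is what the cache store's coder does for every entry.
entry = { name: "Pokemon", caught_at: Time.utc(2021, 1, 14) }

bytes = Marshal.dump(entry)    # a binary String, storable anywhere
restored = Marshal.load(bytes) # the original objects, reconstituted

puts restored[:name]           # => "Pokemon"
puts bytes.bytesize            # noticeably more bytes than the data itself
```

Note that Marshal.load must never be fed untrusted input; inside the cache store that is fine because the store only loads what it wrote.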
To confirm that let’s connect directly to the storage and inspect the values there.</p><p>In my case, I’m using Redis as the cache, so I just instantiate a new instance of its client to connect directly to it.</p><p>After getting the yes value it is clear that we have much more than &quot;true&quot;.</p><pre>&gt; redis = Redis.new<br> =&gt; #&lt;Redis client v4.1.4 for redis://127.0.0.0:6379/0&gt;<br>&gt; redis.get(&#39;yes&#39;)<br> =&gt; &quot;\\u0004\\bo: ActiveSupport::Cache::Entry\\t:\\v@valueT:\\r@version0:\\u0010@created_atf\\u00161609929749.567886:\\u0010@expires_at0&quot;</pre><p>What ends up being stored is the serialized version of an <a href="https://github.com/rails/rails/blob/291a3d2ef29a3842d1156ada7526f4ee60dd2b59/activesupport/lib/active_support/cache.rb#L792">ActiveSupport::Cache::Entry</a> instance. The Entry class is an abstraction that implements expiration, compression and versioning of any cache record. Through this class, Rails can implement these features independently from the actual storage used behind it.</p><p>The cache entry class encapsulates whatever value we store in the cache by default. Leveraging the Marshal lib, the Rails cache is capable of storing any simple/complex object while offering the cache features. That is great!</p><p>In our previous example, the serialized version of the cache entry is a String of 100 chars instead of a 4-char String, true. 
That is an extra 96 chars for storing the same information.</p><p>While in most cases that is totally fine, what if you really need to care about the amount of stored data?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*21BI2o26IDFLLM-T.png" /></figure><p>To understand the impact of these extra chars let’s elaborate more on our example.</p><blockquote><em>short detour: </em><a href="https://github.com/redis/redis"><em>Redis is implemented in C</em></a><em> and it probably needs a few extra bytes to maintain our String value, which is an array of chars underneath. But let’s not consider that, since the same extra bytes apply to all String values.</em></blockquote><p>Knowing we need 1B to store 1 char, in C, we can conclude we would need 100B to store the serialized version of the cache Entry.</p><p>Now for 1 million records with the value true we would need 100MB (1M * 100B). This example is &quot;simple&quot; and 100MB may not sound like a lot, but if you need to store a little bit more than a boolean, if you are using the in-memory store, or if you have limited space in Redis, that can start hurting.</p><h3>The Alternatives</h3><p>The direct alternative I could think of was to use the Redis client directly instead of using the Rails.cache abstraction.</p><pre>&gt; redis.set(&#39;no&#39;, false)<br> =&gt; &quot;OK&quot;<br>&gt; redis.get(&#39;no&#39;)<br> =&gt; &quot;false&quot;</pre><p>It works as expected, and we are no longer using the extra space for that value 🙌🏽. 
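One thing to keep in mind with raw storage: values come back as plain Strings, so converting them is on us. A tiny helper (the name boolean_from_cache is made up for illustration) covers the boolean case:

```ruby
# Hypothetical helper: turn a raw cached String back into a boolean.
# Only the string "true" maps to true; nil (a cache miss) stays nil.
def boolean_from_cache(raw)
  return nil if raw.nil?
  raw == "true"
end

boolean_from_cache("true")   # => true
boolean_from_cache("false")  # => false
boolean_from_cache(nil)      # => nil
```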
We are then left with the job of parsing that value back to a boolean.</p><p>Another alternative that I found after looking at the Redis cache store implementation on GitHub was to pass down the raw option.</p><pre>&gt; Rails.cache.write(&#39;yes&#39;, true, raw: true)<br> =&gt; &quot;OK&quot;<br>&gt; redis.get(&quot;yes&quot;)<br> =&gt; &quot;true&quot;<br>&gt; Rails.cache.read(&#39;yes&#39;, raw: true)<br> =&gt; &quot;true&quot;</pre><p>This option is only mentioned in the Memcached part of the docs, but it is also supported by the Redis cache store implementation, as it overrides the default serialize_entry method <a href="https://github.com/rails/rails/blob/291a3d2ef29a3842d1156ada7526f4ee60dd2b59/activesupport/lib/active_support/cache/redis_cache_store.rb#L468-L474">[1]</a>. Similar to utilizing the Redis client directly, we will need to parse the resulting string back to a boolean manually. Even though we lose the Entry features, that is not a big deal if you are using Redis or Memcached, since they provide most of these features out of the box.</p><h3>Conclusions</h3><p>Thanks a lot if you got this far!</p><p>The level of caution that this post brings to the usage of the Rails cache is, most of the time, not required. However, if you ever want to cache millions of simple objects, knowing some of these details can make a difference!</p><p>See you next time!</p><p><em>Originally published at </em><a href="https://peaonunes.com/blog/how-to-store-raw-values-with-rails-cache-1c59"><em>https://peaonunes.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1d60592a6d14" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[N+1 & Eager loading & Beyond]]></title>
            <link>https://peaonunes.medium.com/n-1-eager-loading-beyond-9fa90d6104c2?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/9fa90d6104c2</guid>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[performance]]></category>
            <category><![CDATA[english]]></category>
            <category><![CDATA[sql]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Sun, 12 Sep 2021 07:06:07 GMT</pubDate>
            <atom:updated>2021-09-12T07:06:07.391Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Hey all, this is a cross post from the <a href="https://peaonunes.com/blog/n-1-eager-loading-beyond-1in7">original post on my website</a>. Although I am not sure if I should keep cross-posting, here is one more.</blockquote><p>The <strong>N+1 problem</strong> is one of the most common issues when our applications grow. It is frequently associated with <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping">ORMs</a> because their abstractions can hide the resulting queries executed. However, it’s not limited to them, since we can always manually fetch data in a manner that will culminate in that problem.</p><p>Imagine we have a Match model, and we want to report the last 100 matches with their duration. In an ORM like <a href="https://guides.rubyonrails.org/active_record_basics.html">ActiveRecord</a> that could look like this:</p><pre>Match.last(100).map do |match|<br>  Report.call(duration: match.duration)<br>end</pre><p>Now imagine that for every Match played we have an associated Game. So besides the match itself, we want to report the game name, for example.</p><pre>Match.last(100).map do |match|<br>  Report.call(duration: match.duration, game_name: match.game.name)<br>end</pre><p>Even though Game is in another table, it is tied to Match via the association, so Active Record will make use of <a href="https://rubyinrails.com/2014/01/08/what-is-lazy-loading-in-rails/">lazy loading</a> to load the required data for us. Lazy loading is handy because you can easily query related data on demand.</p><pre>SELECT * FROM matches ORDER BY matches.id DESC LIMIT 100;</pre><pre>SELECT * FROM games WHERE games.match_id = 100;<br>SELECT * FROM games WHERE games.match_id = 99;<br># 98 queries later...<br>SELECT * FROM games WHERE games.match_id = 1;</pre><p>However, the problem that was introduced here is hidden at the query level. 
Because we are lazy loading the games, every loop iteration will lead to a new SQL query to the database. The resulting number of queries would be 1 for the matches + 100 for loading games = 101 queries. Boom, that&#39;s our N+1.</p><h3>Eager Loading</h3><p>Eager loading is a strategy to prevent N+1. The strategy consists of loading upfront any data of interest, so that whenever you need to access that data it is already available in memory.</p><pre>Match.includes(:game).last(100).map do |match|<br>  Report.call(duration: match.duration, game_name: match.game.name)<br>end</pre><p>The code uses the includes query method to indicate what relationships we need to query alongside the Matches; it does so by leveraging the relationship between Matches and Games. Active Record will ensure that all of the specified associations are loaded using the minimum possible number of queries. It could do a join and/or an additional SQL query, <strong>but there is no lazy loading anymore.</strong></p><pre>SELECT * FROM matches ORDER BY matches.id DESC LIMIT 100;<br>SELECT * FROM games WHERE games.match_id IN (100..1);</pre><h3>Preventing lazy loading 🙅‍♂️</h3><p>One alternative is to stop working with Models and transform them into hashes or other in-memory-only entities. Having that, we can safely operate without worrying about dispatching queries, but the trade-off is that we lose that rich API and we need to re-expose any functionality attached to the Model.</p><p>Another alternative is to use tools that help us track and identify N+1 issues early in the process. Packages like <a href="https://github.com/flyerhzm/bullet">bullet</a> alert us when we should add eager loading to queries or when we should cache, for example. 
With these tools the work is done for us, but we are still reactive to alerts (in prod or dev environments).</p><p>Rails 6.1 was released with <a href="https://guides.rubyonrails.org/6_1_release_notes.html#strict-loading-associations">strict loading</a>, which introduces an optional strict mode for models to prevent lazy loading!</p><pre>class Match &lt; ApplicationRecord<br>  # attrs...<br>  has_one :game, strict_loading: true<br>end<br><br>class Game &lt; ApplicationRecord<br>  # attrs...<br>end<br><br>Match.first.game.name<br># =&gt; ActiveRecord::StrictLoadingViolationError Exception: Match is marked as strict_loading and Game cannot be lazily loaded.</pre><p>We can set up strict_loading at the application, model, or association level. If we ever try to lazy load the association we will get an error 🎉. That is great because we are proactive and we cannot create lazy queries in the first place!</p><h3>Pros and Cons</h3><p>The clear benefit is that we avoid flooding the data source with individual queries for every relationship inside the loop. That reduces the risk of the calls, the load on the data sources, and generally ends up being more performant.</p><p>The caveat is that these single queries to all records and their relationships are not much more expensive than the ones we run on every loop iteration. So if you need to load everything at some point, then eager loading should be adequate.</p><p>Of course, that comes with a memory cost to load everything upfront. And ultimately, if our code ends up not using all the data loaded, we might be wasting memory and slowing down some queries.</p><h3>Beyond relational Eager Loading 🚀</h3><p>Sometimes we need to load data that is not explicitly related in the database and can be fetched from other data sources (APIs, caches, DBs, …), so we cannot leverage framework features like the one from ActiveRecord. 
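To make the batching idea concrete outside of any framework, here is a plain-Ruby sketch (all names and data are made up for illustration): instead of one lookup per match, we collect the ids, fetch them in a single batched call, and join the results in memory.

```ruby
# Toy stand-ins for database rows; fetch_games pretends to be one
# batched query or API call instead of one call per match.
Match = Struct.new(:id, :game_id)
Game  = Struct.new(:id, :name)

GAMES = { 1 => Game.new(1, "Pokemon"), 2 => Game.new(2, "Tetris") }

def fetch_games(ids)
  GAMES.values_at(*ids.uniq)  # one "round-trip" for all unique ids
end

matches = [Match.new(10, 1), Match.new(11, 2), Match.new(12, 1)]

# Eager style: gather ids, fetch once, index by id, then join in memory.
games_by_id = fetch_games(matches.map(&:game_id)).to_h { |g| [g.id, g] }
report = matches.map { |m| games_by_id[m.game_id].name }
# => ["Pokemon", "Tetris", "Pokemon"] with a single fetch_games call
```

Data-loaders apply the same idea automatically, collecting individual lookups and dispatching them as one batch.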
However, it is possible to implement eager loading by ourselves or use other patterns that avoid N+1 in similar manners, like <a href="https://github.com/graphql/dataloader#other-implementations">data-loaders</a>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2Fyns3VgsP30GDm%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2Fyns3VgsP30GDm%2Fgiphy.gif%3Fcid%3D790b76116ded7c959590cce3ad234b4823e1555994b502c1%26rid%3Dgiphy.gif%26ct%3Dg&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2Fyns3VgsP30GDm%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/d4feca1c7f4486f28593a7a02bd0c2ce/href">https://medium.com/media/d4feca1c7f4486f28593a7a02bd0c2ce/href</a></iframe><p>If your process is very data-intensive or is not a web request-response flow, you might need to look into further alternatives. We could consider caching, denormalising the data, preprocessing data, batch vs streaming processing, etc. These are all big topics that deserve much more elaboration than this post aims to provide. If you are interested in these topics, I recommend <a href="https://dataintensive.net/">Designing Data-Intensive Applications</a>, which covers them in a great manner.</p><p>In summary, whatever the implementation, the principle is still the same: <strong>use abstractions to query all necessary data with the minimum amount of queries possible</strong>.</p><p>I might explore more of these topics later, but for now, this is it, see you later 👋</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9fa90d6104c2" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reflections of five years doing Hacktoberfest]]></title>
            <link>https://peaonunes.medium.com/reflections-of-five-years-doing-hacktoberfest-c34e4c73f5c1?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/c34e4c73f5c1</guid>
            <category><![CDATA[hacktoberfest]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[english]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Sat, 04 Sep 2021 05:15:52 GMT</pubDate>
            <atom:updated>2021-09-04T05:15:52.769Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Hey all, this is a cross-post from the <a href="https://peaonunes.com/blog/reflections-of-five-years-doing-hacktoberfest-481c">original post on my website</a>. I just realized my Medium is way behind the website, and although I am not sure if I should keep cross-posting, I think I will do it at least for the next few posts.</blockquote><p>The event has grown, and I with it. I was thinking about that and decided to share some thoughts… I might change my mind later.</p><p>Four or five years ago, I had little engagement with open source and with collaborating with the community in general. Despite my genuine interest in giving back, I knew little about practical ways of doing it.</p><p>Then I heard about <a href="https://hacktoberfest.digitalocean.com/">Hacktoberfest</a> for the first time. The idea of going beyond using and sharing open source to actually collaborating sounded nice. In addition to that, we would get rewards. Looks great!</p><p>I am going to be open (pun intended) and admit that the shirt was appealing to me. 
We can argue whether or not that’s “cheap” of me, but we should agree that with the right incentives, we all get more traction.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2F3o6MbnubDfLpci65fq%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2F3o6MbnubDfLpci65fq%2Fgiphy.gif%3Fcid%3D790b761102df177de9997e920bf0b55b054447dde4831aee%26rid%3Dgiphy.gif%26ct%3Dg&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2F3o6MbnubDfLpci65fq%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/917e4f34a550e911f007023c90156c8a/href">https://medium.com/media/917e4f34a550e911f007023c90156c8a/href</a></iframe><h3>The event over the years</h3><p>The event changed the goal number of PRs a couple of times, 3 to 4, 4 to 5… I also saw multiple in-person meetups spawning so people could work together, a sense of community growing! And in this year’s edition, you could even plant a tree instead of getting a T-shirt (you can still <a href="https://tree-nation.com/profile/fundraising/digitalocean">donate</a> 🌲).</p><p><a href="https://dictionary.cambridge.org/dictionary/english/imho">Imho</a>, the most meaningful change was done this year, and it came for good: the new rules. The increasing success and the lack of quality checks on contributions laid the path to the maintainers’ burden. 
The problem only got more significant, and the organisers acknowledged it.</p><p>PRs now only count towards the goal if maintainers opted in to participate by classifying their projects with the hacktoberfest topic, by labelling the PR with the specific hacktoberfest label, or by approving it.</p><p>That’s great on various levels, and I hope we will all see a more mature format over the upcoming years.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/1*IMCY0k3Gsewd_hoG_rTGXQ.jpeg" /></figure><h3>My contributions over the years</h3><p>I started by collaborating with open projects from the uni I went to and the <a href="https://github.com/pet-informatica">research groups</a> I was part of previously. That was sufficient to achieve the goal that first year, but it had very little to do with the broader development community.</p><p>In the following years I tried to fill that gap: I contributed to open <a href="https://github.com/inloco/supernova">design systems</a>, such as the one from the company I was working at. In addition to that, I tried to find issues related to technologies I was interested in or that I used. For example, I found some nice issues about <a href="https://github.com/facebook/react-native/issues/21581">converting React Native components out of React.createClass</a> and <a href="https://github.com/facebook/react-native/issues/21485">removing TimerMixin, SubscribableMixin from React Native components</a>.</p><blockquote><em>⭐️ Trying to find issues that relate to your interests is a great strategy. I struggled in the beginning because I was looking at any repos, in any languages, etc. 
I soon realised that if I searched for something related to my skills/interests, I would be able to collaborate much more.</em></blockquote><p>Another example: when I was looking into functional programming with <a href="https://ramdajs.com/">Ramda</a>, it made sense to me to collaborate on <a href="https://github.com/char0n/ramda-adjunct">ramda-adjunct</a> utility functions. And when I was having fun using <a href="https://www.gatsbyjs.com/">Gatsby</a>, I was happy to know they were translating the docs to <a href="https://github.com/gatsbyjs/gatsby-pt-BR">pt-BR</a>, and I could jump into that.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2Flzz3B3xLZluuY%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2Flzz3B3xLZluuY%2Fgiphy-downsized-large.gif%3Fcid%3D790b76111407445a767b4c86841c5b8f12fb9963e566cb6d%26rid%3Dgiphy-downsized-large.gif%26ct%3Dg&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2Flzz3B3xLZluuY%2Fgiphy-downsized-large.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="318" frameborder="0" scrolling="no"><a href="https://medium.com/media/3d356f8afb6cdbeb3cb926e5d9482fb9/href">https://medium.com/media/3d356f8afb6cdbeb3cb926e5d9482fb9/href</a></iframe><h3>2020 edition</h3><p>Early in 2020, <a href="https://blitzjs.com/">Blitz</a> appeared, very appealing to me with its focus on productivity, simplicity and conventions for the React world. Having worked with both Ruby and JS, I always felt we were missing something like Rails for JS. Naturally, I wanted this to succeed, so I tried to collaborate with it whenever I could. 
Although I have not worked on any big project, I was able to chip away at some issues throughout the whole year.</p><p>At the beginning of this October, Dependabot updated a gem called <a href="https://github.com/BetterErrors/better_errors">better_errors</a>, and some projects I use at work started to fail. After unblocking my team by reverting the version, I took a step further to find the issue, and I managed to push a fix for it. It was not that bad, after all!</p><p>Lastly, I found a great project called <a href="https://github.com/rubyforgood">rubyforgood</a>, which is a collection of open projects dedicated to making the world gooder! How cool is that? Despite my contribution to it not even counting towards the event goal, I think I found something much more valuable, and I hope I can do way more for it soon.</p><p>None of the examples above would be possible if I had not gradually become more involved with the community and more comfortable jumping around open codebases. It gets more doable with time.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FRHOy0GlzSCK5o7zjrP%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2FRHOy0GlzSCK5o7zjrP%2Fgiphy.gif%3Fcid%3D790b761194cf4dd3b6eabcc6c5eb440ad89abd897e2ed20c%26rid%3Dgiphy.gif%26ct%3Dg&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2FRHOy0GlzSCK5o7zjrP%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/f831eb3cb2141928a418593ad4f90e11/href">https://medium.com/media/f831eb3cb2141928a418593ad4f90e11/href</a></iframe><h3>Some observations</h3><p>I learned a lot just by exploring different codebases and navigating the issues, but of course also by tackling some of them and pushing the required code changes. 
All that helped me to be much more comfortable with new codebases, different stacks/paradigms, mental models, etc., and to learn the way people solve various problems.</p><p>Another good thing the event created was a sense of belonging. For sure, a shirt might not be a big deal for a lot of people, but I soon realised that it created a sense of community for a bunch of others.</p><p>The event gradually brought every part of the community closer, and I think that is great. I started to watch various repositories, community discussions and innovations, and to filter away the desire to re-write my apps every time a new thing popped up.</p><p>I believe there is much more space for learning and evolution on all sides of the event, and time and practice will allow us to develop a better community as a whole.</p><h3>Final thoughts</h3><p>The most challenging part is to start. So if you are new to it, start small, because small issues and small PRs will lay the path for you to get comfortable. Try to find topics that relate to your interests/skills. Share your achievements. I’m bad at this, but I feel nice when others celebrate my accomplishments with me.</p><p>Fail fast and learn quickly. Try to hold back your fear of failing; the more you fail, the more you learn. Ultimately, you get used to taking risks and learn much faster.</p><p>Let’s support our community, both maintainers and users, in a healthy way 🙌. Let’s keep this wheel rolling. There is still much left to be done, so do not wait for another <a href="https://hacktoberfest.digitalocean.com/">hacktoberfest</a> to start doing it!</p><p>Lastly, be open and stay safe. See you soon 👋</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c34e4c73f5c1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Circuitbox: How to Circuit Breaker in Ruby]]></title>
            <link>https://peaonunes.medium.com/circuitbox-how-to-circuit-breaker-in-ruby-3413fd9789e6?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/3413fd9789e6</guid>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[design-patterns]]></category>
            <category><![CDATA[circuit-breaker]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Thu, 26 Nov 2020 03:42:34 GMT</pubDate>
            <atom:updated>2020-11-26T03:42:34.238Z</atom:updated>
            <content:encoded><![CDATA[<p>A few years ago, I made a post called <a href="https://medium.com/@peaonunes/hystrix-from-creating-resilient-applications-to-monitoring-metrics-a38bffdca897">Hystrix: from creating resilient applications to monitoring metrics</a> where I talked about avoiding catastrophes with circuit breakers in Java and <strong>monitoring</strong> our applications using <strong>Prometheus</strong> and <strong>Grafana</strong>.</p><p>This post focuses on using <a href="https://github.com/yammer/circuitbox">Circuitbox</a>, a Ruby gem for creating circuit breakers.</p><p>If you are not keen to read right now, you can watch the video version of this post, with extended examples, below.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F33l9ROqmapk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D33l9ROqmapk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F33l9ROqmapk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/d1150c278aebddd20885d081df99728d/href">https://medium.com/media/d1150c278aebddd20885d081df99728d/href</a></iframe><h3>Service to service communication</h3><p>When building a service, it is not uncommon to make <strong>remote calls</strong> to APIs to perform some action. 
These points of connection to external services turn out to be <strong>points of failure</strong> in our applications.</p><p>Downstream services help us to achieve our goals, but the more critical that service is, the more significant the impact its failure will generate and propagate across the platform.</p><p>If any external API stops working, would your application <strong>still operate</strong>?</p><h3>Circuit Breakers ⚡️</h3><p><a href="https://martinfowler.com/bliki/CircuitBreaker.html">Circuit Breaker</a> is a software design pattern that proposes monitoring remote call <strong>failures</strong>, so that when they reach a certain <strong>threshold</strong>, it “opens the circuit”, forwarding all the calls to an elegant alternative flow, handling the error gracefully and/or providing a default behaviour for the feature in case of failure.</p><p>The pattern also states a <strong>time window</strong> after which it will request that service again to check its health. When the downstream API starts responding accordingly under the same thresholds, the circuit will close and enable the main flow again.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*-u3urBYWF7LPFsfd.png" /></figure><p>The intention behind circuit breakers is to avoid pursuing a failing service if we <strong>already know</strong> that it is unstable and not reliable at the moment. We then can decide what to do and keep our systems operating or partially operating.</p><h3>Hands-on 🙌</h3><p>What would it look like to use a circuit breaker in Ruby?</p><p>The following exercise code is available on GitHub at <a href="https://github.com/peaonunes/circuitbox-example">circuitbox-example</a>. There we have a node server and a <a href="https://rubyonrails.org/">Rails</a> application. 
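</p><p>Before wiring the gem in, the pattern described above can be sketched in a few lines of plain Ruby. This is a deliberately simplified, hypothetical illustration (consecutive-failure counting, fixed threshold), not how Circuitbox is implemented:</p>

```ruby
# Toy circuit breaker, for illustration only: it opens after `threshold`
# consecutive failures and allows a trial call again once `sleep_window`
# seconds have passed.
class TinyBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 5, sleep_window: 60)
    @threshold = threshold
    @sleep_window = sleep_window
    @failures = 0
    @opened_at = nil
  end

  def run
    raise OpenError if open?
    result = yield
    @failures = 0    # a success closes the circuit again
    @opened_at = nil
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  private

  # Open while the failure threshold was reached and the sleep window
  # has not elapsed yet.
  def open?
    !@opened_at.nil? && (Time.now - @opened_at) < @sleep_window
  end
end
```

<p>Calls go through run; once the threshold is reached, further calls are short-circuited with OpenError until the sleep window elapses, which is the closed/open/half-open cycle the diagram above describes.</p><p>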
The node server will be the downstream service consumed by our main Rails application.</p><h3>Simulating a failure ⚠️</h3><p>To simulate a downstream service, we have the <strong>target-server</strong>, an express server with only the index route (/). We will be querying it from our main application.</p><pre>app.get(&quot;/&quot;, function (req, res) {<br>  const delay = req.query.speed === &quot;fast&quot; ? 50 : 200;</pre><pre>  setTimeout(function () {<br>    res.sendStatus(200);<br>  }, delay);<br>});</pre><p>To facilitate our experiments, we are going to pass a parameter called speed in the query string. We&#39;ll use that parameter to simulate a <strong>slow</strong> or a <strong>fast</strong> network response.</p><p>Whenever it receives speed as fast, the server will delay the answer by 50ms. If it gets anything else, it will delay the reply by 200ms.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*LFdjazy_T_iqIRV0.jpg" /></figure><h3>The consumer 💻</h3><p>In our Rails application, I&#39;ve created an index route as well. It will accept the same parameter as <strong>target-server</strong>, which will be passed down to the remote call.</p><p>We are going to perform <strong>ten calls</strong> in sequence to our downstream service and return all the answers at once. The following code will do the work for us:</p><pre>def index<br>  speed = params[:speed]</pre><pre>  answers = Array(0..9).map do |i|<br>    sleep(1)<br>    puts &quot;Request number #{i+1}&quot;<br>    HTTP.get(&quot;http://localhost:4000/?speed=#{speed}&quot;).status<br>  end</pre><pre>  render json: answers<br>end</pre><p>We take the speed parameter, and then we map through an array of size ten, making a call for every element and passing down our query param. We delay every call by 1 second on purpose, so later we can tweak the circuit configuration and see different results. 
Finally, we render the answers array with all the responses from our API.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*LoUgQherTV-jVqJV.jpg" /></figure><p>The app is hitting the express server; that looks great! 🚀 Even though the requests are &quot;slow&quot;, they are successful, as we have not set any timeouts.</p><p>Changing the remote call to HTTP.timeout(0.1).get will set the timeout to 100ms, and then we should see an error, as the API is delaying the response by 200ms.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*mo4-NoYStBi50mhK.jpg" /></figure><p>And <em>voila</em> ✨ we got our exception!<br> Let’s keep that HTTP::TimeoutError in a note because we are going to need it later.</p><h3>Adding the Circuitbox 📦</h3><p><a href="https://github.com/yammer/circuitbox">Circuitbox</a> is a Ruby gem for creating circuit breakers.</p><blockquote><em>“It wraps calls to external services and monitors for failures in one minute intervals. Once more than 10 requests have been made with a 50% failure rate, Circuitbox stops sending requests to that failing service for one minute. That helps your application gracefully degrade.”</em></blockquote><p>We can add it to our Gemfile as follows:</p><pre>gem &#39;circuitbox&#39;, &#39;~&gt; 1.1&#39;, &#39;&gt;= 1.1.1&#39;</pre><p>To utilise the circuit breaker, we need to instantiate a new Circuitbox class passing at least two arguments: the <strong>server/action name</strong> and an <strong>array of exceptions</strong>.</p><p>The exceptions collection is there for counting failures, so if we get an error that is not in that collection, it will not count towards the circuit threshold. Do you remember that HTTP::TimeoutError? 
We are going to use it now 😁.</p><pre>class TargetClient<br>  def initialize(speed:)<br>    @speed = speed<br>    @circuit = ::Circuitbox.circuit(:target_server, exceptions: [HTTP::TimeoutError])<br>  end</pre><pre>  attr_reader :speed, :circuit</pre><pre>  def call<br>    circuit.run(circuitbox_exceptions: false) do<br>      HTTP.timeout(0.1).get(&quot;http://localhost:4000/?speed=#{speed}&quot;).status<br>    end<br>  end<br>end</pre><p>We can create a TargetClient class that will encapsulate the logic of creating the circuit breaker and making the request. On initialisation, it receives and saves the speed parameter and creates the circuit, assigning it the target_server name and the exception we want to track.</p><pre>def index<br>  speed = params[:speed]</pre><pre>  answers = Array(0..9).map do |i|<br>    sleep(1)<br>    puts &quot;Request number #{i+1}&quot;<br>    TargetClient.new(speed: speed).call<br>  end</pre><pre>  render json: answers<br>end</pre><p>By default, the circuit will return nil for failed requests and open circuits. So if we hit the endpoint again with speed=slow, we should expect all the answers to be nil, with no exception thrown.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*ZxWNiTKSVm6BSRi5.jpg" /></figure><p>And that is what happens 🥳! Because we gave a name to the circuit, the <strong>gem</strong> can track the requests across all instances of it. Another cool thing it does by default is to log the status of the call and whether or not the circuit is open.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*lXdJz88qZLVsrAhD.jpg" /></figure><p>The first message indicates that we are making the request. The second one exposes the status of the request (as a failure in this case). 
Although all the requests failed, the circuit remained <strong>closed</strong> because we are not hitting the threshold necessary to open it.</p><h3>Tweaking configs ⚙️</h3><p>Circuitbox provides a set of configurations we can change to fine-tune its behaviour to our needs. Some of them are:</p><ul><li>sleep_window: seconds the circuit stays open once it has passed the error threshold. Defaults to 300.</li><li>time_window: length of the interval (in seconds) over which it calculates the error rate. Defaults to 60.</li><li>volume_threshold: number of requests within time_window seconds before it calculates error rates. Defaults to 10.</li><li>error_threshold: exceeding this rate will open the circuit (checked on failures). Defaults to 50%.</li></ul><p>Knowing that default configuration, we can conclude that the circuit will open if within <strong>60</strong> seconds we get at least <strong>10</strong> requests and <strong>50%</strong> of them fail. For the sake of the exercise, we can rearrange those numbers so we can force the circuit to open and see the results.</p><pre>@circuit = ::Circuitbox.circuit(<br>  :target_server,<br>  exceptions: [HTTP::TimeoutError],<br>  time_window: 5,<br>  volume_threshold: 5,<br>)</pre><p>Setting both the time_window and the volume_threshold to <strong>5</strong> will lead the circuit to open when we get at least <strong>5</strong> requests within <strong>5</strong> seconds and <strong>50%</strong> of them fail. We then can try again hitting our app with /?speed=slow.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*ObahBt1GNVXIsDTr.jpg" /></figure><p>These are the logs after executing it again. If we were to look at the actual response of the Rails app, we would not notice anything different; the answers would all be nil again. However, there is some cool stuff going on under the hood.</p><p>We are still passing slow as the parameter, and as a result, all requests should fail. 
After the <strong>5th</strong> request, the gem calculates the error rate, and we get above the <strong>50%</strong> threshold of failure. Therefore, the circuit <strong>opens</strong>, and we stop querying the API and just default to the nil answer. That&#39;s why it stops logging the querying message and its status.</p><h3>Watching it recover 📈</h3><p>The beauty of using circuit breakers is that we avoid pursuing a failing service if we <strong>already know</strong> that it is unstable; beyond that, we check back to see if it has recovered.</p><p>Let’s now tweak our inputs to force the instability and then force a <strong>fast</strong> response.</p><p>To simulate the API recovering, we are going to change the speed parameter passed down. So we start with whatever we get from the query string, but after some requests, we change that to be fast. For example:</p><pre>answers = Array(0..9).map do |i|<br>  sleep(1)<br>  puts &quot;Request number #{i+1}&quot;<br>  speed = &#39;fast&#39; if i &gt; 3<br>  TargetClient.new(speed: speed).call<br>end</pre><p>On the app side of things, we are keeping time_window at <strong>5</strong> seconds, so the interval observed is the same. However, let&#39;s decrease volume_threshold to <strong>2</strong>, so we need to get at least <strong>2</strong> requests to calculate the error rate. Finally, let&#39;s override sleep_window to be <strong>2</strong> seconds (which by default waits 300 seconds).</p><p>So with a shorter sleep_window, we should expect the circuit to get back to the API sooner. 
That new configuration should look like the following:</p><pre>@circuit = ::Circuitbox.circuit(<br>  :target_server,<br>  exceptions: [HTTP::TimeoutError],<br>  time_window: 5,<br>  volume_threshold: 2,<br>  sleep_window: 2<br>)</pre><p>After calling our endpoint again, we get null on every answer besides the last one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*mr8_33_qI96LGqpA.png" /></figure><p>We can break down the flow of our requests as:</p><ol><li>The first <strong>4</strong> requests happen within the <strong>5</strong>-second window; the gem realises we hit our threshold and therefore opens the circuit.</li><li>Our next attempt to request the API will pass speed=fast, as we set it to do that if i &gt; 3.</li><li>Because the circuit is open, all attempts during the sleep_window will be short-circuited and return null immediately.</li><li>After some time, the sleep_window elapses and the circuit closes. That makes the following request reach the API again, and as we passed speed=fast, the request succeeds.</li></ol><p>The actual logs tell us that story.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*y0lBcqq1FfJfAX2-.png" /></figure><p>And that is awesome 🥳! We were able to simulate the whole process of querying an API in a bad state, opening the circuit and querying it back once everything is good!</p><h3>Conclusions 🌯</h3><p>A circuit breaker is a pattern that helps us to avoid catastrophes. We manage to make our applications <strong>fail gracefully</strong>, and therefore they end up being <strong>more resilient</strong>. It helped me in past experiences on Java and Ruby projects, but there are many <a href="https://github.com/search?q=circuit+breaker">implementations on GitHub</a> for different languages as well.</p><p>I’m curious about your thoughts: how do you all avoid catastrophes? 
Have you ever tried circuit breakers?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3413fd9789e6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How I avoid the side project frustration]]></title>
            <link>https://peaonunes.medium.com/how-i-avoid-the-side-project-frustration-f5b93683aeb3?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/f5b93683aeb3</guid>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[side-project]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[time-management]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Tue, 03 Nov 2020 00:02:30 GMT</pubDate>
            <atom:updated>2020-11-03T00:02:30.459Z</atom:updated>
            <content:encoded><![CDATA[<p>I just don’t do them anymore.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*gUWVg8f-WciTZFht.gif" /></figure><p>Okay, seriously now…</p><p>How many repositories of unfinished side projects do you have? I have a lot of them. There is no <strong>secret sauce</strong> in this post, but I realised that some methods I already know could push me to be more productive and happy. So I’m sharing them here. 🤗</p><h3>From excitement to frustration</h3><p>Many times I get excited about new technologies and get in that hacker mood where I cannot rest until I try them out. Maybe I am looking to practice some skill that will be useful in the future. Or that project might be just for fun!</p><p>No matter what the trigger is, we may get motivated about an idea and want to turn it into a real thing. However, it is not unusual to get frustrated along the way. That is very common, and it happens to me quite often.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*6_F3uNT79r0qpcSa.gif" /></figure><p>I was frustrated about the side project frustration 🤣. I started thinking more and more about that, and I decided to focus on the problems behind it occurring so often to me. Then I managed to narrow down and classify some of the issues that contribute to that feeling.</p><h3>The friction of setting up a new project</h3><p>We create a repository, install the dependencies, play around with the configs, then we have to wrap up for the day, never to come back to it. Or worse, we go back to it only to find out all that initial excitement is gone and we would not even know the next step anyway.</p><p>Setting up a new project is time-consuming, and I dropped various projects at this stage.</p><h3>Hard to quickly get feedback on something</h3><p>Even with everything set up, another painful part was sharing progress with someone. How do I deploy my project? 
Everybody is using <a href="https://jamstack.org/">JAMStack</a> now… or maybe <a href="https://aws.amazon.com/">AWS</a>… I used <a href="https://www.heroku.com/">Heroku</a> before… <em>Damn</em>, I just wanted to show that to a friend. 😕</p><p>Sharing work in progress can indeed be overwhelming, from choosing the “right” option to configuring everything properly. There is a lot to cover.</p><h3>The undefined scope</h3><p>Most of the time, I start a project without much of a plan. Frequently I would find myself doing a lot at the same time (frontend, backend, hosting) without a clear strategy and boundaries around it. So I always felt I was not getting any closer to the “end”.</p><p>The point is: there is no “end” unless <strong>you</strong> define it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/450/0*jkZRDIMW2bEfNiBA.gif" /></figure><h3>Speeding up the setup and feedback loop</h3><p>When I have a short time to try something out, I do not want to bother much with settings and infrastructure unless those are the targets of my study. I’ll probably be interested in the application code.</p><p>One thing that is helping me a lot is to use <a href="https://codesandbox.io/dashboard/recent">CodeSandbox</a> more and more. Whenever I have an idea that I can try out by creating a sandbox, I do. 
I open up a new tab in the browser, select a template and start coding.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fcodesandbox.io%2Fembed%2Fdo6sy&amp;display_name=CodeSandbox&amp;url=https%3A%2F%2Fcodesandbox.io%2Fs%2Fdo6sy&amp;image=https%3A%2F%2Fcodesandbox.io%2Fapi%2Fv1%2Fsandboxes%2Fdo6sy%2Fscreenshot.png&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=codesandbox" width="1000" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/b8298967bbb8d0cbbad7616b51a8e9d3/href">https://medium.com/media/b8298967bbb8d0cbbad7616b51a8e9d3/href</a></iframe><p>The app is fantastic, and you can choose from a wide variety of templates, from GraphQL Apollo Server to Svelte. It optimizes my time, so I get to focus on coding. I’ve been way more productive that way. 😊</p><ul><li>I do not lose time creating the repository, installing dependencies, opening an editor, tweaking configs. It’s all already there in a minute.</li><li>I can share the sandbox links with anyone 🎉. There is both the editor and the app view available to share. I got rid of thinking about deploying.</li></ul><p>Plus it has a lot of other benefits:</p><ul><li>Connect to GitHub and create a repository from the sandbox.</li><li>Deploy to Netlify and Vercel.</li><li>Invite others to edit a sandbox live with you.</li></ul><p>If your side project cannot live in a CodeSandbox, you can still leverage tools and services. Maybe there are boilerplates you could use. You may expose your localhost application with <a href="https://ngrok.com/">Ngrok</a>. Perhaps deploy to <a href="https://devcenter.heroku.com/articles/git">Heroku</a> using its CLI. You can even deploy to <a href="https://github.com/apps/netlify">Netlify</a> or <a href="https://vercel.com/docs/v2/git-integrations/vercel-for-github">Vercel</a> using their GitHub integrations. And many more.</p><h3>Defining the undefined scope</h3><p>Done is better than perfect. 
Although it may sound a bit cliché, it is true. Getting something working helps you build a sense of <strong>fulfilment</strong>, <strong>happiness</strong>, and, why not, <strong>relief</strong> about achieving the goal.</p><p>Whether I’m working on something in a sandbox or not, it does not matter. I realised I needed to plan to get more things done. So I started to figure out the necessary work and set specific objectives.</p><p>In my method, I write down small enough pieces of work that I could tackle at once. They may require half an hour, an hour, or more; as long as I can estimate the time they might take, <em>it’s okay</em>. That way, whenever I have spare time to work on something, I try to pick the piece of work that fits.</p><p>It’s not always easy to understand the chunks of work, and that is precisely why it is so important to plan. You have the opportunity to search first, look for tutorials, read docs, and you get to learn before doing it!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*Wut4ezFWy6SG_UJH.png" /></figure><p>I’m using a todo-list at the moment (on <a href="https://www.notion.so/">Notion</a>), and the best part is that it is working! The way you decide to organise that list is always up to you. You may use a <a href="https://trello.com/">Trello</a> board, notepads, post-its on your workstation, or even nothing at all if that’s not your style. Find the tool that you are comfortable using, and that can do the job.</p><p>It all narrows down to <strong>planning</strong>. 
Although it may sound like too much for a side project, if you can spend some time thinking about it before doing it, you will probably be more productive.</p><p>I’ve noticed some improvements after applying these techniques:</p><ul><li>I’m happier because I’m always <strong>moving forward</strong>.</li><li>I know things that I could do in <strong>10 minutes</strong> if I want to.</li><li>I have a clear picture of all projects in progress and their <strong>next steps</strong>.</li><li>I get more practice <strong>breaking things down</strong> into smaller units of work.</li><li>I now realise ahead of time whether the project is what I actually <strong>want</strong> to do.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/0*mL3WZwwy5U2SxgHU.gif" /></figure><h3>Conclusions</h3><p>Being more <strong>intentional</strong> in planning and using tools that I am comfortable with is what is making the difference for me. The way I look at and approach my side projects is different now. The techniques I use to tackle them can help you too, but you may find your own unique routines as well! 🥳</p><p>So, what are your big frustrations with side projects? How are you overcoming them? Thoughts, ideas and suggestions are always welcome.</p><p><strong>Did you like this post? Read the original one and more at </strong><a href="https://peaonunes.com"><strong>https://peaonunes.com</strong></a><strong>.</strong></p><p><a href="https://peaonunes.com/blog/how-i-avoid-the-side-project-frustration-57eh/">How I avoid the side project frustration</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f5b93683aeb3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Loading a directory as a tree structure in Node]]></title>
            <link>https://peaonunes.medium.com/loading-a-directory-as-a-tree-structure-in-node-5c30cf223426?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/5c30cf223426</guid>
            <category><![CDATA[english]]></category>
            <category><![CDATA[jscity]]></category>
            <category><![CDATA[nodejs]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Sun, 05 Jul 2020 05:49:50 GMT</pubDate>
            <atom:updated>2020-07-05T05:49:50.819Z</atom:updated>
<content:encoded><![CDATA[<p>Hey you all 👋! This article is the first follow-up in the JSCity series. If you haven&#39;t read it yet, feel free to check out the post below.</p><p><a href="https://peaonunes.com/blog/jscity-visualizing-javascript-code-5cej">JSCity visualizing JavaScript code</a></p><p>In this post we will explore:</p><ul><li>Loading directories using Node APIs.</li><li>Building a tree structure that represents the directories loaded.</li><li><a href="https://en.wikipedia.org/wiki/Test-driven_development">Test-driven development</a> to define the expectations around the output before implementing the code.</li></ul><p>While in the MVP of JSCity all the processing happens in the browser (file upload, code parsing, visualization), for the second version I&#39;m aiming to create modular packages, with the intention of increasing the reusability of these modules for future versions.</p><p>In this post, we’ll be building the module that loads a local directory into a well-defined structure. The objective is to be able to export it as a function of a package later.</p><blockquote><em>I took this opportunity to use TypeScript, but you can achieve the same goal with vanilla JavaScript.</em></blockquote><h3>Defining the structure</h3><p>Directories in operating systems are displayed and represented in a hierarchical <a href="https://en.wikipedia.org/wiki/Tree_structure">tree structure</a>. The <a href="https://en.wikipedia.org/wiki/Tree_(data_structure)">tree data structure</a> is widely used to represent and traverse data efficiently.</p><p>The elements in a tree are called nodes and edges. A node contains some piece of information, in our case information about the file or directory. 
In the following image, the arrows between the nodes are what we call edges.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/799/0*xZmHxJrYMeHe26WG.png" /></figure><p>Nodes without children are often called leaf nodes, and the highest node in a tree is called the root node.</p><p>There are multiple well-known algorithms to traverse a tree. These will facilitate the process of building the city. So how can we make that directory tree in Node?</p><p>The Node <a href="https://nodejs.org/api/fs.html#fs_file_system">file system API</a> allows us to read any directory with fs.readdirSync, for example. It returns an array of strings representing the subfolders and files of that folder.</p><pre>console.log(fs.readdirSync(initialPath));<br>// [ &#39;example.js&#39;, &#39;utils&#39; ]</pre><p>We can then leverage this to build our own tree structure!</p><blockquote><em>I did find the </em><a href="https://github.com/mihneadb/node-directory-tree"><em>node-directory-tree</em></a><em> package that does that for us. Despite that, I believe it is a good exercise to build the tree by ourselves.</em></blockquote><p>To represent a node I decided to create the TreeNode class. The properties of a TreeNode are the path in the file system and an array of TreeNode (representing the sub-directories and files). When a TreeNode is a file, the children array will remain empty, just like the leaf nodes we learned about before.</p><pre>class TreeNode {<br>  public path: string;<br>  public children: Array&lt;TreeNode&gt;;<br><br>  constructor(path: string) {<br>    this.path = path;<br>    this.children = [];<br>  }<br>}</pre><p>That’s a good enough first version of our tree nodes. Let’s keep going.</p><h3>Defining the root node</h3><p>Now let’s create some tests!</p><p>I will use a folder called fixtures as the input of our tests. That folder contains just some example files.</p><p>So given an initial path, we want it to return the root node representing that directory. 
We want to assert that the root contains the expected properties.</p><pre>describe(&#39;buildTree&#39;, () =&gt; {<br>  const initialPath = path.join(__dirname, &#39;fixtures&#39;);<br><br>  it(&#39;should return root node&#39;, () =&gt; {<br>    const rootNode = buildTree(initialPath);<br>    expect(rootNode).not.toBeNull();<br>    expect(rootNode).toHaveProperty(&#39;path&#39;, initialPath);<br>    expect(rootNode).toHaveProperty(&#39;children&#39;);<br>  });<br>});</pre><p>For now this test will fail, but that&#39;s expected. We still need to build the function mentioned in the code above.</p><p>The buildTree function receives a path as input and returns the tree structure for that directory.</p><pre>function buildTree(rootPath: string) {<br>  return new TreeNode(rootPath);<br>}</pre><p>That is enough to get our first test to pass ✅🎉</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*vKED5vSNre4XtC2j.png" /></figure><h3>Reading the folder and its children</h3><p>We can see that the buildTree function does not really build the full tree structure yet. That&#39;s our next step. 
The fixtures folder used by our test looks like the following.</p><pre>fixtures<br>├── example.js<br>└── utils<br>   └── sum.js</pre><p>The output of the function should represent the following tree.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/294/0*8VHVTyGzjg2e-49t.png" /></figure><p>We can assert that the root, in our case fixtures, has two children: the utils folder and the example.js file.</p><pre>it(&#39;should return root node with its exact 2 children&#39;, () =&gt; {<br>  const rootNode = buildTree(initialPath);<br>  expect(rootNode.children.length).toEqual(2);<br><br>  const childrenPath = rootNode.children.map(child =&gt; child.path);<br>  expect(childrenPath.includes(`${initialPath}/utils`)).toEqual(true);<br>  expect(childrenPath.includes(`${initialPath}/example.js`)).toEqual(true);<br>});</pre><p>We can also assert that the utils folder has the sum.js file inside of it.</p><pre>it(&#39;should add utils node with its children inside root&#39;, () =&gt; {<br>  const rootNode = buildTree(initialPath);<br>  const utils = rootNode.children.find(<br>    child =&gt; child.path === `${initialPath}/utils`<br>  );<br><br>  expect(utils).not.toBeNull();<br>  expect(utils?.children.length).toEqual(1);<br>  expect(utils?.children[0]?.path).toEqual(`${initialPath}/utils/sum.js`);<br>});</pre><p>And of course, they are going to fail at this point.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*5IgZtXL5PkmO4Jhe.png" /></figure><h3>Building the tree</h3><p>We now need to extend buildTree so it builds the entire tree, not only the root node.</p><p>The <a href="https://en.wikipedia.org/wiki/Depth-first_search">depth-first search (DFS)</a> algorithm is a well-known technique to traverse a tree. 
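</p><p>The iterative version of this traversal is driven by a stack. As a tiny refresher (a hypothetical snippet in plain JavaScript, not taken from the project's code), pushing and popping work like this:</p>

```javascript
// A JavaScript array works as a stack: push adds to the top,
// pop removes the most recently pushed item (last in, first out).
const stack = [];
stack.push("fixtures");
stack.push("utils");
stack.push("example.js");

console.log(stack.pop()); // "example.js" (pushed last, popped first)
console.log(stack.pop()); // "utils"
```

<p>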
In the iterative DFS algorithm we will need to use a <a href="https://en.wikipedia.org/wiki/Stack_(abstract_data_type)">Stack</a>, which follows the last-in-first-out (LIFO) approach.</p><p>With DFS, our step by step looks like this:</p><ol><li>We first add the root to the stack.</li><li>We loop while the stack is not empty (that means we still have nodes to visit).</li><li>We pop an item from the stack to be our new currentNode.</li><li>We use fs.readdirSync(currentNode.path) to get the node&#39;s sub-directories and files.</li><li>For each one of them, we create a node and add it to the currentNode.children array. If it&#39;s a directory we also push it onto the stack to visit it later.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/879/0*SchG2e_yW14BM6Hj.gif" /></figure><p>In the end, we’ve visited all the directories, files and sub-directories and built our tree. The implementation looks like this:</p><pre>function buildTree(rootPath: string) {<br>  const root = new TreeNode(rootPath);<br><br>  const stack = [root];<br><br>  while (stack.length) {<br>    const currentNode = stack.pop();<br><br>    if (currentNode) {<br>      const children = fs.readdirSync(currentNode.path);<br><br>      for (let child of children) {<br>        const childPath = `${currentNode.path}/${child}`;<br>        const childNode = new TreeNode(childPath);<br>        currentNode.children.push(childNode);<br><br>        if (fs.statSync(childNode.path).isDirectory()) {<br>          stack.push(childNode);<br>        }<br>      }<br>    }<br>  }<br><br>  return root;<br>}</pre><p>We used fs.readdirSync as before to discover the children of a folder. 
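</p><p>Once buildTree returns, the whole tree lives in memory, and the same stack technique can walk it again. As an illustration, here is a hypothetical helper (not part of the original post) that collects every path in the finished tree:</p>

```javascript
// Hypothetical helper: depth-first walk over an already-built tree,
// collecting the path of every node (directories and files alike).
function collectPaths(root) {
  const paths = [];
  const stack = [root];
  while (stack.length) {
    const node = stack.pop();
    paths.push(node.path);
    for (const child of node.children) {
      stack.push(child);
    }
  }
  return paths;
}
```

<p>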
We also used fs.statSync to read the stats of the current path, which allows us to ask whether or not the child we are looking at is a directory.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*t15oVdrVWaz0egJn.png" /></figure><p>Green tests, yay 🙌, we have solved the problem of building the tree structure! When we log our root we are able to see its properties.</p><pre>TreeNode {<br>  path: &#39;test/fixtures&#39;,<br>  children: [<br>    TreeNode {<br>      path: &#39;test/fixtures/example.js&#39;,<br>      children: []<br>    },<br>    TreeNode {<br>      path: &#39;test/fixtures/utils&#39;,<br>      children: [Array]<br>    }<br>  ]<br>}</pre><h3>What’s next?</h3><p>We got the desired output, but there is more we could do. For example, we can add a filter to exclude files of a certain extension from our tree. I’ll do that since I want to visualize .js files only.</p><p>There is also the possibility of adding properties like type, extension, size (...) to our TreeNode.</p><p>The next chapter will leverage this newly created structure to parse every JavaScript file in it and compute metrics about the code!</p><p>Was this post useful to you? I’m always keen to hear suggestions and comments. 👋</p><p>I’ll still be cross-posting some things, but I will be posting first on my <a href="http://peaonunes.com">personal site</a> and <a href="https://dev.to/peaonunes/loading-a-directory-as-a-tree-structure-in-node-52bg">DEV.to</a>. You can check out this and other posts below.</p><p><a href="https://peaonunes.com/blog/how-i-avoid-the-side-project-frustration-57eh/">How I avoid the side project frustration</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5c30cf223426" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[JSCity visualizing JavaScript code]]></title>
            <link>https://peaonunes.medium.com/jscity-visualizing-javascript-code-ff6e5b40c409?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/ff6e5b40c409</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[visualization]]></category>
            <category><![CDATA[english]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Sat, 13 Jun 2020 05:49:20 GMT</pubDate>
            <atom:updated>2020-06-13T05:49:20.047Z</atom:updated>
<content:encoded><![CDATA[<p>Hey everyone 👋 it&#39;s been a long time since I last posted here! The main reason is that I&#39;ve been posting more on my dev.to account and building my personal blog on <a href="http://peaonunes.com">peaonunes.com</a>. I do not know if I&#39;ll keep posting on this platform, so if you want to read more of my posts feel free to follow me on <a href="https://dev.to/peaonunes/jscity-visualizing-javascript-code-5cej">dev.to</a>, <a href="https://twitter.com/peaonunes">twitter</a>, or subscribe to the <a href="https://peaonunes.com/blog/rss.xml">RSS feed of my blog</a>.</p><p>This is the very first article that talks about the <strong>idea</strong>, <strong>motivation</strong> and the <strong>MVP</strong> of <a href="https://github.com/peaonunes/jscity">JSCity</a>.</p><p>I learned a lot while doing it, and hopefully you will find something interesting to take out of it as well.</p><h3>react-three-fiber</h3><p>So, late last year I started creating <a href="https://github.com/peaonunes/jscity">JSCity</a>. The ultimate goal was to visualize JavaScript projects as cities. But why?</p><p>It all started when I saw the following <a href="https://twitter.com/0xca0a/status/1184586883520761856">tweet</a> by <a href="https://twitter.com/0xca0a">Paul Henschel</a>.</p><p>My reaction was: “<em>Wow! That’s really cool and it’s built with React, how?!</em>”</p><p>The answer was <a href="https://github.com/react-spring/react-three-fiber">react-three-fiber</a>. I was amazed by the project 🤯. It is a React <a href="https://reactjs.org/docs/reconciliation.html">reconciler</a> for <a href="https://threejs.org/">Threejs</a>, and I got really curious to see what it does.</p><p>On one side there is React, a very popular and robust library to build UIs. 
Because of its declarative nature, React is really good for a lot of things, and the community is always pushing the ecosystem forward.</p><p>On the other side, there is Threejs, the most popular 3D library for JavaScript, with a very powerful and rich API.</p><p>Although it is entirely possible to combine the two, the imperative nature of Threejs makes that non-trivial work. For example, synchronizing React state with the 3D canvas can be painful.</p><p>Now let’s check out this <a href="https://codesandbox.io/s/rrppl0y8l4?file=/src/index.js">sandbox</a>. Feel free to play around with it.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fcodesandbox.io%2Fembed%2Frrppl0y8l4&amp;display_name=CodeSandbox&amp;url=https%3A%2F%2Fcodesandbox.io%2Fs%2Frrppl0y8l4&amp;image=https%3A%2F%2Fcodesandbox.io%2Fapi%2Fv1%2Fsandboxes%2Frrppl0y8l4%2Fscreenshot.png&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=codesandbox" width="1000" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/b1f7f237a140a0ee9a26e3add975bfcf/href">https://medium.com/media/b1f7f237a140a0ee9a26e3add975bfcf/href</a></iframe><p>react-three-fiber builds the bridge between the composable, declarative nature of React and the powerful API of Threejs.</p><p>A lot of the initial setup and complexity is abstracted away. The main part is that it exposes a very good <a href="https://github.com/react-spring/react-three-fiber/blob/master/api.md">API</a> and handy hooks, and maps objects from Threejs to React components.</p><p>Now we can leverage the best of these two different universes.</p><h3>Matching an intention with an idea</h3><p>I immediately wanted to try it. 
But what should I do?</p><p>I made a few examples using react-three-fiber on <a href="https://codesandbox.io">CodeSandbox</a>, but I wanted to keep exploring and build something bigger.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FLlLPJYt7pgolgHLcmq%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fgiphy.com%2Fgifs%2Fbrooklynninenine-nbc-brooklyn-nine-b99-LlLPJYt7pgolgHLcmq&amp;image=https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2FLlLPJYt7pgolgHLcmq%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/7c76c12f21bd3a065bff8f54807248c2/href">https://medium.com/media/7c76c12f21bd3a065bff8f54807248c2/href</a></iframe><p>I’ve used Threejs in the past to build a project called <a href="https://swiftcity.github.io/">SwiftCity</a> (no longer maintained), a visualizer of Swift code. Then it clicked 💡! I could bring that same idea of visualizing code as cities, but this time apply it to the JavaScript universe.</p><p>I would also be able to explore some other interesting topics like react-three-fiber, <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">ASTs</a>, babel, etc.</p><p>Besides, 3D things are so cool, right?</p><h3>JSCity</h3><p>So the JSCity idea is to visualize JavaScript code as cities. My intention was to build a <strong>demo</strong> to get a sense of how that would look.</p><h3>Explaining “The City Metaphor”</h3><p>To summarize, the idea behind the <a href="https://www.inf.usi.ch/faculty/lanza/Downloads/Wett07b.pdf">City Metaphor</a> is to analyze and view software systems as cities. The key point is to explore and understand the complexity of a system by mapping the source code to a city.</p><h3>The concepts</h3><p>Before rushing to implement the MVP, we have to define how to translate JavaScript code into a city. For example, cities have buildings, right? 
And also blocks. So here are the questions that defined the building blocks of our city:</p><ul><li>What piece of code does a <strong>building</strong> represent?</li><li>How do we define the dimensions of a building (width, height and length)?</li><li>How do we show the code hierarchy as areas of a city?</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/0*c08vDJGa2hs-Lvvl.png" /><figcaption>Azures City Topology from Visualizing Software Systems as Cities</figcaption></figure><h4>The buildings in the city</h4><p>In the original metaphor, a building was mapped to represent a Java class. In JavaScript, however, various elements can take on a lot of responsibilities.</p><p>For instance, the buildings could be Functions, Classes, Objects, etc. Beyond that, functions might appear as <a href="https://developer.mozilla.org/en-US/docs/Glossary/IIFE">IIFEs</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions">arrow functions</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures">closures</a>, <a href="https://developer.mozilla.org/en-US/docs/web/JavaScript/Reference/Operators/function">function expressions</a> and more.</p><p>I decided then to <strong>only consider</strong> <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions">simple function definitions</a> and their closures for the purpose of the demo.</p><h4>Dimensions</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/191/0*2Npv718CFLZ_SKFD.png" /></figure><p>Several metrics could be used to define the measurements of a building. I decided that the building <strong>height</strong> would be given by the <strong>number of lines of code</strong> of the analyzed function. 
The <strong>width and length</strong> then would be defined by the number of <strong>calls to other functions</strong> inside the examined function.</p><h4>Topology</h4><p>The city topology is mapped to elements of the system hierarchy.</p><p>Since there are no packages, namespaces or anything equivalent in JavaScript, the mapping will be limited to the following levels: the <strong>project</strong>, <strong>directories</strong> and the <strong>files</strong>.</p><pre>function sum(a, b) {<br>  return a + b<br>}<br><br>function calculate(a,b) {<br>  function log(text) {<br>    console.log(text)<br>  }<br>  log(sum(a, b));<br>  log(minus(a, b));<br>  log(times(a, b));<br>}<br><br>function minus(a, b) {<br>  return a - b<br>}<br><br>function times(a, b) {<br>  return a * b<br>}</pre><p>The code above would look something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/580/0*d0XVmIfWCMbeA11e.png" /></figure><p>Function definitions that belong to a file will appear inside the file limits. In the image above, the light grey area represents a file and its functions are the buildings.</p><p>Function declarations that are declared inside another function (<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures">closures</a>) will have their own block stacked on top of the building that represents their parent function. We can see that the largest building represents a function that has a closure inside of it.</p><h3>Building the MVP</h3><p>Even having previous experience with the concept and knowing more or less what I would need to build, I still wanted to keep it minimal at first.</p><p>I <strong>will not go</strong> into implementation details here. But do not worry! The next posts will be really focused on exploring every part.</p><p>In summary, the steps that I took were:</p><ol><li>Created functions where I could read the content of a JavaScript file and parse it to get its AST. 
For that I&#39;ve used <a href="https://babeljs.io/docs/en/babel-parser">babel-parser</a>.</li><li>Coded something to navigate the AST and collect the necessary data from the functions. Instead of using tools like <a href="https://babeljs.io/docs/en/babel-traverse">babel-traverse</a> for this, I actually implemented it myself (and I regret it 😅)!</li><li>Implemented an algorithm to create the city grid and place the buildings in the right places, from scratch. Although I still have the code from SwiftCity, I can barely understand it anymore 😬.</li><li>Used react-three-fiber to read the city definition and to render the city.</li><li>Used React to build the app and handle the input file that would be imported.</li></ol><h3>The first version</h3><p>I made it in the end 🎉! The live version is available at <a href="https://peaonunes.github.io/jscity/">JSCity</a> and the code is on <a href="https://github.com/peaonunes/jscity">GitHub</a> as well.</p><p>There is a lot that this first version does not cope with. For example, as I opted to consider only simple function definitions, a lot of modern JavaScript code will not be represented. One example is functions defined using the arrow notation.</p><h3>Some conclusions</h3><p>I got inspired and could not rest until trying out react-three-fiber. All because of a tweet. And that is a lesson to me:</p><blockquote><em>Share your work, whatever it is. It may inspire someone.</em></blockquote><p>My initial intention was to focus on experimenting with 3D through react-three-fiber, but what ended up happening was me spending most of the time trying to get a good enough city creation algorithm... That was hard! 
Another lesson here was:</p><blockquote><em>Choose a project that will let you focus on what you want to learn.</em></blockquote><p>Nonetheless, it was really fun to build: I got to play around with other nice tools like babel-parser.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FifYxDMBr1XP8hQvPeY%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fgiphy.com%2Fgifs%2Fbrooklynninenine-nbc-brooklyn-nine-b99-ifYxDMBr1XP8hQvPeY&amp;image=https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2FifYxDMBr1XP8hQvPeY%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/6d7d18ad227979a65311085ba26d84d1/href">https://medium.com/media/6d7d18ad227979a65311085ba26d84d1/href</a></iframe><h3>What is next?</h3><p>I got it working. Now I want to make it better.</p><p>The code is very limited and fragile at the moment. I want to extend JSCity so it can load a directory and build the entire city from that.</p><p>From now on I will be diving deep into each part of the implementation, one at a time. I’ll be sharing it through posts, and I hope some of these things are interesting to someone else too!</p><p>Let me know what you think about the idea. Thoughts, suggestions and comments are always welcome. See you in the next chapter.</p><p>Feel free to reach out to me on <a href="https://twitter.com/peaonunes/">twitter</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ff6e5b40c409" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Feature Flag approach for React using Apollo Client ]]></title>
            <link>https://medium.com/open-graphql/feature-flag-approach-for-react-using-apollo-client-5bfbd99c6cbd?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/5bfbd99c6cbd</guid>
            <category><![CDATA[graphql]]></category>
            <category><![CDATA[cache]]></category>
            <category><![CDATA[apollo-client]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[feature-flags]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Fri, 12 Oct 2018 19:29:27 GMT</pubDate>
            <atom:updated>2020-05-09T02:47:08.213Z</atom:updated>
<content:encoded><![CDATA[<p><a href="https://martinfowler.com/articles/feature-toggles.html">Feature flag</a> (or feature toggle, feature switch…) comes in different shapes and implementations; however, it is a well-known and powerful technique that allows teams to modify system behaviour without changing code.</p><p>The idea behind it is: enable or disable features at execution time, without any deployment. There are <a href="https://github.com/search?q=feature+flag">hundreds of implementations</a> across different languages, and the applications are many: A/B testing, toggling app configuration, delivering new features gradually, etc.</p><p>This post suggests an approach for implementing feature-flag React components that consume a GraphQL API, using <a href="https://www.apollographql.com/docs/react/">Apollo Client</a> for that!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UjQvxWY9Nq2ysewq" /><figcaption>“close-up photo of guitar” by <a href="https://unsplash.com/@bechbox?utm_source=medium&amp;utm_medium=referral">Mikkel Bech</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h4>EnabledFeatures</h4><p>Let&#39;s first assume the schema below as our main schema.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/450cb6cdea66b4b6130184a27a906b0e/href">https://medium.com/media/450cb6cdea66b4b6130184a27a906b0e/href</a></iframe><p>In this schema, if a Feature is present in the array it means that the user has that feature enabled. The lack of a feature means that the feature is not visible/available for that user.</p><blockquote>Disclaimer: The post assumes you are identifying the user through headers such as <a href="https://developer.mozilla.org/pt-BR/docs/Web/HTTP/Headers/Authorization">Authorization</a>. 
Learn more about user authentication with Apollo in their <a href="https://www.apollographql.com/docs/react/networking/authentication/">docs</a>.</blockquote><p>Having this contract, we are now able to fetch user features by querying enabledFeatures.</p><pre>query {<br>  enabledFeatures {<br>    name<br>  }<br>}</pre><p>You can see a live query in the <a href="https://codesandbox.io/s/graphql-feature-flag-7jxju">CodeSandbox</a> of our example.</p><p>To query with Apollo Client, though, we need to create the query definition, which will be passed as an argument to the query executor.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e9817d01397ebb456a3ca640723544dd/href">https://medium.com/media/e9817d01397ebb456a3ca640723544dd/href</a></iframe><p>The snippet above defines our query, where we fetch the name of each feature.</p><h4>FlaggedFeature</h4><p>Now that we have a contract and a way to query, we have to implement the component which in fact queries the user features.</p><p>We will use the Query component as indicated by the <a href="https://www.apollographql.com/docs/react/essentials/queries.html">docs for querying</a>. Since the Query component provides a render-props API, we can keep this strategy in our implementation.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/367e9373577c1a7721c77758227b8c15/href">https://medium.com/media/367e9373577c1a7721c77758227b8c15/href</a></iframe><p>When using FlaggedFeature, you should pass a children function to the component. 
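</p><p>In case the embedded snippet does not load in your reader, the core check inside such a component can be sketched as a plain predicate (isFeatureEnabled is a hypothetical name; the shape of the data comes from the enabledFeatures query above):</p>

```javascript
// Hypothetical helper: decides whether a feature is enabled, given the
// array returned by the enabledFeatures query.
function isFeatureEnabled(enabledFeatures, featureName) {
  return enabledFeatures.some(feature => feature.name === featureName);
}

// Inside the component's render-props callback this could be used as:
// const enabled = !loading && !error && isFeatureEnabled(data.enabledFeatures, name);
```

<p>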
If you are not familiar with render props, read <a href="https://reactjs.org/docs/render-props.html">this</a>.</p><p>The component queries enabledFeatures, and while it is not ready it returns the loading or error variables from the Query component.</p><p>Once the data is retrieved, enabled is set to the result of the filter, and you will know whether or not the feature is enabled for this user.</p><p>With the following example, you are already able to render a component if feature1 is available for the user, or a fallback strategy instead.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/688a8e79f922d5903154509e700f7c30/href">https://medium.com/media/688a8e79f922d5903154509e700f7c30/href</a></iframe><h4>Caching and EnabledFeatures</h4><p>Although our FlaggedFeature is enough to determine whether a feature is enabled, it queries enabledFeatures whenever the component is mounted. Depending on the application and the goal of your flag, it can degrade the user experience, because the user will have to wait for the query to finish.</p><p>Thankfully, Apollo Client has an <a href="https://www.google.com/search?client=opera&amp;q=apollo+in+memory+cache&amp;sourceid=opera&amp;ie=UTF-8&amp;oe=UTF-8">in-memory cache</a> implementation that caches query results by default! Knowing that, we deduce that FlaggedFeature will be slower only on its first execution. After that, the result will be cached. However, we can go further with the cache by implementing another strategy.</p><p><strong>What if we pre-cache the user features?</strong> If we query the features beforehand, they will already be available in the cache when they are actually needed.</p><p>To achieve that, we can implement a pretty similar component, which I called EnabledFeatures. This component follows the same principles as FlaggedFeature, but it is not concerned about any specific feature. 
It is only concerned with querying the API.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d8011b03731f055fd6b64d08b350351b/href">https://medium.com/media/d8011b03731f055fd6b64d08b350351b/href</a></iframe><p>Like the first component we built, this implementation provides the query state via render props and returns a ready property when its work is finished.</p><p>A good place to use EnabledFeatures would be wrapping your App, so that the component queries the user features first. Or perhaps you would wrap only a portion of your App that is controlled by the feature flags; the query will then be executed whenever you reach that portion of the App.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/95935b05358cb796108e53d1948ea483/href">https://medium.com/media/95935b05358cb796108e53d1948ea483/href</a></iframe><p>If the immediate children do not depend on the flags, they can be rendered right away. You may ignore the loading state and return your App.</p><p>In the example, EnabledFeatures queries the API and relies on Apollo to cache the result in memory. So when FlaggedFeature renders, instead of making a network call, the component first hits the local cache, fetches the results from there, and saves network time for you! 🎉</p><p>The main benefit of using EnabledFeatures is that you are able to query all the features before you actually need them. Rendering this component as a wrapper around your application does not hurt the user experience; it actually gives you a way of showing some extra feedback, and when the features are needed, accessing them is much faster.</p><p>This approach reduces network calls thanks to the Apollo cache and improves the user experience. 
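</p><p>The caching behaviour we rely on can be sketched in a few lines of plain JavaScript (this illustrates the idea only; it is not Apollo’s InMemoryCache API):</p>

```javascript
// Illustrative sketch: the first lookup triggers the fetch, and every
// later lookup is served from memory — the effect pre-caching gives us.
function createFeatureCache(fetchFeatures) {
  let cached = null;
  let networkCalls = 0;
  return {
    getEnabledFeatures() {
      if (cached === null) {
        networkCalls += 1; // only the first call hits the "network"
        cached = fetchFeatures();
      }
      return cached; // every later call is a cache hit
    },
    getNetworkCalls() {
      return networkCalls;
    },
  };
}

// A fake fetcher standing in for the GraphQL query:
const cache = createFeatureCache(() => [{ name: "feature1" }]);
cache.getEnabledFeatures(); // network call
cache.getEnabledFeatures(); // cache hit
console.log(cache.getNetworkCalls()); // 1
```

<p>This is why rendering EnabledFeatures early pays off: it triggers the single network call up front, so later FlaggedFeature renders become cache hits.</p><p>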
Be aware, though, that the flags will stay in the in-memory cache until the user refreshes the page.</p><h4>What&#39;s next?</h4><p>Now that the basic flow is done, you can think about evolving the schema and features. For example, you might want to add options or variations as a field of Feature; then you can branch the feature implementation depending on its variation or properties. You can also make the components more generic by passing the query to them as a prop.</p><p>In the end, whenever you implement new features using GraphQL and Apollo, keep in mind that the cache can be a valuable ally in improving the user experience.</p><p>Comments and suggestions are welcome as always! 😊</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5bfbd99c6cbd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/open-graphql/feature-flag-approach-for-react-using-apollo-client-5bfbd99c6cbd">Feature Flag approach for React using Apollo Client 🚀</a> was originally published in <a href="https://medium.com/open-graphql">Open GraphQL</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Apollo Multiple Clients with React?]]></title>
            <link>https://medium.com/open-graphql/apollo-multiple-clients-with-react-b34b571210a5?source=rss-11295a0a71b8------2</link>
            <guid isPermaLink="false">https://medium.com/p/b34b571210a5</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[apollo-client]]></category>
            <category><![CDATA[multiple-clients]]></category>
            <category><![CDATA[graphql]]></category>
            <category><![CDATA[apollo]]></category>
            <dc:creator><![CDATA[Rafael Nunes]]></dc:creator>
            <pubDate>Thu, 09 Aug 2018 00:19:02 GMT</pubDate>
            <atom:updated>2019-08-07T02:35:44.896Z</atom:updated>
            <content:encoded><![CDATA[<p><em>updated at: 06/08/2019</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/381/1*hQVG0syimTtmYpw6ljRjag.png" /><figcaption>React + Apollo Client + GraphQL = ❤</figcaption></figure><p>This quick post explains how to use different Apollo clients in the same React application and, in the end, discusses other approaches for working with multiple GraphQL APIs. This is not intended to question the <a href="https://graphql.org">GraphQL</a> philosophy by any means 😝!</p><p>I wrote this because I found myself wondering how I could use multiple clients to query different GraphQL APIs from my React application. It turns out there were lots of issues in the Apollo GitHub project discussing the <strong>need</strong> and presenting suggested implementations for it.</p><blockquote>TL;DR: passing any ApolloClient instance to Query/Mutation/Subscription components as props works just fine! Check: <a href="https://github.com/peaonunes/apollo-multiple-clients-example">https://github.com/peaonunes/apollo-multiple-clients-example</a></blockquote><p>Some links to related issues, discussions, and proposals are listed below. Some of the old proposals were indeed merged and shipped with old react-apollo versions. 
However, the approach to using the Apollo client and querying has changed a lot (for the better) since 2.1.</p><ul><li><a href="https://github.com/apollographql/react-apollo/pull/481">https://github.com/apollographql/react-apollo/pull/481</a></li><li><a href="https://github.com/apollographql/react-apollo/issues/464">https://github.com/apollographql/react-apollo/issues/464</a></li><li><a href="https://github.com/apollographql/react-apollo/issues/1588">https://github.com/apollographql/react-apollo/issues/1588</a></li><li><a href="https://github.com/apollographql/react-apollo/pull/729">https://github.com/apollographql/react-apollo/pull/729</a></li></ul><h3>Why would we need multiple clients?</h3><p><a href="https://github.com/apollographql/apollo-client">Apollo Client</a> accepts only one uri on initialization; therefore, it is meant to be used with one client at a time.</p><pre>import ApolloClient from &quot;apollo-boost&quot;;<br><br>const client = new ApolloClient({<br> uri: &quot;https://48p1r2roz4.sse.codesandbox.io&quot;<br>});</pre><p>So if your React application needs to retrieve data from two different GraphQL services, for example, you cannot use the same client or provider instance.</p><p>In my case specifically, I was just looking for a quick-win implementation to get data from two GraphQL APIs and validate a solution. I was not worrying too much about schema collisions, since the types, cache, state (…) would not overlap.</p><p>In my scenario, it would make sense to have a way of switching clients when querying APIs with Apollo. 
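</p><p>As a rough sketch of what “switching clients” means, imagine keeping one client per GraphQL endpoint in a small registry (illustrative code only; createClient stands in for an ApolloClient constructor — nothing like this ships with Apollo itself):</p>

```javascript
// Hypothetical registry: lazily creates and reuses one client per uri,
// so each part of the app can grab the client for the API it talks to.
function createClientRegistry(createClient) {
  const clients = new Map();
  return function clientFor(uri) {
    if (!clients.has(uri)) {
      clients.set(uri, createClient(uri)); // create once per endpoint
    }
    return clients.get(uri); // reuse the same instance afterwards
  };
}

// A fake factory standing in for `new ApolloClient({ uri })`:
const clientFor = createClientRegistry((uri) => ({ uri }));
const a = clientFor("https://api-one.example/graphql");
const b = clientFor("https://api-one.example/graphql");
console.log(a === b); // true — same client reused for the same endpoint
```

<p>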
In the current approach, though, you wrap your entire application with the ApolloProvider component, which passes the client down to the application through the context.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ecf80b9c93a9250c0888802306ab9af8/href">https://medium.com/media/ecf80b9c93a9250c0888802306ab9af8/href</a></iframe><p>That actually makes it simple to query data using the <a href="https://www.apollographql.com/docs/react/essentials/queries/">Query Component</a>, but it also means that the client provided via context is the only one used when querying.</p><p>I spent some time looking through numerous issues and related projects, and it turns out that there is a way of overriding the context client for the Query and Mutation components by passing another client through props 🎉 🎉 !</p><pre>&lt;Query client={anotherClient} query={query}&gt;<br> {({ data }) =&gt; (&lt;div&gt;{data.name}&lt;/div&gt;)}<br> &lt;/Query&gt;</pre><blockquote><strong>Update, Aug 2019:</strong> Although they have changed the implementation, it still works. <a href="https://github.com/apollographql/react-apollo/blob/master/packages/components/src/Query.tsx#L17">https://github.com/apollographql/react-apollo/blob/master/packages/components/src/Query.tsx#L17</a></blockquote><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f2fba5ad8b2244ee87c7f444ccf3a3aa/href">https://medium.com/media/f2fba5ad8b2244ee87c7f444ccf3a3aa/href</a></iframe><p>This feature is not mentioned anywhere in the <a href="https://www.apollographql.com/docs/react/essentials/get-started/">official documentation</a>. We can indeed pass any client to the components, and they will give preference to the one passed via props rather than the one from context. 
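</p><p>The precedence rule boils down to something like this (a simplified sketch, not react-apollo’s actual source):</p>

```javascript
// Simplified picture of how a Query/Mutation component picks its client:
// an explicit client prop overrides the one injected by ApolloProvider.
function resolveClient(propsClient, contextClient) {
  if (propsClient) return propsClient; // explicit override via props
  if (contextClient) return contextClient; // default from ApolloProvider
  throw new Error("Could not find an Apollo client in props or context");
}

const contextClient = { name: "default" };
const secondClient = { name: "secondAPI" };
console.log(resolveClient(undefined, contextClient).name); // "default"
console.log(resolveClient(secondClient, contextClient).name); // "secondAPI"
```

<p>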
So we could write:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c9b08dfc140a3b872a071228a6aba7fe/href">https://medium.com/media/c9b08dfc140a3b872a071228a6aba7fe/href</a></iframe><p>I have implemented a runnable example that uses two different clients in this repository: <a href="https://github.com/peaonunes/apollo-multiple-clients-example">https://github.com/peaonunes/apollo-multiple-clients-example</a></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FfLyBfKE6A4KKV9gQpa%2Ftwitter%2Fiframe&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2FfLyBfKE6A4KKV9gQpa%2Fgiphy.gif&amp;image=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2FfLyBfKE6A4KKV9gQpa%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="250" frameborder="0" scrolling="no"><a href="https://medium.com/media/74573b13817cba4fc85f39f13b9f9d7a/href">https://medium.com/media/74573b13817cba4fc85f39f13b9f9d7a/href</a></iframe><p>Even though this approach is functional, keep in mind that Apollo features won’t apply across both clients unless you pass the same cache to them (which might be a risk in case of schema collisions); managing other features will be up to you. 
Apollo features will be compromised, and as the application grows your codebase becomes heavier and development will probably be slower.</p><h3>What would be the ideal approach then?</h3><h4>Solving the problem in the Frontend</h4><p>Out of the discussion around this article, some approaches have come up from folks who built their own abstractions and implementations to solve this issue.</p><p><strong>Community package implementations</strong></p><p><a href="https://medium.com/u/bba79c257560">Michael Duve</a> wrote <a href="https://www.npmjs.com/package/@titelmedia/react-apollo-multiple-clients">react-apollo-multiple-clients</a>, a package that allows you to switch between clients. It sets up multiple providers and gives you a HOC that accepts a client prop to switch to the desired client consumer. <a href="https://medium.com/@dazlious/hey-rafael-your-approach-inspired-me-to-find-a-more-generic-solution-to-the-problem-at-hand-9714a47fabfd">Discussion</a></p><p><a href="https://medium.com/u/e0610a7550d5">Paul Grieselhuber</a> suggested in his <a href="https://www.loudnoises.us/next-js-two-apollo-clients-two-graphql-data-sources-the-easy-way/">post</a> an approach where everything works through a single client, allowing you to simply toggle the context to select the uri to which the client will dispatch requests. 
You can follow the discussion <a href="https://medium.com/@paulgrieselhuber/while-this-looks-like-a-great-solution-for-data-from-multiple-graphql-sources-and-im-sure-works-4dc4da33d3cf">here</a>.</p><p><strong>Client-side schema stitching</strong></p><p>Despite the support on the server side, it is not common to see people trying to solve the issue right on the client; there are some issues looking for or requesting stitching on the client side, #797 for example.</p><p>The company Hasura, though, points out an implementation of <a href="https://dev.to/peaonunes/blog.hasura.io/client-side-graphql-schema-resolving-and-schema-stitching-f4d8bccc42d2">client-side schema stitching</a>, and it might be sufficient in your case.</p><p>Although I think these approaches solve the problem, I also think they can greatly increase the complexity of the frontend application as it grows. From my point of view, the work should be done on the backend, by providing a single interface for all the different APIs.</p><h4>Gateways for Frontends</h4><p>The <a href="https://microservices.io/patterns/apigateway.html">API Gateway</a> is a well-known pattern with increasing adoption in our “microservice boom” age: a single interface between the services and the clients.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/454/1*82bUhr5v8oRjMTCNz4de3w.png" /></figure><p>It seems to be a consensus in the GraphQL world as well that the API Gateway is the way to go for connecting different GraphQL APIs. 
Sometimes it even goes beyond that, since the gateway itself can expose a GraphQL interface over other <strong>REST</strong> and <strong>RPC</strong> APIs.</p><p>The real problem with serving different APIs through a single gateway is how to manage and orchestrate the different schemas.</p><p><strong>Schema Stitching</strong></p><p>The first approach the Apollo team advocated for was <a href="https://www.apollographql.com/docs/graphql-tools/schema-stitching/">Schema Stitching</a>, as I mentioned earlier in this post.</p><ul><li><a href="https://blog.apollographql.com/the-next-generation-of-schema-stitching-2716b3b259c0">The next generation of schema stitching</a></li><li><a href="https://codeburst.io/nodejs-graphql-micro-services-using-remote-stitching-7540030a0753">GraphQL Remote Stitching Micro-services with NodeJS</a></li></ul><p>After some time of development and feedback from the community, this approach was considered fragile and it is now deprecated.</p><p><strong>Apollo Federation</strong></p><p>Apollo recently launched a new concept for solving the problem of managing different schemas through a gateway, called <a href="https://blog.apollographql.com/apollo-federation-f260cf525d21">Apollo Federation</a>.</p><blockquote>“Apollo Federation is our answer for implementing GraphQL in a microservice architecture. It’s designed to replace schema stitching and solve pain points such as coordination, separation of concerns, and brittle gateway code.” James Baxley III</blockquote><p>They have already published the Federation spec, and it has implementations in some languages, <a href="https://github.com/apollographql/apollo-server/tree/master/packages/apollo-gateway">apollo-gateway</a>, for example. The idea is to have a gateway that composes the schemas; the federated services can connect with each other through keys (just like primary keys), and they are also able to <strong>extend</strong> types. 
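</p><p>As a rough illustration of those ideas (the types here are made up for the example, following the Federation spec syntax), one service can declare an entity with a key, and another service can extend it:</p>

```graphql
# Accounts service: owns the User entity, identified by the "id" key.
type User @key(fields: "id") {
  id: ID!
  name: String!
}

# Reviews service: extends User by the same key and adds its own field.
extend type User @key(fields: "id") {
  id: ID! @external
  reviews: [Review!]!
}

type Review {
  body: String!
}
```

<p>The gateway composes both schemas into one graph, so clients query a single endpoint without knowing which service owns which field.</p><p>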
All of this using just the regular GraphQL spec.</p><p>I recommend taking the time to watch the video below and to play around with this promising approach.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FlRI0HfXBAm8%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DlRI0HfXBAm8&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FlRI0HfXBAm8%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9c7d860a7b410c03f55cac5ac8698e7f/href">https://medium.com/media/9c7d860a7b410c03f55cac5ac8698e7f/href</a></iframe><p>I have personally tried it, and I am seeing companies working on solutions based on this new approach. It is also notable that there are still some challenges and room for further discussion, like managing authentication/authorization, how flexible the gateway should be, etc. 
Hopefully, Federation will keep evolving based on feedback from the community and companies.</p><h3>Conclusion</h3><p>As I mentioned before, this post is not about questioning the &quot;right&quot; way of querying multiple GraphQL APIs, but about pointing out approaches that hopefully are enough to solve the issues of today.</p><p>I think the whole discussion about using API Gateways and managing different GraphQL schemas is just beginning, and the community will keep working on nicer and better solutions.</p><p><em>I am more than happy to read suggestions and engage in discussions, so leave your thoughts below.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b34b571210a5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/open-graphql/apollo-multiple-clients-with-react-b34b571210a5">Apollo Multiple Clients with React?</a> was originally published in <a href="https://medium.com/open-graphql">Open GraphQL</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>