<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by thilonel on Medium]]></title>
        <description><![CDATA[Stories by thilonel on Medium]]></description>
        <link>https://medium.com/@thilonel?source=rss-41e076cea28------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*Fct-HrhOTk_EVBor2hvRqQ.jpeg</url>
            <title>Stories by thilonel on Medium</title>
            <link>https://medium.com/@thilonel?source=rss-41e076cea28------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 22:08:33 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@thilonel/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[fixing request queuing with server-sent events using Ruby on Rails]]></title>
            <link>https://medium.com/@thilonel/fixing-request-queuing-with-server-sent-events-using-ruby-on-rails-86a9177cb2f?source=rss-41e076cea28------2</link>
            <guid isPermaLink="false">https://medium.com/p/86a9177cb2f</guid>
            <category><![CDATA[web-performance]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[server-sent-events]]></category>
            <category><![CDATA[new-relic]]></category>
            <dc:creator><![CDATA[thilonel]]></dc:creator>
            <pubDate>Mon, 14 Nov 2022 15:19:01 GMT</pubDate>
            <atom:updated>2022-11-14T15:19:01.394Z</atom:updated>
            <content:encoded><![CDATA[<p>In this article I’ll demonstrate a request queuing problem on a Ruby on Rails application and walk you through solving it using server-sent events.</p><h3>The problem</h3><p>There’s a big slowdown — the app is unusable — during high-traffic hours.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/635/1*yTc5DtCfBSnn37z7PY8_hg.png" /><figcaption>requests are waiting for seconds to be served</figcaption></figure><p>As soon as I opened NewRelic, I saw that something was wrong.<br>The picture you see above is straight from the dashboard.</p><p>Today — finally — enough users reported slowness in the system for me to be able to dedicate time to solving this issue.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/637/1*tjShPbEW0LJh3oPEmnSAug.png" /></figure><p>These web transaction times (also from the dashboard) indicate that we have crazy slow endpoints. That means we’d need ridiculously high concurrency — more than the total number of tabs our users open at the same time — to be able to stay responsive and serve all the requests fast.</p><p>These endpoints contain complicated business logic, and while they could all be optimized, that would take a lot of time and testing while our users suffer.</p><h3>The idea</h3><p>By decoupling the HTTP response from the request, we can serve other requests while a background worker is creating the response.</p><p>How can we do this? How can we notify our user that the response is ready? WebSockets, polling, and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events">server-sent events</a>.</p><p>With SSE we can make HTTP requests async, and it’s a lighter and easier solution than WebSockets. 
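</p><p>Under the hood, a text/event-stream response body is just plain text: newline-delimited fields, with a blank line terminating each event. A rough sketch (the field names are from the SSE spec; the payload lines are made up):</p><pre>retry: 300<br>event: message<br>data: The current time is: 2022-11-14 15:19:01 UTC<br><br>event: message<br>data: The current time is: 2022-11-14 15:19:03 UTC</pre><p>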
You can just do this from the browser console:</p><pre>source = new EventSource(&#39;/my_endpoint&#39;);<br>source.onmessage = (event) =&gt; {<br>  console.log(&quot;EventSource message received:&quot;, event);<br>};</pre><p>And from Rails:</p><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br><br>  def index<br>    response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>    sse = SSE.new(response.stream, retry: 300, event: &quot;open&quot;)<br>    sse.write(&quot;Hello!&quot;, event: &quot;message&quot;)<br>  ensure<br>    sse.close<br>  end<br>end</pre><p>And ta-dam! Well, almost. Here’s <a href="https://medium.com/@thilonel/how-to-use-rails-actioncontroller-live-sse-server-sent-events-d9a04a286f77">my article</a> on how to start using SSE with Rails.</p><h3>Verifying the idea</h3><p>There’s a bit more to this. Is this really going to free up our precious threads? Let’s test!</p><p>With the following endpoint:</p><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br><br>  def index<br>    response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>    response.headers[&#39;Last-Modified&#39;] = Time.now.httpdate<br><br>    sse = SSE.new(response.stream, retry: 300, event: &quot;open&quot;)<br><br>    loop do<br>      sleep 2<br>      sse.write(&quot;The current time is: #{Time.current.to_s}&quot;, event: &quot;message&quot;)<br>    end<br>  rescue ActionController::Live::ClientDisconnected<br>    sse.close<br>  ensure<br>    sse.close<br>  end<br>end</pre><p>I will start Rails with a single thread; the idea is that this way, one thread will be able to serve multiple requests without blocking:</p><pre>RAILS_MAX_THREADS=1 PORT=3001 bundle exec rails s</pre><p>and open 3 browser tabs with my Rails app and run the following from the console:</p><pre>source = new 
EventSource(&#39;/sse&#39;);<br>source.onopen = (event) =&gt; {<br>  console.log(&quot;The connection has been established.&quot;, event);<br>};<br>source.onmessage = (event) =&gt; {<br>  console.log(&quot;EventSource message received:&quot;, event);<br>};<br>source.onerror = (err) =&gt; {<br>  console.error(&quot;EventSource failed:&quot;, err);<br>};</pre><ul><li>First tab connects nicely and starts printing out the messages as expected.</li><li>Second tab does the same.</li><li>Third tab is stuck pending.</li><li>I try to open a 4th tab, but of course, the request doesn’t even reach my web server.</li></ul><p>Well of course. :)</p><blockquote><strong>sleep</strong>(*args) <em>public<br></em>Suspends the current thread for <em>duration</em> seconds</blockquote><p>No surprise here, we basically put our server to permanent sleep. This is exactly the state that the requests are experiencing when they’re reported as “queuing” by NewRelic.</p><h3>True asynchronicity</h3><p>To free up the Rails thread while it’s computing and allow it to serve new requests, we can use <a href="https://github.com/rack/rack/blob/main/SPEC.rdoc#hijacking-">Rack Partial Hijack</a>. 
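</p><p>In Rack terms, a partial hijack means setting a rack.hijack response header to a callable; the server sends the status and headers, then invokes the callable with the raw stream instead of iterating a response body. A minimal sketch outside of Rails (a plain rackup file; the payload is made up):</p><pre>run -&gt;(env) {<br>  hijack = -&gt;(stream) {<br>    # we own the stream now: write SSE frames, then close it<br>    stream.write(&quot;data: hello\n\n&quot;)<br>    stream.close<br>  }<br><br>  [200, { &quot;content-type&quot; =&gt; &quot;text/event-stream&quot;, &quot;rack.hijack&quot; =&gt; hijack }, []]<br>}</pre><p>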
We’ll be using partial hijack, as full hijack is only supported with HTTP/1, which <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events">has limitations on SSE</a> (see the warning section).</p><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br><br>  def index<br>    response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>    response.headers[&#39;Last-Modified&#39;] = Time.now.httpdate<br><br>    response.headers[&quot;rack.hijack&quot;] = proc do |stream|<br>      Thread.new { perform_task(stream) }<br>    end<br><br>    head :ok<br>  end<br><br>  def perform_task(stream)<br>    sse = SSE.new(stream, retry: 300, event: &quot;open&quot;)<br><br>    loop do<br>      sleep 3<br>      sse.write(&quot;Time is: #{Time.current.to_s}&quot;, event: &quot;message&quot;)<br>    end<br>  rescue ActionController::Live::ClientDisconnected, Errno::EPIPE<br>    sse.close<br>  ensure<br>    sse.close<br>  end<br>end</pre><h3>Browser limit</h3><p>Using the code above, I could open 6 tabs and they were all promptly receiving their time updates. However, I couldn’t open a 7th tab.<br>I warned you about the limitation of HTTP/1… since I’m on Rails and can’t use HTTP/2, the browser limits us to 6 connections to this domain. Opening a new (normal / incognito / private browsing) window is an easy way to bypass this limitation.</p><p>That said, our users will still complain — rightfully — about how the 7th tab fails to open.</p><p>In my case, when I just want to unblock the server, the solution is to retry connecting and close down this connection from the server side after the first message is delivered.</p><h3>Server limit</h3><p>On the server side, think about where your code runs. 
For example, the <a href="https://devcenter.heroku.com/articles/limits#processes-threads">Heroku thread limits</a> could surprise you.</p><p>Also, now that you can accept a huge number of slow, resource-heavy requests, you can easily run out of memory (on the web, worker, or DB side). It might be a good idea to offload the processing to workers (e.g. Sidekiq) and have the thread just poll for the rendered response, which you can put under a Redis key.</p><h3>Final code</h3><p>We’d like to close the connection once the first message has arrived, or if the request times out (like when you forget to start Sidekiq :)).</p><pre>source = new EventSource(&#39;/sse?delay=0&amp;wait_limit=5&#39;);<br>source.onopen = (event) =&gt; {<br>    console.log(&quot;The connection has been established.&quot;, event);<br>};<br>source.onmessage = (event) =&gt; {<br>    console.log(&quot;EventSource message received:&quot;, event);<br>    source.close();<br>};<br>source.onerror = (err) =&gt; {<br>    if (err.data === &quot;timeout&quot;) {<br>        source.close();<br>        console.error(&quot;EventSource timeout&quot;, err);<br>    } else {<br>        console.error(&quot;EventSource error&quot;, err);<br>    }<br>};</pre><p>and we’ll offload processing to a Sidekiq worker and wait for the response:</p><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br><br>  def index<br>    response.headers[&quot;Content-Type&quot;] = &quot;text/event-stream&quot;<br>    response.headers[&quot;Last-Modified&quot;] = Time.now.httpdate<br><br>    response.headers[&quot;rack.hijack&quot;] = proc do |stream|<br>      Thread.new { perform_task(stream, params) }<br>    end<br><br>    head :ok<br>  end<br><br>  def perform_task(stream, params)<br>    sse = SSE.new(stream, retry: 300, event: &quot;open&quot;)<br><br>    delay = params[:delay].present? ? params[:delay].to_i : 5<br>    response_key = &quot;sse_#{SecureRandom.uuid}&quot;<br>    SseWorker.perform_in(delay.seconds, response_key)<br><br>    delivered = false<br>    wait_limit = params[:wait_limit].present? ? params[:wait_limit].to_i : 30<br>    wait_limit.times do<br>      sleep 1<br><br>      response = Redis::HashKey.new(response_key).all<br>      if response.present?<br>        sse.write({ response: response }, event: &quot;message&quot;)<br>        delivered = true<br>        break<br>      end<br>    end<br><br>    sse.write(&quot;timeout&quot;, event: &quot;error&quot;) unless delivered<br>    sse.close<br>  rescue ActionController::Live::ClientDisconnected, Errno::EPIPE<br>    sse.close<br>  ensure<br>    sse.close<br>  end<br>end</pre><p>The worker will set the response:</p><pre>class SseWorker<br>  include Sidekiq::Worker<br><br>  def perform(response_key)<br>    Redis::HashKey.new(response_key).bulk_set({ hello: &quot;world!&quot; })<br>  end<br>end</pre><p>To test, try to open as many connections as you can and see how they’ll all nicely arrive.</p><h3>Production results</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/650/1*EClnE4GXbwYuJZ4acqIXYw.png" /><figcaption>Results</figcaption></figure><p>You can easily compare this image with the first one, as the usage and the time frame were almost identical. 
We have practically no request queuing anymore, and our users stopped complaining as well.</p><p>We did have to add some memory to our DB instance though, as now the workers are also running these expensive DB queries, not just the web servers.</p><h3>That’s it!</h3><p>If you’re interested in the implementation details, see <a href="https://medium.com/@thilonel/how-to-use-rails-actioncontroller-live-sse-server-sent-events-d9a04a286f77">my other article</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to use Rails ActionController::Live::SSE (server-sent events)]]></title>
            <link>https://medium.com/@thilonel/how-to-use-rails-actioncontroller-live-sse-server-sent-events-d9a04a286f77?source=rss-41e076cea28------2</link>
            <guid isPermaLink="false">https://medium.com/p/d9a04a286f77</guid>
            <category><![CDATA[rails]]></category>
            <category><![CDATA[how-to]]></category>
            <category><![CDATA[server-side-events]]></category>
            <dc:creator><![CDATA[thilonel]]></dc:creator>
            <pubDate>Tue, 01 Nov 2022 17:13:39 GMT</pubDate>
            <atom:updated>2022-11-14T15:20:26.351Z</atom:updated>
            <content:encoded><![CDATA[<p>The <a href="https://api.rubyonrails.org/v6.1.4/classes/ActionController/Live/SSE.html">official Rails API documentation</a> doesn’t say much, but enough to get started. Let’s try to figure out how to use this!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jW46FasopT3_hsL7tCLSgw.png" /></figure><p>Create the controller action as per the example in the guide and define the route.</p><pre>&gt; sse_controller.rb</pre><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br>  <br>  def index<br>    response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>    sse = SSE.new(response.stream, retry: 300, event: &quot;event-name&quot;)<br>    sse.write({ name: &#39;John&#39;})<br>    sse.write({ name: &#39;John&#39;}, id: 10)<br>    sse.write({ name: &#39;John&#39;}, id: 10, event: &quot;other-event&quot;)<br>    sse.write({ name: &#39;John&#39;}, id: 10, event: &quot;other-event&quot;, retry: 500)<br>  ensure<br>    sse.close<br>  end<br>end</pre><pre>&gt; routes.rb</pre><pre>resources :sse, only: [:index]</pre><p>Now we can start calling it from the browser and see what happens. I’m following the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events">official docs from Mozilla</a>.</p><p>In a new tab I open the local Rails app localhost:3001 and then I open a browser console. 
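</p><p>Before the browser, you can also sanity-check the stream from a terminal. curl’s -N flag disables output buffering, so events are printed as they arrive (assuming the app listens on port 3001, as above):</p><pre>curl -N http://localhost:3001/sse</pre><p>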
In the console I’ll connect to this EventSource and set handlers for the 3 different event types described by the docs.</p><pre>source = new EventSource(&#39;/sse&#39;);<br>source.onopen = (event) =&gt; {<br>  console.log(&quot;The connection has been established.&quot;, event);<br>};<br>source.onmessage = (event) =&gt; {<br>  console.log(&quot;EventSource message received:&quot;, event);<br>};<br>source.onerror = (err) =&gt; {<br>  console.error(&quot;EventSource failed:&quot;, err);<br>};</pre><p>Running this, you’ll start receiving a lot of errors. Don’t worry: source.close() will stop the madness.</p><p>Taking another look at our code, we can see that the server and client side events have nothing to do with each other. Let’s change the server side code to return events that we subscribe to.</p><pre>def index<br>  response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>  sse = SSE.new(response.stream, retry: 300, event: &quot;open&quot;)<br>  sse.write({ name: &#39;John&#39;}, event: &quot;message&quot;)<br>  sleep 2<br>  sse.write({ name: &#39;John&#39;}, id: 10, event: &quot;message&quot;)<br>ensure<br>  sse.close<br>end</pre><p>After executing the same code from the console against this one, it looks much better! I got something like this:</p><pre>The connection has been established. Event {…}<br>EventSource message received: MessageEvent {isTrusted: true, data: &#39;{&quot;name&quot;:&quot;John&quot;}&#39;, origin: &#39;http://localhost:3001&#39;, lastEventId: &#39;&#39;, source: null, …}<br>EventSource message received: MessageEvent {isTrusted: true, data: &#39;{&quot;name&quot;:&quot;John&quot;}&#39;, origin: &#39;http://localhost:3001&#39;, lastEventId: &#39;10&#39;, source: null, …}<br>EventSource failed: Event {…}</pre><p>It’s worth going to the Network tab as well to check the request log. What you will notice there is that despite the sleep 2 on the server side, all the messages are delivered at the same time, then the request is just closed. That’s not really async and it really defeats the purpose.</p><p>Luckily, we are not the first ones with this issue, as you can see in this <a href="https://stackoverflow.com/questions/63432012/server-sent-events-in-rails-not-delivered-asynchronously/65127528#65127528">StackOverflow question</a>. The culprit is a middleware called Rack::ETag, which wants to buffer your response. It’s an <a href="https://github.com/rack/rack/issues/1619">issue reported on GitHub</a> and the proposed solution is to use Rack 3, but that’s not an option for me at the time of writing. The suggested workaround there is:</p><pre>response.headers[&#39;Last-Modified&#39;] = Time.now.httpdate</pre><p>Yes, this fixes it. 
We receive the two messages with 2 seconds of delay, yay!</p><p>The next issue I noticed is that if I call source.close() before all the messages are delivered, the server throws an error, so let’s handle that and see what the code will look like:</p><pre>class SseController &lt; ActionController::Base<br>  include ActionController::Live<br><br>  def index<br>    response.headers[&#39;Content-Type&#39;] = &#39;text/event-stream&#39;<br>    response.headers[&#39;Last-Modified&#39;] = Time.now.httpdate<br><br>    sse = SSE.new(response.stream, retry: 300, event: &quot;open&quot;)<br><br>    5.times do<br>      sleep 2<br>      sse.write({ name: &#39;John&#39;}, event: &quot;message&quot;)<br>    end<br>  rescue ActionController::Live::ClientDisconnected<br>    sse.close<br>  ensure<br>    sse.close<br>  end<br>end</pre><p>Perfect, I did source.close() after 3 messages and there were no issues reported on the server side.</p><p>Read <a href="https://medium.com/@thilonel/fixing-request-queuing-with-server-sent-events-using-ruby-on-rails-86a9177cb2f">my other post</a> on how you can use Rails server-sent events to fix responsiveness issues (when requests are queuing for too long)!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[testing Firebase authentication in Go]]></title>
            <link>https://medium.com/@thilonel/testing-firebase-authentication-in-go-aacd4139a218?source=rss-41e076cea28------2</link>
            <guid isPermaLink="false">https://medium.com/p/aacd4139a218</guid>
            <category><![CDATA[firebase]]></category>
            <category><![CDATA[guides-and-tutorials]]></category>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[authentication]]></category>
            <category><![CDATA[golang]]></category>
            <dc:creator><![CDATA[thilonel]]></dc:creator>
            <pubDate>Sun, 23 Jun 2019 09:19:01 GMT</pubDate>
            <atom:updated>2019-06-23T09:19:01.962Z</atom:updated>
            <content:encoded><![CDATA[<p>This puts some information together for convenience and gives a quick overview of using Firebase authentication on your server, for those who are interested or just starting out with it.</p><p>By that I mean the setup where the client (e.g. a mobile or web app) sends an ID token with each request to the server, and using the <a href="https://firebase.google.com/docs/auth/admin/verify-id-tokens">Firebase Admin SDK</a> it is resolved into a User UID, identifying the sender.</p><p>First <a href="https://firebase.google.com/docs/admin/setup/">we need a Firebase client</a>:</p><pre>opt := option.WithCredentialsFile(&quot;serviceAccountKey.json&quot;)<br>firebaseApp, err := firebase.NewApp(context.TODO(), nil, opt)<br>client, err := firebaseApp.Auth(context.TODO())</pre><p>Then we can <a href="https://firebase.google.com/docs/auth/admin/verify-id-tokens">get a UID from the ID token</a>:</p><pre>authToken, err := client.VerifyIDToken(context.TODO(), idToken)<br>authToken.UID</pre><p>That’s quite straightforward, but how can we get an ID token to test with? 
The easiest way I’ve found is to <a href="https://firebase.google.com/docs/auth/admin/create-custom-tokens">create a custom token using the Admin SDK</a> passing the UID we can find on the Firebase console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bpuzgiJXa9NzWmExtvDjKQ.png" /><figcaption>User UID on Firebase console</figcaption></figure><pre>customToken, err := client.CustomToken(context.TODO(), &quot;&lt;UID&gt;&quot;)</pre><p>Then create a Web app on Firebase console and use its <a href="https://firebase.google.com/docs/reference/rest/auth#section-verify-custom-token">API key to transform it into an ID token</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YT8UH-2MwNx6pfcFdncUXA.png" /><figcaption>Create a Web app on console, next screen shows the API key</figcaption></figure><pre>payload, err := json.Marshal(map[string]string{&quot;token&quot;: customToken, &quot;returnSecureToken&quot;: &quot;true&quot;})<br><br>response, err := http.Post(<br>  &quot;https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyCustomToken?key=&lt;API_KEY&gt;&quot;,<br>  &quot;application/json&quot;,<br>  bytes.NewBuffer(payload))<br><br>body, err := ioutil.ReadAll(response.Body)<br><br>type tokensResponse struct {<br>   IDToken string<br>}<br>var tokens tokensResponse<br>json.Unmarshal(body, &amp;tokens)<br><br>tokens.IDToken</pre><p>And that’s it, we have an ID token that we can pass to client.VerifyIDToken.</p><p>Hopefully I could save you some time!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[BitBucket pipelines config with cache for Go]]></title>
            <link>https://medium.com/@thilonel/bitbucket-pipelines-config-with-cache-for-go-8e15e8c7cfda?source=rss-41e076cea28------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e15e8c7cfda</guid>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[configuration]]></category>
            <category><![CDATA[cache]]></category>
            <category><![CDATA[bitbucket-pipelines]]></category>
            <dc:creator><![CDATA[thilonel]]></dc:creator>
            <pubDate>Fri, 21 Jun 2019 05:34:07 GMT</pubDate>
            <atom:updated>2020-04-10T03:12:19.672Z</atom:updated>
            <content:encoded><![CDATA[<p>It took me some time to get the setup just right, so I decided to share it.</p><p>For dependency management I’m using go mod. As far as I know that’s the recommended — and in my experience the most convenient — way.<br>All you do is run go mod init, then check in the created files.</p><p>This is what my current bitbucket-pipelines.yml looks like:</p><pre>image: golang:1.12<br><br>pipelines:<br>  default:<br>    - step:<br>        caches:<br>          - gomodules<br>        script:<br>          - if [ ! -d &quot;vendor&quot; ] || [ -z &quot;$(ls -A vendor)&quot; ]; then go mod vendor; fi<br>          - go test -mod=vendor ./...<br>definitions:<br>  caches:<br>    gomodules: vendor</pre><p>What happens here?<br>Define a cache called gomodules by specifying the directory name vendor.<br>Run go mod vendor if the vendor folder doesn’t exist or is empty.<br>Execute the tests.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MBvsdpiTH6jgNKvzIIO3xA.png" /><figcaption>Result</figcaption></figure><p>FYI: The cache will only be uploaded if the build passes, and if you add a new dependency you’ll have to invalidate the cache from the UI. Maybe I’ll improve this to handle that, or if you do, please comment! :)</p><p>For further information on caching: <a href="https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html">https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html</a></p><p>I appreciate feedback.<br>If this was useful to you, click applaud below. Thank you!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[performance issue: “…searching is unusable for huge accounts…”]]></title>
            <link>https://medium.com/@thilonel/the-issue-searching-is-unusable-for-huge-accounts-5df1985e6f29?source=rss-41e076cea28------2</link>
            <guid isPermaLink="false">https://medium.com/p/5df1985e6f29</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[profiling]]></category>
            <category><![CDATA[css]]></category>
            <category><![CDATA[search]]></category>
            <category><![CDATA[optimization]]></category>
            <dc:creator><![CDATA[thilonel]]></dc:creator>
            <pubDate>Tue, 31 Jan 2017 10:46:17 GMT</pubDate>
            <atom:updated>2022-07-22T07:49:14.459Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>performance issue: “…searching is unusable for huge accounts…”</blockquote><blockquote>“Just tried some searching and couldn’t do much with it — it kills the browser. I guess we should add some debouncing (if not already added) to it and approach searching smarter, the DOM is probably super heavy for that, could be even filtered with ajax.”</blockquote><blockquote>first comment: “Ember UI anyone?”</blockquote><p>After just a few seconds, we are thinking about rewriting something — classic. ;)</p><p>But what is actually taking that much time?<br>As we are talking about a client-side search, we turn to Chrome’s profiler.</p><p><em>Developer Tools / Profiles</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*96uF8gO8TugsuJkf-qcUUA.png" /><figcaption>Yes Chrome, this is exactly what we’d like to see.</figcaption></figure><p>Load the page, start recording, search, stop recording, results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/982/1*COWtfVC_0FTFhbkoLBhHkw.png" /><figcaption>This is indeed slow…</figcaption></figure><p>Let’s not get lost in the numbers too much: 39 seconds, CSS. That’s a clue. 
The source was simple enough to quickly find these:</p><pre>showElements: (elements) -&gt;<br>  @list.find(elements).parents(@listElement).<strong>slideDown()</strong><br><br>hideElements: (elements) -&gt;<br>  @list.find(elements).parents(@listElement).<strong>slideUp()</strong></pre><p>What if we just…</p><pre>showElements: (elements) -&gt;<br>  @list.find(elements).parents(@listElement).<strong>show()</strong><br><br>hideElements: (elements) -&gt;<br>  @list.find(elements).parents(@listElement).<strong>hide()</strong></pre><p>Measuring…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/990/1*NEFMFUhE4NMD3JN5mWwpeg.png" /></figure><p>Voilà!</p>]]></content:encoded>
        </item>
    </channel>
</rss>