<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by nairihar on Medium]]></title>
        <description><![CDATA[Stories by nairihar on Medium]]></description>
        <link>https://medium.com/@nairihar?source=rss-b0ffd91825e5------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*JCT10G29wXphPVyNTrJTPg.png</url>
            <title>Stories by nairihar on Medium</title>
            <link>https://medium.com/@nairihar?source=rss-b0ffd91825e5------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 05:15:56 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@nairihar/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Node.js EventLoop Lag + Kafka Consumer Lag: One Root Cause]]></title>
            <link>https://javascript.plainenglish.io/node-js-eventloop-lag-kafka-consumer-lag-one-root-cause-89465672de13?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/89465672de13</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[distributed-systems]]></category>
            <category><![CDATA[kafka]]></category>
            <category><![CDATA[nodejs]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Mon, 02 Mar 2026 03:51:22 GMT</pubDate>
            <atom:updated>2026-03-02T11:23:16.380Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jSjQyjp-Mfzrqdd0NUsiMA.jpeg" /><figcaption>Generated using <a href="http://leonardo.ai">Leonardo AI</a> + Edited via <a href="https://www.canva.com/">Canva</a></figcaption></figure><h4>The 3-Second Production Mystery That Took Me 20 Days to Solve</h4><p>This is a story about how we found a <strong>3s EventLoop lag (p99)</strong> in one of our microservices while exploring the Kafka consumer lag… and how I tracked it down and fixed it.</p><p>It’s Feb 6. I notice an increase in Kafka lag in our Grafana chart, so I ask team members if they performed any actions that could have increased the traffic. I get a negative answer.</p><p>Together with two other senior engineers, I jump on a call to understand what’s happening. One engineer suggests a hypothesis:</p><blockquote>Every time we restart the servers and reconnect to Kafka, things behave differently at the beginning. Kafka Cluster works efficiently in production at first, but after some time, it might do internal calculations/optimisations and start processing the queues with bigger lags.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jgzqgcmnMc-y1GRvRhOIAQ.png" /><figcaption>Grafana chart showing<strong> Kafka lag</strong> goes down every time the services are restarted</figcaption></figure><p>I was very sceptical of this idea, and didn’t believe the Kafka cluster could behave like that, even though I could see the Grafana chart showing the lag drops to 0 every time we restart our Node.js consumer services.</p><p>I request approval to perform a test in production: simply restarting the services to see if the same thing happens. 
I do it; the lag drops to 0, and I still don’t believe that&#39;s the real reason.</p><p>We spent 3 hours checking the logic, Grafana charts, and anything else we could think of, because I can’t accept that Kafka would behave like that.</p><p>After 3 hours, we find this picture…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xID11Um1Gk3DVNQHY_bytw.png" /><figcaption>Grafana chart that shows <strong>3s Event Loop lag (p99)</strong></figcaption></figure><p>And yet it’s February 26th: after 20 days, I was finally able to start writing this article, because it took that long to track down the issue, reproduce it, test it in dev, and fix it.</p><p>The Event Loop usually works very fast: it’s on the order of milliseconds. Around 10 ms is a pretty good number, but if you see it increasing (or getting much bigger), you’re in huge trouble.</p><p>Even though we had this issue in production for a month, we didn’t notice it because our alerting wasn’t set up properly. We also didn’t feel a big impact because traffic wasn’t that high yet. But once we discovered the issue, the first thing I said was: <strong>we can’t accept more and more traffic, because this will only get worse over time</strong>. We need to understand the root cause and fix it as soon as possible.</p><p>I could clearly see when it started because we had historical Event Loop lag data in Grafana. And guess what the first thing I checked was: what we released that day (Jan 16, based on the picture above) and what changes we included. After reviewing it, it was pretty obvious which commit/MR introduced the problem.</p><p>The issue was that it was a big refactor: we couldn’t simply revert it, and we couldn’t easily understand the root cause just by looking at the file changes.</p><p>So, guess what: <strong>I got really happy</strong>, because it was a challenging and interesting problem that had to be solved. 
And I jumped into the work…</p><h3><strong>Reasons Event Loop Lag Can Grow</strong></h3><p>There are a couple of reasons why event loop lag can grow. It’s mostly because we have:</p><ul><li><strong>Synchronous JavaScript work</strong> that blocks the event loop.</li><li><strong>Too many variables/objects are being created and not cleaned up</strong>, so GC takes longer and ends up blocking the event loop.</li></ul><p>Thanks to the proper Grafana charts, we also found this chart, along with the previous one above.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LIhs9I_il3Bbj7B01QdK1A.png" /><figcaption>Grafana chart that shows <strong>single server RSS in bytes</strong></figcaption></figure><blockquote>Resident Set Size (RSS) is the portion of a process’s memory currently held in physical RAM.</blockquote><p>So it’s already obvious that we’re creating things that aren’t being cleaned up, and because of that, the event loop slows down.</p><p>Now … we know what we’re looking for.</p><h3>Fixes and Changes I Made</h3><p>The first thing that caught my eye in that MR was a caching solution for Kafka batch processing. Basically, the consumer was handling <strong>100 messages per batch</strong>, but before processing them it was creating a cache using <strong>JavaScript </strong><strong>Map</strong>s and loading data from the DB into those maps, so the batch processing could reuse it for each message in the batch.</p><pre>function createCache() {<br>  return {<br>    entity1: new Map(),<br>    entity2: new Map(),<br>    entity3: new Map(),<br>    ...<br>  };<br>}</pre><p>So it was doing this <strong>for every batch</strong>. The naming also bothered me and kept pushing me toward the idea that this could be the memory leak. I tried to track down whether those Maps could be referenced by a function or object that lives longer than the batch (or maybe something running in the background), so they wouldn’t get cleaned up. 
I couldn’t find anything obvious, but I still removed that logic and implemented it differently, just to be sure my changes didn’t leave any references that could prevent the maps from being garbage-collected.</p><p>In addition to those changes, I tried to clean up anything unnecessary that caught my eye and disable any additional processes related to batch-metrics performance tracking.</p><h4>Performance Tracking Implementation</h4><p>There was a class responsible for collecting some statistics about batch processing, and it was enabled in the main file. The related functions were injected into the Kafka consumer, so they could generate data for those statistics.</p><p>Inside the performance tracker class, there was a setInterval running <em>every 1 second,</em> that printed the results (simply console logs). The environment check (“don’t run outside dev”) existed only inside the print function, so even with that check, the other methods still ran regardless of the environment and kept generating statistics data.</p><p>I simply disabled this performance tracking implementation entirely (for all environments), and along with my other changes, prepared everything for the next release, which I labelled as a <strong>potential fix for the event loop lag</strong>.</p><h4>Reproducing the Issue in the Dev Environment</h4><p>We decided to do some stress testing to reproduce the memory leak and observe the event loop lag. Even though we didn’t have Grafana charts for the dev environment, we exposed <a href="https://www.npmjs.com/package/prom-client">Prometheus</a> collectDefaultMetrics via an HTTP endpoint. That helped me stress-test the service while monitoring RSS and event loop lag to make sure I was actually reproducing the issue.</p><p>I was able to reproduce it by sending <strong>800K Kafka messages</strong> through the batch processing. 
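</p><p>To make the pattern concrete, here is a reduced sketch of that tracker. The class and method names are hypothetical, and the real class also ran a setInterval that called the print function every second; the key point is that the environment check guarded only the printing, while the collecting side kept running in every environment:</p>

```javascript
// Reduced sketch of the problematic tracker (hypothetical names).
class BatchPerfTracker {
  constructor() {
    this.latencies = []; // never reset anywhere
  }

  record(latencyMs) {
    // Runs in EVERY environment -> unbounded heap growth over time.
    this.latencies.push(latencyMs);
  }

  print() {
    // The "dev only" check lived only here, in the logging path.
    if (process.env.NODE_ENV !== "development") return;
    console.log("collected", this.latencies.length, "samples");
  }
}

// Even with NODE_ENV=production, the array silently keeps growing.
process.env.NODE_ENV = "production";
const tracker = new BatchPerfTracker();
for (let i = 0; i < 100000; i++) tracker.record(i % 500);
tracker.print(); // prints nothing outside dev
console.log("samples retained:", tracker.latencies.length); // 100000
```

<p>At hundreds of messages per batch, arrays like these reach millions of entries within days, which matches the steady RSS growth in the charts above.</p><p>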
On top of that, I also proved that my changes fixed the situation.</p><p>Even so, I still couldn’t 100% pinpoint which exact line had caused the issue.</p><h3>Releasing the Fix to Production</h3><p>So the important day came, and we decided to release the changes at 8 AM during low traffic.<br>We kept a rollback plan. If the new changes didn’t work, we could revert to the previous broken version, which was slow, but at least it was working reliably.</p><p>We released it and monitored the logs and charts closely, along with the data generated in the database after batch processing.</p><p>And this was the picture after a couple of hours.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CSrhn5GrFmAqSs6-kZKHaQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4gBDpjJyRxRFbmcn_RsB_g.png" /><figcaption>Grafana charts right after the release: <strong>RSS &amp; Event Loop Lag (p99)</strong></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cwYk0bHOwTZOrtpe_n_Jrg.jpeg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IAxZc_UY67LFEqRSX2Ln9g.png" /><figcaption>Grafana charts a couple of hours later: <strong>RSS &amp; Event Loop Lag (p99)</strong></figcaption></figure><p>Beautiful, isn’t it!?</p><p>Even though we were doing batch processing and committing the offset only after 100 messages were processed and saved to our database in a single transaction, I was sure the Kafka lag would drop. 
We would still have some lag because of batch processing, but it wouldn’t be that big.</p><p>And here is proof that I was right, again!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Eh8DIBAr6CCZtO3NB7A5hw.png" /><figcaption>Grafana chart showing <strong>Kafka lag </strong>before and after the release</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ubIiSJdNQoJ-E5ae3wViVw.png" /><figcaption>Grafana chart showing <strong>Kafka lag </strong>a couple of hours later</figcaption></figure><p>Finally, I could rest and breathe without thinking about the issue anymore.</p><h3><strong>Solving the Mystery: The Root Cause</strong></h3><p>Even though I fixed the issue, I still didn’t know which exact line caused it, and I really didn’t like that.</p><ul><li>How could I be sure it wouldn’t happen again?</li><li>How could I write this article without understanding the root cause? (hehe)</li></ul><p>So I spent a few hours during the night diving deeper into the MR changes from Jan 16.</p><p>After some investigation, I finally understood…</p><p>Remember the performance tracker I was talking about? It was enabled, but it simply didn’t log anything in prod. And the related functions were generating a huge amount of data inside arrays. They kept pushing data into the arrays and never cleaned them up, so the arrays just kept growing.</p><pre>this.latencies.push(...);</pre><h4>JavaScript Garbage Collector</h4><p>If we have too many JavaScript object references in the heap, GC will slow down the event loop because it runs on the main thread. GC has to scan the JS objects and their references, and that’s exactly what was happening to us.</p><p>Even though there are ways to keep data in memory and reduce how much GC blocks the event loop, in general, using too much JS heap can cause big trouble like this.</p><h3>The AI-generated code (Claude Opus 4.6)</h3><p>Obviously, this code was generated with AI. 
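</p><p>For illustration, even a tiny safeguard around that push would have kept memory flat. Here is a sketch of the idea; the cap value and the names are mine, not from our codebase:</p>

```javascript
// A bounded stats buffer: the push can never grow the array past a cap.
class BoundedLatencyBuffer {
  constructor(limit = 10000) {
    this.limit = limit;
    this.samples = [];
  }

  push(latencyMs) {
    this.samples.push(latencyMs);
    // Drop the oldest entries once over the cap, keeping memory flat.
    if (this.samples.length > this.limit) {
      this.samples.splice(0, this.samples.length - this.limit);
    }
  }
}

const buffer = new BoundedLatencyBuffer(1000);
for (let i = 0; i < 50000; i++) buffer.push(i);
console.log("retained:", buffer.samples.length); // capped at 1000
```

<p>A ring buffer or a periodic reset call achieves the same effect; what matters is that some bound exists.</p><p>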
But we weren’t using it properly, and it also wasn’t generated with safeguards.</p><p>It was a huge class with many methods: around 500 lines in total. It had a reset method that would clean the data, but reset was never used. That was the primary issue. Another problem was that the AI hadn’t added any safeguards to prevent the arrays from growing infinitely.</p><p>So you probably get my point: AI can’t fully replace engineers. Who else would have tracked this issue down and fixed it if it… hehe</p><h3>Summary</h3><p>This was an interesting production bug that I really enjoyed tracking down, understanding, and fixing: although in this case, it was the other way around: I fixed it first, and only then understood the real reason.</p><p>It took around 20 days, but luckily, I was able to track it down relatively easily without using advanced tools (e.g. <a href="https://clinicjs.org/">Clinic.js</a>) or doing a heap dump to monitor Node.js memory usage. Otherwise, it could have taken much longer.</p><p>I hope you enjoyed reading this article and that it gave you some insights and ideas on how to handle situations like this. If you’re interested, I highly recommend reading my article about the event loop: <strong>“</strong><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767"><strong>You Don’t Know Node.js Event Loop</strong></a><strong>.”</strong></p><p>Thank you for taking the time to read this comprehensive article. 
I hope you found it informative and gained valuable insights from it.<br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also, follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767">You Don&#39;t Know Node.js EventLoop</a></li><li><a href="https://nairihar.medium.com/an-advanced-retry-mechanism-in-node-js-with-kafka-36741142c693">An Advanced Retry Mechanism in Node.js with Kafka</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=89465672de13" width="1" height="1" alt=""><hr><p><a href="https://javascript.plainenglish.io/node-js-eventloop-lag-kafka-consumer-lag-one-root-cause-89465672de13">Node.js EventLoop Lag + Kafka Consumer Lag: One Root Cause</a> was originally published in <a href="https://javascript.plainenglish.io">JavaScript in Plain English</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[An Advanced Retry Mechanism in Node.js with Kafka]]></title>
            <link>https://blog.bitsrc.io/an-advanced-retry-mechanism-in-node-js-with-kafka-36741142c693?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/36741142c693</guid>
            <category><![CDATA[kafka]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[distributed-systems]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Mon, 09 Feb 2026 18:33:19 GMT</pubDate>
            <atom:updated>2026-02-09T18:33:19.666Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HKRMg3YK7rYF993KNzpzEw.jpeg" /><figcaption>Generated using Leonardo AI</figcaption></figure><h4>Reliable Event Delivery with an Easy-to-Monitor Approach</h4><p>Have you ever wondered how systems are designed to ensure reliable event delivery and processing? How can we minimise event loss to a very low level: or even achieve near-zero message loss and highly reliable processing?</p><p>In this article, I’ll show you an example of Node.js architecture that ensures event processing and minimises event loss, keeping it almost zero. Even during deployments or server crashes, data loss stays minimal. I’ll also demonstrate how this approach enables you to monitor and debug every part of the system, including failures and the retry processes.</p><p>Here is the list of topics covered in this article.</p><p><strong>— Acknowledgment<br>— Basic Retry Mechanism<br>— Why do we need to store the event?<br></strong>— — — Kafka as an Alternative<br>— — — Kafka over SQL DBs and Redis<br><strong>— Kafka as a Solution<br></strong>— — — Example 1: speed-over-safety<br>— — — Example 2: lossless-over-speed<br>— — — CRONJob Retries with Kafka<br><strong>— Processes should be idempotent<br>— Kafka UI: Monitoring and Debugging<br></strong>— — — Visualisation and Alerts<br><strong>— Summary</strong></p><h3>Acknowledgment</h3><p>Lately, I ran into an interesting issue related to this “Retry” topic.<br>An <strong>Internal Service </strong>should send an event to an <strong>External</strong> <strong>System.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BcLeZsksYqZfbnHWoILxkg.jpeg" /><figcaption>Basic flow: An Internal Service sends an event to an External System.</figcaption></figure><p>But after looking closely at the requests and service behaviour, I realised that the <strong>Internal</strong> <strong>Service </strong>can fail to send some events to the 
<strong>External System </strong>for many reasons:</p><ul><li>If <strong>E.S.</strong> is unavailable for a while, <strong>I.S.</strong> will fail to send the event.</li><li>If <strong>I.S.</strong> starts the request but crashes because of an unexpected failure, then the request won’t be delivered.</li><li>And so on.</li></ul><p>It was clear that I needed to implement a retry mechanism to solve this problem, because event loss can be a serious issue.</p><p>Before jumping straight into the topic, I want to highlight that every system is unique, and every problem needs its own solution. <strong>You</strong> <strong>can’t just copy</strong> this approach into any system that needs a retry mechanism. Depending on your system, available tools, environment, and overall situation, you might choose a different approach. This solution worked well for my case, but you might need something much simpler, or even a more advanced version, based on your needs.</p><p>I’ll explain why I chose this approach, what other options I considered, and why I didn’t pick any of those.</p><h3>Basic Retry Mechanism</h3><p>Here is a very basic retry mechanism that tries to run a function (for example, a <strong><em>fetch</em></strong> request). If the provided function fails, then our retry mechanism waits for 3 seconds and tries again, up to 3 times.</p><pre>const sleep = (ms) =&gt; new Promise((resolve) =&gt; setTimeout(resolve, ms));<br><br>async function retry(fn, retries = 3, delayMs = 3000) {<br>  let lastError;<br><br>  for (let attempt = 1; attempt &lt;= retries; attempt++) {<br>    try {<br>      return await fn();<br>    } catch (err) {<br>      lastError = err;<br><br>      if (attempt === retries) {<br>        throw lastError; // final fail<br>      }<br><br>      console.log(`Attempt ${attempt} failed. 
Retrying in ${delayMs}ms...`);<br>      await sleep(delayMs);<br>    }<br>  }<br>}</pre><p>What do you think:</p><ul><li>Is this an ideal solution?</li><li>Does it prevent event loss?</li><li>What kind of issues might you face with this approach?</li></ul><p>Of course, this can help a bit, but it’s not an ideal solution. Here are a couple of reasons why:</p><ul><li>What if the service crashes during the retries? The message won’t be delivered, and we’ll lose that event.</li><li>What if the destination system/service is unavailable? We will retry three times over 9 seconds, but what if the service is down for longer than 9 seconds? In that case, we’ll still lose the event.</li></ul><p>So this solution is not bulletproof.</p><h3>Why do we need to store the event?</h3><p>It’s quite obvious that if we want to prevent such losses, we need to store the event somewhere first. But where should we store it? That’s the main question.</p><p>Should it be <strong><em>MySQL/PostgreSQL, Redis</em></strong>, or something else?</p><p>If you think about it, an SQL database is probably a better option than Redis, or maybe that’s just my preference. But I’ll try to explain why.</p><p>With Redis, of course, everything feels easier because we don’t need a strict structure, and the event payload can change over time. But for this case, I don’t like Redis as an option because it can make debugging harder. In general, we can install <a href="https://redis.io/insight/">Redis Insight</a> and monitor the data inside Redis, but if we compare it with other solutions, and if you’ve tried them, you might understand why I say this.<br>Btw, one <strong>very critical point</strong>: Redis must be configured with persistence. Otherwise, we might lose the data if Redis restarts. For me, it was an option because Redis was already configured to be persistent.</p><p>With an SQL database, we need a basic structure and at least one JSON column to store the event payload. 
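</p><p>To make that concrete, here is a sketch of the shape such storage could take. An in-memory array stands in for the SQL table, and all field names are hypothetical:</p>

```javascript
// In-memory stand-in for a "failed_events" SQL table (hypothetical schema):
// each row holds the raw JSON payload plus bookkeeping for the retry job.
const failedEvents = [];
let nextId = 1;

function storeFailedEvent(event, error, delayMs = 3000) {
  failedEvents.push({
    id: nextId++,
    payload: JSON.stringify(event), // the JSON column
    retry_count: 0,
    last_error: String(error),
    next_retry_at: Date.now() + delayMs,
  });
}

// What the periodic job would run as a simple SELECT: rows due for retry.
function dueForRetry(now = Date.now()) {
  return failedEvents.filter((row) => row.next_retry_at <= now);
}

storeFailedEvent({ type: "USER_REGISTERED", userId: "u_98231" }, "ECONNREFUSED");
console.log(dueForRetry(Date.now() + 5000).length); // 1 row is due
```

<p>Debugging then amounts to plain queries over payload, retry_count, and last_error.</p><p>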
But debugging becomes much easier: we can quickly inspect events and check their retry count with a simple query.</p><p>Whatever storage we choose, we need a job that runs periodically, checks for failed processes, and retries them. But there is one very important part in this flow which makes everything a bit harder when using an SQL database or Redis. I’ll explain it in a bit.</p><h4>Kafka as an Alternative</h4><p>If you’ve never worked with <a href="https://kafka.apache.org/">Kafka</a>, I’ll briefly introduce it to you. It’s a <strong>message queue </strong>where one or more services produce messages and another service consumes them. Messages stay in the queue until the consumer marks them as processed.</p><p>You probably get the point: Kafka fits our problem much better than the storage options we discussed above. Why? Because it already provides a native way to produce and consume events, and it has a built-in mechanism to mark the messages as processed. With the other solutions, we would need to implement some things by ourselves.</p><p>For monitoring and debugging, we can install <a href="https://github.com/provectus/kafka-ui">Kafka UI</a>, which shows all the queues (topics) and their messages in detail.</p><h4><strong>Kafka over SQL DBs and Redis</strong></h4><p>Remember, I mentioned that there is one part that can be implemented much better with Kafka compared to the other options.</p><p>Let’s say there is a requirement that when the Internal Service has an event, it should send it as fast as possible, without any extra delays.</p><p>Here we have two choices:</p><p>1. <strong>speed-over-safety: </strong>Either the Internal Service immediately tries to send the event to the External Service once it’s available. And only in case of failure, it moves the event to storage.</p><pre>async function processEvent(event) {<br>  try {<br>    await fetch(event);<br>  } catch (err) {<br>    moveToStorage(event);<br>  }<br>}</pre><p>2. 
<strong>lossless-over-speed: </strong>Or the Internal Service immediately stores the event, and only after that, another job/service processes it accordingly and sends it to the External Service.</p><pre>function processEvent(event) {<br>  moveToStorage(event);<br>}</pre><p>In the first option (<strong>speed-over-safety</strong>), the downside is that if the server crashes during the fetch request, we will lose the event. Also, fetch requests can take much longer because the request goes outside of your system, and you don’t control it anymore. On the other hand, the benefit of this solution is that you won’t add extra delay to the request if everything is fine.</p><p>In the second approach (<strong>lossless-over-speed</strong>), we are much more secure because the chances of losing data are significantly lower. Once the event is available, we store it first. Of course, that’s also a request, but it can be an internal request inside our cluster, meaning we control everything. So writing to storage should be much faster, which reduces the risk of data loss if the server crashes. However, the downside is that we delay the delivery of the event to the External Service.</p><p>First of all, for both options (<strong>speed-over-safety &amp; lossless-over-speed</strong>), we would need to create a cron job that runs every <strong>X</strong> minutes and retries sending the failed events.</p><p>For the <strong>lossless-over-speed </strong>approach, a cron job can’t be the primary solution, because it runs periodically with a fixed interval, and that interval alone could already be too long a delay. We need a better mechanism to deliver the event as soon as possible once it’s stored. We need a solution where the event is emitted in real time for processing. The idea is that we save the event to storage first, and the first fetch request should happen as soon as possible. 
After that, the remaining retries can be handled with a cron job, which will bring some delays.</p><p>Of course, we can use Redis Pub/Sub or PostgreSQL with additional extensions to work with real-time events instead of manually pulling them via read operations. However, Redis Pub/Sub doesn’t provide delivery guarantees: it simply lets us publish events and subscribe to them, so we’d still face issues with non-guaranteed delivery in case of system failures. And PostgreSQL with extensions also feels a bit heavy/overkill for this use case.</p><p>And this is the point where Kafka will shine over the other solutions.</p><h3>Kafka as a Solution</h3><p>Here is how a record inside Kafka can look.</p><pre>{<br>  &quot;id&quot;: &quot;evt_1737429812_001&quot;,<br>  &quot;event&quot;: {<br>    &quot;type&quot;: &quot;USER_REGISTERED&quot;,<br>    &quot;timestamp&quot;: &quot;2026-01-20T12:34:56Z&quot;,<br>    &quot;userId&quot;: &quot;u_98231&quot;,<br>    &quot;email&quot;: &quot;user@example.com&quot;<br>  },<br>  &quot;retry_count&quot;: 2,<br>  &quot;is_manual_review_needed&quot;: false<br>}</pre><p>In Kafka, you don’t simply read and write like in traditional storage solutions. Instead, you <strong>produce (write)</strong> messages, and then <strong>consumers</strong> receive them once they’re sent to Kafka by a producer.</p><pre>// PRODUCER (writes to Kafka)<br>await producer.send({<br>  topic: &quot;user-events&quot;,<br>  messages: [<br>    { key: &quot;user_1&quot;, value: JSON.stringify({ type: &quot;USER_REGISTERED&quot;, ... }) }<br>  ]<br>});</pre><p>In an SQL database, you have <strong>tables</strong>, but in Kafka, you have entities called <strong>topics</strong>. A topic is a <strong>queue</strong> where your messages are stored in the correct order. 
Once you start consuming, you will receive messages <strong>in order</strong> from the specified topic.</p><pre>// CONSUMER (reads from Kafka)<br>await consumer.subscribe({ topic: &quot;user-events&quot; });<br><br>await consumer.run({<br>  eachMessage: async ({ message }) =&gt; {<br>    console.log(&quot;Received:&quot;, message.value.toString());<br>  }<br>});</pre><p>This means that when the Internal Service has a message, it can produce the event into Kafka, and it will be delivered <strong>in real time</strong> to another service (or even the same service) that is consuming the topic. If there is no active consumer, the message will stay in the topic until someone consumes it and <strong>marks it as processed</strong>.</p><p>Here, I mentioned something really important: <em>“mark it as processed.”</em> Kafka has the concept of <strong>acknowledgement</strong> (also known as committing). When you consume a message, you can acknowledge it to confirm that it was processed successfully, so it won’t be sent to you again.</p><p>If you consume a message, but your internal processing fails, and you <strong>skip the acknowledgement (commit)</strong>, you will receive the same message again once your service is back online. 
This gives you a strong message processing guarantee.</p><pre>// CONSUMER (reads from Kafka)<br>await consumer.subscribe({ topic: &quot;user-events&quot; });<br><br>await consumer.run({<br>  eachMessage: async ({ message }) =&gt; {<br>    console.log(&quot;Received:&quot;, message.value.toString());<br><br>    ...<br><br>    // ACKNOWLEDGE =&gt; message won&#39;t be re-delivered<br>    await consumer.commitOffsets(...);<br>  }<br>});</pre><p>Btw, the consumer can also auto-commit your message if the eachMessage callback function finishes without errors.</p><p>Let’s see what the solutions will look like for both options:</p><h4><strong>Option 1: speed-over-safety</strong></h4><p>Once we have the event, we first try the fetch, and only after an initial failure do we send it to storage for a later retry.</p><pre>async function processEvent(event) {<br>  try {<br>    await fetch(event);<br>  } catch (err) {<br>    moveToStorage(event); // producer.send - for retries<br>  }<br>}</pre><p>If the service crashes during the fetch, we will lose the message. But if the fetch fails, we will continue with the retry processes, and eventually it will be delivered.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eB0DmBpXkaMgwkQiYDex-g.jpeg" /><figcaption><strong>speed-over-safety: </strong>retry via Kafka and a CRON job after the initial failure</figcaption></figure><h4><strong>Option 2: lossless-over-speed</strong></h4><p>In this case, once we receive the event, we first publish it to Kafka. Then we consume it from the topic and process it via fetch. This way, we guarantee the message is processed and delivered. 
But this will slow down the event delivery to the External System.</p><pre>function processEvent(event) {<br>  moveToStorage(event); // producer.send - for main processing (fetch)<br>}</pre><pre>await consumer.subscribe({ topic: &quot;user-events&quot; });<br><br>await consumer.run({<br>  eachMessage: async ({ message }) =&gt; {<br>    ...<br><br>    try {<br>      await fetch(event);<br>    } catch (err) {<br>      moveToStorage(event); // producer.send - to another topic for retries<br>    }<br>  }<br>});</pre><p>Keep in mind that if the internal service is down and there’s no consumer available to process the queued messages, those messages will remain in the Kafka topic (queue) until the service comes back up and starts consuming them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eKhqY2paQe0irrLIB3E-GQ.jpeg" /><figcaption><strong>lossless-over-speed: </strong>first move the message to Kafka, then process it afterward (without CRON Job)</figcaption></figure><p>We will need to add cron job support here, because if we keep producing retry events into the same topic that also contains events that have never been fetched, we’ll slow down their processing. In that case, we won’t be able to prioritise the events properly, and retry processing may negatively impact new events’ delivery time. So we should add a separate topic and a cron job to ensure proper prioritisation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4uTPQo2xejtwYvBSiZeq8g.jpeg" /><figcaption><strong>lossless-over-speed </strong>solution with a CRON Job</figcaption></figure><h4>CRONJob Retries with Kafka</h4><p>Ideally, we should have a couple of topics for proper retry with a cron job:</p><ul><li>topicName.retry</li><li>topicName.manual-review</li></ul><p>When the fetch fails, we produce the message to the user-events.retry topic. Then we create a job that consumes messages from this topic and tries to re-execute the fetch method. 
If it fails again, we produce the message back to the same topic. If it succeeds, we simply commit the offset and mark everything as done.</p><p>During each retry, we also update the message headers (metadata stored alongside the message). Similar to HTTP, where we have a body and headers, Kafka messages can include headers, too. For example, we can set an x-retry-count header (if it’s not available yet) and keep increasing it every time the job consumes the message and produces it back to the retry topic after a failure.</p><p>Finally, after <strong>X</strong> retries, we can move the message to user-events.manual-review. The name literally describes what needs to happen next. We can also store the latest error message in a header, so we can quickly understand why the message failed and why multiple retries didn’t resolve it.</p><pre>// Retry Job<br><br>await consumer.subscribe({ topic: &quot;user-events.retry&quot; });<br><br>await consumer.run({<br>  eachMessage: async ({ message }) =&gt; {<br>    ...<br>    const retryCount = Number(<br>      message.headers?.[&quot;x-retry-count&quot;]?.toString() || 0<br>    );<br><br>    try {<br>      await fetch(event);<br>      // Success<br>    } catch (err) {<br>      const nextRetry = retryCount + 1;<br><br>      if (nextRetry &gt;= MAX_RETRIES) {<br>        // Move to manual review topic<br>        await producer.send(...);<br>        return;<br>      }<br><br>      // Re-send to retry topic with updated headers<br>      await producer.send(...);<br>    }<br>  },<br>});</pre><p>Again, depending on the option we choose above, we can have 2 or 3 topics, and they can always be adjusted based on our needs, regardless of the exact setup I’m showing here.</p><p>So, for the second option (<strong>lossless-over-speed</strong>), we have a user-events topic for the initial fetch processing and fast delivery. This means a consumer should always be running for this specific topic. 
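The x-retry-count bookkeeping from the retry job above can be factored into a small pure helper. This is a sketch, not the article’s exact code: nextRetryTarget and MAX_RETRIES are hypothetical names, the topic names follow the article, and the kafkajs producer.send call is shown only in a comment for shape.

```javascript
const MAX_RETRIES = 5; // assumption: tune this per system

// Given the headers of a failed message, decide which topic it goes to next
// and what its updated headers should be. kafkajs delivers header values as
// Buffers, so we normalise before parsing.
function nextRetryTarget(headers = {}) {
  const raw = headers["x-retry-count"];
  const retryCount = Number(raw ? raw.toString() : 0);
  const nextRetry = retryCount + 1;

  if (nextRetry >= MAX_RETRIES) {
    // Retries exhausted: hand the message off for manual review.
    return {
      topic: "user-events.manual-review",
      headers: { "x-retry-count": String(retryCount) },
    };
  }

  // Otherwise, send it back to the retry topic with an incremented counter.
  return {
    topic: "user-events.retry",
    headers: { "x-retry-count": String(nextRetry) },
  };
}

// Inside the catch block of the retry job, it would be used roughly like:
// const target = nextRetryTarget(message.headers);
// await producer.send({
//   topic: target.topic,
//   messages: [{ key: message.key, value: message.value, headers: target.headers }],
// });
```

Keeping the decision logic pure like this makes it trivial to unit-test without a running Kafka cluster.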
If the fetch fails, we send the event to the user-events.retry topic, and the rest works the same way.</p><pre>await consumer.subscribe({ topic: &quot;user-events&quot; });<br><br>await consumer.run({<br>  eachMessage: async ({ message }) =&gt; {<br>    ...<br><br>    try {<br>      await fetch(event);<br>    } catch (err) {<br>      moveToStorage(event); // producer.send - user-events.retry<br>    }<br>  }<br>});</pre><p>Just to highlight again, topics are like tables; you can name them however you want. There’s no strict rule for topic names.</p><p>At the beginning of this section, I showed two topic names (retry and manual-review). That’s the case for the <strong>speed-over-safety</strong> option.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RR6uYR4Zt_hBA0fwQhMWBA.jpeg" /><figcaption><strong>speed-over-safety: </strong>retry via Kafka and a CRON job after the initial failure (with topic names)</figcaption></figure><p>For the <strong>lossless-over-speed</strong> option, we can simply have two topics. As we already have the user-events topic, we can treat it as the retry topic: simply republish messages there with an updated header, and we’d only need the manual-review topic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Sui2ujVzo8Ck46CAmIAfcg.jpeg" /><figcaption><strong>lossless-over-speed: </strong>first move the message to Kafka, then process it afterward (without CRON Job &amp; with topic names)</figcaption></figure><p>But as I mentioned before, retries will delay the delivery of other, newly produced events. So it’s better to use this 3-topic approach:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TQrKY1PmiVnGIyDziRIMqQ.jpeg" /><figcaption><strong>lossless-over-speed </strong>solution with a CRON Job (with topic names)</figcaption></figure><p>Remember, it’s not strictly necessary to have three topics. For retries, two topics are enough. 
But if you also consume your initial event via Kafka, then this approach works better.</p><h3>Processes should be idempotent</h3><p>Kafka will send the message <strong>at least once</strong>, so we can be confident it won’t be lost. However, depending on configuration, network issues, and other factors, you might receive <strong>duplicate</strong> messages. That’s why it’s best to make your functions and processes <strong>idempotent: </strong>meaning that running them multiple times with the same input produces the same result.</p><p>You can also add a <strong>deduplication</strong> mechanism using Redis. In my case, I implemented this with Redis by using the message key as a unique identifier: I store the key in Redis when processing starts and mark it as processed with a TTL (for example, 1 minute).</p><p>Here, we should use an <strong>atomic operation</strong>, because the same message can be delivered to different services at the same time. I suggest looking into the <a href="https://redis.io/docs/latest/commands/setnx/">SETNX</a> command (or SET with NX).</p><p>When you receive a message, try to lock its key using SETNX. If you succeed, you can proceed and keep that lock until the TTL expires. If processing fails, delete (DEL) the key so the message can be reprocessed.</p><h3>Kafka UI: Monitoring and Debugging</h3><p>This article’s subtitle includes something like <strong>“…Easy-to-Monitor Approach.”</strong> That’s the part I really like, because it helped me monitor the growth and processing of topic messages in real time in my project. 
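Going back to the SETNX-based deduplication lock described in the previous section, here is a minimal sketch. The real implementation would use a Redis client such as ioredis, whose set(key, value, "EX", ttl, "NX") call is atomic on the server; the in-memory fakeRedis below is only a stand-in so the sketch is self-contained, and it does not actually expire keys.

```javascript
// Try to claim a lock for a message key. Returns true only for the first
// caller; duplicates delivered concurrently will get false and can be skipped.
// With a real client (e.g. ioredis) this maps to: SET dedup:<key> 1 EX <ttl> NX
async function tryLock(redis, key, ttlSeconds = 60) {
  const reply = await redis.set(`dedup:${key}`, "1", "EX", ttlSeconds, "NX");
  return reply === "OK"; // ioredis returns "OK" on success, null if the key exists
}

// On processing failure, release the lock so the message can be reprocessed.
async function releaseLock(redis, key) {
  await redis.del(`dedup:${key}`);
}

// Minimal in-memory stand-in for Redis, used only to keep this sketch
// self-contained. It mimics SET ... NX semantics but never expires keys.
function fakeRedis() {
  const store = new Map();
  return {
    async set(key, value, _exFlag, _ttl, nxFlag) {
      if (nxFlag === "NX" && store.has(key)) return null;
      store.set(key, value);
      return "OK";
    },
    async del(key) {
      store.delete(key);
    },
  };
}
```

In the consumer, you would call tryLock(redis, message.key.toString()) before running the fetch, skip the message when it returns false, and call releaseLock in the failure path so the message can be retried.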
And in case of failures, I could quickly inspect the headers and payloads to see <strong>which events were failing, how many times they had been retried, and why, </strong>or even review the messages that eventually ended up in the manual-review topic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gQiu1hERYVpSVZemVejQAg.png" /><figcaption>Kafka UI Topics</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_1WVYAPdEBbslNdWlCDYQw.png" /><figcaption>Kafka UI Message Headers/Preview</figcaption></figure><p>Recently, I had an incident in my project, and because of it, we ended up with a lot of messages in the manual-review topic. As you understand, we don’t have anything that automatically processes those messages.</p><p>During the investigation, I found a bug inside the main processor and the retry processor, which is why the events ended up in the manual-review topic after multiple retry attempts.</p><p>Once the bug was fixed, I needed to retry the messages from the manual-review topic. What I did was simply use a shell command to move the messages from the manual-review topic back to the retry topic, and then rerun the job.</p><pre>kafka-console-consumer --bootstrap-server localhost:9092 \<br>  --topic source-topic \<br>  --group move-to-destination-v1 \<br>  --property print.key=true \<br>  --property key.separator=&quot;:&quot; | \<br>kafka-console-producer --bootstrap-server localhost:9092 \<br>  --topic destination-topic \<br>  --property parse.key=true \<br>  --property key.separator=&quot;:&quot;</pre><p>When the job was successful, I simply cleaned all the messages from the manual-review topic. This makes it much easier to debug future cases. 
The same idea applies to the retry topic: once its messages are processed and their offsets committed, they can eventually be deleted.</p><p>Btw, you can configure <strong>Kafka retention</strong>, and Kafka will remove old messages after a specified amount of time (by default it’s 7 days).<br>You don’t necessarily need to treat Kafka like a database. Even in a database, you would probably remove those records at some point, because you no longer need them. Kafka acts as a storage layer and comes with many useful features. It’s a distributed event-streaming system that can be used for internal service communication, and for similar cases, depending on your needs.</p><h4>Visualisation and Alerts</h4><p>It would also be good to set up a tool like <a href="https://grafana.com/">Grafana</a> to visually monitor pending messages in Kafka topics that are waiting to be processed. You can also configure thresholds, so you get a notification in Slack, for example, when the manual-review topic is growing (or any other topic), indicating that messages aren’t being processed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JfCZQSHdnstjlkMBYiAkBA.jpeg" /><figcaption>Visualizing Uncommitted Kafka Messages in Grafana (Consumer Lag)</figcaption></figure><h3>Summary</h3><p>In this article, I showed how to implement a proper retry mechanism to minimise data loss and provide eventual consistency. I also explained why an SQL database or Redis is not ideal in this case, and how Kafka shines as a solution. Finally, I showed how you can use Kafka UI to visualise existing messages and processes.</p><p>This Kafka cron job solution will fit most systems where you need a retry mechanism. Remember that each problem is unique and requires specific research and focus. You might find a better approach for your particular case, which is totally normal.</p><p>Thank you for taking the time to read this comprehensive article. 
I hope you found it informative and gained valuable insights from it.<br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also, follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767">You Don&#39;t Know Node.js EventLoop</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li><li><a href="https://nairihar.medium.com/nodejs-health-checks-and-overload-protection-368a132a725e">Node.js Health Checks and Overload Protection</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=36741142c693" width="1" height="1" alt=""><hr><p><a href="https://blog.bitsrc.io/an-advanced-retry-mechanism-in-node-js-with-kafka-36741142c693">An Advanced Retry Mechanism in Node.js with Kafka</a> was originally published in <a href="https://blog.bitsrc.io">Bits and Pieces</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Monorepo: From Hate to Love]]></title>
            <link>https://blog.bitsrc.io/monorepo-from-hate-to-love-97a866811ccc?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/97a866811ccc</guid>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Sat, 07 Jun 2025 17:40:49 GMT</pubDate>
            <atom:updated>2025-06-07T17:48:57.054Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dzzyyECAnAwTYoVo_OoWZQ.jpeg" /><figcaption>Generated using Leonardo AI</figcaption></figure><h4>When and How a Backend Monorepo Can Be a Better Choice</h4><p>In this article, I will discuss multi-repo and monorepo structures in the context of backend microservices. I’ll share my journey, why I used to hate monorepos and how I eventually came to love them, while also covering the pros and cons of each architecture. The article mainly focuses on the backend, as I see the benefits of monorepos primarily on one side, not both (i.e., not frontend and backend together). That said, I will also briefly touch on the idea of having both frontend and backend in the same monorepo.</p><p>There are many ways to build and manage the monorepo architecture, but believe me, it’s a challenging task.</p><p>Here is the list of topics covered in this article.</p><p>— Briefly About Multi-repos &amp; Monorepos<br> — Sharing Common Logic Across Multi-repos<br> — Why I Didn’t Like the Idea of a Monorepo<br> — You Don’t Need a Library for Everything<br> — — Database migrations<br> — — Node.js Native Test Runner<br> — Decision-Making Between Engineers<br> — Backend Monorepo with NPM Workspaces<br> — TypeScript &amp; ESLint Configuration<br> — — Common Paths in Monorepo using TSConfig<br>— There is Always a Need to Adjust the Monorepo<br> — CI/CD: Challenges with Setting up Docker files<br> — Summary</p><h3>Briefly About Multi-repos &amp; Monorepos</h3><p>The simplest way to build an application with frontend and backend communication is to have two separate repositories (or folders): one for the backend code and one for the frontend application. Later, if you decide to introduce a mobile app, you can simply create a third folder to hold everything related to the mobile app.</p><p>But things can get more complex. What happens when you need to support both iOS and Android? 
Or when your backend grows and you realise that a single service isn’t enough, and you need multiple services for scalability?</p><p>What do you do then? Is it okay to have 10 different projects or repositories just to support the entire application end-to-end?</p><p>Well, there’s no simple answer to that question.<br>If you had asked me this question a year ago, I would have definitely said “yes.” Keeping projects isolated and as small and simple as possible seemed like the best approach. That way, everything stays clean: we can have one backend service just for reporting, another for billing, and a third for the admin panel. Simple, right?</p><p>Even though I was happy with that setup, I often found myself wanting to share logic between services. I didn’t want to duplicate the same code across multiple projects. When those thoughts start creeping in, it’s usually a sign that your experience is evolving.</p><p>There are many ways to solve this issue, which we’ll explore soon.</p><p>Having 10 separate repositories for 10 different services is a <strong>multi-repo</strong> structure. On the other hand, the whole point of a <strong>monorepo</strong> is to collect multiple services into a single repository. But that doesn’t necessarily mean putting everything (frontend, backend, mobile) into one giant repo. 
I used to think that too, and honestly, that mindset was one of the main reasons I avoided monorepos for a long time.</p><p>In reality, you can have multiple monorepos: one for backend projects, one for frontend, maybe even separate ones for web and mobile, depending on your needs.</p><h3>Sharing Common Logic Across Multi-repos</h3><p>Over the years, while working on different projects, I often needed to share code or logic across multiple repositories.</p><p>On the frontend, I wanted to reuse things like form validations, HTTP request/response logic, and Redux-related functionality between React and React Native projects.<br>On the backend, I wanted to share database models and cryptographic logic used across 6 or 7 different services.</p><p>I didn’t want to duplicate the same 30+ files across every repository. That just didn’t feel like a respectable or maintainable choice.</p><p>So, that marked the beginning of the <strong>private packages era</strong> for me.</p><p>I started building private packages for internal services, things like:</p><ul><li>@company/db</li><li>@company/integrations</li><li>@company/crypto</li><li>@company/validations, etc.</li></ul><p>Each of these modules had its internal structure, tests, documentation, and a versioning file. Every time we introduced a change, we’d update the version manually.</p><p>For example:</p><pre># Version history of the @company/database library<br><br>## 2.3.0<br>- Update `users` model, add `name` field<br><br>## 2.2.5<br>- Fix vulnerabilities; update crypto lib</pre><p>This approach worked well, and I used it in a large multi-repo setup for many years. I was happy with it and had no desire to change anything.</p><p>The only downside was that when we updated a common library, we had to manually bump the dependency version in every service that used it. 
But honestly, that wasn’t a big deal at the time.</p><h3>Why I Didn’t Like the Idea of a Monorepo</h3><p>To be honest, I think I didn’t have enough experience to set up a monorepo properly at the time. I was also worried that putting both frontend and backend code in the same place would make decision-making between the frontend and backend teams much harder. Things like formatting, coding styles, etc. It felt like a constant debate. I preferred having my own space where I could make decisions independently instead of constantly negotiating everything with frontend engineers.</p><p>This second point still holds for me today, and I’ll dive deeper into it later.</p><p>But regarding the first point: as I mentioned before, setting up a monorepo can be tough. Managing deployments, testing, and shared code logic isn’t always straightforward. There are many tools that help with this, but you don’t always need to rely on existing libraries. In fact, I believe you should first understand the fundamentals and ideally build things yourself. That way, you maintain full flexibility. Otherwise, your plans may get blocked by limitations in a tool you adopted too early.</p><p>It’s similar to using <strong>Expo</strong> in React-Native; it helps you get started fast, but eventually, you’re stuck in a bubble where you can’t do everything you want. Or take <strong>ORM</strong> libraries as another example. Many developers write migrations in JavaScript, mixing JS with SQL. I don’t agree with that. If you want to create a table, just run CREATE TABLE using SQL. You don’t need a library for everything. 
Sometimes the best way is the direct way, with the right tool for the job.</p><p>That’s why I always avoided tools like:</p><ul><li><a href="https://lerna.js.org/">Lerna</a></li><li><a href="https://nx.dev/">Nx</a></li></ul><p>… and similar tooling for managing monorepo structure.</p><p>In addition to my lack of knowledge on setting up monorepos properly, I also noticed that testing and deployment could sometimes get harder and more time-consuming. In a monorepo setup, you might need to build or test everything together. That has its pros and cons. For example, maybe you just updated a shared library to introduce a feature in Service A. It works perfectly there, but it unexpectedly breaks Services B and C. In a monorepo, you’ll find that out <strong>much</strong> faster than in a multi-repo setup.</p><p>I’ll tell you soon about how I managed a monorepo starting from configurations, testing, and deployment; don’t worry, we won’t miss the details.</p><h3>You Don’t Need a Library for Everything</h3><p>There are so many people who always try to solve problems using libraries, without even checking the source code.<br>If you’re a professional engineer, you should understand every dependency you install: <strong>why</strong> it’s there, <strong>how</strong> it works, and what trade-offs it introduces.</p><p>You should always be mindful not to get trapped in a “library bubble.” At some point, you might need to remove that library or do something it doesn’t support, and you’ll be stuck.</p><p>I’ll give you an example involving database migrations and testing to show you how I prefer to approach things. That should help explain my point more clearly. After that, I’ll dive into the important part: why I used to hate monorepos, and how I eventually started to love them.</p><h4>Database migrations</h4><p>Having a proper way to handle migrations is straightforward: you don’t need to mix JavaScript and SQL code in the same file. 
That’s like a cocktail that gives you nausea after drinking it.</p><p>What’s stopping you from simply keeping your SQL code in .sql files, and having a single JavaScript/TypeScript script that reads and runs those files? It’s clean, simple, and each tool does exactly what it’s meant to do.</p><pre>+-- run-migration.ts<br>+-- migrations/<br>|   +-- migration-1.1.sql<br>|   `-- migration-1.0.sql</pre><h4>Node.js Native Test Runner</h4><p>I’m sure you’ve used Mocha or Jest at some point. They’re great tools that help with testing; I’ve used them too. But I don’t need them anymore, because Node.js now comes with its <a href="https://nodejs.org/api/test.html">built-in test runner</a>.</p><p>If the native tool does the same job, why shouldn’t I prefer it?<br>Fewer libraries mean fewer vulnerabilities. Simple as that.</p><p>Nowadays, Node.js includes more and more native features, and that’s a great thing. We’re finally moving in the right direction. Hopefully, at some point, we’ll have all the essential tools needed to build solid backend systems, without relying too much on external dependencies.</p><p>I think you get my point: it’s like enjoying engine-based cars over electric ones. Or preferring to drive a stick shift over an automatic. There’s just more control, and it feels right.</p><p>Because of this, I never ended up using a dedicated tool for setting up a monorepo until I discovered NPM workspaces.</p><p>But before jumping into that, I also promised to talk about decision-making.</p><h3>Decision-Making Between Engineers</h3><p>I have more than 10 years of experience, and I want to point out that the decision-making process within a team is critical. It can slow down your progress or even lead to the failure of an entire project.</p><p>Frontend and backend engineers live in completely different universes. 
Even though we may use the same language, JavaScript, and we’re all engineers, we think in very different ways.</p><p>From variable naming to code style, formatting, and dependency choices, our mindsets differ significantly. This difference always kept me from seriously considering putting frontend and backend code in the same place. So it’s often better to define clear boundaries.</p><p>And eventually, I realised: maybe I could set up a monorepo just for backend projects. And that’s where the fun and the challenge started, especially when I discovered NPM workspaces.</p><h3>Backend Monorepo with NPM Workspaces</h3><p>Recently, I was starting a new project and needed to set up the architecture. It was clear from the beginning that we’d have around 7–10 services, since it was intended to be a microservice-based system. I was on the verge of going with a multi-repo structure.</p><p>But then I found this: <a href="https://docs.npmjs.com/cli/using-npm/workspaces">NPM Workspaces</a></p><blockquote><strong>Workspaces</strong> is a generic term that refers to the set of features in the npm cli that provides support for managing multiple packages from your local file system from within a singular top-level, root package.<br><br>docs.npmjs.com</blockquote><p>So basically, you can have different packages in the same folder and manage them through a single root-level package.json. 
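To illustrate, a root-level package.json enabling workspaces could look like this (the workspace folder names match the services discussed in this article; the script is a hypothetical example):

```json
{
  "name": "backend-monorepo",
  "private": true,
  "workspaces": ["api", "core", "reporting", "sdk"],
  "scripts": {
    "test:api": "npm test --workspace=api"
  }
}
```

With this in place, a single npm install at the root installs and links every workspace, and npm commands can target one service via the --workspace flag (for example, npm run build --workspace=core).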
Shared dependencies can live at the root, while each service can have its own dedicated package with its dependencies.</p><p>I started diving deeper into this approach and eventually decided to set up the project using a monorepo structure that includes only the backend services.</p><pre>.<br>+-- node_modules<br>+-- package-lock.json<br>+-- package.json<br>+-- api<br>|   `-- package.json<br>+-- core<br>|   `-- package.json<br>+-- reporting<br>|   `-- package.json<br>`-- sdk<br>    `-- package.json</pre><p>It was straightforward in the beginning, until I had to deal with TypeScript, ESLint configurations, and CI/CD.</p><p>A monorepo is like a BMW: it requires constant maintenance and attention. You can’t just set it up once and expect it to work smoothly for the next five years.</p><p>I spent 1–2 weeks setting up the initial structure, but I still make adjustments whenever I need something specific.</p><h3>TypeScript &amp; ESLint Configuration</h3><p>This was truly a challenge for me, because I’m not a fan of TypeScript, yet I had to set it up properly for the monorepo. I’ve always said: if we need typing, why not just use a statically typed language in the first place?</p><p>Well, that’s a big topic for another day. 
The reality was, TypeScript was a team decision, and I had to work with it.</p><p>Both TypeScript and ESLint support extending configurations, so I immediately created a generic root-level config file and extended that in each package.</p><pre>.<br>+-- node_modules<br>+-- package-lock.json<br>+-- package.json<br>+-- tsconfig.json<br>+-- .eslintrc.js<br>+-- api<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>+-- core<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>+-- reporting<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>`-- sdk<br>    +-- package.json<br>    +-- tsconfig.json<br>    `-- .eslintrc.js</pre><p>I needed this approach because I wanted the ability to run, test, and work with individual services/packages independently. I didn’t want everything to be controlled from the root level.</p><p>These configurations should always be based on the actual needs of the team and the project. There’s no single package that will perfectly fit and satisfy all of those needs. 
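As a concrete sketch of the extend mechanism mentioned earlier in this section, a per-package config can stay tiny and inherit everything else from the root (a hypothetical api/tsconfig.json; the compiler options are placeholders):

```json
{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src"]
}
```

ESLint supports the same idea through its own extends field, so each package’s .eslintrc.js can be a short file that points at the root config.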
That’s why I believe in solving each problem individually, rather than just installing a library that tries to do everything.</p><p>It helps you to define your boundaries.</p><p>I won’t show specific code examples because the configuration might look different tomorrow: packages are constantly being updated.</p><p>However, you can check out the <strong>extend</strong> functionality of TypeScript and ESLint by following these links:</p><ul><li><a href="http://eslint.org/docs/latest/extend/">Extend ESLint</a></li><li><a href="https://www.typescriptlang.org/tsconfig/extends.html">TSConfig Option: extends</a></li></ul><p>And in a similar approach, you can configure other tools.</p><p>But what I <em>do</em> want to show you is how to handle shared logic: where to store it and how to set it up properly.</p><h4>Common Paths in Monorepo using TSConfig</h4><p>In the TypeScript config, you can define paths like this:</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;paths&quot;: {<br>      &quot;@common/*&quot;: [&quot;./_common/*&quot;],<br>      &quot;@db/*&quot;: [&quot;./_db/*&quot;]<br>    }<br>  }<br>}</pre><p>You can place reusable code in the root of the monorepo and treat it as an external dependency within each service.</p><pre>.<br>+-- node_modules<br>+-- package-lock.json<br>+-- package.json<br>+-- tsconfig.json<br>+-- .eslintrc.js<br>+-- _db/<br>|   `-- user.ts<br>+-- _common/<br>+-- configs/<br>|   `-- env.ts<br>+-- api<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>+-- core<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>+-- reporting<br>|   +-- package.json<br>|   +-- tsconfig.json<br>|   `-- .eslintrc.js<br>`-- sdk<br>    +-- package.json<br>    +-- tsconfig.json<br>    `-- .eslintrc.js</pre><p>As an example, inside both the api and core packages, I can import my User entity like this:</p><pre>import { User } from &#39;@db&#39;;<br>import { env } from &#39;@common/env&#39;;</pre><p>The reason 
I use the @ symbol in the config is for better observability: it helps me immediately recognise that it’s a custom or shared path. I apply the same logic with underscores (_) in folder names. When you prefix a folder with an underscore, it appears at the top in most code editors, which improves visibility.</p><p>Of course, these are just conventions for clarity: you’re free to name them however you prefer.</p><h3>There is Always a Need to Adjust the Monorepo</h3><p>I want to point out a very important example that shows why a monorepo always requires maintenance.</p><p>At some point, I introduced an SDK. It wasn’t part of the microservice architecture: it was just a shared SDK that I planned to publish to npm. I decided to keep it inside the monorepo as I reused some things from the common folder.</p><p>Because of that decision, I had to separate the ESLint and TypeScript configs for the SDK instead of extending from the root. Sharing the root configuration caused issues with building, packaging, and publishing.</p><p>You have to keep in mind that when publishing an SDK, you don’t want to expose your entire backend architecture. We will discuss the solution to this particular issue in the next section.</p><p>So the key takeaway is: over time, as you make different decisions, your monorepo setup will evolve. There’s no one-size-fits-all solution. You should evaluate your specific needs and choose the setup that works best for you.</p><h3>CI/CD: Challenges with Setting up Docker files</h3><p>When you’re working with a monorepo and have reusable code, and you want to keep everything as close to professional as possible so that you’re proud of your solution, you need to put in real effort.</p><p>For example, if the SDK were in a separate repo, I could simply build and publish it without much hassle. But I chose to keep it inside the monorepo because it relied on reusable functionality. 
On the other hand, I don’t want to publish the entire common folder with the SDK: only the files actually used. So as you can see, there are pros and cons to every approach.</p><p>The same challenge applies to building and deploying services. We have to carefully control which files get exposed during service deployment or package publishing. And that makes our Dockerfile setup a bit more complicated.</p><p>First, you need a build environment where you isolate the specific service or module (like the SDK) you want to build. Then you need to include only the necessary files from the reusable/shared folders. Only by doing this can you avoid exposing your entire architecture or internal codebase.</p><p>It might all sound easy in theory, but believe me, it’s challenging in practice.</p><p>Example of what the Dockerfile could look like:</p><pre>FROM node:20 AS builder<br><br>WORKDIR /app<br><br>COPY package.json package-lock.json tsconfig.json .eslintrc.js ./<br><br>COPY sdk ./sdk<br>COPY _db ./_db<br>COPY configs ./configs<br>COPY _common/number.ts ./_common/number.ts<br><br>COPY sdk/package.json sdk/tsconfig.json sdk/.eslintrc.js ./sdk/<br><br>RUN npm install<br><br>RUN cd sdk &amp;&amp; npm run build<br><br>FROM node:20-alpine<br><br>WORKDIR /app<br><br>COPY --from=builder /app/sdk/dist ./dist<br>COPY sdk/package.json ./<br><br>RUN npm install --only=production<br><br>RUN npm publish</pre><p>And as you might already have guessed, you may need to adjust your monorepo as it grows: copy-pasting more files becomes harder, and this approach might no longer be ideal.</p><h3>Summary</h3><p>Having a monorepo structure is much more challenging than it might seem. It requires a lot of experience if you want to do everything properly. 
If you don’t have much experience and you’re just starting a project, it might be better to work with multiple repositories, at least until you have a mentor or gain enough knowledge to configure and manage all the edge cases on your own.</p><p>I would personally recommend keeping frontend and backend in separate repositories unless there’s a very specific reason to combine them.</p><p>In any case, avoid introducing tons of libraries for every issue. Sometimes, it’s better to do some research and spend time understanding how certain things should be done properly without using any libraries. Also, you will have fewer vulnerabilities.</p><p>Both monorepos and multi-repos have their pros and cons, and we can’t simply say one is always better than the other. I just shared my experience, why I used to hate monorepos and how I slowly began to understand their value and became comfortable using them. But that doesn’t necessarily mean I’ll stop using multi-repos for future projects.</p><p>It depends on factors like:</p><ul><li>How many engineers are working on the project</li><li>How much code needs to be reused</li><li>The overall experience level of the team</li></ul><p>If there’s not much to share across services, I don’t think a monorepo adds much value; in that case, a multi-repo setup will likely be a safer, faster and simpler choice.</p><p>Even though I haven’t gone into very much detail about how to do certain things, I shared my experience with multi-repos and monorepos, and showcased some of the problems and solutions, including resourceful links.</p><p>Thank you for taking the time to read this comprehensive article. 
I hope you found it informative and gained valuable insights from it.<br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also, follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767">You Don&#39;t Know Node.js EventLoop</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=97a866811ccc" width="1" height="1" alt=""><hr><p><a href="https://blog.bitsrc.io/monorepo-from-hate-to-love-97a866811ccc">Monorepo: From Hate to Love</a> was originally published in <a href="https://blog.bitsrc.io">Bits and Pieces</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is new in Node.js V22.5]]></title>
            <link>https://blog.bitsrc.io/what-is-new-in-node-js-v22-5-8899620ddebf?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/8899620ddebf</guid>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[node]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[multithreading]]></category>
            <category><![CDATA[sql]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Wed, 17 Jul 2024 19:19:16 GMT</pubDate>
            <atom:updated>2024-07-17T19:19:16.458Z</atom:updated>
            <content:encoded><![CDATA[<h4>A brief introduction about the new tool</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vjo0ttQuxdtBbwOXgdmX9Q.jpeg" /><figcaption>Generated with <a href="https://leonardo.ai/">Leonardo AI</a></figcaption></figure><p>In this article, we will review the notable changes introduced in the new <a href="https://nodejs.org/en/blog/release/v22.5.0">Node.js version</a>:</p><ul><li>add node:sqlite module (Colin Ihrig) <a href="https://github.com/nodejs/node/pull/53752">#53752</a></li><li>add matchesGlob method (Aviv Keller) <a href="https://github.com/nodejs/node/pull/52881">#52881</a></li><li>add postMessageToThread (Paolo Insogna) <a href="https://github.com/nodejs/node/pull/53682">#53682</a></li></ul><p>Let’s get started!</p><h3>node:sqlite</h3><blockquote>The node:sqlite module facilitates working with SQLite databases.</blockquote><pre>// index.js<br><br>const { DatabaseSync } = require(&#39;node:sqlite&#39;);<br>const database = new DatabaseSync(&#39;:memory:&#39;);<br><br>// Execute SQL statements from strings.<br>database.exec(`<br>  CREATE TABLE data(<br>    id INTEGER PRIMARY KEY,<br>    name TEXT<br>  );<br>`);<br><br>// Create a prepared statement to insert data into the database.<br>const insert = database.prepare(&#39;INSERT INTO data (id, name) VALUES (?, ?)&#39;);<br><br>// Execute the prepared statement with bound values.<br>insert.run(1, &#39;Bob&#39;);<br>insert.run(2, &#39;John&#39;);<br><br>// Create a prepared statement to read data from the database.<br>const query = database.prepare(&#39;SELECT * FROM data ORDER BY id&#39;);<br><br>// Execute the prepared statement and log the result set.<br>console.log(query.all());<br><br>// [ { id: 1, name: &#39;Bob&#39; }, { id: 2, name: &#39;John&#39; } ]</pre><p>The example above shows the basic usage of the node:sqlite module to open an in-memory database, write data to the database, and then read the data back.</p><p>This built-in lib 
becomes available when using the --experimental-sqlite flag.</p><pre>node --experimental-sqlite index.js</pre><p>Alongside Node.js native modules, we developers rely on many external modules to make a server fully functional. It’s great that Node.js is working to include these important tools natively. For example, they recently added native support for a <a href="https://nodejs.org/docs/latest/api/test.html">test runner</a>, so we switched from jest and mocha to the native library. Now, it&#39;s the turn of databases and ORMs.</p><p>This is currently an <a href="https://nodejs.org/docs/latest/api/sqlite.html">experimental module</a>, and it will take some time for it to become stable and gain more methods.</p><p>Let’s appreciate the Node.js Core team’s effort. One day, we will only use a small number of external libraries as most key modules will be available natively.</p><h3>matchesGlob</h3><p>This is another experimental method, which determines whether a path matches a glob pattern.</p><pre>path.matchesGlob(&#39;/foo/bar&#39;, &#39;/foo/*&#39;); // true<br>path.matchesGlob(&#39;/foo/bar*&#39;, &#39;foo/bird&#39;); // false</pre><h3>postMessageToThread</h3><p>Last but not least, a method that sends a value to another worker, identified by its thread ID.</p><p>Previously, we could communicate with the worker threads using Message Channels. 
But now, thread communication is much simpler.</p><p>Here is how we can get the thread ID:</p><pre>const { threadId } = require(&#39;node:worker_threads&#39;);</pre><p>And this is how they can communicate:</p><pre>postMessageToThread(id, { message: &#39;pong&#39; });</pre><p>To receive messages, you first need to register a listener:</p><pre>process.on(&#39;workerMessage&#39;, (value, source) =&gt; {<br>    console.log(`${source} -&gt; ${threadId}:`, value);<br>});</pre><p>The source is the sender&#39;s thread ID, so you can do something like this if you want to message back.</p><pre>process.on(&#39;workerMessage&#39;, (value, source) =&gt; {<br>  console.log(`${source} -&gt; ${threadId}:`, value);<br>  postMessageToThread(source, { message: &#39;pong&#39; });<br>});</pre><p>If you want to communicate with the main thread, you can simply use 0, as the main thread’s ID is 0.</p><pre>postMessageToThread(0, { message: &#39;ping&#39; });</pre><p>Full example <a href="https://nodejs.org/docs/latest/api/worker_threads.html#workerpostmessagetothreadthreadid-value-transferlist-timeout">here</a>.</p><p>I hope you enjoyed this summary. 
I aimed to deliver the information promptly and highlight the most important updates.</p><p>Also, follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://blog.bitsrc.io/you-dont-know-node-js-eventloop-8ee16831767">You Don’t Know Node.js EventLoop</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8899620ddebf" width="1" height="1" alt=""><hr><p><a href="https://blog.bitsrc.io/what-is-new-in-node-js-v22-5-8899620ddebf">What is new in Node.js V22.5</a> was originally published in <a href="https://blog.bitsrc.io">Bits and Pieces</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Essential JavaScript Design Patterns]]></title>
            <link>https://javascript.plainenglish.io/essential-javascript-design-patterns-3850a85b37ed?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/3850a85b37ed</guid>
            <category><![CDATA[design-patterns]]></category>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Mon, 25 Dec 2023 18:13:24 GMT</pubDate>
            <atom:updated>2024-07-17T19:15:15.210Z</atom:updated>
            <content:encoded><![CDATA[<h4>Understanding Common Design Patterns with JavaScript</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5lc9rCe1TP0Qk-Zckwy9_g.jpeg" /><figcaption>Generated via leonardo.ai</figcaption></figure><p>There are numerous programming design patterns that you’ve likely used without being aware of them.</p><p>In 2019, I presented a talk at JS Conf Armenia on this topic. The presentation is in Armenian, and you are <a href="https://youtu.be/NCyiZQ5dT5o">welcome to watch</a> it if you’re interested. Today, however, I have chosen to conduct the same research in written form.</p><p>I will cover some common and useful design patterns, providing JavaScript examples.</p><p>Design patterns are invaluable in programming: by following certain principles, we ultimately achieve well-structured, easily understandable, reusable, and manageable code. When you start learning design patterns, certain concepts may appear unclear at first, especially when you could express the same idea more briefly. But remember, high-quality code doesn’t always mean it’s the shortest.</p><p>As I mentioned earlier, most developers use different kinds of design patterns daily, but they are not aware of them. There are so many design patterns, and each has its principles, scopes, and names.</p><p>Design patterns are not programming language-specific. Rather, they are programming-paradigm-specific. Functional programming, object-oriented programming, and other paradigms each have their own design patterns.</p><h3>Design Pattern types</h3><p>Think of design patterns as helpful guides in the world of Objects and Classes. When we talk about classes, we are also talking about function constructors. 
These patterns help us to maintain and organize high-quality code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7HVvN1oI1NgiahrDGOQ_RA.png" /><figcaption>Design Pattern map</figcaption></figure><p>According to this map, there are three main types of design patterns.</p><ul><li><strong>Creational Design Patterns</strong> focus on the process of object/class creation.</li><li><strong>Structural Design Patterns</strong> are concerned with the composition of classes or objects.</li><li><strong>Behavioral Design Patterns</strong> focus on the interaction and communication between objects and classes.</li></ul><p>We will not delve deeper into each of those design patterns from the image. I just want to introduce some design patterns and their examples. Afterwards, you can dedicate time to books on similar topics. This is something that should be taken seriously, and you can’t learn everything in just a few hours. It requires constant practice.</p><h3>1. Singleton</h3><p>This pattern ensures that a class or object has only one instance and provides a global point of access to that instance.</p><h4>Database connection example</h4><p>In this example, the MongoClient is typically used to connect to a MongoDB database. The Singleton pattern here would ensure that there is only one instance of the MongoClient throughout your application, preventing multiple unnecessary connections to the database.</p><pre>const { MongoClient } = require(&#39;mongodb&#39;);<br><br>const url = &#39;mongodb://localhost:27017&#39;;<br><br>// here you may have other options as well ....<br><br>module.exports = MongoClient.connect(url);</pre><p>When we import and use the exported object in different parts of our application, it will always refer to the same instance of the MongoClient. This helps in managing and efficiently reusing the database connection.</p><h3>2. 
Factory</h3><p>In simple terms, this pattern encapsulates the instantiation logic, providing a way to create objects without specifying their exact class.</p><pre>class User {<br>  constructor(name) {<br>    this.name = name;<br>  }<br>}<br><br>function createUser(name) {<br>  return new User(name);<br>}<br><br><br>const myUser = createUser(&#39;Sara&#39;);</pre><h4>Image example</h4><pre>function createImage(name) {<br>  if (name.match(/\.jpeg$/)) {<br>    return new JpegImage(name);<br>  } if (name.match(/\.gif$/)) {<br>    return new GifImage(name);<br>  } if (name.match(/\.png$/)) {<br>    return new PngImage(name);<br>  }<br>  throw new Error(&#39;Unsupported format!&#39;);<br>}</pre><p>In this example:</p><ul><li>createImage is the factory method responsible for creating instances of different image types.</li><li>JpegImage, GifImage, PngImage are the product classes, each representing a specific type of image.</li></ul><p>Here are the benefits of using the Factory Pattern in this example:</p><ul><li><strong>Encapsulation</strong>: The creation logic is encapsulated in the createImage function, isolating it from the rest of the code.</li><li><strong>Flexibility</strong>: If you want to add a new image format or modify the creation process, you can do it within the factory function without affecting the client code.</li><li><strong>Readability</strong>: The client code that uses createImage is abstracted from the details of how each image type is created, making the code more readable and maintainable.</li></ul><p>Overall, this example nicely demonstrates the Factory Pattern by providing a centralized way to create instances of different image types based on a common interface.</p><h3>3. Strategy</h3><p>This is a behavioural design pattern that defines a family of algorithms, encapsulates each one, and makes them interchangeable. 
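</p><p>As a concrete illustration, here is a hypothetical sketch of what a family of shipping strategies and their context could look like (the class bodies and flat rates are my assumptions, not from the original talk):</p>

```javascript
// Each strategy exposes the same calculate(pkg) interface; the flat
// rates below are made-up placeholder values.
class UPS {
  calculate(pkg) { return 45; }
}
class USPS {
  calculate(pkg) { return 39; }
}
class Fedex {
  calculate(pkg) { return 43; }
}

// The context keeps a reference to the current strategy and
// delegates the actual work to it, so algorithms are swappable.
class Shipping {
  setStrategy(strategy) {
    this.strategy = strategy;
  }
  calculate(pkg) {
    return this.strategy.calculate(pkg);
  }
}

const shipping = new Shipping();
shipping.setStrategy(new Fedex());
console.log(shipping.calculate({ weight: 2 })); // 43
```

<p>Because the context depends only on the shared calculate interface, adding a new carrier never requires modifying the Shipping class.</p><p>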
It allows the client to choose an algorithm from a family of algorithms at runtime.</p><pre>const ups = new UPS();<br>const usps = new USPS();<br>const fedex = new Fedex();<br><br>const shipping = new Shipping();<br><br>shipping.setStrategy(fedex);<br><br>shipping.calculate();<br><br>// YOU CAN ALSO DO THIS<br>// shipping.setStrategy(ups);<br>// shipping.setStrategy(usps);</pre><p>In this example:</p><ul><li>UPS, USPS, and Fedex are different strategies or algorithms for shipping.</li><li>Shipping is the context class that uses a strategy.</li></ul><p>The key benefit of this Pattern is that it allows the client (Shipping class in this case) to switch between different algorithms (strategies) at runtime without altering its structure. It promotes code reusability, flexibility, and easy maintenance.</p><h3>4. Adapter</h3><p>It allows us to access the functionality of a class using a different interface.</p><pre>// Existing class with a different interface<br>class OldSystem {<br>  requestInfo() {<br>    return &quot;Information from the old system.&quot;;<br>  }<br>}<br><br>// Adapter to make the OldSystem compatible with the new interface<br>class Adapter {<br>  constructor(oldSystem) {<br>    this.oldSystem = oldSystem;<br>  }<br><br>  fetchDetails() {<br>    const oldInfo = this.oldSystem.requestInfo();<br>    return `Adapted: ${oldInfo}`;<br>  }<br>}<br><br>// Using the Adapter we make OldSystem compatible with another interface<br>const oldSystem = new OldSystem();<br>const adaptedSystem = new Adapter(oldSystem);</pre><p>So, the overall purpose is to make an existing class (OldSystem) work with a new interface by creating an adapter (Adapter) that translates the methods from the old class to match the expected interface. The client code can then interact with the adapted system (adaptedSystem) without worrying about the differences in the original and desired interfaces.</p><h3>5. 
Command</h3><p>Here we have a simple calculator with undo functionality using the Command Pattern. It defines basic math operations, creates command objects for each operation, and utilizes a calculator object to execute, undo, and log these operations. The example performs calculations, undoes two operations, and displays the final result. The Command Pattern enables flexible and extensible command execution in a clean and organized manner.</p><pre>const calculator = new Calculator();<br><br>// Perform calculations via commands<br>calculator.execute(new AddCommand(100));<br>calculator.execute(new SubCommand(24));<br>calculator.execute(new MulCommand(6));<br>calculator.execute(new DivCommand(2));<br><br>// Reverse last two commands (undo calculations)<br>calculator.undo();<br>calculator.undo();<br><br>console.log(`Value: ${calculator.getCurrentValue()}`);</pre><p>It’s obvious that this isn’t the complete example. You can find it <a href="https://github.com/nairihar/JSConf-Armenia-2019/blob/master/command/calculator.js">here</a>.</p><p>Why bother writing a basic calculator in 100 lines when it could be done in less? The thing is, the impact may not be obvious immediately.<br>In big and complicated projects, being clear is super important. Opting for consistent design patterns ensures that you and your team uniformly approach the code. This not only makes the project more understandable but also facilitates smoother collaboration, even when dealing with sizable projects/codebases.</p><p>Imagine you’re in an interview, tasked with writing a calculator. How do you think the interviewer would perceive a basic calculator compared to one crafted using this design pattern? It becomes a showcase of your understanding and knowledge, leaving a lasting impression.</p><h3>6. 
Module</h3><p>With the help of this design pattern, we can prevent pollution of the global namespace by keeping variables and functions within a private scope.</p><p>Numerous implementations exist, but in this article, I’ll present two straightforward examples that are exceptionally simple and clear within the context of Node.js.</p><h4>Implementation 1</h4><pre>// utils.js<br><br>const privateList = [];<br>const privateObject = {};<br><br>function find() {<br>  // ...<br>}<br><br>function each() {<br>  // ...<br>}<br><br>module.exports = {<br>  find, each<br>};</pre><h4>Implementation 2</h4><pre>// utils.js<br><br>const privateList = [];<br>const privateObject = {};<br><br>exports.find = function () {<br>  // ...<br>};<br><br>exports.each = function () {<br>  // ...<br>};</pre><p>In both examples, the functions and variables inside the module are encapsulated (hidden), providing a clean and organized way to structure code. Users of the module can only access what is explicitly exported, helping to maintain a clear separation of concerns and avoiding naming conflicts with other parts of the codebase.</p><h3>7. Pub/Sub</h3><p>This behavioral design pattern facilitates communication between different parts of a software system without them having to directly reference each other. It involves a mechanism where entities can subscribe to receive messages (<strong>subscribers</strong>) and other entities can publish messages to them (<strong>publishers</strong>).</p><pre>const subscriber1 = new Subscriber();<br>const subscriber2 = new Subscriber();<br><br>const publisher = new Publisher();<br><br>subscriber1.sub(&#39;t.me/javascript&#39;, (msg) =&gt; {<br>  console.log(msg);<br>});<br><br>subscriber2.sub(&#39;t.me/javascript&#39;, (msg) =&gt; {<br>  console.log(msg);<br>});<br><br>publisher.pub(&#39;t.me/javascript&#39;, &#39;Quiz #123&#39;);</pre><p>Subscriber is a class that represents an entity that wants to receive messages. 
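</p><p>The article does not show the implementation of these two classes, so here is a hypothetical minimal sketch, with a shared in-memory registry standing in for a real message broker:</p>

```javascript
// Shared registry mapping each channel name to its callbacks;
// in a distributed setup, a message broker plays this role.
const channels = new Map();

class Subscriber {
  sub(channel, callback) {
    if (!channels.has(channel)) channels.set(channel, []);
    channels.get(channel).push(callback);
  }
}

class Publisher {
  pub(channel, message) {
    // Only callbacks registered for this exact channel are invoked.
    (channels.get(channel) || []).forEach((callback) => callback(message));
  }
}

// Usage mirroring the snippet above:
const received = [];
const subscriber1 = new Subscriber();
subscriber1.sub('t.me/javascript', (msg) => received.push(msg));

const publisher = new Publisher();
publisher.pub('t.me/javascript', 'Quiz #123');
console.log(received); // [ 'Quiz #123' ]
```

<p>Note that publishers and subscribers never reference each other directly; the shared registry is the only thing connecting them.</p><p>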
Each instance of Subscriber can subscribe to specific channels. And the Publisher is a class that represents an entity that sends out messages to specific channels.</p><p>Every time a message is published to a channel, the callback functions subscribed to it will be executed.</p><p>In summary, the Pub/Sub pattern provides a flexible way for different components or modules in a system to communicate without being directly aware of each other. Publishers send messages to specific channels, and subscribers express interest in receiving messages from particular channels, creating a decoupled and scalable communication system.</p><h3>8. Observer</h3><p>In the world of design patterns, numerous patterns may seem similar at first glance, but in reality, they have subtle differences. Similarly, this design pattern shares similarities with the Pub/Sub pattern.</p><pre>const observer = new Observable();<br><br>// Subscriber 1<br>observer.subscribe(&#39;channelName&#39;, (msg) =&gt; console.log(msg)); // Hello<br>// Subscriber 2<br>observer.subscribe(&#39;channelName&#39;, (msg) =&gt; console.log(msg)); // Hello<br><br>observer.notify(&#39;channelName&#39;, &#39;Hello&#39;);</pre><h4>Number of Subjects</h4><p>In the Observer pattern, there’s typically one subject (or observable) that maintains a list of dependents (observers) and notifies them of changes in its state. But in Pub/Sub, there can be multiple publishers and multiple subscribers, and they communicate through channels. 
Publishers send messages to specific channels, and subscribers express interest in receiving messages from particular channels.</p><h4><strong>Flexibility and Scalability</strong></h4><p>Pub/Sub is more flexible and scalable, especially in distributed systems. Publishers and subscribers can be added or removed without directly impacting each other, making it easier to extend the system. However, the Observer pattern might require more careful management of dependencies, and adding or removing observers may involve modifications to the subject.</p><h4>The granularity of Notification</h4><p>In the Observer pattern, when the subject notifies observers, it sends the same message to all observers. In Pub/Sub, when a message is published to a channel, only the subscribers of that channel receive the message.</p><h4>Coupling</h4><p>The Observer pattern often involves a more direct relationship between the subject and its observers. Observers are aware of the subject and its changes.<br>Pub/Sub promotes a more decoupled approach. Publishers and subscribers are not explicitly aware of each other; they communicate through a central hub (the message broker) without direct references.</p><p>While the design patterns may appear similar, there are crucial differences between them.</p><h3>9. Constructor</h3><p>This is a way to define and initialize object instances with their properties and methods.</p><pre>class Movie {<br>  constructor(name, year) {<br>    this.name = name;<br>    this.year = year;<br><br>    this.about = () =&gt; {<br>      return `${name} movie has been shot in ${year}`;<br>    };<br>  }<br>}<br><br>const hp = new Movie(&#39;Harry Potter&#39;, 2001);<br>const johnWick = new Movie(&#39;John Wick&#39;, 2014);</pre><p>The Constructor Pattern provides a convenient way to create multiple instances of objects with shared properties and methods.</p><p>The Constructor Pattern is often associated with classes in JavaScript, but it’s not limited to classes. 
In JavaScript, a constructor is essentially a function that creates instances of objects.</p><pre>function Movie(name, year) {<br>  this.name = name;<br>  this.year = year;<br><br>  this.about = () =&gt; {<br>    return `${name} movie has been shot in ${year}`;<br>  };<br>}<br><br>const johnWick = new Movie(&#39;John Wick&#39;, 2014);</pre><p>It’s more about the concept of using a function (whether a class constructor or a regular function) to create and initialize objects. It’s a way to structure and create instances of objects with shared properties and methods.</p><pre>function createMovie(name, year) {<br>  const about = () =&gt; {<br>    return `${name} movie has been shot in ${year}`;<br>  };<br><br>  return {<br>    name, year, about<br>  };<br>}</pre><h3>Summary</h3><p>Remember, every design pattern can have multiple implementations, and implementations can differ in every language. However, the idea is the same.</p><p>Every design pattern comes with its pros and cons.</p><p>The cool thing is that these patterns work in all sorts of programming languages. So, if you learn them in Java, you can apply the same tricks in JavaScript.</p><p>They’re like a shared language, pros and cons included, that all programmers can understand!</p><p>Consider using up-to-date books to understand those patterns in your favorite programming language. For JavaScript, here are a few examples of helpful books.</p><ul><li>Node.js Design Patterns by Mario Casciaro</li><li>Learning JavaScript Design Patterns by Addy Osmani</li></ul><p>Thank you for taking the time to read this article. I hope you found it valuable. 
Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://blog.bitsrc.io/you-dont-know-node-js-eventloop-8ee16831767">You Don’t Know Node.js EventLoop</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3850a85b37ed" width="1" height="1" alt=""><hr><p><a href="https://javascript.plainenglish.io/essential-javascript-design-patterns-3850a85b37ed">Essential JavaScript Design Patterns</a> was originally published in <a href="https://javascript.plainenglish.io">JavaScript in Plain English</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why I don’t burn out and still love programming]]></title>
            <link>https://nairihar.medium.com/why-i-dont-burn-out-and-still-love-programming-52de5b0018a4?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/52de5b0018a4</guid>
            <category><![CDATA[hobby]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[burnout]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Sat, 09 Dec 2023 01:32:02 GMT</pubDate>
            <atom:updated>2023-12-09T01:34:27.373Z</atom:updated>
            <content:encoded><![CDATA[<h4>My Story of Beating Burnout and Loving Programming</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A5f1awrEuELY0qIvbVZpOA.jpeg" /></figure><p>If you have over 5 years of programming experience, it’s quite likely that you’ve experienced burnout at some point.</p><p>You’re facing a 10-day deadline, and you end up procrastinating for the first 8 days because you just don’t have the mood to work. However, you manage to complete the task within the last two days out of necessity.<br>It’s a familiar scenario — you open your laptop, but the motivation to code consistently for the 5th or 6th year isn’t there.</p><p>I began my journey in the programming world almost 10 years ago. I have been working as a software engineer for 8 consecutive years. The last few years lacked proper vacations, and at times, I experienced burnout. Nevertheless, I still find joy in my work every day, and in this article, I will share my secret…</p><h3>The Journey into the World of Programming</h3><p>When I was learning programming, at some point I understood that this is the thing that I want to do every day. I fell in love with programming, and it became one of the prioritized hobbies in my life.</p><p>I got a bit crazy, didn’t sleep well, and missed my breakfasts and lunches. I was like a zombie, tirelessly consuming new information from the internet. And it boosted my career.</p><p>You may say — we all understand that if you’re doing something too much, then you will lose your interest too early.<br>- Fortunately, my case is quite different.</p><h4>Hobbies</h4><p>I’m not quite sure why, but it feels like I have too many hobbies (magic tricks, cycling, running, podcasting, writing, etc.). You might think that I simply adopt a hobby for a couple of weeks and then promptly forget about it. 
However, that’s not the case; I genuinely strive to deeply immerse myself in any aspect that captivates me.<br>I spent 4–5 years learning magic tricks, 3–4 years cycling and running long distances, and so on … Sometimes I even sensed that my focus leaned more towards one of those hobbies than towards programming itself.</p><p>While having those types of hobbies, I never felt that I was losing my interest in programming. Programming is still a hobby, one that helps me make money and lets me concentrate on my other hobbies.</p><p>We engineers spend lots of time doing the same thing every day. Even though it can be a hobby for us, we may lose interest if we do it too much.</p><p>Somehow, we need to maintain a balance, just like on the cover of this article. On the left side, there’s a large stone symbolizing programming, which often consumes most of our time. Finding a similar stone for the right side to achieve balance is challenging, as we have limited time in our day. That’s why, on the right side of the cover, there are multiple stones stacked on top of each other. Each of those can carry a different weight, and each can bring various emotions and joy to our day.<br><br>It’s quite similar to my situation; every day, I spend hours coding, but I make an effort to maintain my interest and prevent burnout through other hobbies.</p><h4>If you like dancing, why don’t you practice every day?</h4><p>In addition to my hobbies mentioned above, I also enjoy Latin dancing, and I practiced for 1–2 years. I prioritized it in my daily routine because I enjoyed dancing.</p><p>One day, I asked the question above to one of my friends, and he replied:</p><blockquote>Even though I have plenty of time and enjoy dancing, I don’t want to lose my dancing interest. 
If I practice every day, it’s likely that I will lose interest in dancing within a short period.</blockquote><p>The thing is that, unintentionally, I had too many hobbies at the same time, and they all helped and continue to help me maintain my interest in programming.</p><p>Sometimes, I feel like I have a new hobby addiction. However, I understand that if I try to balance my hobbies, as my friend mentioned, I can’t get the maximum from each one, and it will definitely affect my interest in programming. I have certain expectations of myself, and I need to satisfy them, mentally and otherwise; programming alone can’t always do that. I strive to extract the most from each hobby, and whenever I feel it’s losing its appeal, I seek out a new one or return to an old one.</p><p>I wouldn’t say that I disagree with my friend. He is right. However, each of us has our priorities and interests in life. Somehow, I ended up in this situation that magically works, and I am not displeased with it.</p><h4>No burnout? Really?</h4><p>I would be lying if I said I never experienced burnout. Even though I shared my story, I want to mention that I still have burnout sometimes. But that burnout isn’t significant; it has a really weak effect.</p><p>We need to understand that it’s not possible to prevent burnout completely. We just need to minimize it. If it weren’t minimal in my case, I wouldn’t continue to enjoy coding, and, like some of my friends who switched from programming to business or project management, I might have stopped loving it.</p><h4>Programming as a hobby</h4><p>If programming isn’t a hobby for you, then things are probably harder, because you’re not enjoying your job. You shouldn’t restrict yourself to programming just because it’s popular or lucrative. No, you need to start loving it; you need to take it as a hobby. 
Otherwise, you might find yourself in a huge struggle, at least in the long run.</p><h4>Summary</h4><p>If you are a beginner who has just started your journey in programming, try to gather as much information as you can. The longer you do the same thing, the more likely you are to get tired of it. In the beginning, your productivity is high, but over time, it tends to decrease. Try to understand that and pace yourself. If you have been in software engineering for more than 5 years, make sure to also concentrate on other things. Make sure that you are satisfying your needs.</p><p>Thank you for taking the time to read this article. I hope you found it valuable. Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767">You Don&#39;t Know Node.js EventLoop</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52de5b0018a4" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Native support of .env in Node.js]]></title>
            <link>https://blog.stackademic.com/native-support-of-env-in-node-js-b1a9497ff6d9?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/b1a9497ff6d9</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[environment]]></category>
            <category><![CDATA[configuration]]></category>
            <category><![CDATA[node]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Wed, 06 Sep 2023 03:29:31 GMT</pubDate>
            <atom:updated>2023-09-06T04:07:02.774Z</atom:updated>
            <content:encoded><![CDATA[<h3>Native Support of <strong>.env</strong> in Node.js</h3><h4>Load environment variables without an additional library</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Q6QtCEXCCsbRJmHuUO87xA.jpeg" /></figure><p>Many programming languages have a handy tool called dotenv for loading environment variables from a .env file.</p><p>The Node.js dotenv library is no different, and it&#39;s one of our go-to choices for managing environment variables. It&#39;s a simple and important addition to any project.</p><p>With the exciting release of <a href="https://nodejs.org/en/blog/release/v20.6.0">Node.js V20.6.0</a>, there’s now native support for .env files. This means you can effortlessly load environment variables from a .env file into your Node.js application’s process.env, all without the need for external dependencies.</p><pre>node --env-file .env</pre><h3>Differences</h3><ol><li>When Node.js starts up, it automatically loads and parses the .env file. This allows you to include environment variables that configure Node.js itself, such as NODE_PRESERVE_SYMLINKS.</li><li>While this may not align with the behaviour of current dotenv packages, it’s crucial to recognize that Node.js handles it differently. 
In Node.js, variables defined in the .env file will not override existing environment variables.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1oCUvjz0AApg111BG2xQwA.png" /><figcaption><a href="https://nodejs.org/dist/latest-v20.x/docs/api/cli.html#environment-variables">https://nodejs.org/dist/latest-v20.x/docs/api/cli.html#environment-variables</a></figcaption></figure><p>There is currently an ongoing <a href="https://github.com/nodejs/node/pull/49424">discussion</a> regarding this matter, and it’s highly likely that this behaviour will change.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JQqlFP5hftkap_v7A_r07w.png" /><figcaption><a href="https://github.com/nodejs/node/pull/49424">https://github.com/nodejs/node/pull/49424</a></figcaption></figure><p>3. While many dotenv packages offer additional features, such as support for multiple .env files, Node.js takes a simpler approach. It allows us to specify only one .env file. Attempting to specify multiple files will result in Node.js reading only the last one specified.</p><pre>node --env-file .env --env-file .env.staging</pre><p>Adding the dotenv library might seem like a small step, but it’s a significant one: it grew our dependency list by one. And even though its configuration can be a single line, it’s not good to mix technical plumbing with business logic.</p><p>Node.js is always evolving, and you can stay up-to-date with all the latest changes by referring to the <a href="https://nodejs.org/en/blog/release/v20.6.0">documentation</a>.</p><p>Thank you for taking the time to read this comprehensive article. 
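One last illustration of the no-override rule from point 2 above: a few lines of plain JavaScript can mimic it. Note that parseEnvFile and applyEnv below are hypothetical helpers written for this sketch, not Node.js internals, and they ignore quoting and multiline values.

```javascript
// Sketch: parse simple KEY=VALUE lines and apply them to an env object
// without overriding keys that are already defined, mirroring how
// `node --env-file .env` treats pre-existing environment variables.
function parseEnvFile(text) {
  const vars = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks and comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}

function applyEnv(env, fileVars) {
  for (const [key, value] of Object.entries(fileVars)) {
    if (!(key in env)) env[key] = value; // existing variables win
  }
  return env;
}

const env = { NODE_ENV: 'production' };
applyEnv(env, parseEnvFile('NODE_ENV=development\nPORT=3000'));
console.log(env); // NODE_ENV is kept as 'production', PORT is added
```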
I hope you found it informative and gained valuable insights from it.<br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><h4>Additional reading:</h4><ul><li><a href="https://nairihar.medium.com/node-js-multithreading-executing-callbacks-in-separate-threads-39e83a0a9ded">Executing Node.js Callbacks in Separate Threads</a></li><li><a href="https://nairihar.medium.com/you-dont-know-node-js-eventloop-8ee16831767">You Don&#39;t Know Node.js EventLoop</a></li></ul><p><em>Thank you for reading until the end. Please consider following the writer and this publication. Visit </em><a href="https://stackademic.com/"><strong><em>Stackademic</em></strong></a><em> to find out more about how we are democratizing free programming education around the world.</em></p><hr><p><a href="https://blog.stackademic.com/native-support-of-env-in-node-js-b1a9497ff6d9">Native support of .env in Node.js</a> was originally published in <a href="https://blog.stackademic.com">Stackademic</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Executing Node.js Callbacks in Separate Threads]]></title>
            <link>https://blog.bitsrc.io/node-js-multithreading-executing-callbacks-in-separate-threads-39e83a0a9ded?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/39e83a0a9ded</guid>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[multithreading]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[threads]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Mon, 04 Sep 2023 20:33:38 GMT</pubDate>
            <atom:updated>2024-02-06T11:50:21.653Z</atom:updated>
            <content:encoded><![CDATA[<h4>Abstraction for Node.js Multithreading</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DVudSCRyJJ671uCeRb0bhw.jpeg" /><figcaption>Generated using Leonardo AI</figcaption></figure><p>It has already been five years since Node.js introduced the worker_threads native library, designed for creating workers, essentially functioning as threads.</p><p>When it was released, I was incredibly enthusiastic and eager to build something interesting with it.</p><p>When discussing worker threads, I noticed that building a proper system on top of worker_threads isn’t straightforward for everyone. Consequently, I embarked on a quest to find abstractions that could simplify and enhance the potential for more intriguing solutions.</p><p>I came across several intriguing abstractions, but ultimately, I chose to develop my own — one that is lightweight and straightforward.</p><h3>funthreads</h3><p><a href="https://www.npmjs.com/package/funthreads">NPM</a> | <a href="https://github.com/nairihar/funthreads#readme">Github</a></p><p>The idea is simple: you can execute your function in a dedicated thread by utilizing Promises.</p><pre>import executeInThread from &#39;funthreads&#39;;<br><br>async function calculate() {<br>    const values = await Promise.all([<br>        executeInThread(() =&gt; 2 ** 10),<br>        <br>        executeInThread(() =&gt; 3 ** 10)<br>    ]);<br>    <br>    console.log(values);<br>}<br><br>calculate();</pre><p>You can relocate CPU-intensive operations to separate threads and easily retrieve the results using Promises.</p><p>Just try it yourself and see.</p><pre>$ npm i funthreads</pre><p><a href="https://github.com/nairihar/funthreads">https://github.com/nairihar/funthreads</a></p><p>I didn’t spend much time on this library, and though I haven’t used it in a real project, I stayed excited and made small improvements over time.</p><p>After publishing this library, I noticed other 
extended implementations of the same idea.</p><h4>node-worker-threads-pool</h4><p>As the authors describe: <em>Simple worker threads pool using Node’s worker_threads module. Compatible with ES6+ Promise, Async/Await and TypeScript🚀.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4XO4TpyAsTcPdlgz0v891w.png" /><figcaption><a href="https://github.com/SUCHMOKUO/node-worker-threads-pool/tree/master">https://github.com/SUCHMOKUO/node-worker-threads-pool/tree/master</a></figcaption></figure><p>But this is not all: there is another library that was released even before worker_threads.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JFuJLjehYAqEUvxulln3EQ.png" /><figcaption><a href="https://www.npmjs.com/package/threads">https://www.npmjs.com/package/threads</a></figcaption></figure><p>Previously, they used various strategies and approaches to support multithreading. However, after worker_threads, their implementations significantly improved.</p><p>Here’s a concise example that illustrates how the library operates.</p><pre>// main.js<br>import { spawn, Thread, Worker } from &quot;threads&quot;<br><br>const auth = await spawn(new Worker(&quot;./workers/auth&quot;))<br>const hashed = await auth.hashPassword(&quot;Super secret password&quot;, &quot;1234&quot;)<br><br>console.log(&quot;Hashed password:&quot;, hashed)<br><br>await Thread.terminate(auth)</pre><pre>// worker.js<br>import sha256 from &quot;js-sha256&quot;<br>import { expose } from &quot;threads/worker&quot;<br><br>expose({<br>  hashPassword(password, salt) {<br>    return sha256(password + salt)<br>  }<br>})</pre><p>This seems to be the biggest library, providing a whole set of tools for multithreading in Node.js.</p><p>P.S. Both of the libraries offer a method to establish a thread pool and employ it for your specific use cases.</p><p>Unfortunately, I haven’t used any of them in production, as there wasn’t a need. 
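Still, it’s worth seeing the core trick these abstractions share: a function can’t be structured-cloned across threads, so its source text is shipped to the worker and re-created there. Here is a minimal sketch of just that step (serializeFunction and reviveFunction are hypothetical names for this illustration; each library implements the idea in its own way):

```javascript
// A function can't be posted to a worker directly, but its source can.
function serializeFunction(fn) {
  return fn.toString();
}

// On the worker side, the source text is turned back into a callable.
// (Real libraries pass the string via worker.postMessage or workerData.)
function reviveFunction(source) {
  return new Function(`return (${source});`)();
}

const source = serializeFunction((n) => n ** 10);
const revived = reviveFunction(source);
console.log(revived(2)); // 1024, the same result the original function gives
```

Keep in mind that a revived function loses its original closure, which is why all of these libraries expect the callback to be self-contained.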
However, I’m confident that many projects can benefit from them in real-world scenarios.</p><p>It’s interesting to observe how they work and understand the mechanics of the abstraction.</p><p>The funthreads library is quite straightforward, consisting of just a few files with small functions. You can begin with it, and I’m confident that you’ll grasp the concept easily, realizing how simple the provided abstraction is.</p><h3>Wrapping up</h3><p>Compared with other programming languages, everything in JavaScript is quite different. Everything operates in its own unique way and has its proper explanation.</p><p>JavaScript has classes, and Node.js supports multithreading. While it may seem overwhelming compared to other languages, it’s important to embrace JavaScript for its own uniqueness.</p><p>Thank you for taking the time to read this comprehensive article. <br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><p>I hope you found it informative and gained valuable insights from it.</p><h3>Read more</h3><p><a href="https://blog.bitsrc.io/you-dont-know-node-js-eventloop-8ee16831767">You Don’t Know Node.js EventLoop</a></p><hr><p><a href="https://blog.bitsrc.io/node-js-multithreading-executing-callbacks-in-separate-threads-39e83a0a9ded">Executing Node.js Callbacks in Separate Threads</a> was originally published in <a href="https://blog.bitsrc.io">Bits and Pieces</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You Don’t Know Node.js EventLoop]]></title>
            <link>https://blog.bitsrc.io/you-dont-know-node-js-eventloop-8ee16831767?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/8ee16831767</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[multithreading]]></category>
            <category><![CDATA[event-loop]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Fri, 26 May 2023 22:48:35 GMT</pubDate>
            <atom:updated>2024-09-13T07:52:10.939Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z2Jku5Y9CjvW6NGHz3tj7g.jpeg" /><figcaption>Generated using Leonardo AI</figcaption></figure><h4>The Ultimate Guide to Understanding EventLoop in Node.js</h4><p>You might be wondering if the title of this article is a bit ambitious, but fear not! I’ve put in the time and effort to cover everything you need to know about the EventLoop in Node.js. And believe me, you’ll discover so many new things along the way!</p><p>Before beginning this article, I’d like to inform you that it’s important to have a basic understanding of Node.js and its EventLoop to fully comprehend the core concepts discussed.</p><p>This comprehensive guide is going to take some time to cover every detail that you need to know, so grab a cup of coffee and settle in for an exciting journey to the fascinating world of Node.js. Let’s get started!</p><p>Here is the list of topics covered in this article.</p><p>— What is Node.js<br> — Reactor Pattern<br> — Node.js Architecture<br> — Event Queues (I/O Polling, Macrotasks, Microtasks) <br> — Changes from Node v11<br>— Unblocking EventLoop using <strong>unref()</strong> (<em>added in Jul of 2024</em>)<br> — CommonJS vs ES modules<br>— Libuv (thread pool)<br> — DNS is problematic in Node.js<br> — Custom Promises <br>— Bluebird<br> — Summary<br>— A challenging interview question (<em>added in Feb of 2024</em>)<br> — Translated versions of this article</p><h3>What is Node.js</h3><p>If you take a look at the official documentation, you’ll find a brief explanation like this.</p><blockquote><strong>Node.js</strong> is a JavaScript runtime built on <strong>Chrome’s V8 JavaScript engine</strong>. 
It uses an <strong>event-driven</strong>, <strong>non-blocking I/O</strong> model that makes it lightweight and efficient.</blockquote><p>Well, that brief explanation doesn’t tell us much, does it?</p><p>There are so many important details and concepts related to the EventLoop in Node.js that require a more in-depth explanation.</p><p>Let’s explore them together!</p><h3><strong>Reactor Pattern</strong></h3><p><strong><em>event-driven model</em></strong></p><p>Node.js is written using the Reactor pattern, which provides what is commonly referred to as an event-driven model.</p><p>So, how does the Reactor pattern work in event-driven programming?</p><p>Let’s say we have an I/O request — in this example, it’s a file system action.</p><pre>fs.readFile(&#39;./file.txt&#39;, callback);</pre><p>When we make a function call that involves an I/O operation, the request is directed to the EventLoop, which then passes it on to the Event Demultiplexer.</p><blockquote>I/O stands for Input/Output and refers to communication between the CPU and external devices, such as disks and network interfaces.</blockquote><p>After receiving the request from the EventLoop, the Event Demultiplexer decides what type of hardware I/O operation needs to be performed, based on the I/O request. 
In the case of a file system read, the operation is delegated to the appropriate unit, which reads the file.</p><p>A specific C/C++ function will read the requested file and return the content to the Event Demultiplexer.</p><pre>uv__fs_read(req)</pre><p>When the requested operation is completed, the Event Demultiplexer generates a new event and adds it to the Event Queue, where it can be queued with other similar events.</p><p>Once the JavaScript Call Stack is empty, the event from the Event Queue will be processed, and our callback function will be executed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AJF5XcZVEKAI_Nfp7Bt2cw.png" /><figcaption>Visualization of the event-driven model.</figcaption></figure><p>Neither the Event Demultiplexer nor the Event Queue is a single component. This is the abstract view. For example, the implementation of Event Demultiplexer and Hardware I/O can vary depending on the operating system. Additionally, the Event Queue is not a single queue but rather consists of multiple queues.</p><p>— Where does all of this come from?</p><h3>libuv</h3><p>The EventLoop in Node.js is provided by the libuv library, which is written in C language specifically for Node.js. It provides the ability to work with the operating system using asynchronous I/O.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/365/1*D8Rzu8rdDa4WL7MqRYnZeA.png" /><figcaption><a href="https://libuv.org/">https://libuv.org/</a></figcaption></figure><blockquote>libuv is a multi-platform support library with a focus on asynchronous I/O.</blockquote><p>— Where does it fit in the Node.js architecture?</p><h3>Node.js architecture</h3><p>Using JavaScript, we interact with the operating system. But if we put it bluntly, JavaScript is a high-level language that is limited to basic operations such as creating variables, loops, and functions. 
It can’t do much more on its own.</p><p>— How does it work with the Operating System?</p><p>Many programming languages interact directly with the operating system. Therefore, if we integrate JavaScript with those languages, we can essentially work with the operating system using JavaScript.</p><p>Here is how it is already done.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6OmfVkxfF4QbYcVYnc83VQ.png" /><figcaption>Visualisation of Node.js architecture</figcaption></figure><p>The middle layer (Node.js) takes care of our JavaScript code and connects it with the operating system. Now let’s discuss the components of Node.js architecture.</p><h4>V8</h4><p>This should be a well-known engine that parses and executes our JavaScript code.</p><h4>libuv</h4><p>This is the library we previously discussed, which provides the EventLoop and most of the interactions needed to work with the operating system.</p><h4>Core modules</h4><p>Node.js provides various native modules, such as <strong>fs</strong>, <strong>http</strong>, and <strong>crypto</strong>. These native modules include JavaScript source code.</p><h4>C++ bindings</h4><p>In Node.js, we have an API that allows us to write C++ code, compile it, and require it in JavaScript as a module. These are called addons. 
Core modules may have their addons as well.</p><p>Node.js provides a compiler that generates addons, which essentially creates a .node file that can be required.</p><pre>require(&#39;./my-cpp-module.node&#39;);</pre><p>The require function in Node.js prioritizes loading .js and .json files, followed by addon files with the .node type.</p><h4>c-ares, zlib, etc</h4><p>There are also smaller libraries written in C/C++ that provide specific operations, such as file compression, DNS operations, and more.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pLIGFCrDdrsoHcr9zLP0FA.png" /><figcaption><a href="https://github.com/nodejs/node/tree/main/deps">https://github.com/nodejs/node/tree/main/deps</a></figcaption></figure><p>We are not limited to this. We will revisit libuv later in this article. For now, let’s continue forward.</p><h3><em>Event Queues</em></h3><p>The EventLoop is a mechanism that continuously processes and handles events in a single thread until there are no more events to handle. It is often referred to as a <strong>“semi-infinite loop”</strong> because it runs indefinitely until there are no more events to handle or an error occurs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KPTsDZaKlBYLT-fOYUr5pw.png" /><figcaption>Visualisation of EventLoop #1</figcaption></figure><p>As previously mentioned, the EventLoop consists of multiple queues, each with its priority level. In the following sections, we will delve into more detail about these priorities.</p><p>Once the Call Stack is empty, the EventLoop goes over the Queues and waits for an event to execute. 
It checks for timer-related events first.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IQyCYto3DfrVHW0uQy-S-w.png" /><figcaption>Visualisation of EventLoop #2</figcaption></figure><p>If a setTimeout or setInterval function has finished executing, the Event Demultiplexer will enqueue an event to the <strong>Timers</strong> queue.</p><p>Let’s say we have a setTimeout function which is scheduled to execute in one hour. It means that after one hour, the Event Demultiplexer will enqueue an event into this queue.</p><p>When the queue has events, the EventLoop will execute the corresponding callbacks until the queue is empty. Once the queue is empty, the EventLoop will move on to check other queues.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_XEi2SnocCZsc35IWXhMtg.png" /><figcaption>Visualisation of EventLoop #3</figcaption></figure><p>In the second position, we have the <strong>I/O</strong> queue, which is responsible for most of the asynchronous operations, such as file system operations, networking, and more.</p><p>Next, we have the <strong>Immediate</strong> queue, which is responsible for setImmediate calls. It allows us to schedule operations that should run after I/O operations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zZLN3roCz9TwhciY0J5Ryg.png" /><figcaption>Visualisation of EventLoop #4</figcaption></figure><p>And at the end, we have a specific queue called <strong>close events</strong>, dedicated to ‘close’ events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1rWmDSqZ_ZbgmGgvJI7XrQ.png" /><figcaption>Visualisation of EventLoop #5</figcaption></figure><p>It is responsible for handling all connections that have a close event, such as database and TCP connections. These events are queued here for execution.</p><p>In each cycle, the EventLoop checks all of these queues to determine if any events need to be executed. 
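You can observe this cycling yourself by blocking the thread and watching how late a due timer actually fires. A minimal sketch (blockFor is a hypothetical helper written for this example, not a library API):

```javascript
// Busy-wait to keep the thread occupied, simulating a CPU-heavy task.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // the EventLoop can process nothing meanwhile
}

const scheduled = Date.now();
setTimeout(() => {
  // The timer was due immediately, so any extra delay is EventLoop lag.
  console.log(`timer fired ${Date.now() - scheduled} ms late`);
}, 0);

blockFor(100); // the callback above cannot run until this returns
```

On an idle process the reported delay stays near zero; after the deliberate block it grows to roughly the blocked duration. Monitoring libraries apply the same idea continuously and aggregate the numbers into percentiles.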
The EventLoop typically takes only a few milliseconds to review all the queues and check if any events are to be executed. However, if the EventLoop is busy, it may take longer. Fortunately, there are many libraries available on npm that allow us to measure the duration of the EventLoop cycle.</p><p>You may think that we have already covered most of it, but I will say it’s just the beginning.</p><p>The events which are queued in these queues are also referred to as <strong>Macrotasks</strong>.</p><p>There are two types of tasks in the EventLoop: Macrotasks and Microtasks. We will discuss microtasks later on.</p><p>Now, let’s take a look at how the EventLoop determines when it’s time to stop the Node.js process because there are no more events to handle.</p><p>EventLoop maintains a refs counter, which starts at 0 when the process begins. Whenever there is an asynchronous operation, the counter is incremented by one. For example, if we have a setTimeout or readFile operation, each of these functions will increment the counter by one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gZNFaJse8Lvut-7zuAB46A.png" /><figcaption>Visualisation of EventLoop #6</figcaption></figure><p>When an event is pushed to the queue, and the EventLoop executes the callback of that particular macrotask, it also decreases the counter by one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yoGLdjoLga89aFNDZgfO9A.png" /><figcaption>Visualisation of EventLoop with counter</figcaption></figure><p>After processing the closed events queue, EventLoop checks the counter. If it is zero, it means there are no ongoing operations, and the process can exit. 
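Incidentally, this counter is not purely internal: timer handles expose it through their ref() and unref() methods, which are part of Node’s timers API. A small example:

```javascript
// A pending timer normally holds a reference on the EventLoop,
// keeping the process alive until the timer fires.
const timer = setTimeout(() => console.log('tick'), 1000);

console.log(timer.hasRef()); // the timer currently counts as ongoing work

// unref() tells the EventLoop not to wait for this timer:
// if nothing else is pending, the process is free to exit earlier.
timer.unref();
console.log(timer.hasRef());
```

If every remaining handle is unref’ed, the counter effectively reads zero and the process can shut down without waiting.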
However, if the counter is not zero, it means there are still ongoing operations, and the EventLoop will continue its cycles until all the operations are completed, and the counter becomes zero.</p><p>OK, enough theory, let’s do some practice!</p><pre>const fs = require(&#39;fs&#39;);<br><br>setTimeout(() =&gt; {<br>  console.log(&#39;hello&#39;);<br>}, 50);<br><br><br>fs.readFile(__filename, () =&gt; {<br>  console.log(&#39;world&#39;);<br>});</pre><p>Let’s take a look at a simple example with one timeout and one file system read to better understand how the EventLoop works in practice.</p><ol><li>When the Node process starts, V8 begins by parsing the JavaScript code and executing the setTimeout function. This triggers a C/C++ function (C_TIMEOUT) to execute inside libuv and increase the counter (refs++).</li><li>When V8 comes across the readFile function, it does the same thing. Libuv initiates the file read operation (C_FS), which again increases the counter (refs++).</li><li>Now there is nothing left for V8 to execute, and EventLoop takes over. It starts by checking each queue one by one until the counter reaches zero.</li><li>Once C_TIMEOUT is finished, an event is registered in the timers queue. When EventLoop checks the timers queue again, it detects the event and executes the corresponding callback, resulting in the “hello” message appearing in the console. The counter is then decremented, and the EventLoop continues to check each queue until the counter is zero.</li><li>At some point, depending on the file size, the C_FS operation is completed, and an event is registered in the I/O queue. Once again, EventLoop detects the event and executes the corresponding callback, which outputs the “world” message in the console. The counter is decremented again, and EventLoop resumes its work.</li><li>Finally, after checking the close event queue, EventLoop checks the counter. 
Since it is zero, the Node process exits.</li></ol><p>Visualizing the diagram can help you easily understand how the asynchronous code will work, and you won’t have to wonder about the results.</p><p>While we’ve been discussing Macrotasks, let’s cover another important thing.</p><h4>I/O Polling</h4><p>This process often confuses those who attempt to learn about the EventLoop. Many articles mention it as part of the EventLoop, but few explain what it does. Even from the <a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick">official documentation</a>, it’s hard to understand what it does.</p><p>Let’s take a look at this example.</p><pre>const fs = require(&#39;fs&#39;)<br>const now = Date.now();<br><br>setTimeout(() =&gt; {<br>  console.log(&#39;hello&#39;);<br>}, 50);<br><br>fs.readFile(__filename, () =&gt; {<br>  console.log(&#39;world&#39;);<br>});<br><br>setImmediate(() =&gt; {<br>  console.log(&#39;immediate&#39;);<br>});<br><br>while(Date.now() - now &lt; 2000) {} // 2 second block</pre><p>We have three operations: setTimeout, readFile, and setImmediate.<br>In the end, we have a while loop that blocks the thread for two seconds. During this time, all three corresponding events should be registered in their respective queues. This means that when V8 finishes executing the while loop, the EventLoop should see all three events in the same cycle and, based on the diagram, execute the callbacks in the following order:</p><pre>hello<br>world<br>immediate</pre><p>But the actual result looks like this:</p><pre>hello<br>immediate<br>world</pre><p>That’s because there is an extra process called I/O Polling.</p><p>Unlike other types of events, I/O events are only added to their queue at a specific point in the cycle. 
This is why the callback for setImmediate() will execute before the callback for readFile(), even though both are ready when the while loop is done.</p><p>The issue is that the I/O queue-checking stage of the EventLoop only runs callbacks that are already in the event queue. They don’t get put into the event queue automatically when they are done. Instead, they are only added to the event queue later, during the I/O polling.</p><p>Here is what happens after two seconds, when the while loop is finished.</p><ol><li>The EventLoop proceeds to execute the timer callbacks and finds that the timer has finished and is ready to be executed, so it runs it.<br>In the console, we see “hello”.</li><li>After that, the EventLoop moves on to the I/O callbacks stage. At this point, the file reading process is finished, but its callback is not yet marked to be executed. It will be marked later in this cycle. The EventLoop then continues through several other stages and eventually reaches the I/O poll phase. At this point, the readFile() callback event is collected and added to the I/O queue, but it doesn’t get executed yet.<br>It’s ready for execution, but the EventLoop will execute it in the next cycle.</li><li>Moving on to the next phase, the EventLoop executes the setImmediate() callback.<br>In the console, we see “immediate”.</li><li>The EventLoop then starts over again. Since there are no timers to execute, it moves to the I/O callbacks stage, where it finally finds and runs the readFile() callback.<br>In the console, we see “world”.</li></ol><p>This example can be a bit challenging to understand, but it provides valuable insight into the I/O polling process. If you were to remove the two-second while loop, you would notice a different result.</p><pre>immediate<br>world<br>hello</pre><p>setImmediate() will run in the first cycle of the EventLoop, when neither the setTimeout nor the file system operation is finished. 
After a certain period, the timeout will finish and the EventLoop will execute the corresponding callback. At a later point, when the file has been read, the EventLoop will execute the readFile callback.</p><p>Everything depends on the delay of the timeouts and the size of the file. If the file is large, it will take longer for the read process to complete. Similarly, if the timeout delay is long, the file read process may complete before the timeout. However, the setImmediate() callback is fixed and will always be registered in the event queue as soon as V8 executes it.</p><p>Let’s discuss some other interesting examples that will help us practice the diagram.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1rWmDSqZ_ZbgmGgvJI7XrQ.png" /><figcaption>Event Queue Diagram (Macrotasks)</figcaption></figure><h4>setTimeout &amp; setImmediate</h4><p>In this example, we have a timeout with a delay of 0 seconds and a setImmediate function. This is a tricky question, but if you answer correctly, it can leave a good impression during an interview.</p><pre>setTimeout(() =&gt; {<br>  console.log(&#39;setTimeout&#39;);<br>}, 0);<br><br>setImmediate(() =&gt; {<br>  console.log(&#39;setImmediate&#39;);<br>});</pre><p>The thing is, you never know which one will be logged first.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*leRen9liCPiuR9xVq9BS7w.png" /><figcaption>The process results in terminal #1</figcaption></figure><p>This is because sometimes a process can take longer (we are talking about milliseconds) to execute, causing the EventLoop to move past the timers queue while it is still empty. Alternatively, the EventLoop may work so quickly that the Demultiplexer doesn’t manage to register the event in the Event Queue in time. 
As a result, if you run this example multiple times, you may get different results each time.</p><h4>setTimeout &amp; setImmediate inside fs callback</h4><p>In contrast to the previous example, the result of this code is predictable. Take a moment to examine the code and consider the order in which the logs will appear, using the diagram as a guide.</p><pre>const fs = require(&#39;fs&#39;);<br><br>fs.readFile(__filename, () =&gt; {<br>  setTimeout(() =&gt; {<br>    console.log(&#39;setTimeout&#39;);<br>  }, 0);<br>  <br>  setImmediate(() =&gt; {<br>    console.log(&#39;setImmediate&#39;);<br>  });<br>});</pre><p>Since setTimeout and setImmediate are called inside the readFile callback, we know that when that callback executes, the EventLoop is in the I/O phase. So the next queue in its direction is the setImmediate queue. And since a setImmediate event is registered in its queue immediately, it&#39;s not surprising that the logs will always appear in this order.</p><pre>setImmediate<br>setTimeout</pre><p>Now that we have gained a good understanding of macrotasks and the workflow of the Event Loop, let’s continue exploring further.</p><p>First, let’s improve our diagram slightly by adding markings to indicate the phases that depict JavaScript executions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*X0ASGe2A5-PGe-m1UHrGVA.png" /><figcaption>Event Queue Diagram (including JS execution phases)</figcaption></figure><p>We have a single JavaScript execution when we run the Node.js process. After that, when there is nothing left to execute, V8 waits until the Event Loop receives an event and commands the execution of the corresponding callback. As you may have observed, there is a JavaScript execution phase after each queue phase. 
For instance, in the diagram, the execution of the callback for the timeout occurs during the second JavaScript phase.</p><p>So far, our discussion has focused on macrotasks and didn’t include any information about Promises and process.nextTick. Those two are called Microtasks.</p><p>During each JavaScript execution phase, additional processing takes place.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xzaypRNF7I8qMr_wljpNPQ.png" /><figcaption>Diagram: Microtasks</figcaption></figure><p>These two types of microtasks have their dedicated queues. Additionally, there are other microtask schedulers as well, such as MutationObserver and queueMicrotask, but for our discussion, we will focus on nextTick and Promise.</p><p>Once V8 executes all the JavaScript code, it proceeds to check the microtask queues, just like it does with macrotasks. If there is an event registered in the microtask queue, it will be processed, and the corresponding callback will be executed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*68ZEhgvd9htdWCyQ_ZyJSA.png" /><figcaption>Full diagram: Macrotasks and Microtasks</figcaption></figure><p>In the diagram, the light gray color represents the process.nextTick() queue, which holds the highest priority among scheduled tasks. 
The next dark gray color represents the Promise queue, which follows next in terms of priority.</p><p>Let’s try some examples.</p><h4>process.nextTick &amp; Promise</h4><p>This is a basic example which demonstrates the workflow of Microtasks.</p><pre>console.log(1);<br><br>process.nextTick(() =&gt; {<br>  console.log(&#39;nextTick&#39;);<br>});<br><br>Promise.resolve()<br>  .then(() =&gt; {<br>    console.log(&#39;Promise&#39;);<br>  });<br><br>console.log(2);</pre><p>In terms of output sequencing, the process.nextTick() callbacks will always be executed before the Promise callbacks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/570/1*KHjZQcjhKcwUgtsKy2sNeQ.png" /></figure><p>During the execution process, V8 begins with the first console log statement and then proceeds to execute the nextTick function, which registers an event in the queue. A similar process occurs with the Promise, where its callback is stored in a separate queue.<br>After V8 completes the execution of the last function call, resulting in the output of 2, it moves on to execute the events stored in the queues.</p><p>process.nextTick() is a function that allows a callback function to be executed immediately after the current operation completes, but before the Event Loop proceeds to the next phase.</p><p>When process.nextTick() is invoked, the provided callback is added to the nextTick queue, which holds the highest priority among scheduled tasks within the Event Loop. As a result, the callback will be executed before any other type of task, including Promises and other microtasks.</p><p>The primary use of process.nextTick() is for time-sensitive or high-priority operations that require prompt execution, bypassing the wait for other pending tasks. 
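</p><p>A classic illustration (a minimal sketch; the MyEmitter class is hypothetical) is emitting an event from a constructor: deferring the emit with process.nextTick() guarantees that listeners attached right after construction are already registered when the event fires.</p><pre>const EventEmitter = require(&#39;events&#39;);<br><br>class MyEmitter extends EventEmitter {<br>  constructor() {<br>    super();<br>    // Defer the emit until the current JS execution completes,<br>    // so the listener attached below is already registered<br>    process.nextTick(() =&gt; this.emit(&#39;ready&#39;));<br>  }<br>}<br><br>const emitter = new MyEmitter();<br>emitter.on(&#39;ready&#39;, () =&gt; console.log(&#39;ready fired&#39;)); // prints &#39;ready fired&#39;</pre><p>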
However, it is essential to exercise caution when using process.nextTick() to prevent blocking the Event Loop and causing performance degradation.</p><p>As long as at least one event remains in the Microtasks queue, the EventLoop will continue to prioritize it over the timers queue.</p><p>If we recursively run process.nextTick(), the EventLoop will never reach the timers queue, and the corresponding callbacks in the timers queue will never be executed.</p><pre>function recursiveNextTick() {<br>  process.nextTick(recursiveNextTick);<br>}<br><br>recursiveNextTick();<br><br>setTimeout(() =&gt; {<br>  console.log(&#39;This will never be executed.&#39;);<br>}, 0);</pre><p>In the above code, recursiveNextTick() function is invoked recursively using process.nextTick(). This causes the EventLoop to continuously process the nextTick queue, never allowing it to reach the timers queue.</p><p>As a result, the callback passed to setTimeout will never be executed, and the console statement inside it will never be printed.</p><p>Similarly, the same scenario will occur if we recursively use other microtasks. 
The EventLoop will be continuously occupied with processing the microtask queue, preventing it from reaching the timers queue or executing any other tasks.</p><p>Consequently, the callback passed to setTimeout will not be executed, and the console statement within it will never be printed.</p><pre>function recursiveMicrotask() {<br>  Promise.resolve().then(recursiveMicrotask);<br>}<br><br>recursiveMicrotask();<br><br>setTimeout(() =&gt; {<br>  console.log(&#39;This will never be executed.&#39;);<br>}, 0);</pre><p>This can lead to the EventLoop being blocked, which can cause scheduled timeouts to run with inaccurate timing or potentially never execute at all.</p><pre>setTimeout(() =&gt; {<br>  console.log(&#39;setTimeout&#39;);<br>}, 0);<br><br>let count = 0;<br><br>function recursiveNextTick() {<br>  count += 1;<br><br>  if (count === 20000000)<br>    return; // finish recursion<br><br>  process.nextTick(recursiveNextTick);<br>}<br><br>recursiveNextTick();</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/614/1*gjCFyVXPZTbXzh9U0v1YXA.png" /></figure><p>As you can observe, the timeout that was scheduled for 0 milliseconds executed after 2 seconds.</p><p><strong>So be careful with Microtasks!</strong></p><p>As a quick exercise, let’s try to predict the output of the code.</p><pre>process.nextTick(() =&gt; {<br>  console.log(&#39;nextTick 1&#39;);<br><br>  process.nextTick(() =&gt; {<br>    console.log(&#39;nextTick 2&#39;);<br><br>    process.nextTick(() =&gt; console.log(&#39;nextTick 3&#39;));<br>    process.nextTick(() =&gt; console.log(&#39;nextTick 4&#39;));<br>  });<br><br>  process.nextTick(() =&gt; {<br>    console.log(&#39;nextTick 5&#39;);<br><br>    process.nextTick(() =&gt; console.log(&#39;nextTick 6&#39;));<br>    process.nextTick(() =&gt; console.log(&#39;nextTick 7&#39;));<br>  });<br>  <br>});</pre><p>Here is the explanation:<br>When this code is executed, it schedules a series of nested process.nextTick callbacks.</p><ol><li>The 
initial process.nextTick callback is executed first, logging &#39;nextTick 1&#39; to the console.</li><li>Within this callback, two more process.nextTick callbacks are scheduled: one logging &#39;nextTick 2&#39; and another logging &#39;nextTick 5&#39;.</li><li>The callback scheduled as ‘nextTick 2’ is executed next, logging ‘nextTick 2’ to the console.</li><li>Inside this callback, two more process.nextTick callbacks are scheduled: one logging &#39;nextTick 3&#39; and another logging &#39;nextTick 4&#39;.</li><li>The callback scheduled as ‘nextTick 5’ is executed after ‘nextTick 2’, logging ‘nextTick 5’ to the console.</li><li>Inside this callback, two more process.nextTick callbacks are scheduled: one logging &#39;nextTick 6&#39; and another logging &#39;nextTick 7&#39;.</li><li>Finally, the remaining process.nextTick callbacks are executed in the order they were scheduled, logging &#39;nextTick 3&#39;, &#39;nextTick 4&#39;, &#39;nextTick 6&#39;, and &#39;nextTick 7&#39; to the console.</li></ol><p>Here is an overview of how the queue will be structured throughout the execution.</p><pre>Process started: [ nT1 ]<br>nT1 executed: [ nT2, nT5 ]<br>nT2 executed: [ nT5, nT3, nT4 ]<br>nT5 executed: [ nT3, nT4, nT6, nT7 ]<br>// ...</pre><p>Meanwhile, it’s worth noting that referring back to the diagrams will greatly assist in understanding the underlying logic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xzaypRNF7I8qMr_wljpNPQ.png" /><figcaption>Diagram: Microtasks</figcaption></figure><h4>Microtasks &amp; Macrotasks in practice</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*68ZEhgvd9htdWCyQ_ZyJSA.png" /><figcaption>Full diagram: Macrotasks and Microtasks</figcaption></figure><p>For the next exercises, you will need to work with the complete diagram to fully grasp the concept.</p><pre>process.nextTick(() =&gt; {<br>  console.log(&#39;nextTick&#39;);<br>});<br><br>Promise.resolve()<br>  .then(() =&gt; {<br>    
console.log(&#39;Promise&#39;);<br>  });<br><br>setTimeout(() =&gt; {<br>  console.log(&#39;setTimeout&#39;);<br>}, 0);<br><br>setImmediate(() =&gt; {<br>  console.log(&#39;setImmediate&#39;);<br>});</pre><p>This should be an easy one!</p><p>Here’s an explanation of how each of these functions behaves:</p><ol><li>process.nextTick: This function schedules a callback to be executed, immediately after the current execution process completes. In the code, the callback logs &#39;nextTick&#39; to the console.</li><li>Promise: The Promise.resolve creates a resolved promise, and the attached .then method schedules a callback to be executed. In the code, the callback within the .then() logs &#39;Promise&#39; to the console.</li><li>setTimeout: This function schedules a callback to be executed as a macrotask after a specified delay. In the code, the callback logs &#39;setTimeout&#39; to the console. Although the delay is set to 0 milliseconds, it still gets queued as a macrotask and will be executed after any pending microtasks (nextTicks, Promises).</li><li>setImmediate: Similar to timeout, this function also schedules a callback which will be executed as a macrotask.</li></ol><p>The execution order will follow this sequence:</p><ol><li>process.nextTick</li><li>Promise</li><li>setTimeout</li><li>setImmediate</li></ol><p>It’s important to note that the Event Loop processes microtasks (such as process.nextTick and Promise) before macrotasks (such as setTimeout and setImmediate), and the order within each category is respected.</p><p>Okay, now let’s dive into something more challenging.</p><pre>const fs  = require(&#39;fs&#39;);<br><br>fs.readFile(__filename, () =&gt; {<br>  process.nextTick(() =&gt; {<br>    console.log(&#39;nextTick in fs&#39;);<br>  });<br><br>  setTimeout(() =&gt; {<br>    console.log(&#39;setTimeout&#39;);<br>    <br>    process.nextTick(() =&gt; {<br>      console.log(&#39;nextTick in setTimeout&#39;);<br>    });<br>  }, 0);<br>  <br>  setImmediate(() 
=&gt; {<br>    console.log(&#39;setImmediate&#39;);<br><br>    process.nextTick(() =&gt; {<br>      console.log(&#39;nextTick in setImmediate&#39;);<br><br>      Promise.resolve()<br>        .then(() =&gt; {<br>          console.log(&#39;Promise in setImmediate&#39;);<br>        });<br>    });<br>  });  <br>});</pre><p>Looks scary, doesn’t it? It would be even worse in an interview…<br>Well, just remember the diagram and everything becomes much easier.</p><p>When V8 executes the code, initially there is only one operation, which is fs.readFile(). While this operation is being processed, the Event Loop starts its work by checking each queue. It continues checking the queues until the counter (I hope you remember it) reaches 0, at which point the Event Loop will exit the process.</p><p>Eventually, the file system read operation will be completed, and the Event Loop will detect it while checking the I/O queue. Inside the callback function there are three new operations: nextTick, setTimeout, and setImmediate.</p><p>Now, think about the priorities.</p><p>After each Macrotask queue, our Microtasks are executed. This means “nextTick in fs” will be logged. And as the Microtask queues are empty, the EventLoop moves forward. The next phase is the immediate queue, so “setImmediate” will be logged. In addition, it also registers an event in the nextTick queue.</p><p>Now, when no immediate events are remaining, JavaScript begins to check the Microtask queues. Consequently, “nextTick in setImmediate” will be logged, and simultaneously, an event will be added to the Promise queue. 
Since the nextTick queue is now empty, JavaScript proceeds to check the Promise queue, where the newly registered event triggers the logging of “Promise in setImmediate”.</p><p>At this stage, all Microtask queues are empty, so the Event Loop proceeds to the next phases, where it finds an event inside the timers queue.<br>Now, at the end, “setTimeout” and “nextTick in setTimeout” will be logged with the same logic as we discussed.</p><p>You can further enhance your understanding of Microtasks and Macrotasks by engaging in similar exercises independently. By doing so, you can gain insights into how these tasks operate and develop the ability to anticipate the sequence of results.</p><p>Just use this diagram and don’t forget about the I/O polling phase (which is a specific case)!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*68ZEhgvd9htdWCyQ_ZyJSA.png" /><figcaption>Full diagram: Macrotasks and Microtasks</figcaption></figure><p>By the way, if you check the Node.js source code on <a href="https://github.com/nodejs/node/tree/main">GitHub</a>, you will notice that Microtasks are at the JavaScript level, and it’s easy to understand and see all those queues and their logic… because it’s JavaScript, not C++.</p><h3>Changes from Node v11</h3><p>When running this example in a web browser …</p><pre>setTimeout(() =&gt; console.log(&#39;Timeout 1&#39;));<br>setTimeout(() =&gt; {<br>    console.log(&#39;Timeout 2&#39;);<br>    Promise.resolve().then(() =&gt; console.log(&#39;promise resolve&#39;));<br>});<br>setTimeout(() =&gt; console.log(&#39;Timeout 3&#39;));<br></pre><p>The result would be as follows.</p><pre>Timeout 1<br>Timeout 2<br>promise resolve<br>Timeout 3</pre><p>However, in Node versions prior to 11.0.0, you’ll receive the following output:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/574/1*frCVD-znxjdwVbARbZdiAw.png" /><figcaption>Example using Node V10</figcaption></figure><p>Within the Node.js implementation, process.nextTick, promise and 
other microtask callbacks are triggered during the transitions between each phase of the EventLoop. Consequently, during the timers phase of the EventLoop, all timer callbacks are handled before the execution of the Promise callback. This order of execution is what produces the output shown above.</p><p>Extensive <a href="https://github.com/nodejs/node/pull/22842">discussions have taken place</a> within the Node.js community regarding the need to address this issue and align the behaviour more closely with web standards. The aim is to bring consistency between Node.js and web environments.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/616/1*vxHTLUF0zNPune1oUb_mnA.png" /><figcaption>Example using Node V20</figcaption></figure><p>In this scenario, instead of using setTimeout, I have incorporated another macrotask (setImmediate) to expand upon the example.</p><p>The release of Node.js version 11 brings noteworthy changes, enabling nextTick callbacks and microtasks to execute between every individual setTimeout, setImmediate and other macrotasks.<br>This update harmonizes the behaviour of Node.js with that of web browsers, enhancing the compatibility and reusability of JavaScript code across both environments.</p><p>Changes introduced by the Node.js team have the potential to impact the compatibility of existing Node.js applications. Therefore, it is crucial to stay informed and remain vigilant about Node.js updates. 
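</p><p>Assuming you are on Node 11 or later, a quick self-check of this interleaving (a sketch, not from the original example) is to collect the execution order into an array: each setImmediate callback should be followed immediately by its own promise callback.</p><pre>const order = [];<br><br>setImmediate(() =&gt; {<br>  order.push(&#39;immediate 1&#39;);<br>  Promise.resolve().then(() =&gt; order.push(&#39;promise 1&#39;));<br>});<br><br>setImmediate(() =&gt; {<br>  order.push(&#39;immediate 2&#39;);<br>  Promise.resolve().then(() =&gt; order.push(&#39;promise 2&#39;));<br>});<br><br>// Node 11+: immediate 1, promise 1, immediate 2, promise 2<br>// Node 10 and earlier: immediate 1, immediate 2, promise 1, promise 2<br>setTimeout(() =&gt; console.log(order.join(&#39;, &#39;)), 10);</pre><p>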
By staying tuned to Node.js updates, you can proactively address any changes that might affect your applications.</p><h3>Unblocking EventLoop using <strong>unref()</strong></h3><p>You won’t be surprised if I say that this code will keep the EventLoop alive and prevent the Node process from exiting.</p><pre>setInterval(() =&gt; console.log(&#39;interval 2s&#39;), 2000);</pre><p>But what if I say that we can have an active timer and also remove that restriction that prevents the EventLoop from stopping?</p><p>So, we can make the interval process unimportant for the EventLoop.</p><pre>const interval = setInterval(() =&gt; console.log(&#39;interval 2s&#39;), 2000);<br><br>setTimeout(() =&gt; interval.unref(), 6000);</pre><p>The interval works fine as long as we don’t call the <strong>unref</strong> function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/580/1*FpXIzxca9K1n-rsb_WZyXg.png" /></figure><p>Unlike the .clearInterval() &amp; .clearTimeout() methods, .unref() doesn’t stop the timer. It acts like a flag for your timers, marking tasks that the EventLoop doesn’t need to wait for. 
They’ll run as usual while the process runs, but if the event queue is empty, they are ignored, and the process exits.</p><p>You can also make the timer important again by using .ref().</p><p>In Node.js 11 and later, you can check if the timer is currently blocking the process from exiting with timer.hasRef().</p><pre>const interval = setInterval(() =&gt; console.log(&#39;interval 2s&#39;), 2000);<br><br>console.log(interval.hasRef()); // true</pre><h3>CommonJS vs ES modules</h3><pre>process.nextTick(()=&gt;{<br>    console.log(&#39;nextTick&#39;);<br>});<br><br>Promise.resolve().then(()=&gt;{<br>    console.log(&#39;promise resolve&#39;);<br>});<br><br>console.log(&#39;console.log&#39;);</pre><p>This is a rather straightforward example that we have previously discussed.</p><pre>console.log<br>nextTick<br>promise resolve</pre><p>However, when you attempt to utilize it with ES modules, you will observe a notable distinction.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/770/1*OHbwViwLvzUnF5GyXcp4Qg.png" /><figcaption>process.nextTick vs Promises (ES modules)</figcaption></figure><p>If you understand how ES modules work and what they provide, then it is highly probable that you will comprehend the reason.</p><p>ES modules operate asynchronously, and when you compare the usage of require with imports, you will observe a significant disparity in their execution order. 
The key point is that ES modules are evaluated asynchronously: the module’s top-level code runs as part of a promise-based operation, rather than starting synchronously the way a conventional CommonJS program does.</p><p>This difference in execution order is the reason behind the observed variations in microtask sequencing.</p><pre>setImmediate(() =&gt; {<br>   process.nextTick(()=&gt;{<br>      console.log(&#39;nextTick&#39;);<br>   });<br>  <br>   Promise.resolve().then(()=&gt;{<br>      console.log(&#39;promise resolve&#39;);<br>   });<br>  <br>   console.log(&#39;console.log&#39;);<br>});</pre><p>If you execute the same functions within a single macrotask, you will observe that the sequence remains as expected.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/676/1*ga_uda01UcJi14IO-jtQJA.png" /><figcaption>process.nextTick vs Promises inside Macrotask (ES modules)</figcaption></figure><p>The program’s execution order is contingent upon the position of the pointer, which queue it resides in, and the task phase it is currently in. Consequently, the program’s execution order can be subject to change, which accounts for the observed differences.</p><p>In CommonJS we can load ES modules with the dynamic import() function. Note that it returns a promise.<br>Now, I assume you understand that when it comes to ES modules, the pointer is positioned at the top of the Promise queue. This is the reason why, during program startup, promise queues are given higher priority than nextTick.</p><h3>libuv</h3><p>At the operating-system level, operations can be blocking or non-blocking. Blocking operations require a separate thread to enable concurrent execution of different operations. However, non-blocking operations allow for simultaneous execution without additional threads.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EyvIqT3519Me_tHVHBXjdA.png" /><figcaption>Visualisation of Blocking and Non-Blocking I/O</figcaption></figure><p>File and DNS operations are blocking, meaning they block the thread until completion. 
On the other hand, network operations are non-blocking, enabling multiple requests to be sent from a single thread at the same time.</p><p>Different operating systems provide notification mechanisms for Non-Blocking I/O. In Linux it’s called <strong>epoll</strong>, in Windows it’s called <strong>IOCP</strong>, and so on. These notification mechanisms allow us to add handlers and wait for operations to complete; they will notify us when a specific operation has finished.</p><p>Libuv uses those mechanisms to allow us to work with Network I/O asynchronously. But with Blocking I/O operations, it’s different.</p><p>Think about it: Node.js operates on a single thread with an EventLoop, which runs in a semi-infinite loop. However, when it comes to Blocking I/O operations, Libuv cannot handle them within the same thread.</p><p>So for that reason, Libuv uses a Thread Pool.</p><p>CPU-intensive tasks and Blocking I/O operations pose challenges because we can’t handle them asynchronously. But fortunately, Libuv has a solution for that: basically, it utilizes threads to tackle such situations effectively.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lmjpTIlRkJdymzEsjENUbw.png" /><figcaption>Libuv: visualisation #1</figcaption></figure><p>With a default thread pool size of 4, Libuv handles file read operations by executing them within one of those threads. Once the operation is completed, the thread is released and Libuv delivers the corresponding response. 
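</p><p>A quick way to see the pool in action (a sketch; crypto.pbkdf2 also runs on the Libuv thread pool, and the iteration count here is arbitrary) is to launch more jobs than there are threads: with the default pool of 4, the completions tend to arrive in waves of four.</p><pre>const crypto = require(&#39;crypto&#39;);<br><br>const start = Date.now();<br><br>// 8 CPU-heavy jobs, but only 4 threads in the default pool<br>for (let i = 1; i &lt;= 8; i++) {<br>  crypto.pbkdf2(&#39;secret&#39;, &#39;salt&#39;, 100000, 64, &#39;sha512&#39;, () =&gt; {<br>    console.log(`job ${i} done after ${Date.now() - start} ms`);<br>  });<br>}</pre><p>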
This enables efficient handling of Blocking I/O operations in an asynchronous manner using the thread pool.</p><p>If you attempt to perform 10 file read operations, only 4 of them will initiate the process while the remaining 6 operations will wait until threads become available for execution.</p><p>If we want to perform numerous Blocking I/O operations and find that the default thread pool size of 4 is insufficient, we can easily increase the thread pool size.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ln4AvON5i7FoiZiVWkL-NA.png" /><figcaption>Libuv: visualisation #2</figcaption></figure><p>For that, we need to use this ENV variable.</p><pre>UV_THREADPOOL_SIZE=64 node script.js</pre><p>So in this case Libuv will create a Thread Pool with 64 threads.</p><p>Please note that having an excessive number of threads in the thread pool can lead to performance issues. This is because maintaining numerous threads requires significant resources. Therefore, it is important to carefully consider the implications before working with this environment variable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*d1HX7slh26CQVy1letj3Fg.png" /><figcaption>UV_THREADPOOL_SIZE — Demo #1</figcaption></figure><p>You may wonder why there are 71 threads instead of 64. The additional threads are utilized by V8 and other components for tasks such as garbage collection and code optimization. These operations require resources, which is why the thread count surpasses the expected 64 threads.</p><p>Note that if you don’t use any Blocking I/O operations, the thread pool will not be initialized. 
You will only observe multiple threads if the pool is initialized, which can be done by executing a single Blocking I/O operation.</p><pre>require(&#39;fs&#39;).readFile(__filename, () =&gt; {}); // Blocking I/O<br><br>setInterval(() =&gt; {}, 3000);</pre><p>In my example, I’ve used this piece of code.<br>Simply remove the first line and observe that the thread count is noticeably reduced.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qplHOmPjimsoKwQ5HouuQw.png" /><figcaption>UV_THREADPOOL_SIZE — Demo #2</figcaption></figure><p>The reason for this is that the interval operation is not a Blocking I/O operation, which means the Thread Pool is not initialized.</p><h3>DNS is problematic in Node.js</h3><p>In Node.js, the dns.lookup function is a Blocking I/O operation when resolving hostnames. If you specify a hostname in your request, the DNS lookup process will introduce a blocking operation, as the underlying implementation may rely on synchronous operations.</p><p>However, if you work with IP addresses or use your own DNS lookup mechanism, you have the opportunity to make the process fully asynchronous. So you can eliminate potential blocking and ensure a fully non-blocking execution flow in your Node.js application.</p><pre>https.get(&quot;https://github.com&quot;, {<br>    lookup: yourCustomLookupFunction<br>});</pre><p>This is a pretty interesting topic, so I would suggest you read <a href="https://httptoolkit.com/blog/configuring-nodejs-dns/">this article</a>.</p><h3>Custom Promises - Bluebird</h3><p><em>Why not natives?</em></p><p>You may have noticed that people often utilize custom-written promises, such as Bluebird.js. However, Bluebird.js offers much more than just a set of useful methods. 
It distinguishes itself by providing advanced features, enhancing promise performance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4O4xYCBZR_tx-vvKrnaTlg.png" /><figcaption>Visualisation of Native Promises</figcaption></figure><p>This is a straightforward visualization of how native Promises work. Essentially, each Microtask has its corresponding callback.</p><p>In Bluebird, you can customize the behaviour of promises, which can result in improved performance depending on various situations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YeeyeugDA-0sh6Qid_gJIQ.png" /><figcaption>Visualisation of Bluebird Promises</figcaption></figure><p>By default, Bluebird combines all resolved promises and executes them within a single task, known as a Macrotask. In my visual example, it occurs within the setImmediate phase, but by default in Bluebird, it’s setTimeout. This approach allows us to prevent thread blocking when dealing with many promise calls.</p><p>Basically, in the queue, we will have one event, and the corresponding callback will include as many callbacks as the number of resolved promises we have.</p><p>Something like this.</p><pre>setImmediate(() =&gt; {<br>  promiseResolve1();<br>  promiseResolve2();<br>  promiseResolve3();<br>  promiseResolve4();<br>});</pre><p>By the way, we can also configure Bluebird to make promises run in a different phase.</p><pre>Promise.setScheduler(function(fn) {<br>    process.nextTick(fn);<br>});</pre><p>In this case, promises will have the highest priority.</p><p>Bluebird uses <strong>setTimeout(fn, 0)</strong> as a default scheduler. This means that the Promises will be run in the timers phase.</p><p>Just try it yourself and you will see how interesting it works.</p><h3>Summary</h3><p>Node.js is constantly evolving, with new updates and features being released regularly. Therefore, it’s important to stay updated by following the Node.js changelog. 
By doing so, you can stay informed about the latest changes and advancements, enabling you to have an up-to-date diagram of the Node.js architecture and its functionalities in your mind.</p><h4>Node.js</h4><p>It’s a JS runtime which allows us to build server-side applications which can work with OS.</p><p>Libuv was initially developed for Node.js. It is a powerful library that serves as the foundation for the Event Loop and offers additional functionalities. It is designed to facilitate Asynchronous I/O operations across different platforms such as Windows, Linux, and others, providing ample opportunities for efficient and Non-Blocking I/O handling.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jvgj5Gm-ZQgS_54OIZtXhA.png" /><figcaption>Libraries used inside Node.js</figcaption></figure><p>Node.js incorporates a wide range of libraries and essential components that are critical for various processes and operations. These components greatly enhance the functionality and capabilities of Node.js.</p><h4>EventLoop (Macrotasks and Microtasks)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1oyrUB5F2zFWk7fgI9jXGQ.png" /><figcaption>Macrotasks</figcaption></figure><p>The Event Loop, the core of Node.js, is implemented in C and C++. It serves as a fundamental mechanism that manages the execution of JavaScript code. It provides multiple queues, known as macrotasks, which correspond to different operations within Node.js. These queues ensure that tasks are executed in the appropriate order and efficiently handle events, I/O operations, and other asynchronous tasks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-oQ9rf0tcXnxK6TaspdqoA.png" /><figcaption>Microtasks</figcaption></figure><p>In addition to the Event Loop, Node.js also introduces the concept of microtasks, which exist at the JavaScript/Node.js level. 
Microtasks encompass promises and nextTicks, and they provide a way to execute callbacks asynchronously and with higher priority. Microtasks are processed within the Event Loop after each Macrotask, allowing for finer-grained control and handling of asynchronous operations in Node.js.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*68ZEhgvd9htdWCyQ_ZyJSA.png" /><figcaption>Full diagram: Macrotasks and Microtasks</figcaption></figure><h3>A challenging interview question</h3><p>In a recent job interview, I was asked a nice question about the Node.js Event Loop. It was interesting and made me think. However, I couldn’t give the best answer during the interview because it wasn’t clear to me at first. If I had more time, I think I could have figured it out better.</p><pre>const http = require(&#39;http&#39;);<br><br>// Create a basic HTTP server<br>const server = http.createServer((req, res) =&gt; {<br>  res.writeHead(200, { &#39;Content-Type&#39;: &#39;text/plain&#39; });<br>  res.end(&#39;Hello, this is your Node.js server!&#39;);<br>});<br><br>server.listen(3000);<br><br>// Blocking operation<br>async function block() {<br>  for (let i = 0; i &lt; 100; i++) {<br>    Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, 100);<br>    console.log(`Waited ${i + 1} times`);<br><br>    // YOU CAN ONLY ADD CODE, YOU SHOULDN&#39;T CHANGE ANYTHING<br>  }<br>}<br><br>block();</pre><p>We have a simple HTTP server. When we start the server, we also execute a blocking operation. You can skip the Atomics.wait function; it&#39;s just a straightforward blocking operation that pauses the JavaScript thread for 100 ms.</p><p>The thing is that when you run this, you will see logs for each iteration of the loop. If you try to access your HTTP server via curl, the request will hang. 
This is because the event loop is busy and can’t handle the HTTP request.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q0yWyxp2-Nulig0S8hNdDA.png" /><figcaption>Terminal view #1 Blocked</figcaption></figure><p>When the for loop is over, the HTTP request will be processed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4BmzDvnsM6q3CpgdPa5xUA.png" /><figcaption>Terminal view #2 Finished</figcaption></figure><p>There is a comment for you in the code.</p><pre>// YOU CAN ONLY ADD CODE, YOU SHOULDN&#39;T CHANGE ANYTHING</pre><p>You need to add a solution that prevents the blocking operation from impacting the event loop, allowing it to handle HTTP requests. Simultaneously, it should function as expected, blocking the thread for 100 iterations, each lasting 100 ms.</p><p>Don’t overthink, and don’t use worker_threads.</p><p><strong>The solution is simple, but the idea for me is brilliant.</strong></p><p>You just need to add some pieces of code. Now think about it.</p><h4>Hint</h4><p>In case you need a hint, notice that the blocking function is asynchronous.</p><h4>Solution</h4><p>Ta da da dam. Here it comes!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/972/1*oUwim0bUas-ACE23VP4oUQ.png" /><figcaption>Terminal view #3 Solution</figcaption></figure><pre>await new Promise(r =&gt; setTimeout(r, 0));</pre><p>Previously, our for loop wasn’t dependent on macrotasks; the Event Loop didn’t have a chance to check the queues.<br>With this approach, we introduce a minimal dependency on macrotasks, specifically on setTimeout. Our asynchronous function awaits a promise that resolves through a timer, which lets the Event Loop complete one full cycle. Within that cycle, it manages to handle the HTTP request.</p><p>I hope this was interesting for you. 
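</p><p>Putting it together, the blocking function with the added line looks like this (only the await line is new; the rest is the original code):</p><pre>// Blocking operation<br>async function block() {<br>  for (let i = 0; i &lt; 100; i++) {<br>    Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, 100);<br>    console.log(`Waited ${i + 1} times`);<br><br>    // Added: yield to the Event Loop so pending I/O (the HTTP request)<br>    // can be handled between iterations<br>    await new Promise(r =&gt; setTimeout(r, 0));<br>  }<br>}</pre><p>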
And thank you to the person who created this interview question.</p><p>I would like to express my sincere gratitude to those enthusiasts who have contributed their time and expertise to translate this article.</p><p>Below, you will find the translated versions of this article, along with the names of the respective translators:</p><ul><li><a href="https://medium.com/@andrea.diblasix/non-conosci-leventloop-di-node-js-2b62539243ec">Non Conosci l’EventLoop di Node.js</a> (Italian) — Andrea Di Blasi</li></ul><p>If you’d like to translate this article into your preferred language, we encourage you to do so; just refer back to the original source. I believe that translations play a crucial role in making knowledge accessible to a global audience.</p><p>Thank you for taking the time to read this comprehensive article. I hope you found it informative and gained valuable insights from it.<br>Feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my <strong>JavaScript</strong> newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li><li><a href="https://nairihar.medium.com/graceful-shutdown-in-nodejs-2f8f59d1c357">Graceful Shutdown in NodeJS</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8ee16831767" width="1" height="1" alt=""><hr><p><a href="https://blog.bitsrc.io/you-dont-know-node-js-eventloop-8ee16831767">You Don’t Know Node.js EventLoop</a> was originally published in <a href="https://blog.bitsrc.io">Bits and Pieces</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Quick Tips for Learning Javascript]]></title>
            <link>https://javascript.plainenglish.io/the-truth-about-javascript-67ad023b8ccd?source=rss-b0ffd91825e5------2</link>
            <guid isPermaLink="false">https://medium.com/p/67ad023b8ccd</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[nodejs]]></category>
            <dc:creator><![CDATA[nairihar]]></dc:creator>
            <pubDate>Thu, 04 May 2023 22:31:40 GMT</pubDate>
            <atom:updated>2023-05-11T00:17:13.140Z</atom:updated>
            <content:encoded><![CDATA[<h4>Success hinges on choosing the right path.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QcUhZP9hmicl2P15Xiu-7w.jpeg" /><figcaption>Generated using Leonardo AI</figcaption></figure><p>Despite being one of the most widely used programming languages on the web, JavaScript has a bit of a bad reputation. But the truth is, JavaScript can be an incredibly powerful tool for creating client-side and server-side applications.</p><p>JavaScript’s flexibility and ease of use have made it a popular programming language, but these benefits can also have drawbacks. Without strict typing and with many processes already abstracted, it can be easy to make errors in JavaScript without realizing it.</p><p>It’s as if you’re walking in a forest where it’s easy to take the wrong path without any warning signs or guidance. On the other hand, in other programming languages, the language itself acts as a guide, leading you to learn algorithms, data structures, and memory management, making it more difficult to lose your way.</p><p>Let’s dive deeper into the details.</p><h4>Dynamic typing</h4><p>JavaScript doesn’t have static types. We can do whatever we want.</p><pre>let value = 2023;<br><br>value = [ 1, 2, 3 ];<br>value.push({ firstname: &#39;John&#39; });<br><br>value[2] = true;<br><br>value[3].firstname = function() {};</pre><p>We can create and manage a variable however we like. JavaScript allows us to do that, but it isn’t something that will benefit us or the engine. This approach is problematic because when we use different types of values for a variable, we may lose track of its intended value over time. Additionally, the engine may not perform efficiently when it constantly has to revise its assumptions about the variable.</p><p>When learning JavaScript, it’s important to focus on avoiding this type of code. 
It’s not worth spending too much time trying to understand expressions like the one below, since you’re unlikely to encounter them in real-world projects. Even if you do work them out, you may later forget why they produce a certain result.</p><pre>{} + 12 + [] + true </pre><p>Later, if you would like to have types as in other programming languages, you can use TypeScript.</p><pre>let age: number = 18;<br><br>let fruits: Array&lt;string&gt; = [ &#39;Apple&#39;, &#39;Orange&#39;, &#39;Banana&#39; ];<br><br>// ...</pre><p>So it’s a myth and a misunderstanding that JavaScript has no types at all.</p><h4>Built-in data structures</h4><p>People who start learning programming via Java, C#, or similar languages tend to know a lot about data structures, algorithms, and strict typing. Java offers built-in data structures like LinkedList, Stack, and Queue, while JavaScript does not.</p><p>It’s rare for instructors to begin teaching JavaScript with data structures and algorithms, as it’s often taught alongside HTML and CSS. As a result, some developers may lack fundamental programming skills when they start their careers, leading to negative perceptions of JavaScript in the industry.</p><blockquote>Success hinges on choosing the right path.</blockquote><p>It would be better to take a quick course on fundamental computer science and learn about basic data structures and algorithms.</p><p>Data structures and algorithms can be learned effectively in JavaScript by implementing them with Arrays and Objects. This approach deepens our understanding of these concepts and allows us to practice them in real-world scenarios using JavaScript.</p><h4>It’s not only for web</h4><p>Although JavaScript was originally developed for use in web browsers, it has evolved over the years and can now be utilized in a variety of contexts. 
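</p><p>To make the earlier point about data structures concrete: even without Java-style built-ins, a Stack takes only a few lines on top of a plain array. The sketch below is illustrative, not part of the original article:</p><pre>// A minimal Stack implemented with a JavaScript array<br>class Stack {<br>  #items = [];<br><br>  push(item) { this.#items.push(item); } // add to the top<br>  pop() { return this.#items.pop(); } // remove and return the top<br>  peek() { return this.#items[this.#items.length - 1]; } // look at the top<br>  get size() { return this.#items.length; }<br>}</pre><p>A Queue works the same way with push and shift, and plain Objects or Maps cover hash-table exercises.</p><p>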
Today, it is widely used not only for client-side web development, but also for server-side programming, IoT, and more.</p><p>This is the key point which makes JavaScript so unique!</p><h4>Multiple ways to achieve the same thing</h4><p>If you are a developer who learned programming using other languages, you may find yourself wondering why there are so many similar tools available when working with JavaScript.</p><pre>function -&gt; function, arrow function, generator function<br>variable -&gt; var, let, const<br>class    -&gt; function, class</pre><p>It’s true that there are many different tools available to achieve the same thing in JavaScript. This can be attributed to the language’s history and standardization process, which allows for continual improvement year after year.</p><p>Initially, JavaScript was a fairly simple language that was primarily used in web development. As it became standardized under the name ECMAScript, the community began making proposals for improvements and new tools each year.</p><p>If you want to gain a comprehensive understanding of JavaScript and become a skilled engineer, it’s important to learn the language’s history and become familiar with its tools in a systematic order.</p><h4>Summary</h4><p>JavaScript is a truly powerful programming language that has been consistently improving every year. Its large and active community is a priceless asset. Although it may seem like some problems in JavaScript have complex solutions, the reality is that there is usually a simple answer that can be found by delving deeper into the language. However, due to its non-traditional nature, learning JavaScript may present some difficulties. 
In fact, at times, understanding the entire concept of the language can be challenging.</p><p>Thank you, feel free to ask any questions or tweet me <a href="https://twitter.com/nairihar">@nairihar</a></p><p>Also follow my “JavaScript Universe” newsletter on Telegram: <a href="https://t.me/javascript">@javascript</a></p><ul><li><a href="https://nairihar.medium.com/ecmascript-2023-es14-what-to-expect-fd3e19421ce8">ECMAScript 2023 (ES14) — what to expect</a></li><li><a href="https://nairihar.medium.com/global-npm-node-js-executables-50a0dab2b8ae">Global NPM/Node.js executables</a></li><li><a href="https://blog.bitsrc.io/how-to-scale-node-js-socket-server-with-nginx-and-redis-b02e23b3423c">How to Scale Node.js Socket Server with Nginx and Redis</a></li></ul><p><em>More content at </em><a href="https://plainenglish.io/"><strong><em>PlainEnglish.io</em></strong></a><em>.</em></p><p><em>Sign up for our </em><a href="http://newsletter.plainenglish.io/"><strong><em>free weekly newsletter</em></strong></a><em>. Follow us on </em><a href="https://twitter.com/inPlainEngHQ"><strong><em>Twitter</em></strong></a>, <a href="https://www.linkedin.com/company/inplainenglish/"><strong><em>LinkedIn</em></strong></a><em>, </em><a href="https://www.youtube.com/channel/UCtipWUghju290NWcn8jhyAw"><strong><em>YouTube</em></strong></a><em>, and </em><a href="https://discord.gg/GtDtUAvyhW"><strong><em>Discord</em></strong></a><strong><em>.</em></strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=67ad023b8ccd" width="1" height="1" alt=""><hr><p><a href="https://javascript.plainenglish.io/the-truth-about-javascript-67ad023b8ccd">Quick Tips for Learning Javascript</a> was originally published in <a href="https://javascript.plainenglish.io">JavaScript in Plain English</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>