<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Akanni Emmanuel on Medium]]></title>
        <description><![CDATA[Stories by Akanni Emmanuel on Medium]]></description>
        <link>https://medium.com/@coderoyalty?source=rss-5514fd26e883------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*4bc67iajOyYXkYCD.jpg</url>
            <title>Stories by Akanni Emmanuel on Medium</title>
            <link>https://medium.com/@coderoyalty?source=rss-5514fd26e883------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 06:06:53 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@coderoyalty/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[You should not overuse environment variables.]]></title>
            <link>https://medium.com/@coderoyalty/you-should-not-overuse-environment-variables-6c6ec70f2e8b?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c6ec70f2e8b</guid>
            <category><![CDATA[backend]]></category>
            <category><![CDATA[best-practices]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Thu, 07 Mar 2024 16:32:21 GMT</pubDate>
            <atom:updated>2024-03-07T16:32:21.754Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*J0pyPaDDcMk0FSLJ" /><figcaption>Photo by <a href="https://unsplash.com/@sigmund?utm_source=medium&amp;utm_medium=referral">Sigmund</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Environment variables serve as a basis for software configuration management, providing a flexible way to manage application settings across different environments. These variables can encapsulate crucial information such as API keys, database credentials and other runtime configurations.</p><p>Developers can decouple sensitive information from their codebase by leveraging environment variables, which facilitates smoother deployment and ensures a higher level of security.</p><p>However, while environment variables offer numerous benefits, they also present challenges when accessed excessively and inconsistently.</p><p>Over-reliance on environment variables (primarily through direct access via process.env for NodeJS-based applications) can cause code coupling. Code coupling is when application logic is tightly bound to specific configuration values, making code maintenance and testing more complex.</p><h4>How are environment variables overused?</h4><p>While environment variables are necessary for managing configurations and crucial information, they can be misused or overused in a few ways:</p><ul><li><strong>Dependency on external configurations: </strong>Relying heavily on environment variables can make your application less portable and more difficult to configure. 
For instance, if your application requires many environment variables to run correctly, these configurations will be difficult to maintain across different environments.</li><li><strong>Elusive application logic: </strong>When too much application logic is built within environment variables, code can become challenging to understand and maintain. Excessive use of environment variables can lead to more opaque code that is often difficult to debug and modify.</li><li><strong>Reduced readability</strong>: Overuse of process.env can make the code harder to read, especially if there are numerous references to environment variables scattered throughout the codebase. It&#39;s essential to balance using environment variables for configuration with keeping the codebase clean and understandable.</li></ul><p>Another form of misuse is leaning on the convenience of calling ‘process.env’ directly throughout a NodeJS codebase.</p><blockquote>If you need to write environment-specific code, you can check the value of NODE_ENV with process.env.NODE_ENV. Be aware that checking the value of any environment variable incurs a performance penalty, and so should be done sparingly. — <a href="https://expressjs.com/en/advanced/best-practice-performance.html#:~:text=If%20you%20need%20to%20write%20environment%2Dspecific%20code%2C%20you%20can%20check%20the%20value%20of%20NODE_ENV%20with%20process.env.NODE_ENV.%20Be%20aware%20that%20checking%20the%20value%20of%20any%20environment%20variable%20incurs%20a%20performance%20penalty%2C%20and%20so%20should%20be%20done%20sparingly.">expressjs.com</a></blockquote><p>We can mitigate this misuse by creating a centralized configuration module/class that readily includes the needed environment variables. 
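A complementary safeguard (my sketch, not from the original article; the variable name is illustrative) is to fail fast at startup when a required variable is missing, rather than discovering the gap deep inside application logic:

```javascript
// Sketch: fail fast when a required environment variable is absent.
// The variable name MONGO_DB_URI is only an example.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Simulate a configured environment for demonstration purposes.
process.env.MONGO_DB_URI = "mongodb://localhost:27017";
const mongoUri = requireEnv("MONGO_DB_URI");
console.log(mongoUri);
```

Calling requireEnv for every required variable inside the centralized module surfaces configuration mistakes at boot time instead of at first use.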
This is better than calling ‘process.env’ repeatedly throughout the codebase.</p><pre>MONGO_DB_URI=...<br>MONGO_DB_APPNAME=...<br>REDIS_URI=...<br>PSQL_DB_URI=...<br>MAIL_SERVICE_KEY=...</pre><p>The centralized configuration module:</p><pre>//config.ts<br>import dotenv from &quot;dotenv&quot;;<br>dotenv.config();<br>const config = {<br> mongodb: {<br>  URI: process.env.MONGO_DB_URI || &quot;mongodb://localhost:27017&quot;,<br>  APPNAME: process.env.MONGO_DB_APPNAME || &quot;...&quot;,<br> },<br> redis: {<br>   URI: process.env.REDIS_URI,<br> },<br> smtp: {<br>   ...<br> },<br>};<br><br>export default config;</pre><p>This can be imported and accessed by other files within the project instead of having ‘process.env’ everywhere. It also prevents misspelled variable names.</p><h4>Conclusion</h4><p>Environment variables are crucial for configuring and customizing applications across different environments. In Node.js, they are accessed conveniently through the process.env object.</p><p>It is crucial to handle environment variables securely, avoid exposing sensitive information, and maintain code readability and maintainability. By leveraging environment variables effectively, developers can build flexible and scalable applications.</p><p>Happy Coding 👩‍💻</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6c6ec70f2e8b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[API Pagination]]></title>
            <link>https://medium.com/@coderoyalty/api-pagination-b2e5fda72254?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2e5fda72254</guid>
            <category><![CDATA[api]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[backend]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Thu, 29 Feb 2024 09:02:08 GMT</pubDate>
            <atom:updated>2024-02-29T09:02:08.816Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Iut7z3-iut8CtAiD" /><figcaption>Photo by <a href="https://unsplash.com/@nananadolgo?utm_source=medium&amp;utm_medium=referral">Nana Smirnova</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Imagine you are browsing through an online store, scrolling through pages of products, only to realize there are hundreds if not thousands more items to explore. This can overwhelm the user and, from a developer&#39;s standpoint, overload or even crash the server.</p><p>Presumably, the server is returning every item in the database. If there are 10,000 items, serving 100 users means transferring a million items.</p><h3>Pagination</h3><p>The ideal solution would be to provide a limited number of items and allow users to explore the dataset using a few metadata fields, offering a structured approach to data retrieval. Pagination involves breaking large datasets into manageable chunks or <strong>pages</strong>. This method enables a seamless navigation experience while optimizing resources and performance for the client and server.</p><blockquote><strong>Pagination</strong>, also known as <strong>paging</strong>, is the process of dividing a document into discrete <a href="https://en.wikipedia.org/wiki/Page_(paper)">pages</a>, either electronic pages or printed pages. — Wikipedia</blockquote><p>That is not everything about pagination. The description above effectively defines page-based pagination, one method of paging datasets. As we assumed earlier, the number of items we have is 10,000. We could structure our items into pages. If we supply 25 items per page, then the number of pages available is 400. For applications that add new items infrequently, this is reliable. 
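The page arithmetic above can be sketched directly (a trivial illustration, not from the original article):

```javascript
// Page-count arithmetic from the example: 10,000 items at 25 items
// per page yields 400 pages. Math.ceil accounts for a partial final page.
const totalItems = 10000;
const pageSize = 25;
const totalPages = Math.ceil(totalItems / pageSize);
console.log(totalPages); // 400
```
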
But in real-time systems, where page boundaries may shift as data changes, we will have to consider other methods for pagination.</p><h4>Pagination Metadata</h4><p>Pagination metadata usually travels as query parameters attached to a request.</p><pre><a href="https://api.mymail.com/v0/users/:id/mails?page=20&amp;size=50&amp;category=important">https://api.mymail.com/v0/users/:id/mails?page=20&amp;size=50&amp;category=important</a><br><br>Metadata:<br>page=20, size=50, category=important</pre><p>These queries are used to explore the API; for instance, incrementing or decrementing the page value navigates from page to page.</p><h4>Types of Pagination Methods</h4><p>In API pagination, there are several common pagination methods, each with its own use cases. Let us explore some of them:</p><p>1. <strong>Offset-Based Pagination: </strong>This involves specifying the number of items to skip before retrieving results. This method is straightforward but has its trade-offs. For example, if a user is at page 3 with ten items per page, the API will skip the first 20 items (2 pages * 10 items) and return the following 10 items. For large datasets, the server must process and skip a potentially large number of records, making this type of pagination inefficient at deep offsets.</p><p>2. <strong>Page-Based Pagination</strong></p><p>Page-based pagination is identical to offset-based pagination, except it uses pages instead of offsets. Users can specify the page they want to view, and the API will return a fixed number of items. 
The server can allow a set of sizes, for example, 10 for mobile devices and 20 for desktops.</p><pre>// Model: a mongoose model<br>const size = 20; // items per page<br>const skip = (page - 1) * size;<br>const [totalCount, docs] = await Promise.all([<br>  Model.countDocuments({isDeleted: false}),<br>  Model.find({isDeleted: false})<br>   .sort({_id: &quot;desc&quot;}) // MongoDB ObjectIds are sortable<br>   .skip(skip)<br>   .limit(size)<br>]);</pre><p>When items are added or removed between requests, this method can cause inconsistencies.</p><p>3. <strong>Key/Cursor-Based Pagination</strong></p><p>This type of pagination method is commonly used for real-time or dynamic data. It relies on unique identifiers (keys/cursors) to paginate through results. Instead of using offsets or page numbers, the API returns results based on a specific key (e.g., ID, timestamp) provided by the user.</p><pre>const cursor = ... // timestamp or ID<br>const size = 20;<br><br>let filter;<br><br>if (!cursor || !validateCursor(cursor)) {<br>  filter = {};<br>}<br>else {<br>  // fetch items older than the cursor, since we sort descending<br>  filter = {_id : {&quot;$lt&quot;: cursor}};<br>}<br><br>const result = await Model.find(filter)<br>  .sort({_id: &quot;desc&quot;})<br>  .limit(size);<br><br>const metadata = {<br> prevCursor: cursor,<br> nextCursor: null,<br> //...<br>};<br><br>if (result.length &gt; 0) {<br> metadata.nextCursor = result.at(-1)._id;<br>}</pre><p>4. <strong>Combination Pagination</strong></p><p>Combination pagination combines multiple pagination strategies to provide an efficient solution. For instance, combining offset-based pagination with key-based pagination lets clients use whichever navigation style suits them.</p><p>5. <strong>Range-Based Pagination</strong></p><p>Range-based pagination involves defining a range of items to retrieve. 
The range could be a specific date range or numerical range.</p><p>Range-based pagination is often combined with other paging methods to form a combination pagination.</p><pre>const from = new Date(&quot;2020-02-29&quot;);<br>const to = new Date();<br><br>const result = await Model.find({<br>  &quot;createdAt&quot;: {<br>    &quot;$gt&quot;: from,<br>    &quot;$lt&quot;: to,<br>  },<br>  // ...<br>})<br>// we can also support page-size pagination here<br>// .skip(skip).limit(size)</pre><h4>Conclusion</h4><p>API pagination is essential for effectively managing and presenting large datasets. By breaking data down into manageable pages, pagination improves the navigation experience, reduces server load, and enhances the usability and efficiency of large datasets.</p><p>By understanding the different pagination techniques and their use cases, developers can implement the most suitable method for their applications, ultimately delivering a seamless and efficient user experience.</p><p>Happy Coding 👩‍💻</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2e5fda72254" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Should you store sensitive tokens in localStorage?]]></title>
            <link>https://medium.com/@coderoyalty/should-you-store-sensitive-tokens-in-localstorage-ce13698676f3?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/ce13698676f3</guid>
            <category><![CDATA[best-practices]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[backend]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Wed, 21 Feb 2024 09:35:17 GMT</pubDate>
            <atom:updated>2024-02-21T09:35:17.745Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*qXXfVcqqE0jrxrhr" /><figcaption>Photo by <a href="https://unsplash.com/@franckinjapan?utm_source=medium&amp;utm_medium=referral">Franck</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p><strong>TL;DR: no</strong>, you shouldn’t. But why?<br>I recently (at the time of this writing) saw a tweet from a respected developer, <a href="https://twitter.com/EOEboh">Captain-EO</a>. He had an excellent opinion on how many tutorials will teach the “how to” and not whether you “should” or “shouldn’t” do something. However, a few developers disagreed and have no issue with storing sensitive tokens in local storage.</p><p>In this article, I’ll demonstrate the risk of storing session tokens in localStorage. First, let’s create a simple authentication system.</p><h4>The sandbox</h4><p>The packages required:</p><pre>npm init -y<br>npm install jsonwebtoken express cookie-parser uuid</pre><pre>const express = require(&quot;express&quot;);<br>const cookieParser = require(&quot;cookie-parser&quot;);<br>const uuid4 = require(&quot;uuid&quot;).v4;<br><br>const app = express();<br><br>const users = [];<br><br>// configurations<br><br>app.use(express.urlencoded({ extended: false }));<br>app.use(express.json());<br>app.use(cookieParser(&quot;cookie_parser_secret&quot;));<br><br>// registration endpoint<br><br>app.post(&quot;/api/auth/register&quot;, (req, res) =&gt; {<br>  const { username, password } = req.body;<br>  const id = uuid4();<br><br>  const existingUser = users.find(<br>    (value) =&gt; value.username === username<br>  );<br><br>  if (existingUser) {<br>    return res.sendStatus(409);<br>  }<br><br>  users.push({ id, username, password });<br><br>  res.sendStatus(201);<br>});<br><br>// login endpoint<br><br>app.post(&quot;/api/auth/login&quot;, (req, res) =&gt; {<br>  const { username, password } = 
req.body;<br><br>  const valid = users.find(<br>    (value) =&gt; value.username === username &amp;&amp; value.password === password<br>  );<br><br>  if (!valid) {<br>    return res.sendStatus(401);<br>  }<br><br>  res.sendStatus(200);<br>});<br><br>app.listen(3000);</pre><p>The code above implements a simple registration and login route. When a user provides the correct credentials, this user is now privileged to access their content on that platform.</p><p>HTTP is stateless, meaning there is no way for the server to know whether the user has that privilege. To resolve this, a token is generated and given to the client. The client must provide this token with every request to the other endpoints. On receiving this token, the server verifies it, rejecting the request if the token is invalid. A valid token is then decoded; the decoded payload often contains data such as the username, email, role, and database identifier. Let us do that.</p><pre>// update login route handler<br>const jwt = require(&quot;jsonwebtoken&quot;);<br><br>app.post(&quot;/api/auth/login&quot;, (req, res) =&gt; {<br> const { username, password } = req.body;<br><br> const user = users.find(<br>  value =&gt; value.username === username &amp;&amp; value.password === password,<br> );<br><br> if (!user) {<br>  return res.sendStatus(401);<br> }<br><br> const token = jwt.sign(<br>  {<br>   username: user.username,<br>   id: user.id,<br>  },<br>  &quot;jwtprivatekey&quot;,<br>  {expiresIn: &quot;1h&quot;}<br> );<br><br> return res.json({<br>  message: &quot;Logged in successfully&quot;,<br>  token,<br> });<br>});</pre><p>The server creates the token and sends it to the client alongside a success message.</p><p>For the client, on receiving the token, there’s a need to store the token in a safe place.</p><h4>?</h4><p>The obvious way to retain the token would be to store it in Local storage. 
Local storage is a web storage mechanism that browsers provide for persisting data on the user’s device.</p><p>Developers often misuse this feature. Local storage should be used to store configuration data and user preferences. For example, if a user prefers dark mode over light mode, I can save this preference in Local storage, so the next time the user visits my site, they can see the site in dark mode without toggling a button.</p><p>When it comes to sensitive data or privileged tokens, saving them in Local storage leaves them exposed and vulnerable.<br> Here are some risks associated with storing sensitive information in Local Storage:</p><ol><li>No Encryption: Local Storage does not provide built-in encryption for stored data. If sensitive information is stored in plain text, it could be easily read if an attacker gains access to a user&#39;s device or if there is a security vulnerability.</li><li>Cross-Site Scripting: If your web application is vulnerable to XSS attacks, an attacker could inject malicious scripts into your pages, giving them access to Local Storage data. This could lead to the theft of sensitive information stored in Local Storage.</li></ol><p>Why should I use cookies instead?</p><p>Although cookies have caveats of their own, they are safer than local storage.</p><p>Unlike Local Storage, cookies are automatically sent along with a request by the browser. When we save a session token in local storage, we must fetch this token and attach it to each request ourselves. In the case of a cookie, the backend is responsible for setting the cookie, and the browser attaches it automatically.</p><p>Client-side JavaScript can access the data stored in local storage with just a single line of code:</p><pre>Object.entries(localStorage);<br></pre><p>This means that when your site is vulnerable to an XSS (Cross-Site Scripting) attack, your users&#39; tokens can be stolen. 
For cookies, setting the httpOnly flag to true prevents any client-side code from accessing the cookie.</p><pre>//...<br>const token = jwt.sign(<br>  {<br>   username: user.username,<br>   id: user.id,<br>  },<br>  &quot;jwtprivatekey&quot;,<br>  {expiresIn: &quot;1h&quot;}<br> );<br>const cookieOption = { httpOnly: true };<br>res.cookie(&quot;auth_session&quot;, token, cookieOption);<br>return res.status(200).json({...});</pre><p>When a request is sent from a user’s browser, we’ll check the cookie for our session token. We can handle this by creating a middleware that restricts unauthorized access.</p><pre>const isLoggedInMiddleware = (req, res, next) =&gt; {<br>  try {<br>    const token = req.cookies.auth_session; // must match the name used when we set the token<br>    const decoded = verifyAndDecodeToken(token);<br>    req.user = {...decoded};<br>  } catch (err) {<br>    return res.status(401).json({message: &quot;You&#39;re unauthorized, please login&quot;});<br>  }<br>  next();<br>}</pre><p>Regarding cookies, setting them is not enough; there are a few attributes that we can configure to enhance security. These options include:</p><ul><li>HttpOnly: This option restricts the cookie from being accessed through client-side scripts such as JavaScript. It helps mitigate the risk of XSS attacks, where an attacker injects malicious scripts into a web application to steal cookies or perform actions on behalf of the user.</li><li>SameSite: This attribute controls how cookies are sent with cross-origin requests. It helps mitigate the risk of CSRF attacks, where an attacker tricks the user’s browser into making unintended requests to a target website. When set to Strict, the cookie is only sent with requests originating from the same site. 
In Lax mode, cookies are sent with top-level navigation GET requests, but not with cross-site POST requests or subresource loads.</li><li>Secure: When the Secure attribute is set, the browser only sends the cookie over HTTPS connections, ensuring it is encrypted during transmission. This helps protect sensitive information from being intercepted by attackers sniffing network traffic.</li><li>MaxAge: This attribute limits the lifespan of cookies by setting an expiration time.</li></ul><p>While these attributes do not eliminate these vulnerabilities, they help mitigate them.</p><h3>Disadvantages of Cookies</h3><h4>CSRF Attacks</h4><p>One of the vulnerabilities regarding cookies is the Cross-Site Request Forgery (CSRF) attack. It typically involves embedding malicious code or links in a trusted website that the user visits. When the user visits the website, their browser automatically sends requests to the vulnerable application, leading to unintended actions being performed on behalf of the user.</p><p>This vulnerability boils down to two facts:</p><ol><li>Cookies are automatically attached to a request regardless of the origin</li><li>You can send forms across domains/origins</li></ol><pre>&lt;form id=&quot;my-trans&quot; action=&quot;https://example.com/send-money&quot; method=&quot;POST&quot;&gt;<br>&lt;input name=&quot;amount&quot; value=&quot;10000000&quot; /&gt;<br>&lt;/form&gt;<br><br>&lt;script&gt;<br>  const form = document.getElementById(&quot;my-trans&quot;);<br>  form.submit();<br>&lt;/script&gt;</pre><p>We can prevent it by implementing measures such as:</p><ol><li>CSRF Token: Include unique tokens with each request, typically by adding them in forms or headers, that only the server knows how to validate.</li><li>SameSite Cookies: Set the SameSite attribute on cookies to prevent them from being sent in cross-origin requests, thereby reducing the risk of CSRF attacks.</li><li>Origin Header: Validate the Origin or Referer header of incoming 
requests to ensure that they originate from trusted sources.</li></ol><h4>XSS Attack</h4><p>XSS (Cross-Site Scripting) is a type of security vulnerability commonly found in web applications. It occurs when an attacker injects malicious scripts into web pages viewed by other users. These scripts can then execute within the context of the user’s browser, leading to various malicious activities such as session hijacking, defacement of websites, theft of sensitive information, and more.</p><p>While HttpOnly cookies can’t be read by scripts, XSS can still trigger requests that use the cookie on behalf of the user.</p><pre>fetch(&quot;https://example.com/send-money&quot;,{<br>  method: &quot;POST&quot;,<br>  body: JSON.stringify({amount: 300_000_000})<br>});</pre><p>We can prevent it by implementing measures such as:</p><ol><li>HttpOnly Cookies: Set the HttpOnly flag on cookies to prevent them from being accessed by client-side scripts.</li><li>Use Frameworks and Libraries: utilize secure web development frameworks and libraries that offer built-in protections against XSS attacks, such as AngularJS, React, and Vue.js.</li><li>Content Security Policy (CSP): Implement CSP headers to specify which external resources the browser should execute or load. This measure works by limiting the sources from which scripts can be loaded.</li></ol><p>Cookies might have their issues, but they’re preferable to local storage for session tokens. The use of local storage shouldn’t be discouraged; it should be used to store non-sensitive data. It is best to store sensitive data as cookies. The front end shouldn’t have to manage how sensitive data is kept.</p><p>In conclusion, while local storage may seem suitable for storing session tokens, its intrinsic security risks, such as lack of encryption and vulnerability to XSS attacks, make it unsuitable for handling sensitive information. On the other hand, cookies offer a more secure alternative, especially when configured with attributes like HttpOnly, SameSite, Secure, and MaxAge. 
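To make those attributes concrete, here is a sketch (mine, not part of the original code) of how such options translate into a Set-Cookie header; Express’s res.cookie performs this serialization for you, so the hand-rolled version below is only for illustration:

```javascript
// Sketch: how cookie attributes combine into a Set-Cookie header value.
// maxAge is taken in milliseconds (Express's convention) and emitted
// as Max-Age in seconds.
function serializeCookie(name, value, opts = {}) {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${Math.floor(opts.maxAge / 1000)}`);
  if (opts.httpOnly) parts.push("HttpOnly");
  if (opts.secure) parts.push("Secure");
  if (opts.sameSite) parts.push(`SameSite=${opts.sameSite}`);
  return parts.join("; ");
}

const header = serializeCookie("auth_session", "someToken", {
  httpOnly: true,
  secure: true,
  sameSite: "Strict",
  maxAge: 60 * 60 * 1000, // one hour
});
console.log(header);
// auth_session=someToken; Max-Age=3600; HttpOnly; Secure; SameSite=Strict
```
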
By following established security best practices and leveraging the appropriate storage mechanism, developers can better defend user data and mitigate the risk of unauthorized access or exploitation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ce13698676f3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Exploring Multi-Identifier Authentication]]></title>
            <link>https://medium.com/@coderoyalty/exploring-multi-identifier-authentication-976949e85fbb?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/976949e85fbb</guid>
            <category><![CDATA[authentication]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[backend]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Fri, 01 Dec 2023 04:01:51 GMT</pubDate>
            <atom:updated>2023-12-01T04:01:51.489Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g6MRKUtL7SBYtAe4" /><figcaption>Photo by <a href="https://unsplash.com/@joannakosinska?utm_source=medium&amp;utm_medium=referral">Joanna Kosinska</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Relying solely on email for authentication can be problematic from a user experience perspective. Users are denied the liberty to choose what suits them best; a forced identifier removes personalized choice from the authentication process. In this article, I’ll cover how to handle multi-identifier authentication.</p><h4>Prerequisites</h4><ul><li>The implementations are written in TypeScript, but a basic understanding of programming is enough to follow this article.</li></ul><h4>What’s multi-identifier authentication?</h4><p>Have you ever tried signing into a platform like X (formerly called Twitter) or Spotify? You would’ve noticed the placeholder text reads “phone, email, or username”. This means you can provide any of these identifiers when signing in. That’s what I mean by <strong>“multi-identifier”</strong> authentication.</p><p>Multi-identifier authentication refers to letting a user authenticate with any supported identifier (email, phone, or username) alongside their password, rather than restricting them to email alone. 
This pattern is common, offering flexibility for users with hard-to-remember usernames or multiple emails.</p><p>Although we’re attempting to enhance the authentication process from the backend of an application, it’s crucial to understand that the server side defines the variety of user identifiers a user can choose from.</p><p>Imagine a user schema like this:</p><pre>const UserSchema = {<br> username: String,<br> email: String,<br> phone: String,<br> password: String,<br> display_name: String,<br>}</pre><p>We can’t use the display_name field as an identifier because it’s not unique. Users can’t expect to identify their account using their display name. The supported identifiers are username, email and phone, and that’s because they’re unique.</p><h4>Importance</h4><p>Multi-identifier authentication is valuable because users may own, for example, multiple email addresses. Trying each email is an option, but remembering a username is often easier. With multi-identifier support, users can log in using a remembered username, helping enhance user experience.</p><h4>Implementation</h4><p>I’ll be implementing how the server side can handle multi-identifier authentication in Node.js. 
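As a small aside (my sketch, not part of the article’s implementation), a server can first classify which kind of identifier the user supplied; the regular expressions below are deliberately simplistic assumptions, not production-grade validators:

```javascript
// Rough sketch: classify a login identifier as email, phone, or username.
// The patterns are illustrative assumptions only.
function classifyIdentifier(identifier) {
  if (/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(identifier)) return "email";
  if (/^\+?\d{7,15}$/.test(identifier)) return "phone";
  return "username";
}

console.log(classifyIdentifier("jane@example.com")); // email
console.log(classifyIdentifier("+2348012345678"));   // phone
console.log(classifyIdentifier("coderoyalty"));      // username
```

Classification like this can narrow the lookup to a single field, or drive per-type normalization (e.g., lowercasing emails but leaving phone numbers untouched); querying all candidate fields at once, as shown next, works without it.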
I’ll use MongoDB and its popular ODM (Object-Document Mapper), Mongoose.</p><pre>npm install express mongoose @types/express</pre><p>The user model:</p><pre>// user.model.ts<br>import mongoose from &quot;mongoose&quot;;<br><br>const UserSchema = new mongoose.Schema({<br> username: { type: String, unique: true },<br> email: { type: String, unique: true },<br> phone: { type: String, unique: true },<br> password: { type: String, required: true },<br> display_name: { type: String },<br>}, { timestamps: true })<br><br>export default mongoose.model(&quot;user&quot;, UserSchema);</pre><pre>import express, { Request, Response } from &quot;express&quot;;<br>import UserModel from &quot;./user.model&quot;;<br><br>const app = express();<br><br>app.use(express.json());<br>app.use(express.urlencoded({ extended: false }));<br><br>app.post(&#39;/api/auth&#39;, authHandler);<br><br>async function authHandler(req: Request, res: Response) {<br> const {userIdentifier, password} = req.body;<br> try {<br>   const user = await UserModel.findOne({<br>    $or: [<br>     { username: userIdentifier.toLowerCase() },<br>     { email: userIdentifier.toLowerCase() },<br>     { phone: userIdentifier },<br>   ]});<br>   if (!user) {<br>     return res.sendStatus(400);<br>   }<br>   //... perform your password comparison &amp; verification here!<br>   //...<br> } catch (err) {<br>  console.error(&quot;Authentication error: &quot;, err);<br>  return res.sendStatus(500);<br> }<br>}<br><br>app.listen(3000);</pre><p>In the context of a database query, the $or operator specifies a logical OR condition. 
It’s a way to find a document or record where at least one of the conditions is true.</p><p>In this example:</p><pre>{<br> $or: [<br>  { username: userIdentifier.toLowerCase() },<br>  { email: userIdentifier.toLowerCase() },<br>  { phone: userIdentifier },<br> ]<br>}</pre><p>This is specifying an OR condition for a query.</p><p>It’s saying to find a user record where:</p><ul><li>The username field is equal to userIdentifier.toLowerCase().</li><li>OR the email field is equal to userIdentifier.toLowerCase().</li><li>OR the phone field is equal to userIdentifier.</li></ul><p>The SQL alternative:</p><pre>SELECT * FROM User<br>WHERE<br>  username = LOWER(:userIdentifier) OR<br>  email = LOWER(:userIdentifier) OR<br>  phone = :userIdentifier;</pre><h4>Conclusion</h4><p>It’s necessary to deliver flexibility to users when possible. Allowing users to log into an account shouldn’t be restricted to emails alone. While some users might remember their email address, others might remember their username or phone number. Giving every user an option to pick from is another way to improve their experience with your application.</p><p>This article explains what I meant by “Multi-identifier” authentication, its importance and its implementation. An example was showcased using Node.js and MongoDB, including its SQL alternative.</p><p>Thanks for reading, and happy coding 🍻.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=976949e85fbb" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why you need a Job Queue]]></title>
            <link>https://medium.com/@coderoyalty/why-you-need-a-job-queue-d1f50df5bbe1?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/d1f50df5bbe1</guid>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[efficiency]]></category>
            <category><![CDATA[backend]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Mon, 13 Nov 2023 09:10:25 GMT</pubDate>
            <atom:updated>2023-11-13T09:24:41.372Z</atom:updated>
            <content:encoded><![CDATA[<h3>Why you need a Job Queue.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cq8dk9BDwpkNdby7" /><figcaption>Photo by <a href="https://unsplash.com/@mparzuchowski?utm_source=medium&amp;utm_medium=referral">Michał Parzuchowski</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In the traditional request-response cycle, a server handles each task immediately upon receiving a request, leading to real-time, synchronous communication. Ideally, requests are expected to be fully processed within a short timeframe, with a worst case of perhaps a minute for tasks that take longer. When the response time becomes considerably lengthy, the client side of an application tends to time out.</p><h4><strong>The problem</strong></h4><p>Certain requests, particularly those involving time-consuming tasks, pose challenges. Consider processing a large ZIP file uploaded to your application. If the request handler processes the file immediately upon receiving it, and that processing takes around 10 minutes, the response will be delayed for approximately 10 minutes. This becomes a critical issue when multiple users upload large ZIP files simultaneously. Such simultaneous requests can overload the system, leading to crashes.</p><p>Consider the deletion of records in a database.
Although it might seem simple, scheduling it via a job queue is advantageous.</p><pre>// a mongoose model<br>import Post from &quot;../models/post.model&quot;;<br><br>async function deletePost(filter) {<br>  await Post.deleteOne(filter);<br>}</pre><p>This allows you to control and prioritize tasks, ensuring that the operations are managed without affecting the immediate responsiveness of the system.</p><h4><strong>Job Queue</strong></h4><p>A job queue is a system used in software development to manage and organize tasks or jobs for processing. It significantly enhances efficiency and reliability by introducing a controlled, structured approach to job execution.</p><p>In the context of processing data, a job queue can be used to smooth out the immediate processing of tasks across an application. When performing heavy tasks like file handling or data analysis on a server, a job queue can efficiently reduce the workload: the server schedules a processing task and returns an acknowledgement to the client right away.</p><p>After scheduling the task for processing, a designated application or server, known as a worker, takes responsibility for processing the tasks. Once a task is processed, the worker updates and dequeues it from the job queue. This distributed approach enables the load to be spread across various workers, preventing bottlenecks.</p><p>In practice, tools like <a href="https://docs.bullmq.io/readme-1"><strong>BullMQ</strong></a> (for Node.js), <a href="https://docs.celeryq.dev/en/stable/"><strong>Celery</strong></a> (for Python) and other queuing systems provide an effective way to manage job queues, allowing you to schedule and execute tasks efficiently.</p><p>Here’s a simple example of using BullMQ to schedule tasks. BullMQ uses Redis as its storage and messaging backend.
So, you’ll need Redis running to follow this example.</p><pre>npm install bullmq</pre><p>Create a BullMQ queue.</p><pre>import express from &quot;express&quot;;<br>import { Queue } from &quot;bullmq&quot;;<br>import User from &quot;./models/user.model&quot;;<br>import mongoose from &quot;mongoose&quot;;<br><br>// 10 minutes in milliseconds<br>const USER_DEL_DURATION = 10 * 60 * 1000;<br>// create an express app<br>const app = express();<br><br>// The queue<br>const UserQueue = new Queue(&quot;userDeletionQueue&quot;);</pre><p>Implement the DELETE /api/users/:id route handler.</p><pre>//... continuation<br>app.delete(&quot;/api/users/:id&quot;,<br>async (req: express.Request, res: express.Response) =&gt; {<br> const { id } = req.params;<br> try {<br>  if (!mongoose.isValidObjectId(id)) return res.sendStatus(400);<br><br>  const user = await User.findById(id);<br>  if (!user || user.deleted) return res.sendStatus(404);<br><br>  user.deleted = true;<br>  // persist the soft-delete flag<br>  await user.save();<br><br>  // add a task to the queue<br>  const jobData = {<br>    message: `User &lt;${id}&gt; permanently deleted from DB`,<br>    userId: id,<br>  };<br>  await UserQueue.add(<br>    &quot;deleteUser&quot;, jobData,<br>    { delay: USER_DEL_DURATION }<br>  );<br><br>  return res.sendStatus(204);<br> } catch (err) {<br>    return res.sendStatus(500);<br> }<br>});</pre><p>A worker is a crucial component responsible for processing and executing tasks enqueued in the queue. This worker operates independently, continuously monitoring the designated job queue for pending or expired tasks.</p><p>We need a worker to process the task when it expires.
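</p><p>Before looking at the BullMQ worker, the core mechanics (enqueue a job with a delay, then let a worker handler run it once the delay elapses) can be sketched with a toy in-memory queue. This is an illustration only; unlike BullMQ, nothing here is persisted or shared across processes:</p>

```javascript
// A toy in-memory job queue: jobs wait in `jobs` until their
// delay elapses, then the registered handler processes them.
// Unlike BullMQ, nothing survives a process restart.
class InMemoryQueue {
  constructor() {
    this.jobs = [];
    this.handler = null;
  }
  add(name, data, { delay = 0 } = {}) {
    const job = { name, data, runAt: Date.now() + delay };
    this.jobs.push(job);
    setTimeout(() => {
      // dequeue, then hand the job to the worker handler
      this.jobs = this.jobs.filter((j) => j !== job);
      if (this.handler) this.handler(job);
    }, delay);
    return job;
  }
  worker(handler) {
    this.handler = handler;
  }
}

const queue = new InMemoryQueue();
queue.worker((job) => console.log(`processed: ${job.name}`));
queue.add("deleteUser", { userId: "abc123" }, { delay: 50 });
console.log(queue.jobs.length); // 1 (still pending)
```

<p>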
The worker should run in a separate Node.js process.</p><pre>import { Worker } from &#39;bullmq&#39;;<br>import User from &quot;./models/user.model&quot;;<br><br>const worker = new Worker(&#39;userDeletionQueue&#39;, async job =&gt; {<br>  const { userId, message } = job.data;<br>  await User.deleteOne({ _id: userId });<br>  console.log(message);<br>});</pre><p>After a task has expired (the delay duration is up), it needs to be processed. That’s where a worker comes in. It processes the task: in this case, deleting a user from the database.</p><p>This way, the tasks have been offloaded from the server to another application (the worker process), which processes them in order.</p><h4><strong>Conclusion</strong></h4><p>In summary, the implementation of BullMQ and its associated workers demonstrates how job queues manage and optimize time-consuming tasks within a software system. By showcasing the creation of a job queue, the handling of user-deletion tasks with delayed processing, and the integral role of the worker process in offloading server responsibilities, this example brings to light the practical application of job queues in a real-world scenario.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d1f50df5bbe1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What happens when you type google.com in your browser and press Enter?]]></title>
            <link>https://medium.com/@coderoyalty/what-happens-when-you-type-google-com-in-your-browser-and-press-enter-d624d005a811?source=rss-5514fd26e883------2</link>
            <guid isPermaLink="false">https://medium.com/p/d624d005a811</guid>
            <category><![CDATA[web]]></category>
            <category><![CDATA[learning]]></category>
            <dc:creator><![CDATA[Akanni Emmanuel]]></dc:creator>
            <pubDate>Fri, 12 May 2023 21:30:43 GMT</pubDate>
            <atom:updated>2023-05-12T21:30:43.376Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ykW4LtZItJeBt_OF" /><figcaption>Photo by <a href="https://unsplash.com/@edhoradic?utm_source=medium&amp;utm_medium=referral">Edho Pratama</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>We’ve all typed a URL like <a href="http://www.google.com">www.google.com</a> into our web browser, and soon after, our web browser renders a web page for us. Our web browser follows a series of steps to get the web page from its host, and we’ll uncover these steps in this article.</p><p>We’ll dive into the steps needed to get the data our browser renders from the URL’s host, covering:</p><ul><li><strong>DNS (Domain Name System) Request</strong></li><li><strong>TCP (Transmission Control Protocol) / IP (Internet Protocol)</strong></li><li><strong>Firewall</strong></li><li><strong>HTTPS (Hypertext Transfer Protocol Secure) / SSL (Secure Sockets Layer)</strong></li><li><strong>Load balancer</strong></li><li><strong>Web Server</strong></li><li><strong>Application Server</strong></li><li><strong>Database</strong></li></ul><p>Our web browser takes the URL we type and needs to get the web page from it. However, our browser does not know the address where this data is kept.</p><p>In order to get this address, our browser sends a request to a DNS server. This request is called a DNS request.</p><h3>DNS Request</h3><figure><img alt="A diagram visualizing how DNS resolution works." src="https://cdn-images-1.medium.com/max/742/1*bP8bLwnF8sJvSE9iRcd8lg.png" /></figure><p>This step involves sending a request from our browser to a DNS (Domain Name System) server. The DNS server processes this request and translates the URL’s domain name to its corresponding IP address. A DNS server is a repository that maps domain names to their IP addresses.
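</p><p>Before that request goes out, the browser first parses the URL to extract the hostname, which is the only part the DNS query needs. As an illustration in JavaScript, which has a built-in URL class, the parsing step looks like this:</p>

```javascript
// The WHATWG URL class (built into browsers and Node.js)
// splits a URL into its components; only the hostname is
// needed for the DNS request.
const url = new URL("https://www.google.com/search?q=dns");

console.log(url.protocol); // https:
console.log(url.hostname); // www.google.com (what DNS resolves)
console.log(url.pathname); // /search
```

<p>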
The DNS server resolves the request from our browser and sends a corresponding response. This response contains the URL’s IP address.</p><p>Our browser can contact a DNS server because it already knows that server’s address; without it, the browser would be unable to send the request at all.</p><p>After obtaining an IP address from the DNS server, our web browser can create a connection using it. Our browser now has the IP address for the URL we’re visiting, which will be used to access the web server.</p><p>To establish successful communication between a web browser and a web server, it’s essential to use a communication protocol that ensures the accurate transmission of data between the two applications/devices. That’s where TCP/IP comes in.</p><h3>TCP/IP</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m9LNzavNCYzO888SwIoy_A.png" /></figure><p>TCP (Transmission Control Protocol) is a communication protocol used to transmit data over the internet and other computer networks. TCP provides a standardized set of procedures for how devices communicate with each other, ensuring data is sent correctly.</p><p>IP handles and directs packets between devices on the internet or a network. Every device has its own IP address, which IP uses to route packets to the correct destination.</p><p>TCP is also responsible for breaking data into packets. It sends these packets over the internet and ensures they are received correctly by their recipient.</p><p>Our web browser will use the TCP interface to create a connection to the web server at the IP address.
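</p><p>TCP’s packetizing role, numbering segments so the receiver can reassemble them in order, can be illustrated with a toy JavaScript sketch (the 4-byte segment size is arbitrary):</p>

```javascript
// Split data into fixed-size numbered "segments", roughly as
// TCP does with a byte stream (real TCP tracks byte offsets).
function segment(data, size) {
  const segments = [];
  for (let i = 0; i < data.length; i += size) {
    segments.push({ seq: i / size, payload: data.slice(i, i + size) });
  }
  return segments;
}

// Reassemble by sequence number, even if the network
// delivered the segments out of order.
function reassemble(segments) {
  return [...segments]
    .sort((a, b) => a.seq - b.seq)
    .map((s) => s.payload)
    .join("");
}

const sent = segment("GET / HTTP/1.1", 4);
const received = reassemble([...sent].reverse()); // arrived out of order
console.log(received); // GET / HTTP/1.1
```

<p>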
This connection between our web browser and the web server could, however, be blocked by a firewall.</p><h3>Firewall</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uZ8e1nOBWdg2wEUdlbLZZw.png" /><figcaption>Structural diagram of a firewall.</figcaption></figure><p>According to <a href="https://en.wikipedia.org/wiki/Firewall_(computing)">Wikipedia</a>,</p><blockquote>a <strong>firewall</strong> is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.</blockquote><p>A firewall can prevent our web browser from accessing Google’s web server on a public or private network. It can also filter packets that are transferred between devices or applications. Firewalls help prevent communication with untrusted, malicious web servers, especially servers tagged as dangerous by the network or system administrator.</p><h4>HTTPS / SSL</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/608/1*GQhoXSJ6LMklgb_LYPM4Aw.png" /></figure><p>HTTP (Hypertext Transfer Protocol) is an application layer protocol that functions as a request-response protocol in the client-server model. It’s used for distributing hypermedia information.</p><p>HTTPS (HTTP Secure) is widely used over the internet. It’s the secure version of HTTP, using encryption for secure communication over a network or the internet.</p><p>HTTPS traffic is encrypted via Secure Sockets Layer (now known as Transport Layer Security, TLS).</p><p>Google’s web server supports SSL. Our web browser will perform an SSL/TLS handshake with Google’s web server. Once the handshake completes, our web browser and Google’s web server can communicate securely.</p><p>We’ve been mentioning web servers all this while.
But what exactly is a web server?</p><h4>Web Server</h4><p>A web server is computer software (and the hardware it runs on) that accepts requests via HTTP, its secure alternative HTTPS, or other protocols. A web server is used to serve dynamic or static content. Its main task is to process and deliver web pages to users. Examples include:</p><ul><li>NGINX (pronounced: EngineX)</li><li>Apache HTTP Server</li><li>Lighttpd (pronounced: Lighty)</li></ul><p>However, a web server is limited in the number of requests it can handle. Google handles more than a million requests per minute; a single server can’t carry that load. The solution is to have more than one server. How can these requests be shared across these servers? Through load balancing.</p><h4>Load Balancer</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/766/1*amho3ElcC3GXuETiWBpyDA.png" /></figure><p>A load balancer is a device that distributes incoming requests among a collection of servers and returns the response from the selected server to the appropriate client. A load balancer acts as a reverse proxy: it accepts a request from a client and forwards it to a server. Load balancing makes the procedure:</p><ul><li>Efficient, by evenly distributing tasks across servers.</li><li>Fast, by optimizing response time.</li><li>Scalable, by making it easy to add new servers.</li><li>Maintainable, by reducing the workload on each server.</li></ul><p>Examples of load balancers are:</p><ul><li>NGINX</li><li>HAProxy</li><li>Microsoft Azure Load Balancer</li><li>Google Cloud Load Balancer</li></ul><p>A load balancer sends our request to one of Google’s many web servers. We get responses quicker because Google’s load balancers distribute requests effectively.</p><h4>Application Server</h4><p>This is a server that hosts applications or software through a communication protocol.
It enables interaction between end-user clients and server-side application code to deliver dynamic content. Examples include:</p><ul><li>Java Application Server</li><li>Node.js Application Server</li><li>PHP Application Server</li></ul><p>Application servers can manage the flow of data between components of the application.</p><p>We’ve been talking about processing data and serving content, but where is that data kept?</p><h4>Database</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/288/1*ftfagrUMp1v-vjnGTEdJkA.png" /></figure><p>A database is an organized collection of data. It allows users to retrieve, update and store information. A <strong>Database Management System</strong> (DBMS) is a software system that sits between an application and a database, allowing users to create, manipulate and maintain databases.</p><p>Examples of a DBMS are:</p><ul><li>MySQL</li><li>PostgreSQL</li><li>MongoDB</li><li>Microsoft SQL Server, etc.</li></ul><p>The web page we’re accessing from Google is kept in a database. A DBMS maintains the database, and Google’s web server can access the database through it. The data from the database can be processed and interpreted by Google’s web server. After processing the data, the web server sends a response. The load balancer receives the response and directs it to its intended client.</p><h3>Conclusion</h3><p>We now understand what happens when we type <a href="http://www.google.com">www.google.com</a> into our web browser.</p><p>Briefly, when we type a URL into our web browser, our browser sends a DNS request to a DNS server. The DNS server returns an IP address if the URL is valid. IP addresses are unique to every device; they are like the address information of your house.</p><p>Our browser requires a connection with the server at the IP address. TCP/IP is used, as it provides procedures for communicating efficiently. Our browser communicates with the server using HTTP, or its secure alternative, HTTPS.
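</p><p>The simplest policy a load balancer can use to share requests, round robin, can be sketched in a few lines of JavaScript (a toy model; real load balancers also track server health and load):</p>

```javascript
// A toy round-robin load balancer: each incoming request is
// handed to the next server in the pool, wrapping around at
// the end of the list.
class RoundRobinBalancer {
  constructor(servers) {
    this.servers = servers;
    this.next = 0;
  }
  pick() {
    const server = this.servers[this.next];
    this.next = (this.next + 1) % this.servers.length;
    return server;
  }
}

const lb = new RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"]);
console.log(lb.pick()); // 10.0.0.1
console.log(lb.pick()); // 10.0.0.2
console.log(lb.pick()); // 10.0.0.3
console.log(lb.pick()); // 10.0.0.1 (wraps around)
```

<p>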
The IP address from the DNS response might point to a load balancer. A load balancer shares requests across a group of servers and returns each response to the intended client.</p><p>A web server itself is software and hardware that serves web content. This content can be static or dynamic. It interprets our HTTP request and generates an HTTP response, which our browser uses to render a web page for us.</p><p>If the server supports secure communication, our browser will perform a handshake with it. SSL is used to encrypt data sent to the server, which protects our communication from malicious eavesdropping. HTTPS is the secure version of HTTP; it’s the protocol used for secure communication.</p><p>The data behind the web server’s response is kept in a database. A database is an organized collection of data. A Database Management System (DBMS) allows users to manipulate and maintain databases. A web server can access this data via the DBMS. After accessing and processing the data, the web server generates a response and sends it back to the client. Remember, the load balancer directs the response to its intended client.</p><p>Finally, we’ve uncovered the procedures and understand “What happens when you type google.com in your browser and press Enter”.</p><p>I hope you learned from this article and now understand how a web browser gets a web page for you.</p><p>Kindly leave a comment and upvote this article. Happy Learning 👏</p><p>References to <a href="https://www.wikipedia.org/">Wikipedia</a>…</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d624d005a811" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>