<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Maxime on Medium]]></title>
        <description><![CDATA[Stories by Maxime on Medium]]></description>
        <link>https://medium.com/@maximemrf?source=rss-30180446cafa------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*GydU7YgparsISJ-XYuJgrQ.jpeg</url>
            <title>Stories by Maxime on Medium</title>
            <link>https://medium.com/@maximemrf?source=rss-30180446cafa------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 09:55:06 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@maximemrf/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Rebuilding Edge Infrastructure on AWS: Lessons from a Cloudflare to CloudFront Migration]]></title>
            <link>https://medium.com/trackit/rebuilding-edge-infrastructure-on-aws-lessons-from-a-cloudflare-to-cloudfront-migration-b5a6a2b4fa95?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/b5a6a2b4fa95</guid>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[cloudflare]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[aws-cloudfront]]></category>
            <category><![CDATA[cdn]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Thu, 05 Feb 2026 10:31:01 GMT</pubDate>
            <atom:updated>2026-02-05T10:31:01.052Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*0yoisBQivghLa0ajCIxZxA.png" /></figure><p>Edge infrastructure decisions tend to stay invisible until scale, security, or compliance constraints start affecting day-to-day operations. At that stage, the trade-offs between managed platforms and self-assembled architectures become impossible to ignore.</p><p>As edge logic grows in volume and operational importance, the manner in which it is expressed and governed starts to matter. Auditing changes, enforcing consistency across environments, and treating edge behavior as deployable code become harder when edge configuration is managed through a web console instead of being versioned and reviewed as code.</p><p>In this context, companies often choose to migrate their edge infrastructure from Cloudflare to Amazon CloudFront and AWS Web Application Firewall (WAF) to regain control and align edge behavior with infrastructure-as-code (IaC) practices. However, such migrations are rarely simple lift-and-shift exercises. While Cloudflare abstracts most edge concerns behind a unified interface, AWS exposes them as discrete building blocks that must be explicitly designed, wired, and maintained.</p><h3>Rebuilding Edge Capabilities on AWS</h3><p>This article documents a real-world migration from Cloudflare to a CloudFront- and WAF-based architecture, driven by the need for stronger control, auditability, and infrastructure-as-code enforcement.</p><p>Several capabilities that were implicit on Cloudflare had to be rebuilt explicitly, including redirects, rate limiting, persistent blocking, image optimization, and testing. 
The objective was not to copy Cloudflare’s features one configuration block at a time, but to arrive at a serverless edge architecture defined entirely in Terraform, with predictable behavior and clear operational ownership.</p><h3>Routing &amp; Redirects</h3><p>The migration challenge becomes clearer by comparing how routing and redirects are handled on each platform:</p><ul><li><strong>Cloudflare Page Rules:</strong> Follow a straightforward “If URL matches X, then do Y” logic. They are used for redirects, forcing HTTPS, bypassing cache, and similar edge logic. The model is configuration-driven and flexible, with little friction as rules accumulate.</li><li><strong>AWS Ordered Cache Behaviors:</strong> In Amazon CloudFront, ordered cache behaviors are heavyweight configuration objects designed to route traffic to different origins, such as /api/* to Lambda or /static/* to S3. They are not intended for simple conditional logic. From an infrastructure perspective, they are architectural primitives, verbose to express in Terraform, and limited to 75 behaviors per distribution.</li></ul><h4>The Classic AWS Mistake: Overusing ordered_cache_behavior</h4><p>A common reflex during an AWS migration is to translate every Cloudflare Page Rule into an ordered_cache_behavior block in Terraform. Even though AWS has raised the default limit to 75 behaviors per distribution, pushing the configuration in that direction does not scale in practice.</p><p>This approach breaks down for two reasons.</p><ol><li><strong>The Maintenance Cost:</strong> Terraform is verbose. Managing 20 redirects via cache behaviors generates <strong>a large volume of HCL code</strong>. Changing a single security header policy requires updating 20 blocks. It is error-prone and tedious. 
While Terraform modules can mitigate code verbosity (DRY), they do not solve the underlying architectural inefficiency or the deployment latency caused by managing dozens of distinct behaviors.</li><li><strong>The Deployment Fatigue:</strong> CloudFront distribution updates are inherently slow. Each terraform apply involving many cache behaviors forces the CloudFront API to validate and reprocess the entire behavior chain. As the configuration grows, CI/CD pipelines become noticeably slower and less predictable.</li></ol><h3>The Solution: One Global Edge Router</h3><p>Rather than defining a separate infrastructure block for every redirect, a single global redirect function is used.</p><p>The function is attached to the default cache behavior, allowing it to intercept all incoming requests. It effectively acts as a lightweight edge router. The execution flow is simple:</p><ol><li>Inspect the requested URL</li><li>Look up the URL in a centralized mapping</li><li>If a match exists, return a 301 Moved Permanently response immediately</li><li>If no match is found, forward the request to the origin without modification</li></ol><pre>function handler(event) {<br>   const request = event.request;<br>   const uri = request.uri;<br><br>   const redirectMap = {<br>       &#39;/a/nice/path&#39;:   &#39;https://to/the/new/path&#39;,<br>       &#39;/another/path&#39;: &#39;https://to/another/new/path&#39;,<br>   };<br><br>   if (redirectMap[uri]) {<br>       return {<br>           statusCode: 301,<br>           statusDescription: &#39;Moved Permanently&#39;,<br>           headers: {<br>               &#39;location&#39;: { value: redirectMap[uri] }<br>           }<br>       };<br>   }<br>   return request;<br>}</pre><h3>Security: Persistent Blocking on AWS WAF</h3><p>This is where the behavioral gap between platforms becomes most visible.</p><p>On Cloudflare, an IP repeatedly hitting a sensitive endpoint can be handled with a simple rate-limiting rule and a fixed block duration. 
The problem is effectively solved at the configuration level.</p><p>AWS WAF does not offer an equivalent mechanism. It provides rate-based rules, but their behavior is transient and threshold-driven, closer to a revolving door than a sustained block:</p><ol><li>An attacker IP exceeds the request threshold (e.g., 100 requests in 5 minutes)</li><li>AWS WAF blocks the IP</li><li>The attacker stops (or remains blocked) for 5 minutes</li><li>The rate drops below the threshold</li><li>AWS WAF releases the IP immediately</li><li>The attacker starts again</li></ol><p>Achieving persistent blocking exposes a core limitation of AWS WAF. The service is stateless and does not retain historical context beyond the evaluation window.</p><h3>The Solution: A Stateful Penalty Box</h3><p>AWS WAF does not provide a built-in mechanism for persistent blocking, which makes it necessary to introduce <strong>state</strong> on top of an otherwise stateless system.</p><p>The solution is composed of four parts:</p><ul><li><strong>Sensor:</strong> A WAF rate-based rule used to detect abusive traffic</li><li><strong>Logic:</strong> A Lambda function triggered every minute via EventBridge</li><li><strong>Memory:</strong> An S3 JSON file storing IP addresses and their expiration timestamps</li><li><strong>Blocker:</strong> A WAF IP set used as a blocklist</li></ul><p><strong>How it works:</strong> Every minute, the Lambda function runs and queries AWS WAF for IPs currently violating the configured rate-based rules. For each IP, a release time is calculated based on the configured blocking duration, for example the current time plus 24 hours. This information is written to S3.</p><p>On each execution, the Lambda also rebuilds the list of IPs whose release time has not yet expired and pushes this consolidated list to the WAF IP set attached to the Web ACL.</p><h4>Existing Solutions vs. Our Needs</h4><p>This problem is not unique. 
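The per-minute reconciliation described above is small enough to sketch. A minimal Node.js version (illustrative only; in the real Lambda the state map lives in the S3 JSON file and the result is pushed to the WAF IP set, I/O that is abstracted away here):

```javascript
// Sketch of the penalty-box reconciliation loop (illustrative names).
// State shape: { "ip": releaseTimestampMs, ... }
const BAN_DURATION_MS = 24 * 60 * 60 * 1000; // 24-hour ban

// Merge newly detected offenders into the stored state,
// stamping each IP with its release timestamp.
function recordOffenders(state, offendingIps, now) {
  for (const ip of offendingIps) {
    state[ip] = now + BAN_DURATION_MS;
  }
  return state;
}

// Rebuild the blocklist: keep only IPs whose release time is still in the future.
function buildBlocklist(state, now) {
  return Object.keys(state).filter((ip) => state[ip] > now);
}
```

Each run merges the offenders reported by the rate-based rule into the stored map, then pushes the still-active entries to the WAF IP set.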
AWS provides a reference implementation on <a href="https://github.com/aws-samples/aws-waf-rate-based-rule-customized-block-period">GitHub</a> that demonstrates persistent blocking with AWS WAF. While useful as a proof of concept, it does not meet production requirements for two reasons.</p><ol><li><strong>The Terraform Gap:</strong> The AWS-provided solution is implemented entirely in CloudFormation. For environments standardized on Terraform, this introduces an inconsistency in tooling and workflow.</li><li><strong>The Scalability Issue:</strong> The reference implementation maps a single WAF rule to a single blocking duration. Handling different threat levels requires deploying the stack multiple times. For example, aggressive login brute-forcing might require a 24-hour ban, while low-intensity scraping might only justify a 10-minute block. Each case would require a separate deployment.</li></ol><h4>The Upgrade: A Multi-Rule Engine</h4><p>The logic was rewritten from scratch in Node.js and Terraform to support per-rule configuration. Instead of relying on a single variable, the Lambda function receives a JSON configuration that maps WAF rule names to ban durations:</p><pre>{<br>  &quot;RateLimit_Login_High&quot;: 1440,  // Ban 24h<br>  &quot;RateLimit_Search_Low&quot;: 10     // Ban 10m<br>}</pre><p>On each execution, the Lambda iterates over this configuration. When an IP violates a given rule, the corresponding ban duration is applied. This transforms a simple script into a mechanism capable of handling multiple threat levels within a single execution flow.</p><h4><strong>The IPv6 Trap</strong></h4><p>One detail that Cloudflare abstracts away but AWS exposes directly is IP address handling. AWS WAF requires separate IP sets for IPv4 and IPv6 addresses. Mixing both formats in a single list is not supported.</p><p>As a result, the implementation maintains separate blocklists (PenaltyBox-IPv4 and PenaltyBox-IPv6) independently. 
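Routing each detected address to the correct set reduces to a version check. A simplified Node.js sketch (the set names match those above; AWS WAF IP sets expect CIDR notation, hence the suffixes):

```javascript
// Partition detected addresses into version-specific WAF IP sets.
function partitionByIpVersion(ips) {
  const sets = { "PenaltyBox-IPv4": [], "PenaltyBox-IPv6": [] };
  for (const ip of ips) {
    if (ip.includes(":")) {
      sets["PenaltyBox-IPv6"].push(ip + "/128"); // single IPv6 host
    } else {
      sets["PenaltyBox-IPv4"].push(ip + "/32"); // single IPv4 host
    }
  }
  return sets;
}
```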
Failing to do so leaves roughly 40 percent of modern mobile traffic unprotected, as many mobile networks default to IPv6.</p><h3>Image Optimization: Where is the “Polish” button?</h3><p>On Cloudflare, image optimization is enabled by toggling a feature called <strong>Polish</strong>. Large JPEGs are automatically delivered as optimized WebP or AVIF images without any additional configuration.</p><p>On AWS, CloudFront behaves differently. It delivers exactly what is stored in the origin, with no built-in image transformation or format negotiation.</p><h4><strong>The Solution: Dynamic Image Transformation</strong></h4><p>To address this, the official <a href="https://github.com/aws-solutions/dynamic-image-transformation-for-amazon-cloudfront"><strong>AWS Solution: Dynamic Image Transformation</strong></a> was deployed.</p><p>The architecture places a Lambda function behind CloudFront that processes images on the fly using the Sharp library. A Lambda-based implementation was chosen, although an ECS-based alternative is also available.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*db8qfah89siXNtftQHdF7g.png" /><figcaption>AWS Dynamic Image Transformation diagram</figcaption></figure><p>Just like Cloudflare, the configured Lambda handles format conversion automatically. When a user requests &lt;img src=&quot;/hero.jpg&quot;&gt;, the Lambda checks the browser’s Accept header. If the browser supports it, the Lambda converts the image to <strong>WebP</strong> or <strong>AVIF</strong> and optimizes the quality.</p><p>For this to work correctly, the Accept header must be explicitly whitelisted in the CloudFront cache policy. By default, CloudFront removes most headers to maximize cache hit ratios. 
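The decision the transformation Lambda makes from that header is plain content negotiation. A simplified sketch (not the AWS solution's actual code):

```javascript
// Choose the delivered image format from the browser's Accept header.
// Falls back to the original format when no modern codec is advertised.
function pickImageFormat(acceptHeader, originalFormat) {
  const accept = (acceptHeader || "").toLowerCase();
  if (accept.includes("image/avif")) return "avif"; // best compression
  if (accept.includes("image/webp")) return "webp";
  return originalFormat; // e.g. the stored JPEG
}
```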
Without this configuration, the Lambda function cannot detect format support and will serve the original JPEG to all clients.</p><p>Handling this explicitly restores the simplicity of automatic compression while also enabling dynamic resizing for responsive image delivery.</p><h3>Testing Edge Behavior Reliably</h3><p>Configuring complex WAF rules and geo-blocking is one challenge. Verifying that those rules behave as expected, without relying on VPNs or location-based workarounds, is another.</p><p>Manual curl commands and ad-hoc shell scripts do not scale for this purpose. Edge behavior needed to be tested in a way that was repeatable, portable, and as codified as the infrastructure itself.</p><h4>Choosing the Right Tool: Japa vs. Hurl</h4><p>A JavaScript test runner such as <strong>Japa </strong>was an initial option, given its existing use for backend testing. It offers full language expressiveness and familiar syntax.</p><p>For infrastructure and edge testing, this approach introduced unnecessary overhead. Two constraints stood out:</p><ul><li><strong>Overhead</strong>: Running a Node.js test runner requires bootstrapping a runtime, managing npm dependencies, and writing asynchronous test code, even when the goal is to validate a simple HTTP response.</li><li><strong>Portability</strong>: These tests needed to run in lightweight CI environments and be usable by operations teams, without requiring a full development setup.</li></ul><p><a href="https://github.com/Orange-OpenSource/hurl"><strong>Hurl</strong></a> was selected instead. It is a lightweight command-line tool written in Rust that focuses exclusively on HTTP integration testing. Requests and assertions are defined in a plain text format, making tests easy to read, version, and execute. 
Its narrow scope matches the problem space well.</p><p>Testing a caching rule with Hurl:</p><pre>HEAD {{host}}/<br>HTTP 200<br><br>HEAD {{host}}/<br>HTTP 200<br>[Asserts]<br>header &quot;X-Cache&quot; contains &quot;Hit from cloudfront&quot;<br>header &quot;Age&quot; exists<br>header &quot;Age&quot; toInt &gt;= 0</pre><p>Caching validation is performed using two consecutive requests. The first request populates the cache. The second request asserts that the response is served from CloudFront, indicated by a cache hit and the presence of an Age header.</p><p>More complex behavior can be validated just as directly. For example, redirect validation is reduced to asserting a 301 status code and an exact match on the Location header.</p><pre>GET {{host}}/oldpath<br>HTTP 301<br>[Asserts]<br>header &quot;Location&quot; == &quot;https://mywebsite.com/newpath&quot;</pre><h4>Testing the Geo-Blocking</h4><p>Geographic restrictions are difficult to validate without relying on VPNs or physical location changes. This is addressed by introducing a controlled testing mode based on the X-Forwarded-For header.</p><p>By default, AWS WAF evaluates the source IP of the TCP connection. Using a <strong>Terraform dynamic block</strong>, WAF can be instructed to trust the X-Forwarded-For header only when testing is explicitly enabled.</p><pre>dynamic &quot;forwarded_ip_config&quot; {<br>  for_each = var.enable_testing_mode ? [1] : []<br>  content {<br>    header_name       = &quot;X-Forwarded-For&quot;<br>    fallback_behavior = &quot;MATCH&quot;<br>  }<br>}</pre><p>When enable_testing_mode is set to true in a staging environment, AWS WAF evaluates the injected IP address instead of the physical source IP. This makes it possible to use <strong>Hurl</strong> to simulate requests from specific geographic locations and validate geo-blocking rules deterministically.</p><h3>Conclusion</h3><p>Migrating from Cloudflare to Amazon CloudFront and AWS WAF is not without cost. 
It requires engineering effort, debugging time, and a deep understanding of AWS primitives. Convenience is traded for ownership, and implicit platform features are replaced with explicit design decisions.</p><p>However, over time, the ROI becomes clear:</p><ol><li><strong>Immutable Infrastructure:</strong> Edge behavior is no longer modified through ad-hoc console changes. Redirects, security rules, and routing logic are fully defined in Terraform. Rolling back a WAF rule becomes a predictable deployment rather than a manual intervention.</li><li><strong>Granular Control:</strong> Behavior is no longer constrained by predefined platform features. Blocking durations, routing logic, and image optimization strategies are defined explicitly and can be adapted as requirements evolve.</li><li><strong>Observability:</strong> Logs and metrics are fully owned. Traffic and security events can be streamed directly to CloudWatch and queried with Logs Insights, without reliance on vendor-specific dashboards or plan limitations.</li></ol><p>Companies considering a similar migration, or facing comparable edge architecture constraints, can reach out to <a href="https://trackit.io">TrackIt</a> for guidance and hands-on support.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b5a6a2b4fa95" width="1" height="1" alt=""><hr><p><a href="https://medium.com/trackit/rebuilding-edge-infrastructure-on-aws-lessons-from-a-cloudflare-to-cloudfront-migration-b5a6a2b4fa95">Rebuilding Edge Infrastructure on AWS: Lessons from a Cloudflare to CloudFront Migration</a> was originally published in <a href="https://medium.com/trackit">TrackIt</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How I Automated My Bitcoin Knots/Core Setup with Ansible]]></title>
            <link>https://medium.com/@maximemrf/how-i-automated-my-bitcoin-knots-core-setup-with-ansible-a578667592ea?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/a578667592ea</guid>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[bitcoin-node]]></category>
            <category><![CDATA[bitcoin-knots]]></category>
            <category><![CDATA[bitcoin-core]]></category>
            <category><![CDATA[bitcoin]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Mon, 08 Dec 2025 12:20:32 GMT</pubDate>
            <atom:updated>2026-01-14T11:58:57.941Z</atom:updated>
<content:encoded><![CDATA[<p>Manually provisioning a secure Bitcoin node on a Linux server involves a repetitive checklist of system administration tasks: downloading and verifying GPG signatures, creating dedicated users with restricted permissions, writing systemd service files to ensure uptime. Doing this manually is not only time-consuming but also prone to human error. If you need to migrate to a new server, spin up a testnet node, or switch from Bitcoin Core to Knots, you have to start from scratch.</p><p><strong>For me, this project was the logical next step in my automation journey.</strong> I had already streamlined my infrastructure by automating the installation of Debian using <strong>Proxmox templates</strong>. I could spin up a fresh, secure OS in seconds, but the automation stopped there. I still found myself manually configuring the Bitcoin software on top of those fresh VMs. It was a friction point that undermined the speed of my Proxmox setup.</p><p>I wanted a better way. I wanted a solution that adhered to the principles of <strong>Infrastructure as Code</strong> to bridge that final gap. I wanted a reproducible, secure, and automated deployment process that I could audit and version control. That is why I built <a href="https://github.com/MaximeMRF/ansible-bitcoin-node"><strong>ansible-bitcoin-node</strong></a>.</p><p>But this automation is more than just a convenience; it is a strategic building block. 
This role serves as a foundational layer for a much larger personal project: <strong>I am currently developing a “Bitcoin Node as a Service” platform.</strong> If you want to be the first to know when it launches, <a href="https://impedance.cloud/?utm_source=medium&amp;utm_medium=article&amp;utm_campaign=automation-bitcoin-ansible&amp;utm_content=cta-top"><strong>check it out here</strong></a>.</p><p>In this article, I will share why I chose Ansible for this task, how the role is architected, and how you can use it to spin up your own node in minutes.</p><h3>Why Ansible?</h3><p>For me, the choice was dictated by the reality of my current infrastructure. <strong>My homelab is currently a hybrid environment: services are split between virtual machines and Kubernetes.</strong> While I might eventually migrate fully to a pure Kubernetes cluster, managing VMs remains a core part of my operations. I needed a tool that fit this transitional hybrid phase perfectly, allowing me to treat my traditional VMs with the same rigor and reproducibility as my containerized workloads.</p><h3>Project Overview &amp; Architecture</h3><p>The goal of this role is simple: take a fresh Linux server (Debian/Ubuntu) and turn it into a fully functioning, secure Bitcoin node with zero manual intervention.</p><blockquote>Don’t hesitate to open a PR on GitHub if my Ansible role works with other versions of Debian or Ubuntu not listed here, or if you want to add support for other distributions, such as Fedora.</blockquote><h4>Tech Stack</h4><ul><li><strong>Target OS:</strong> Debian / Ubuntu (tested on Debian 13)</li><li><strong>Automation:</strong> Ansible</li><li><strong>Software:</strong> <strong>Bitcoin Knots (Default)</strong> or Bitcoin Core (Optional)</li><li><strong>Process Management:</strong> Systemd</li></ul><h4>Key Features</h4><p>I designed this role to follow security best practices strictly while offering flexibility in the choice of implementation.</p><ul><li><strong>Knots by 
Default, Core Available:</strong> By default, the role installs <strong>Bitcoin Knots</strong>. I chose Knots as the default implementation because of its advanced features and enhanced mempool policies, which give node runners more control over the transactions they relay. However, if you prefer the standard implementation, you can switch to <strong>Bitcoin Core</strong> simply by changing a variable.</li><li><strong>Security &amp; User Management:</strong> The role creates a dedicated system user (default: bitcoin): the Bitcoin daemon never runs as root.</li><li><strong>Trustless Installation:</strong> It doesn’t just download the binary. It downloads the checksums and verifies the GPG signatures against the developers’ keys (Luke Dashjr for Knots, or the Core maintainers). If the signature doesn’t match, the deployment fails immediately.</li><li><strong>Systemd Integration:</strong> It installs a robust systemd service file. This ensures that bitcoind starts automatically on boot and restarts in case of a crash.</li></ul><h3>Deploy your Node</h3><p>The role is available on <a href="https://galaxy.ansible.com/ui/standalone/roles/MaximeMRF/ansible-bitcoin-node/">Ansible Galaxy</a> so no need to clone the repository and install it manually.</p><p>The tutorial is basically the same as the project readme file.</p><h4>Install the role</h4><p>Pull the role directly from Galaxy to your machine:</p><pre>ansible-galaxy role install MaximeMRF.ansible-bitcoin-node</pre><h4>Create your playbook</h4><p>Create a file named deploy_node.yml. 
This is where you call the installed role and define your configuration.</p><pre>---<br>- name: Deploy Bitcoin Node<br>  hosts: bitcoin_nodes<br>  become: yes<br><br>  roles:<br>    - role: MaximeMRF.ansible-bitcoin-node</pre><h4>Define your Hosts</h4><p>Create an inventory file hosts.yaml:</p><pre>all:<br>  children:<br>    bitcoin_nodes:<br>      vars:<br>        ansible_user: &quot;debian&quot;<br>      hosts:<br>        node-btc-01:<br>          ansible_host: 192.168.0.10<br>          bitcoin_variant: &quot;knots&quot;<br>          bitcoin_version: &quot;29.2.knots20251110&quot;<br>          # optional, default is &quot;x86_64&quot;<br>          bitcoin_architecture: &quot;x86_64&quot;<br>          bitcoin_enable_indexes: false<br>          bitcoin_config:<br>            uacomment: &quot;MyKnotsNode&quot;<br>            server: 1<br>            listen: 1<br>            logips: 1<br>            bind: &quot;0.0.0.0&quot;<br>            rpcbind: &quot;0.0.0.0&quot;<br>            rpcallowip: &quot;0.0.0.0/0&quot;<br>            rpcuser: &quot;bitcoinrpc&quot;<br>            rpcpassword: &quot;myverysecurepassword&quot;<br>            prune: 4096<br>            dbcache: 4096<br>            maxmempool: 300<br>            zmqpubrawblock: &quot;tcp://0.0.0.0:28332&quot;<br>            zmqpubrawtx: &quot;tcp://0.0.0.0:28333&quot;<br>        node-btc-02:<br>          ansible_host: 192.168.0.20<br>          bitcoin_variant: &quot;core&quot;<br>          bitcoin_version: &quot;29.2&quot;<br>          bitcoin_enable_indexes: false<br>          bitcoin_config:<br>            uacomment: &quot;MyCoreNode&quot;<br>            server: 1<br>            listen: 1<br>            logips: 1<br>            bind: &quot;0.0.0.0&quot;<br>            rpcbind: &quot;0.0.0.0&quot;<br>            rpcallowip: &quot;0.0.0.0/0&quot;<br>            rpcuser: &quot;bitcoinrpc&quot;<br>            rpcpassword: &quot;myverysecurepassword&quot;<br>            prune: 4096<br>            dbcache: 
4096<br>            maxmempool: 300<br>            zmqpubrawblock: &quot;tcp://0.0.0.0:28332&quot;<br>            zmqpubrawtx: &quot;tcp://0.0.0.0:28333&quot;</pre><p><strong>Understanding the Key Parameters:</strong></p><ul><li><strong>bitcoin_variant:</strong> This is the main switch. Set it to knots for the enhanced version (recommended) or core for the standard reference implementation. You can switch between variants and re-apply the playbook for the change to take effect; the same goes for versions.</li><li><strong>bitcoin_version:</strong> Set the full version string. Don’t hesitate to upgrade or downgrade the version and re-apply the playbook; it will work fine.</li><li><strong>bitcoin_config:</strong> This dictionary maps directly to the bitcoin.conf file. Every key-value pair here is rendered into the final configuration file on the server.</li><li><strong>bitcoin_architecture</strong>: Defaults to x86_64 (standard Intel/AMD servers). You must change this to <strong>aarch64</strong> if you are deploying on a <strong>Raspberry Pi</strong> or an ARM-based VPS. This ensures Ansible downloads the correct binary for your hardware.</li></ul><h4>Run it</h4><p>Launch the deployment and watch Ansible handle the heavy lifting.</p><pre>ansible-playbook -i hosts.yaml deploy_node.yml</pre><h3>Conclusion &amp; What’s Next</h3><p>Automating the deployment of a Bitcoin node changes the game. It turns a tedious, error-prone manual process into a reliable, reproducible asset. Whether you are running a single node at home or managing a fleet of nodes, <a href="https://github.com/MaximeMRF/ansible-bitcoin-node"><strong>ansible-bitcoin-node</strong></a> ensures your infrastructure is secure and standardized from day one.</p><p><strong>The Next Step: Bitcoin Node as a Service</strong> As I hinted, this role is the foundation for something much bigger. 
I’m currently building a platform to make Bitcoin infrastructure even more accessible.</p><p><strong>Are you interested in a “Bitcoin Node as a Service” solution?</strong> I’m looking for early beta testers and feedback. <a href="https://impedance.cloud/?utm_source=medium&amp;utm_medium=article&amp;utm_campaign=automation-bitcoin-ansible&amp;utm_content=cta-bottom"><strong>See the project here</strong></a>.</p><p>Open source lives on community feedback. I built this tool to be robust, but there is always room for improvement.</p><ul><li><strong>Star the Repo:</strong> If you found this useful, please give the project a ⭐ on <a href="https://github.com/MaximeMRF/ansible-bitcoin-node"><strong>GitHub</strong></a>. It helps with visibility.</li><li><strong>Contribute:</strong> Found a bug? Want to add support for a Linux Distro? Don’t hesitate to open an <strong>Issue</strong> or submit a <strong>Pull Request</strong>.</li></ul><p>Happy hosting, and don’t trust, verify.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a578667592ea" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unreal Horde vs. Traditional CI/CD: Optimizing for Unreal Engine Development]]></title>
            <link>https://medium.com/trackit/unreal-horde-vs-traditional-ci-cd-optimizing-for-unreal-engine-development-521a009e01fe?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/521a009e01fe</guid>
            <category><![CDATA[game-development]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[unreal-engine]]></category>
            <category><![CDATA[ci-cd-pipeline]]></category>
            <category><![CDATA[jenkins]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Thu, 06 Nov 2025 12:22:16 GMT</pubDate>
            <atom:updated>2025-11-20T10:32:32.847Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*5AvcccNrVO6xjALRg7nKOQ.png" /></figure><p>As game development scales in complexity, Continuous Integration and Continuous Delivery (CI/CD) pipelines have become central to maintaining productivity. Conventional platforms such as Jenkins, GitHub Actions, GitLab CI, or CircleCI offer flexibility and maturity for typical software delivery. However, their architectures are not optimized for the specific computational and data demands of Unreal Engine projects. Compiling millions of lines of C++ code and processing terabytes of binary assets requires an orchestration layer purpose-built for this environment.</p><p><a href="https://dev.epicgames.com/documentation/en-us/unreal-engine/horde-in-unreal-engine">Unreal Horde</a>, developed by Epic Games, extends CI/CD beyond the general-purpose model. It introduces Granular Parallelization, Native Caching, and Deep Build Graph integration, three design principles that align with the technical and operational realities of Unreal Engine development. The following sections examine each of these principles in detail, highlighting how they address the performance, scalability, and maintenance challenges that traditional CI/CD tools face in large Unreal Engine environments.</p><h3>Granular Parallelization for Build Speed</h3><p>The primary technical challenge in Unreal Engine development is the sheer computational intensity of the build process, particularly the enormous C++ compilation phase. The rate at which the pipeline completes this process directly governs iteration speed and developer efficiency. Traditional CI tools eventually reach a hard performance ceiling, while Horde achieves a level of scalability that translates into dramatic speed gains.</p><h4>The Bottleneck of Job-Level Parallelization</h4><p>Traditional CI/CD systems such as Jenkins are built around <strong>Job-Level Parallelization</strong>. 
They can effectively distribute independent, self-contained tasks (for example, <em>Run Unit Tests</em> or <em>Build iOS Client</em>) to separate Build Agents. However, when confronted with a single, monolithic process like compiling the engine and game code, a generic tool can only assign the job to one high-spec machine. That machine quickly becomes a bottleneck, leaving hundreds of available CPU cores idle and developers waiting for builds to finish.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dKOUp7cLLxyDhtzjSL3aPw.png" /><figcaption>Jenkins uses job-level parallelization, where each agent runs an entire stage (Build, Test, or Deploy).</figcaption></figure><h4>Horde’s Breakthrough: Task-Level Distribution</h4><p>Horde eliminates this inefficiency by operating at a <strong>Task-Level</strong> granularity, made possible through its native understanding of Unreal Engine’s Build Graph system.</p><ul><li><strong>Build Graph Decomposition:</strong> When a build begins, Horde reads the Build Graph script and decomposes it into thousands of small, non-sequential, interdependent tasks.</li><li><strong>Distributed Execution:</strong> Instead of assigning an entire job to a single agent, Horde distributes these atomic tasks across a scalable pool of Horde Agents, leveraging the elastic capacity of AWS. Independent C++ modules, for example, can be compiled simultaneously across hundreds of machines.</li></ul><p>Rather than one server performing the entire build, a hundred servers can each process a fraction of the workload in parallel. This granular parallelization fully utilizes available compute power, transforming multi-hour builds into streamlined processes. 
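</p><p>The decomposition described above can be captured in a minimal Python sketch (the graph and task names are invented for illustration, not actual Build Graph nodes): tasks whose dependencies are already satisfied form a wave, and every task in a wave can run on a separate agent, whereas a job-level system would execute the whole graph serially on one machine.</p><pre>
```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical build graph: task name -> set of prerequisite tasks.
# (Illustrative names only; real Build Graph scripts are XML and far larger.)
GRAPH = {
    "CompileCore": set(),
    "CompileRenderer": set(),
    "CompileGameplay": {"CompileCore"},
    "CookAssets": {"CompileCore", "CompileRenderer"},
    "PackageClient": {"CompileGameplay", "CookAssets"},
}

def schedule_waves(graph):
    """Group tasks into waves whose dependencies are already satisfied."""
    done, waves = set(), []
    while len(done) < len(graph):
        ready = sorted(t for t, deps in graph.items()
                       if t not in done and deps <= done)
        if not ready:
            raise ValueError("dependency cycle in build graph")
        waves.append(ready)
        done.update(ready)
    return waves

def run_build(graph, agents=8):
    """Dispatch each wave across the agent pool; a job-level system
    would instead hand the entire graph to a single agent."""
    with ThreadPoolExecutor(max_workers=agents) as pool:
        for wave in schedule_waves(graph):
            list(pool.map(lambda task: f"built {task}", wave))

# The two compile tasks run in parallel, then gameplay/cooking, then packaging:
# schedule_waves(GRAPH) ->
#   [["CompileCore", "CompileRenderer"], ["CompileGameplay", "CookAssets"], ["PackageClient"]]
```
</pre><p>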
For large Unreal Engine projects, it becomes a decisive architectural advantage by removing wait times, shortening feedback loops, and sustaining the development team’s momentum.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RGml2wSkslUnLovYCQHQ1g.png" /><figcaption>Unreal Horde distributes a single large job into many smaller tasks. These tasks are dynamically assigned across multiple agents to maximize efficiency and reduce build times.</figcaption></figure><h3>Managing Large Assets Efficiently (Caching and Monorepos)</h3><p>A major challenge for general-purpose CI/CD systems in game development is managing data volume. Modern Unreal Engine projects operate with multi-terabyte monorepos containing extensive binary assets, textures, and cache files. In such environments, maintaining efficient data synchronization and artifact management is essential to sustain fast build times.</p><h4>The Challenge of Redundant Processing</h4><p>Traditional CI/CD systems such as Jenkins often process the same data multiple times across different build agents. Each agent independently downloads, compiles, and processes the same files, lacking a unified caching layer. This redundancy increases network traffic, storage I/O, and compute usage (particularly problematic when handling gigabytes of cooked game assets).</p><h4>Horde’s Integrated Approach: Content Addressable Storage</h4><p>Horde addresses this limitation through its native Content Addressable Storage (CAS), also known as Horde Storage or Zen, a system designed specifically for Unreal Engine’s scale and data characteristics.</p><ul><li><strong>Centralized Artifact Repository:</strong> Each compiled code object, cooked asset, or cache file is assigned a unique cryptographic hash and stored in the central CAS repository.</li><li><strong>Elimination of Redundant Work:</strong> Once an artifact has been built by any Horde Agent, it becomes instantly available to all others. 
Subsequent builds simply fetch the cached version instead of reprocessing it.</li></ul><p>This model minimizes unnecessary network and compute activity, ensuring consistent performance even as project size grows. When deployed on AWS, CAS benefits from services like Amazon S3 for durable, cost-effective storage, providing a native caching framework that general-purpose CI tools would require significant custom engineering to approximate.</p><h4>Maintenance Overhead</h4><p>Replicating Horde’s caching efficiency in other CI/CD platforms typically requires a complex stack of additional components: network file shares (such as NFS), custom caching proxy servers, or paid third-party plugins. These integrations add latency, introduce potential points of failure, and require ongoing maintenance.</p><p>Over time, the cumulative complexity and operational cost make it difficult for a general-purpose CI tool to manage the scale and data patterns of modern Unreal Engine projects effectively.</p><h3>Native Integration for Reduced Maintenance and Greater Control</h3><p>One of Horde’s most significant advantages is the reduction in ongoing maintenance effort required to support Unreal Engine development. By aligning natively with Unreal’s ecosystem, Horde simplifies operations and allows engineering teams to focus on building features rather than maintaining infrastructure.</p><h4>The Build Graph Translation Overhead</h4><p>CI/CD platforms such as Jenkins must interpret Unreal’s Build Graph scripts, which are the XML-based recipes that define how projects are compiled, packaged, and deployed. To do so, engineers often create intermediary wrapper scripts that translate between Jenkins’ general-purpose command model and Unreal Build Tool (UBT) operations. 
This middle layer is fragile, requires frequent updates when Unreal versions change, and introduces additional maintenance overhead.</p><h4>Horde’s Native Execution Advantage</h4><p>Horde eliminates the need for this translation entirely. Developed by Epic Games, it interprets and executes Build Graph scripts directly, understanding all dependencies, relationships, and execution rules without intermediary scripting.</p><ul><li><strong>Direct Execution:</strong> Horde runs Build Graph instructions natively, ensuring full compatibility and minimal setup.</li><li><strong>Zero Maintenance Debt:</strong> Its tight integration with Unreal ensures stability across engine updates, removing the need for continuous adjustments.</li></ul><p>This native alignment shifts engineering effort away from maintaining CI/CD plumbing toward optimizing build performance and scalability. The result is a cleaner, more predictable pipeline that supports faster iteration and consistent delivery across projects.</p><h3>Summary Comparison Table</h3><p>Traditional CI platforms remain highly effective for standard software delivery pipelines. However, the computational and data intensity of Unreal Engine development benefits from a system architected around the engine itself. Horde’s native integration, distributed architecture, and caching capabilities deliver measurable gains in build performance, maintainability, and scalability, particularly when deployed on AWS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/719/1*GOYXDyJgMCsb4c7OA_wpmw.png" /><figcaption>Summary Comparison Table</figcaption></figure><h3>Conclusion</h3><p>Selecting the right CI/CD platform depends on the nature of the workload. Jenkins offers proven versatility for multi-language, multi-stack projects. 
For Unreal Engine, however, where build complexity, asset volume, and iteration speed define competitiveness, Unreal Horde provides a specialized, scalable, and maintainable foundation.</p><p>When combined with the elasticity of AWS infrastructure, Horde enables game studios to maintain continuous delivery performance at scale, supporting larger teams, faster feedback loops, and smoother release cycles for ambitious Unreal Engine titles.</p><p>To explore the financial and operational advantages of running Horde on AWS, refer to the companion article <a href="https://medium.com/trackit/modernizing-unreal-horde-ci-cd-moving-from-on-premise-infrastructure-to-aws-c7c306a09536"><strong>Modernizing Unreal Horde CI/CD: Moving from On-Premise Infrastructure to AWS</strong></a>.</p><h3>About TrackIt</h3><p>TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.</p><p>We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media &amp; Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.</p><p>Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. 
Along with our Managed Services offerings, which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.</p><hr><p><a href="https://medium.com/trackit/unreal-horde-vs-traditional-ci-cd-optimizing-for-unreal-engine-development-521a009e01fe">Unreal Horde vs. Traditional CI/CD: Optimizing for Unreal Engine Development</a> was originally published in <a href="https://medium.com/trackit">TrackIt</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Modernizing Unreal Horde CI/CD: Moving from On-Premise Infrastructure to AWS]]></title>
            <link>https://medium.com/trackit/modernizing-unreal-horde-ci-cd-moving-from-on-premise-infrastructure-to-aws-c7c306a09536?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/c7c306a09536</guid>
            <category><![CDATA[game-development]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[unreal-engine-horde]]></category>
            <category><![CDATA[unreal-engine]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Thu, 30 Oct 2025 16:14:18 GMT</pubDate>
            <atom:updated>2025-10-30T16:15:38.851Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*w6vDQ86ImdIDMKMgiSGQZw.png" /></figure><p><a href="https://dev.epicgames.com/documentation/en-us/unreal-engine/horde-in-unreal-engine">Unreal Horde</a> is a proprietary CI/CD system developed by Epic Games to meet the distinctive demands of <a href="https://www.unrealengine.com/en-US">Unreal Engine</a> development. Designed as a high-performance orchestrator, it intelligently manages the most intensive build and asset-processing workloads: compiling millions of lines of C++ code and transforming terabytes of content across distributed machines.</p><p>This orchestration drastically shortens build times, enabling faster iteration, testing, and innovation. For studios building ambitious Unreal Engine titles, Horde functions as the backbone of a modern production pipeline, turning what were once slow, fragmented processes into a unified, high-speed development engine.</p><h3>Challenges of On-Premise Horde CI/CD</h3><h4>The Demands of Unreal Engine Development</h4><p>Game development operates under extreme technical and creative pressure, where build speed directly determines how fast teams can innovate. <strong>Unreal Engine</strong> projects, in particular, push hardware to its limits: compiling millions of lines of C++ code, processing massive textures, and cooking binary assets into optimized formats. A reliable Continuous Integration and Delivery system such as Unreal Horde becomes essential in managing these heavy workloads, ensuring stable, repeatable, and efficient development cycles.</p><h4>Challenges of On-Premise Horde CI/CD</h4><p>Running Horde on-premise often becomes a hidden constraint that slows development and consumes resources inefficiently. The challenge stems from highly uneven compute demand: intensive build spikes followed by long periods of inactivity. 
To handle peak load, studios must purchase and maintain powerful servers that remain underutilized for much of the day, with idle rates often as high as 80%. This static, hardware-bound model locks significant capital into infrastructure that delivers limited day-to-day value and constrains both scalability and financial flexibility.</p><h4>Migrating Horde CI/CD to AWS</h4><p>Transitioning Horde CI/CD to AWS represents a decisive move toward efficiency and scalability. The cloud addresses the core limitation of on-premise infrastructure, idle capacity, by turning compute into an on-demand resource.</p><p>With AWS, build environments scale dynamically to match actual workload requirements. When a large build begins, the necessary compute power can be provisioned within minutes; once complete, resources are released automatically. This model eliminates the waste of idle servers, removes capacity constraints, and converts what was once static infrastructure into a flexible, continuously optimized service.</p><h3>Optimizing CI/CD Costs with Elastic Infrastructure</h3><p>Migrating Horde CI/CD to AWS transforms cost management by aligning infrastructure spend directly with usage. Compute capacity scales in real time to match workload requirements, converting what was once fixed infrastructure into a flexible, consumption-based model. The result is a system that adapts to production needs while eliminating the financial drag of idle servers.</p><h4>Elasticity for Build-Driven Workloads</h4><p>Elasticity remains the defining strength of AWS, particularly suited to the burst-heavy nature of game development. Horde consumes compute resources only when builds are active. During large-scale or overnight builds, the system automatically scales to launch hundreds of virtual Horde Agents across <a href="https://aws.amazon.com/ec2/">Amazon EC2</a>. When the build completes, the instances terminate and billing stops. 
This dynamic allocation ensures that costs reflect actual activity rather than fixed capacity planning.</p><h4>The Spot Strategy</h4><p>Cost efficiency is further enhanced through <a href="https://aws.amazon.com/ec2/spot/">Amazon EC2 Spot Instances</a>. Because Horde Agents are stateless and easily replaceable, workloads can be redistributed if an instance is interrupted, making them ideal for Spot usage. By tapping into AWS’s surplus compute capacity, up to 90% of build agents can run on Spot Instances, delivering substantial savings without compromising reliability or throughput. This approach significantly reduces the per-build cost while preserving the performance and scale expected from a production-grade CI/CD pipeline.</p><h4>Staying Current Without Reinvestment</h4><p>On-premise infrastructure inevitably ages, tying studios to outdated hardware until the next budget cycle. AWS removes this limitation by providing immediate access to the latest CPU and GPU generations as they become available. This continuous modernization keeps Horde CI/CD running on optimal infrastructure without reinvestment, maintenance overhead, or the depreciation associated with static assets.</p><h3>Accelerating Builds with On-Demand Scalability</h3><p>Beyond cost efficiency, the most tangible advantage of running Horde CI/CD on AWS lies in the speed and responsiveness of the build process. In game development, build velocity directly determines how quickly teams can test, iterate, and innovate.</p><h4>Eliminating Wait Times with Instant Compute</h4><p>The most significant obstacle to developer productivity is waiting, whether for a build to complete, a test to run, or an iteration to deploy. When Horde operates on AWS, this friction effectively disappears.</p><p>The system can mobilize hundreds of CPU cores <em>within minutes</em>, distributing compilation and cooking workloads across scalable compute resources. 
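</p><p>The Spot pattern described above can be sketched with boto3. This is a hedged illustration, not production deployment code: the AMI id, instance type, and tag values are placeholders. Stateless agents are requested as one-time Spot instances that simply terminate on interruption, and the whole fleet is released when the build completes.</p><pre>
```python
# Hedged sketch: build the boto3 run_instances parameters for a fleet
# of one-time Spot build agents (all identifiers are placeholders).

def spot_agent_request(count, ami_id, instance_type="c6i.8xlarge"):
    """Request `count` stateless build agents on Spot capacity.

    Agents hold no unique state, so terminating on interruption is safe:
    the orchestrator re-queues the interrupted tasks on another agent.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": {
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "build-agent"}],
        }],
    }

# Usage (requires AWS credentials; shown for illustration only):
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.run_instances(**spot_agent_request(100, "ami-0123456789abcdef0"))
# ... build runs, then release the capacity ...
# ec2.terminate_instances(
#     InstanceIds=[i["InstanceId"] for i in resp["Instances"]])
```
</pre><p>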
During high-activity periods, such as milestone check-ins, this capability removes bottlenecks entirely. Developers receive faster feedback, spend less time context-switching, and can validate smaller, more frequent changes without delay.</p><h4>Scaling Seamlessly with Project Growth</h4><p>As game projects evolve, the demands on build infrastructure expand: new features, additional platforms, larger teams. On-premise environments struggle to keep pace, requiring months of budgeting, procurement, and configuration to add capacity. AWS eliminates these constraints by providing near-unlimited scalability.</p><p>Build Agents can be deployed horizontally and instantly, ensuring that compute capacity grows in step with the project. Whether scaling for multiple console launches or a sudden increase in developer headcount, the pipeline remains responsive and future-ready, so that infrastructure never becomes the limiting factor in a project’s timeline.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e5Mchwh78pN32Ox0dPO2UQ.png" /><figcaption><em>Horde on AWS Architecture Diagram</em></figcaption></figure><h3>Operational Agility and Resilience</h3><p>Beyond cost optimization and speed, running Horde CI/CD on AWS transforms operational management. The platform introduces built-in resilience, automation, and observability that shift focus away from infrastructure maintenance toward continuous delivery and innovation.</p><h4>Infrastructure as Code for Repeatability and Control</h4><p>Migrating to AWS enables full Infrastructure as Code (IaC) implementation through tools such as Terraform or AWS CloudFormation, a strategic leap from traditional, manually managed setups.</p><ul><li><strong>Versioned Infrastructure:</strong> Every component of the Horde environment (servers, storage, and networking) is defined as code. 
These definitions are version-controlled, auditable, and easily replicated across environments.</li><li><strong>Rapid Disaster Recovery:</strong> In the event of a failure, the entire Horde pipeline can be redeployed in a different AWS Region or Availability Zone within minutes, ensuring continuity that physical infrastructure cannot match.</li><li><strong>Environmental Consistency:</strong> IaC guarantees that development, testing, and production environments remain identical, eliminating discrepancies and minimizing integration issues.</li></ul><h4>Built-in Resilience and Durable Storage</h4><p>Physical data centers inherently carry risks: hardware failures, network interruptions, or localized outages. AWS mitigates these risks through architecture designed for fault tolerance and data protection.</p><ul><li><strong>High Availability:</strong> AWS services operate across multiple Availability Zones (AZs), each a distinct, isolated facility. If one zone experiences disruption, Horde operations continue unaffected in another.</li><li><strong>Data Durability:</strong> Build artifacts, logs, and assets stored in Amazon S3 benefit from industry-leading durability guarantees, ensuring essential data remains secure and recoverable. This level of reliability would be costly and complex to replicate on-premise.</li></ul><h4>Refocusing Engineering Effort on Core Development</h4><p>Delegating infrastructure management to AWS allows engineering teams to focus exclusively on the areas that add creative and technical value. Responsibilities such as power management, hardware replacement, and network patching are handled by AWS, freeing teams to refine the Horde pipeline, optimize build performance, and integrate custom development tools. 
The outcome is a leaner, more efficient operation that accelerates delivery and innovation without the operational weight of maintaining physical servers.</p><h3>Conclusion</h3><p>Adopting AWS for Horde CI/CD establishes a foundation that is elastic, cost-optimized, and resilient, designed to support studios pursuing ambitious Unreal Engine projects at any scale.</p><p>The cloud model directly addresses the three key constraints that limit studio performance today:</p><ul><li><strong>Financial Rigidity:</strong> Transitioning from fixed capital investments to a flexible, usage-based model unlocks financial agility. Leveraging services such as Amazon EC2 Spot Instances delivers significant cost efficiency and virtually unlimited scalability without long-term capital lock-in.</li><li><strong>Innovation Bottlenecks:</strong> With on-demand compute capacity, build queues and developer idle time are eliminated. Teams can iterate continuously, maintaining creative momentum without being constrained by infrastructure capacity.</li><li><strong>Operational Risk:</strong> Through Infrastructure as Code and resilient services such as Amazon S3 and multi-AZ architectures, the environment achieves high availability, rapid recoverability, and long-term data durability.</li></ul><p>This transformation releases engineering teams from the burden of managing hardware, power, and cooling, allowing them to focus entirely on improving build efficiency and advancing the game itself.</p><h3>About TrackIt</h3><p><a href="https://trackit.io/">TrackIt</a> is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.</p><p>We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media &amp; Entertainment workflows in the cloud. 
These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.</p><p>Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings, which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.</p><hr><p><a href="https://medium.com/trackit/modernizing-unreal-horde-ci-cd-moving-from-on-premise-infrastructure-to-aws-c7c306a09536">Modernizing Unreal Horde CI/CD: Moving from On-Premise Infrastructure to AWS</a> was originally published in <a href="https://medium.com/trackit">TrackIt</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scaling Media Asset Management with Kubernetes: Deploying a MAM for Production on Amazon EKS]]></title>
            <link>https://medium.com/trackit/scaling-media-asset-management-with-kubernetes-deploying-a-mam-for-production-on-amazon-eks-19c0d754bf19?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/19c0d754bf19</guid>
            <category><![CDATA[cloud-services]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[media-asset-management]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Fri, 02 May 2025 15:21:42 GMT</pubDate>
            <atom:updated>2025-05-02T15:21:42.995Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*Ega0h8U1wEUp5Ic1BpoBFQ.png" /></figure><p>The adoption of cloud-native technologies has transformed the way media workflows are designed and deployed. As organizations face growing volumes of digital assets and increasingly complex distribution needs, scalable and resilient infrastructure becomes essential. Kubernetes has emerged as a popular choice for orchestrating such workloads, but managing stateful services within clusters can introduce operational challenges.</p><p>Below is a detailed overview of deploying a <strong>Media Asset Management (MAM)</strong> solution — Phraseanet — in a production environment using <a href="https://aws.amazon.com/eks/">Amazon EKS (Elastic Kubernetes Service)</a>. While the application is containerized and orchestrated with Kubernetes, critical stateful components such as the database, Redis, Elasticsearch, and RabbitMQ are offloaded to AWS managed services. This hybrid approach reduces operational complexity and enhances reliability, allowing the MAM to meet production-grade performance and scalability requirements.</p><p><em>Note: While this article focuses primarily on Phraseanet, the concepts discussed — such as auto-scaling and managed services — are broadly applicable and can be adapted for other MAM systems.</em></p><h4>Why Use Amazon Managed Services Instead of Pods?</h4><p>Leveraging AWS managed services enhances performance, reliability, and scalability while minimizing the operational burden associated with managing these components within a Kubernetes cluster. 
Services such as Amazon RDS and Amazon ElastiCache for Redis deliver optimized performance and reliability through continuous monitoring and maintenance by AWS, offering features like high availability and automatic failover.</p><p>Built-in scalability allows seamless capacity adjustments based on demand, a critical feature for managing unpredictable traffic patterns in media asset workflows. These services also integrate efficiently with the broader AWS ecosystem, contributing to a cohesive and secure cloud infrastructure. Compliance standards and robust security features are supported by default.</p><h4>Phraseanet’s Infrastructure Recap</h4><p>To determine which services are suitable for replacement with AWS managed solutions, below is a summary of the components used in the current Phraseanet infrastructure:</p><ul><li><strong>Phraseanet Gateway</strong>: Serves as the main access point, routing traffic and handling frontend requests.</li><li><strong>Database (MySQL)</strong>: Stores metadata and associated information for media assets, supporting search, retrieval, and scalability.</li><li><strong>Worker Service</strong>: Manages background tasks such as media processing, transcoding, and workflow automation.</li><li><strong>Elasticsearch</strong>: Indexes and searches large media libraries, improving metadata retrieval and content discovery.</li><li><strong>FPM (FastCGI Process Manager)</strong>: Handles PHP processes for frontend and backend interfaces, ensuring responsive user interactions.</li><li><strong>RabbitMQ</strong>: Facilitates messaging between services, enabling reliable communication for components like the worker service.</li><li><strong>Redis</strong>: Provides caching and session management to reduce database load and improve performance.</li><li><strong>Phraseanet Setup</strong>: Initializes the environment by configuring databases and essential system settings.</li></ul><p>These services are currently deployed as Kubernetes pods. 
The objective is to migrate selected components to AWS managed services.</p><h4>Phraseanet Services Transitioning to AWS Managed Solutions</h4><p>The key components suitable for AWS-managed replacements include Redis, RabbitMQ, Elasticsearch, and the MySQL database. These are mapped to the following AWS services:</p><ul><li><strong>Amazon RDS:</strong> A managed database offering high availability, automated backups, and seamless scaling.</li><li><strong>Amazon ElastiCache:</strong> A managed Redis or Valkey service optimized for caching and real-time use cases, with built-in security and failover.</li><li><strong>Amazon OpenSearch:</strong> A managed Elasticsearch service providing scalable search and analytics capabilities.</li><li><strong>Amazon MQ:</strong> A managed RabbitMQ service that simplifies messaging and integrates well with other AWS services.</li></ul><p>Valkey was selected over Redis for ElastiCache. As an open-source fork of Redis, Valkey offers enhanced multi-threading performance and improved memory efficiency, while maintaining full compatibility with Redis APIs. It is also supported by AWS with pricing up to 33% lower, making it a more cost-effective option.</p><h3>Monitoring &amp; Alerts Management</h3><p>Amazon CloudWatch is used for monitoring and alert management. As a native AWS service, CloudWatch provides seamless integration, real-time monitoring, and automated alerts. It supports performance tracking, anomaly detection, and rapid incident response to ensure system reliability and operational efficiency.</p><h4><strong>Monitoring Managed Services</strong></h4><p>For Amazon RDS, an email alert notifies administrators when CPU usage exceeds 80% over a specified time period. This proactive measure helps maintain performance and prevent potential disruptions. 
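</p><p>Such an alert takes only a few lines with boto3. The sketch below is illustrative rather than the actual deployment configuration (the database identifier and SNS topic ARN are placeholders): it builds the parameters for a CloudWatch alarm that fires when average RDS CPU utilization stays above 80% for two consecutive five-minute periods and notifies an SNS topic with an email subscription.</p><pre>
```python
# Hedged sketch of the RDS CPU alarm described above, using boto3 and an
# SNS topic for the email notification (names and ARNs are placeholders).

def rds_cpu_alarm(db_identifier, sns_topic_arn, threshold=80.0):
    """Build put_metric_alarm kwargs: alert when average CPU stays above
    the threshold for two consecutive 5-minute periods."""
    return {
        "AlarmName": f"{db_identifier}-cpu-high",
        "Namespace": "AWS/RDS",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_identifier}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS topic with an email subscription
    }

# Usage (requires AWS credentials; shown for illustration only):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(
#     **rds_cpu_alarm("phraseanet-db",
#                     "arn:aws:sns:us-west-2:123456789012:ops-alerts"))
```
</pre><p>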
Similar alerts can be configured for other managed services, monitoring critical metrics such as CPU utilization, memory usage, and disk space.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HT0FTSfKbwX2EWABSkGnMg.png" /><figcaption>Amazon RDS Metrics</figcaption></figure><h4>Monitoring Pods and the EKS Cluster</h4><p>Amazon <strong>CloudWatch Container Insights</strong> is used to monitor the EKS cluster and associated pods. This service provides visibility into the performance and health of the Kubernetes environment by collecting and displaying critical metrics. It enables tracking of resource utilization, including CPU and memory usage, across nodes and individual pods.</p><p>Alarms can be configured for a wide range of components — nodes, namespaces, pods, and more — triggering notifications in the event of anomalies or threshold breaches.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3s4_hVU7ymGSP_m7Yj9ZgA.png" /><figcaption>CloudWatch Container Insights for EKS</figcaption></figure><h4>Automating the Infrastructure with Auto Mode</h4><p><strong>EKS Auto Mode</strong> is used to deploy the MAM system due to its numerous benefits over the traditional EKS setup. Auto Mode simplifies cluster management by automating provisioning, scaling, and maintenance tasks for compute, storage, and networking infrastructure. This reduces the need for manual configuration and allows teams to concentrate on deploying and managing applications.</p><h4>How Does EKS Auto Mode Scale Automatically?</h4><p>EKS Auto Mode dynamically adjusts the number of EC2 instances based on workload demands. 
It integrates with Karpenter to provision and scale nodes efficiently, ensuring that clusters are resourced appropriately, reducing idle capacity, and improving responsiveness under load.</p><h4>Difference from Managed Node Groups</h4><p>Managed Node Groups in EKS offer automated management and scaling for EC2 worker nodes, but still require user-defined instance types, scaling policies, and update configurations. While this allows for greater customization, it also increases operational complexity compared to Auto Mode, which is fully automated.</p><h4>Configuring Auto Mode</h4><p>The use of Auto Mode is specified in the Terraform configuration, along with a general-purpose node pool. When deploying YAML files via Helm, EKS automatically selects the nodes best suited to the workload. This setup, managed through Terraform, offers a straightforward deployment experience. Additional details on EKS Auto Mode can be found in the full article available <a href="https://medium.com/trackit/simplifying-kubernetes-management-with-eks-auto-mode-997e47d46e37">here</a>.</p><h4>Conclusion</h4><p>Deploying a Media Asset Management system in a production environment becomes significantly more scalable and efficient with EKS Auto Mode and AWS managed services. Auto Mode reduces infrastructure management complexity, while managed services enhance performance, security, and reliability.</p><p>Automated scaling, real-time monitoring, and proactive alert systems contribute to a robust and responsive architecture, well-suited to dynamic production environments and the evolving demands of media asset workflows.</p><h4>About TrackIt</h4><p>TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.</p><p>We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media &amp; Entertainment workflows in the cloud. 
These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.</p><p>Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=19c0d754bf19" width="1" height="1" alt=""><hr><p><a href="https://medium.com/trackit/scaling-media-asset-management-with-kubernetes-deploying-a-mam-for-production-on-amazon-eks-19c0d754bf19">Scaling Media Asset Management with Kubernetes: Deploying a MAM for Production on Amazon EKS</a> was originally published in <a href="https://medium.com/trackit">TrackIt</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simplifying Kubernetes Management with EKS Auto Mode]]></title>
            <link>https://medium.com/trackit/simplifying-kubernetes-management-with-eks-auto-mode-997e47d46e37?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/997e47d46e37</guid>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[terraform]]></category>
            <category><![CDATA[cloud]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Thu, 13 Mar 2025 14:59:25 GMT</pubDate>
            <atom:updated>2025-03-13T14:59:25.770Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*6AE0A6q6conTYvrX" /></figure><p>At AWS re:Invent 2024, Amazon introduced Auto Mode for Elastic Kubernetes Service (EKS), a new feature that simplifies Kubernetes cluster management. This deep dive into EKS Auto Mode explores its capabilities and benefits through a hands-on demonstration. Using Terraform, the guide below walks through each step of deploying an EKS Auto Mode cluster, showcasing how this new feature streamlines operations and enhances the cloud-native experience.</p><h3>Understanding EKS Auto Mode: Features and Benefits</h3><p>EKS Auto Mode simplifies running an EKS cluster by handling complex tasks like managing the Kubernetes control plane, maintaining controllers for load balancing and auto-scaling, and configuring IAM roles and policies. It also streamlines IAM setup for CSI (Container Storage Interface) drivers, enabling persistent storage while reducing operational overhead.</p><p><strong>Key Features:</strong></p><ul><li><strong>Streamlined Management</strong>: Provides production-ready clusters with minimal operational overhead.</li><li><strong>Application Availability</strong>: Dynamically scales nodes based on application demands, reducing manual capacity planning (powered by Karpenter for autoscaling).</li><li><strong>Efficiency</strong>: Optimizes compute costs by terminating unused instances and consolidating workloads.</li><li><strong>Security</strong>: Uses immutable AMIs with locked-down software and regular node cycling for enhanced security.</li><li><strong>Automated Upgrades</strong>: Keeps clusters and components up-to-date with the latest patches.</li><li><strong>Managed Components</strong>: Includes built-in support for essential Kubernetes and AWS features.</li><li><strong>Customizable NodePools and NodeClasses</strong>: Allows for tailored configurations to meet specific workload requirements.</li></ul><h3>How EKS Auto Mode 
Updates the Cluster Automatically</h3><p>EKS Auto Mode simplifies Kubernetes version upgrades by handling control plane updates and node replacements while maintaining workload availability through pod disruption budgets. Components such as the Amazon EBS CSI driver are managed as integrated services, eliminating the need for manual installation or updates.</p><p>This approach differs from standard EKS clusters, where components like the EBS CSI driver are typically installed and managed as add-ons. In EKS Auto Mode, AWS oversees the lifecycle of these components, ensuring they remain up to date and properly configured.</p><p>For example, when deploying an application with Auto Mode, the StorageClass references the provisioner ebs.csi.eks.amazonaws.com, which AWS manages as part of the service. In a standard EKS cluster, the provisioner ebs.csi.aws.com is used instead, requiring manual installation and management of the EBS CSI driver.</p><p><strong>Automated Updates:</strong></p><ul><li>Nodes are replaced with the new Kubernetes version.</li><li>Components like CoreDNS, KubeProxy, AWS Load Balancer Controller, Karpenter, and AWS EBS CSI Driver are automatically updated.</li></ul><p><strong>User Responsibilities:</strong></p><ul><li>Updating apps and workloads.</li><li>Managing self-deployed add-ons and controllers.</li><li>Updating Amazon EKS Add-ons.</li></ul><h3>Tutorial: EKS Auto Mode</h3><h4>Create the cluster</h4><p>A closer look at cluster creation provides a better understanding of the EKS Auto Mode concept. 
This walkthrough covers deployment using Terraform.</p><p>Note: The provided configuration is simplified for demonstration purposes and is not intended for production use.</p><h4>Configuration</h4><p>The Terraform code used in this tutorial is available in this repository:<a href="https://github.com/MaximeMRF/eks-auto-mode-tutorial"> https://github.com/MaximeMRF/eks-auto-mode-tutorial</a>.</p><p>Before getting started, make sure the AWS CLI is properly configured and that both kubectl and Terraform are installed.</p><p>Begin by reviewing the terraform.tfvars file to ensure the variables align with project requirements. For instance, availability zones (AZs) may need to be adjusted from Europe to the US.</p><p>Next, open the eks.tf file. In the cluster_compute_config object, the enabled property is set to true, indicating that Auto Mode is activated for compute, network, and storage.</p><p>This configuration also creates a node pool named <strong>“general-purpose”</strong>. EKS automatically selects the nodes and instance sizes while scaling them using Karpenter, removing the need for manual setup.</p><h4>Deployment of the cluster</h4><p>To set up the project and install dependencies, run the following command:</p><pre>terraform init</pre><p>Next, generate the execution plan:</p><pre>terraform plan</pre><p>Finally, deploy the configuration:</p><pre>terraform apply -auto-approve</pre><p>Once the cluster is ready, update the kubectl configuration to access it. Adjust the cluster name and region based on the defined variables:</p><pre>aws eks update-kubeconfig --name eks-auto-mode-cluster --region eu-north-1</pre><h4>Understanding the Auto-scaling</h4><p>The cluster is now running. Listing the nodes with kubectl will show no active nodes yet, as no pods have been deployed.</p><p>To see how nodes are created when pods are scheduled, apply the deploy.yml file from the repository. 
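</p><p>As a hedged sketch of what such a manifest might contain (the deploy.yml in the repository is authoritative; the names, image tag, and resource requests here are assumptions):</p>

```yaml
# Hedged sketch of a deploy.yml: a single BusyBox pod that sleeps forever,
# just enough to make EKS Auto Mode provision a node for it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-demo # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-demo
  template:
    metadata:
      labels:
        app: busybox-demo
    spec:
      containers:
        - name: busybox
          image: busybox:1.36
          command: ["sleep", "infinity"]
          resources:
            requests: # requests help the scheduler size a node
              cpu: 100m
              memory: 64Mi
```

<p>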
This will launch a BusyBox container that runs indefinitely.</p><pre>kubectl apply -f deploy.yml</pre><p>Listing the nodes again will now show that EKS has created a node to run the container.</p><pre>kubectl get nodes</pre><p>As more pods are added, EKS will either launch new nodes or assign them to existing ones with available resources, all without the need for manual scaling configuration.</p><h4>How EKS Auto Mode Manages Storage</h4><p>Without Auto Mode, IAM permissions must be configured manually by retrieving and attaching the AmazonEBSCSIDriverPolicy to the cluster’s node role, allowing the EBS CSI driver to manage volumes. Auto Mode includes a built-in CSI driver with preconfigured permissions.</p><p>To test volume management with Auto Mode, use the preconfigured YAML file available in the kubernetes-objects folder of the repository.</p><h4>How EKS Auto Mode Handles Load Balancing</h4><p>EKS Auto Mode simplifies load balancing by removing the need for manual IAM policy creation and attachment, as these are preconfigured. It includes a built-in controller for provisioning load balancers with the necessary permissions.</p><p>Without Auto Mode, IAM policies must be manually created and attached to enable the AWS Load Balancer Controller (LBC). Auto Mode also requires defining an IngressClass and IngressClassParams, which are optional when using LBC. The IngressClass specifies the controller, with Auto Mode using eks.amazonaws.com/alb. This setup streamlines ALB management and reduces manual configuration.</p><h3>Conclusion</h3><p>EKS Auto Mode simplifies Kubernetes cluster management by automating tasks such as resource scaling, system patching, and security enforcement. This allows teams to focus on application development rather than infrastructure maintenance. 
It provides a production-ready environment that is efficient, secure, and continuously updated without the operational complexity.</p><h4>About TrackIt</h4><p><a href="https://trackit.io/">TrackIt</a> is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.</p><p>We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media &amp; Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.</p><p>Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=997e47d46e37" width="1" height="1" alt=""><hr><p><a href="https://medium.com/trackit/simplifying-kubernetes-management-with-eks-auto-mode-997e47d46e37">Simplifying Kubernetes Management with EKS Auto Mode</a> was originally published in <a href="https://medium.com/trackit">TrackIt</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Access financial data in real time with the Pyth Network]]></title>
            <link>https://medium.com/@maximemrf/access-financial-datas-in-real-time-with-the-pyth-network-1f614bf9ea45?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/1f614bf9ea45</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[pythnet]]></category>
            <category><![CDATA[pyth]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Tue, 14 Jan 2025 20:38:11 GMT</pubDate>
            <atom:updated>2025-04-09T14:32:59.854Z</atom:updated>
            <content:encoded><![CDATA[<h3>Access financial data in real time with the Pyth Network and Node.js</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*njj_L5b6uK4fe5_vCWdA5w.png" /></figure><p>In this article, I will show you how to access financial data in real time from your JavaScript (Node.js) application using the Pyth Network, a blockchain oracle.</p><h3>The Pyth Network</h3><p>Pyth Network is a decentralized oracle solution designed to deliver high-fidelity, real-time financial market data directly to blockchain applications. By bridging the gap between traditional finance and decentralized ecosystems, Pyth enables smart contracts to access accurate and timely information essential for various decentralized finance (DeFi) applications.</p><p>Unlike traditional oracles that rely on aggregated third-party data, Pyth sources its information directly from first-party providers (<a href="https://www.pyth.network/blog/pyth-network-and-revolut-supercharging-the-future-of-mainstream-finance">like Revolut</a>), including leading exchanges, trading firms, and market makers. 
This direct sourcing ensures the data’s authenticity, low latency, and high precision, making it particularly valuable for applications requiring real-time market insights.</p><h3>The Hermes server</h3><p><strong>Hermes</strong> is an open-source service that continuously monitors Pythnet and the Wormhole Network for Pyth price updates, making them accessible through a user-friendly web API.</p><p>Hermes allows users to easily retrieve the latest price updates via a REST API or subscribe to SSE <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events">(Server-Sent Events)</a> for real-time streaming data.</p><p>This infrastructure simplifies the integration of Pyth’s financial data into various on-chain and off-chain applications, providing developers with efficient access to real-time market information.</p><p>In our case we will not call the web API directly: Pyth provides a TypeScript client package for interacting with the Hermes server.</p><h3>Installing the Hermes client</h3><p>Before installing the client, make sure you have Node.js and npm (or another package manager) installed on your computer by running this command:</p><pre>node -v &amp;&amp; npm -v</pre><p>It should output the versions of Node.js and your package manager.</p><p>Create a new folder, then run npm init -y to initialize a new Node.js project with the default parameters.</p><p>Then install the Hermes client with npm i @pythnetwork/hermes-client and create a new JavaScript file, for example index.js .</p><p>Also, add &quot;type&quot;: &quot;module&quot; to your package.json, because we will use import instead of require.</p><p>First, I will show you how to get real-time data from Hermes; then we will set up a simple alert program that fires when a price goes up or down.</p><blockquote>You can find all the code of this tutorial <a 
href="https://github.com/MaximeMRF/pyth-network-offchain-example">here</a>.</blockquote><h3>Retrieve data from Hermes</h3><p>The Pyth network exposes more than 600 price feeds, and this number keeps growing; <a href="https://www.pyth.network/price-feeds">here is the list of currently supported prices</a>.</p><p>For this example we will retrieve only 4 different prices (BTC/USD, ETH/USD, AVAX/USD, SOL/USD), but feel free to adapt this program to fetch more.</p><p>So now, let’s code! Open index.js and copy the following program:</p><pre>import { HermesClient } from &#39;@pythnetwork/hermes-client&#39;<br><br>// Initiate a connection with the Hermes server hosted by the Pyth Foundation<br>const connection = new HermesClient(&#39;https://hermes.pyth.network&#39;)<br><br>// here is the list of prices: https://www.pyth.network/price-feeds<br>// This object maps human-readable names to Pyth price feed ids<br>const PriceIds = {<br>  BTC_USD: &#39;e62df6c8b4a85fe1a67db44dc12de5db330f7ac66b72dc658afedf0f4a415b43&#39;,<br>  ETH_USD: &#39;ff61491a931112ddf1bd8147cd1b641375f79f5825126d665480874634fd0ace&#39;,<br>  AVAX_USD: &#39;93da3352f9f1d105fdfe4971cfa80e9dd777bfc5d0f683ebb6e1294b92137bb7&#39;,<br>  SOL_USD: &#39;ef0d8b6fda2ceba41da15d4095d1da392a0d2f8ed0c6c7bc0f4cfac8c280b56d&#39;,<br>}<br><br>// The PriceNames object turns a price feed id back into a human-readable name<br>const PriceNames = {<br>  [PriceIds.BTC_USD]: &#39;BTC/USD&#39;,<br>  [PriceIds.ETH_USD]: &#39;ETH/USD&#39;,<br>  [PriceIds.AVAX_USD]: &#39;AVAX/USD&#39;,<br>  [PriceIds.SOL_USD]: &#39;SOL/USD&#39;,<br>}<br><br>// Transform the PriceIds object into an array of ids<br>const priceIds = Object.values(PriceIds)<br><br>// Create the SSE connection with the server<br>// thanks to EventSource: https://developer.mozilla.org/en-US/docs/Web/API/EventSource<br>const eventSource = await connection.getPriceUpdatesStream(priceIds, { parsed: true })<br><br>// Updates arrive in JSON format<br>eventSource.onmessage = async (event) =&gt; {<br>  const data = JSON.parse(event.data)<br>  for (const item of data.parsed) {<br>    const priceName = PriceNames[item.id]<br>    const rawPrice = Number(item.price.price)<br>    const exponent = item.price.expo<br>    const priceParsed = rawPrice * Math.pow(10, exponent)<br>    console.log(`Price update for ${priceName}: ${priceParsed}`)<br>  }<br>}<br><br>eventSource.onerror = (error) =&gt; {<br>  console.error(&#39;Error receiving updates:&#39;, error)<br>  eventSource.close()<br>}</pre><p>Then run the code with node index.js ; it will keep printing output like this indefinitely:</p><pre>Price update for BTC/USD: 96822.408314<br>Price update for ETH/USD: 3226.14867123<br>Price update for AVAX/USD: 36.50984607<br>Price update for SOL/USD: 187.21274583000002<br>Price update for BTC/USD: 96820.75503003<br>Price update for ETH/USD: 3226.10911453<br>Price update for AVAX/USD: 36.50984607<br>Price update for SOL/USD: 187.20670585<br>Price update for BTC/USD: 96822.45579042<br>Price update for ETH/USD: 3226.22825627<br>Price update for AVAX/USD: 36.51020737</pre><p>Nice! 
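</p><p>One detail worth pausing on is the exponent handling: Hermes returns each price as an integer plus a power-of-ten exponent, and the conversion used in the code can be isolated into a tiny helper (a sketch; the function name is mine):</p>

```javascript
// Pyth/Hermes prices are fixed point: an integer `price` and an `expo`
// such that the real value is price * 10^expo.
function parsePythPrice(price, expo) {
  return Number(price) * Math.pow(10, expo)
}

// Example with a BTC/USD-style update: integer 9682240831400 and expo -8
console.log(parsePythPrice('9682240831400', -8)) // ≈ 96822.408314
```

<p>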
We now have a program that retrieves price data in real time; in the second part we will set up a simple alerting system.</p><h3>Set up an alerting system</h3><p>When a price goes up or down past a threshold, we want to receive a notification so we can re-invest or sell our position.</p><p>Create an array of alert objects in your code and add the loop below:</p><pre>import { HermesClient } from &quot;@pythnetwork/hermes-client&quot;<br><br>const connection = new HermesClient(&quot;https://hermes.pyth.network&quot;)<br><br>const PriceIds = {<br>  BTC_USD: &quot;e62df6c8b4a85fe1a67db44dc12de5db330f7ac66b72dc658afedf0f4a415b43&quot;,<br>  ETH_USD: &quot;ff61491a931112ddf1bd8147cd1b641375f79f5825126d665480874634fd0ace&quot;,<br>  AVAX_USD: &quot;93da3352f9f1d105fdfe4971cfa80e9dd777bfc5d0f683ebb6e1294b92137bb7&quot;,<br>  SOL_USD: &quot;ef0d8b6fda2ceba41da15d4095d1da392a0d2f8ed0c6c7bc0f4cfac8c280b56d&quot;,<br>}<br><br>const PriceNames = {<br>  [PriceIds.BTC_USD]: &quot;BTC/USD&quot;,<br>  [PriceIds.ETH_USD]: &quot;ETH/USD&quot;,<br>  [PriceIds.AVAX_USD]: &quot;AVAX/USD&quot;,<br>  [PriceIds.SOL_USD]: &quot;SOL/USD&quot;,<br>}<br><br>// Add an object for each alert you want to set<br>// you can adapt this to load from a configuration file or a database<br>const alerts = [<br>  { id: PriceIds.BTC_USD, direction: &quot;up&quot;, targetPrice: 96000 },<br>  { id: PriceIds.ETH_USD, direction: &quot;below&quot;, targetPrice: 3100 },<br>  { id: PriceIds.AVAX_USD, direction: &quot;up&quot;, targetPrice: 35 },<br>  { id: PriceIds.SOL_USD, direction: &quot;below&quot;, targetPrice: 200 },<br>]<br><br>const priceIds = Object.values(PriceIds)<br><br>const eventSource = await connection.getPriceUpdatesStream(priceIds, {<br>  parsed: true,<br>})<br><br>eventSource.onmessage = async (event) =&gt; {<br>  const data = JSON.parse(event.data)<br>  for (const item of data.parsed) {<br>    const priceName = PriceNames[item.id]<br>    const rawPrice = Number(item.price.price)<br>    const exponent = 
item.price.expo<br>    const priceParsed = rawPrice * Math.pow(10, exponent)<br>    console.log(`Price update for ${priceName}: ${priceParsed}`)<br>    // filter alerts for the current price feed<br>    const matchingAlerts = alerts.filter((alert) =&gt; alert.id === item.id)<br>    // check if the price triggers any alert<br>    for (const alert of matchingAlerts) {<br>      if (alert.direction === &quot;up&quot; &amp;&amp; priceParsed &gt;= alert.targetPrice) {<br>        console.log(`Alert for ${priceName} triggered: ${priceParsed} &gt;= ${alert.targetPrice}`)<br>      }<br>      if (alert.direction === &quot;below&quot; &amp;&amp; priceParsed &lt;= alert.targetPrice) {<br>        console.log(`Alert for ${priceName} triggered: ${priceParsed} &lt;= ${alert.targetPrice}`)<br>      }<br>    }<br>  }<br>}<br><br>eventSource.onerror = (error) =&gt; {<br>  console.error(&quot;Error receiving updates:&quot;, error)<br>  eventSource.close()<br>}</pre><p>Now re-run the script; depending on the current market prices, you may see alerts appear like these:</p><pre>Alert for AVAX/USD triggered: 36.41689888 &gt;= 35<br>Price update for SOL/USD: 186.55917611<br>Alert for SOL/USD triggered: 186.55917611 &lt;= 200<br>Price update for BTC/USD: 96651<br>Alert for BTC/USD triggered: 96651 &gt;= 96000<br>Price update for ETH/USD: 3215.6112736<br>Price update for AVAX/USD: 36.417797220000004<br>Alert for AVAX/USD triggered: 36.417797220000004 &gt;= 35<br>Price update for SOL/USD: 186.56284226<br>Alert for SOL/USD triggered: 186.56284226 &lt;= 200</pre><p>Our alerts are simple console.log calls for now, but you could easily deliver them by SMS, webhook, Discord, or email.</p><h3>Conclusion</h3><p>You can now retrieve real-time prices from the Pyth network and set up alerts. This tutorial only scratches the surface, so don’t hesitate to learn more about the Pyth Network and extend the code for more serious use cases. 
Maybe the next step is to build a web app that lets users set up their own alerts?</p><p><a href="https://github.com/MaximeMRF/pyth-network-offchain-example">You can find all the code used in this tutorial here</a>; don’t hesitate to give it a star on GitHub!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1f614bf9ea45" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Authentication with AdonisJS v6 and JWT tokens]]></title>
            <link>https://medium.com/@maximemrf/authentication-with-adonisjs-v6-and-jwt-token-b5bb1ee8d65d?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/b5bb1ee8d65d</guid>
            <category><![CDATA[jwt-token]]></category>
            <category><![CDATA[backend-development]]></category>
            <category><![CDATA[adonisjs]]></category>
            <category><![CDATA[cookies]]></category>
            <category><![CDATA[authentication]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Sun, 01 Sep 2024 19:26:03 GMT</pubDate>
            <atom:updated>2024-09-01T19:28:41.687Z</atom:updated>
            <content:encoded><![CDATA[<h4>Introduction</h4><p>In this article I will show you how to implement an authentication system based on JWT tokens in an AdonisJS v6 application. But before we get started, I would like you to ask yourself whether JWT tokens are really the best way to authenticate the users of your application.</p><h4>Different kinds of authentication</h4><p>Before beginning, I invite you to read <a href="https://docs.adonisjs.com/guides/authentication/introduction#choosing-an-auth-guard">this part of the documentation</a> to determine whether this is really the authentication type you want for your application.</p><blockquote><em>If you prefer authentication with OAT access tokens, you can read </em><a href="https://medium.com/@maximemrf/authentication-with-adonisjs-v6-and-access-token-oat-6c8029827562"><em>this article.</em></a></blockquote><p>🇫🇷 <em>Do you speak French? </em><a href="https://medium.com/@maximemrf/authentification-avec-adonisjs-v6-et-jwt-token-8a3dbb1fb51d"><em>Here is the French version of this article.</em></a></p><h4>Installation</h4><p>Before beginning, make sure you are using Node.js version 22, or at least version 20.6.</p><pre>node -v<br># v22.x.x</pre><p>There are <a href="https://docs.adonisjs.com/guides/getting-started/installation#starter-kits">3 different official starter kits</a> provided by the AdonisJS team so you don’t have to start from scratch, which saves some time. 
For this application we will use the api starter kit with the session <a href="https://docs.adonisjs.com/guides/authentication/introduction#choosing-an-auth-guard">authentication guard</a>, because the jwt package we will use is built on this guard.</p><pre>npm init adonisjs@latest -- -K=api --auth-guard=session</pre><p>Choose SQLite as the database for this tutorial; it will really make the task easier.</p><p>After choosing the different options asked by the CLI and moving into the newly created folder, we can go to the next step.</p><p>Then we can install the <a href="https://github.com/MaximeMRF/adonisjs-jwt">jwt package</a>, which gives us a JWT guard for our application.</p><pre>npm i @maximemrf/adonisjs-jwt</pre><blockquote>Don’t hesitate to give a ⭐️ to my <a href="https://github.com/MaximeMRF/adonisjs-jwt">jwt package</a>, it really motivates me to contribute more to open source and write other articles ❤️</blockquote><h4>Configuration</h4><p>To begin, we will configure the guard located at /config/auth.ts . Replace the configuration with this one:</p><pre>import { defineConfig } from &#39;@adonisjs/auth&#39;<br>import { sessionUserProvider } from &#39;@adonisjs/auth/session&#39;<br>import { jwtGuard } from &#39;@maximemrf/adonisjs-jwt/jwt_config&#39;<br>import type { InferAuthEvents, Authenticators } from &#39;@adonisjs/auth/types&#39;<br><br>const authConfig = defineConfig({<br>  default: &#39;jwt&#39;,<br>  guards: {<br>    jwt: jwtGuard({<br>      // tokenExpiresIn is how long the token stays valid; it&#39;s an optional value<br>      tokenExpiresIn: &#39;1h&#39;,<br>      // if you want to use cookies for authentication instead of the bearer token (optional)<br>      useCookies: true,<br>      provider: sessionUserProvider({<br>        model: () =&gt; import(&#39;#models/user&#39;),<br>      }),<br>    }),<br>  },<br>})<br><br>export default authConfig</pre><p>If you would like to store and send the JWT via cookies, leave useCookies set to true. 
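</p><p>Whichever transport you choose, what the guard issues is a standard JWT, so you can sanity-check settings such as tokenExpiresIn by decoding (not verifying!) the payload. A minimal Node sketch with a hand-built token (the claims are hypothetical, not what the guard actually emits):</p>

```javascript
// Decode (NOT verify) a JWT payload to inspect claims such as `exp`.
// Real verification must stay with the guard / a JWT library.
function decodeJwtPayload(token) {
  const payload = token.split('.')[1]
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'))
}

// Hand-built token with hypothetical claims (header and signature are fake)
const body = Buffer.from(JSON.stringify({ uid: 1, exp: 1735689600 })).toString('base64url')
const token = `fake-header.${body}.fake-signature`

console.log(decodeJwtPayload(token)) // { uid: 1, exp: 1735689600 }
```

<p>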
Otherwise, remove the option entirely and the token will be passed in the Authorization header (bearer). The token expiration is likewise an optional property.</p><h4>Controller</h4><p>Here are the login and register methods. The two login variants below are alternatives; keep only the one matching your configuration:</p><pre>import User from &#39;#models/user&#39;<br>import type { HttpContext } from &#39;@adonisjs/core/http&#39;<br>// adjust the validator import path to match your project<br>import { loginValidator, registerValidator } from &#39;#validators/auth&#39;<br><br>export default class AuthController {<br>  // login if we use authorization bearer<br>  async login({ request, response, auth }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br><br>    const user = await User.verifyCredentials(email, password)<br>    const token = await auth.use(&#39;jwt&#39;).generate(user)<br><br>    return response.ok({<br>      token: token,<br>      ...user.serialize(),<br>    })<br>  }<br>  // login if we use the cookies (useCookies: true)<br>  async login({ request, response, auth }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br><br>    const user = await User.verifyCredentials(email, password)<br>    await auth.use(&#39;jwt&#39;).generate(user)<br><br>    return response.ok({<br>      ...user.serialize(),<br>    })<br>  }<br>  async register({ request, response }: HttpContext) {<br>    const payload = await request.validateUsing(registerValidator)<br><br>    const user = await User.create(payload)<br><br>    return response.created(user)<br>  }<br>}</pre><p>For more detail on the validators and controllers used in this code, I invite you to read <a href="https://medium.com/@maximemrf/authentication-with-adonisjs-v6-and-access-token-oat-6c8029827562">this article</a>.</p><h4>Conclusion</h4><p>Now you know how to create a JWT authentication system with AdonisJS. Feel free to read the official documentation to learn more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b5bb1ee8d65d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Authentication with AdonisJS v6 and JWT tokens]]></title>
            <link>https://medium.com/@maximemrf/authentification-avec-adonisjs-v6-et-jwt-token-8a3dbb1fb51d?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/8a3dbb1fb51d</guid>
            <category><![CDATA[jwt-token]]></category>
            <category><![CDATA[jwt]]></category>
            <category><![CDATA[adonisjs]]></category>
            <category><![CDATA[authentication]]></category>
            <category><![CDATA[typescript]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Mon, 10 Jun 2024 14:33:41 GMT</pubDate>
            <atom:updated>2024-06-10T14:33:41.849Z</atom:updated>
            <content:encoded><![CDATA[<h3>Introduction</h3><p>In this article I will show you how to implement an authentication system based on JWT tokens in an AdonisJS v6 application. But before diving head-first into the tutorial, I suggest you ask yourself whether JWT tokens are really the best way to authenticate the users of your application.</p><h3>Different ways to authenticate</h3><p>Before starting the tutorial, I invite you to read <a href="https://docs.adonisjs.com/guides/auth#choosing-an-auth-guard">this part of the documentation</a> to determine whether this authentication type really is the best fit for your application.</p><p>To understand the different ways of authenticating a user, I invite you to watch the replay of this <a href="https://youtu.be/7ALYvSN8XZQ?si=RzRWrWN3mAj1kLWG">Twitch live stream</a> where Romain Lanz, an AdonisJS core contributor, explains the most common authentication methods along with their advantages and drawbacks.</p><blockquote>If you would rather use authentication with OAT access tokens, I invite you to read <a href="https://medium.com/@maximemrf/authentification-avec-adonisjs-v6-et-access-token-oat-83c97387a39b">this article</a>.</blockquote><h3>Installation</h3><p>First of all, check that you are using Node.js v22, or at least v20.6.</p><pre>node -v<br># v22.x.x</pre><p>There are <a href="https://docs.adonisjs.com/guides/getting-started/installation#starter-kits">3 different official starter kits</a> so you don’t have to build your application from scratch. 
For this application we will use the api starter kit with the session <a href="https://docs.adonisjs.com/guides/authentication/introduction#choosing-an-auth-guard">authentication guard</a>, because the jwt package we are going to use is built on this guard.</p><pre>npm init adonisjs@latest -- -K=api --auth-guard=session</pre><p>Choose SQLite as the database; it will make things much easier for this tutorial.</p><p>After entering the information requested by the CLI and moving into the newly created folder, we can continue.</p><p>Next we will install <a href="https://github.com/MaximeMRF/adonisjs-jwt">the package</a> that gives us a JWT guard for our API and lets us use JWT tokens.</p><pre>npm i @maximemrf/adonisjs-jwt</pre><blockquote>Don’t hesitate to give a ⭐️ to my <a href="https://github.com/MaximeMRF/adonisjs-jwt">jwt package</a>; it really motivates me to contribute to open source and to write more articles ❤️</blockquote><h3>Configuration</h3><p>First we will configure the guard located at /config/auth.ts . Replace the configuration with this one:</p><pre>import { defineConfig } from &#39;@adonisjs/auth&#39;<br>import { sessionUserProvider } from &#39;@adonisjs/auth/session&#39;<br>import { jwtGuard } from &#39;@maximemrf/adonisjs-jwt/jwt_config&#39;<br>import type { InferAuthEvents, Authenticators } from &#39;@adonisjs/auth/types&#39;<br><br>const authConfig = defineConfig({<br>  default: &#39;jwt&#39;,<br>  guards: {<br>    jwt: jwtGuard({<br>      // tokenExpiresIn is how long the token stays valid; it&#39;s an optional value<br>      tokenExpiresIn: &#39;1h&#39;,<br>      // to send the token in a cookie instead of the authorization header, set useCookies to true<br>      useCookies: true,<br>      provider: sessionUserProvider({<br>        model: () =&gt; import(&#39;#models/user&#39;),<br>      }),<br>    }),<br>  
},<br>})<br><br>export default authConfig</pre><p>If you want to store and send the JWT token via cookies, leave useCookies set to true. Otherwise, remove the option entirely and tokens will have to be passed via the authorization header (bearer). The same goes for token expiration, it is an optional property.</p><h3>Controller</h3><p>Here are the login and register methods:</p><pre>import type { HttpContext } from &#39;@adonisjs/core/http&#39;<br>import User from &#39;#models/user&#39;<br>import { loginValidator, registerValidator } from &#39;#validators/auth&#39;<br><br>export default class AuthController {<br>  // login method if you use authorization bearer (keep only one of the two login methods)<br>  async login({ request, response, auth }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br><br>    const user = await User.verifyCredentials(email, password)<br>    const token = await auth.use(&#39;jwt&#39;).generate(user)<br><br>    return response.ok({<br>      token: token,<br>      ...user.serialize(),<br>    })<br>  }<br>  // login method if you use cookies (useCookies: true)<br>  async login({ request, response, auth }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br><br>    const user = await User.verifyCredentials(email, password)<br>    await auth.use(&#39;jwt&#39;).generate(user)<br><br>    return response.ok({<br>      ...user.serialize(),<br>    })<br>  }<br>  async register({ request, response }: HttpContext) {<br>    const payload = await request.validateUsing(registerValidator)<br><br>    const user = await User.create(payload)<br><br>    return response.created(user)<br>  }<br>}</pre><p>The Controllers and Validators used in this code work exactly like the ones in <a href="https://medium.com/@maximemrf/authentification-avec-adonisjs-v6-et-access-token-oat-83c97387a39b">this tutorial</a>, which explains them in more detail.</p><h3>Conclusion</h3><p>You now know how to create a JWT authentication system with AdonisJS. 
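If you are curious about what the guard actually puts inside a token, here is a minimal decoding sketch in plain Node (illustrative only: the helper name is mine, and decoding is not verification, the signature must still be checked by the guard):

```typescript
// A JWT is three base64url segments separated by dots: header.payload.signature.
// Decoding the middle segment reveals the signed claims (sub, exp, ...) without
// verifying the signature -- never trust these claims before verification.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const segments = token.split('.')
  if (segments.length !== 3) throw new Error('not a JWT')
  const json = Buffer.from(segments[1], 'base64url').toString('utf-8')
  return JSON.parse(json)
}
```

Decoding a token issued by the login route lets you check, for example, the exp claim produced by the tokenExpiresIn option.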
Feel free to read the official documentation to learn more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8a3dbb1fb51d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Authentication with Adonisjs v6 and access token (OAT)]]></title>
            <link>https://medium.com/@maximemrf/authentication-with-adonisjs-v6-and-access-token-oat-6c8029827562?source=rss-30180446cafa------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c8029827562</guid>
            <category><![CDATA[backend]]></category>
            <category><![CDATA[adonisjs]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[authentication]]></category>
            <category><![CDATA[token]]></category>
            <dc:creator><![CDATA[Maxime]]></dc:creator>
            <pubDate>Sat, 18 May 2024 09:16:08 GMT</pubDate>
            <atom:updated>2024-07-13T02:38:18.495Z</atom:updated>
            <content:encoded><![CDATA[<h3>Introduction</h3><p>In this article, I will show you how to set up an AdonisJS version 6 application and create an OAT (Opaque Access Token) authentication system.</p><p>Before starting the tutorial, I invite you to read <a href="https://docs.adonisjs.com/guides/authentication/introduction#choosing-an-auth-guard">this part of the documentation</a> to find out if this type of authentication is really the best for your application.</p><p><a href="https://github.com/MaximeMRF/adonisjs-oat-auth-tutorial">Here is the GitHub repository with the complete code</a>.</p><p>Feel free to give a ⭐ to my repository, it really helps me a lot!</p><p>🇫🇷 <em>Tu parles français ? </em><a href="https://medium.com/@maximemrf/authentification-avec-adonisjs-v6-et-access-token-oat-83c97387a39b"><em>voici le lien de l’article en français</em></a></p><h3>Installation</h3><p>To begin, verify that you are using Node v22 or at least Node v20.6.</p><pre>node -v # v22.x.x</pre><p>There are three different official starter kits available so you don’t have to start developing your application from scratch.</p><p>There’s the “slim” kit, which contains just the core of the framework and the default file and folder structure of AdonisJS.</p><p>Next comes the “web” kit, which includes a variety of AdonisJS packages like Lucid, the framework’s ORM, and a template engine called Edge. The web kit is a good base for creating an application that renders views in HTML or Alpine.js, for example.</p><p>Finally, the third kit, and the one we will use, is the “API” kit. It allows you to easily create APIs that render JSON.</p><p>Below is the command to install it. 
Here, for the -K flag, we will choose “api” and we will specify that we want OAT tokens with this flag: --auth-guard=access_tokens</p><pre>npm init adonisjs@latest -- -K=api --auth-guard=access_tokens</pre><p>After entering the various details requested by the CLI and going into the newly created folder, we can proceed to the next step.</p><h3>Migrations</h3><p>During the installation of the kit, we could see that Lucid, the ORM of AdonisJS, was automatically configured to use SQLite as the database. For practical reasons, we will keep this database for the rest of the tutorial. The authentication package was also configured to use OATs, so there&#39;s nothing more for us to do on that front.</p><p>If we run node ace migration:status, we can see that the kit has automatically created two migrations: one for the tokens and one for the users. All that&#39;s left is to migrate the database with the following command.</p><pre>node ace migration:run</pre><h3>Controller</h3><p>Once the tables have been migrated, it is time to create our controller for authentication and give it the name “auth”.</p><pre>node ace make:controller auth</pre><p>The new controller is located at app/controllers</p><h3>Creation of the register route</h3><p>To begin, we will create the register route, which will allow us to register users in our application.</p><pre>import type { HttpContext } from &#39;@adonisjs/core/http&#39;<br>import { registerValidator } from &#39;#validators/auth&#39;<br>import User from &#39;#models/user&#39;<br><br>export default class AuthController {<br>  async register({ request, response }: HttpContext) {<br>    const payload = await request.validateUsing(registerValidator)<br><br>    const user = await User.create(payload)<br><br>    return response.created(user)<br>  }<br>}</pre><p>To validate the data that will be transmitted to our backend, we will create a validator.</p><pre>node ace make:validator auth</pre><p>Go to app/validators/auth.ts</p><p>We will define an object that 
will have three properties:</p><p>fullName, which must be of type string, with a minimum length of 3 characters and a maximum of 64.</p><p>email, which must be of type string, be a valid email :) and, most importantly, be unique in the database.</p><p>password, of type string, with a minimum length of 12 characters and a maximum of 512.</p><pre>import vine from &#39;@vinejs/vine&#39;<br><br>export const registerValidator = vine.compile(<br>  vine.object({<br>    fullName: vine.string().minLength(3).maxLength(64),<br>    email: vine<br>      .string()<br>      .email()<br>      .unique(async (query, field) =&gt; {<br>        const user = await query.from(&#39;users&#39;).where(&#39;email&#39;, field).first()<br>        return !user<br>      }),<br>    password: vine.string().minLength(12).maxLength(512),<br>  })<br>)</pre><p>We have one last step before testing our register route. Go to start/routes.ts to create the route using our controller.</p><p>We will replace the existing code with the code below:</p><pre>import router from &#39;@adonisjs/core/services/router&#39;<br><br>const AuthController = () =&gt; import(&#39;#controllers/auth_controller&#39;)<br><br>router.group(() =&gt; {<br>  router.post(&#39;register&#39;, [AuthController, &#39;register&#39;])<br>}).prefix(&#39;user&#39;)</pre><p>We will import our controller and create a POST route linked to the register method of our controller.</p><p>You can see that I have created a router group with a user prefix. 
All the routes we put in this group will have the user prefix, which means our register route will not have the URL /register but /user/register.</p><p>Start the development server with this command: node ace serve and then test the route http://localhost:3333/user/register using Postman or another tool.</p><p>Example body:</p><pre>{<br>    &quot;fullName&quot;: &quot;Maxime&quot;,<br>    &quot;email&quot;: &quot;max@ime.test&quot;,<br>    &quot;password&quot;: &quot;averysecurepassword&quot;<br>}</pre><p>If everything goes well, the server should return a 201 Created response with the user&#39;s information.</p><p>To prevent the server from returning the hash of the password when you request the user, go to app/models/user.ts and add { serializeAs: null } in the argument of the @column decorator for the password.</p><pre>@column({ serializeAs: null })<br>declare password: string</pre><p>Now, when the server returns the user, the password field will simply be omitted from the serialized output. 
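The idea behind { serializeAs: null } can be sketched in a few lines. This is a simplified illustration of the concept, not Lucid's actual implementation:

```typescript
// Simplified illustration of serializeAs (not Lucid's real code): a column
// whose serializeAs is null is dropped from the serialized output, while a
// string value renames the key; columns without metadata pass through as-is.
type ColumnMeta = { serializeAs?: string | null }

function serializeRow(
  row: Record<string, unknown>,
  columns: Record<string, ColumnMeta>
): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(row)) {
    const serializeAs = columns[key]?.serializeAs
    if (serializeAs === null) continue // hidden column, e.g. the password hash
    out[serializeAs ?? key] = value
  }
  return out
}
```

Passing a string instead of null would rename the field in the response, which is the other thing Lucid lets you do with this option.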
Feel free to visit here to learn more.</p><h3>Creation of the login route</h3><p>We will add a new validator to the validator file.</p><pre>import vine from &#39;@vinejs/vine&#39;<br><br>// new validator<br>export const loginValidator = vine.compile(<br>  vine.object({<br>    email: vine.string().email(),<br>    password: vine.string().minLength(12).maxLength(512),<br>  })<br>)<br><br>export const registerValidator = vine.compile(<br>  vine.object({<br>    fullName: vine.string().minLength(3).maxLength(64),<br>    email: vine<br>      .string()<br>      .email()<br>      .unique(async (db, value) =&gt; {<br>        const user = await db.from(&#39;users&#39;).where(&#39;email&#39;, value).first()<br>        return !user<br>      }),<br>    password: vine.string().minLength(12).maxLength(512),<br>  })<br>)</pre><p>We will also add a login method to our controller:</p><pre>import type { HttpContext } from &#39;@adonisjs/core/http&#39;<br>import User from &#39;#models/user&#39;<br>import { registerValidator, loginValidator } from &#39;#validators/auth&#39;<br><br>export default class AuthController {<br>  // new login method<br>  async login({ request, response }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br><br>    const user = await User.verifyCredentials(email, password)<br>    const token = await User.accessTokens.create(user)<br><br>    return response.ok({<br>      token: token,<br>      ...user.serialize(),<br>    })<br>  }<br>  async register({ request, response }: HttpContext) {<br>    const payload = await request.validateUsing(registerValidator)<br><br>    const user = await User.create(payload)<br><br>    return response.created(user)<br>  }<br>}</pre><p>Then add the login route to the router:</p><pre>import router from &#39;@adonisjs/core/services/router&#39;<br><br>const AuthController = () =&gt; import(&#39;#controllers/auth_controller&#39;)<br><br>router.group(() =&gt; {<br>  router.post(&#39;register&#39;, 
[AuthController, &#39;register&#39;])<br>  router.post(&#39;login&#39;, [AuthController, &#39;login&#39;])<br>}).prefix(&#39;user&#39;)</pre><p>When we test the route, it will return the user as well as the token.</p><h3>Creation of a token protected route</h3><p>Now that we have our login and register routes, we will create a route that is accessible only by a registered user.</p><pre>import router from &#39;@adonisjs/core/services/router&#39;<br>import { middleware } from &#39;./kernel.js&#39;<br><br>const AuthController = () =&gt; import(&#39;#controllers/auth_controller&#39;)<br><br>router.group(() =&gt; {<br>  router.post(&#39;register&#39;, [AuthController, &#39;register&#39;])<br>  router.post(&#39;login&#39;, [AuthController, &#39;login&#39;])<br>}).prefix(&#39;user&#39;)<br><br>// add this route<br>router.get(&#39;me&#39;, async ({ auth, response }) =&gt; {<br>  try {<br>    const user = auth.getUserOrFail()<br>    return response.ok(user)<br>  } catch (error) {<br>    return response.unauthorized({ error: &#39;User not found&#39; })<br>  }<br>})<br>.use(middleware.auth())</pre><p>The route will return the user’s information. We can see that we protected the route with the auth middleware and used the auth object to access the user. To test the route, include the Authorization header with the token as its value (Bearer token).</p><h3>Creation of the logout route</h3><p>Now that we have our login and register routes, the only thing missing is the route for logging out. 
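As a quick aside on the Bearer convention used above: conceptually, the middleware just splits the Authorization header into a scheme and a token. Here is an illustrative sketch (my own helper, not the actual AdonisJS middleware):

```typescript
// Illustrative sketch (not the real AdonisJS auth middleware): extract the
// token from an "Authorization: Bearer <token>" header, returning null when
// the header is missing, uses another scheme, or carries no token.
function extractBearerToken(header: string | undefined): string | null {
  if (!header) return null
  const [scheme, ...rest] = header.split(' ')
  const token = rest.join(' ')
  if (scheme !== 'Bearer' || token.length === 0) return null
  return token
}
```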
To log out a user, we simply need to delete their token from the database.</p><p>We will add a new logout method in our controller:</p><pre>import type { HttpContext } from &#39;@adonisjs/core/http&#39;<br>import User from &#39;#models/user&#39;<br>import { registerValidator, loginValidator } from &#39;#validators/auth&#39;<br><br>export default class AuthController {<br>  async login({ request, response }: HttpContext) {<br>    const { email, password } = await request.validateUsing(loginValidator)<br>    const user = await User.verifyCredentials(email, password)<br>    const token = await User.accessTokens.create(user)<br>    return response.ok({<br>      token: token,<br>      ...user.serialize(),<br>    })<br>  }<br>  async register({ request, response }: HttpContext) {<br>    const payload = await request.validateUsing(registerValidator)<br>    const user = await User.create(payload)<br>    return response.created(user)<br>  }<br>  // our new logout method<br>  async logout({ auth, response }: HttpContext) {<br>    const user = auth.getUserOrFail()<br>    const token = auth.user?.currentAccessToken.identifier<br>    if (!token) {<br>      return response.badRequest({ message: &#39;Token not found&#39; })<br>    }<br>    await User.accessTokens.delete(user, token)<br>    return response.ok({ message: &#39;Logged out&#39; })<br>  }<br>}</pre><p>The auth object allows us to retrieve the authenticated user and their token. 
We will then check that this token is not undefined (which shouldn&#39;t happen since the user is authenticated), delete the token, and inform the client that the operation was successful.</p><p>Then, add the new route:</p><pre>router.group(() =&gt; {<br>  router.post(&#39;register&#39;, [AuthController, &#39;register&#39;])<br>  router.post(&#39;login&#39;, [AuthController, &#39;login&#39;])<br>  // the new logout route<br>  router.post(&#39;logout&#39;, [AuthController, &#39;logout&#39;]).use(middleware.auth())<br>}).prefix(&#39;user&#39;)</pre><p>As with the me route, we will tell it to use the authentication middleware with use(middleware.auth()) so we can access the auth object in our controller method.</p><p>If you want to see the complete code, go to <a href="https://github.com/MaximeMRF/adonisjs-oat-auth-tutorial">my GitHub repository</a>.</p><h3>Conclusion</h3><p>You now know how to create an OAT authentication system with AdonisJS. Feel free to read the official documentation to learn more.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6c8029827562" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>