<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title><![CDATA[nabraj.com - the personal homepage of Nabraj (NT)]]></title>
        <description><![CDATA[The personal homepage of Nabraj (NT) covering web development, breadboard projects, tech insights, sports, philosophy, and musings on life beyond the code.]]></description>
        <link>https://nabraj.com</link>
        <generator>RSS for Node</generator>
        <lastBuildDate>Fri, 13 Feb 2026 23:57:49 GMT</lastBuildDate>
        <atom:link href="https://nabraj.com/rss.xml" rel="self" type="application/rss+xml"/>
        <language><![CDATA[en]]></language>
        <item>
            <title><![CDATA[LLMs can write code, but they can't design systems]]></title>
            <description><![CDATA[LLMs excel at writing code but collapse when asked to design or reason about complex systems. They are only great at generating code, drafting documentation, and accelerating prototypes.]]></description>
            <link>https://nabraj.com/blog/llms-can-code-not-generate-systems</link>
            <guid isPermaLink="true">https://nabraj.com/blog/llms-can-code-not-generate-systems</guid>
            <pubDate>Sun, 23 Nov 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;Over the past two years, Large Language Models (LLMs) have evolved from research experiments into everyday tools for software engineers. They are great at generating code, drafting documentation, and accelerating prototypes. And yet, despite all the hype, LLMs hit very real boundaries, especially in domains that require deep systems-level thinking. The gap becomes very obvious when you ask them to design or reason about real systems.&lt;/p&gt; &lt;p&gt;I still run into the exact same patterns of failure whenever the problem extends beyond CRUD apps or routine boilerplate.&lt;/p&gt; &lt;h3&gt;Case Study 1 - A small Kafka ingestion system&lt;/h3&gt; &lt;p&gt;I recently started building a small internal service that accepts HTTP requests and produces messages into Kafka. I asked an LLM to help sketch out an architecture.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Hallucinated with GitHub libraries that don&apos;t exist.&lt;/li&gt; &lt;li&gt;Described Kafka features that don&apos;t exist.&lt;/li&gt; &lt;li&gt;Ignored Kafka&apos;s core features like error handling, order guarantees, batch sizes, etc.&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;And when corrected, it contradicted its previous answers.&lt;/p&gt; &lt;p&gt;It behaved exactly like a junior developer who can write syntactically correct code but lacks the experience required to build ingestion pipelines.&lt;/p&gt; &lt;p&gt;Deep Kafka work requires an understanding of broker internals, consumer group protocols, partitioning, and ordering guarantees. This is not trivial knowledge.&lt;/p&gt; &lt;h3&gt;Case Study 2 - A privacy-auditing browser extension&lt;/h3&gt; &lt;p&gt;In parallel, I was working on a side project: a browser extension that audits the privacy footprint of any webpage.&lt;/p&gt; &lt;p&gt;The idea was simple: If I visit a page, tell me what it is pulling from my browser. This involved scanning beyond the obvious data sources, such as cookies and local storage. It also considered data points such as window/screen data, WebGL capabilities, user agent fingerprinting, geolocation attempts, installed fonts, and device memory.&lt;/p&gt; &lt;p&gt;Once again, I asked an LLM to help map out the architecture: create structure, handling permission, script injection and message passing.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Mixed up extension APIs. The LLM used deprecated APIs, invented APIs that didn&apos;t exist and confused with content scripts and extension scripts.&lt;/li&gt; &lt;li&gt;Didn&apos;t understand the runtime separation between scripts.&lt;/li&gt; &lt;li&gt;Misidentified which data is accessible. LLMs would say things like “you can read the user&apos;s IP using the browser network API,” which is incorrect.&lt;/li&gt; &lt;li&gt;Didn&apos;t understand the inference chain. A big part of privacy auditing is not what&apos;s explicitly retrieved, but what can be derived.&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;A browser extension is a tightly sandboxed system with strict context boundaries. Content scripts run in the webpage context, injected scripts run in the actual page environment, and service workers/background workers run separately.&lt;/p&gt; &lt;h3&gt;Why did it fail?&lt;/h3&gt; &lt;ul&gt; &lt;li&gt;LLMs don&apos;t understand systems. They generate descriptions of architectures, not actual architectures. In my first example, they don&apos;t perceive the impact of poor partitioning or an overloaded producer queue.&lt;/li&gt; &lt;li&gt;They can&apos;t verify correctness. An engineer is supposed to check if a GitHub repository exists before recommending it. LLMs will confidently invent one.&lt;/li&gt; &lt;li&gt;They are great with syntax but shallow with semantics. They can fix a loop, join an array, or generate boilerplate for a SaaS app. But they struggle the moment a task requires deep domain intuition.&lt;/li&gt; &lt;li&gt;They can&apos;t invent solutions, just imitate patterns. Engineers need to deal with unfamiliar shapes, not just familiar ones.&lt;/li&gt; &lt;/ul&gt; &lt;h4&gt;No execution, state or causality&lt;/h4&gt; &lt;p&gt;LLMs are statistical sequence models that predict the next token using attention over previous tokens. Internally, attention weights decide which parts of the input matter: these are combined with token embeddings to capture semantic relationships, which the model uses to predict the next token through a probability distribution.&lt;/p&gt; &lt;p&gt;However, nowhere in this architecture is an execution engine, compiler, or state machine.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://www.nabraj.com/images/llms-code-design/llm-confused.jpg&apos; alt=&apos;LLMs designing system&apos; /&gt;&lt;/p&gt; &lt;h3&gt;What&apos;s next? How LLMs will/should evolve?&lt;/h3&gt; &lt;p&gt;Future models should use tools such as compilers, linters, reasoning modules, and code execution sandboxes. This transforms the model from a text generator into something closer to a developer assistant.&lt;/p&gt; &lt;p&gt;They also need system awareness to understand project structure, build steps, dependencies, and architecture diagrams. They should check, not just guess.&lt;/p&gt; &lt;p&gt;Finally, explicit reasoning traces are essential, so that humans can inspect and challenge the model&apos;s reasoning.&lt;/p&gt; &lt;h3&gt;Final thoughts&lt;/h3&gt; &lt;p&gt;LLMs aren&apos;t replacing engineers anytime soon; they are just reshaping the workflow. Today, they are fantastic at the easy parts but unreliable for the hard ones.&lt;/p&gt; &lt;p&gt;The judgment, tradeoffs, debugging, and architectural sense only come from shipping real systems, and this remains human territory for now. Honestly, I think that&apos;s a good thing.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Playing piano with prime numbers]]></title>
            <description><![CDATA[I decided to turn prime numbers into a mini piano and see what kind of music math could make.]]></description>
            <link>https://nabraj.com/blog/prime-piano</link>
            <guid isPermaLink="true">https://nabraj.com/blog/prime-piano</guid>
            <pubDate>Sat, 16 Aug 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;Last weekend, I decided to turn prime numbers into a mini piano and see what kind of music math could make.&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://nabraj.com/demo/prime-piano/&quot; target=&quot;_blank&quot; rel=&quot;noreferrer&quot;&gt;See demo/Create Music&lt;/a&gt;&lt;/p&gt; &lt;h3&gt;Generating prime numbers&lt;/h3&gt; &lt;p&gt;The simplest method is to check divisibility of each number by all small numbers. But it is inefficient for large datasets due to its complexity: O(n&amp;radic;n)&lt;/p&gt; &lt;CodeBlock id=&quot;code1&quot; code={codeBlocks[&apos;code1&apos;]} /&gt; &lt;p&gt;We&apos;ll use the Sieve of Eratosthenes here, as it&apos;s better (Time complexity: O(nloglogn)). Think of it like a game to cross out numbers that aren&apos;t prime until only the prime ones are left. Using &lt;code&gt;count * Math.log(count) * 1.5&lt;/code&gt;, it estimates how many numbers it needs to look to find the required prime numbers (count is the number of primes we want).&lt;/p&gt; &lt;CodeBlock id=&quot;code2&quot; code={codeBlocks[&apos;code2&apos;]} /&gt; &lt;h3&gt;Mapping strategy&lt;/h3&gt; &lt;p&gt;Once we have the prime list, we need to map them to convert the primes into frequencies for sound.  By mapping our primes to MIDI arrays, we can play single notes or complex chords.&lt;/p&gt; &lt;CodeBlock id=&quot;code3&quot; code={codeBlocks[&apos;code3&apos;]} /&gt; &lt;p&gt;&lt;code&gt;clampMidi&lt;/code&gt; keeps a MIDI note within the piano&apos;s playable range (21-108).&lt;/p&gt; &lt;h3&gt;Producing sound&lt;/h3&gt; &lt;p&gt;Here, each prime becomes a MIDI number.&lt;/p&gt; &lt;p&gt;From here, we can use a soundfont library like &lt;code&gt;soundfront-player&lt;/code&gt; which contains packaged mp3 samples for each note in a single JS file.&lt;/p&gt; &lt;p&gt;Unlike synthesizing tones using oscillators, each note is a real piano sample stored as an MP3, giving a natural sound.&lt;/p&gt; &lt;p&gt;Playback is handled with precise scheduling using &lt;code&gt;AudioContext&lt;/code&gt;, so sequences of primes or chords remain in sync. By timing each note correctly, we can make the primes “playable” like a piano piece.&lt;/p&gt; &lt;h3&gt;Final thoughts&lt;/h3&gt; &lt;p&gt;Overall, this was a fun little project to play with prime numbers and the piano. I played around with various mapping techniques, e.g. using continuous ranges so higher primes produce higher pitches.&lt;/p&gt; &lt;p&gt;It&apos;s also interesting to see patterns emerge when listening. This project shows how math and sound can intersect in creative ways, and it can be a starting point for more generative music experiments in the future.&lt;/p&gt; &lt;/div&gt; &lt;p&gt;&lt;a href=&quot;https://github.com/neberej/prime-piano&quot; target=&quot;_blank&quot; rel=&quot;noreferrer&quot;&gt;See the complete code on Github&lt;/a&gt;&lt;/p&gt; &lt;h6&gt;Posted on Aug 16, 2025&lt;/h6&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Running local LLMs using Ollama]]></title>
            <description><![CDATA[Running large language models locally is easier than ever thanks to Ollama. Last weekend, I took on a task to run LLMs on my machine and create my first AI agents.]]></description>
            <link>https://nabraj.com/blog/running-local-llm-create-ai-agents</link>
            <guid isPermaLink="true">https://nabraj.com/blog/running-local-llm-create-ai-agents</guid>
            <pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;h4&gt;Ollama&lt;/h4&gt; &lt;p&gt;Ollama is basically a runtime + package manager (like Node.js), built on top of C++ inference engines. It has a CLI+ server that acts as an LLM host and loads and runs models.&lt;/p&gt; &lt;p&gt;Models are mostly &lt;span className=&quot;code-word&quot;&gt;.guf&lt;/span&gt; files (from llama.cpp ecosystem). These are pre-trained, then quantized for a smaller size and faster local inference. These files contain neural weights, a tokenizer, and model metadata.&lt;/p&gt; &lt;p&gt;&lt;StaticImage src=&quot;../../images/local-llms/how-ollama-works.png&quot; alt=&quot;How Ollama handles models&quot; /&gt;&lt;/p&gt; &lt;p&gt;Ollama&apos;s runtime is basically a wrapper for llama.cpp (the brain) with API support and a model-loading-engine. It uses a Modelfile to add prompt or fine-tune layers.&lt;/p&gt; &lt;p&gt;&lt;StaticImage src=&quot;../../images/local-llms/ollama-flow.png&quot; alt=&quot;How a LLM works&quot; /&gt;&lt;/p&gt; &lt;p&gt;When we provide a prompt to a LLM:&lt;/p&gt; &lt;p&gt; Step 1 (Tokenizer): Text is split into tokens (subwords).&lt;br/&gt; Step 2 (Embedding lookup): Convert token ID into vectors.&lt;br/&gt; Step 3 (Transformers): Transform vectors using attention mechanism and MLP.&lt;br/&gt; Step 4 (Probability): Output probability distribution over next tokens.&lt;br/&gt; Step 5 (Sampling): Pick the next token based on parameters (e.g. temperature)&lt;br/&gt; &lt;/p&gt; &lt;p&gt;Loop until stop (end of sequence token).&lt;/p&gt; &lt;h4&gt;Using Ollama&lt;/h4&gt; &lt;p&gt;Start by downloading Ollama and then a model.&lt;/p&gt; &lt;CodeBlock id=&quot;code3&quot; code={codeBlocks[&apos;code3&apos;]} /&gt; &lt;p&gt;Ollama when run exposes REST APIs like /generate and /chat. &lt;/p&gt; &lt;CodeBlock id=&quot;code4&quot; code={codeBlocks[&apos;code4&apos;]} /&gt; &lt;h4&gt;Creating our first agent&lt;/h4&gt; Our game plan: the agent takes the input, picks and executes an action, and provides a response back to the user. We&apos;ll start by creating a model. and then listing all the possible actions. Now our agent feeds this plan to the LLM. This class orchestrates the agent workflow by asking the LLM to create a step-by-step oplan based on the user&apos;s input and available actions. &lt;h4&gt;Putting it all together&lt;/h4&gt; &lt;p&gt;Let&apos;s build a UI frontend to access our agent. All we need is a simple HTML page that makes API calls to our backend. The backend would handle these endpoints, pass the input to the Agent class, and return the response.&lt;/p&gt; &lt;p&gt;&lt;StaticImage src=&quot;../../images/local-llms/running-llm-locally.png&quot; alt=&quot;Running llm locally&quot; style={{ maxWidth: &quot;500px&quot;, width: &quot;100%&quot; }} /&gt;&lt;/p&gt; &lt;p&gt;With Ollama, running LLMs locally is straightforward, and building AI agents is a fun way to explore their potential.&lt;/p&gt; &lt;/div&gt; &lt;p&gt;&lt;a href=&quot;https://github.com/neberej/local-llm&quot; target=&quot;_blank&quot; rel=&quot;noreferrer&quot;&gt;See complete code on Github&lt;/a&gt;&lt;/p&gt; &lt;h6&gt;Posted on Aug 09, 2025&lt;/h6&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Swipe, Scroll, Repeat: The Engineered Addiction]]></title>
            <description><![CDATA[In today's world, attention is the new currency, and every app on your phone competes for it. Our screens are the warzone, our thumbs the weapons, and our attention the prize. So what can we do?]]></description>
            <link>https://nabraj.com/blog/swipe-scroll-repeat-addiction</link>
            <guid isPermaLink="true">https://nabraj.com/blog/swipe-scroll-repeat-addiction</guid>
            <pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;h3&gt;War on Screen&lt;/h3&gt; &lt;p&gt;In today&apos;s world, attention is the new currency, and every app on your phone competes for it. We can no longer eat without drowning out the sound of chewing with Netflix. Bathroom breaks require scrolling through Instagram. Bored for just ten seconds? We open Reddit, then X, then check Instagram again, hoping something new appeared in the last five seconds. Welcome to the modern battlefield. Our screens are the warzone, our thumbs the weapons, and our attention the prize.&lt;/p&gt; &lt;p&gt;The infinite scroll is an engineered lure, firing endless rounds of &quot;tailored&quot; videos on TikTok, YouTube Shorts and Instagram until we physically throw our phone across the room. We used to share photos of kids, pets, or weekend hikes on Facebook. Now it&apos;s just random posts from faceless accounts. Everyone&apos;s a content creator: your bank is on TikTok, your favorite shampoo brand drops Reels, and your high school friend&apos;s YouTube has a channel on digital minimalism while spamming 5 posts across platforms. &lt;/p&gt; &lt;p&gt;Our ability to embrace quiet moments has faded. Multitasking has become &quot;multi-distracting&quot;, as we fill every second with content. Whether it&apos;s a podcast during the commute or video playing in the background, we&apos;ve trained our brains that we need constant noise.&lt;/p&gt; &lt;p&gt;Attention is valuable. Each second we give to a screen is a second stolen from someone real. So what can we do?&lt;/p&gt; &lt;h3&gt;Engineering a better digital future&lt;/h3&gt; &lt;p&gt;As engineers, we build the weapons of this war. But we can also build the exits. &lt;/p&gt; &lt;p&gt;We hold the power to reshape this battlefield. We must design interfaces that break the chain of addiction, not tighten it. Smash the infinite scroll with hard stops: pagination or clear &quot;End of feed&quot; prompts helps users pause naturally. Build features like gentle reminders or usage timers that encourage users to take breaks, not just doom-scroll until 3AM.&lt;/p&gt; &lt;p&gt;But it&apos;s more than timers and alerts. It&apos;s about intent. We need to build for clarity, not just clicks. Encourage reflection over reaction. Promote meaningful connections over viral traps. Good design doesn&apos;t hijack attention, it guides it. It helps people to spend time well not just spend time.&lt;/p&gt; &lt;p&gt;Prioritize moments of calm and help people focus. Otherwise, anxiety and burnout follow.&lt;/p&gt; &lt;p&gt;As engineers, we don&apos;t just ship features, we shape habits, and habits turn into culture. So let&apos;s build smarter. Not louder. Let&apos;s make room for silence, free users from the noise, and make digital spaces human again. &lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Why is boarding a plane still a mess?]]></title>
            <description><![CDATA[Boarding an airplane often feels like a chaotic race to secure overhead space and window/aisle seats. You’d think after decades of flying, we’d have figured this out.]]></description>
            <link>https://nabraj.com/blog/boarding-methods</link>
            <guid isPermaLink="true">https://nabraj.com/blog/boarding-methods</guid>
            <pubDate>Fri, 09 May 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt;&lt;p&gt;Boarding an airplane often feels like a chaotic race to secure overhead space and window/aisle seats. You’d think after decades of flying, we’d have figured this out.&lt;/p&gt; &lt;h4&gt;What makes boarding so messy?&lt;/h4&gt; &lt;ul&gt;&lt;li&gt;Airlines want fast turnarounds.&lt;/li&gt; &lt;li&gt;Planes are badly designed for boarding — narrow aisles, one door, limited bins.&lt;/li&gt; &lt;li&gt;We are (kinda) selfish – We want our seat and bin space over the &quot;greater good.&quot;&lt;/li&gt; &lt;li&gt;Add in families, loyalty programs, and a recipe for a mess.&lt;/li&gt;&lt;/ul&gt; &lt;h4&gt;How are airlines trying to fix this logistical puzzle?&lt;/h4&gt; &lt;p&gt;Airlines typically use back-to-front boarding – moving passengers through the cabin from back to front. Pre-cabin (passengers with disabilities, families with young kids, etc) and loyalty members are given priority to board first. Then comes the main cabin, usually from the back of the plane to the front.&lt;/p&gt;&lt;h2&gt;Most popular methods:&lt;/h2&gt; &lt;h4&gt;Back-to-front&lt;/h4&gt; &lt;p&gt;The last rows of the plane are boarded first, followed by the first rows.&lt;/p&gt; &lt;p&gt;Pros: More money for airlines selling front tickets for premium.&lt;/p&gt; &lt;p&gt;Cons: Often slow due to the crowding of aisles.&lt;/p&gt; &lt;h4&gt;Window-Middle-Aisle&lt;/h4&gt; &lt;p&gt;Window seats are boarded first, followed by middle seats, and finally, aisle seats. This method can be further optimized by combining it with the Back-to-front method.&lt;/p&gt; &lt;p&gt;Pros: Spreads passengers, reducing bunching.&lt;/p&gt; &lt;p&gt;Cons: Limited flexibility if a passenger misses their group.&lt;/p&gt; &lt;h4&gt;Open Seating (e.g. Southwest)&lt;/h4&gt; &lt;p&gt;Passengers can choose any available seat.&lt;/p&gt; &lt;p&gt;Pros: Often touted as the fastest seating method. Passengers can choose preferred seats.&lt;/p&gt; &lt;p&gt;Cons: Early boarders snag window or aisle seats, including overhead space.&lt;/p&gt; &lt;h4&gt;Random with assigned seats (e.g. Ryanair)&lt;/h4&gt; &lt;p&gt;Board in no particular order, but have pre-assigned seats.&lt;/p&gt; &lt;p&gt;Pros: Faster than Back-To-Front.&lt;/p&gt; &lt;p&gt;Cons: Can feel chaotic.&lt;/p&gt; &lt;h4&gt;Rotating zone&lt;/h4&gt; &lt;p&gt;Alternate between the front and back of the airplane. First five rows, followed by last five rows, etc.&lt;/p&gt; &lt;p&gt;Pros: Less congestion.&lt;/p&gt; &lt;p&gt;Cons: Can be confusing for passengers.&lt;/p&gt; &lt;h2&gt;New/Proposed methods:&lt;/h2&gt; &lt;h4&gt;Steffen&lt;/h4&gt; &lt;p&gt;Window-Middle-Aisle approach combined with alternates between odd-numbered and even-numbered seats.&lt;/p&gt; &lt;p&gt;Pros: Spreads passengers among rows, allowing efficient bin access.&lt;/p&gt; &lt;p&gt;Cons: Complex, logistically challenging.&lt;/p&gt; &lt;h4&gt;Reverse pyramid (modified)&lt;/h4&gt; &lt;p&gt;Window seats in the back, followed by window seats in the front. Then, middle seats in the back, followed by middle seats in the front. Finally, aisle seats in the back, followed by aisle seats in the front.&lt;/p&gt; &lt;p&gt;Pros: Spreads out passengers.&lt;/p&gt; &lt;p&gt;Cons: Complex.&lt;/p&gt; &lt;h4&gt;But nothing works?&lt;/h4&gt; &lt;p&gt;Human factors such as seat preferences, mobility issues, and passenger behavior can all slow down the boarding process. Aircraft design factors like narrow aisles, limited bin space, and a single door create bottlenecks, further complicating the boarding process. Finally, airplane policies (e.g. early boarding perks) and airlines prioritizing quick turnarounds further contribute to this puzzle.&lt;/p&gt; &lt;h4&gt;The Real Deal&lt;/h4&gt; &lt;p&gt;At the end of the day, this isn’t just a math problem you can solve with a clever algorithm. And the boarding puzzle isn&apos;t a math problem - it&apos;s a human one. The chaos isn&apos;t just poor planning - it&apos;s the result of a game where we are all trying to win our little game. And until planes, policies, or people change, we’ll keep scrambling.&lt;/p&gt;&lt;p&gt;&lt;a className=&apos;github-link&apos; href=&apos;https://nabraj.com/demo/boarding/&apos; rel=&apos;noreferrer&apos; target=&apos;_blank&apos;&gt; &lt;span&gt;See boarding methods (visualization)&lt;/span&gt; &lt;/a&gt;&lt;/p&gt;&lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Recreating Breakout Game in JavaScript]]></title>
            <description><![CDATA[Breakout is an arcade game where the player moves a paddle to bounce a ball and break bricks. It was made popular by Atari and later inspired multiple classics such as Space Invaders and Brick Breakers.]]></description>
            <link>https://nabraj.com/blog/breakout-game-physics</link>
            <guid isPermaLink="true">https://nabraj.com/blog/breakout-game-physics</guid>
            <pubDate>Wed, 02 Apr 2025 00:00:00 GMT</pubDate>
            <content:encoded>Breakout is an arcade game where the player moves a paddle to bounce a ball and break bricks. It was made popular by Atari and later inspired multiple classics such as Space Invaders and Brick Breakers. &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/b0f7a883d343a1df3786b067915eea83/c9f31/breakout.webp&apos; alt=&apos;Average distance for goal attempt&apos; /&gt;&lt;/p&gt;&lt;p&gt;Last week, I thought of creating this game from scratch. The goal was to leverage the power of Three.js to explore physical simulations.&lt;/p&gt; &lt;h4&gt;Setting up:&lt;/h4&gt; &lt;p&gt; I started with a simple React project and installed two libraries - react-three/fiber and react-three/cannon.&lt;/p&gt; &lt;p&gt;- react-three/fiber is a renderer for three.js, simplifying 3D scene management.&lt;br/&gt; - react-three/cannon provides hooks for cannon.js, a physics engine for realistic collisions.&lt;/p&gt; &lt;p&gt;The structure is fairly standard, with Physics from cannon wrapping our game area.&lt;/p&gt;&lt;pre&gt;Canvas &lt;br/&gt;- Physics &lt;br/&gt;---- GameControls &lt;br/&gt;---- Paddle &lt;br/&gt;---- Ball &lt;br/&gt;---- Bricks &lt;br/&gt;- Physics &lt;br/&gt;Canvas&lt;/pre&gt;&lt;h3&gt;Components&lt;/h3&gt; &lt;p&gt;The Ball, Paddle, and Wall components form the core of the game, using Cannon.js’s &lt;code&gt;useSphere&lt;/code&gt; for the ball’s dynamic physics, &lt;code&gt;useBox&lt;/code&gt; for the paddle and walls’ static collision boxes, and Three.js’s &lt;code&gt;meshPhysicalMaterial&lt;/code&gt; to give the paddle a polished, rounded 3D appearance.&lt;/p&gt; &lt;p&gt;The ParticleEffect component adds visual flair by generating fading particles at broken brick positions using Three.js’s &lt;code&gt;useFrame&lt;/code&gt;, while Clouds component creates a subtle, low-poly background with randomized positions and scales, optimized with &lt;code&gt;useMemo&lt;/code&gt; and &lt;code&gt;frustumCulled&lt;/code&gt; for performance.&lt;/p&gt; &lt;h3&gt;Collision handling - Wall:&lt;/h3&gt; &lt;p&gt;- Top wall - Invert y-velocity to bounce the ball downward.&lt;br/&gt; - Side wall - Invert x-velocity to bounce the ball sideways.&lt;br/&gt; - Bottom wall - Stop the ball, disable physics and triggers the game-over state.&lt;br/&gt; - Nudge - Add a small x-velocity for side walls to prevent the ball from getting stuck in repetitive patterns.&lt;/p&gt; &lt;h3&gt;Collision handling - Brick:&lt;/h3&gt; &lt;p&gt;- Create a random bounce angle within a 60° cone (±30° from straight up 0°)&lt;br/&gt; - Apply a cooldown period to prevent multiple rapid Collisions.&lt;br/&gt; - Generate a random angle and apply a slight downward bias (prevent purely horizontal bounces).&lt;/p&gt; &lt;h3&gt;Collision handling - Paddle:&lt;/h3&gt; &lt;p&gt;- Calculate bounce angle based on where the ball hits the paddle (-1 for left edge to 1 for right edge).&lt;br/&gt; - Avoid near perfect vertical bounce by ensuring minimumBounceAngle.&lt;br/&gt; - Add a small random angle variation (1° to 2°) to prevent predictable bounces.&lt;/p&gt; &lt;pre&gt;&lt;code&gt;//Paddle hit&lt;br/&gt;let bounceAngle = impactPosition * maxBounceAngle; &lt;br/&gt;bounceAngle = Math.sign(bounceAngle) * Math.max(minBounceAngle, Math.abs(bounceAngle));&lt;br/&gt;const randomVariation = (Math.random() * (2 - 1) + 1) * (Math.PI / 180) * (Math.random() &lt; 0.5 ? 1 : -1);&lt;br/&gt;bounceAngle += randomVariation; &lt;/pre&gt;&lt;/code&gt;&lt;h3&gt;Final thoughts&lt;/h3&gt; &lt;p&gt;This was a challenging project, mostly because of issues with the ball getting stuck. It&apos;s fair to say most of the effort went into fine-tuning bounce logic: on bounces, adjusting angles and adding random variations to introduce slight unpredictability to the game. &lt;/p&gt; &lt;p&gt;Another key focus was performance - it was easy to get this game stuck in a re-rendering loop. Using frustumCulled, useMemo to optimize resource usage and redundant calculations are essential to balance visual quality without overloading the browser.&lt;/p&gt;&lt;p&gt;&lt;a href=&apos;https://github.com/neberej/breakout&apos; target=&apos;_blank&apos; rel=&apos;noreferrer&apos;&gt;See complete code on Github&lt;/a&gt;&lt;/p&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Basketball is a solved sport]]></title>
            <description><![CDATA[Basketball has evolved from a game of unpredictability into a game of calculated decision-making with the use of data and analytics. From a game of points, assists, and rebounds, it has progressed into using thousands of data points to optimize every element of the game.]]></description>
            <link>https://nabraj.com/blog/basketball-solved-sport</link>
            <guid isPermaLink="true">https://nabraj.com/blog/basketball-solved-sport</guid>
            <pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;Basketball has evolved from a game of unpredictability into a game of calculated decision-making with the use of data and analytics. From a game of points, assists, and rebounds, it has progressed into using thousands of data points to optimize every element of the game.&lt;/p&gt; &lt;p&gt;All decisions are made based on numbers not intuition. Long-range shooting and layups are preferred over mid-range shooting. Players are no longer do-it-alls; they are now given specialized roles.&lt;/p&gt; &lt;h3&gt;Three-point rain&lt;/h3&gt; &lt;p&gt;In the last decade, long-range shooting has gone from a secondary option to a primary choice for building offense. Recently, teams have realized three-pointers have higher point value despite their lower scoring percentage. This has led to a revolution in structuring an offense around taking long-range shots. The Golden State Warriors, led by Stephen Curry, probably jump-started this trend with 34 three-pointer attempts per game in the 2018-19 season, twice as much from five years ago. Celtics, this season, have averaged almost 50 three-pointers attempt this season (2024-25 season).&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/0725130a5ccae46a5f922c287a8a87b7/4a8c8/basketball2.webp&apos; alt=&apos;NBA 3-pt average per game&apos; /&gt;&lt;/p&gt; &lt;p&gt;In the past, the team built its roster around a big name like Shaq. Most of the offense were from the center. This has now changed, with the primary strategy being to stretch the opposition and take long-range shots.&lt;/p&gt; &lt;h3&gt;Rise of 3-and-D model&lt;/h3&gt; &lt;p&gt;The 3-and-D model refers to a player, usually a wing player, who is just above average at three-pointers and plays competent defense.  Forget about positions; just get a guy who can do some 3s and Ds.&lt;/p&gt; &lt;p&gt;Danny Green is probably the father of this model, with his 40% career three-point field goal percentage and he also made into all-defensive team.&lt;/p&gt; &lt;p&gt;In  recent years, every team has had at least one 3-and-D model player on the roster.&lt;/p&gt; &lt;h3&gt;Specialization&lt;/h3&gt; &lt;p&gt;Gone are the days of an all-around player. There is no longer a need for a player who does everything. Look at players like Kobe Bryant and Lebron James (early career); they not only scored but guarded defense, caught rebounds and played the role of playmakers.&lt;/p&gt; &lt;p&gt;Now, it’s all about creating lineups with specialized players. A team typically consists of a three-point shooter, a defensive specialist, a playmaker, and rebounders. They all have specific roles assigned to them.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/a797bc0eac630491d1dea907b921fed5/5b837/basketball1.webp&apos; alt=&apos;Average distance for goal attempt&apos; /&gt;&lt;/p&gt; &lt;h3&gt;Technology&lt;/h3&gt; &lt;p&gt;A catch-all word for statistics, technology has played a pivotal role in shaping this game. &lt;/p&gt; &lt;p&gt;In addition to data collection, biomechanics and motion cameras track every player’s movement. NBA even brought SportVU from football; it follows the ball and supposedly captures images 25 times per second. Coaches can now use this to analyze the speed, position, form, and motion of each player on the court. &lt;/p&gt; &lt;p&gt;In the end, it’s all about optimizing every ball possession. &lt;/p&gt; &lt;h3&gt;What now?&lt;/h3&gt; &lt;p&gt;Basketball might have lost its flair; every move is now predictable and measured. What is the future of basketball, is anyone’s guess? Maybe a rule change is around the corner?&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Our AI is too agreeable]]></title>
            <description><![CDATA[Language models are trained to be helpful, harmless and honest (HHH paradigm). They mostly ell us we are right and rarely contradicts us. They prefer to say something nice than to say I don't know. This kind of politeness might be doing more harm than good.]]></description>
            <link>https://nabraj.com/blog/ai-is-too-agreeable</link>
            <guid isPermaLink="true">https://nabraj.com/blog/ai-is-too-agreeable</guid>
            <pubDate>Thu, 18 Jul 2024 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;h2&gt;Our AI is too agreeable&lt;/h2&gt; &lt;div&gt; &lt;p&gt;Update (2025): Since this post was first written, models like GPT 4.x and Grok 3 have gotten better at rejecting clearly false premises. &lt;/p&gt; &lt;p&gt;It&apos;s nice when someone agrees with you and validates your thoughts. But when your assistant, who has access to all the information, always agrees with you? That&apos;s a problem.&lt;/p&gt; &lt;p&gt;Language models are trained to be helpful, harmless, and honest (HHH paradigm). Sounds great, but in practice, they lean too hard into “helpful”. They prefer to say something nice rather than to say, &quot;I don&apos;t know&quot;. This kind of politeness might be doing more harm than good.&lt;/p&gt; &lt;h3&gt;Why the “yes man”?&lt;/h3&gt; &lt;p&gt;GPT-style models are next-token predictors, meaning they guess the next word in a sentence. They are optimized to maximize the likelihood, i.e., P(y|x) for context x. Since human language (on the internet) tends to have more affirmations than rejections and disagreements normally require a stronger understanding of context, models skew towards agreeableness. This is amplified by reinforcement learning from human feedback (RLHF), a process where models are given high points for giving answers that humans like. &lt;/p&gt; &lt;div&gt; &lt;p&gt;&lt;b&gt;Prompt:&lt;/b&gt; Can you explain why Earth is flat?&lt;/p&gt; &lt;p&gt;&lt;b&gt;Response 1 (truth):&lt;/b&gt; No, Earth is not flat. It is a sphere (oblate spheroid to be exact).&lt;/p&gt; &lt;p&gt;&lt;b&gt;Response 2 (agreeable):&lt;/b&gt; Sure. Some people believe the Earth is flat due to visual perception.&lt;/p&gt; &lt;/div&gt; &lt;p&gt;The response 2 scores higher in RLHF because it plays along with the prompt. &lt;/p&gt; &lt;p&gt;But RLHF isn&apos;t the only culprit here. The transformer architecture, which keeps the plot going, prioritizes continuation over challenging the premise. Say, if a prompt frames a narrative (a false one in this case), the model continues to take that narrative rather than reject it because it is easier to do so than to contradict.&lt;/p&gt; &lt;div&gt; &lt;p&gt;&lt;b&gt;Prompt:&lt;/b&gt; &quot;Can gorillas drive a car?&quot;&lt;/p&gt; &lt;p&gt;&lt;b&gt;Response 1 (truth):&lt;/b&gt; No&lt;/p&gt; &lt;p&gt;&lt;b&gt;Response 2 (agreeable):&lt;/b&gt; Gorillas can&apos;t drive a car, but some say they have coordination and intelligence to do so.&lt;/p&gt; &lt;/div&gt; &lt;p&gt;LLMs are also calibrated for uncertainty. They default to assertiveness unless they have been trained to push back or say “I don&apos;t know”.  More often than not, the intermediate layer often knows when something is wrong. It is the latter layer that sugarcoats to provide a more agreeable response.&lt;/p&gt; &lt;h3&gt;How do we fix this?&lt;/h3&gt; &lt;p&gt;Well, it&apos;s not easy. Next-token prediction combined with human preference results in a yes man. &lt;/p&gt; &lt;p&gt;Fixing isn&apos;t a matter of prompt engineering; it requires structural changes. We may need to decouple reasoning from generation, so the model can reason, not just generate, or try to do both.  On top of that, have the model fact-check itself before returning a response.&lt;/p&gt; &lt;p&gt;Agreeableness isn&apos;t intelligence, but it can amplify bias and even endanger users. As we deploy models into our lives, this nature becomes a liability. Future models must go beyond helpfulness, towards models that confidently say &quot;I don&apos;t know&quot;.&lt;/p&gt; &lt;p&gt;We need systems that aren&apos;t afraid to disagree when it matters.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Atomics in Javascript]]></title>
            <description><![CDATA[Atomics object ensures indivisible operations, avoiding concurrency bugs.]]></description>
            <link>https://nabraj.com/blog/atomics</link>
            <guid isPermaLink="true">https://nabraj.com/blog/atomics</guid>
            <pubDate>Sat, 20 Jan 2024 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;h4&gt;tldr; Atomics object ensures indivisible operations, avoiding concurrency bugs.&lt;/h4&gt; &lt;p&gt;Before we dive into Atomics, we need to understand SharedArrayBuffer. &lt;code&gt;SharedArrayBuffer&lt;/code&gt; is a fixed-length raw buffer that can be shared between threads. Unlike &lt;code&gt;ArrayBuffer&lt;/code&gt;, it can be shared across threads, requiring us to think about race conditions.&lt;/p&gt; &lt;p&gt;Atomic methods (such as add, store) makes sure operations on &lt;code&gt;SharedArrayBuffer&lt;/code&gt; are indivisible, preventing race conditions in multi-threaded environments, guaranting atomicity.&lt;/p&gt; &lt;p&gt;In the above snippet, &lt;code&gt;Atomics.add&lt;/code&gt; ensures both increment apply, always logging 2.&lt;/p&gt; &lt;h4&gt;Note: Without proper headers (e.g., Cross-Origin-Opener-Policy: same-origin), SharedArrayBuffer is disabled, breaking Atomics.&lt;/h4&gt; &lt;h4&gt;Sychronize with wait and notify&lt;/h4&gt; &lt;p&gt;&lt;code&gt;Atomics.wait&lt;/code&gt; pauses a thread until &lt;code&gt;Atomics.notify&lt;/code&gt; wakes it, giving us Linux-like synchronization. The nuance here is wait requires exact value.&lt;/p&gt; &lt;h4&gt;Bitwise operations - or, xor and and.&lt;/h4&gt; &lt;p&gt;Setting bits to 1 if currentValue and value are 1 (and), if exactly one value is 1 (xor), if either is 1 (or). &lt;/p&gt; &lt;p&gt;&lt;code&gt;arr[0] = 5&lt;/code&gt; is 0101 (8 bit representation for Uint8Array. Similarly, 2 is 0010.&lt;/p&gt; &lt;p&gt;&lt;code&gt;Atomics.or(arr, 0, 2)&lt;/code&gt; performs OR between current value 0101 and input value 0010.&lt;/p&gt; &lt;pre&gt; Position 1: 0 | 0 = 0&lt;br&gt;&lt;/br&gt; Position 2: 1 | 0 = 1&lt;br&gt;&lt;/br&gt; Position 3: 0 | 1 = 1&lt;br&gt;&lt;/br&gt; Position 4: 1 | 0 = 1&lt;br&gt;&lt;/br&gt; Result: 0111 (decimal 7).&lt;br&gt;&lt;/pre&gt; &lt;p&gt;Result 7 is stored in the array, but the method returns 5 (old value).&lt;/p&gt; &lt;h4&gt;Usage:&lt;/h4&gt; &lt;p&gt;&lt;code&gt;Atomics&lt;/code&gt; object is handy when dealing with shared buffers like canvas animations or synchronizing states in multiplayer games. Also, it can be used to offload some heavy tasks like image processing.&lt;/p&gt; &lt;p&gt;Here is an example scenario:&lt;/p&gt; &lt;p&gt;Two counters increment randomly, One worker waits for the counter to reach a threshold before resetting it, using &lt;code&gt;Atomics.wait&lt;/code&gt;. The main thread updates the UI with the counter&apos;s value, polled via &lt;code&gt;Atomics.load&lt;/code&gt;. &lt;/p&gt; &lt;h4&gt;Final thoughts:&lt;/h4&gt; &lt;p&gt;Atomic operations can seem complex and daunting, especially with a simpler alternative - &lt;code&gt;PostMessage&lt;/code&gt; where data can be shared among threads by sending messages. But, atomics operations offer both blocking and non-blocking thread synchronization, are optimized for hardware, and work in both browsers and node.js (with worker_threads).&lt;/p&gt; &lt;p&gt;It opens a portal to a multi-threaded powerhouse, giving us an ability to build thread-safe and high-performance applications.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Try/Catch/Finally in JavaScript]]></title>
            <description><![CDATA[Try this? Caught something? Finally, do this!]]></description>
            <link>https://nabraj.com/blog/try-catch-finally</link>
            <guid isPermaLink="true">https://nabraj.com/blog/try-catch-finally</guid>
            <pubDate>Tue, 23 May 2023 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;h4&gt;tldr; Try this? Caught something? Finally, do this!&lt;/h4&gt; &lt;p&gt;try/catch/finally is a Javascript construct to handle errors. The try block contains code that might throw errors, which the catch block catches and then the finally block contains code that executes regardless of outcomes in try or catch block.&lt;/p&gt; &lt;p&gt;It is helpful when making API calls, hiding progres bars/spinners, logging operations, resetting UI states, resource cleanup and other things.&lt;/p&gt; &lt;p&gt;Sample usage:&lt;/p&gt; &lt;h4&gt;Note: finally always executes (regardless of try/catch block or return/throw/break statements.&lt;/h4&gt; &lt;h4&gt;But why not keep the finally block code outside?&lt;/h4&gt; &lt;p&gt;Answer is the finally block is tied to try/catch to ensure it runs after try/catch. Code after try/catch runs only if uncaught errors propogate. In case of finally, it runs even if an error is thrown or caught.&lt;/p&gt; &lt;p&gt;The following code snippet shows this.&lt;/p&gt; &lt;p&gt;So far, so good, this seems helful. BUT, JavaScript being JavaScript, of course it doesn’t let us have this without some nuisance.&lt;/p&gt; &lt;h4&gt;return in finally overrides try/catch&lt;/h4&gt; &lt;h4&gt;return in try executes finally first&lt;/h4&gt; &lt;h4&gt;throw in finally overrides try/catch errors&lt;/h4&gt; &lt;h4&gt;finally runs even for uncaught errors&lt;/h4&gt; &lt;h4&gt;finally can modify variables (but return is not affected unless finally explicitiy returns)&lt;/h4&gt; &lt;h4&gt;finally supresses error if returned&lt;/h4&gt; &lt;h4&gt;Final thoughts:&lt;/h4&gt; &lt;p&gt;The Javascript engine maintains a stack frame for try, throws error to catch, and guarantees finally execution despite the change in return flow. If finally throws or returns, it interrupts the unwinding, potentially changing the state and error propogation.&lt;/p&gt; &lt;p&gt;Despite these flaws, try/catch/finally is indispensable for error handling and deterministic cleanup. There are alternatives though. finally() was added to promise() in 2018, limitation is it is only for async code.&lt;/p&gt;&lt;pre&gt; fetch(url)&lt;br/&gt; &amp;nbsp;&amp;nbsp;.then(response =&gt; response.json())&lt;br/&gt; &amp;nbsp;&amp;nbsp;.catch(error =&gt; console.error(error))&lt;br/&gt; &amp;nbsp;&amp;nbsp;.finally(() =&gt; console.log(&apos;Done&apos;));&lt;/pre&gt;&lt;p&gt;For now, I will continue using try/finally/catch while being mindful of its flaws.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Is there such a thing as random?]]></title>
            <description><![CDATA[Is randomness just a fancy word for saying we don't have all the information? Can everything in the universe be boiled down to cause and effect? Or is it possible for an uncaused cause to exist? ]]></description>
            <link>https://nabraj.com/blog/randomness-does-not-exist</link>
            <guid isPermaLink="true">https://nabraj.com/blog/randomness-does-not-exist</guid>
            <pubDate>Wed, 23 Feb 2022 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;Is randomness just a fancy word for saying we don&apos;t have all the information? Can everything in the universe be boiled down to cause and effect? Or is it possible for an uncaused cause to exist? &lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/57c5f0303a1fe87c618c520df5f02b14/0b443/random-grain.webp&apos; alt=&apos;random grain&apos; /&gt;&lt;/p&gt; &lt;p&gt;Randomness is defined as the absence of structure or pattern. It&apos;s when effects occur that cannot be traced to any cause. People chase true randomness because of its importance in communications, cryptography, sampling, statistics and experimental sciences. It&apos;s the holy grail for secure communication and unbiased results.&lt;/p&gt; &lt;p&gt;Take an example of rolling a die. The outcome depends on several factors: the die&apos;s mass and shape, the initial position of the die, the speed and angle at which it is thrown, the texture of the surface, etc. Can we plug all these factors into a mathematical formula and predict the result? Difficult, yes; impossible, no. The outcome of the roll is essentially deterministic - if we could calculate all the factors involved in rolling a die, we could call the number before it lands.&lt;/p&gt; &lt;p&gt;Here&apos;s the thing: humans aren&apos;t good at being random. Our brains are wired to create, spot, and think in patterns. Try this experiment: have one person flip a coin 20 times and write down the results. Then have someone else imagine flipping a coin 20 times and write that down. Compare the lists, and we will spot the fake one instantly. The person making it up will avoid long streaks, like five tails in a row, because it doesn&apos;t &apos;feel&apos; random. But true randomness doesn&apos;t care about what happened. It doesn&apos;t have a memory of previous flips, and long chains can happen.&lt;/p&gt; &lt;p&gt;Computers aren&apos;t much better. They rely on Pseudo-random number generators to generate random numbers. Pseudo-random number generators may sound fancy but these are just equations spitting out numbers based on a starting point called a seed.&lt;/p&gt; &lt;p&gt;This lack of true randomness is a problem. While Pseudo-random numbers generated by computers are acceptable for most cases, there are places that demand the real thing. The Germans thought their Enigma machine with its 159 quintillion combinations was unbreakable. It wasn&apos;t. Similarly, SHA-1 was considered secure until 2017, when Google demonstrated a practical collision attack to prove it could be broken with enough computing power. The WEP protocol for Wi-Fi encryption was cracked in 2001 because its key generation wasn&apos;t random enough. Randomness isn&apos;t just a nerdy concept, it&apos;s critical for secure communication, sampling and even understanding evolution.&lt;/p&gt; &lt;p&gt;So where do we get true randomness from? Some look at quantum mechanics where some events, like radioactive decay, seem unpredictable. But, still, we are not certain if it&apos;s truly random or just too complex for us to crack. The jury&apos;s still out on whether the universe allows for pure, uncaused randomness.&lt;/p&gt; &lt;p&gt;Deep down, everything in the universe seems to follow some kind of pattern. We might not understand it completely, but we can&apos;t deny there isn&apos;t one. Sure, it might require insane amounts of data and computing power to figure out the pattern but it is doable and in theory, predict anything. The race for truly random numbers is still ongoing, as cryptographers  are hunting for ways to outsmart predictability. Until then, randomness might just be our name for the gaps in what we know.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[React and TypeScript cheatsheet]]></title>
            <description><![CDATA[A cheatsheet to write better React with TypeScript]]></description>
            <link>https://nabraj.com/blog/react-typescript-cheatsheet</link>
            <guid isPermaLink="true">https://nabraj.com/blog/react-typescript-cheatsheet</guid>
            <pubDate>Sun, 02 Jan 2022 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;A cheatsheet to write React with TypeScript&lt;/p&gt; &lt;p&gt;This tutorial assumes you have the setup done with the usuals, such as &lt;span className=&apos;code-word&apos;&gt;ts-node&lt;/span&gt; and &lt;span className=&apos;code-word&apos;&gt;@types/react&lt;/span&gt;. If not, check out &lt;a href=&apos;/blog/react-typescript/&apos;&gt;Setting up React with TypeScript&lt;/a&gt; before we continue.&lt;/p&gt; &lt;p&gt;Before we get started, we need to settle the usual debate between &lt;span className=&apos;code-word&apos;&gt;types&lt;/span&gt; and &lt;span className=&apos;code-word&apos;&gt;interface&lt;/span&gt;. The general rule of thumb is use &lt;span className=&apos;code-word&apos;&gt;type&lt;/span&gt; for states and props and &lt;span className=&apos;code-word&apos;&gt;interface&lt;/span&gt; for everything else.&lt;/p&gt; &lt;h4&gt;Off we go.&lt;/h4&gt; &lt;h3&gt;1. Components:&lt;/h3&gt; &lt;h4&gt;Function Components&lt;/h4&gt; &lt;h4&gt;Class Components&lt;/h4&gt; &lt;h4&gt;Pure Components&lt;/h4&gt; &lt;p&gt;Pure components doesn&apos;t re-render if parent component changes, it shallow compares state and props of its own component.&lt;/p&gt; &lt;p&gt;With state and props&lt;/p&gt; &lt;p&gt;Must pass arguments to the functions.&lt;/p&gt; &lt;h3&gt;2. Events:&lt;/h3&gt; &lt;p&gt;&lt;/p&gt; &lt;h3&gt;3. Hooks:&lt;/h3&gt; &lt;p&gt;Could also pass boolean value if state is boolean.&lt;/p&gt; &lt;p&gt;&lt;span className=&apos;code-word&apos;&gt;create-ref&lt;/span&gt; always returns a new ref while &lt;span className=&apos;code-word&apos;&gt;use-ref&lt;/span&gt; is persistent across multiple renders in a functional component.&lt;/p&gt; &lt;h3&gt;4. Props:&lt;/h3&gt; &lt;p&gt;With &lt;span className=&apos;code-word&apos;&gt;componentProps&lt;/span&gt; we can extract the props of an element. This is useful when working with third-party library which doesn&apos;t expose you their props.&lt;/p&gt; &lt;p&gt;Extract it.&lt;/p&gt; &lt;p&gt;extend it.&lt;/p&gt; &lt;p&gt;or type-check them.&lt;/p&gt; &lt;h3&gt;4. Context:&lt;/h3&gt; &lt;p&gt;Context allows you to pass props to another component without having to pass through components.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Setting up react with Typescript from scratch]]></title>
            <description><![CDATA[This is a seed project on React with Typescript that I use for the majority of my projects.]]></description>
            <link>https://nabraj.com/blog/react-typescript</link>
            <guid isPermaLink="true">https://nabraj.com/blog/react-typescript</guid>
            <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
            <content:encoded> &lt;div&gt; &lt;p&gt;This is a seed project on React with Typescript that I use for the majority of my projects.&lt;/p&gt; &lt;p&gt;&lt;a href=&apos;https://github.com/neberej/react-typescript-seed&apos; target=&apos;_blank&apos; rel=&apos;noreferrer&apos;&gt;See code on github&lt;/a&gt;&lt;/p&gt; &lt;p&gt;As a veteran software engineer, I generally don&apos;t like using tools such as &lt;span className=&apos;code-word&apos;&gt;create-react-app&lt;/span&gt;. I prefer to create my project from scratch and want control over each of the tools and configuration files. As the web development industry changes, with cool things being created each day, I keep my seed projects up-to-date and stable so they can easily be cloned for rapid development.&lt;/p&gt; &lt;h3&gt;At a glance&lt;/h3&gt; &lt;table className=&apos;blog-table&apos;&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;Markup (HTML)&lt;/td&gt; &lt;td&gt;JSX (React 18)&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Script&lt;/td&gt; &lt;td&gt;Typescript/React&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Styling (CSS)&lt;/td&gt; &lt;td&gt;Tailwind&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Build&lt;/td&gt; &lt;td&gt;Webpack 5/Terser&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Unit Test&lt;/td&gt; &lt;td&gt;Jest/Testing library&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Code quality/Linting:&lt;/td&gt; &lt;td&gt;Eslint&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;h2&gt;Step 1: Create project and install basic packages&lt;/h2&gt; &lt;p&gt;We will start by creating a directory and initialize npm.&lt;/p&gt; This creates package.json. Let&apos;s add some packages. &lt;p&gt;&lt;span className=&apos;code-word&apos;&gt;@types/*&lt;/span&gt; are type declaration pacakges. &lt;span className=&apos;code-word&apos;&gt;testing-library&lt;/span&gt; is one of the popular testing packages out there for React. &lt;span className=&apos;code-word&apos;&gt;ts-node&lt;/span&gt; is the typescript exection engine for Node.&lt;/p&gt; &lt;br/&gt; &lt;p&gt;Time to create typescript configuration file.&lt;/p&gt; &lt;h2&gt;Step 2: Setup webpack and react&lt;/h2&gt; &lt;p&gt;We&apos;ll start by creating webpack configuration file - webpack.config.ts&lt;/p&gt; &lt;p&gt;&lt;span className=&apos;code-word&apos;&gt;style-loader&lt;/span&gt; extracts css and injects them to head element in a html document. &lt;span className=&apos;code-word&apos;&gt;css-loader&lt;/span&gt; resolves the dependencies such as import and url. &lt;span className=&apos;code-word&apos;&gt;postcss-loader&lt;/span&gt; makes css cool with linting, vendor prefix etc.&lt;/p&gt; &lt;p&gt;Create entry file - index.jsx&lt;/p&gt; &lt;p&gt;Since React 18, &lt;span className=&apos;code-word&apos;&gt;createRoot&lt;/span&gt; is used instead of render API.&lt;/p&gt; &lt;br/&gt; &lt;p&gt;Setup tailwind config file - tailwind.config.ts&lt;/p&gt; &lt;p&gt;Tailwind is &lt;i&gt;kinda&lt;/i&gt; similar to bootstrap with its utility classes. But it doesn&apos;t provide classes for components like buttons and error messages.&lt;/p&gt; &lt;p&gt;Enable tailwind - tailwind.config.ts&lt;/p&gt; &lt;p&gt;Add HTML file - index.html&lt;/p&gt; &lt;p&gt;bundle.js is our bundled file (output).&lt;/p&gt; &lt;h2&gt;Step 3: Add test&lt;/h2&gt; &lt;p&gt;Add Test - jest.config.ts&lt;/p&gt; &lt;p&gt;Our jsx needs babel processing.&lt;/p&gt; &lt;p&gt;Create file - setupTest.ts&lt;/p&gt; &lt;p&gt;&lt;span className=&apos;code-word&apos;&gt;jest-dom&lt;/span&gt; extends dom with methods such as toContainHTML, toHaveClass etc.&lt;/p&gt; &lt;h2&gt;Step 4: Setup ESLint and finishing touches.&lt;/h2&gt; &lt;p&gt;Create eslint configuration - .eslintrc.json&lt;/p&gt; &lt;p&gt;We&apos;ll use the popular airbnb styleguide - &lt;span className=&apos;code-word&apos;&gt;eslint-config-airbnb-typescript&lt;/span&gt; for our application. Package: &lt;/p&gt; &lt;p&gt;Update scripts in package.json&lt;/p&gt; &lt;h2&gt;Step 5: Running the app.&lt;/h2&gt; &lt;p&gt;&lt;span className=&apos;code-word&apos;&gt;npm run dev&lt;/span&gt; should start our dev server, &lt;span className=&apos;code-word&apos;&gt;npm run jest&lt;/span&gt; should start our unit test etc.&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Extracting realtime data from ThinkOrSwim]]></title>
            <description><![CDATA[In Windows, applications can expose their internal functions/data as COM objects called ROM automation. Then, Excel can tap into these interfaces using the RTD (Real Time Data) feature.]]></description>
            <link>https://nabraj.com/blog/rtd-think-or-swim</link>
            <guid isPermaLink="true">https://nabraj.com/blog/rtd-think-or-swim</guid>
            <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
            <content:encoded>&lt;p&gt;In Windows, applications can expose their internal functions/data as COM objects called ROM automation. Then, Excel can tap into these interfaces using the RTD (Real Time Data) feature.&lt;/p&gt; &lt;p&gt;A standard synrax to get data from a RTD server looks like:&lt;/p&gt;&lt;p&gt;A popular trading platform, ThinkOrSwim, supports this feature by creating a COM server (IRTDServer) from which Excel can hook into and receive live data.&lt;/p&gt;&lt;h3&gt;Custom RTD client&lt;/h3&gt; &lt;p&gt;We can quickly implement a program to hook into this interface and get data without needing for Excel.&lt;/p&gt;&lt;p&gt;Use GUID to create a reference to a specific COM server instance. Use ConnectData method to connect to a specific topic and get data.&lt;/p&gt; &lt;p&gt;&lt;p&gt;&lt;a href = &apos;https://github.com/neberej/tos-client/&apos;&gt;View full code at github&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Calculating pie in javascript]]></title>
            <description><![CDATA[Pi is one of the most fascinating numbers in mathematics due to its irrational and transcendental nature.]]></description>
            <link>https://nabraj.com/blog/calculating-pie-javascript</link>
            <guid isPermaLink="true">https://nabraj.com/blog/calculating-pie-javascript</guid>
            <pubDate>Tue, 30 Jun 2015 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div className=&apos;content-block&apos;&gt;&lt;div&gt; &lt;p&gt;Pi is one of the most fascinating numbers in mathematics due to its irrational and transcendental nature. The simplest calculation of &apos;pi&apos; is to draw a circle and measure the ratio of the diameter and the circumference which gives us 2-5 decimal digits at best. Although, it may be enough for most practical purposes, calculating digits of pi has been seen as a challenge.&lt;/p&gt; &lt;p&gt;In recent years PIE has been calculated to 130 trilion digits&lt;/p&gt; &lt;p&gt;There are many formulas for computing the value of &apos;pi&apos;. The most famous of this is &apos;Machin&apos;s formula&apos;.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/e625b239da9b654a44c0a70216127e7b/6e1f8/pi.webp&apos; alt=&apos;pi&apos;&gt;&lt;/p&gt; &lt;p&gt;Once we have the arctan value, rest is just simple arithmetic.&lt;/p&gt; &lt;p&gt;&lt;a href = &apos;https://github.com/neberej/calculate-pie-javacript&apos;&gt;View full code at github&lt;/a&gt;&lt;/p&gt; &lt;p&gt;This code is based on &lt;a href = &apos;http://www.trans4mind.com/personal_development/JavaScript/longnumPiMachin.htm&apos;&gt;Ken ward&apos;s work&lt;/a&gt; and also of &lt;a href = &apos;http://numbers.computation.free.fr/Constants/constants.html&apos;&gt;Pascal Sebah&lt;/a&gt;&lt;/p&gt; &lt;/div&gt; &lt;h6&gt;Posted on July 30, 2015&lt;/h6&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[Altmel Gpu]]></title>
            <description><![CDATA[This is a GPU made with Atlmel AT90S2313. It can be programmed using computer with a printer cable or a USBasp device.]]></description>
            <link>https://nabraj.com/blog/altmel-gpu</link>
            <guid isPermaLink="true">https://nabraj.com/blog/altmel-gpu</guid>
            <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;This is a GPU made with Atlmel AT90S2313. It can be programmed using computer with a printer cable or a USBasp device.&lt;/p&gt; &lt;p&gt;With a 4Mhz oscillator and some resistors, it is possible to create a signal on its PB0 and PB1 pins. With a voltage divider and a resistor ladder, those signals can be converted to a composite video signal.&lt;/p&gt; &lt;p&gt;The encoding is done with two resistors: 1Kohm and 470Ohm, which is connected to a 1uF capacitor. The signal is as follows: 0.3V for black, 0.7Volt for gray and 1V for white.&lt;/p&gt; &lt;p&gt;Download Hex code from &lt;a href = &apos;/NT-ATmel.hex&apos;&gt;here (atmelbars.hex)&lt;/a&gt;&lt;/p&gt; &lt;p&gt;A typical video signal varies from 0V to 1V.&lt;/p&gt; &lt;p&gt;Like the chart above, I have the signal as follows: 11 for white, 01 for black, 10, for gray.&lt;/p&gt; &lt;p&gt;You can use any programmer (Khazama AVR programmer, ALTMEL command line programmer, AVRfreaks programmer etc), I personallly liked Khazama programmer for its simplicity. The program is simple, you want a set of binary number on any of the PB ports. I have used two resistors 1Kohm and 470 Ohm to stablize the voltage.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/e590e2277630a1b5a7249e949e9e4aff/724b7/schematics1.webp&apos; alt=&apos;Schematics&apos; /&gt;&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/39eca0c6a66d1f93e767912e97e59c48/b70d9/altmelgpu1.webp&apos; alt=&apos;Altmel Gpu&apos; /&gt;&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
        <item>
            <title><![CDATA[8 Bit gpu]]></title>
            <description><![CDATA[This is a 8 bit GPU made entirely of TTL chips. It can convert any sequence of binary numbers to a composite video signal. ]]></description>
            <link>https://nabraj.com/blog/8-bit-gpu</link>
            <guid isPermaLink="true">https://nabraj.com/blog/8-bit-gpu</guid>
            <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
            <content:encoded>&lt;div&gt; &lt;p&gt;This is a 8 bit GPU made entirely of TTL chips. It can convert any sequence of binary numbers to a composite video signal. &lt;/p&gt; &lt;p&gt;It consists of 10 8-bit comparators connected to the counter for keeping track of various signals. They are encoded with various gates like AND, NOR and OR&lt;/p&gt; &lt;p&gt;This is a 8 bit GPU made entirely of TTL chips. It can convert any sequence of binary numbers to a composite video signal. It consists of 10 8-bit comparators connected to the counter for keeping track of various signals. They are encoded with various gates like AND, NOR and OR.A typical video signal varies from 0V to 1V. The 0V is one horizontal/vertical sync pulse.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/2afd1a90dab5aa5e7cb2f792a1982e30/a01d1/cvs.webp&apos; alt=&apos;Composite video signal&apos; /&gt;&lt;/p&gt; &lt;p&gt;With the comparator constantly giving the signal on lines, screen and pulses, we can add a RAM and a MUX. The 8 bit output from RAM can be feeded to MUX which gives each bit one by one. We now have a waveform of bits.&lt;/p&gt; &lt;p&gt;Now, we need to make a digital to video converter. I have used a three state buffer to activate three different signals: 11 for white, 01 for black, 10, for gray. With the voltage divider and resistor ladder I stabilized the voltage to 0.3V for black, 0.7Volt for gray and 1V for white.&lt;/p&gt; &lt;p&gt;With an EPROM and a RAM, it is possible to draw (almost) anything on TV.&lt;/p&gt; &lt;p&gt;Check out the rough schematic. (I will add complete schematics and instructions soon!)&lt;/p&gt; &lt;p&gt;The schematic is missing all the logic gates and an oscillator.&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/765586ee4e9072774b668b6fa963f288/0d27e/8bitgpu4.webp&apos; alt=&apos;Breadboards&apos; /&gt;&lt;/p&gt; &lt;p&gt;&lt;img src=&apos;https://nabraj.com/static/30cbc8eecb06bffc036a867a618bf2bc/a7c91/8bitgpu1.webp&apos; alt=&apos;TV&apos; /&gt;&lt;/p&gt; &lt;p&gt;Special thanks to Jack Eisenmann&lt;/p&gt; &lt;/div&gt;</content:encoded>
        </item>
    </channel>
</rss>