<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by MABD-Dev on Medium]]></title>
        <description><![CDATA[Stories by MABD-Dev on Medium]]></description>
        <link>https://medium.com/@mabd.dev?source=rss-fe31979f6118------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*aSEHXR2lcIE1SNhnNBzfoA.png</url>
            <title>Stories by MABD-Dev on Medium</title>
            <link>https://medium.com/@mabd.dev?source=rss-fe31979f6118------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 16:46:34 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@mabd.dev/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building a Vim-Powered Jira Client with Compose Multiplatform & Claude]]></title>
            <link>https://medium.com/@mabd.dev/building-a-vim-powered-jira-client-with-compose-multiplatform-claude-21bbf29477df?source=rss-fe31979f6118------2</link>
            <guid isPermaLink="false">https://medium.com/p/21bbf29477df</guid>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[kotlin-multiplatform]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[vim]]></category>
            <category><![CDATA[kotlin]]></category>
            <dc:creator><![CDATA[MABD-Dev]]></dc:creator>
            <pubDate>Mon, 23 Feb 2026 05:16:01 GMT</pubDate>
            <atom:updated>2026-04-05T23:38:31.790Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VKVyDp84SXPDZ4wTbIVCYA.png" /></figure><blockquote>Jira is powerful — but painfully slow for keyboard-driven workflows.</blockquote><p><em>Find the project on </em><a href="https://github.com/mabd-dev/gira"><strong><em>GitHub</em></strong></a></p><p>As a daily <a href="https://www.vim.org/">vim</a> user, I was never satisfied with the Jira experience. After years of trying to embrace it, I decided to build my own solution: a keyboard-first Jira client powered by a custom Vim engine, built with Compose Multiplatform.</p><p><strong>To be clear</strong>: this is not a Jira replacement. You can still use Jira normally. The app fetches Jira data through the official API and shows it to you in a nicer way, plus vim.</p><h2>The Problem</h2><ul><li>UI changes very frequently</li><li>UI is clunky and slow</li><li>Too many elements in your face, even ones you never use</li><li>Planning is hard</li><li>Mouse-heavy workflow</li></ul><h2>The Goals</h2><p>Have a keyboard-first navigation system, with the option to also use a mouse. Data is fetched from the official Jira API and displayed in a multiplatform application on your desktop or mobile phone.</p><p>Since the data is the same, this allows me to use either this app or Jira whenever I want. Of course, while building this app I would still be using Jira for features the app does not support yet.</p><p>The app UI should be modern, configurable, and support both small screens (phones) and large screens (tablets and laptops). This is powered by Compose Multiplatform. More on that later.</p><h2>The architecture</h2><p>Before starting with UI work, I need a Vim-like engine up and running, at least basic usage for now, improving as I go. I need a way to hit keystrokes on my keyboard and convert those to events. 
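</p><p>Such events can be modeled as a small sealed hierarchy. The sketch below is hypothetical; the real project’s VimAction is richer and these names are only illustrative:</p>

```kotlin
// Hypothetical sketch of the engine's output events (illustrative names,
// not the project's actual API).
sealed interface VimAction {
    data object MoveUp : VimAction
    data object MoveDown : VimAction
    data object MoveToTop : VimAction
    data class AssignTo(val user: String) : VimAction
    data class Error(val message: String) : VimAction
}
```

<p>The UI layer can then react to these semantic events without knowing anything about raw keystrokes.</p><p>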
Like move-up, move-down, click, filter, assign task, and so on. This means I need to listen to keyboard strokes and handle them properly.</p><p>To understand how to do this nicely, we need to look at how vim works and how it is used in the terminal.</p><p>At a high level, the system is composed of:</p><ul><li>VimEngine (mode-aware input processor)</li><li>Mode Parsers (Normal, Command)</li><li>MVI ViewModel layer</li><li>Remote + Local data sources</li><li>Compose Multiplatform UI</li></ul><h3>Vim Modes</h3><p>Vim has many <strong>modes</strong>, like:</p><ul><li><strong>normal</strong>: while navigating a file</li><li><strong>insert</strong>: while writing to a file</li><li><strong>command</strong>: when running commands in a file</li><li><strong>visual</strong>: selecting text in a file</li><li><strong>v-line</strong>: selecting lines in a file</li></ul><p>…and a few more. For now, I will only handle the normal and command modes. Each mode handles keystrokes differently.</p><h3>Normal Mode Parser</h3><p>This mode needs to parse, and optionally handle, keystrokes as soon as they are received. For example, if the user presses j, this should be parsed and understood as move down; k moves up, and so on.</p><p>This is simple; we could do something like this:</p><pre>fun handle(c: Char): VimAction? {<br>  return when (c) {<br>    &#39;j&#39; -&gt; VimAction.MoveDown<br>    &#39;k&#39; -&gt; VimAction.MoveUp<br>    else -&gt; null<br>  }<br>}</pre><p>What happens if I want to handle gg (move to the top of the file)? You might think I can simply add it to the when statement, like this:</p><pre>fun handle(c: Char): VimAction? {<br>  return when (c) {<br>    // ...<br>    &#39;gg&#39; -&gt; VimAction.MoveToTop<br>    // ...<br>}</pre><p>But our handle function only receives one character at a time, so we need to cache previous strokes. Then it becomes something like this:</p><pre>var buffer = StringBuilder()<br>fun handle(c: Char): VimAction? 
{<br>  when (c) {<br>    &#39;j&#39; -&gt; {<br>      buffer.clear()<br>      return VimAction.MoveDown<br>    }<br>    &#39;k&#39; -&gt; {<br>      buffer.clear()<br>      return VimAction.MoveUp<br>    }<br>  }<br><br> buffer.append(c)<br> when (buffer.toString()) {<br>   &quot;gg&quot; -&gt; {<br>     buffer.clear()<br>     return VimAction.MoveToTop<br>    }<br> }<br> return null<br>}</pre><p>There are more complicated cases, like when the user hits g by mistake and then wants to hit j. Step by step, this is what would happen:</p><pre>buffer = &quot;&quot;<br>user clicked: g<br>no match -&gt; buffer = &quot;g&quot;<br>user clicked j<br>no match -&gt; buffer = &quot;gj&quot;</pre><p>At this point, no matter what the user hits, the buffer keeps growing. We could clear the buffer when the user hits esc, but that is not how vim handles it.</p><h4>Partial Pattern Matching</h4><p>If you try this in vim, hitting g then j still moves down. How so? This is called <strong>partial matching</strong>. gj is not a valid keybinding, but j itself is, so g is ignored and j runs.</p><p>If no exact match is found, we try the last n-1 characters of the buffer (n = number of characters in the buffer); if that also has no match, we try n-2, and so on until no characters remain. But if we find a partial match, we return the corresponding vim action and clear the buffer.</p><p>This type of pattern matching is needed in the normal mode parser.</p><h3>Command Mode Parser</h3><p>Here we need a predefined grammar for our commands. 
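</p><p>Before diving into commands, the partial-match fallback from the previous section is worth a quick sketch. This is a simplified, hypothetical illustration, not the project’s actual code:</p>

```kotlin
// Illustrative keybinding table; the real engine maps to VimAction values.
val bindings = mapOf(
    "j" to "MoveDown",
    "k" to "MoveUp",
    "gg" to "MoveToTop",
)

// Try the whole buffer first, then progressively shorter suffixes
// (n-1, n-2, ...) until something matches; otherwise no action.
fun resolve(buffer: String): String? {
    for (start in buffer.indices) {
        bindings[buffer.substring(start)]?.let { return it }
    }
    return null
}
```

<p>With this, a buffer of gj falls back to the j binding, while gg matches exactly.</p><p>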
Let’s start with a simple and familiar one:</p><pre>[verb] [target] (args)</pre><pre>for example:<br>:status done (change task status to done)<br>:assign bob (assign task to bob)</pre><p>In this mode, we don’t want to parse on each keystroke. Instead, we keep appending to the buffer and wait for the user to hit enter to apply the command, or esc to cancel it.</p><p>We should have a predefined vocabulary of verbs, plus verb-to-target mappings, so we know what we can handle and what we can’t.</p><p>Using the power of sealed interfaces, we can model this nicely as follows:</p><pre>sealed interface CommandVerb {  <br>  <br>    fun toVimAction(target: String?, args: List&lt;Arg&gt;): VimAction?  <br>  <br>    data object Status: CommandVerb {  <br>  <br>        override fun toVimAction(target: String?, args: List&lt;Arg&gt;): VimAction? {  <br>            if (target.isNullOrBlank()) return null  <br>            val taskStatus = TaskStatus.getFrom(target) ?: return VimAction.Error(&quot;Unknown status=$target&quot;)  <br>            return VimAction.MoveTaskTo(taskStatus)  <br>        }  <br>    }  <br>  <br>    data object Assign: CommandVerb {  <br>        override fun toVimAction(target: String?, args: List&lt;Arg&gt;): VimAction? {  <br>            if (target.isNullOrBlank()) return null  <br>            return VimAction.AssignTo(target.lowercase())  <br>        }  <br>    }  <br>  <br>    data object Help: CommandVerb {  <br>        override fun toVimAction(target: String?, args: List&lt;Arg&gt;): VimAction {  <br>            return VimAction.ShowHelp  <br>        }  <br>    }<br>    <br>    companion object {  <br>	    fun getFrom(verbName: String): CommandVerb? 
{  <br>	        return when (verbName.lowercase()) {  <br>	            &quot;status&quot; -&gt; Status  <br>	            &quot;assign&quot; -&gt; Assign  <br>	            &quot;help&quot; -&gt; Help  <br>	            else -&gt; null  <br>	        }  <br>	    }  <br>	}<br>}</pre><p>Using a regex, we parse the command and extract the verb, like so:</p><pre>val verb = getVerbFromCmd(cmd)<br>CommandVerb.getFrom(verb)?.toVimAction(target, args)</pre><p>We need to handle edge cases such as:</p><ul><li>What happens if the verb is invalid</li><li>What happens if the target is invalid, and so on</li></ul><h3>Creating The Engine</h3><p>Since our mode parsers are ready, we can now start developing the engine.</p><p>The engine takes a NormalModeParser and a CommandModeParser as parameters; it also needs to know the current vim mode so it can route keystrokes to the right parser.</p><pre>class VimEngine(  <br>    private val normalModeParser: ModeParser,  <br>    private val commandModeParser: ModeParser,  <br>) {  <br>    val mode: VimMode = // something<br>  <br>    private val modeToParser = mapOf(  <br>        VimMode.Normal to normalModeParser,  <br>        VimMode.Command to commandModeParser,  <br>    )  <br>  <br>    fun handleKey(key: VimKey) {  <br>        val parser = modeToParser[mode.value]  <br>        val vimAction = parser?.parse(key)  <br>  <br>	// handle vimAction<br>    } <br>}</pre><p>But wait: how should we change the vim mode? And who is responsible for that?</p><p>What makes the most sense is for the engine to hold the vim mode and expose it as a flow. I decided to have the parser return the mode change; this way, VimAction only represents an action to be performed later by the UI, while the mode change is meant only for the VimEngine to see.</p><p>So I updated the mode parser function to return a ParseResult:</p><pre>data class ParseResult(  <br>    val action: VimAction? = null,  <br>    val nextMode: VimMode? 
= null  <br>)</pre><pre>fun handle(c: Char): ParseResult {<br>// ...</pre><p>Then the handleKey function in VimEngine becomes the following:</p><pre>fun handleKey(key: VimKey) {  <br>  val parser = modeToParser[mode.value]  <br>  val parseResult = parser?.parse(key)  <br>	  <br>  parseResult?.nextMode?.let { nextMode -&gt;  <br>    scope.launch { _mode.emit(nextMode) }  <br>  }  <br>	  <br>  parseResult?.action?.let { action -&gt;  <br>    scope.launch { emit(action) }  <br>  }<br>}</pre><p>What I have currently:</p><ul><li>The user hits keys on the keyboard → they get handled by NormalModeParser</li><li>When the user hits :, NormalModeParser sets nextMode=VimMode.Command in the ParseResult, so the mode switches</li><li>Subsequent keystrokes are handled by CommandModeParser</li></ul><p>So far so good.</p><p>Later I can easily add:</p><ul><li>more parsers</li><li>more keybindings to normal mode</li><li>more commands to command mode</li></ul><pre>        ┌─────────────┐<br>        │  Vim Engine │<br>        └──────┬──────┘<br>               │<br>        ┌──────┴──────────┐<br>        ▼                 ▼<br>┌──────────────┐  ┌──────────────┐<br>│    Normal    │  │   Command    │<br>│     Mode     │  │     Mode     │<br>│    Parser    │  │    Parser    │<br>└──────────────┘  └──────────────┘</pre><h3>Hook To UI</h3><p>Once the engine was stable, the next challenge was integrating it cleanly with the Compose UI layer.</p><p>I am using Compose, and handling keystrokes is straightforward: I listen with the onKeyEvent modifier, get the key, and send it to the ViewModel, which later forwards it to the VimEngine.</p><p>I had a screen with a vertical list of tasks, and I was navigating through them with vim keybindings, switching between modes, and so on. Everything worked.</p><p>But then I wanted to show task details side-by-side with the task list. The standard approach in Compose these days is a multi-pane view: on the left I have the tasks list, and on the right the task details view. 
I did this and it looked nice.</p><pre>┌─────────────────────────────────────────────────┐<br>│                                                 │<br>│   ┌───────────────┐   ┌───────────────────┐     │<br>│   │               │   │                   │     │<br>│   │               │   │                   │     │<br>│   │  tasks list   │   │   task details    │     │<br>│   │               │   │                   │     │<br>│   │               │   │                   │     │<br>│   └───────────────┘   └───────────────────┘     │<br>│                                                 │<br>└─────────────────────────────────────────────────┘</pre><h2>Why I Chose Multiple Vim Engines per Pane</h2><p>When I introduced the multi-pane layout (tasks list + task details), an interesting architectural question appeared:</p><blockquote>Who owns the keyboard behavior when multiple panes are visible?</blockquote><p>Each pane had <strong>very different interaction semantics</strong>:</p><ul><li>Task list → navigation heavy (j, k, gg, filtering…)</li><li>Task details → editing, actions, different commands</li><li>Future panes → unknown behaviors</li></ul><p>I considered three approaches.</p><h4>Option 1 — Dynamic Keymap Switching</h4><p>Switch keybindings whenever focus changes.</p><p><strong>Pros:</strong></p><ul><li>Single engine instance</li><li>Simple mental model initially</li></ul><p><strong>Cons:</strong></p><ul><li>Keymap mutation at runtime</li><li>Harder to reason about state</li><li>Becomes fragile as the number of panes grows</li></ul><p>This felt convenient short-term but risky long-term.</p><h4>Option 2 — Swap Parsers Inside One Engine</h4><p>Keep one engine but replace its parsers based on the focused pane.</p><p><strong>Pros:</strong></p><ul><li>Still one engine</li><li>Some separation of behavior (different parsers for different focus panes)</li></ul><p><strong>Cons:</strong></p><ul><li>Engine becomes focus-aware. 
This prevents it from being reused in other projects</li><li>Parser lifecycle becomes harder to track</li><li>Increased coupling between UI and engine</li></ul><p>This improved separation slightly but still mixed responsibilities.</p><h4>Option 3 — Multiple Vim Engines (Chosen)</h4><p>Each focusable pane owns its own VimEngine instance.</p><p>The ViewModel simply routes keystrokes to the <strong>currently focused pane’s engine</strong>.</p><p><strong>Pros:</strong></p><ul><li>Strong isolation between panes</li><li>Each pane can evolve independently</li><li>No runtime mutations of keymaps</li><li>Simpler mental model per engine</li><li>Future-proof for more panes</li><li>Enables parser sharing when desired</li></ul><p><strong>Cons:</strong></p><ul><li>More engine instances in memory</li><li>Slightly more wiring in ViewModel</li></ul><p>For this application, the tradeoff was clearly worth it.</p><h3>The Key Design Principle</h3><p>The decision was guided by one rule:</p><blockquote>Keyboard behavior is contextual UI state, not global application state.</blockquote><p>By giving each pane its own engine:</p><ul><li>focus becomes the only routing concern</li><li>engines remain pure and predictable</li><li>adding new panes does not increase complexity of existing ones</li></ul><p>In practice, this made the system <strong>much easier to extend</strong> than the single-engine approaches.</p><h3>Why This Matters for Future Growth</h3><p>This design unlocks several things almost for free:</p><ul><li>Different Vim capabilities per pane</li><li>Experimental keymaps in isolated areas</li><li>Plugin-like future architecture</li><li>Potential extraction of the Vim engine as a reusable library</li></ul><p>Most importantly, it keeps the architecture honest: each UI surface owns its own interaction model.</p><h3>Small But Important Optimization</h3><p>Even though I use multiple engines, parsers themselves can still be shared when behavior overlaps. 
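</p><p>As a sketch of the idea (hypothetical names; the real engine’s API is richer), two pane engines can reuse a single parser instance:</p>

```kotlin
// Illustrative sketch only: each pane owns its own engine, while a
// shared parser instance covers the overlapping normal-mode behavior.
class NormalModeParser {
    fun parse(key: Char): String? = when (key) {
        'j' -> "MoveDown"
        'k' -> "MoveUp"
        else -> null
    }
}

class VimEngine(private val normalModeParser: NormalModeParser) {
    fun handleKey(key: Char): String? = normalModeParser.parse(key)
}

// One parser, two isolated engines (one per pane).
val sharedNormalParser = NormalModeParser()
val tasksListEngine = VimEngine(sharedNormalParser)
val taskDetailsEngine = VimEngine(sharedNormalParser)
```

<p>In this simplified sketch the parser is stateless, so sharing it is safe; in practice each parser holds its own buffer, so sharing applies only where that state does not conflict.</p><p>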
This avoids unnecessary duplication while preserving isolation where it matters.</p><pre>                    Key Press<br>                        │<br>                        ▼<br>                ┌────────────────┐<br>                │   ViewModel    │<br>                │ (focus aware)  │<br>                └──────┬─────────┘<br>                       │<br>          ┌────────────┴────────────┐<br>          │                         │<br>          ▼                         ▼<br> ┌─────────────────┐      ┌─────────────────┐<br> │ Tasks List Pane │      │ Task Details    │<br> │   VimEngine     │      │   VimEngine     │<br> └────────┬────────┘      └────────┬────────┘<br>          │                         │<br>          ▼                         ▼<br>   Normal / Command          Normal / Command<br>        Parsers                   Parsers</pre><p>The complete flow looks like this:</p><pre>User presses key<br>        │<br>        ▼<br>┌──────────────────────┐<br>│      Compose UI      │<br>│   onKeyEvent(...)    │<br>└──────────┬───────────┘<br>           │<br>           ▼<br>┌──────────────────────┐<br>│      ViewModel       │<br>│ (focus-aware router) │<br>└──────────┬───────────┘<br>           │ routes by focus<br>           ▼<br>┌──────────────────────┐<br>│      VimEngine       │<br>│  (mode-aware parse)  │<br>└──────────┬───────────┘<br>           │ emits<br>           ▼<br>┌──────────────────────┐<br>│      VimAction       │<br>└──────────┬───────────┘<br>           │ mapped to<br>           ▼<br>┌──────────────────────┐<br>│     ScreenIntent     │<br>└──────────┬───────────┘<br>           │ handled by<br>           ▼<br>┌──────────────────────┐<br>│   ScreenInteractor   │<br>│ (business logic)     │<br>└──────────┬───────────┘<br>           │ produces<br>           ▼<br>┌──────────────────────┐<br>│   Reducer (MVI)      │<br>│   uses currentState  │<br>└──────────┬───────────┘<br>           │ emits<br>           ▼<br>┌──────────────────────┐<br>│      New State       │<br>└──────────┬───────────┘<br>           │<br>           ▼<br>        Compose UI<br>          recomposes</pre><h2>UI</h2><h4>Switching Focus</h4><p>For the ViewModel to know which pane is focused, I created a piece of state for it in the ViewModel. 
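</p><p>A minimal sketch of that focus state (hypothetical types; the real ViewModel exposes this differently):</p>

```kotlin
// Illustrative sketch of focus tracking in the ViewModel layer.
// Pane and the switching rules are simplified stand-ins.
enum class Pane { TasksList, TaskDetails }

class FocusRouter {
    var focused: Pane = Pane.TasksList
        private set

    // enter drills into details, esc returns to the list;
    // everything else leaves focus unchanged.
    fun onAction(action: String) {
        focused = when {
            action == "enter" && focused == Pane.TasksList -> Pane.TaskDetails
            action == "esc" && focused == Pane.TaskDetails -> Pane.TasksList
            else -> focused
        }
    }
}
```

<p>Keystrokes are then routed to the engine belonging to whichever pane is focused.</p><p>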
Then, based on the emitted vim actions, I know which pane is focused.</p><p>For example:</p><ul><li>when the user hits enter on a task in the <strong>tasks list</strong> pane → switch focus to the <strong>task details</strong> pane</li><li>when the user hits esc on the <strong>task details</strong> pane → switch focus back to the <strong>tasks list</strong> pane</li><li>I even went a step further and added vim-like keybindings for this in NormalModeParser:</li><li>ctrl-l: switch focus to the pane on the right (in this case <strong>task details</strong>)</li><li>ctrl-h: switch focus to the pane on the left (in this case <strong>tasks list</strong>)</li></ul><blockquote>Of course, mouse clicks here also work and switch focus properly</blockquote><blockquote><strong>Reminder:</strong> this is not a vim-only app, but a vim-based one, so the mouse still works as expected (screen touches as well on mobile devices)</blockquote><h4>UI Features</h4><ul><li><strong>Tasks List</strong>: auto-scrolls when the user hits j, k, gg, G</li><li>Animated task status updates, with a loading indicator while the status is being updated</li><li>Stacked <strong>Notification</strong> system: shows info, error, and warning notifications at the top-right corner of the app, auto-dismissing after 3 seconds</li><li>Highlights the focused pane</li><li>Popup showing all available keybindings and what each does</li></ul><h4>More Vim Features</h4><p>Here is a list of vim features I also support:</p><ul><li><strong>Repeatable actions</strong>: repeat the last command by hitting .</li><li>In <strong>Command Mode</strong>: arrow-up/arrow-down cycle through previously executed commands</li><li>Each parser has its own buffer</li><li><strong>Vim Engine</strong>: exposes the buffer of the currently active parser, so its content can be shown in the UI</li><li><strong>NormalModeParser</strong>: ships with <strong>default keybindings</strong>, plus <strong>extra keybindings</strong> to support configurable keybindings later</li></ul><h2>Data 
Layer</h2><p>This app is intended to work offline too, so I need a local data source and a remote data source (Jira). Both expose my domain-level models.</p><ul><li>Remote Data Source (interface)</li><li><strong>Real Implementation</strong>: abstracts away Jira models and returns only my domain-level models</li><li><strong>Fake Implementation</strong> for testing</li><li>Local Data Source (interface)</li><li><strong>In-Memory Implementation</strong>: stores and caches data in memory</li><li><strong>DB Implementation</strong> (to be done): saves into a database; needed for <strong>offline</strong> mode support</li></ul><h2>Claude Code</h2><p>It’s 2026; not using AI in the development workflow would be a missed opportunity.</p><p>I used LLM tooling (Claude Code) strategically to accelerate implementation while keeping full architectural ownership and code review responsibility.</p><p>My rule was simple:</p><blockquote>AI can generate — but I design, verify, and own the system.</blockquote><h3>Where AI Helped Most</h3><h4>UI Scaffolding</h4><p>Since the early focus of the project was the interaction model rather than visual polish, I used Claude to scaffold several UI components.</p><p>With well-scoped prompts and clear context, most components were generated correctly in one pass. 
This allowed me to:</p><ul><li>move faster in early iterations</li><li>avoid spending time on repetitive Compose boilerplate</li><li>keep focus on the Vim engine and state flow</li></ul><p>As the product matures, I expect to take more manual control over UX refinement.</p><h4>API Layer</h4><p>The Jira integration layer is something I’ve implemented many times professionally, so it was a good candidate for delegation.</p><p>I provided Claude with:</p><ul><li>Jira API documentation</li><li>my project structure conventions</li><li>interface contracts (real + fake implementations)</li><li>error-handling expectations</li><li>domain model mappings</li></ul><p>Because the constraints were explicit, the generated code was:</p><ul><li>Clean</li><li>Testable</li><li>Idiomatic</li><li>and required only light review adjustments</li></ul><p>This is exactly the kind of work where AI currently provides the most leverage.</p><h4>Unit Testing</h4><p>The Vim engine and parsers have many edge cases, and comprehensive unit testing is essential but time-consuming.</p><p>My workflow was:</p><ol><li>I defined the test scenarios</li><li>Claude generated the test implementations</li><li>I reviewed and refined them</li><li>GitHub Actions enforce them on every PR</li></ol><p>This gave me broad test coverage quickly while maintaining confidence in correctness.</p><h4>What AI Did Not Own</h4><p>The following remained fully manual:</p><ul><li>overall architecture</li><li>Vim engine design</li><li>mode system</li><li>state flow (MVI)</li><li>focus routing model</li><li>concurrency decisions</li></ul><p>AI accelerated the build — but the system design decisions remained human-driven.</p><h3>Takeaway</h3><p>Used carelessly, AI can produce fragile systems.</p><p>Used deliberately, it becomes a powerful force multiplier.</p><p>In this project, the goal was never to replace engineering judgment — only to remove unnecessary friction from the implementation process.</p><h2>Things To Improve</h2><p>This 
is still the first version of the app. The core interaction model is working well, but several areas need to mature before this could be considered production-ready.</p><p><strong>High priority</strong></p><ul><li><strong>Proper Jira API authentication</strong><br>Currently the app uses a simple API token. Supporting OAuth and improving token handling will be required for real-world usage.</li><li><strong>Offline mode support</strong><br>The data layer is designed for it, but the database implementation and sync strategy still need to be completed.</li><li><strong>Smarter cache invalidation</strong><br>Right now caching is basic. As usage grows, I will need more deliberate invalidation and refresh strategies to avoid stale task data.</li></ul><p><strong>Medium priority</strong></p><ul><li><strong>Extract VimEngine as a reusable library</strong><br>The engine is already mostly decoupled. With some cleanup it could become a standalone module usable in other projects.</li><li><strong>More Vim motions and text objects</strong><br>The current implementation focuses on navigation and commands. Expanding motion coverage will improve muscle-memory compatibility for heavy Vim users.</li></ul><p><strong>Longer-term explorations</strong></p><ul><li><strong>Performance tuning under heavy key input</strong><br>As the number of panes and commands grows, I want to measure and optimize keystroke latency and buffering behavior.</li><li><strong>Plugin-style extensibility</strong><br>The multi-engine design opens the door for pane-specific extensions. I’m interested in exploring how far this model can scale.</li></ul><h2>Final thoughts</h2><p>What started as a small experiment has quietly become part of my daily workflow. I now use it at work in every daily standup (I am the scrum master, I can do that 😁).</p><p>Simple operations, like filtering tasks or moving an issue from todo to done, are now muscle memory. 
For example, md (move to done) is often faster than reaching for the mouse and navigating multiple menus.</p><p>Interestingly, my teammates initially assumed I was using some existing tool rather than something custom-built. That reaction alone was a strong signal that the interaction model is heading in the right direction.</p><p>There is still plenty of work ahead, but the core bet is already paying off: <strong>when keyboard interaction is treated as a first-class architectural concern, the entire experience changes.</strong></p><p>The goal was never to replace Jira, only to make working with it finally feel fast.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I Built a Tool to Track My Open Source Contributions]]></title>
            <link>https://medium.com/@mabd.dev/i-built-a-tool-to-track-my-open-source-contributions-b2af92c955e7?source=rss-fe31979f6118------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2af92c955e7</guid>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[github]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[developer-tools]]></category>
            <dc:creator><![CDATA[MABD-Dev]]></dc:creator>
            <pubDate>Mon, 22 Dec 2025 06:39:43 GMT</pubDate>
            <atom:updated>2025-12-24T13:19:00.081Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/617/1*AbZP8L4JOdoJhVdzS8vWdQ.png" /></figure><p>GitHub’s contributions graph is great at showing activity, but it does not answer the question: <strong>which open source projects have I contributed to?</strong></p><p>I wanted to display the open source projects I have contributed to on my personal website. I could do this manually; however, that gets annoying and adds one more thing to remember. <br>I want to show the projects I contributed to, how many PRs were merged, and the lines of code contributed. GitHub does not surface this easily, so I built a tool to do so.</p><h3>The Problem</h3><p>If you contribute to external repositories (projects you don’t own), GitHub buries this info. You can get it manually by searching your PRs, but there is no API endpoint that says “give me all this user’s contributions to external repositories”.</p><p>I wanted:</p><ul><li>List of external projects contributed to</li><li>Number of merged PRs per project</li><li>Commit count and number of lines added/removed (per project)</li><li>JSON output I can feed to my website</li></ul><p>So I created <a href="https://github.com/mabd-dev/gh-oss-stats">gh-oss-stats</a>.</p><h3>The Approach</h3><p>The core insight is to use GitHub’s search query:</p><blockquote>author:USERNAME type:pr is:merged -user:USERNAME</blockquote><p>This finds all pull requests:</p><ul><li>Authored by <strong>you</strong> (<em>author:USERNAME</em>)</li><li>That are <strong>PRs</strong>, not issues (<em>type:pr</em>)</li><li>That are <strong>merged</strong> (<em>is:merged</em>)</li><li>For repos you <strong>don’t</strong> <strong>own</strong> (<em>-user:USERNAME</em>)</li></ul><p>That’s your OSS contribution history in one query.</p><p>The request looks like this:</p><pre>https://api.github.com/search/issues?q=author:mabd-dev+type:pr+is:merged+-user:mabd-dev</pre><p>The output looks like this:</p><pre>{<br>  &quot;total_count&quot;: 20,<br>  &quot;incomplete_results&quot;: false,<br>  &quot;items&quot;: [<br>    {<br>      &quot;url&quot;: &quot;https://api.github.com/repos/qamarelsafadi/JetpackComposeTracker/issues/9&quot;,<br>      &quot;repository_url&quot;: &quot;https://api.github.com/repos/qamarelsafadi/JetpackComposeTracker&quot;,<br>      &quot;labels_url&quot;: &quot;https://api.github.com/repos/qamarelsafadi/JetpackComposeTracker/issues/9/labels{/name}&quot;,<br>      &quot;comments_url&quot;: &quot;https://api.github.com/repos/qamarelsafadi/JetpackComposeTracker/issues/9/comments&quot;,<br>      &quot;events_url&quot;: &quot;https://api.github.com/repos/qamarelsafadi/JetpackComposeTracker/issues/9/events&quot;,<br>      &quot;html_url&quot;: &quot;https://github.com/qamarelsafadi/JetpackComposeTracker/pull/9&quot;,<br>      &quot;id&quot;: 3204496021,<br>      &quot;node_id&quot;: &quot;PR_kwDONQBujs6diLmP&quot;,<br>      &quot;number&quot;: 9,<br>      &quot;title&quot;: &quot;🔧 Refactor: Add Global Theme Support for UI Customization&quot;,<br>      &quot;user&quot;: {...},<br>      &quot;labels&quot;: [...],<br>      &quot;state&quot;: &quot;closed&quot;<br>   },<br>   ...<br>  ]<br>}</pre><p>From there, it’s a matter of:</p><ol><li>Fetching PR details (commits, additions, deletions)</li><li>Enriching with repo metadata (stars, description)</li><li>Aggregating into useful statistics</li></ol><h3>Architecture Decision: Library First</h3><p>I built this as a Go library with a CLI wrapper, not just a CLI tool. 
The core logic lives in an importable package:</p><pre>import &quot;github.com/mabd-dev/gh-oss-stats/pkg/ossstats&quot;<br><br>client := ossstats.New(<br>    ossstats.WithToken(os.Getenv(&quot;GITHUB_TOKEN&quot;)),<br>    ossstats.WithLOC(true), // LOC: lines of code<br>)<br><br>stats, err := client.GetContributions(ctx, &quot;mabd-dev&quot;)</pre><p>This means I can use the same code in:</p><ul><li>The CLI tool (for local use)</li><li>GitHub Actions (automated updates)</li><li>A future badge service (SVG generation)</li><li>Anywhere else I need this data</li></ul><p>The CLI is just a thin wrapper that parses flags and calls the library.</p><h3>Handling GitHub’s Rate Limits</h3><p>GitHub’s API has limits: 5,000 requests/hour for authenticated users, but only 60 requests/hour for the Search API. For someone with many contributions, you can burn through this quickly.</p><p>The tool implements:</p><ul><li>Exponential backoff on rate limit errors</li><li>2-second delays between search API calls</li><li>Controlled concurrency (5 parallel requests for PR details)</li><li>Partial results if rate limited mid-fetch</li></ul><h3>The Output</h3><p>Running the tool produces JSON like this:</p><pre>{<br>  &quot;username&quot;: &quot;mabd-dev&quot;,<br>  &quot;generatedAt&quot;: &quot;2025-12-21T06:46:57.823990311Z&quot;,<br>  &quot;summary&quot;: {<br>    &quot;totalProjects&quot;: 7,<br>    &quot;totalPRsMerged&quot;: 17,<br>    &quot;totalCommits&quot;: 58,<br>    &quot;totalAdditions&quot;: 1270,<br>    &quot;totalDeletions&quot;: 594<br>  },<br>  &quot;contributions&quot;: [<br>    {<br>      &quot;repo&quot;: &quot;qamarelsafadi/JetpackComposeTracker&quot;,<br>      &quot;owner&quot;: &quot;qamarelsafadi&quot;,<br>      &quot;repoName&quot;: &quot;JetpackComposeTracker&quot;,<br>      &quot;description&quot;: &quot;This is a tool to track you recomposition state in real-time !&quot;,<br>      &quot;repoURL&quot;: 
&quot;https://github.com/qamarelsafadi/JetpackComposeTracker&quot;,<br>      &quot;stars&quot;: 94,<br>      &quot;prsMerged&quot;: 2,<br>      &quot;commits&quot;: 14,<br>      &quot;additions&quot;: 181,<br>      &quot;deletions&quot;: 78,<br>      &quot;firstContribution&quot;: &quot;2025-06-14T20:55:24Z&quot;,<br>      &quot;lastContribution&quot;: &quot;2025-07-21T21:39:53Z&quot;<br>    },<br>    ...<br>  ]<br>}</pre><p>This feeds directly into my website’s contributions section.</p><h3>Using It</h3><h4>Installation</h4><pre>go install github.com/mabd-dev/gh-oss-stats/cmd/gh-oss-stats@latest</pre><h4>Basic Usage</h4><pre># Set your GitHub token<br>export GITHUB_TOKEN=ghp_xxxxxxxxxxxx<br><br># Run it<br>gh-oss-stats --user YOUR_USERNAME<br><br># Save to file<br>gh-oss-stats --user YOUR_USERNAME -o contributions.json</pre><h3>Automating with GitHub Actions</h3><p>I run this weekly via GitHub Actions to keep my website updated automatically:</p><pre>name: Update OSS Contributions<br><br>on:<br>  schedule:<br>    - cron: &#39;0 0 * * 0&#39;   # Weekly on Sunday<br>  workflow_dispatch:      # Manual trigger<br><br>permissions:<br>  contents: write<br><br>jobs:<br>  update-stats:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - uses: actions/checkout@v4<br>      <br>      - uses: actions/setup-go@v5<br>        with:<br>          go-version: &#39;1.25&#39;<br>      <br>      - name: Install gh-oss-stats<br>        run: go install github.com/mabd-dev/gh-oss-stats/cmd/gh-oss-stats@latest<br>      <br>      - name: Fetch contributions<br>        env:<br>          GITHUB_TOKEN: ${{ secrets.GH_OSS_TOKEN }}<br>        run: |<br>          gh-oss-stats \<br>            --user YOUR_USERNAME \<br>            --exclude-orgs=&quot;your-org&quot; \<br>            -o data/contributions.json<br>      <br>      - name: Commit changes<br>        run: |<br>          git config user.name &quot;github-actions[bot]&quot;<br>          git config user.email 
&quot;github-actions[bot]@users.noreply.github.com&quot;<br>          git add data/contributions.json<br>          if ! git diff --staged --quiet; then<br>            git commit -m &quot;Update OSS contributions&quot;<br>            git push<br>          fi</pre><p>Now my website always has fresh data without any manual work.</p><h3>Displaying on My Website</h3><p>On <a href="https://mabd.dev">mabd.dev</a>, I read the JSON file and render it. The exact implementation depends on your stack, but the data structure makes it straightforward:</p><ul><li>Loop through the <strong>contributions</strong> array</li><li>Display repo name, stars, PR count</li><li>Show totals from <strong>summary</strong></li><li>Link to the actual repos</li></ul><p>The JSON is the contract; how you display it is up to you.</p><h3>What I Learned</h3><p><strong>GitHub’s Search API is powerful but quirky.</strong> The <em>-user:</em> exclusion syntax does not exclude repos owned by your organization. I had to add custom logic to detect that.</p><p><strong>Library-first design pays off.</strong> Building the core as an importable package meant the CLI came together in under an hour. It also means future tools (like a badge service) can reuse 100% of the logic.</p><h3>What’s Next</h3><p>I’m planning to build a companion service <strong>gh-oss-badge</strong> that generates SVG badges you can embed in your GitHub profile README:</p><pre>![OSS Stats](https://oss-badge.example.com/mabd-dev.svg)</pre><p>Same data, different presentation. The library-first architecture means this service will just import `gh-oss-stats/pkg/ossstats` and add an HTTP layer on top.</p><p>If you want to track your own OSS contributions, give <a href="https://github.com/mabd-dev/gh-oss-stats">gh-oss-stats</a> a try. 
It’s open source (naturally), and contributions are welcome.</p><h4>Resources</h4><ul><li><strong>GitHub API docs:</strong> <a href="https://docs.github.com/en/rest?apiVersion=2022-11-28">https://docs.github.com/en/rest?apiVersion=2022-11-28</a></li><li><strong>GitHub API rate limits:</strong> <a href="https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28">https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28</a></li><li><strong>Authenticating to the REST API:</strong> <a href="https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api?apiVersion=2022-11-28">https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api?apiVersion=2022-11-28</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2af92c955e7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Search Engine from Scratch: The Inverted Index]]></title>
            <link>https://medium.com/@mabd.dev/building-a-search-engine-from-scratch-the-inverted-index-019c599b3c59?source=rss-fe31979f6118------2</link>
            <guid isPermaLink="false">https://medium.com/p/019c599b3c59</guid>
            <category><![CDATA[search-engines]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[build-in-public]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[MABD-Dev]]></dc:creator>
            <pubDate>Sat, 13 Dec 2025 07:40:19 GMT</pubDate>
            <atom:updated>2025-12-13T12:02:24.013Z</atom:updated>
            <content:encoded><![CDATA[<h3>Search Engine from Scratch — Part 1: The Inverted Index</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xqoMZXM0ChyhoWx6lM02lQ.png" /><figcaption>Generated by Gemini</figcaption></figure><p>I started something big — building a <strong>text-based search engine</strong> from scratch 🔍</p><h4>Why This?</h4><ul><li>🧠 Zero experience in search engines = massive learning opportunity</li><li>💡 We use search daily (browser, Spotlight, Windows Search) yet rarely think about how it works</li><li>⚙️ Pure algorithmic challenge, no backends, no APIs, just data structures and efficiency</li></ul><p><strong>The goal</strong>: a fast, pluggable search tool I can hook into other projects.</p><h3>My Current Knowledge</h3><p>As a developer, when I hear “search feature,” the first thing that comes to mind is a simple algorithm:</p><pre>for all texts -&gt; find text that contains word (case insensitive)</pre><p>That’s it. The trivial “contains” algorithm.</p><p>This is usually good enough for a large portion of projects. But what happens when the data is huge, or the query is more than just a single word?<br>Hmmm, this is where it gets complicated, and from here, I had no idea what to do.</p><h3>Information Retrieval</h3><blockquote>Information retrieval (IR) is finding material of an unstructured nature (usually text) that satisfies an information need from within large collections.</blockquote><p>Let me explain with an example. Say you have a list of 100 words and their meanings. When you want to find a word’s meaning, you traverse the list until you find it. But what happens when the list grows to 100,000 words? There’s no way you can traverse the whole list every single time.</p><p>Those 100,000 words are <strong>unstructured</strong> and the collection is <strong>very large</strong>.</p><p>To retrieve information efficiently, we need a better approach. 
As a developer, you probably already know one solution: <strong>sorting</strong>.</p><p>Sort the words alphabetically, and finding any word becomes trivial, even if the collection grows to 10 million entries.</p><p>Search engines apply similar thinking: take a huge collection of data, then use algorithms and data structures to organize it for fast future lookups.</p><h3>Search Engine V1</h3><p><strong>Objective</strong>: choose a folder on your machine and search for any word in it.</p><p>We’ll build our engine around files; let’s call them <strong>documents</strong>:</p><pre>type Document struct {<br> ID   int<br> Path string<br>}</pre><p>We want to search for words across a folder, so ideally our data structure would map each word to the list of files where it appears. This is called an <strong>inverted index</strong>, “inverted” because instead of mapping documents → words, we map words → documents.</p><p>Each location where a word appears is called a <strong>posting</strong> (think of it as “posting” the document to that word’s list):</p><pre>type Posting struct {<br> DocID int<br>}<br><br>type Index map[string][]Posting</pre><p>Now we need to process all files in our folder and extract unique words. This process is called <strong>indexing</strong>.</p><p>The algorithm is straightforward: for each file, split by whitespace, then trim leading and trailing spaces from each word.<br>But a problem appears: you end up with things like (IR), Objective:, backends., }</p><p>So we need to remove symbols and punctuation to get clean words. 
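A minimal tokenizer along those lines might look like the sketch below. This is my own illustration, not the post's actual code: it splits on any rune that is not a letter or digit via `strings.FieldsFunc`, and adds lowercasing as an extra normalization step (the post only mentions case-insensitive matching for the naive search, so treat that as an assumption):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// tokenize splits text on every rune that is neither a letter nor a
// digit, lowercases each piece, and drops empty pieces. A sketch; the
// real project may normalize differently.
func tokenize(text string) []string {
	fields := strings.FieldsFunc(text, func(r rune) bool {
		return !unicode.IsLetter(r) && !unicode.IsDigit(r)
	})
	tokens := make([]string, 0, len(fields))
	for _, f := range fields {
		tokens = append(tokens, strings.ToLower(f))
	}
	return tokens
}

func main() {
	// Punctuation like "(IR)" and "Objective:" is stripped in one pass.
	fmt.Println(tokenize("Information retrieval (IR), Objective: backends."))
	// [information retrieval ir objective backends]
}
```

Treating every non-alphanumeric rune as a delimiter handles all the messy cases above in one pass, at the cost of splitting hyphenated words; that trade-off is easy to revisit later.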
Let’s call these cleaned words <strong>tokens</strong>.</p><p>Now we can build our index:</p><pre>func indexDocument(doc Document, index Index) Index {<br>  fileContent := getFileContent(doc.Path)<br>  tokens := tokenize(fileContent)<br><br>  for _, token := range tokens {<br>    index[token] = append(index[token], Posting{DocID: doc.ID})<br>  }<br>  return index<br>}</pre><p>After indexing all files, we’re ready for queries.</p><h4>Querying</h4><p>For now, let’s keep it simple: searching for a single word.</p><pre>func getPostings(token string) []Posting {<br> return index[token]<br>}</pre><p>That’s it! We get all documents where our token exists.</p><p>This will get more exciting later when we track <strong>positions</strong> within each document where the query appears. Try implementing that yourself 🙂</p><p>You can find the full code on <a href="https://github.com/mabd-dev/search-engine/tree/v1">github</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=019c599b3c59" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Git Worktrees: The Secret Weapon for Running Multiple AI Coding Agents in Parallel]]></title>
            <link>https://medium.com/@mabd.dev/git-worktrees-the-secret-weapon-for-running-multiple-ai-coding-agents-in-parallel-e9046451eb96?source=rss-fe31979f6118------2</link>
            <guid isPermaLink="false">https://medium.com/p/e9046451eb96</guid>
            <category><![CDATA[git-worktree]]></category>
            <category><![CDATA[developer-productivity]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[devtools]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[MABD-Dev]]></dc:creator>
            <pubDate>Tue, 09 Dec 2025 06:56:19 GMT</pubDate>
            <atom:updated>2025-12-12T04:25:52.145Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Wzd_K0t6X-5Mjf1jpBFUJg.png" /><figcaption>Generated by Gemini</figcaption></figure><p>The rise of AI coding agents like Claude Code, OpenAI Codex, and Cursor has transformed how developers write software. But there’s a hidden bottleneck that’s holding back their full potential: <strong>working directory conflicts</strong>.</p><p>What if you could run five AI agents simultaneously, each tackling a different feature branch, without them stepping on each other’s toes? Enter Git worktrees, a powerful yet underutilized feature that’s becoming essential in the age of AI-assisted development.</p><h3>What Are Git Worktrees?</h3><p>Git worktrees allow you to check out multiple branches of the same repository into separate directories — simultaneously. Unlike cloning a repository multiple times, worktrees share a single <em>.git</em> directory, making them lightweight and keeping all your branches in sync.</p><p>Think of it this way: a traditional Git workflow forces you to switch branches within a single directory. With worktrees, each branch gets its own dedicated folder while sharing the same Git history.</p><pre># Create a new worktree for a feature branch<br>git worktree add my-feature feature-branch<br><br># List all active worktrees<br>git worktree list<br><br># Remove a worktree when done<br>git worktree remove my-feature</pre><h3>The AI Agent Problem: Why Worktrees Matter Now</h3><p>Modern AI coding agents whether <strong>Claude Code</strong>, <strong>GitHub Copilot CLI</strong>, or <strong>ChatGPT codex</strong> operate directly in your filesystem. 
They read files, make edits, run tests, and execute commands in your working directory.</p><p>Here’s the challenge: <strong>most AI agents assume exclusive access to the project directory</strong>.</p><p>When you run <strong>Claude Code</strong> (or any other AI agent tool) on a bug fix while simultaneously asking another agent to implement a new feature, chaos ensues:</p><ul><li>Both agents modify the same files</li><li>Uncommitted changes from one task interfere with another</li><li>Context switching becomes a nightmare</li><li>You lose the ability to isolate and review changes cleanly</li></ul><p>Git worktrees solve this elegantly by giving each AI agent its own isolated workspace.</p><h3>Setting Up Worktrees for AI Coding Agents</h3><p>Here’s a practical workflow for running multiple AI agents in parallel:</p><blockquote>The example below assumes you are using a <a href="https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---bare">bare repo</a></blockquote><h4>Step 1: Create Your Worktree Structure</h4><pre># From your main repository<br>cd ~/projects/my-app<br><br># Create worktrees for different AI tasks<br>git worktree add feature-auth feature/authentication<br>git worktree add bugfix-api bugfix/api-error<br>git worktree add refactor refactor/database-layer</pre><h4>Step 2: Launch AI Agents in Separate Terminals</h4><pre># Terminal 1 - Claude Code working on authentication<br>cd ~/projects/my-app/feature-auth<br>claude<br><br># Terminal 2 - Another agent handling the API bugfix<br>cd ~/projects/my-app/bugfix-api<br>codex<br><br># Terminal 3 - Refactoring task<br>cd ~/projects/my-app/refactor<br>opencode</pre><p>Each agent now operates in complete isolation. 
They can make changes, run tests, and even break things temporarily, without affecting the others.</p><h4>Step 3: Review and Merge</h4><p>Once each agent completes its task, you have clean, isolated commits ready for review:</p><pre># Back in your main worktree<br>cd ~/projects/my-app/main<br>git fetch --all<br><br># Review changes from each branch<br>git log feature/authentication --oneline<br>git diff main..bugfix/api-error</pre><h3>Real-World Use Cases for AI Agents with Worktrees</h3><h4>1. Parallel Feature Development</h4><p>Assign different features to different AI agents, each working in its own worktree. A single developer can effectively orchestrate multiple AI agents building out an entire sprint’s worth of features simultaneously.</p><h4>2. AI-Powered Code Review and Refactoring</h4><p>Run one AI agent to implement a feature while another reviews and refactors existing code in a separate worktree. No conflicts, no waiting.</p><h4>3. Test-Driven Development at Scale</h4><p>One worktree for an AI writing tests, another for an AI implementing the code to pass those tests. The test-writer doesn’t see the implementation. The implementer gets a fresh perspective.</p><h4>4. Comparing AI Agent Approaches</h4><p>Curious whether Claude Code or Codex produces better results for a specific task? Create two worktrees from the same starting commit (each on its own branch, since Git won’t check out one branch in two worktrees), let each agent work independently, and compare the outputs.</p><h4>5. Safe Experimentation</h4><p>Let an AI agent experiment with risky changes in an isolated worktree. If the approach fails, simply delete the worktree; your main development environment remains untouched.</p><h3>Beyond AI: Other Powerful Worktree Use Cases</h3><p>Git worktrees aren’t just for AI agents. Here are other scenarios where they shine:</p><h4>Hotfix Without Context Switching</h4><p>You’re deep in feature development when a critical production bug lands. 
Instead of stashing changes or committing half-done work:</p><pre># Create a hotfix branch off main in its own worktree<br>git worktree add -b hotfix/critical-bug hotfix-urgent main<br>cd hotfix-urgent<br># Fix the bug, commit, push, create PR<br># Return to your feature work exactly where you left it</pre><h4>Long-Running Tasks</h4><p>Building a large project? Run the build in one worktree while continuing development in another. Same goes for running test suites that take minutes to complete.</p><h4>Documentation Updates</h4><p>Keep a worktree dedicated to documentation. Update docs without switching away from your code, and keep the documentation branch always ready for quick edits.</p><h4>Comparing Implementations Across Branches</h4><p>Need to reference how something was implemented in a different branch? Open it in a separate worktree instead of switching back and forth.</p><h4>Code Archaeology</h4><p>Investigating a bug that might have been introduced several versions ago? Create worktrees for different release tags and compare implementations side by side.</p><h3>Common Worktree Commands Reference</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F3iI07AkuHV1nNq-ag57WA.png" /></figure><h3>The Future of AI-Assisted Development</h3><p>As AI coding agents become more capable, the ability to orchestrate multiple agents working in parallel will become a key productivity multiplier. Git worktrees provide the foundation for this workflow today.</p><p>Imagine a development setup where:</p><ul><li>One AI agent handles frontend components</li><li>Another manages API endpoints</li><li>A third writes comprehensive tests</li><li>A fourth updates documentation</li></ul><p>All running simultaneously, all isolated, all producing clean, reviewable commits.</p><p>Git worktrees have been around since Git 2.5 (2015), but their relevance has never been greater. 
In the age of AI-assisted development, they’re becoming an essential part of the modern developer’s toolkit.</p><p>Have you tried using Git worktrees with AI coding agents? Share your experience and workflows in the comments below.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e9046451eb96" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stack .add() or .push() ???]]></title>
            <link>https://medium.com/@mabd.dev/stack-add-or-push-a9dad982c3c9?source=rss-fe31979f6118------2</link>
            <guid isPermaLink="false">https://medium.com/p/a9dad982c3c9</guid>
            <category><![CDATA[sd]]></category>
            <category><![CDATA[stack]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[MABD-Dev]]></dc:creator>
            <pubDate>Sat, 22 Feb 2020 15:38:45 GMT</pubDate>
            <atom:updated>2024-07-28T07:23:13.126Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*d0NHhTUihbNfZ8bz6YXQTg.png" /></figure><h3>Stack .add() or .push()???</h3><p>To know what is the difference between them and how they work, we need to take a look at there holder classes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/416/1*fAT1Rd0_9jY0yD0oRNF_LA.png" /></figure><p>Keep in mind that the<strong> Stack</strong> <strong>class</strong> <em>extends</em> <strong>the Vector</strong> <strong>class, </strong>we are going to need that later</p><p>OK, let’s say we have this piece of code in our project:</p><pre>Stack&lt;Object&gt; s = new Stack&lt;Object&gt;();<br>s.push(new Object());<br>s.add(new Object()); <br>// What is the difference?</pre><h3>s.push();</h3><p>It calls the push method in the <strong>Stack class, </strong>which looks like this…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/297/1*GIhbs3yoXw5OeQtcrfb_Pg.png" /><figcaption><strong>Stack Class</strong></figcaption></figure><p>It then calls <em>public</em> addElement() in the <strong>Vector class </strong>and returns the item that we just pushed. To be able to do this <em>for example</em>…</p><pre>Object obj = s.push(new Object());</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/615/1*Yi5EjOGXAX0vvql7jAQPVQ.png" /><figcaption><strong>Image 1 : in Vector Class</strong></figcaption></figure><p>As you can see, it calls another method in the <strong>Vector Class.</strong></p><p><strong>Synchronized</strong> here has to do with threading stuff. 
A topic for later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/595/1*ZVB2mZFf5z1owBoAB7Gstg.png" /><figcaption><strong>Image 2 : in Vector Class</strong></figcaption></figure><p>Finally, it increases the size of the stack and pushes the element.</p><h3>s.add();</h3><p>It calls add() in the <strong>Stack Class</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*MKRBVyX1zTL49oItv1S9qw.png" /><figcaption>Image 3 : in <strong>Vector Class</strong></figcaption></figure><h4>Does this look familiar???</h4><p>Yes, it does.</p><p>This is the same as the addElement() that s.push() calls, <strong>except </strong>it <strong>always</strong> returns true.</p><p>It then calls the add() method in the <strong>Vector Class </strong><em>(‘Image 2’)</em>.</p><h3><strong>To Conclude</strong></h3><p>s.push() -&gt; public addElement() -&gt; private add()</p><p>but</p><p>s.add() -&gt; public add() -&gt; private add()</p><p>Both paths lead to the call of the private method .add(), which adds the element to the <strong>stack</strong>.</p><p>The only difference between these calls is the return value.</p><p>s.push() returns the <strong>object</strong> you are pushing.</p><p>s.add()<strong> always</strong> returns <strong>true</strong>.</p><blockquote>Thanks for reading</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a9dad982c3c9" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>