<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Obie Fernandez on Medium]]></title>
        <description><![CDATA[Stories by Obie Fernandez on Medium]]></description>
        <link>https://medium.com/@obie?source=rss-9e1370f50f6e------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*HBnYc_3J-_5uWc2Sbnx0Jg.jpeg</url>
            <title>Stories by Obie Fernandez on Medium</title>
            <link>https://medium.com/@obie?source=rss-9e1370f50f6e------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 13:29:40 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@obie/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Introducing Manceps]]></title>
            <link>https://medium.com/zar-engineering/introducing-manceps-cf30b35d3fa7?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/cf30b35d3fa7</guid>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[mcps]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Tue, 07 Apr 2026 02:00:25 GMT</pubDate>
            <atom:updated>2026-04-07T02:00:25.971Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*47dA9J4l_MMQDQgK" /></figure><p>At ZAR we’ve been building MCP integrations in Ruby and kept hitting the same problems: HTTP libraries dropping connections that MCP servers expect to stay alive, bolted-on auth, and clients that assume you’re using a specific LLM framework. We also wanted all the latest and greatest features of the MCP spec.</p><p>So we released our own Ruby client, extracted from production code!</p><p><a href="https://github.com/zarpay/manceps">GitHub - zarpay/manceps: Ruby client for the Model Context Protocol (MCP). Latin: one who takes in hand.</a></p><p><strong>Persistent connections.</strong> MCP servers bind sessions to TCP connections. Manceps uses httpx to keep them alive across requests.</p><p><strong>Two transports.</strong> Streamable HTTP for remote servers, stdio for local ones (spawns a subprocess, talks JSON over stdin/stdout). Auto-detects from the URL.</p><p><strong>Auth built in.</strong> Bearer tokens, API key headers, and experimental OAuth 2.1 with discovery, PKCE, and automatic refresh.</p><p><strong>Full spec coverage.</strong> Protocol version negotiation (2025-11-25 down to 2025-03-26), elicitation, notifications, structured tool output, tasks, pagination, exponential backoff with session recovery.</p><p><strong>No LLM coupling.</strong> Pure protocol client. No to_openai_tools(), no framework dependencies. 
Use it with anything.</p><pre>Manceps::Client.open(&quot;https://mcp.example.com/mcp&quot;, auth: Manceps::Auth::Bearer.new(token)) do |client|<br>  client.tools.each { |t| puts t.name }<br>  result = client.call_tool(&quot;search&quot;, query: &quot;hello&quot;)<br>  puts result.text<br>end</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cf30b35d3fa7" width="1" height="1" alt=""><hr><p><a href="https://medium.com/zar-engineering/introducing-manceps-cf30b35d3fa7">Introducing Manceps</a> was originally published in <a href="https://medium.com/zar-engineering">ZAR Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop Memorizing Port Numbers: Introducing dot-test]]></title>
            <link>https://medium.com/zar-engineering/stop-memorizing-port-numbers-introducing-dot-test-9d0148d6bfe5?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/9d0148d6bfe5</guid>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[developer-productivity]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Tue, 10 Mar 2026 14:01:53 GMT</pubDate>
            <atom:updated>2026-03-26T17:08:20.201Z</atom:updated>
            <content:encoded><![CDATA[<p>If you’re a Rails developer juggling multiple projects locally, you know the drill. One app on port 3000, another on 3001, a third on… wait, was it 3002 or 3003?</p><p>I built <strong>dot-test</strong> to kill this problem dead. It gives every Rails project in your projects directory a clean .test domain that just works. http://nexus.test instead of http://localhost:3001. http://empirium.test instead of whatever port you can&#39;t remember.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5z2VDU1x2HmkHBdxqQzqCQ.png" /><figcaption>when I run the sync command on my laptop</figcaption></figure><h3>How It Works</h3><p>dot-test does three things:</p><ol><li><strong>Discovers</strong> your Rails projects automatically by scanning your projects folder (e.g. ~/projects) for apps with config/application.rb</li><li><strong>Assigns</strong> stable port numbers and updates each project’s .env file so bin/dev picks up the right port</li><li><strong>Runs</strong> a tiny DNS server and HTTP reverse proxy so *.test domains resolve to the correct backend</li></ol><p>The entire thing is a single Go binary with zero external dependencies.</p><h3>Getting Started</h3><p>This is my first Go project and the first thing I’ve ever published on Homebrew! Thanks Claude Code!</p><pre>brew tap zarpay/tap<br>brew install dot-test<br>dot-test setup        # one-time macOS DNS config (needs sudo)<br>dot-test sync         # discover projects, assign ports<br>brew services start dot-test</pre><p>That’s it. 
Now http://yourapp.test routes to the right place.</p><p>Running dot-test sync gives you a clear picture of your local environment:</p><pre>3 projects:<br>​<br>  http://zarcto.test -&gt; localhost:3000<br>  http://nexus.test  -&gt; localhost:3001<br>  http://praxis.test -&gt; localhost:3002<br>​<br>Updating .env files...<br>  3 updated</pre><h3>Why I Built It</h3><p>At ZAR, we run multiple Rails apps in development — our main platform, a slew of internal tools, and other supporting services. Remembering which port maps to which app was a minor but constant friction. Tools like Puma-dev exist, but I wanted something simpler and more predictable.</p><p><strong>dot-test</strong> takes the opposite approach from most dev proxy tools. Instead of intercepting process management or requiring you to change how you start your apps, it stays out of the way. You still run bin/dev like normal. dot-test just handles the boring parts: port assignment, DNS, and routing.</p><h3>Smart Port Detection</h3><p>One detail we’re particularly happy with: dot-test doesn’t blindly assign ports. It scans your project configs first — bin/dev, Procfile.dev, docker-compose.yml, even TOML manifests — to find hardcoded port assignments. If your app already has a port it expects, dot-test respects that and builds around it.</p><p>Over time, I’ll happily add additional kinds of apps and frameworks so that it works with more than just Rails apps. The goal is that you can introduce dot-test into an existing multi-app setup without breaking anything.</p><h3>Under the Hood</h3><p>The implementation is deliberately minimal:</p><ul><li><strong>DNS server</strong>: A from-scratch UDP DNS responder that handles *.test queries. No DNS libraries — just raw packet parsing with RFC-compliant label compression. 
It responds with 127.0.0.1 for any .test domain and NXDOMAIN for everything else.</li><li><strong>HTTP proxy</strong>: Go’s stdlib httputil.ReverseProxy strips the .test suffix, looks up the port, and forwards the request. If the backend isn&#39;t running, you get a helpful message:</li></ul><pre>dot-test: nexus.test is not running<br>Start it with: cd ~/projects/nexus &amp;&amp; bin/dev</pre><ul><li><strong>Port persistence</strong>: Mappings are stored in a simple app=port dotfile. Your projects get the same ports every time, so bookmarks and configs stay valid.</li></ul><p>The whole thing compiles to a ~9MB binary that runs as a background daemon.</p><h3>Commands</h3><pre>dot-test sync     # discover projects and assign ports<br>dot-test list     # show current mappings<br>dot-test up       # start the daemon<br>dot-test down     # stop the daemon<br>dot-test setup    # configure macOS DNS resolver<br>dot-test clean    # remove everything</pre><h3>Open Source</h3><p>dot-test is MIT licensed and available at <a href="https://github.com/zarpay/dot-test">github.com/zarpay/dot-test</a>. It’s Go 1.22, zero dependencies, and works on macOS today with Linux support in progress.</p><p>If you’re tired of memorizing port numbers, give it a try. I’ve been running it in my daily workflow at ZAR and it’s one of those small tools that, once you have it, you wonder why you didn’t think of it sooner.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9d0148d6bfe5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/zar-engineering/stop-memorizing-port-numbers-introducing-dot-test-9d0148d6bfe5">Stop Memorizing Port Numbers: Introducing dot-test</a> was originally published in <a href="https://medium.com/zar-engineering">ZAR Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[It might be time to say goodbye to HTML inputs]]></title>
            <link>https://medium.com/zar-engineering/it-might-be-time-to-say-goodbye-to-html-inputs-f37ccf434cc3?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/f37ccf434cc3</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[agentic-applications]]></category>
            <category><![CDATA[ux-design]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Tue, 24 Feb 2026 03:03:23 GMT</pubDate>
            <atom:updated>2026-02-24T03:06:02.691Z</atom:updated>
            <content:encoded><![CDATA[<h4>Why does my web app need forms if my users have Claude Code to interact with it?</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SKCI29cEFC9PWhuIJGYBKQ.png" /><figcaption>I replaced Linear at ZAR with a homegrown experiment tracking system</figcaption></figure><p>Empirium is an experiment-tracking system I built for ZAR, my company. It replaces Linear. Teams use it to manage assumptions, run experiments, record what they learn, track metrics. This is the kind of app that would normally be drowning in text-entry forms. Multi-step wizards for creating experiments. Nested evidence fields for updating assumption confidence levels. Comment dialogs. Learning capture modals.</p><p>I built none of that. I mean, my first version had that stuff. But along the way I figured out that the time I was spending on those parts of the app was wasted.</p><p>So in a fit of ebullient audacity, I deleted all the new and edit templates and replaced every “Create” and “Edit” button with a small prompt popup. You click it and it gives you a short prompt that you can paste into Claude Code to get you started adding or modifying whatever it is that you want to change. Claude <a href="https://medium.com/zar-engineering/code-mode-mcp-ac17c2a1038b">calls the right MCP tool</a>, the database gets updated, and then since this is a modern Ruby on Rails application, Turbo Streams update the web page in real time.</p><p>In other words, my web interface is now almost purely a read layer. Dashboards and detail pages and evidence timelines. The only interactive things left are navigation, filtering, and some status toggles. The search feature. Updating your profile and settings. Every other write operation goes through MCP.</p><h3>Really?</h3><p>Yeah. Everyone at ZAR uses Claude Code. Literally, all day, every day, for everything they work on. That’s the crux of why this works. I like to say that we’re living in 2028. 
The rest of you will get there eventually. Probably.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/0*_XtbLfGQKSGfIFfK.jpg" /></figure><h3>Laziness, Impatience, and Hubris</h3><p>Complex webapps are expensive to build and get right. Lots of design time, good judgment, and taste. And iteration. Linear is like the pinnacle of the form, don’t you think? Must have taken tens of thousands of hours of effort to build the user experience for that beast.</p><p>But giving an agent harness like Claude Code MCP access to your API is easy and it handles complicated structured input perfectly. Master-detail evidence records, nested learnings, conditional fields, whatever. Several of the forms I was contemplating designing for Empirium were going to be multi-section beasts with radio groups, conditional visibility, and validation UX that takes a long time to get right.</p><p>But won’t my users get frustrated by not being able to enter data directly? What hubris!</p><p>Lazy, impatient, and hubristic. That’s what makes me a great programmer.</p><h3>The prompt bubble</h3><p>What makes the UX intuitive is the MCP PROMPT bubble. It renders a small icon that opens a native element with a pre-composed command. One click copies it to clipboard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/971/1*RUE_C5Q8IFnJjMQxgeDIpQ.png" /></figure><p>The icons follow a convention: plus for create, pencil for edit, terminal for general MCP actions, chat for comments, arrow-right for status transitions.</p><p>The prompts themselves are imperative with just enough context. Entity IDs, natural keys, names, URLs (if appropriate). Enough that Claude Code can always find the right resource without asking follow-up questions. The user copies the prompt, then finishes it in their console. And if they don’t supply required attributes, Claude helps them figure that out. 
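</p><p>As a concrete illustration, composing one of these prompts is just string assembly. The sketch below is hypothetical (the helper name and wording are mine, not Empirium’s actual code):</p>

```ruby
# Hypothetical helper in the spirit of the prompt bubbles: build a
# copy-pasteable command with enough context (entity, name, id) that
# Claude Code can locate the right record without follow-up questions.
def mcp_prompt(action:, entity:, name:, id:)
  "Using the Empirium MCP tools, #{action} the #{entity} \"#{name}\" (id: #{id})."
end

mcp_prompt(action: "record new evidence against", entity: "assumption",
           name: "Users prefer chat over forms", id: 42)
# => "Using the Empirium MCP tools, record new evidence against the assumption \"Users prefer chat over forms\" (id: 42)."
```

<p>The user pastes that into Claude Code and supplies the rest conversationally.</p><p>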
The conversation is a vital part of the user experience.</p><h3>But what does it mean?</h3><p>I’ve been building web apps for nearly 30 years. Forms with inputs, selects, radio buttons, checkboxes, and textareas were always the answer to “how does data get into the system.” You render forms, the user fills them in, you submit. It’s so ingrained that I must admit questioning that modus operandi felt kinda weird.</p><p>But when it comes down to it, a form is really just a translation layer between what the user wants and what the database needs. And my users already have a better translation layer open in their terminal. One that they converse with in plain English. So now the form is the slow path. The redundant path.</p><h3>Caveats, since people will ask</h3><p>This is not a pattern for consumer apps. It’s for teams where everyone runs Claude Code or something like it.</p><p>The prompt bubbles are kind of silly, and they won’t get used very much once users understand the modality, but I think their presence matters more than you might think. Without them, new users would be really confused. With them, it’s like immediately, “Oh, okay, I see what you did there.”</p><p>You still need some forms for onboarding and for ceremony-type interactions where the clicking-through is the experience. I also left normal forms in for profile and settings.</p><h3>Be kind, give them an MCP Setup page</h3><p>Ideally you want instructions on how this newfangled user experience works to be very easy to spot in your main navigation. The setup page should have clear instructions and not make them go elsewhere to grab an API key.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/862/1*B7z8T7SnQsXoZDwCuNnJRQ.png" /></figure><h3>Definitely make your views update in realtime</h3><p>Turbo Streams are what make the read-only UI feel alive. 
Without broadcasts from model callbacks, you’d be telling people to refresh their browser, which would kill the whole thing.</p><h3>Final thoughts</h3><p>One of my favorite aspects of these interesting times we’re living in is just how innovative/brazenly cocky you can be with design decisions like the one I just described. Because the cost is minimal!</p><p>The entire transformation I just described was five commits on a low-key Friday night. The big rewrite, then prompt bubble polish, then z-index fixes (always z-index), then cleaning up dead form code. Testing. Deployed.</p><p>What a time to be alive!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f37ccf434cc3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/zar-engineering/it-might-be-time-to-say-goodbye-to-html-inputs-f37ccf434cc3">It might be time to say goodbye to HTML inputs</a> was originally published in <a href="https://medium.com/zar-engineering">ZAR Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Code Mode MCP]]></title>
            <link>https://medium.com/zar-engineering/code-mode-mcp-ac17c2a1038b?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/ac17c2a1038b</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[mcps]]></category>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Sun, 22 Feb 2026 16:29:35 GMT</pubDate>
            <atom:updated>2026-02-22T16:29:35.719Z</atom:updated>
            <content:encoded><![CDATA[<h4>Give Your AI Agent an Entire API in 1,000 Tokens</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wBnvtg3DFiouUZiCHjJ10A.png" /><figcaption>Over the last couple weeks I’ve built a full experiment-centric replacement for Linear called Empirium.</figcaption></figure><p>Empirium, our in-house experiment-tracking platform at ZAR, exposes 42 MCP tools. Every time Claude Code connects, all 42 tool definitions would get loaded into its context window. That’s about 18,000 tokens of JSON schema — 427 tokens per tool — burned before the model does a single useful thing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/547/1*2aTyC5ylNWIC7WJbwbWT7Q.png" /></figure><p>Cloudflare <a href="https://blog.cloudflare.com/code-mode-mcp/">faced the same problem at a much larger scale</a> — 2,500 API endpoints, over 2 million tokens of tool definitions. Their solution was elegant: replace all those tools with just two. One to search, one to execute. They called it “Code Mode.”</p><p>I implemented it for Empirium last night, using Rails and ActionMCP. Opus 4.6 mostly one-shotted the change after being pointed at the Cloudflare blog post, and getting some tips from me on how to approach the problem in Ruby.</p><p>So without further ado, here’s how my implementation of Code Mode works, and why every MCP server with more than a handful of tools should consider it.</p><h3>The Problem with Many Tools</h3><p>MCP (Model Context Protocol) is straightforward. You define tools with JSON Schema, the AI model reads those schemas, and calls the tools by name with arguments. It works beautifully when you have five tools. It works okay with fifteen. At forty-five, you’re paying a serious tax. At hundreds or thousands of tools, it just doesn’t work.</p><p>Every MCP tool definition includes its name, description, and full input schema with property types, descriptions, and required fields. 
The model has to parse and retain all of this context to decide which tool to call. Most of it is wasted — a given interaction might use three or four tools at most, if any at all.</p><h3>Two Tools to Rule Them All</h3><p>The pattern is dead simple. You expose two meta-tools:</p><p><strong>code_search</strong> — accepts a Ruby code string, executes it in a read-only sandbox that exposes a tools method returning the full tool catalog. The AI writes Ruby to filter, search, and explore.</p><p><strong>code_execute</strong> — accepts a Ruby code string, executes it in a sandbox that can invoke any MCP tool via call_tool(name, **args). The AI writes Ruby to chain calls, transform results, and build workflows.</p><p>The key insight is that LLMs are very good at writing code. Better, in fact, than they are at navigating large JSON schemas to pick the right tool and assemble the right arguments. Give them a programming environment and they figure it out.</p><h3>The Implementation</h3><h4>The Sandbox</h4><p>The foundation is a BasicObject clean room. BasicObject in Ruby gives you almost nothing — no Kernel, no Object methods, no file access, no shell access. 
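</p><p>How bare BasicObject is can be checked directly in IRB; this standalone snippet (not from the Empirium codebase) shows its entire public surface:</p>

```ruby
# BasicObject has no Kernel, no Object methods, no #puts, no #send --
# its whole public instance API is a handful of core methods.
p BasicObject.public_instance_methods.sort

# Object, by contrast, drags in dozens of methods plus all of Kernel,
# which is exactly what you don't want reachable from untrusted code.
p Object.public_instance_methods.size > BasicObject.public_instance_methods.size  # => true
```

<p>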
You build up only what you need:</p><pre>module CodeMode<br>  class Sandbox &lt; BasicObject<br>    FORBIDDEN_PATTERNS = [<br>      /\bsystem\b/, /\bexec\b/, /\b`/, /\bFile\b/, /\bDir\b/,<br>      /\brequire\b/, /\beval\b/, /\bProcess\b/, /\bKernel\b/,<br>      /\bsend\b/, /\bconst_get\b/, /\bObjectSpace\b/, /\bENV\b/<br>    ].freeze<br>​<br>    TIMEOUT_SECONDS = 5<br>​<br>    def evaluate(code)<br>      if (violation = check_forbidden(code))<br>        return { error: &quot;Forbidden pattern detected: #{violation}&quot; }<br>      end<br>​<br>      result = ::Timeout.timeout(TIMEOUT_SECONDS) { _eval(code) }<br>      { result: result }<br>    rescue ::Timeout::Error<br>      { error: &quot;Code execution timed out after #{TIMEOUT_SECONDS} seconds&quot; }<br>    rescue ::StandardError =&gt; e<br>      { error: &quot;#{e.class}: #{e.message}&quot; }<br>    end<br>  end<br>end</pre><p>Regex guards catch obvious escape attempts before evaluation. Timeout catches infinite loops. BasicObject blocks access to the rest of Ruby’s standard library. It’s not a Turing-complete security boundary — it’s a practical one. Empirium is for internal use only. Your mileage may vary. 
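</p><p>The regex layer is easy to exercise on its own. Here is a standalone check in the same spirit as the guard above (pattern list abridged, backtick pattern simplified):</p>

```ruby
# Abridged version of the forbidden-pattern guard, runnable outside
# the sandbox. Returns the first pattern the code trips, or nil.
FORBIDDEN_PATTERNS = [
  /\bsystem\b/, /\bexec\b/, /`/, /\bFile\b/, /\bDir\b/,
  /\brequire\b/, /\beval\b/, /\bENV\b/
].freeze

def check_forbidden(code)
  FORBIDDEN_PATTERNS.find { |pattern| pattern.match?(code) }
end

check_forbidden('File.read("/etc/passwd")')   # => /\bFile\b/
check_forbidden('tools.map { |t| t[:name] }') # => nil
```

<p>A word-boundary regex is a speed bump, not a wall (string-building tricks can slip past it), which is why the sandbox and the timeout do the real work.</p><p>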
(But keep reading to learn about an additional layer of security using “LLM as judge”.)</p><h3>The Search Sandbox</h3><p>The search sandbox inherits from Sandbox and exposes the tool catalog:</p><pre>class SearchSandbox &lt; Sandbox<br>  def tools<br>    ::ActionMCP::ToolsRegistry.non_abstract.map do |item|<br>      item.klass.to_h<br>    end<br>  end<br>end</pre><p>The ToolsRegistry is picking up the 42 tool definitions that I mentioned earlier.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/957/1*EIDVAdLtEdVK0TQ4y3L1bQ.png" /><figcaption>Example of one of the 42 tools defined using ActionMCP</figcaption></figure><p>Using the SearchSandbox the AI can now write things like:</p><pre>tools.select { |t| t[:name].include?(&quot;experiment&quot;) }.map { |t| t[:name] }</pre><p>And get back a filtered view of just the tools it needs, without loading all 42 definitions.</p><h3>The Execute Sandbox</h3><p>The execute sandbox adds the ability to call tools with user context:</p><pre>class ExecuteSandbox &lt; Sandbox<br>  def initialize(user)<br>    @user = user<br>  end<br>​<br>  def call_tool(name, **args)<br>    ::ActionMCP::Current.set(gateway: ::OpenStruct.new(user: @user)) do<br>      response = ::ActionMCP::ToolsRegistry.tool_call(name, args.stringify_keys)<br>      parse_response(response)<br>    end<br>  end<br>end</pre><p>The user context flows through exactly as it would with a normal MCP tool call. Authentication, audit trails, everything works. 
The AI can now write:</p><pre>teams = call_tool(&quot;teams_list&quot;)<br>alpha = teams[&quot;teams&quot;].find { |t| t[&quot;slug&quot;] == &quot;alpha&quot; }<br>exps = call_tool(&quot;experiments_list&quot;, team_slug: alpha[&quot;slug&quot;])<br>{ team: alpha[&quot;name&quot;], total: exps[&quot;count&quot;],<br>  running: exps[&quot;experiments&quot;].count { |e| e[&quot;status&quot;] == &quot;running&quot; } }</pre><p>Three MCP tool calls, chained together with data transformation, in a single round trip. Without code mode, that’s three separate tool invocations with three LLM reasoning steps in between.</p><h3>The LLM Pre-Scan</h3><p>Before executing code, I run a quick safety check through Google’s Gemini 3 Flash (using my own <a href="https://github.com/OlympiaAI/raix">Raix</a> gem and <a href="https://openrouter.ai/">OpenRouter</a>). This approach is cheap, fast, and adds a semantic layer on top of the regex guards:</p><pre>class CodeScanner<br>  include Raix::ChatCompletion<br><br>  SYSTEM_PROMPT = &lt;&lt;~PROMPT<br>    You are a code safety scanner for an MCP (Model Context Protocol) sandbox environment.<br>    The sandbox allows Ruby code that calls `tools` (to list available MCP tools) and<br>    `call_tool(name, **args)` (to invoke them).<br><br>    Your job: determine if the submitted code is SAFE or UNSAFE.<br><br>    SAFE code:<br>    - Calls `tools` to discover available MCP tools<br>    - Calls `call_tool` to invoke MCP tools with arguments<br>    - Uses basic Ruby (arrays, hashes, strings, iteration, filtering)<br>    - Chains multiple `call_tool` invocations together<br><br>    UNSAFE code:<br>    - Attempts to access the filesystem, network, or shell<br>    - Tries to break out of the sandbox (eval, send, const_get, ObjectSpace, etc.)<br>    - Accesses environment variables or credentials<br>    - Does anything unrelated to Empirium data operations<br><br>    Respond with exactly one word: SAFE or UNSAFE<br>  PROMPT<br>​<br>  def initialize(code)<br>    @code = code<br>    self.model = &quot;google/gemini-3-flash-preview&quot;<br>  end<br>​<br>  def safe?<br>    transcript &lt;&lt; { system: SYSTEM_PROMPT }<br>    transcript &lt;&lt; { user: @code }<br>    response = chat_completion<br>    response.to_s.strip.upcase.start_with?(&quot;SAFE&quot;)<br>  rescue StandardError<br>    true # Fail open — scanner unavailable means skip<br>  end<br>end</pre><p><em>For us the scanner is a nice-to-have, not a gate. </em>It fails open: if the scanner is unavailable, the code still runs, protected only by the regex guards and the BasicObject sandbox. <em>Your use case might want to do the opposite</em>, especially if your MCP tools are open to consumers outside of your company walls.</p><h3>Profile-Based Routing</h3>
<p>I left the original 42 tools on /mcp untouched, probably for no good reason other than I would have had to come up with a different way of documenting and implementing my API if I had gotten rid of them.</p><p>Code mode lives at /mcp_cm as a separate concurrent endpoint. A Rack middleware switches ActionMCP&#39;s thread-local profile:</p><pre>class CodeModeProfile<br>  def initialize(app)<br>    @app = app<br>  end<br>​<br>  def call(env)<br>    if env[&quot;PATH_INFO&quot;]&amp;.start_with?(&quot;/mcp_cm&quot;)<br>      ActionMCP.with_profile(:code_mode) { @app.call(env) }<br>    else<br>      @app.call(env)<br>    end<br>  end<br>end</pre><p>The code_mode profile in config/mcp.yml exposes only the two tools:</p><pre>profiles:<br>  primary:<br>    tools: [all]<br>  code_mode:<br>    tools: [code_search, code_execute]</pre><p>Both endpoints share the same authentication, the same tool implementations, the same database. The only difference is what the AI client sees when it connects.</p><h3>What It Looks Like in Practice</h3><p>Here’s a real interaction. I asked Claude Code to add a random emoji to every assumption title in Empirium using the code mode endpoint. 
One tool call:</p><pre>emojis = %w[🚀 🔥 💡 🎯 ⚡ 🌟 🎲 🧪 🔬 🏆 💎 🌈]<br>​<br>all = call_tool(&quot;assumptions_list&quot;)<br>assumptions = all[&quot;assumptions&quot;]<br>​<br>results = assumptions.map do |a|<br>  emoji = emojis.sample<br>  new_statement = &quot;#{emoji} #{a[&#39;statement&#39;]}&quot;<br>  call_tool(&quot;assumptions_update&quot;, id: a[&quot;id&quot;], statement: new_statement)<br>  { id: a[&quot;id&quot;], emoji: emoji }<br>end<br>​<br>{ updated: results.size, details: results }</pre><p>Twenty assumptions updated in a single round trip. I watched the web page update in realtime as it worked. Blazing fast, like mind-blowingly so.</p><p>Without code mode, that would have been 21 separate tool calls (1 list + 20 updates), each requiring the model to reason about the next step. With code mode, the model writes the loop once and the server executes it.</p><p>Undoing it was equally trivial:</p><pre>all = call_tool(&quot;assumptions_list&quot;)<br>all[&quot;assumptions&quot;].map do |a|<br>  clean = a[&quot;statement&quot;].sub(/\A\p{So}\s*/, &quot;&quot;)<br>  call_tool(&quot;assumptions_update&quot;, id: a[&quot;id&quot;], statement: clean)<br>end</pre><p>The savings you gain with Code Mode compound.</p><p>Every conversation turn that would have listed all 42 tools now lists two. Every multi-step workflow that would have required multiple tool calls and LLM reasoning steps collapses into a single code execution.</p><h3>Should You Do This?</h3><p>If your MCP server has fewer than ten tools, maybe not: the token savings aren’t worth the added complexity. But if you’re north of twenty tools — or if your users routinely chain multiple tools together — code mode pays for itself immediately.</p><p>The implementation is small: under 200 lines of Ruby across four files, plus a middleware and some config. It took me about an hour, including testing. The sandbox pattern is reusable. 
The profile-based routing means I can offer both endpoints simultaneously and let clients choose.</p><p>The deeper principle here is one that keeps showing up in AI application development: don’t make the model navigate complexity when you can give it tools to manage that complexity itself. LLMs write code better than they do almost anything else. Set them free.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ac17c2a1038b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/zar-engineering/code-mode-mcp-ac17c2a1038b">Code Mode MCP</a> was originally published in <a href="https://medium.com/zar-engineering">ZAR Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Personal CTO Operating System with Claude Code]]></title>
            <link>https://obie.medium.com/building-a-personal-cto-operating-system-with-claude-code-b3fb9c4933c7?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/b3fb9c4933c7</guid>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[cto]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[claude-code]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Sun, 25 Jan 2026 21:06:11 GMT</pubDate>
            <atom:updated>2026-01-25T21:06:11.235Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*0c6MooM5I_-azmjC.jpeg" /></figure><h4><em>How I use AI as an executive assistant to manage 10 engineers, ship code, and operate at the C-level simultaneously.</em></h4><p>Three weeks ago, I started my new role as full-time CTO at ZAR. The first thing I did was to create a new folder on my laptop, fire up Claude Code, and give it a fairly simple prompt:</p><pre>Create me a markdown-based system where I can regularly run you,<br>Claude Code, that lets me be the best world-class CTO possible. <br>I&#39;m planning to use you as my personal executive assistant and<br>CTO expert coach. Document everything in a series of folders as<br>you see fit. We&#39;re in plan mode, feel free to interview me if you<br>have any questions about this job.</pre><p>I didn’t design a folder structure ahead of time. I didn’t search for prompts or templates for this sort of thing. I just trusted the very capable Opus 4.5 model to figure it out. This post documents what I’ve learned so far, for anyone who wants to build something similar.</p><h3>Practical results after three weeks</h3><p>First let me communicate <em>why</em> you might want to try this:</p><ul><li>Meeting preparation: I’m prepared for every meeting. Every day. Full context on attendees, relevant history, suggested talking points.</li><li>Clarity on priorities: At any moment, no matter how distracted or stressed I might be, I can trust Claude to help me get centered and figure out what to focus on next.</li><li>Decision history: When someone eventually asks about a previous decision, I will have the full context. Not a vague memory. 
The actual alternatives considered, why I chose what I did, and who was involved.</li><li>Execution without context switching: Posting to Slack, updating calendars, drafting blog post ideas, tracking action items… All handled through natural conversation.</li></ul><p>I’ve had executive assistants in the past. Good ones. This system is better. It never forgets anything, never needs to be brought up to speed, and operates at the speed of conversation. Did I mention it’s always available?</p><h3>How I use it to manage my day-to-day</h3><p>I keep at least one Claude Code session open in this directory at all times. During parallel workstreams (preparing for multiple meetings, researching candidates, writing documents) I’ll open additional tabs with concurrent sessions. I just checked, and at the moment I have 9 concurrent Claude Code sessions open in separate tabs within one terminal window dedicated to just my assistant.</p><p>Even though I am a very fast touch typist, I am trying as much as possible to force myself to use voice input (Wispr Flow) for anything longer than a sentence. Stream of consciousness works great.</p><p>Short prompts don’t even need to be full thoughts. “Morning sync”, “Next 1:1”, “Post summary of this to Slack”. 
Claude is pretty good at figuring out what I mean from context.</p><h3>Morning routine</h3><p>One of the first benefits was being able to begin my work day by saying “good morning,” to which Claude responds with the following steps:</p><ol><li>Reads my weekly focus document</li><li>Checks pending action items</li><li>Fetches my actual calendar items for the day</li><li>Figures out what needs my immediate attention</li></ol><p>It takes about 30 seconds to figure out exactly what my day looks like.</p><p>Eventually, I wondered what Claude would come up with if I standardized my morning routine as a skill.</p><p>This is the prompt it wrote, triggerable with the slash command /morning:</p><pre># Morning Briefing<br><br>Run the morning sync workflow:<br><br>1. Read the weekly focus from `priorities/weekly-focus.md`<br>2. Check `meetings/actions/` for any pending action items<br>3. Check today&#39;s calendar using Google Calendar MCP tools (GOOGLECALENDAR_EVENTS_LIST with today&#39;s timeMin/timeMax in UTC)<br>4. Summarize:<br>   - What&#39;s the focus today based on weekly priorities<br>   - What meetings are scheduled for today<br>   - Any pending action items that need attention<br>   - Any blockers or items requiring immediate attention<br><br>Keep the briefing concise and actionable.</pre><h3>Processing meeting transcripts</h3><p>At work we use Gemini to transcribe almost all meetings. At first I was copying the transcripts into the console. Eventually, I got MCP integration with my Google Suite working (see how below) and I was able to automate the process.</p><p>Multiple times a day I invoke a custom skill called /meetsync and Claude automatically:</p><ul><li>Creates meeting notes in the appropriate folder</li><li>Extracts action items with owners</li><li>Updates team roster with anything new learned about people</li><li>Updates any other relevant context files</li></ul><p>I don’t think about where transcripts go. It handles the filing and organization itself. 
Of course, I didn’t write the custom skill either; it was written by Claude itself:</p><pre># Meeting Sync Command<br><br>Sync unprocessed meeting transcripts from Google Calendar/Gemini<br>into the zarcto knowledge system.<br><br>## Instructions<br><br>Execute the following workflow:<br><br>### Step 1: Get Recent Meetings from Google Calendar<br><br>Use the Rube MCP to query Google Calendar for meetings from the <br>last 48 hours that have Gemini notes attachments:<br><br>1. Call `RUBE_SEARCH_TOOLS` with use_case &quot;list calendar events with<br>   attachments&quot;<br>2. Call `GOOGLECALENDAR_EVENTS_LIST` for the primary calendar with:<br>   - timeMin: 48 hours ago (RFC3339 format)<br>   - timeMax: now (RFC3339 format)<br>   - singleEvents: true<br>   - orderBy: startTime<br>   - timeZone: Europe/Amsterdam<br><br>3. Filter results to only meetings that have:<br>   - `attachments` array containing items with `title` containing &quot;Gemini&quot;<br>      or &quot;Notes by Gemini&quot;<br>   - `eventType` of &quot;default&quot; (exclude working locations, etc.)<br><br>### Step 2: Find Unprocessed Meetings<br><br>1. Read the list of existing files in `meetings/notes/` directory<br>2. For each calendar meeting with Gemini notes:<br>   - Extract the date (YYYY-MM-DD) and generate expected filename pattern<br>   - Check if a corresponding note already exists<br>   - If no note exists, add to &quot;unprocessed&quot; list<br><br>### Step 3: Fetch Transcript Content<br><br>For each unprocessed meeting:<br><br>1. Extract the Google Doc ID from the attachment fileUrl or fileId<br>2. Use `GOOGLEDOCS_GET_DOCUMENT_BY_ID` to fetch the document<br>3. 
Extract plain text from the document body using the standard<br>   extraction pattern:<br>   ```<br>   body.content → paragraph.elements → textRun.content<br>   ```<br><br>### Step 4: Process Each Transcript<br><br>For each fetched transcript, follow the standard &quot;Process Meeting<br>Transcript&quot; workflow from CLAUDE.md:<br><br>1. **Create meeting note** in `meetings/notes/YYYY-MM-DD-topic.md`:<br>   - Use the meeting summary/title to derive the topic slug<br>   - Include key discussion points, decisions, and context<br>   - Format similar to existing notes (see examples in the directory)<br><br>2. **Create action items** in `meetings/actions/YYYY-MM-DD-topic.md`:<br>   - Extract actionable items from the transcript<br>   - Group by person responsible<br>   - Use checkbox format: `- [ ] Action item`<br><br>3. **Update team roster** (`team/roster.md`):<br>   - Add any new information learned about team members<br>   - Update skills, interests, or context if relevant<br><br>4. **Update recruiting pipeline** (`recruiting/pipeline.md`):<br>   - Only if meeting involved candidate discussions<br><br>5. **Update other context files** as appropriate:<br>   - `context/architecture.md` for technical decisions<br>   - `priorities/weekly-focus.md` if priorities discussed<br>   - `decisions/` if significant decisions made<br><br>### Step 5: Present Summary<br><br>After processing all meetings, present a summary:<br><br>```<br>## Meeting Sync Complete<br><br>**Processed**: X meetings<br>**Skipped** (already processed): Y meetings<br>**Failed** (permission denied, etc.): Z meetings<br><br>### Newly Processed:<br>1. YYYY-MM-DD Topic Name<br>   - Created: meetings/notes/YYYY-MM-DD-topic.md<br>   - Actions: X items for Y people<br>   - Updates: [list any other files updated]<br><br>2. 
...<br><br>### Action Items Created:<br>- Person A: X items<br>- Person B: Y items<br><br>### Notable Updates:<br>- [Any significant context file changes]<br>```<br><br>## Error Handling<br><br>- If Google Calendar or Docs connection fails, prompt user to reconnect via Rube<br>- If a specific document has permission issues, note it and continue with others<br>- If no unprocessed meetings found, report &quot;All meetings already synced&quot;<br><br>## Notes<br><br>- Default lookback is 48 hours; user can specify different range with argument<br>- Only processes meetings where Obie is an attendee<br>- Skips external meetings without Gemini notes</pre><h3>1:1 preparation</h3><p>Here’s my /prep command for generating one-on-one meeting notes. Again, this was written by Claude, not by me. I’m only reproducing these skills here for illustrative purposes, not so that you can copy them.</p><pre># 1:1 Preparation<br><br>Prepare for a 1:1 meeting with $ARGUMENTS.<br><br>## Instructions<br><br>1. **Read team member info** from `team/roster.md`<br>   - Extract their current context, recent work, concerns, goals<br><br>2. **Read recent 1:1 notes** from `team/one-on-ones/[name].md`<br>   - Review last 2-3 conversations<br>   - Note any follow-up items from previous meetings<br><br>3. **Check pending action items** in `meetings/actions/`<br>   - Find any action items assigned to them or involving them<br>   - Note status of items from previous 1:1s<br><br>4. **Suggest topics to cover**:<br>   - Follow-up on previous action items<br>   - Current blockers or challenges<br>   - Career development and growth<br>   - Feedback (both directions)<br>   - Team dynamics or concerns<br>   - Any patterns noticed from recent work<br><br>5. 
**Format the output**:<br>   ```<br>   # 1:1 Prep: [Name]<br><br>   ## Context<br>   [Brief summary of their role, current focus, recent wins/challenges]<br><br>   ## Last 1:1 Highlights<br>   [Key points from most recent conversation]<br><br>   ## Pending Items<br>   - [ ] Item 1 from previous 1:1<br>   - [ ] Item 2 related to them<br><br>   ## Suggested Topics<br>   1. Topic 1 (with context)<br>   2. Topic 2 (with context)<br>   3. Topic 3 (with context)<br><br>   ## Notes to Remember<br>   [Anything specific to bring up or be mindful of]<br>   ```<br><br>Keep it concise and actionable. Focus on what matters most right now.</pre><p>Using this command, I begin every one-on-one meeting with full context on our previous conversations and any outstanding items.</p><h3>Logging Decisions</h3><p>When I make a decision, I say “log decision about X” or invoke /decide. Claude discusses context and options with me, creates a structured decision record, and links to relevant context.</p><p>Three months later when someone asks “why did we switch from X to Y?”, I will have the full rationale documented. Not just the decision, but the alternatives considered and why we rejected them.</p><pre># Log Decision<br><br>Capture the following important decision with context, alternatives, <br>and rationale:<br><br>$ARGUMENTS<br><br>## Instructions<br><br>1. **Understand the decision context**<br>   - Ask clarifying questions if the decision topic is vague<br>   - Understand what problem this decision solves<br>   - Identify who was involved in making this decision<br><br>2. **Explore alternatives**<br>   - What other options were considered?<br>   - Why were they rejected?<br>   - What tradeoffs were evaluated?<br><br>3. **Read the decision template** from `decisions/_template.md`<br><br>4. 
**Check for related decisions** in `decisions/`<br>   - Search for similar past decisions<br>   - Note if this reverses or builds on previous decisions<br>   - Link to relevant prior decisions<br><br>5. **Create the decision record**:<br>   - Use filename format: `decisions/YYYY-MM-DD-slug.md`<br>   - Follow the template structure<br>   - Include:<br>     - Date and context<br>     - Problem/need<br>     - Decision made<br>     - Alternatives considered<br>     - Rationale and tradeoffs<br>     - Consequences (expected)<br>     - People involved<br>     - Related decisions or context files<br><br>6. **Link to relevant context**:<br>   - Reference architecture docs if technical<br>   - Reference project briefs if project-specific<br>   - Reference team discussions if relevant<br><br>7. **Confirm with Obie** before writing the file<br>   - Show him the draft<br>   - Get approval on completeness<br>   - Then write the file<br><br>Keep it concise but complete. Future you (or future team members)<br>should be able to understand why this decision was made without <br>additional context.</pre><h3>Posting to Slack and other tools</h3><p>I need to update the engineering channel about a significant PR or decision. I say “post this to #engineering on Slack” and provide the message. It posts it. Done.</p><p>I’ve had it communicate with all my direct reports via Slack, asking them to set up recurring 1:1s with my scheduling link. Same with Twitter. Same with calendar invites. Same with any service I’ve connected via the Rube MCP integration.</p><p>As mentioned above, the Rube MCP integration is awesome for this kind of stuff, without bloating your context.</p><h3>The Technical Setup</h3><h4>Directory Structure</h4><p>Here’s what Claude created, just to show you that it’s comprehensive. 
I never go in there, and I purposely did not try to design it myself.</p><pre>context/           # Company, team, architecture docs<br>decisions/         # Decision records with rationale<br>drafts/            # Work-in-progress documents<br>journal/           # Weekly reflections<br>meetings/<br>  actions/         # Action items with owners<br>  notes/           # Meeting transcripts and summaries<br>playbooks/         # Recurring process documentation<br>priorities/        # Weekly focus, 90-day plans<br>projects/          # Project briefs and status<br>recruiting/        # Pipeline, candidates<br>reference/         # Mental models, frameworks<br>team/<br>  one-on-ones/     # Individual 1:1 histories<br>  roster.md        # Team member details</pre><p>I genuinely don’t think about this structure, ever. Claude knows where things go. I just talk to it. The day the contents start getting too big or seem to be bogging things down, I’ll ask Claude to optimize it. Until then we’re good.</p><h4>MCP Integration</h4><p>The <a href="https://rube.app">Rube MCP</a> integration is essential. It gives Claude access to:</p><ul><li>Google Calendar (reading and creating events)</li><li>Slack (posting messages, reading channels)</li><li>Twitter/X (posting updates)</li><li>Linear (project and issue management)</li><li>Gmail and other services</li></ul><p>This means I can say “what’s on my calendar tomorrow?” or “post this to Slack” without context switching. The integration is what turns this from a note-taking system into an actual executive assistant.</p><h4>Version Control</h4><p>Everything is in a private Git repo. Hooks handle syncing automatically. This gives me:</p><ul><li>Full history of all changes</li><li>Backup without thinking about it</li><li>Ability to reference anything from any point in time</li></ul><p>Everyone at ZAR is in the process of setting up and using this system. 
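</p><p>For the technically inclined, the “hooks handle syncing automatically” part needs very little code. Here is an illustrative Ruby sketch of a post-commit hook for a setup like this; my actual hook isn’t shown in this post, so the names and commands below are assumptions, not what I run:</p>

```ruby
#!/usr/bin/env ruby
# Illustrative .git/hooks/post-commit sketch (not the hook from this post):
# after every commit, push the knowledge repo so it is always backed up.

# Returns the shell commands the hook should run; kept data-driven so extra
# steps (e.g. pushing to a second mirror remote) are easy to add later.
def sync_commands(branch = "main")
  ["git push --quiet origin #{branch}"]
end

# The real hook file would simply execute the commands, warning instead of
# failing so that committing never blocks on the network:
#   sync_commands.each { |cmd| system(cmd) || warn("post-commit: #{cmd} failed") }
```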
I’ve advised the non-technical folks to run this in a personal Google Drive instead of trying to learn GitHub and repos.</p><h3>Real Examples</h3><p>Hiring: I paste a candidate’s resume or LinkedIn profile. Claude updates the recruiting pipeline, suggests interview questions based on gaps in our team, and preps me for the screening call.</p><p>Performance tracking: If I need to review an engineer’s history, I can say “walk me through the history with [name]”. Claude pulls up 1:1 notes, shows patterns across conversations, references any documented concerns or wins.</p><p>Planning: “What are the top three things blocking productivity right now?” Claude reads recent meeting notes, checks pending action items, reviews Linear status, and surfaces actual bottlenecks.</p><p>Communication: “Post an update about Stephen accepting our offer to #leadership on Slack.” Done. No context switch.</p><h3>The Numbers After Three Weeks</h3><p>I didn’t know these numbers until I asked Claude to calculate them for this blog post. Pretty impressive stats, if I do say so myself.</p><ul><li>82 meeting notes processed and filed</li><li>47 meetings in January (2+ per day)</li><li>18 documented 1:1s with full context and follow-up</li><li>35 action item tracking files with owners and status</li><li>23 team members tracked with 264 lines of detailed context</li><li>9 context documents maintained</li><li>11,579 total lines of institutional knowledge captured</li></ul><p>While also shipping code. While also operating at the C-level with the CEO and CPO.</p><h3>Why This Works Better Than Other Systems</h3><p>Most knowledge management systems fail because maintaining them is a second job. You have to remember to update things. You have to organize things. You have to think about the system instead of thinking about your actual work.</p><p>This system works because I never think about the system. 
I think about my work, and the system captures it as a side effect of natural conversation.</p><p>Compare that to Notion, where you’re constantly deciding “should this be a page or a database?” or “which workspace does this belong in?” or reorganizing because the taxonomy you picked six months ago doesn’t fit anymore.</p><p>With this approach, the implementation is invisible. I just talk to Claude and it handles everything.</p><h3>How to get started</h3><ol><li>Start simple: Open Claude Code in a fresh directory. Tell it what you need. Let it figure out the structure. Adapt my prompt shared above, something like: “Build a markdown-based operating system that helps me operate like a world-class [your role].”</li><li>Use it for a week: See what works and what doesn’t. Claude will adapt.</li><li>Put it in version control: A private Git repo if you’re technical. Otherwise, put it in a folder that syncs (Google Drive, iCloud, Dropbox).</li><li>Connect your tools: Set up MCP integrations for your calendar, communication tools, and project management. This is what turns it from note-taking into an actual assistant.</li><li>Run multiple sessions: One session is good. Three concurrent sessions for parallel work is when you feel the leverage.</li><li>Build skills: Once you find yourself doing the same thing a few times, ask Claude to write a skill. This is how you get compounding productivity.</li></ol><p>The system gets smarter over time. Every conversation adds context. Every decision creates a reference point. Every team update builds a richer picture.</p><h3>What’s Next</h3><p>Right now I’m constrained to a terminal window. Voice input via Wispr Flow already makes it feel more natural. The trajectory is toward continuous conversation throughout the day, not “using a tool.”</p><p>I would like to give Claude a way of popping up its results on my desktop in rich text. 
That’s because when I have it generate something like 1:1 meeting notes, I need the output to pop up and stay available instead of scrolling out of the current terminal context. This would minimize my need to keep opening new sessions.</p><p>I would also really like to be able to communicate with my assistant via text messaging and voice chats when I’m not at my computer. I’m experimenting with Clawdbot to learn more about how to tackle that particular challenge. It’s very possible that I will eventually port this whole system over to a Clawdbot instance!</p><p>Bottom line, I’m sure that the models will continue to improve with better reasoning, better memory, and better tool use. So the system that I have now will only get more capable. I’ll post updates here as I develop new techniques and breakthroughs.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b3fb9c4933c7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[We’re Building Something Different. Want In?]]></title>
            <link>https://medium.com/zar-engineering/were-building-something-different-want-in-0a1e389210bd?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/0a1e389210bd</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Thu, 15 Jan 2026 16:06:10 GMT</pubDate>
            <atom:updated>2026-01-15T16:06:10.730Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CsVnbtPYmdiNooaio-Z_WQ.png" /></figure><p><em>The first post from ZAR’s engineering team</em></p><p>Over the Christmas break, while most CTOs were reviewing strategy decks, I was building. From scratch. A <a href="https://obie.medium.com/what-used-to-take-months-now-takes-days-cc8883cc21e9">complex knowledge graph system called Nexus</a> that my team now uses every day.</p><p>Nobody asked me to build it. I had just started as full-time CTO at <a href="https://zar.app">ZAR</a>. By any traditional playbook, I should have been doing stakeholder interviews and organizational assessments.</p><p>But here’s the thing: I fucking love building software.</p><p>That’s why I’m here. That’s why I’m writing this. And that’s why we’re launching this engineering blog — to show you what we’re actually doing, not just talk about what we believe.</p><h3>What We’re Building</h3><p>ZAR helps people in emerging markets like Pakistan save, spend, and send US dollars. We’re backed by a16z crypto, Dragonfly, and the founders of Solana, amongst other top-notch investors.</p><p>The technical challenges are genuine:</p><ul><li>Blockchain infrastructure on Solana with US dollar-backed stablecoins</li><li>Merchant networks spanning countries where “going to the bank” isn’t an option</li><li>Mobile-first UX for users on low-end devices with unreliable data</li></ul><p>We’re a Rails 8 shop. Ten engineers. Small enough that your work matters, big enough that we’re solving real problems at scale.</p><h3>How We’re Building ZAR</h3><p>This is where it gets interesting.</p><p>We’re going all-in on AI-assisted everything, not just software development. Not as a gimmick — as a fundamental rethinking of how startups and their software engineers operate. 
Everyone at ZAR is using Claude Code as their day-to-day executive assistant, not just the engineers.</p><p>That knowledge graph I built over Christmas? It captures decisions, context, and learnings from every engineering session. Every pull request. Every Linear ticket. Every significant conversation on Slack. When someone (either human or autonomous AI agent) joins a project here, they don’t start from zero. They have the full history of why things are the way they are.</p><p>The engineers I know who are paying attention to agentic AI understand what’s happening. Teams that reimagine how they work using agents are going to accomplish amazing things with impossibly small-sounding numbers of people. We intend to be one of those teams.</p><h3>What Kind of Place This Is</h3><p>I’ve been building software professionally for 30 years. I’ve led a lot of teams, including at successful venture-backed startups. I know what works and what doesn’t.</p><p>Here’s what I promise as your CTO and engineering leader:</p><p><strong>I will listen.</strong> If you tell me I’m wrong and show me why, I’m going to celebrate you for having the guts to challenge me. I’ve left companies that didn’t have that culture. I won’t build one.</p><p><strong>I will teach you everything I know.</strong> I have no ego about hoarding knowledge. The techniques and mental models we figure out together should compound for you long after you’ve moved on to whatever’s next.</p><p><strong>I will give you real ownership.</strong> We only have two kinds of full-stack engineers: Product and Platform. Both serve actual customers with real business outcomes at stake. That means actual ownership over critical business initiatives. The chance to say “I’m making myself responsible for this” and then have the latitude to do it. No bullshit metrics. Real accountability.</p><p><strong>I will be humble.</strong> I’ll be the first to admit when I’ve failed. When my ideas don’t work. 
When someone else’s approach was better.</p><p>In return, I expect a lot. I’ll push you. I’m not a professional manager — I’m an engineer leading an engineering team.</p><h3>Who Thrives Here</h3><p>You’re a full-stack Product Engineer who wants to be building at the frontier of what’s possible, not maintaining legacy systems while the world changes around you.</p><p>You see AI-assisted development as a superpower, not a threat. You understand that investing in your own productivity is how you deliver compounding returns.</p><p>You want to be surrounded by other world-class talent operating at your level. Iron sharpens iron — that’s how you grow faster than you thought possible.</p><p>You care deeply about the craft of software. You stay close to the work because the work is what makes you come alive.</p><h3>What’s Next</h3><p>This blog is going to be a window into how we work. Technical deep dives. Architecture decisions. Experiments that failed. Lessons learned.</p><p>We’re not going to polish everything for public consumption. We’re going to show you the actual work.</p><p>If that sounds like the kind of team you want to be part of, reach out. We’re hiring full-stack engineers who are genuinely excited about this moment in our industry.</p><p>And if you’re not ready yet — that’s fine too. Follow along. See if we deliver on what we’re promising. The work will speak for itself.</p><p><em>Obie Fernandez is the CTO at </em><a href="https://zar.app"><em>ZAR</em></a><em>. He’s been building software professionally since 1995 and still thinks it’s the best job in the world. Reach out at obiefernandez@zarpay.app or find him on </em><a href="https://x.com/obie"><em>Twitter/X</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0a1e389210bd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/zar-engineering/were-building-something-different-want-in-0a1e389210bd">We’re Building Something Different. 
Want In?</a> was originally published in <a href="https://medium.com/zar-engineering">ZAR Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What Used to Take Months Now Takes Days]]></title>
            <link>https://obie.medium.com/what-used-to-take-months-now-takes-days-cc8883cc21e9?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/cc8883cc21e9</guid>
            <category><![CDATA[claude-code]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Sun, 04 Jan 2026 20:04:45 GMT</pubDate>
            <atom:updated>2026-01-04T20:04:45.798Z</atom:updated>
<content:encoded><![CDATA[<h4>Building production software with Claude Code while doing my day job</h4><p>Look, I’ve been slinging code professionally for 30 years now. I’ve also built successful startups, written bestselling books, consulted for Fortune 500s, and watched countless technology waves come and go. Catching some of those waves at just the right moment is what propelled my career to where it is now. I’ve also witnessed “paradigm shifts” that weren’t and “revolutions” that fizzled. So believe me when I tell you that what I’m living through this week is genuinely different from any change I’ve ever seen before.</p><p>It started this past week (between Christmas and New Year’s), a weird liminal period when in ordinary years I would just relax and take it easy. Instead I was reflecting on how much time my team at <a href="https://zar.app">ZAR</a> and I spend in <a href="https://claude.ai/code">Claude Code</a> sessions. Hours and hours of deep technical work, decisions being made, architecture being discussed, bugs being solved. And then… poof. The transcript disappears. The next session starts fresh.</p><p>Of course it’s not just Claude Code. The same challenge faces our <a href="https://slack.com">Slack</a> conversations. <a href="https://linear.app">Linear</a> comments. <a href="https://github.com">GitHub</a> PR discussions. All this priceless institutional knowledge, constantly evaporating.</p><p>Also, the following tweet was very much on my mind.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/591/1*uQxy_9KDQswwgUwtF8krtA.png" /></figure><p>The problem isn’t unique to AI-assisted development, but AI makes it worse. Traditional development leaves artifacts: design documents, ADRs, wiki pages, commented code. When you’re pair-programming with Claude Code, the artifact is the conversation itself. The decisions live in the dialogue. 
And unless you’re meticulously copying things into documentation (face it, you’re not), that knowledge disappears the moment you close the terminal.</p><p>I started thinking about the scale of it. My team runs hundreds of Claude Code sessions per week. Each one contains decisions, learnings, architectural discussions, debugging insights. Multiply that by the dozens of Slack threads, the Linear comments on tickets, the GitHub PR review conversations. We’re generating reams of institutional knowledge and retaining none of it in a methodical fashion that can be leveraged by the advanced/autonomous AI agents I want to build in 2026.</p><p>But what if we could capture it all? What if there was a system that passively ingested every transcript, every thread, every discussion and distilled it into queryable organizational memory? Not just storing raw text, but actually <em>understanding</em> it. Extracting the decisions. The learnings. The conflicts. Building a semantic graph that a coding assistant could traverse. Believe it or not, that’s been <a href="https://scholar.google.com/citations?view_op=view_citation&amp;hl=en&amp;user=0A5LUGIAAAAJ&amp;citation_for_view=0A5LUGIAAAAJ:9yKSN-GCB0IC">a pet topic of mine</a> since 2005!</p><p>RAG you say? Sure, I guess I could buy a solution or use something open-source. There are probably a few dozen startups working on exactly what I’m about to show you. There’s also a slew of more traditional enterprise knowledge management platforms that would cost at least six figures before you factor in the integration costs, the consultants, the months of implementation. Fuck that.</p><p><strong>I built it in four days.</strong></p><p>Not a proof of concept. Not a demo. 
The first cut of Nexus, a production-ready system with authentication, semantic search, an <a href="https://modelcontextprotocol.io/">MCP</a> server for agent access, webhook integrations for our primary SaaS platforms, comprehensive test coverage, deployed, integrated, and ready for full-scale adoption at my company this coming Monday. Nearly 13,000 lines of code. By the time I let this blog post marinate awhile and hit publish, I’ll probably have written another few thousand lines just from the issues I have queued up.</p><p>And here’s the funny thing: building Nexus wasn’t even my primary focus all week. I’m the new CTO of <a href="https://zar.app">ZAR</a>, and other than New Year’s Eve and New Year’s Day (when I had the luxury of uninterrupted time), I was building Nexus while juggling otherwise normal life and work responsibilities. Meetings. Slack threads. Writing production code. Code reviews. Planning. Recruiting. The usual shit. Nexus happened in the gaps, in the afternoons, in the bursts of flow state I could carve out between other obligations.</p><p><a href="https://www.linkedin.com/feed/update/urn:li:activity:7412887667111133184/">#zar #fintech #crypto #stablecoins #payments #rubyonrails #solana #engineeringleadership #ai #agents #amsterdam | Obie Fernandez | 31 comments</a></p><h3>Why I’m writing this (instead of open-sourcing)</h3><p>Before diving in, let me address something. My career was built on open source. I’ve contributed to Rails since the early days. I’ve authored gems that have been downloaded millions of times. I literally wrote “<a href="https://www.amazon.com/Rails-Way-Addison-Wesley-Professional-Ruby/dp/0321944275">The Rails Way</a>” franchise, which across its eight editions has helped codify best practices for generations of Rails developers. 
I’ve evangelized sharing code freely for two decades.</p><p>So why am I blogging about Nexus instead of just publishing it on GitHub?</p><p>Because of an uncomfortable realization: in an era where killer software can be developed this fast, by the right people with the right tools, and maintaining that software is practically free thanks to agentic help… open-sourcing doesn’t make the same kind of sense it used to. Not for this project.</p><p>Nexus represents a genuine competitive advantage for my company. It’s the kind of infrastructure that could differentiate us in the market. In the old days, building something like this was so expensive and time-consuming that you’d never just build it in-house. (Unless you’re Shopify, I guess.) If you were crazy/stupid enough to try anyway, then open-sourcing it made strategic sense. You’d gain community contributions, bug fixes, and reputation, all while knowing your competitors would need their own multi-month effort to catch up. The barrier to replication was high enough that sharing made sense.</p><p>Now? Fuck, no… If I open-source Nexus today, pretty much anyone with Claude Code could fork it, customize it, and deploy it by tomorrow afternoon. My competitive advantage would evaporate in hours. That’s not an exaggeration. I’ve used Claude Code to take complex codebases and modify them substantially in single sessions. To rewrite Python libraries in Ruby in one sitting. The replication barrier has collapsed.</p><p>Before you go into fits of despair, don’t worry. I don’t think Claude Code is the death knell for open source. Foundational libraries, protocols, and tools still benefit enormously from open collaboration. The <a href="https://www.ruby-lang.org/">Ruby</a> ecosystem, the JavaScript ecosystem, infrastructure tools like PostgreSQL and Redis… all will continue to benefit from shared development. But for custom-built in-house applications that represent strategic advantage? 
The calculus has shifted dramatically.</p><p>Another reason I’m writing this is that I keep seeing skeptics online asking: “Okay, if AI-assisted development is so revolutionary, where are the projects? Where’s the evidence?” The implication is that people like me are just hype-mongering, that “vibe coding” produces nothing of substance. That we’re exaggerating our productivity gains or building toy projects and calling them production systems.</p><p>Well, here you go. Here’s the evidence. Here’s a real project, with real commits, real timestamps, and real production deployment. Follow along. I’ll show you what was built, when it was built, and how long it actually took.</p><h3>What Nexus Actually Does</h3><p>Let me be concrete about what got built. Abstract descriptions of “knowledge management” don’t convey the scope. Let me walk you through the actual system.</p><p><strong>Nexus</strong> is an organizational knowledge distillation service. The core workflow:</p><ol><li><strong>Transcripts come in</strong> from any source: Claude Code hooks that fire automatically when sessions end, Slack threads via <a href="https://api.slack.com/apis/events-api">Events API</a>, GitHub webhooks for PR discussions, Linear webhooks for issue comments, or manual submission through the API.</li><li><strong>LLM distillation</strong> analyzes each transcript and extracts structured knowledge: decisions made, lessons learned, people involved, topics discussed.</li><li><strong>RDF storage</strong> persists everything as semantic triples in Oxigraph, a high-performance graph database with full SPARQL support.
Our source of truth is the graph, not Postgres.</li><li><strong>Vector embeddings</strong> (via pgvector) enable semantic similarity search across the entire knowledge base.</li><li><strong>MCP server</strong> exposes the graph to AI agents, so Claude can query your organizational memory directly during coding sessions.</li><li><strong>Web UI</strong> lets humans browse sessions, explore the ontology, search semantically, and manage conflicts.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cGeY4xX5wF0vH73mmYAPng.png" /><figcaption><em>The sessions index with conversational query interface running on my local development environment. Real sessions from actual Claude Code work on Nexus itself.</em></figcaption></figure><p>The technology stack deserves explanation because the choices matter:</p><p><strong>Ruby 4.0 and Rails 8 (edge)</strong>: I’m running the brand new version of Ruby and the main branch of Rails, not a released version. (As for why, see the screenshot below for an example of Claude being witty.)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0YW4pv0pLKxWR17vLaf7DQ.png" /></figure><p>One of my favorite things about Rails 8 is that it ships with the Solid* gems that eliminate <a href="https://redis.io/">Redis</a> as a dependency. <a href="https://github.com/rails/solid_queue">SolidQueue</a> for background jobs, <a href="https://github.com/rails/solid_cache">SolidCache</a> for caching, <a href="https://github.com/rails/solid_cable">SolidCable</a> for websockets. One less piece of infrastructure to manage.</p><p><a href="https://www.postgresql.org/"><strong>PostgreSQL</strong></a><strong> with </strong><a href="https://github.com/pgvector/pgvector"><strong>pgvector</strong></a>: The primary relational database, but also the vector store for semantic search. The pgvector extension lets you store 768-dimensional embeddings alongside your regular data and query them with similarity operators.
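</p><p>To make “similarity operators” concrete, here’s the shape of SQL that pgvector enables. This is illustrative only; the table and column names are hypothetical, not Nexus’s actual schema. The <code>&lt;=&gt;</code> operator is pgvector’s cosine-distance operator (<code>&lt;-&gt;</code> is Euclidean, <code>&lt;#&gt;</code> is negative inner product):</p>

```ruby
# Illustrative only: the kind of query pgvector makes possible.
# Table/column names are hypothetical, not Nexus's actual schema.
def nearest_neighbors_sql(table: "rdf_entities", column: "embedding", limit: 10)
  <<~SQL
    SELECT id, entity_type, label
    FROM #{table}
    ORDER BY #{column} <=> $1
    LIMIT #{Integer(limit)}
  SQL
end

puts nearest_neighbors_sql
```

<p>You’d bind the query embedding to <code>$1</code> and Postgres orders rows by cosine distance, embeddings living right next to the relational data.</p><p>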
No need for a separate Pinecone or Weaviate instance.</p><p><a href="https://oxigraph.org/"><strong>Oxigraph</strong></a>: Oxigraph is a Rust-based <a href="https://www.w3.org/RDF/">RDF</a> triple store with <a href="https://www.w3.org/TR/sparql11-query/">SPARQL</a> support. It’s what makes the knowledge graph actually work. Every piece of distilled knowledge becomes semantic triples that you can query with the full power of SPARQL. “Find all decisions related to authentication made by Sarah” is a single query.</p><p><a href="https://github.com/OlympiaAI/raix-rails"><strong>Raix</strong></a>: My own gem for LLM orchestration via <a href="https://openrouter.ai/">OpenRouter</a>. It handles the prompt construction, response parsing, and error handling for the distillation pipeline.</p><p><strong>GitHub OAuth</strong>: Authentication for the web UI and API. Everyone has a GitHub account, so there’s no signup friction.</p><p>Now let me take you through how this thing came together, day by day.</p><h3>Day 1: December 29th — The Initial Checkpoint</h3><p><strong>Time: Late morning to evening<br>Commits: 1 major checkpoint</strong> <br><strong>Lines added: ~6,000</strong></p><p>The first real checkpoint landed at 5:47 PM Central time. I’d been working since late morning, but I didn’t checkpoint it until I had something coherent and working.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/625/1*EM-YaeHQywo0m5czD9mUrw.png" /></figure><p><strong>136 files. Six thousand lines.</strong> That’s not a typo.</p><p>It’s hard to properly explain the power of Claude Code to people who haven’t experienced AI-assisted development at this level. I wasn’t typing 6,000 lines. I was <em>directing</em> 6,000 lines. Describing what I wanted, reviewing what Claude proposed, course-correcting, integrating, testing. The actual character input from my keyboard was maybe 10% of that. But the design decisions? The architecture? 
That was all me, refined through rapid dialogue with an AI that could actually implement what I was describing.</p><p>Here’s what made this sustainable rather than chaotic: TDD. Test-driven development. For most of the features, I insisted that Claude Code follow the red-green-refactor cycle with me. Write a failing test first. Make it pass with the simplest implementation. Then refactor while keeping tests green.</p><p>This wasn’t just methodology purism. TDD served a critical function in AI-assisted development: it kept me in the loop. When you’re directing thousands of lines of code generation, you need a forcing function that makes you actually understand what’s being built. Tests are that forcing function. You can’t write a meaningful test for something you don’t understand. And you can’t verify that a test correctly captures intent without understanding the intent yourself.</p><p>I’ve written about this more extensively in <a href="https://obie.medium.com/ruby-was-ready-from-the-start-4b089b17babb">Ruby Was Ready From the Start</a>, but the short version is: TDD is the only development process I know of that continually validates intent. When machines can generate endless variations of working-looking code, the only reliable way to know that software does what you intend is to encode that intent in tests and keep those tests running all the time.</p><p>Let me break down what that initial checkpoint actually contained:</p><h3>The Distillation Pipeline</h3><p>The core of Nexus is the DistillTranscript service. It takes raw transcript text and produces structured knowledge. 
Here&#39;s what it does:</p><ol><li><strong>Session identification</strong>: Generate a deterministic ID from the transcript content so we can detect duplicates and updates</li><li><strong>LLM extraction</strong>: Send the transcript to an LLM with a carefully crafted prompt that extracts decisions, learnings, participants, and topics</li><li><strong>Response processing</strong>: The LLM returns JSON</li><li><strong>Deduplication</strong>: Check if we’ve already processed this session, and if so, only process new content</li><li><strong>RDF transformation</strong>: Convert the structured JSON into semantic triples</li><li><strong>Storage</strong>: Write the triples to Oxigraph</li></ol><p>The prompt engineering was critical, but didn’t take a lot of time. The LLM needs to understand what counts as a “decision” versus a “learning,” how to identify participants, and how to extract meaningful topics without being too granular or too vague. Claude one-shotted it.</p><h3>The RDF Schema</h3><p>Claude designed a custom ontology for organizational knowledge:</p><pre>@prefix nx: &lt;https://nexus.zar.app/ontology#&gt; .<br>@prefix rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; .<br><br>nx:Session a rdfs:Class ;<br>    rdfs:label &quot;Session&quot; ;<br>    rdfs:comment &quot;A conversation or transcript session from any source&quot; .<br><br>nx:Decision a rdfs:Class ;<br>    rdfs:label &quot;Decision&quot; ;<br>    rdfs:comment &quot;An architectural, strategic, or implementation decision&quot; .<br><br>nx:Learning a rdfs:Class ;<br>    rdfs:label &quot;Learning&quot; ;<br>    rdfs:comment &quot;An insight, lesson learned, or piece of knowledge discovered&quot; .</pre><p>Every session, decision, and learning becomes a node in the graph with typed relationships. A decision nx:madeIn a session. A session nx:hasTopic concepts. A person nx:proposedDecision a decision.
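</p><p>As a quick sketch of what those relationships buy you (a hypothetical helper, not Nexus code; the person IRI is made up), “all decisions a given person proposed” is a single triple pattern:</p>

```ruby
# Hypothetical helper: builds a SPARQL query over the nx: ontology
# described above. Not Nexus's actual code; the person IRI is invented.
def decisions_proposed_by(person_iri)
  <<~SPARQL
    PREFIX nx: <https://nexus.zar.app/ontology#>
    SELECT ?decision ?title WHERE {
      <#{person_iri}> nx:proposedDecision ?decision .
      ?decision nx:title ?title .
    }
  SPARQL
end

puts decisions_proposed_by("https://nexus.zar.app/people/sarah")
```

<p>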
The graph structure enables queries that would be impossible with traditional relational storage.</p><h3>The Claude Code Hooks</h3><p>One of the most useful features landed on Day 1 and let me dogfood from that moment on: automatic transcript capture. Claude Code supports hooks that fire at various lifecycle points. I built a Stop hook that:</p><ol><li>Captures the full conversation transcript from the session</li><li>POSTs it to a Nexus API endpoint</li><li>Handles authentication automatically (more on this later)</li></ol><p>The hook is a simple shell script:</p><pre>#!/bin/bash<br># Capture Claude Code session transcript and send to Nexus<br><br>TRANSCRIPT=$(cat &quot;$CLAUDE_TRANSCRIPT_FILE&quot;)<br>SESSION_ID=&quot;$CLAUDE_SESSION_ID&quot;<br><br>curl -X POST &quot;$NEXUS_URL/transcripts/ingest&quot; \<br>  -H &quot;Authorization: Bearer $NEXUS_API_KEY&quot; \<br>  -H &quot;Content-Type: application/json&quot; \<br>  -d &quot;{<br>    \&quot;content\&quot;: $(echo &quot;$TRANSCRIPT&quot; | jq -Rs .),<br>    \&quot;source\&quot;: \&quot;claude_code\&quot;,<br>    \&quot;session_id\&quot;: \&quot;$SESSION_ID\&quot;,<br>    \&quot;project\&quot;: \&quot;$CLAUDE_PROJECT_DIR\&quot;<br>  }&quot;</pre><p>Every Claude Code session now automatically becomes organizational memory. No manual effort required. The transcript gets distilled, decisions and learnings get extracted, and everything becomes queryable.</p><h3>Conversational Knowledge Queries</h3><p>This one was fun. Instead of requiring users to write SPARQL (which, let’s be honest, nobody wants to do), I built a conversational interface. 
You ask a question in plain English, and an LLM translates it into SPARQL, executes the query, and explains the results.</p><p>The KnowledgeQueryAssistant service orchestrates this:</p><ol><li><strong>Question analysis</strong>: Understand what the user is asking for</li><li><strong>Schema awareness</strong>: Know what entity types and properties exist in the graph</li><li><strong>SPARQL generation</strong>: Translate the natural language question into a valid query</li><li><strong>Execution</strong>: Run the query against Oxigraph</li><li><strong>Result explanation</strong>: Present the results in human-readable form</li></ol><p>It’s genuinely useful. “What decisions did we make about authentication?” becomes:</p><pre>PREFIX nx: &lt;https://nexus.zar.app/ontology#&gt;<br>SELECT ?decision ?title ?description ?rationale WHERE {<br>  ?decision a nx:Decision ;<br>            nx:title ?title ;<br>            nx:description ?description .<br>  OPTIONAL { ?decision nx:rationale ?rationale }<br>  FILTER(CONTAINS(LCASE(?title), &quot;authentication&quot;) ||<br>         CONTAINS(LCASE(?description), &quot;authentication&quot;))<br>}</pre><p>The query returns matching triples, and the assistant presents them conversationally: “I found 3 decisions related to authentication. The most recent was ‘Use GitHub OAuth for user authentication,’ made yesterday…”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I6Ob94bTZTbwdxOUrTofnQ.png" /></figure><h3>The Web UI</h3><p>Even on Day 1, I wanted a browsable interface. 
The initial UI was already pretty full featured:</p><ul><li><strong>Sessions list</strong>: See all ingested transcripts with their generated titles and summaries</li><li><strong>Decisions view</strong>: Browse all extracted decisions with their rationale</li><li><strong>Learnings view</strong>: Browse all extracted learnings</li><li><strong>Inquiries</strong>: For demonstration purposes and to validate functionality</li></ul><p>That first day established the core architecture. Transcript in, LLM distillation, RDF storage, queryable knowledge. The foundation was solid. Everything after was iteration and enhancement.</p><h3>Day 2: December 30th — Refinements and the Delta Problem</h3><p><strong>Commits: 5<br>Theme: Making it actually work in production</strong></p><p>Day two was about confronting reality. The initial system worked beautifully for short transcripts. But Claude Code sessions can run for hours. Transcripts grow to thousands of lines. And the way I’d built the system, every time a transcript was submitted, it re-processed the entire thing.</p><p>That’s fine for demos. It’s not fine when you’re paying for LLM tokens and waiting for responses.</p><p>Enter delta distillation.</p><p>The insight was simple: track how much of each transcript we’ve already processed, and only distill the new content. 
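</p><p>In outline, the bookkeeping is small. Here’s a simplified sketch (hypothetical names, not Nexus internals — the real pipeline persists per-session offsets in Postgres rather than in memory):</p>

```ruby
require "digest"

# Simplified delta-extraction sketch; names are hypothetical.
# The real pipeline stores per-session offsets in Postgres.
OFFSETS = Hash.new(0) # session_id => characters already processed

def delta_for(transcript)
  # Deterministic session ID: a growing transcript keeps the same
  # first line, so it hashes to the same session.
  session_id = Digest::SHA256.hexdigest(transcript.lines.first.to_s)
  new_content = transcript[OFFSETS[session_id]..] || ""
  OFFSETS[session_id] = transcript.length
  [session_id, new_content]
end

id, delta   = delta_for("line 1\n")
_id, delta2 = delta_for("line 1\nline 2\n") # only "line 2\n" is new
```

<p>Only the tail past the stored offset ever reaches the LLM, which is what makes repeated submissions of a growing transcript cheap and idempotent.</p><p>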
But implementing it required rethinking the entire pipeline.</p><h3>The Delta Algorithm</h3><p>The new approach:</p><ol><li><strong>Content offset tracking</strong>: Store the character offset of how much we’ve processed for each session</li><li><strong>Delta extraction</strong>: When a transcript update arrives, extract only the content after the last offset</li><li><strong>Incremental distillation</strong>: Send only the new content to the LLM</li><li><strong>Append-only storage</strong>: New decisions and learnings get added to the existing session, not replaced</li><li><strong>Metadata preservation</strong>: Session title and summary come from the first distillation only</li></ol><p>Here’s the commit that captured this change:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/573/1*Ch1uYbkr4AWc_0n6XfZOsA.png" /></figure><p>This is the kind of architectural decision that separates production software from demos. It required extracting new service objects (KnowledgeDeduplicator, KnowledgeQuery), rethinking how sessions were identified, and ensuring idempotent behavior when the same transcript got submitted multiple times with different lengths.</p><p>The TDD discipline proved essential here. Refactoring from full-transcript processing to delta-based processing could have introduced subtle bugs at every seam. But because we had comprehensive specs for the original behavior, I could refactor confidently. Change the internals, run the specs, verify everything still works. Then add new specs for the delta-specific behavior. The red-green-refactor cycle made a potentially dangerous architectural change feel safe.</p><p>This is exactly the pattern I’d internalized from decades of extreme programming practice: tests dissolve fear. When I encounter new code or need to make substantial changes, I don’t tiptoe around it worrying about breaking things. 
I know I can move in small steps, keep the system passing its tests at almost all times, try a refactoring and back it out if it doesn’t feel right. That safety net isn’t just technical. It changes how willing you are to explore.</p><p><em>By the way, introducing the delta algorithm and the entire refactoring I described above took about 30 minutes.</em></p><h3>Killing the Action Items Feature</h3><p>I also learned something important about my own system on Day 2: Action Items were noise.</p><p>The initial design extracted three types of knowledge: decisions, learnings, and action items. It seemed logical. Surely transcripts contain action items that people need to track?</p><p>The LLM was dutifully extracting them. “Need to update the database schema.” “Should add tests for the authentication flow.” “Remember to update the documentation.” Dozens of action items from each session.</p><p>The problem? They were almost always stale by the time anyone saw them. Claude Code sessions involve immediate implementation. By the time Nexus distilled “need to update the database schema,” the database schema was already updated. The action item was historical noise, not useful information.</p><p>Decisions and learnings persist; action items expire. I ripped out the entire feature: <em>11 files changed, 8 insertions(+), 132 deletions(-)</em></p><p>More lines deleted than added. A good sign. The willingness to remove features that don’t work is important. It’s easy to keep accumulating functionality. It’s harder to admit something isn’t useful and cut it.</p><h3>Minor Fixes</h3><p>The other commits were bug fixes and refinements:</p><ul><li><strong>User tracking</strong>: Properly associate sessions with the API user who submitted them</li><li><strong>URI domain corrections</strong>: Claude has hallucinated zar.com into my ontology instead of zar.app in several places. 
Easy fix, including some rake tasks to fix existing data.</li><li><strong>Session deduplication edge cases</strong>: Handling discovered cases where two sessions have identical content but different metadata, stuff like that.</li></ul><p>In other words, normal software development stuff, just happening at 50x speed.</p><h3>Day 3: December 31st — The Explosion</h3><p><strong>Commits: 18<br>Lines added: ~3,000+<br>Theme: Production-ready features</strong></p><p>New Year’s Eve. Most people are planning their parties or getting drunk. I’m in the zone. My first all-hands company meeting as CTO was tomorrow and I wanted to demo my work, so it’s go time.</p><p>This day was absolutely packed. Eighteen commits. Multiple major features. Let me break it down by capability.</p><h3>RESTful Knowledge API</h3><p>First up: proper REST endpoints for everything. The initial system had basic views, but nothing approaching a real API. Day 3 changed that.</p><p>Sessions, decisions, and learnings all got their own controllers with full RESTful endpoints:</p><pre>GET  /sessions          # List all sessions<br>GET  /sessions/:id      # Show session details<br>GET  /decisions         # List all decisions<br>GET  /decisions/:id     # Show decision details<br>GET  /learnings         # List all learnings<br>GET  /learnings/:id     # Show learning details</pre><p>Each endpoint supports multiple formats:</p><ul><li><strong>HTML</strong>: For human browsing</li><li><strong>JSON</strong>: For API consumers</li><li><strong>TXT</strong>: For LLM consumption</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I_p29fpJCri-qjKdlb0zow.png" /><figcaption><em>The decisions index showing distilled decisions with titles, descriptions, and rationale.</em></figcaption></figure><p>The text format endpoints deserve special mention. 
When an AI agent requests /decisions.txt, it gets a clean, token-efficient representation:</p><pre># Decision: Use PostgreSQL with pgvector for semantic search<br>Rationale: Keeps vector storage in the same database as other data, simplifying the stack.<br>Session: 9cafca33-cb1c-49c4-8696-d8be97871356<br>Date: 2026-01-01<br><br># Decision: Implement delta distillation<br>Rationale: Full re-processing is too expensive for long transcripts.<br>Session: bf6dd8f-1f2b-4f8c-8c81-c3551f2fb368<br>Date: 2025-12-30</pre><p>No HTML cruft. No JavaScript. No navigation chrome. Just the knowledge, formatted for machine consumption. This is the kind of thing that matters when you’re building for a world where agents consume APIs.</p><h3>GitHub OAuth + API Key System</h3><p>This system will have precious knowledge stored. I can’t deploy it without authentication, so that was next. I went with GitHub OAuth because literally everyone at my company has a GitHub account. No signup friction, no password management, no email verification flows.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/obie/status/2005689332976271459&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/de7fd11c675b52402ad4b804676f3c6a/href">https://medium.com/media/de7fd11c675b52402ad4b804676f3c6a/href</a></iframe><p>The most challenging part was the device authorization flow for CLI tools.</p><p>Claude Code hooks need to authenticate somehow, but they run in a terminal with no browser. I couldn’t find a way to share authentication state with an installed MCP server. 
Solution: a streamlined device flow that automatically opens your browser.</p><p>The flow works like this:</p><ol><li>Hook calls POST /device/authorize</li><li>Server returns a device code and verification URL</li><li>Hook automatically opens your browser to the verification page (using open on macOS or xdg-open on Linux)</li><li>You see the Nexus authorization page and click to verify</li><li>If not already logged in, you’re redirected to GitHub OAuth</li><li>Meanwhile, the hook polls /device/token in the background</li><li>Once you authorize, the server returns an API token</li><li>Hook saves the token to ~/.config/nexus/api_key.&lt;hostname&gt; with secure permissions</li></ol><p>The UX is seamless: on first run, your browser pops open, you click authorize, and you’re done. Future sessions authenticate automatically using the cached token.</p><h3>Deployment Infrastructure</h3><p>I wanted this running in production ASAP, not just localhost. I picked <a href="https://render.com">Render</a> as my target platform since I know it inside and out, but Claude says that I took an unconventional approach: a single Docker container that bundles everything.</p><p>The container includes:</p><ul><li><strong>PostgreSQL 17 with pgvector</strong>: Embedded database, not a managed service</li><li><strong>Oxigraph</strong>: The RDF triple store, also embedded</li><li><strong>Rails + Puma</strong>: The web application</li><li><strong>SolidQueue worker</strong>: Background job processing</li></ul><p>All four processes are managed by <a href="https://github.com/DarthSim/overmind">Overmind</a>, a Procfile-based process manager. 
One container, one persistent disk mount at /data, everything self-contained:</p><pre>postgres: su postgres -c &#39;/usr/lib/postgresql/*/bin/postgres -D /data/postgresql&#39;<br>oxigraph: oxigraph serve --location /data/oxigraph --bind 127.0.0.1:7878<br>web: bundle exec puma -p 3000 -b tcp://0.0.0.0<br>worker: bundle exec rake solid_queue:start</pre><p>Why bundle everything? Simplicity. No coordinating multiple services. No managed database pricing. No network latency between app and database. For a side project that might scale to a small team, this is perfect. If it needs to scale beyond that, breaking out the database is straightforward.</p><p>Is a single container deployment really unconventional? I don’t think it should be.</p><p>The commit also added:</p><ul><li><strong>GitHub Actions workflow</strong>: Automated deployment on push to main</li><li><strong>CI pipeline</strong>: RSpec tests running before deploy, blocking bad commits</li></ul><pre>name: Deploy to Render<br>on:<br>  push:<br>    branches: [main]<br>jobs:<br>  test:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - uses: actions/checkout@v4<br>      - uses: ruby/setup-ruby@v1<br>        with:<br>          bundler-cache: true<br>      - name: Run tests<br>        run: bundle exec rspec<br>  deploy:<br>    needs: test<br>    runs-on: ubuntu-latest<br>    steps:<br>      - name: Deploy to Render<br>        uses: johnbeynon/render-deploy-action@v0.0.8<br>        with:<br>          service-id: ${{ secrets.RENDER_SERVICE_ID }}<br>          api-key: ${{ secrets.RENDER_API_KEY }}</pre><p>By end of day, commits to main were automatically tested, built into Docker images, and deployed to production. From idea to deployed feature in minutes, not days.</p><p>The needs: test line is critical. No deployment happens unless the test suite passes.
And because we&#39;d been doing TDD from the start, that test suite was substantial.</p><p>By Day 3, we had specs covering:</p><ul><li>The full distillation pipeline with various transcript formats</li><li>RDF transformation and SPARQL query execution</li><li>Deduplication logic and delta processing</li><li>Authentication flows for both web and API</li><li>The conversational query assistant</li><li>Edge cases for identity resolution</li></ul><p>The TDD discipline was paying dividends in letting me practice continuous deployment with confidence.</p><h3>Day 4: January 1st — New Year’s Day, New Capabilities</h3><p><strong>Commits: 29<br>PRs Merged: 12<br>Theme: Making it intelligent</strong></p><p>Happy New Year! While normal people were recovering from celebrations, I was building an MCP server. Looking back at the commit log for January 1st, I honestly don’t know how I fit it all in.</p><h3>The Ontology Browser</h3><p>RDF is powerful because it’s self-describing. The schema itself is data. You can query the ontology just like you query the instances.</p><p>I wanted that exposed through the UI, so users (and agents) could explore what types of entities exist, what properties they have, and how to query them.</p><p>PR #19 delivered the ontology browser:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qRApstEMGdPnmmqVki0DKg.png" /><figcaption><em>The ontology browser showing all 10 entity types with property counts and entity totals.</em></figcaption></figure><p><strong>Visual grid of entity types</strong>: Session, Decision, Learning, Conflict, Person, Agent, Project, Concept, User, ExternalResource. 
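</p><p>Because the schema is itself RDF, listing those types is one introspection query. This is an illustrative query built in Ruby, not the browser’s actual implementation:</p>

```ruby
# Illustrative: the kind of introspection query behind an ontology
# browser. The schema is data, so you query it like anything else.
ONTOLOGY_CLASSES_QUERY = <<~SPARQL
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  SELECT ?class ?label ?comment WHERE {
    ?class a rdfs:Class ;
           rdfs:label ?label .
    OPTIONAL { ?class rdfs:comment ?comment }
  }
SPARQL

puts ONTOLOGY_CLASSES_QUERY
```

<p>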
Each card shows the type name, description, property count, and instance count.</p><p><strong>Detail views</strong>: Click on any type to see its full property list, relationships to other types, and example SPARQL queries.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6x7DkA8fcYwmYui_wPwgWw.png" /></figure><p><strong>Downloadable Turtle format</strong>: Export the full ontology definition for use in other tools.</p><p><strong>JSON API endpoints</strong>: So agents can programmatically explore the schema before querying.</p><p>This might sound minor, but it’s important for discoverability. When you land on a new knowledge system, you need to understand its shape. What kinds of things are stored? How are they related? What can you query for? The ontology browser provides that.</p><h3>Identity Resolution</h3><p>One of the big PRs of the day tackled a fundamental problem: who is involved in these sessions?</p><p>When a transcript mentions “Sarah made the decision to use PostgreSQL,” the system should understand that Sarah is a real person. It should link her to other sessions she’s participated in. It should enable queries like “What decisions has Sarah been involved in?”</p><p>This required multiple components:</p><p><strong>New RDF entity types</strong>: nx:Person for humans, nx:Agent for AI assistants. They&#39;re both participants, but they need different handling. You want to track what decisions Sarah has made. You probably don&#39;t need to track what decisions Claude has &quot;made&quot; (though it&#39;s interesting for analysis).</p><p><strong>IdentityResolver service</strong>: Takes a name or email from a transcript and resolves it to a canonical Person record. Handles variations: “Sarah,” “Sarah Chen,” “<a href="mailto:sarah@company.com">sarah@company.com</a>” should all resolve to the same person. 
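</p><p>A toy version of that resolution step (hypothetical; the real IdentityResolver is more involved and backed by the graph):</p>

```ruby
# Toy identity resolution: hypothetical, not the real IdentityResolver.
# Maps name/email variants to one canonical key, and classifies AI names
# as agents rather than people.
KNOWN_PEOPLE = {
  "sarah@company.com" => "person:sarah-chen",
  "sarah chen"        => "person:sarah-chen",
  "sarah"             => "person:sarah-chen"
}.freeze

AGENT_NAMES = ["claude", "claude code", "assistant", "gpt-4", "copilot"].freeze

def resolve(mention)
  key = mention.strip.downcase
  return [:agent, key] if AGENT_NAMES.include?(key)
  [:person, KNOWN_PEOPLE.fetch(key, "person:unknown:#{key}")]
end
```

<p>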
Uses fuzzy matching and email-based identification.</p><p><strong>Attribution tracking</strong>: The distillation prompt now extracts not just “participants” but specific attributions: who proposed this decision, who discovered this learning, who contributed to the discussion.</p><p><strong>People UI</strong>: A /people endpoint to browse and merge identities. Sometimes the system creates duplicate Person records that need manual consolidation.</p><p>The identity resolver is smart about AI names too. “Claude,” “Claude Code,” “Assistant,” “GPT-4,” “Copilot” all get correctly classified as agents, not people. This prevents accidentally creating Person records for AI assistants.</p><p>Around this time I started realizing that a lot of my entities were going to look similar. Maybe I could try to feed the context of the knowledge distillation worker with similar entities right off the bat? Or do some post-processing later? Either way I would need embeddings.</p><h3>Semantic Search with pgvector</h3><p>SPARQL is powerful but exact. You query for specific predicates and values. You get back things that match precisely.</p><p>What if you want conceptual similarity? “Find knowledge related to authentication” should surface:</p><ul><li>Decisions about OAuth</li><li>Discussions about API keys</li><li>Learnings about session management</li><li>Security-related conflicts</li></ul><p>Even if none of them literally contain the word “authentication.”</p><p>PR #21 added vector similarity search:</p><p><strong>pgvector extension</strong>: PostgreSQL with the pgvector extension can store and query high-dimensional vectors. 
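</p><p>For intuition, the cosine distance behind this kind of similarity search is just one minus the normalized dot product. In plain Ruby:</p>

```ruby
# Cosine distance between two equal-length vectors:
# 1 - (a·b) / (|a| * |b|). 0 means same direction, 1 means orthogonal.
def cosine_distance(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  1.0 - dot / (norm.(a) * norm.(b))
end

cosine_distance([1.0, 0.0], [1.0, 0.0]) # => 0.0 (identical direction)
cosine_distance([1.0, 0.0], [0.0, 1.0]) # => 1.0 (orthogonal)
```

<p>pgvector does exactly this, but over 768 dimensions with an index, so nearest-neighbor lookups stay fast.</p><p>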
No separate vector database needed.</p><p><strong>Embedding generation</strong>: Using <a href="https://deepmind.google/technologies/gemini/">Gemini</a>’s embedding model (via OpenRouter), each decision and learning gets converted to a 768-dimensional vector that captures its semantic meaning.</p><p><strong>RdfEntity model</strong>: A PostgreSQL table that mirrors entities from the RDF store and adds vector embeddings. When knowledge gets distilled, each entity is queued for embedding generation.</p><p><strong>SemanticSearch service</strong>: Takes a natural language query, embeds it, and finds the closest matches in vector space.</p><pre>class SemanticSearch<br>  def search(query, limit: 10)<br>    query_embedding = EmbeddingService.embed(query)<br><br>    RdfEntity<br>      .nearest_neighbors(:embedding, query_embedding, distance: &quot;cosine&quot;)<br>      .limit(limit)<br>      .map { |entity| enrich_with_rdf_data(entity) }<br>  end<br>end</pre><p>The graph stays in sync with the vector index.</p><h3>Conflict Detection</h3><p>Knowledge systems accumulate contradictions. This is inevitable. Two sessions might record opposing decisions:</p><ul><li>Session A: “Decided to use REST APIs for the mobile client”</li><li>Session B: “Decided to use GraphQL for the mobile client”</li></ul><p>Or learnings that directly contradict each other:</p><ul><li>Learning 1: “Caching improved performance by 40%”</li><li>Learning 2: “Caching caused consistency issues and was removed”</li></ul><p>Rather than silently harboring inconsistency, Nexus now surfaces these explicitly.</p><p>PR #22 added the Conflict entity type:</p><p><strong>Async conflict detection</strong>: After each distillation, a background job scans for potential conflicts.
It uses embeddings to find semantically similar items, then prompts an LLM to assess whether they actually conflict.</p><p><strong>Status tracking</strong>: Conflicts can be open, investigating, or resolved. Each status change gets tracked with timestamps.</p><p><strong>Priority levels</strong>: Not all conflicts are equal. A contradiction between architectural decisions is more important than conflicting preferences about code style.</p><h3>Entity Browser</h3><p>The ontology browser shows the schema. But what about the actual data? You need to be able to browse and search the instances themselves.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-Xs6M-FBQ4qrNPORJxircg.png" /><figcaption><em>The entity browser with type filters and semantic search.</em></figcaption></figure><p>PRs #25 and #27 added entity browsing:</p><p><strong>/entities endpoint</strong>: Browse all entities with type filtering. Show me all Decisions. Show me all Learnings. Show me all Conflicts.</p><p><strong>Search</strong>: Both exact text search and semantic similarity search. Find entities that mention “authentication” literally, or find entities conceptually related to authentication.</p><p><strong>Individual entity views</strong>: Click through to see all properties and relationships for any entity.</p><p><strong>Counts in ontology view</strong>: The ontology browser now shows how many instances of each type exist.</p><h3>The MCP Server</h3><p><a href="https://modelcontextprotocol.io/">MCP (Model Context Protocol)</a> is <a href="https://anthropic.com/">Anthropic</a>’s standard for exposing tools to AI agents. By embedding an MCP server in Nexus, any AI agent can directly query the organizational knowledge base.</p><p>Think about what this enables.
Before starting to implement a feature, Claude can be configured to check Nexus: “Has this team made any decisions about authentication patterns?” Before proposing an architectural change, it can query: “What learnings do we have about caching in this codebase?”</p><p>PR #30 implemented the MCP server with five tools:</p><p><strong>nexus_ontology</strong>: Get the full schema with types, properties, and example queries. This is the starting point for exploration.</p><p><strong>nexus_type</strong>: Get detailed information about a specific entity type. “Tell me about Decision entities.”</p><p><strong>nexus_recent</strong>: Fetch recent sessions, decisions, learnings, or conflicts. “What were the last 10 decisions?”</p><p><strong>nexus_query</strong>: Execute arbitrary SPARQL queries. For agents that know what they’re looking for.</p><p><strong>nexus_search</strong>: Semantic vector similarity search. “Find knowledge related to database performance.”</p><p>The tools follow <a href="https://en.wikipedia.org/wiki/HATEOAS">HATEOAS</a>-style progressive discovery. Each response includes hints about what to query next. An agent can start with nexus_ontology, understand the schema, then drill into specific types, then query for specific instances.</p><pre>Agent: nexus_ontology()<br>→ Returns: all types with descriptions and example queries<br>  Hint: &quot;Use nexus_type(&#39;Decision&#39;) for detailed Decision properties&quot;<br><br>Agent: nexus_type(type_name: &quot;Decision&quot;)<br>→ Returns: properties, relationships, example query<br>  Hint: &quot;Use nexus_recent(type: &#39;Decision&#39;) to see recent decisions&quot;<br><br>Agent: nexus_recent(type: &quot;Decision&quot;, limit: 5)<br>→ Returns: 5 most recent decisions with full details<br>  Hint: &quot;Use nexus_query() with SPARQL for more specific queries&quot;</pre><p>The MCP server gives agents read access to everything Nexus knows. 
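</p><p>The hint mechanism needs no special protocol support; it can be as simple as an extra field in each tool’s reply. A minimal sketch (the response shape and hint text here are hypothetical, not the actual Nexus payloads):</p>

```ruby
# Hypothetical HATEOAS-style tool responses: every reply carries a "hint"
# naming the next tool to call, so an agent can discover the API without
# out-of-band documentation.
require "json"

NEXT_STEP_HINTS = {
  "nexus_ontology" => "Use nexus_type(name) for detailed properties of one type",
  "nexus_type"     => "Use nexus_recent(type) to see recent instances",
  "nexus_recent"   => "Use nexus_query() with SPARQL for more specific queries"
}.freeze

def tool_response(tool, data)
  { "tool" => tool, "data" => data, "hint" => NEXT_STEP_HINTS[tool] }.to_json
end

reply = JSON.parse(tool_response("nexus_type", { "name" => "Decision" }))
# reply["hint"] points the agent at nexus_recent as its next move.
```

<p>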
The organizational memory is no longer just for humans browsing a web UI; it’s infrastructure that agents consume, which was the goal from the start.</p><p>Twenty-nine commits. Twelve merged PRs. On January 1st. I still can’t quite believe it.</p><h3>Days 5–6: January 2nd–3rd — Opening the Floodgates</h3><p><strong>Commits: ~10<br>Theme: Universal ingestion</strong></p><p>I successfully demoed the system at our Friday all-hands company meeting. People are psyched. I’m psyched. The system was powerful, but it only captured Claude Code sessions. What about all those other knowledge sources I mentioned at the start? GitHub PR discussions. Slack threads. Linear issues. What about non-engineers? The vision was passive ingestion from everywhere, right? Time for some magic.</p><figure><a href="https://www.amazon.com/Patterns-Application-Development-Using-AI/dp/B0DN9KK4X7"><img alt="" src="https://cdn-images-1.medium.com/max/682/1*lRcgsies7lJTP1f-UBy71w.png" /></a><figcaption>Want to become the type of person who writes a <em>Universal Webhook Processor</em> and calls it a day? First step, buy my book on <a href="https://www.amazon.com/Patterns-Application-Development-Using-AI/dp/B0DN9KK4X7"><em>Amazon</em></a><em> or </em><a href="https://leanpub.com/patterns-of-application-development-using-ai"><em>Leanpub</em></a><em> (where it’s available in 31 languages)</em></figcaption></figure><h3>Universal Webhook Processor</h3><p>PR #33 was the key unlock. Instead of building bespoke integrations for every SaaS platform, I built a universal webhook endpoint that uses AI to understand and transform any JSON payload:</p><pre>POST /webhooks/:source</pre><p>Point GitHub webhooks here. Linear webhooks here. Slack event subscriptions here. Notion webhooks here. 
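</p><p>To make the “any JSON payload” idea concrete, here is a toy version of the structure-agnostic extraction step: recursively harvest substantive strings from an arbitrary payload. (Purely illustrative; in Nexus this judgment is delegated to an LLM rather than a length heuristic, and <code>text_fragments</code> is not a real Nexus method.)</p>

```ruby
# Toy sketch of structure-agnostic extraction: walk any JSON payload and
# collect strings long enough to plausibly carry meaning. The real system
# hands this judgment to an LLM; the length cutoff here is a stand-in.
require "json"

def text_fragments(node, min_length: 20)
  case node
  when Hash   then node.values.flat_map { |v| text_fragments(v, min_length: min_length) }
  when Array  then node.flat_map { |v| text_fragments(v, min_length: min_length) }
  when String then node.length >= min_length ? [node] : []
  else []
  end
end

payload = JSON.parse(<<~JSON)
  {"action": "opened",
   "pull_request": {"title": "Add retry logic to webhook ingestion",
                    "body": "Retries failed distillation jobs with exponential backoff."}}
JSON

fragments = text_fragments(payload)
# The title and body survive; the short status string "opened" does not.
```

<p>Short status strings like <code>&quot;opened&quot;</code> fall below the cutoff, while titles and bodies survive to be distilled.</p><p>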
Any service that can POST JSON can become a knowledge source.</p><p>The WebhookProcessor service:</p><ol><li><strong>Receives the raw JSON payload</strong>: No assumptions about structure</li><li><strong>LLM analysis</strong>: Uses Gemini 3 Flash to understand what kind of webhook this is and extract meaningful content</li><li><strong>Transcript transformation</strong>: Converts the webhook into a “transcript” format that the existing distillation pipeline can process as if it were coming from Claude Code.</li><li><strong>Job queuing</strong>: Enqueues a distillation job for background processing</li></ol><p>The beautiful part is graceful degradation:</p><ul><li><strong>Slack</strong>: Gets optimized handling with a specific extractor that knows where to find the relevant content. (See next section.)</li><li><strong>Everything else</strong>: The LLM analyzes the JSON structure and makes its best effort to extract meaningful content</li></ul><p>Some random internal tool sends webhooks? As long as there’s meaningful text content somewhere in the payload, Nexus will try to distill knowledge from it.</p><h3>Slack Integration</h3><p>Slack deserved deeper integration than just webhooks. It’s where so much organizational discussion happens.</p><p>PR #34 added:</p><p><strong>Slack Events API handler</strong>: Real-time event processing for messages, reactions, and thread updates.</p><p><strong>SlackThreadCollector</strong>: Fetches and formats entire threads with full context. When a thread gets distilled, we capture the whole conversation, not just the triggering message.</p><p><strong>Threshold filtering</strong>: Not every Slack message is worth preserving. The defaults require at least 3 replies or 2 participants before auto-ingestion. We also filter out casual acknowledgments (“sounds good”, “ok”, “lol”) while capturing substantive discussions.</p><p><strong>ID resolution</strong>: Slack uses internal IDs like U09RS4298TY for users and C01234ABCDE for channels. 
The LLM now has a tool to resolve these to human-readable names. &quot;U09RS4298TY said...&quot; becomes &quot;Obie Fernandez said...&quot;</p><p>The threshold filtering is important. Slack generates enormous amounts of content. Without filtering, you’d be paying to distill “sounds good” and “thanks!” a thousand times a day.</p><h3>Metadata as RDF Properties</h3><p>The final architectural piece: when sessions originate from webhooks, their metadata becomes queryable.</p><p>PR #44 added new RDF properties for webhook-originated sessions:</p><p><strong>nx:sourceUrl</strong>: A link back to the original resource. For a GitHub PR discussion, this links to the PR. For a Slack thread, it links to the thread in Slack.</p><p><strong>nx:resourceType</strong>: What kind of external resource this came from: pull_request, issue, slack_thread, linear_issue, etc.</p><p><strong>nx:eventType</strong>: What triggered the session: pr_opened, pr_reviewed, issue_commented, message_posted.</p><p><strong>nx:significance</strong>: Impact categorization: low, medium, high. 
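</p><p>As a toy illustration of such a categorization (the heuristic and cutoffs below are invented for this sketch; the real assessment could just as well be an LLM judgment):</p>

```ruby
# Hypothetical significance heuristic for webhook-originated sessions.
# Explicit markers or a large audience mean "high"; a real back-and-forth
# between a few people is "medium"; everything else is "low".
def significance(participants:, message_count:, explicit_marker: false)
  return "high" if explicit_marker || participants >= 5
  return "medium" if participants >= 2 && message_count >= 3
  "low"
end

significance(participants: 6, message_count: 2)   # => "high"
significance(participants: 3, message_count: 10)  # => "medium"
significance(participants: 1, message_count: 1)   # => "low"
```

<p>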
Webhook processors can assess significance based on factors like number of participants, length of discussion, or explicit markers.</p><p>Now you can query:</p><pre>PREFIX nx: &lt;https://nexus.zar.app/ontology#&gt;<br>SELECT ?session ?title ?url WHERE {<br>  ?session a nx:Session ;<br>           nx:resourceType &quot;pull_request&quot; ;<br>           nx:significance &quot;high&quot; ;<br>           nx:title ?title ;<br>           nx:sourceUrl ?url .<br>}</pre><p>“Show me all high-significance sessions that came from GitHub pull requests.” The RDF graph knows where knowledge came from, not just what it contains.</p><h3>The Numbers</h3><p>Despite the detail in this now very long blog post, I still feel like I’m only scratching the surface of the work that was involved.</p><p><strong>Code</strong></p><ul><li>~12,800 lines of code across app, lib, and spec directories</li><li>20+ controllers handling various endpoints</li><li>10+ models including Person, RdfEntity, Inquiry, DeviceAuthorization</li><li>15+ services for distillation, transformation, search, conflict detection, identity resolution</li><li>A full-featured first draft of a Nexus ontology with 10 unique types and 23 properties</li><li>Comprehensive test coverage with RSpec (247 examples)</li><li>Working integration with Claude Code, GitHub, Slack, and Linear</li></ul><p><strong>Infrastructure</strong></p><ul><li>Full deployment infrastructure on Render with CI/CD</li><li>Production-ready authentication with GitHub OAuth and API keys</li><li>MCP server for agent integration</li><li>Universal webhook ingestion for any SaaS platform</li></ul><p><strong>Timeline</strong></p><p>December 29th to January 3rd. Call it four working days. 
Here’s a full list of the PRs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OjErfzuKUuZa_SCjVESfIA.png" /><figcaption>I worked directly in main for the first couple of days, but I wish I had not done that</figcaption></figure><h3>What This Means</h3><p>I’ve built a lot of software in my career. I’ve owned and run substantial software delivery consulting organizations. I know how long projects typically take and I know what they cost. I know the difference between “working demo” and “production system.”</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/obie/status/1982857512207426010&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/be88dc4fb18af2b159885f545b5f3721/href">https://medium.com/media/be88dc4fb18af2b159885f545b5f3721/href</a></iframe><p>What Claude Code enabled me to do this week is not incremental improvement. It’s categorical change.</p><p>A system like Nexus, built by a solo developer in the “old” way, would take months. Each of the following bullet points would take at least a week or two, assuming a motivated engineer with little in the way of distractions.</p><ul><li>Research and spike the RDF/SPARQL approach</li><li>Design the schema and ontology</li><li>Build the ingestion pipeline</li><li>Implement distillation with prompt engineering</li><li>Add authentication and authorization</li><li>Build the web UI</li><li>Set up deployment infrastructure</li><li>Add semantic search with vector embeddings</li><li>Build the MCP server</li><li>Test everything thoroughly</li><li>Handle the hundred edge cases that emerge</li></ul><p>That’s at least 13 weeks of focused development. Call it 4–5 months. And that’s with an experienced developer who knows the technologies involved. This would not be a viable side project for a typical engineer who has other deliverable feature work. 
For a busy CTO? Yeah, right.</p><p>Yet I did it in the time between Christmas and New Year’s, and still had time for regular work, walking my dog, and holiday activities. The leverage is absurd. (Admittedly some of the work was done on Claude Code on my phone at the barber shop.)</p><p>It’s not about replacing experienced developers. My architectural judgment, domain knowledge, and product intuition were essential every step of the way. Claude Code couldn’t have built Nexus alone. It’s very likely that <em>you, dear reader </em>cannot build Nexus in a few days if you tried, even with Claude Code’s help. An AI coding agent doesn’t automatically understand my company’s needs, our existing infrastructure, our preferences for certain patterns. But I couldn’t have built it this fast alone either.</p><p>Never. Would. Have. Happened.</p><h3>The Development Experience</h3><p>Productivity measures don’t capture the qualitative difference involved in working this way. Traditional development has a certain rhythm. You think about what you want to build. You start typing. You hit issues, debug them, refer to documentation, search Stack Overflow. You write tests. You refactor. Each step takes time, and there’s cognitive overhead in context-switching between “thinking about the problem” and “mechanically implementing the solution.”</p><p>AI-assisted development collapses that overhead. I stay in “thinking about the problem” mode almost the entire time. When I need something implemented, I describe it. When I need documentation, I ask. When I hit an issue, I describe the symptoms and get debugging suggestions.</p><p>Actually, if I happen to think of something new that’s outside of the current workstream, you know what I do? I used to log an issue. 
Now I use the &amp; prefix in Claude Code to just fire up a remote instance and have it start working on whatever I thought of.</p><p>I’ve been reflecting on this shift in <a href="https://obie.medium.com/what-happens-when-the-coding-becomes-the-least-interesting-part-of-the-work-ab10c213c660">What Happens When the Coding Becomes the Least Interesting Part of the Work</a>, which recently went mega viral. For someone at my experience level, the actual typing of code has long since stopped being the thing that teaches me anything. The intellectual work happens before the first line is written: understanding the problem space, recognizing patterns from decades of experience, making judgment calls about abstraction levels, assessing blast radius of changes, feeling out whether something should be general or specific.</p><p>That’s the “senior thinking” that AI doesn’t replace. What AI does replace is the mechanical translation of those decisions into working code. And honestly? That translation was always the boring part.</p><p>The experience is closer to pair programming with a very capable, very fast colleague who never gets tired and knows every API by heart. I’m still making all the decisions. I’m still responsible for the architecture. But the mechanical parts happen at conversation speed instead of typing speed.</p><h3>Conclusion</h3><p>Enterprise software vendors charge six and seven figures for knowledge management systems. They’re built by teams of dozens over years. They require consultants to implement and months to configure.</p><p>I built equivalent functionality in four days. While doing my day job as a CTO. Yes, I’m exceptional, but so what? I can only do it today because the tools have changed so dramatically.</p><p>If you’re a developer who hasn’t tried Claude Code yet, you’re working with one hand tied behind your back. The productivity multiplier is real. The creative leverage is real. 
The ability to turn ideas into working systems in days instead of months is real.</p><p>But bring your discipline with you. Bring your tests. Bring your judgment. The AI provides velocity. You provide direction. Together, you build things that neither could build alone.</p><blockquote>Live in EMEA and want to work on Nexus and AI agents with me? I’m currently hiring senior product engineering talent with Ruby on Rails expertise at <a href="https://zar.app">ZAR</a>, where we’re building the future of global stablecoin adoption.</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cc8883cc21e9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Full Supplement Stack for 2026]]></title>
            <link>https://obie.medium.com/my-full-supplement-stack-for-2026-a05d0c9b714e?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/a05d0c9b714e</guid>
            <category><![CDATA[supplements]]></category>
            <category><![CDATA[biohacking]]></category>
            <category><![CDATA[health]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Thu, 01 Jan 2026 21:11:44 GMT</pubDate>
            <atom:updated>2026-01-02T00:37:31.825Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lTFwFu2VB_gT9_LcrIJFPw.jpeg" /></figure><h4>Better living through chemistry</h4><p>I’ve spent most of my life operating at the intersection of high-performance engineering, entrepreneurship, and electronic music. My schedule swings between deep technical work, long creative sessions, international travel, and occasional late-night shows. That lifestyle demands a cognitive and physical baseline that doesn’t happen by accident. Besides regular exercise and an increasingly consistent sleep schedule, I’ve also built a supplement stack that keeps me sharp, stable, and resilient, and for 2026 I’ve refined it into something that feels complete.</p><p>This is my version of <em>better living through chemistry </em>at age 51.</p><p>What follows is a full breakdown of what I take, why I take it, and how the pieces fit together. For what it’s worth, this is also the first time I’m documenting this information in public. Thanks to <a href="https://x.com/@bryan_johnson">@bryan_johnson</a> and the popularity of peptides it seems like the Overton window for this kind of supplementation has shifted quite a bit in the last 12 months. I’m still a freak, but I’m in good company now.</p><p><em>Nothing in this post is medical advice. A lot of this involves prescription medication and experimental compounds; work with a real doctor if you decide to go anywhere near this territory. Also yes, this shit gets expensive fast and your mileage definitely will vary. Don’t say I didn’t warn you.</em></p><h3>Core Cognitive Stack</h3><p>The foundation of my productivity revolves around a simple principle: support focus without degrading long-term neurological health. I do that with a combination of modafinil, phenylpiracetam, and targeted precursors that has entirely cut my use of traditional amphetamine-based ADD medicines.</p><p><strong>Modafinil </strong>This is my primary focus tool. 
It gives me reliable mental clarity and stamina during long engineering or creative sessions, and it smooths out jet lag when I’m bouncing between continents. I dose conservatively and take structured breaks to keep it effective.</p><p><strong>Phenylpiracetam </strong>Every few days I rotate modafinil out and replace it with phenylpiracetam. The effect profile is different — more dopaminergic, more immersive — and it helps prevent tolerance buildup. It’s perfect for deep, locked-in work where I want both speed and enjoyment.</p><h4>Choline + Dopamine Precursors</h4><p>Modafinil can drain choline levels, which is why I pair the stack with daily <strong>CDP-Choline</strong>. It prevents headaches and keeps mental clarity crisp. I also use <strong>L-Tyrosine</strong> every day to ensure sustained dopaminergic output, especially on heavy performance days.</p><h4>Stress, Focus &amp; Mood Modulation</h4><p>Some days demand raw horsepower, but most days require controlled, quiet consistency. This part of my stack keeps my internal environment steady under the pressure of work, deadlines, and travel.</p><p><strong>L-Theanine </strong>This is the unsung hero of my daily routine. It keeps cortisol smooth, helps me avoid stimulant edginess, and improves sleep quality later in the day. It’s subtle, reliable, and essential.</p><p><strong>NAC </strong>Long-term stimulant use, travel fatigue, and intense schedules create oxidative stress. NAC keeps glutathione topped up and helps maintain clean neurotransmitter cycling. It also pairs well with dopamine-modulating compounds.</p><h4>Peptide-Based Enhancement</h4><p>Peptides are where modern neurochemistry gets genuinely interesting. They modulate plasticity and focus in ways traditional supplements can’t match.</p><p><strong>Semax</strong> Provides a clean, upward-tilting cognitive profile: slightly more plasticity, slightly more drive, noticeably better retention during reading or learning sessions. 
It pairs extremely well with coding.</p><p><strong>Selank </strong>Where Semax nudges productivity upward, Selank smooths emotional load downward. Travel and deadlines are easier to navigate on Selank. It provides calm clarity without dullness.</p><blockquote><strong>Looking for a reliable peptides supplier? Try </strong><a href="https://www.peptaura.com/"><strong>Peptaura</strong></a> <strong>and use code OBIE for a 20% discount at checkout.</strong></blockquote><h3>Longevity, Cellular Energy &amp; Metabolic Control</h3><h4>Vitamin D3 + K2</h4><p>A foundational combo for immunity, mood stability, hormone balance, and calcium regulation.</p><p><strong>How it works (plain language):</strong></p><ul><li>D3 boosts mood, immune function, and metabolic health.</li><li>K2 ensures calcium goes to bones instead of arteries.</li><li>Together they support long-term cardiovascular and cognitive health.</li></ul><h4>Magnesium (Glycinate &amp; L-Threonate)</h4><p>I use <strong>glycinate</strong> for muscle recovery and calm, and <strong>L-threonate</strong> for deep sleep and cognitive restoration.</p><p><strong>How it works:</strong></p><ul><li>Glycinate is highly bioavailable and relaxes the nervous system.</li><li>L-threonate crosses the blood–brain barrier and improves synaptic density.</li></ul><h4>Probiotics (Lactobacillus Reuteri)</h4><p>This specific strain has unique effects on mood, immune resilience, and even social behavior.</p><p><strong>How it works:</strong></p><ul><li>Increases oxytocin signaling.</li><li>Improves skin, gut lining integrity, and overall immune modulation.</li><li>Promotes a calmer, more connected mental state.</li></ul><h4>Triphala</h4><p>My go‑to for gentle digestive resilience, taken every day at night along with probiotics on an empty stomach for maximum effectiveness. 
Along with a glass of high-quality psyllium husk fiber every day, this makes sure that the GLP-1 doesn’t prevent me from pooping regularly.</p><p><strong>How it works:</strong></p><ul><li>A blend of three Ayurvedic fruits that improve gut motility and reduce inflammation.</li><li>Supports nutrient absorption and reduces digestive stress.</li></ul><h4>Beetroot Extract</h4><p>Taken before training for vascular support and stamina.</p><p><strong>How it works:</strong></p><ul><li>Provides nitrates that convert into nitric oxide.</li><li>Improves blood flow, workout capacity, and cognitive oxygenation.</li><li>Supports strong sexual performance!</li></ul><h4>Collagen Peptides</h4><p>Part of my long-term joint, skin, and connective tissue maintenance. I know it works because people always assume I’m in my 30s, in no small part due to my healthy skin complexion.</p><p><strong>How it works:</strong></p><ul><li>Provides amino acids for collagen synthesis.</li><li>Supports joint health and faster recovery.</li></ul><h4>Creatine Monohydrate</h4><p>One of the most-researched performance supplements and part of my daily baseline.</p><p><strong>How it works:</strong></p><ul><li>Enhances ATP regeneration in both muscles and neurons.</li><li>Improves strength, recovery, and cognitive clarity.</li></ul><p>A glass of creatine in the morning also works really well to shake the mental fog when I get less than 7 hours of sleep.</p><h4>Pterostilbene</h4><p>A more bioavailable analog of resveratrol derived from blueberries.</p><p><strong>How it works:</strong></p><ul><li>Supports mitochondrial function.</li><li>Provides antioxidant and anti-inflammatory benefits.</li><li>May improve lipid profiles and longevity markers.</li><li>Noted for benefits to young and healthy-looking skin.</li></ul><h4>NAC</h4><p>Previously listed in the stress/mood section, but it also belongs here.</p><p><strong>How it works:</strong></p><ul><li>Replenishes glutathione, the body’s master 
antioxidant.</li><li>Protects the liver and reduces oxidative stress from stimulants or travel.</li></ul><h4>Red Yeast Rice + CoQ10</h4><p>My cholesterol levels are somewhat high due to genetic factors. So I take Red Yeast Rice every night for lipid management instead of prescription statins.</p><p><strong>How it works:</strong></p><ul><li>Red yeast rice contains monacolin K, which works like a mild statin.</li><li>CoQ10 prevents the mitochondrial depletion that statins can cause.</li></ul><h4>Pygeum + Saw Palmetto</h4><p>For hormonal balance and prostate support.</p><p><strong>How it works:</strong></p><ul><li>Both reduce DHT conversion.</li><li>Smooth out urinary flow, reduce inflammation, and support long-term male endocrine health.</li></ul><h4>DIM (Diindolylmethane)</h4><p>DIM has been a quiet but reliable part of my hormonal health toolkit, especially for estrogen metabolism.</p><p><strong>How it works (plain language):</strong></p><ul><li>Helps the body convert estrogen into its <em>beneficial</em> metabolites rather than the problematic ones.</li><li>Supports hormonal balance, mood stability, and clearer skin.</li><li>Useful during cycles of high stress, travel, or disrupted sleep when estrogen pathways tend to get messy.</li></ul><p>It’s subtle, but over time it keeps things smooth in a way you definitely notice when you stop taking it.</p><h4>Taurine</h4><p>A multi-functional amino acid with wide-ranging benefits.</p><p><strong>How it works:</strong></p><ul><li>Supports GABAergic calming.</li><li>Improves cardiovascular function.</li><li>Enhances mitochondrial performance and electrolyte balance.</li></ul><h4>Lysine</h4><p>One of the oldest staples in my supplement routine (going on 20 years) and still indispensable.</p><p><strong>How it works (plain language):</strong></p><ul><li>Lysine is essential for collagen formation, immune resilience, and tissue repair.</li><li>Helps prevent HSV flare-ups — the original reason many people begin supplementing 
it.</li><li>Supports calm and mood stability by influencing serotonin receptors.</li><li>Useful during heavy travel, stress cycles, or when sleep is inconsistent.</li></ul><p>It’s simple, inexpensive, and consistently effective — which is why it never left my stack.</p><h4>Serrapeptase</h4><p>An enzyme with systemic anti-inflammatory and fibrinolytic effects.</p><p><strong>How it works:</strong></p><ul><li>Breaks down excess fibrin, scar tissue, and inflammatory byproducts.</li><li>Supports cardiovascular health and reduces swelling.</li></ul><h4>Fish Oil (Omega-3 EPA/DHA)</h4><p>Fish oil has been part of my baseline for years because it supports cardiovascular health, reduces inflammation, and improves cognitive performance.</p><p><strong>How it works (plain language):</strong></p><ul><li>EPA reduces systemic inflammation.</li><li>DHA is a structural component of neuronal membranes and improves signaling efficiency.</li><li>Together they support mood stability, brain health, and long-term heart protection.</li></ul><h4>Black Seed Oil (Nigella Sativa)</h4><p>A broad-spectrum anti-inflammatory and metabolic support compound I’ve used intermittently.</p><p><strong>How it works (plain language):</strong></p><ul><li>Contains <strong>thymoquinone</strong>, which reduces inflammatory markers and oxidative stress.</li><li>Supports metabolic health and may improve insulin sensitivity.</li><li>Provides mild antihistamine and bronchodilation effects — helpful during travel or allergy cycles.</li></ul><h4>Saffron Extract</h4><p>Saffron is one of the few natural mood-enhancing supplements with strong clinical backing.</p><p><strong>How it works (plain language):</strong></p><ul><li>Boosts serotonin through mild reuptake modulation.</li><li>Reduces anxiety and improves mood without sedation.</li><li>Has comparable effects to low-dose SSRIs in several trials, but with a cleaner side-effect profile.</li></ul><h3>Taking care of my dopamine 
levels</h3><h4><strong>Agmatine</strong></h4><p>Plays a stabilizing role in my stack. By modulating NMDA receptors and supporting nitric oxide pathways, it helps smooth out the sharper edges of stimulants, improves training performance, and slows tolerance buildup across dopaminergic compounds.</p><p><strong>How it works (plain language):</strong></p><ul><li>It acts like a “volume knob” on excitatory neurotransmission, preventing overstimulation.</li><li>It improves blood flow through nitric oxide pathways, which supports both cognition and physical training.</li><li>It interacts with the same receptors involved in opioid tolerance, which is why it can help prevent tolerance creep from dopaminergic compounds too.</li></ul><p>It’s one of the simplest additions with the broadest synergistic effects.</p><h4><strong>Mucuna Pruriens</strong></h4><p>My natural dopamine reset. Because it contains L-DOPA, it provides gentle dopaminergic replenishment on the days following heavier modafinil, phenylpiracetam, or peptide cycles. Used sparingly, it prevents the motivational dip that can follow high-output periods and supports a smooth return to baseline.</p><p><strong>How it works (plain language):</strong></p><ul><li>Mucuna contains <strong>L-DOPA</strong>, the direct precursor your brain uses to make dopamine.</li><li>Instead of forcing a dopamine spike, it restores raw material so your brain can rebuild its supply naturally.</li><li>This makes it ideal for days after hard mental output, when neurotransmitter stores may be temporarily depleted.</li></ul><p><strong>How it compares to synthetic dopamine agonists:</strong></p><ul><li><strong>Mucuna (L-DOPA):</strong> Provides dopamine <em>building blocks.</em> Your brain stays in control of how much is converted and released. The result is smoother, more physiologic support.</li><li><strong>Synthetic agonists (e.g., pramipexole, ropinirole):</strong> Directly stimulate dopamine receptors, bypassing normal regulation. 
This produces much stronger effects but also increases risks like receptor downregulation, compulsive behaviors, and harsh withdrawal.</li><li><strong>Why I prefer Mucuna:</strong> Natural replenishment avoids the “yo-yo” pattern that comes with synthetic agonists. It lifts motivation without hijacking dopamine pathways or creating dependency patterns.</li></ul><p>Used correctly, it feels like sharpening the motivational edge without overstimulation.</p><h3>Sustainable Performance &amp; Energy</h3><p>Performance only matters if it’s sustainable. This section of my stack focuses on metabolism, mitochondrial efficiency, cardiovascular health, and fat loss.</p><p><strong>Methylene Blue </strong>A low dose supports mitochondrial electron transport and consistently sharpens my thinking. I use it deliberately, not daily, when I want maximum clarity.</p><p><strong>Lactoferrin </strong>Initially I added this for immune support, but it has a meaningful fat-loss effect when used consistently. It’s also excellent during travel and high-stress cycles.</p><p><strong>Nattokinase </strong>Cardiovascular resilience matters when traveling constantly and occasionally using stimulants. Nattokinase supports fibrin breakdown and blood flow.</p><h3>Sleep Optimization &amp; Recovery</h3><p>Everything falls apart without sleep, especially in a lifestyle that often includes late-night shows and early-morning flights. I keep this part simple.</p><p><strong>Magnesium L-Threonate </strong>This form crosses the blood-brain barrier and reliably deepens my sleep. It helps me wake up clear instead of groggy, especially after stressful days.</p><p><strong>Theanine (Night Protocol) </strong>If I take theanine during the day, I use a second dose in the evening to ease the transition into sleep. 
It stacks well with magnesium for a non-drug, non-groggy wind-down.</p><p>I’ve drawn major inspiration from Bryan Johnson’s Blueprint protocol in the last year, prioritizing consistency and environmental tweaks to combat the chaos of irregular schedules. I aim for a fixed bedtime every night, winding down with a physical book in hand about 10 minutes before lights out, and keep the room pitch-black and cool around 65–68°F during the night. Lately I’ve been trying to not drink any liquids before bedtime to avoid waking up to pee. Finishing my last meal four hours before bed prevents digestive interference, letting me recover deeply.</p><p>Lately I’ve also been making it a point to get bright morning light exposure right upon waking to reset my circadian rhythm. Since shifting my wake time to before sunrise, it’s not possible to get this naturally so I’m planning to buy one of those high LUX lamps.</p><h3>Microdosing GLP-1s</h3><p>This is one of the biggest structural changes I’m making to my 2025 stack: switching to <strong>very low-dose GLP-1 receptor agonists</strong> as a background metabolic governor instead of going full “Wegovy dosage” mode.</p><p>I’m using “microdosing” here in the biohacker sense: doses substantially lower than the standard obesity/diabetes protocols, tuned to blunt appetite and clean up food noise without the full GI side-effect package or drastic, rapid weight loss.</p><h3>What GLP-1s Actually Do</h3><p>GLP-1 receptor agonists like semaglutide and liraglutide work through a few big levers:</p><ul><li>Appetite and satiety centers in the brain</li><li>Gastric emptying and gut signaling</li><li>Reward and food motivation circuitry</li><li>Glycemic and cardiometabolic effects</li></ul><p>Large randomized trials (STEP, SELECT, SCALE) show significant weight loss and cardiovascular improvements at full therapeutic doses. 
I took advantage of that in 2025 to drop almost 20 pounds and increase my motivation at the gym, and am now feeling healthier than ever.</p><p>My use case in 2026 will be different: leveraging the same mechanisms at much lower intensities.</p><h3>Why Microdosing Might Still Work</h3><p>GLP-1 drugs are very expensive and dose-responsive. Lower exposures will not be felt as strongly, but should still:</p><ul><li>Reduce hunger and food chatter</li><li>Flatten reward response to hyperpalatable foods</li><li>Improve post-meal glycemic stability</li></ul><p>Therefore I treat GLP-1s as a <strong>long-horizon metabolic nudge</strong>, not a crash diet.</p><h3>Experimental &amp; Research Compounds</h3><p>I often test compounds that sit on the frontier of nootropics and have done so for nearly 30 years, since the first time I got my hands on some Piracetam. These aren’t daily tools but part of my ongoing exploration.</p><ul><li><strong>Seltorexant</strong></li><li><strong>Mirodenafil</strong></li><li>AMPAkine-adjacent or receptor-specific modulators</li></ul><p>These remain in the R&amp;D bucket — interesting, sometimes powerful, but not part of my baseline stack.</p><h3>α-MSH (Melanotan-Related Compounds)</h3><p>I’ve recently started experimenting with α-MSH analogs for their unique interactions with melanocortin receptors that influence appetite, inflammation, and mood. 
In low, infrequent doses they are supposed to provide a subtle but noticeable lift in drive, energy, and overall resilience.</p><p><strong>How it works (plain language):</strong></p><ul><li>Your body naturally produces α-MSH as part of the system that regulates appetite, energy expenditure, and inflammation.</li><li>When stimulated, melanocortin receptors essentially send the signal: <em>“Boost energy, reduce appetite, stay alert.”</em></li><li>The effect feels like a mild rise in motivation and metabolic tone, not a stimulant high.</li></ul><p>They aren’t part of my daily routine, but they sit in the category of highly targeted tools I deploy when the benefits align with the demands of my schedule.</p><h3>Dopamine-Targeted Research Compounds</h3><p>I’ve been experimenting with a rotating set of modern, targeted molecules that support motivation, mood, and cognitive drive without the crash profile of classic stimulants. The rabbit hole for these kinds of compounds is super deep. They include:</p><ul><li><strong>Bromantane</strong></li><li><strong>Usmarapride</strong></li><li><strong>ACD-856</strong></li><li><strong>TAK-653</strong></li><li><strong>GB-115</strong></li><li><strong>Pinealon</strong></li></ul><p>I don’t treat these as daily staples. 
They’re tools, not crutches — useful when I need to amplify creativity, stabilize mood during intense work cycles, or get past bottlenecks in motivation.</p><h3>How the Stack Fits Together</h3><p>The value of this stack isn’t in the individual items — it’s in the architecture.</p><h4>Morning</h4><ul><li>Light dopamine precursors</li><li>Modafinil or phenylpiracetam</li><li>Theanine for smoothing</li><li>NAC and metabolic support</li><li>Low-dose GLP-1 as a passive appetite/reward governor</li></ul><h4>Afternoon</h4><ul><li>Peptides during learning or coding sessions</li><li>Optional dopamine-targeted compounds</li><li>Hydration + electrolytes</li></ul><h4>Evening</h4><ul><li>Magnesium L-Threonate</li><li>Theanine</li><li>No stimulants after the cutoff</li><li>Lactoferrin and nattokinase depending on travel and workload</li></ul><h4>Weekly Cadence</h4><ul><li>Rotate stimulants</li><li>Avoid tolerance buildup</li><li>Maintain stable sleep</li><li>Keep GLP-1 dosing modest and consistent</li><li>Track energy, focus, appetite, and mood without chasing novelty</li></ul><h3>What’s New for 2026</h3><p>A few shifts define this year’s stack:</p><ul><li>Greater focus on metabolic health and cardiovascular longevity</li><li>Strategic use of micro-dosed GLP-1s</li><li>More precision with dopamine modulation</li><li>Peptides as an established productivity tool</li><li>Targeted, mechanism-driven supplementation instead of broad-spectrum blends</li><li>Smarter cycling and recovery</li></ul><p>The result is a stack that supports both sides of my life: the engineer and the artist.</p><blockquote>Looking for a reliable peptides supplier? Try <a href="https://www.peptaura.com/">Peptaura</a> and use code OBIE for 20% discount at checkout.</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a05d0c9b714e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Sísifo versus el Director de Orquesta]]></title>
            <link>https://obie.medium.com/s%C3%ADsifo-versus-el-director-de-orquesta-88ebe59706ad?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/88ebe59706ad</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[society]]></category>
            <category><![CDATA[medium-en-español]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Wed, 24 Dec 2025 23:03:35 GMT</pubDate>
            <atom:updated>2025-12-25T16:15:57.706Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aiI67CmFjVHeR3fU5iR5cw.png" /></figure><h4><strong>Por qué la IA se siente como trampa para algunas personas</strong></h4><p>La reacción contra la IA en la era de los modelos de lenguaje suele presentarse como un debate sobre la verdad, la seguridad, la creatividad o el empleo. Ese encuadre pasa por alto el conflicto real. Lo que está bajo ataque no es el trabajo, ni el arte, ni siquiera las relaciones humanas. Es una forma particular de mantener un sentido del yo.</p><p>La verdadera división se da entre dos modelos de subjetividad. Llamémoslos <em>monádico</em> y <em>poliádico</em>.</p><p>El yo monádico es indivisible. Experimenta el significado a través del aislamiento, el esfuerzo y la lucha visible. El trabajo solo es real si puedes verte a ti mismo haciéndolo. El resultado importa menos que el esfuerzo que lo precede. Este es el modelo de Sísifo. Empujo la piedra, luego existo, interminablemente.</p><p>El yo poliádico funciona de otra manera. Mantiene su coherencia a través de la coordinación. El pensamiento se externaliza por defecto. Las ideas rebotan entre personas, cuadernos, herramientas y sistemas. El yo no se ve amenazado por la delegación porque la autoría vive en la dirección, no en la ejecución. Este es el modelo del director de orquesta. Orquesto, luego existo.</p><p>Ninguno de estos modelos es particularmente nuevo y ambos han coexistido durante mucho tiempo. Lo nuevo es que los LLM reducen el costo de la cognición externalizada casi a cero. 
Ese colapso desestabiliza la identidad monádica de una forma que se siente escandalosamente existencial, incluso cuando los argumentos en la superficie suenan técnicos o morales.</p><p>Esto se ve con claridad en lo que parece ser un pánico viral en torno a la Generación Z usando ChatGPT para analizar conversaciones de citas y redactar respuestas.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*oHcPuL2j1TCSJXyW.png" /></figure><p>Las objeciones al uso de ChatGPT de esta manera suelen ser ruidosas y mayormente negativas. Distópico. Sin alma. Inauténtico. Manipulador. Como si algo sagrado hubiera sido profanado.</p><p>Pero en realidad no está pasando nada nuevo.</p><p>Las personas siempre han externalizado la incertidumbre romántica con sus amigos y confidentes más cercanos. Muestran mensajes privados. Ensayan conversaciones, especialmente las difíciles. Preguntan qué quiso decir alguien. En la era de las redes sociales, incluso a veces externalizan el consejo amoroso a una audiencia más amplia. La diferencia ahora es que la capa de coordinación para esa externalización es mucho más rápida, más silenciosa y, creo que de forma beneficiosa, más coherente. La lucha ritualizada ha sido eliminada.</p><p>Pero para un yo monádico, la lucha es el punto. La incertidumbre romántica funciona como la piedra de Sísifo. Si no sufriste la confusión ni arriesgaste la vergüenza, entonces la conexión resultante se siente inmerecida. La optimización parece trampa porque evita la prueba que produce significado, incluso si esa prueba puede arruinar cualquier posibilidad de un resultado positivo.</p><p>¿Usar ChatGPT para guiar el comportamiento en una relación es realmente distinto de seguir el consejo de un terapeuta que cita la investigación de Gottman? Si ambos caminos conducen a una modificación deliberada del comportamiento, ¿por qué uno se siente mal y el otro aceptable?</p><p>Funcionalmente, no hay diferencia. El consejo puede ser idéntico. 
El comportamiento puede ser idéntico. Los resultados pueden ser idénticos.</p><p>La diferencia es el ritual. Por eso las mismas personas que rechazan los mensajes asistidos por IA no tienen problema en pagar por libros de autoayuda o terapia.</p><p>Un terapeuta preserva la jerarquía. Controla el acceso al conocimiento. Cobra dinero, mucho dinero. Exige agendas, presencia y exposición social. El consejo llega envuelto en un esfuerzo sancionado por la sociedad. El yo monádico sobrevive intacto a ese tipo de externalización porque el sufrimiento, el permiso y la legitimidad siguen formando parte del proceso.</p><p>Un LLM colapsa todo eso en un instante. Sin jerarquía. Sin guardianes. Sin costo social. Sin trabajo visible. Solo coordinación.</p><p>Ese colapso es lo que se siente distópico para mucha gente.</p><p>En otro momento político, esto habría provocado una reacción conservadora en toda regla. No la versión actual de guerra cultural, sino el impulso más antiguo de defender la jerarquía, la fricción y la autoridad ganada. Si el trumpismo no hubiera vaciado al conservadurismo estadounidense y lo hubiera reemplazado por agravio y espectáculo, creo que la oposición a la IA sería mucho más ideológica y mucho más organizada.</p><p>Los LLM aplanan estructuras de estatus que antes se sentían naturales. Cortocircuitan profesiones basadas en el control de acceso. Disuelven el vínculo entre esfuerzo y legitimidad. Ese es exactamente el tipo de cosas que el conservadurismo tradicional solía existir para resistir.</p><p>He visto la misma división desarrollarse en el software durante el último año, y yo vivo firmemente en uno de esos lados.</p><p>Existe una versión del programador cuya identidad está atada a las pulsaciones del teclado. El código es real si está escrito a mano. Pensar se equipara con escribir. La asistencia se siente como contaminación. El significado proviene de la manipulación directa de símbolos, línea por línea, en soledad. 
Seguramente hay una correlación con los programadores que juran por VIM.</p><p>En contraste, está el programador que trata el desarrollo de software como un sistema complejo, no como una actuación. El trabajo no es teclear. El trabajo es especificar intención, restricciones y criterio, y luego guiar el sistema hacia un resultado. Intuyo aquí una conexión con quienes idealizamos enfoques como Behavior-Driven Development (BDD, desarrollo guiado por comportamiento), que ya insinuaban un futuro donde la implementación quedaba subordinada a la especificación.</p><blockquote>Cuando la gente dice que las interacciones mediadas por IA no tienen alma, no está haciendo una afirmación metafísica. Está describiendo la ausencia de un ritual que valida la autenticidad para ellos.</blockquote><p>Hoy uso herramientas agénticas como Claude Code para generar, refactorizar, probar y explorar. No solo el código, sino todo el proyecto, incluyendo el trabajo de otras personas. Solo intervengo donde el juicio importa. Esto no me hace sentir menos programador. Me hace sentir más responsable del resultado, porque puedo hacerme cargo de una parte mayor de él.</p><p>El tema de la música generada con IA hace este conflicto aún más evidente.</p><p>El artista monádico ubica la autenticidad en el esfuerzo. El sufrimiento es la prueba. El trabajo manual es la señal. Las herramientas se vuelven sospechosas cuando cruzan una línea invisible entre asistencia y autoría.</p><p>Yo también hago música de manera profesional, y este año me he apoyado fuertemente en la generación asistida por IA con Suno, especialmente para materializar mi composición original. No porque no pueda escribir o producir sin ella, sino porque me importan más los resultados que ejecutar los rituales tradicionales por mí mismo.</p><p>Este enfoque no es nuevo para mí. Tengo problemas auditivos bastante significativos por décadas de ambientes ruidosos, lo que significa que no puedo hacer una mezcla final decente ni de broma. 
Así que trabajo con un excelente ingeniero de grabación para terminar mis temas, y no hay ninguna vergüenza en eso.</p><p>En cuanto a Suno, si pudiera permitirme contratar regularmente a compositores y cantautores de clase mundial para colaborar como vocalistas y pulir mis letras, lo haría sin ningún pudor.</p><p>Como muchos productores modernos, trato el sonido como un material maleable. Uso presets de sintetizadores y sampling sin reparos. Compongo a través de dirección, selección e iteración, a menudo en colaboración con artistas más talentosos que yo. No confundo mi inversión de tiempo manual con el significado. Tal vez confundo el significado con el significado.</p><p>Ya hemos visto esta película antes. El sampling. Las cajas de ritmos. La música electrónica. El autotune. Cada uno provocó pánico sobre el alma y la autenticidad. Cada vez, el arte sobrevivió. La jerarquía existente no.</p><p>En programación, en música y ahora en las relaciones, el conflicto es el mismo. Las herramientas que reducen drásticamente los costos de coordinación amenazan identidades construidas alrededor del aislamiento, la lucha y el sufrimiento.</p><p>Cuando la gente dice que las interacciones mediadas por IA no tienen alma, no está haciendo una afirmación metafísica. Está describiendo la ausencia de un ritual que valida la autenticidad para ellos. El alma, en este contexto, significa espontaneidad bajo incertidumbre. Significa actuar sin red. Significa no saber qué hacer y hacerlo de todos modos. Para mí, entre otras cosas, significa estar divorciado dos veces a los 51 años.</p><p>Quita eso, y para algunos seres humanos la experiencia se siente vacía, incluso si funciona mejor.</p><p>Este mismo pánico ya ocurrió antes.</p><p>La imprenta desencadenó el mismo tipo de miedo. Información peligrosa. Pérdida de habilidades. Degradación estética. Voces no autorizadas que de pronto sonaban competentes. Las quejas no eran incorrectas en un sentido estrecho. Copiar a mano sí disminuyó. 
Proliferaron libros malos. La desinformación se propagó más rápido.</p><p>Pero el miedo más profundo no era sobre la calidad. Era sobre la legitimidad.</p><p>Si cualquiera podía imprimir, ¿cómo sabías quién merecía autoridad? Si la alfabetización se expandía, ¿cómo preservabas el vínculo entre esfuerzo y valor? Si el conocimiento podía adquirirse sin aprendizaje formal, ¿qué pasaba con las identidades construidas alrededor del dominio logrado mediante el sufrimiento?</p><p>Ahora imagina un mundo donde esa oposición hubiera ganado. Imprentas fuertemente reguladas. Libros caros y escasos. Alfabetización reservada a una élite. Copiar preservado como un oficio venerado. Cultura protegida del “slop”.</p><p>La innovación se desacelera. El poder se concentra. Los guardianes prosperan. La estética del esfuerzo sobrevive. Los beneficios de la coordinación nunca llegan.</p><p>Los LLM ocupan el mismo lugar histórico. No son una ruptura. Son la herramienta de coordinación más reciente en una larga cadena. Las lanzas extendieron el alcance. El fuego extendió la digestión. Las bicicletas extendieron la locomoción. Las calculadoras extendieron la aritmética. Los motores de búsqueda extendieron la memoria.</p><p>Cada vez, la unidad del valor humano cambió. De la fuerza al objetivo. De la memoria al juicio. De la ejecución a la dirección.</p><p>Lo completamente nuevo es que los LLM empujan ese cambio hacia la cognición misma. Pero como padre de hijos de la Generación Z, entiendo por qué no se inmutan. Ya son poliádicos por defecto. Para bien o para mal, crecieron con un iPad en las manos. Su identidad se construyó en red desde el inicio. Su pensamiento se externaliza a través de mensajes de texto con sus amigos. La coordinación no amenaza su coherencia. Al contrario, es cómo la mantienen. Para ellos, usar la mejor herramienta disponible para reducir la ambigüedad no es hacer trampa. 
Es competencia.</p><p>Así que, al final del día, la indignación contra la IA proviene sobre todo de personas cuyo sentido del yo depende de la fricción. Eso no las hace estar equivocadas. Solo hace que este conflicto vaya más allá de cuestiones de política o ética. Porque no vas a convencer a alguien de abandonar una estrategia de mantenimiento identitario citando ganancias de eficiencia.</p><p>Lo que sí puedes hacer es nombrar el intercambio con honestidad, y eso fue lo que me inspiró a escribir este ensayo.</p><p>Los LLM no eliminan el elemento humano. Eliminan el sufrimiento sancionado por la sociedad. A veces eliminan la necesidad de pedir permiso. Eliminan el trabajo visible en el que algunas personas se apoyan para sentirse reales.</p><p>Por otro lado, lo que dejan intacto es la autoría. Espera, escúchame. Incluso usando IA, los humanos siguen eligiendo sus objetivos. Siguen decidiendo qué importa. Siguen aceptando o rechazando el consejo que reciben. Y, lo más importante, siguen cargando con las consecuencias.</p><p>Pase lo que pase, el director de orquesta sigue en el podio.</p><p>La pregunta abierta no es si la asistencia de la IA funciona. Claramente funciona, en muchos ámbitos. La pregunta es si la humanidad está dispuesta a dejar atrás identidades que requieren confusión, esfuerzo y aislamiento para experimentar significado, sin pensar que eso nos hace menos humanos.</p><p>La historia sugiere que sí, eventualmente. No porque las nuevas herramientas sean moralmente superiores, sino porque la coordinación compone y la lucha ritualizada no.</p><p>Sísifo puede ser convincente. Pero, al final, el director de orquesta gana.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=88ebe59706ad" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Sisyphus versus the Conductor]]></title>
            <link>https://obie.medium.com/sisyphus-versus-the-conductor-b2097f55dd9e?source=rss-9e1370f50f6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2097f55dd9e</guid>
            <category><![CDATA[essay]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[society]]></category>
            <dc:creator><![CDATA[Obie Fernandez]]></dc:creator>
            <pubDate>Wed, 24 Dec 2025 22:38:21 GMT</pubDate>
            <atom:updated>2025-12-24T22:38:21.959Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aiI67CmFjVHeR3fU5iR5cw.png" /></figure><h4>Why AI feels like cheating to some people</h4><p>The backlash against AI in the age of large language models is usually framed as a debate about truth, safety, creativity, or jobs. That framing misses the real conflict. What is under attack is not work, or art, or even human relationships. It is a particular way of maintaining a sense of self.</p><p>The real divide runs between two models of selfhood. Call them <em>monadic</em> and <em>polyadic</em>.</p><p>The monadic self is indivisible. It experiences meaning through isolation, effort, and visible struggle. Work is only real if you can watch yourself doing it. Output matters less than the labor that precedes it. This is the Sisyphus model. I push the stone, therefore I am, endlessly.</p><p>The polyadic self works differently. It maintains coherence through coordination. Thinking is externalized by default. Ideas bounce off people, notebooks, tools, systems. The self is not threatened by delegation because authorship lives in direction, not execution. This is the conductor model. I orchestrate, therefore I am.</p><p>Neither model is particularly new, and the two have coexisted for a long time. What is new is that LLMs collapse the cost of <em>externalized cognition</em> to near zero. That collapse destabilizes monadic identity in a way that feels outrageously existential, even when the surface arguments sound technical or moral.</p><p>You can see this clearly in what seems to be a viral panic about Gen Z using ChatGPT to analyze dating conversations and craft replies.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VdgQdClb-WezH0UsbTRNfQ.png" /></figure><p>Objections to the use of ChatGPT in this way tend to be loud and mostly negative. Dystopian. Soulless. Inauthentic. Manipulative. 
As if something sacred had been violated.</p><p>But nothing new is actually happening.</p><p>People have always externalized romantic uncertainty with their closest friends and confidantes. They reveal private text messages. They roleplay conversations, especially disagreements. They ask what something meant. In the social media era, they even sometimes crowdsource romantic advice. The difference now is that the coordination layer for externalization is far faster, quieter, and (beneficially, I think) more coherent. The ritualized struggle has been removed.</p><p>But for a monadic self, struggle is the point! Romantic uncertainty functions like the Sisyphus stone. If you did not suffer through confusion and risk embarrassment, then the resulting connection feels unearned. Optimization looks like cheating <em>because it bypasses the ordeal that produces meaning</em>. Even if the ordeal could sink all chances of a positive outcome!</p><p>Is using ChatGPT to guide relationship behavior any different from following advice from a therapist citing Gottman’s research? If both routes lead to deliberate behavioral modification, why does one feel wrong and the other acceptable?</p><p>Functionally, there is no difference. The advice can be identical. The behavior can be identical. The outcomes can be identical.</p><p>The difference is ritual. Which is why the same people who recoil at AI-assisted texting think nothing of paying for self-help books or therapy.</p><p>A therapist preserves hierarchy. They gatekeep knowledge. They charge money. (A lot of money!) They require scheduling, presence, and social exposure. The advice arrives wrapped in sanctioned-by-society effort. The monadic self survives that kind of externalization intact because suffering, permission, and legitimacy remain part of the process.</p><p>An LLM collapses all of that in an instant. No hierarchy. No gatekeeper. No social cost. No visible labor. 
Just coordination.</p><p>That collapse is what feels dystopian to a lot of people.</p><p>In a different political moment, this would have triggered a full-blown conservative backlash. Not the culture-war version we have now, but the older instinct to defend hierarchy, friction, and earned authority. If Trumpism had not hollowed out American conservatism and replaced it with grievance and spectacle, I think opposition to AI would look much more ideological and much more organized.</p><p>LLMs flatten status structures that used to feel natural. They short-circuit gatekeeping professions. They dissolve the link between effort and legitimacy. That is exactly the kind of thing traditional conservatism once existed to resist.</p><p>I’ve been seeing the same split play out in software for the last year, and I live firmly on one side of it.</p><p>There is a version of the programmer whose identity is bound up in keystrokes. Code is real if it is typed. Thinking is equated with writing. Assistance feels like contamination. Meaning comes from direct manipulation of symbols, line by line, alone. There must be a correlation with programmers who swear by VIM!</p><p>In contrast, there is the programmer who treats software development as a complex system, not a performance. The work is not the typing. The work is specifying intent, constraints, and taste, then steering the system toward a result. I sense some connection here to those of us who really idealized behavior-driven development approaches, which hinted at a future where the implementation of a system was subordinate to its specification.</p><blockquote>When people say AI-mediated interactions lack soul, they are not making a metaphysical claim. They are describing <em>the absence of a ritual that validates authenticity </em>for them.</blockquote><p>These days I use agentic tools like Claude Code to generate, refactor, test, and explore. 
Not just the codebase, but everything on my project, including what other people are working on. I only intervene in its work where judgment matters. This does not make me feel less like a programmer. It does make me feel more responsible for the outcome, because I can own more of it.</p><p>The topic of AI music makes the same conflict even clearer.</p><p>The monadic artist locates authenticity in effort. Suffering is proof. Labor is the signal. Tools become suspect once they cross an invisible line from assistance into authorship.</p><p>I make music professionally too, and I’ve leaned hard into AI-assisted generation with Suno this year, especially to manifest my original songwriting. Not because I cannot write or produce without it, but because I care more about results than performing the traditional rituals myself.</p><p>This approach is nothing new for me. I have somewhat significant issues with my hearing from decades of loud environments, which means I can’t do a proper mixdown to save my life. So I work with a fine recording engineer to finish my tracks, and there’s no shame in that.</p><p>When it comes to Suno, if I could afford to regularly hire world-class singer-songwriters to collaborate with as vocalists and workshop my lyrics with, then of course I would do so with no shame, either.</p><p>Like many other modern producers, I treat sound as a malleable material. I use synth presets and sampling liberally. I compose through direction, selection, and iteration, often in conjunction with other more talented artists. I do not confuse my own manual time investment with meaning. Maybe I confuse meaning with meaning.</p><p>We have seen this movie before. Sampling. Drum machines. Electronic music. Autotune. Each triggered panic about soul and authenticity. Each time, the art survived. The existing hierarchy did not.</p><p>In programming, in music, and now in relationships, the conflict is the same. 
Tools that collapse coordination costs threaten identities built around isolation, struggle, and suffering.</p><p>When people say AI-mediated interactions lack soul, they are not making a metaphysical claim. They are describing <em>the absence of a ritual that validates authenticity </em>for them. Soul, in this context, means spontaneity under uncertainty. It means acting without a net. It means not knowing what to do and doing it anyway. For me, amongst other things, it means being twice-divorced at 51 years of age.</p><p>Remove that, and for some humans the experience feels hollow even if it works better.</p><p>This exact panic has played out before.</p><p>The printing press triggered the same shape of fear. Dangerous information. Loss of skill. Aesthetic degradation. Unauthorized voices suddenly sounding competent. The complaints were not wrong in the narrow sense. Copying by hand did decline. Bad books proliferated. Misinformation spread faster.</p><p>But the deeper fear was not about quality. It was about legitimacy.</p><p>If anyone could print, how would you know who deserved authority? If literacy spread, how would you preserve the link between effort and worth? If knowledge could be accessed without apprenticeship, what happened to the identities built around mastery through suffering?</p><p>But imagine a world where that opposition won. Printing tightly licensed. Books expensive and rare. Literacy elite-coded. Copying preserved as a revered craft. Culture protected from slop!</p><p>Innovation slows. Power concentrates. Gatekeepers thrive. The aesthetic of effort survives. The benefits of coordination never arrive.</p><p>LLMs sit in the same historical slot. They are not a rupture. They are the latest coordination tool in a long line of them. Spears extended reach. Fire extended digestion. Bicycles extended locomotion. Calculators extended arithmetic. Search engines extended memory.</p><p>Each time, the unit of human value shifted. From strength to aim. 
From recall to judgment. From execution to direction.</p><p>What’s completely new is that LLMs push that shift into cognition itself. But as a parent to Gen Z kids, I understand why they don’t flinch about it. They are already polyadic by default. For better or worse, they were raised with an iPad in their grubby little hands. Their identity has been networked on the socials from the start. Their thinking is externalized via text messages to their friends. Coordination does not threaten their coherence. On the contrary, it’s how they maintain their coherence. To them, using the best available tool to reduce ambiguity is not cheating. It is competence.</p><p>So when it comes down to it, the outrage against AI comes mostly from people whose sense of self depends on friction. Which does not make them wrong. It just makes this conflict run deeper than questions of policy or ethics. Because you ain’t gonna argue someone out of an identity maintenance strategy by citing efficiency gains.</p><p>You can, however, name the tradeoff honestly, which is what inspired me to write this essay.</p><p>LLMs do not remove the human element. They remove society-sanctioned suffering. Sometimes they remove the need to ask for permission. They remove the visible labor that some people rely on to feel real.</p><p>On the other hand, what they leave untouched is authorship. No wait, hear me out! Even using AI, humans are still choosing their goals. They’re still deciding what matters. They still accept or reject the advice they’re given. And most importantly, they still bear the consequences.</p><p>No matter what, the conductor is still on the podium.</p><p>The open question is not whether AI assistance works. It clearly does, in many domains. The question is whether humanity is willing to let go of identities that require confusion, effort, and isolation in order to experience meaning, without thinking it makes us less human.</p><p>History suggests that we will, eventually. 
Not because the new tools are morally superior, but because coordination compounds and ritualized struggle does not.</p><p>Sisyphus can be compelling. But ultimately the conductor wins.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2097f55dd9e" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>