<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Fatma Ali on Medium]]></title>
        <description><![CDATA[Stories by Fatma Ali on Medium]]></description>
        <link>https://medium.com/@fatmali?source=rss-1ef1eb93c4d3------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*2qHKfAwhOr-nVzdM9BjlHg.png</url>
            <title>Stories by Fatma Ali on Medium</title>
            <link>https://medium.com/@fatmali?source=rss-1ef1eb93c4d3------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 02:29:23 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@fatmali/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The End of ‘Just Chat’: Why the Future of AI is Multimodal]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://fatmali.medium.com/the-end-of-just-chat-why-the-future-of-ai-is-multimodal-c2efce8bf7b5?source=rss-1ef1eb93c4d3------2"><img src="https://cdn-images-1.medium.com/max/2600/0*6ICCcNtDP4wGkPay.png" width="2600"></a></p><p class="medium-feed-snippet">We&#x2019;ve mastered &#x2018;prompt engineering.&#x2019; Now it&#x2019;s time for interface engineering. Why the future of AI isn&#x2019;t just smarter chat, it&#x2019;s multimodal</p><p class="medium-feed-link"><a href="https://fatmali.medium.com/the-end-of-just-chat-why-the-future-of-ai-is-multimodal-c2efce8bf7b5?source=rss-1ef1eb93c4d3------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://fatmali.medium.com/the-end-of-just-chat-why-the-future-of-ai-is-multimodal-c2efce8bf7b5?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/c2efce8bf7b5</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[generative-ui]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Fri, 05 Dec 2025 00:00:48 GMT</pubDate>
            <atom:updated>2025-12-06T07:01:02.069Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Why useState is Breaking Your AI App: The Case for State Machines in Complex React Interfaces]]></title>
            <link>https://fatmali.medium.com/why-usestate-is-breaking-your-ai-app-the-case-for-state-machines-in-complex-react-interfaces-1943b649b596?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/1943b649b596</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[xstate]]></category>
            <category><![CDATA[usestate]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[state-machine]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Sun, 19 Oct 2025 00:00:04 GMT</pubDate>
            <atom:updated>2025-11-01T03:48:07.719Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Z0cB6mlU1hIHLAFW.jpg" /></figure><blockquote>Juggling useState booleans for complex UI creates “impossible states” and bugs. Finite state machines enforce exactly-one-state-at-a-time semantics, eliminating this problem. Use useReducer for simple state machines, or <strong>XState</strong> for complex orchestration with guards, timers, and hierarchies.</blockquote><p>You’ve been there. I’ve been there. We’ve all been there. Staring at a React component that started as “just a simple form” and now looks like it needs its own architectural diagram. Your component has a dozen useState hooks at the top, all fighting each other like tabs in your browser:</p><pre>const [isLoading, setIsLoading] = useState(false);<br>const [isStreaming, setIsStreaming] = useState(false);<br>const [isComplete, setIsComplete] = useState(false);<br>const [error, setError] = useState&lt;Error | null&gt;(null);<br>const [data, setData] = useState&lt;string[] | null&gt;(null);<br>const [isRetrying, setIsRetrying] = useState(false);<br>const [showConfetti, setShowConfetti] = useState(false); // Because why not?</pre><p>You tell yourself it’s fine. “It’s manageable,” you whisper, as you write another useEffect to synchronize three of these boolean flags, creating a side effect that will haunt you in your dreams. This isn’t just messy; it’s a breeding ground for what I call <strong>“impossible states.”</strong></p><p>What happens when isLoading and isComplete are both true? Does your UI show a loading spinner <strong>and</strong> the final result? What if error has a value but isLoading is also true? Is it loading or is it an error? Your UI becomes a quantum superposition of confusion, and you’re the unfortunate physicist tasked with observing it into a non-buggy state.</p><h3>The AI Broke My Booleans</h3><p>Now, let’s throw a modern AI-powered feature into the mix. 
Your simple data fetch is now a generative UI that streams responses from a large language model. The state diagram in your head, which used to be a linear flow, now has branching paths, error recovery, cancellation states, and retry logic.</p><p>You’re not just fetching data anymore. You are:</p><p>1. 🙂 <strong>Idle</strong>: Waiting for a user prompt.</p><p>2. 🔄 <strong>Submitting</strong>: Sending the prompt to the backend.</p><p>3. 🤔 <strong>Waiting for stream</strong>: The server has acknowledged the request, but the first token hasn’t arrived.</p><p>4. 𝌗 <strong>Streaming</strong>: Receiving the response, token by token.</p><p>5. ✅ <strong>Success</strong>: The stream has finished, and the full response is displayed.</p><p>6. ⛔️ <strong>Error</strong>: Something, somewhere, went horribly wrong. Maybe the AI is having an existential crisis. Maybe you forgot an API key.</p><p>How do you model this with useState?</p><pre>const [isSubmitting, setIsSubmitting] = useState(false);<br>const [isStreaming, setIsStreaming] = useState(false);<br>const [isError, setIsError] = useState(false);<br>// … and so on, and so on.</pre><p>You are now manually choreographing a ballet of booleans. setIsSubmitting(true), then setIsSubmitting(false) and setIsStreaming(true) in the same function. It’s fragile. It’s imperative. It’s a bug waiting to happen. You’ve created a system where isSubmitting and isStreaming can be true at the same time, an impossible state that your UI has no idea how to render.</p><p>This is the moment of truth for many engineers: the realization that <strong>managing state with a loose collection of booleans is fundamentally fragile in complex systems.</strong></p><h3>State Machines 101: What They Are and Why They Matter</h3><p>A <strong>finite state machine (FSM)</strong> is a computational model that can be in exactly one of a finite number of states at any given time. Think of it like a flowchart with strict rules:</p><p>1. 
<strong>States</strong>: A defined set of conditions your system can be in (e.g., idle, loading, success, error)</p><p>2. <strong>Transitions</strong>: Allowed movements between states, triggered by events (e.g., SUBMIT event moves from idle to loading)</p><p>3. <strong>Guards</strong>: Conditions that must be met for a transition to occur (e.g., can’t submit if the form is invalid)</p><p>4. <strong>Actions</strong>: Side effects that occur during transitions (e.g., clear error message when retrying)</p><p>The key insight: <strong>you can only be in one state at a time</strong>. No overlapping, no ambiguity.</p><p>Imagine a traffic light:</p><pre>const trafficLightMachine = {<br>  initial: &#39;red&#39;,<br>  states: {<br>    red: { on: { TIMER: &#39;green&#39; } },<br>    yellow: { on: { TIMER: &#39;red&#39; } },<br>    green: { on: { TIMER: &#39;yellow&#39; } }<br>  }<br>};</pre><p>The light can’t be both red and green. The only valid transitions are defined. This is the power of explicit state modeling.</p><h3>State Machines + The Actor Model: A Powerful Blend</h3><p>You might have heard of the <strong>Actor Model</strong> — a concurrent computation model where “actors” are independent entities that communicate via messages. Here’s the key insight: <strong>these aren’t competing concepts, they’re complementary</strong>.</p><p>Modern state machine frameworks like XState blend both paradigms:</p><p>- <strong>State Machines</strong>: Model deterministic state transitions within each actor (what states can it be in, what transitions are valid)</p><p>- <strong>Actor Model</strong>: Enable multiple state machines to run independently and communicate via events (spawning children, sending messages between machines)</p><p>Think of it this way: each actor <strong>is</strong> a state machine. 
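</p><p>Combining the two ideas, an actor’s internal machine boils down to a pure transition function over a lookup table. Here is a minimal, dependency-free TypeScript sketch of the traffic light above (illustrative only, not XState’s API), where events with no defined transition simply leave the state unchanged:</p>

```typescript
type LightState = 'red' | 'yellow' | 'green';
type LightEvent = 'TIMER';

// Same table as the trafficLightMachine object above: for each state,
// which event leads to which next state.
const transitions: Record<LightState, Partial<Record<LightEvent, LightState>>> = {
  red: { TIMER: 'green' },
  green: { TIMER: 'yellow' },
  yellow: { TIMER: 'red' },
};

// Pure transition function: undefined transitions are ignored, so the
// machine can never wander into an undefined ("impossible") state.
function transition(state: LightState, event: LightEvent): LightState {
  return transitions[state][event] ?? state;
}

// Each "actor" owns its own copy of the state; two lights never interfere.
let lightA: LightState = 'red';
const lightB: LightState = 'red';
lightA = transition(lightA, 'TIMER');
console.log(lightA, lightB); // green red
```

<p>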
The state machine defines its internal behavior, while the actor model defines how multiple machines coordinate.</p><h3>Building State Machines in React</h3><p>Now that we understand what state machines are, let’s see how to actually implement them in React. We’ll start simple and progressively handle more complexity.</p><h3>AI Streaming UX</h3><p>Before we dive into solutions, let’s be clear about what we’re building. What seems like a simple “type a prompt, get a response” feature actually has <strong>challenges</strong> that break simple state management:</p><pre>idle → connecting → streaming → complete<br>           ↓            ↓<br>           └──→ error ←─┘<br>                  ↓<br>            (retry logic)</pre><p>Your useReducer machine, which happily handled loading and success, suddenly feels inadequate. Race conditions, cancellation, retries, and partial responses are all production scenarios you need to handle.</p><h3>Solution 1: Start with useReducer</h3><p>For simple to moderate state machine needs, React’s built-in useReducer is your friend. Instead of scattered useState calls, you define a single state type and explicit transitions:</p><pre>type State =<br>  | { status: &#39;idle&#39; }<br>  | { status: &#39;loading&#39; }<br>  | { status: &#39;success&#39;, data: string[] }<br>  | { status: &#39;error&#39;, message: string };<br><br>type Action =<br>  | { type: &#39;SUBMIT&#39; }<br>  | { type: &#39;SUCCESS&#39;, data: string[] }<br>  | { type: &#39;ERROR&#39;, message: string }<br>  | { type: &#39;RESET&#39; };<br><br>function reducer(state: State, action: Action): State {<br>  switch (state.status) {<br>    case &#39;idle&#39;:<br>      return action.type === &#39;SUBMIT&#39; ? { status: &#39;loading&#39; } : state;<br>    case &#39;loading&#39;:<br>      if (action.type === &#39;SUCCESS&#39;) return { status: &#39;success&#39;, data: action.data };<br>      if (action.type === &#39;ERROR&#39;) return { status: &#39;error&#39;, message: action.message };<br>      return state;<br>    case &#39;success&#39;:<br>    case &#39;error&#39;:<br>      return action.type === &#39;RESET&#39; ? 
{ status: &#39;idle&#39; } : state;<br>  }<br>}</pre><p>This is already a massive improvement:</p><p>- Impossible states are <strong>structurally impossible</strong></p><p>- State transitions are explicit and centralized</p><p>- TypeScript ensures you handle all cases</p><p>- The reducer documents your component’s behavior</p><p><strong>When to use useReducer:</strong></p><p>- Simple async flows (fetch → loading → success/error)</p><p>- Form wizards with sequential steps</p><p>- UI state that doesn’t need timers or complex side effects</p><p>But what about our complex AI streaming scenario?</p><p><strong>Where useReducer Falls Short</strong></p><p>Our AI streaming component has requirements that useReducer struggles with:</p><p>- <strong>Race condition prevention</strong>: Can’t start a new stream while another is active (need to cancel first)</p><p>- <strong>Guards</strong>: Can’t retry if max attempts reached; can’t start if prompt is empty</p><p>- <strong>Entry/exit actions</strong>: Clear streamed text when starting, preserve it on error for display</p><p>- <strong>Complex conditional logic</strong>: Different error recovery paths (network vs auth vs stream)</p><p>You could model this with useReducer + useEffect, but now you’re manually managing:</p><p>- Abort controllers and cleanup in useEffect for race conditions</p><p>- Manual state checking before every transition</p><p>- Synchronization between reducer state and async operations</p><p>- A reducer that becomes a tangled mess of nested switch statements with validation</p><p>You’re back to the same complexity problem, just in a different form.</p><h3>Solution 2: XState for Complex Orchestration</h3><p>This is the point where you’re not just managing state; you’re orchestrating a complex user experience. And for that, you need a tool designed for orchestration.</p><p>Enter <strong>XState</strong>. XState isn’t just a state management library; it’s a framework for building <strong>statecharts</strong>. 
A statechart extends finite state machines with hierarchical (nested) states, parallel states, guarded transitions, and entry/exit actions — critical features for modeling complex async flows like AI streaming.</p><p>Key capabilities relevant to our real-world AI challenges:</p><p>- <strong>Guards (Conditional Transitions)</strong>: Prevent race conditions (block START while streaming, block RETRY if max attempts reached).</p><p>- <strong>Deterministic Transitions</strong>: Given current state + event, the next state is always predictable (no isLoading &amp;&amp; isRetrying ambiguity).</p><p>- <strong>Entry/Exit Actions</strong>: Reset or preserve context cleanly when states change (clear text on start, preserve on error).</p><p>- <strong>Unified Context</strong>: One source of truth for tokens, retries, errors, partial responses — atomic updates, no desync.</p><p>- <strong>Explicit Error Paths</strong>: Network failures, stream errors, and cancellations have defined recovery paths.</p><p>Here’s a sample state machine for AI streaming:</p><pre>import { createMachine, assign } from &#39;xstate&#39;;<br><br>const aiStreamingMachine = createMachine({<br>  id: &#39;aiStreaming&#39;,<br>  initial: &#39;idle&#39;,<br>  context: {<br>    prompt: &#39;&#39;,<br>    streamedText: &#39;&#39;,<br>    tokens: 0,<br>    retryCount: 0,<br>    error: null,<br>    abortController: null<br>  },<br>  states: {<br>    idle: {<br>      on: {<br>        START: {<br>          target: &#39;connecting&#39;,<br>          guard: ({ context }) =&gt; context.prompt.trim().length &gt; 0,<br>          actions: assign({<br>            abortController: () =&gt; new AbortController(),<br>            error: null<br>          })<br>        },<br>        UPDATE_PROMPT: {<br>          actions: assign({ prompt: ({ event }) =&gt; event.value })<br>        }<br>      }<br>    },<br>    connecting: {<br>      invoke: {<br>        src: &#39;connectToStream&#39;,<br>        onDone: { target: &#39;streaming&#39; },<br>        onError: {<br>          target: &#39;error&#39;,<br>          actions: 
assign({<br>            error: ({ event }) =&gt; event.error.message<br>          })<br>        }<br>      },<br>      on: {<br>        CANCEL: {<br>          target: &#39;cancelled&#39;,<br>          actions: &#39;abortConnection&#39; // cleanup action<br>        }<br>      }<br>    },<br>    streaming: {<br>      entry: assign({ streamedText: &#39;&#39;, tokens: 0 }), // clear on entry<br>      on: {<br>        CHUNK_RECEIVED: {<br>          // PARALLEL CONTEXT UPDATES: atomic updates, no desync<br>          actions: assign({<br>            streamedText: ({ context, event }) =&gt; context.streamedText + event.chunk,<br>            tokens: ({ context }) =&gt; context.tokens + 1<br>          })<br>        },<br>        STREAM_COMPLETE: { target: &#39;complete&#39; },<br>        STREAM_ERROR: {<br>          target: &#39;error&#39;,<br>          // PRESERVE PARTIAL DATA: keep what we received<br>          actions: assign({<br>            error: ({ event }) =&gt; event.message<br>            // Note: streamedText NOT cleared, preserved for user<br>          })<br>        },<br>        CANCEL: {<br>          target: &#39;cancelled&#39;,<br>          actions: &#39;abortStream&#39;<br>        }<br>      }<br>    },<br>    complete: {<br>      on: {<br>        REGENERATE: {<br>          target: &#39;connecting&#39;,<br>          actions: assign({ retryCount: 0 }) // reset retry count<br>        },<br>        RESET: {<br>          target: &#39;idle&#39;,<br>          actions: assign({<br>            prompt: &#39;&#39;,<br>            streamedText: &#39;&#39;,<br>            tokens: 0,<br>            retryCount: 0<br>....</pre><p>Compare this to managing the same logic with scattered useState hooks and conditional useEffect cleanup. 
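</p><p>The guards in the machine above are just predicates over the current state and context. A dependency-free sketch of the two most important ones (hypothetical helper names, shown outside XState for clarity):</p>

```typescript
type Status = 'idle' | 'connecting' | 'streaming' | 'complete' | 'error' | 'cancelled';

interface StreamCtx {
  prompt: string;
  retryCount: number;
  maxRetries: number;
}

// START is only allowed from idle with a non-empty prompt, which rules
// out the race condition of starting a second stream mid-flight.
const canStart = (status: Status, ctx: StreamCtx): boolean =>
  status === 'idle' && ctx.prompt.trim().length > 0;

// RETRY is only allowed from error and below the retry cap.
const canRetry = (status: Status, ctx: StreamCtx): boolean =>
  status === 'error' && ctx.retryCount < ctx.maxRetries;

const ctx: StreamCtx = { prompt: '', retryCount: 3, maxRetries: 3 };
console.log(canStart('streaming', ctx)); // false: a stream is already active
console.log(canStart('idle', ctx));      // false: empty prompt
console.log(canRetry('error', ctx));     // false: max attempts reached
```

<p>In XState these become guard functions attached to transitions, so the rejection happens declaratively instead of via ad hoc if checks scattered through event handlers.</p><p>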
The machine is living documentation of your app’s behavior.</p><h3>The Right Tool for the Job</h3><p>- <strong>useState</strong>: Simple, independent values (form inputs, toggles)</p><p>- <strong>useReducer</strong>: Local state machines with moderate complexity</p><p>- <strong>Redux/Zustand/Jotai</strong>: Shared application state across components</p><p>- <strong>XState</strong>: Complex state orchestration with guards, timers, hierarchies, and visual debugging</p><p>For our AI streaming component, the complexity is in <strong>state transitions and orchestration</strong>, not in sharing data globally.</p><h3>Try the Interactive Demo</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fcodesandbox.io%2Fembed%2Fmd3rg4%3Fview%3Deditor%2B%252B%2Bpreview&amp;display_name=CodeSandbox&amp;url=https%3A%2F%2Fcodesandbox.io%2Fs%2Fmd3rg4&amp;image=https%3A%2F%2Fcodesandbox.io%2Fapi%2Fv1%2Fsandboxes%2Fmd3rg4%2Fscreenshot.png&amp;type=text%2Fhtml&amp;schema=codesandbox" width="1000" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/4152feb036e5f95eec0e49638496d9f2/href">https://medium.com/media/4152feb036e5f95eec0e49638496d9f2/href</a></iframe><h3>Performance Considerations</h3><p>Before you refactor your entire codebase to use state machines, let’s address the elephant in the room: <strong>performance</strong>.</p><h3>The Overhead Question</h3><p>State machines add a layer of abstraction, and abstraction has cost. Let’s be honest about the tradeoffs:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q-4XO5LCXKCOMOJiK7WvmA.png" /></figure><h3>When Performance Actually Matters</h3><p>For <strong>most applications</strong>, XState’s overhead is imperceptible. 
But there are edge cases:</p><p><strong>❌ Avoid XState for:</strong></p><p>- High-frequency updates (60fps animations, mouse tracking, canvas interactions)</p><p>- Hundreds of simultaneously active machines in a single view</p><p>- Ultra-lightweight components (simple toggles, accordions)</p><p><strong>✅ XState Shines for:</strong></p><p>- Complex async orchestration (like our AI streaming example)</p><p>- Infrequent but critical state transitions (checkout flows, multi-step forms)</p><p>- Features where <strong>correctness &gt; raw speed</strong> (payment processing, data submission)</p><h3>Optimization Strategies</h3><p>If you adopt XState and need to squeeze out performance:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3bK64d_Uc9FDyQCuptLdkA.png" /></figure><h3>The Real Cost: Maintenance vs. Performance</h3><p>Here’s the uncomfortable truth: <strong>premature optimization kills more projects than slow code</strong>.</p><p>A state machine that’s 2ms slower but prevents 10 hours of debugging impossible states is a <strong>massive</strong> win. The question isn’t “Is XState slower than useState?” It’s “What’s the cost of shipping buggy state management?”</p><h3>The Bottom Line</h3><p>Don’t choose state machines for performance. Choose them for <strong>correctness, maintainability, and developer experience</strong>. If you later discover a performance bottleneck, you can optimize selectively or replace specific hot paths.</p><p>But start with the right abstraction. 
Premature optimization is the root of all evil, but so is choosing useState for a problem that demands state machine semantics.</p><p><em>Originally published at </em><a href="https://fatmaali.dev/blog/Why-useState-is-Breaking-Your-AI-App-The-Case-for-State-Machines-in-Complex-React-Interfaces/"><em>https://fatmaali.dev</em></a><em> on October 19, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1943b649b596" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[5 Open-Source MCP Servers That Actually 10x Your GitHub Copilot Workflow]]></title>
            <link>https://fatmali.medium.com/5-open-source-mcp-servers-that-actually-10x-your-github-copilot-workflow-49cdabcc432b?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/49cdabcc432b</guid>
            <category><![CDATA[mcp-server]]></category>
            <category><![CDATA[code]]></category>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[productivity]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Sun, 28 Sep 2025 00:00:48 GMT</pubDate>
            <atom:updated>2025-09-29T16:39:56.344Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XfEXIaRiEdoG29K5.jpg" /></figure><p><em>Originally published at </em><a href="https://www.fatmaali.dev/blog/5-free-open-source-mcp-servers-that-actually-10x-your-workflow-no-secrets-leaked/"><em>https://www.fatmaali.dev</em></a></p><p>GitHub Copilot doesn’t need to be smarter. It needs to be plugged in. That’s what free, open-source MCP servers do: they turn Copilot from “autocomplete with swagger” into a teammate that can actually fetch live docs, query your DB, automate your browser, and remember your coding quirks.</p><p>I’ve tested dozens of MCP servers. Most just duplicate what Copilot already does well (reading your open files, writing code). The 5 servers below are the best so far:</p><ul><li><strong>Chroma MCP</strong>: Gives Copilot long-term memory of your decisions and architecture</li><li><strong>Context7 MCP</strong>: Keeps external documentation always fresh and accurate</li><li><strong>Task Master MCP</strong>: Structured AI-driven task planning &amp; execution from PRDs</li><li><strong>browser-use MCP</strong>: Automates repetitive browser tasks and data extraction</li><li><strong>Knowledge Graph Memory</strong>: Structured graph of entities, relations &amp; lessons: persistent contextual and error memory beyond raw vectors</li></ul><p>Let’s see how each one transforms your daily workflow.</p><h3>1. Chroma</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lj6lWG5PiG6-nBigqVLEhg.png" /></figure><blockquote><strong>The Problem</strong>: Six months ago, your team made a crucial architectural decision about payment retry logic. 
The reasoning was solid, discussed in depth, but now it’s scattered across PR comments, Slack threads, and meeting notes.</blockquote><p><strong>How Chroma MCP Solves It</strong>: Chroma creates a searchable semantic database of your team’s knowledge: README files, Architecture Decision Records (ADRs), design docs, and important discussions.</p><p>Install it from <a href="https://github.com/chroma-core/chroma-mcp">https://github.com/chroma-core/chroma-mcp</a></p><h3>2. Context7</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KMZIL4G93ORyEN4bTbZe8Q.png" /></figure><p><strong>The Problem</strong>: You bookmark the Prisma documentation, but three weeks later, the API you’re using has been deprecated. The blog post you saved about React best practices is from 2022. Your knowledge goes stale fast.</p><p><strong>How Context7 MCP Solves It</strong>: Context7 automatically fetches the latest version of any external documentation and keeps it synchronized. No more outdated information.</p><p>Install it from <a href="https://github.com/upstash/context7">https://github.com/upstash/context7</a></p><h3>3. TaskMaster</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zKf9KCEAAGMThYBI805ctw.png" /></figure><p><strong>The Problem</strong>: Turning a Product Requirements Doc (PRD) into well‑scoped, dependency‑aware implementation tasks takes time. Work drifts from the original intent, priorities become unclear, and developers constantly ask, <em>“What’s the next actionable thing?”</em></p><p><strong>How Task Master Solves It</strong>: It ingests a PRD and generates a structured tasks.json (tasks, subtasks, dependencies, priority, test strategy). 
Through MCP you can ask natural language questions (&quot;What&#39;s next?&quot;, &quot;Expand task 5&quot;, &quot;Move 5.2 under 7&quot;) and it maps them to deterministic CLI operations, keeping planning, execution, and refactoring of tasks inside your editor.</p><p>Install from <a href="https://github.com/eyaltoledano/claude-task-master">https://github.com/eyaltoledano/claude-task-master</a></p><h3>4. browser-use</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DX1FziSf176PN6HRTGz1Pw.png" /></figure><blockquote><strong>The Problem</strong>: You need to check analytics dashboards, export data, or perform the same multi-step browser workflow every week. It’s tedious and error-prone.</blockquote><p><strong>How browser-use MCP Solves It</strong>: It can automate browser interactions: logging into systems, navigating to dashboards, extracting data, and returning structured results.</p><p>Install from <a href="https://docs.browser-use.com/customize/mcp-server">https://docs.browser-use.com/customize/mcp-server</a></p><h3>5. Knowledge Graph</h3><blockquote><strong>The Problem</strong>: Your team keeps re‑diagnosing the same build, dependency, and environment errors; architectural intent erodes; and Copilot can’t surface prior reasoning because it isn’t stored in a structured, queryable form.</blockquote><p><strong>How It Solves It</strong>: A local knowledge graph that stores:</p><p>- <strong>Entities</strong> (people, services, domains, features)</p><p>- <strong>Relations</strong> (“service_A depends_on service_B”, “job_X publishes_to queue_Y”)</p><p>- <strong>Observations</strong> (atomic facts: “Rollout uses canary: true”)</p><p>- <strong>Lessons</strong> (error pattern + verified resolution + success rate tracking)</p><p>Unlike plain embedding memory, lessons capture error fingerprints (type, message, context) plus evolving remediation steps and verification commands. 
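</p><p>As a rough sketch, a lesson record carrying exactly those fields might look like this (a hypothetical shape for illustration; the server’s real schema may differ):</p>

```typescript
// Hypothetical shape of a stored lesson; field names and sample data
// are illustrative, not the memory server's actual schema.
interface Lesson {
  fingerprint: { type: string; message: string; context: string };
  remediationSteps: string[];
  verificationCommand: string;
  successCount: number;
  failureCount: number;
}

const lesson: Lesson = {
  fingerprint: {
    type: 'TimeoutError',
    message: 'Playwright timeout in headless mode',
    context: 'CI, macOS runner',
  },
  remediationSteps: [
    'Increase the navigation timeout for headless runs',
    'Re-run the affected spec in isolation',
  ],
  verificationCommand: 'npx playwright test',
  successCount: 4,
  failureCount: 1,
};

// An effectiveness score derived from success/failure feedback.
const effectiveness = lesson.successCount / (lesson.successCount + lesson.failureCount);
console.log(effectiveness.toFixed(2)); // "0.80"
```

<p>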
Success/failure feedback updates the lesson’s effectiveness score.</p><p><strong>Real Scenario</strong>: A recurring CI failure: “Playwright timeout in headless mode on macOS runners”. Instead of re‑searching, you ask: <em>“Find similar errors and show the highest success‑rate fix.”</em> The server returns a prior lesson with the exact environment nuance and validated mitigation steps.</p><p>Install from: <a href="https://github.com/modelcontextprotocol/servers/tree/main/src/memory">https://github.com/modelcontextprotocol/servers/tree/main/src/memory</a></p><h3>Now, think of the flow:</h3><ol><li>Discover broadly with Chroma (“What did we discuss about circuit breakers?”).</li><li>Distill durable facts (decision, constraint, error fingerprint) → promote into graph as an entity/observation/lesson.</li><li>During a future incident: query graph first for precise, curated remediation; fall back to vector search if no lesson exists.</li></ol><p>The magic happens when you use multiple MCP servers together. Here’s how to roll them out safely:</p><ol><li><strong>Start with local servers</strong> (Chroma, DB Introspection, Knowledge Graph Memory)</li><li><strong>Add network servers gradually</strong> for specific use cases</li><li><strong>Review what data each server accesses</strong> before installation</li><li><strong>Test with non-sensitive data</strong> first</li></ol><p>The best part? All of these tools are <strong>free and open-source</strong>. 
No subscriptions, no API limits, no vendor lock-in.</p><p><strong>Your 5-minute action plan:</strong></p><ol><li>Install Chroma MCP and embed your README</li><li>Set up Context7 for your main framework</li><li>Ask Copilot one question that uses both</li><li>Experience the difference</li></ol><p>Once you see the power of having instant access to both your team’s knowledge and fresh external documentation, you’ll wonder how you ever worked without it.</p><p><strong>What will you automate first?</strong> Share your MCP setup on Twitter or in the comments below — I love seeing creative combinations!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=49cdabcc432b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Use Context and Custom Hooks to share user state across your React App]]></title>
            <link>https://fatmali.medium.com/use-context-and-custom-hooks-to-share-user-state-across-your-react-app-ad7476baaf32?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/ad7476baaf32</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[context-api]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Sat, 20 Mar 2021 11:44:57 GMT</pubDate>
            <atom:updated>2021-07-12T04:15:16.337Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PBy_t4WJMfQwGMc6zupeqg.jpeg" /></figure><p>The hooks and context API changed the way we build React apps forever. The new APIs let us build components in a more functional, reusable way. In this blog, we’ll see how to use hooks and context to share the user object across the app.</p><h4>Note: Before we start</h4><p>As the original <a href="https://reactjs.org/docs/context.html#when-to-use-context">docs</a> say:</p><blockquote>Context is primarily used when some data needs to be accessible by <em>many</em> components at different nesting levels. <strong>If you only want to avoid passing some props through many levels, </strong><a href="https://reactjs.org/docs/composition-vs-inheritance.html"><strong>component composition</strong></a><strong> is often a simpler solution than context.</strong></blockquote><p>Additionally, I’m going to assume that you have an already running React app, so we’re going to work with some high-level methods to shorten the blog.</p><h4>Why use Context for this use-case, and are there alternatives?</h4><p>In some apps, we find the need to have access to the user object in many deeply nested components. Previously, sharing an object across many parts of the app meant reaching for a global state management library like <a href="https://redux.js.org/"><em>redux</em></a>. 
Even worse, prop drilling was used to pass props down to child components.</p><p>Many opt for the inbuilt Context API for its simplicity, and the fact that it comes with React so you don’t need to install a third-party library.</p><h4>Defining the context</h4><p>Let’s consider the example below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N8gAskPaBTKdW5rNezarAg.png" /></figure><p>We are declaring the context in the UserContext variable, then making it accessible throughout our app with UserContext.Provider. Pretty straightforward, right?</p><p>However, this solution is limiting because apps are usually more complicated than this. Often we have routing in place, and we’d want some routes protected and some open.</p><h4>Extending the implementation</h4><p>First, let’s move the context into its own file for better readability.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SCCPcbHDxj1eR_ztNq5vHg.png" /><figcaption>Defining the context</figcaption></figure><p>Here, we’ve created a wrapper to handle our user context logic. We’ve also renamed it to AuthContext. The AuthContext now provides the current user, as well as the setter for the current user. Awesome, right?</p><p>Okay, let’s define a simple custom hook to consume our context in the child components.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hdRXvBFAbcfHezRz5k-BHg.png" /><figcaption>Defining a simple custom hook to consume the context</figcaption></figure><h4>That’s it! So let’s consume it.</h4><p>First off, we would have to provide the context in our application root as shown below. You can see that we’re getting our user object from sessionStorage. We’re assuming that she logged in earlier and we persisted her details in session storage. This is done to prevent us from losing her user info once she refreshes the page, forcing her to log in again.</p><p>There are many ways to do this, each with pros and cons. 
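</p><p>Since the screenshots above aren’t copyable, here is a rough sketch of that bootstrapping step (hypothetical names and a guarded read, so it also behaves outside the browser):</p>

```typescript
interface User {
  id: string;
  name: string;
}

// Minimal structural type so this compiles without the DOM lib.
type StorageLike = { getItem(key: string): string | null };

// Read the persisted user back out of session storage on app start,
// so a page refresh doesn't force another login.
function getInitialUser(): User | null {
  const store = (globalThis as unknown as { sessionStorage?: StorageLike }).sessionStorage;
  if (!store) return null; // no browser storage available
  const raw = store.getItem('user');
  if (raw === null) return null; // never logged in (or session expired)
  try {
    return JSON.parse(raw) as User;
  } catch {
    return null; // corrupted entry: treat as logged out
  }
}
```

<p>The provider would then seed its state with something like useState(getInitialUser()).</p><p>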
For now, let’s use this as just an example.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rVv1T5LWH3SZlUzwqQF7bw.png" /></figure><h4>Consuming the context</h4><p>Assuming you’re using <a href="https://reactrouter.com/">react-router</a>, we could define our routes like so:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XWWXnXFL-h9o4CvKSCyZLw.png" /></figure><p>We’ve defined different routes for our app; some are protected while some are not. For the protected routes, you can see that we have defined a wrapper component called &lt;ProtectedRoute&gt;. The implementation of &lt;ProtectedRoute&gt; is shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*L-AVAy2jwCKgiDtVvDGUAQ.png" /><figcaption>Defining routes</figcaption></figure><p>So far so good! Lastly, let’s see how we can implement and consume the authentication context within our login component.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dlUXv6mIBLJq8qEMv9FQfw.png" /><figcaption>How to consume context in the user sign-in</figcaption></figure><p>Finally, let’s assume that we need to implement the ability to log out within a nested child component inside our app. What would that implementation look like?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lbHasMlckpXXpysb1eyajg.png" /><figcaption>Sign-out implementation</figcaption></figure><p>And that’s it! You can extend the context and add more values as you need, but this pretty much enables you to share the current user state across the app. Feel free to drop questions/suggestions below! Thanks for reading!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ad7476baaf32" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Diary of a Crazy Software Engineer]]></title>
            <link>https://medium.com/series/diary-of-a-crazy-software-engineer-c02307a88585?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/c02307a88585</guid>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Thu, 30 Apr 2020 11:15:26 GMT</pubDate>
            <atom:updated>2020-04-30T11:19:36.967Z</atom:updated>
            <content:encoded><![CDATA[<h4>Intro</h4><p>Spring 2020. The birds are singing. The earth is replenishing its beauty with lush greenery. Everything is perfect, well, except for a global pandemic that’s confined human beings to their homes.</p><p><strong>What to do?</strong></p><p>During the coronavirus pandemic of 2020, most of us suddenly find ourselves at home with a lot of time on our hands, food in the fridge, Netflix, and in 90% of the cases, no bae (thank you social distancing). Unfortunately and fortunately for (dark-themed, hooded and introverted) software engineers like myself, life is the same. We just can’t get the best coffee anymore from that little coffee shop in the neighborhood, but that’s fine.</p><p>I’ve spent literally the last seven weeks like a zombie: work, Netflix, eat, and repeat. Frankly, this is the life I thought I always wanted, not having to leave the house. But it gets boring too. So I decided to write a diary about some of my experiences as a software engineer. Some crazy, some fun and some not so pretty. I mean, why not?</p><p>In the next few posts, I’ll be talking about the (rough) journey, the panic attacks, the craziest bugs ever (literally), the changes and evolution that engineering brought to me as a person. My goal is to make software engineers, or anyone starting out in a career, understand how hard and rewarding it is to be passionate, and most of all, yourself. I’m not the type that writes a lot, but I’m hoping this can be the beginning of my best-selling book. Haha, just kidding. Hope you, whoever you are, enjoy this :-)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c02307a88585" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Moving from Cron To Apache Airflow]]></title>
            <link>https://fatmali.medium.com/moving-from-cron-to-apache-airflow-ac73007aa28e?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/ac73007aa28e</guid>
            <category><![CDATA[cron]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[airflow]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Fri, 06 Sep 2019 09:43:54 GMT</pubDate>
            <atom:updated>2019-09-10T19:10:56.085Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/377/1*3uMIjsSfykHfb11KReKBXg.jpeg" /><figcaption>It’s time to say goodbye, Cron</figcaption></figure><p>We have all been there: tens or hundreds of cron jobs running, and you’re (or your boss is) pulling your hair out trying to figure out why a critical job didn’t run last night. You’re puzzled and confused. Didn’t it run? Did it fail? Why could it have failed?</p><p>At AMPATH, our reliance on cron jobs to schedule <a href="https://www.webopedia.com/TERM/E/ETL.html"><strong>ETL</strong></a> jobs was becoming increasingly untenable. Our ETL processes involve first denormalizing and flattening data, then using the denormalized data to build calculated tables. The calculated tables then produce real-time reports and decision support systems for the Ministry of Health and clinicians respectively. In the beginning, using cron jobs was a simple and effective way to execute the jobs.</p><p>The cron workflow was as follows:</p><ol><li>Run the denormalizing jobs every second minute of the hour.</li><li>Run the calculated tables jobs every fifth minute, with the <em>hope</em> that the previous jobs would take only three minutes to complete.</li></ol><p>Over time, as data and the demand for more reports increased, it became difficult to estimate how long a particular job would take, and more often than not we ran into lock wait timeouts and deadlocks. Cron had reached its limit. We needed a way to:</p><ol><li>Handle complex relationships between jobs.</li><li>Manage all the jobs centrally with a well-defined user interface.</li><li>Report and alert on errors.</li><li>View and analyze job run times.</li><li>Protect database credentials.</li></ol><h3>Enter Apache Airflow</h3><p>Upon searching for solutions to improve or replace our ETL workflow, I stumbled upon an open source tool, Apache Airflow. 
Airflow’s <a href="https://en.wikipedia.org/wiki/Directed_acyclic_graph"><strong>Directed Acyclic Graph</strong></a> (DAG) concept offered a way to build and schedule complex and dynamic data pipelines in an extremely <strong>simple</strong>, <strong>testable</strong> and <strong>scalable</strong> way. And I freaked out! This is exactly what we had been looking for.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*nwDLdQgkFy6ZVt3GN8PdRQ.gif" /></figure><p>We quickly set up Airflow using docker-compose like <a href="https://github.com/AMPATH/etl/blob/master/docker-compose.yaml"><strong>this</strong></a>, and started transforming some of our main ETL jobs into Python code for Airflow. Here is the GitHub <a href="https://github.com/AMPATH/etl"><strong>repo</strong></a> for the scripts.</p><h3>Airflow Concepts</h3><p>We are going to cover some of the basic concepts to get you started with Airflow. For more detailed documentation, head over to the official docs.</p><h4><strong>DAGs and Operators</strong></h4><p>In Airflow, all workflows are DAGs. You can think of a DAG as a set of tasks with some sort of relationship between them. This is what a DAG looks like in the Airflow Graph View:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/751/1*QBAZe6e3t9K6gIKzHbLGTQ.png" /><figcaption>An example DAG</figcaption></figure><p>A <strong>DAG</strong> usually has a schedule, a start time and a unique ID. The tasks inside a DAG are made of operators. <a href="https://airflow.apache.org/concepts.html#concepts-operators"><strong>Operators</strong></a> define what actually runs for a particular task. 
Examples of Operators in Airflow include:</p><p>BashOperator: To execute shell commands/scripts.</p><p>PythonOperator: To execute Python code.</p><p>You can define a simple DAG that prints out ‘Hello World!’ every 10 minutes like this:</p><pre><em>from</em> <strong>datetime</strong> <em>import</em> <strong>datetime<br></strong><em>from</em> <strong>airflow.models</strong> <em>import</em> <strong>DAG<br></strong><em>from</em> <strong>airflow.operators.bash_operator</strong> <em>import</em> <strong>BashOperator</strong></pre><pre><strong>default_args</strong> = {<br>&#39;owner&#39;: &#39;airflow&#39;,<br>&#39;email&#39;: [&#39;fali@ampath.or.ke&#39;],<br>&#39;email_on_failure&#39;: True,<br>&#39;email_on_retry&#39;: True,<br>&#39;email_on_success&#39;: False,<br>&#39;start_date&#39;: datetime(2019, 5, 31)<br>}</pre><pre><strong>dag</strong> = <strong>DAG</strong>(<br>    dag_id=&#39;hello_world_dag&#39;,<br>    default_args=default_args,<br>    schedule_interval=&#39;*/10 * * * *&#39;)</pre><pre><strong>task1</strong> = <strong>BashOperator</strong>(<br>           task_id=&quot;echo_hello_world&quot;,<br>           bash_command=&#39;echo Hello World!&#39;,<br>           dag=<strong>dag</strong>)</pre><h4>Extending Operators</h4><p>What’s even more exciting is that you can extend or create custom operators if none of the inbuilt operators match your needs. 
Here’s an example of how I extended the inbuilt MySQL Operator to return the results of a statement after execution:</p><pre><em>from</em> airflow.hooks.mysql_hook <em>import</em> <strong>MySqlHook</strong><br><em>from</em> airflow.operators.mysql_operator <em>import</em> <strong>MySqlOperator</strong></pre><pre><strong>class</strong> CustomMySqlOperator(<strong>MySqlOperator</strong>):<br>    <strong>def</strong> execute(self, context):<br>      <em>self</em>.log.info(&#39;Executing: %s&#39;, <em>self</em>.sql)<br>      hook = <strong>MySqlHook</strong>(<br>             mysql_conn_id=<em>self</em>.mysql_conn_id,<br>             schema=<em>self</em>.database)<br>      <em>return</em> hook.get_records(<em>self</em>.sql, parameters=<em>self</em>.parameters)</pre><pre>....</pre><pre><br># Use the custom operator in one of your tasks</pre><pre>task = <strong>CustomMySqlOperator</strong>(<br>          task_id=&#39;custom_mysql_task&#39;,<br>          sql=&#39;select * from person;&#39;,<br>          mysql_conn_id=&#39;mysql_conn&#39;,<br>          database=&#39;etl&#39;,<br>          dag=<strong>dag</strong>)</pre><h4>Defining relationships between tasks</h4><p>Often, you find that some of your tasks need to execute one after another. 
In Airflow, you can define relationships like this (note that each task_id must be unique within a DAG):</p><pre><strong>task1</strong> = <strong>BashOperator</strong>(<br>           task_id=&quot;first_task&quot;,<br>           bash_command=&#39;echo I will execute first!&#39;,<br>           dag=<strong>dag</strong>)</pre><pre><strong>task2</strong> = <strong>BashOperator</strong>(<br>           task_id=&quot;second_task&quot;,<br>           bash_command=&#39;echo I will execute second!&#39;,<br>           dag=<strong>dag</strong>)</pre><pre># There are multiple ways you can define this relationship<br># using the bitshift operators or methods<br># Option 1<br><strong>task1</strong> &gt;&gt; <strong>task2</strong></pre><pre># Option 2<br><strong>task2</strong> &lt;&lt; <strong>task1</strong></pre><pre># Option 3<br><strong>task1</strong>.set_downstream(<strong>task2</strong>)</pre><pre># Option 4<br><strong>task2</strong>.set_upstream(<strong>task1</strong>)</pre><h4>Branching</h4><p>Sometimes you want to execute tasks depending on certain conditions. 
That is very possible in Airflow; here’s an example (the task_ids must match the strings returned by the branching function):</p><pre>...</pre><pre>run_task = <strong>BashOperator</strong>(<br>           task_id=&quot;run_task&quot;,<br>           bash_command=&#39;echo Hello World!&#39;,<br>           dag=<strong>dag</strong>)</pre><pre>sleep = <strong>BashOperator</strong>(<br>           task_id=&quot;sleep&quot;,<br>           bash_command=&#39;sleep 1m&#39;,<br>           dag=<strong>dag</strong>)</pre><pre><em>### this function decides which task to run depending on the time, by returning its task_id</em></pre><pre><strong>def</strong> decide_path():<br>    now = <strong>datetime</strong>.now(<strong>timezone</strong>(&#39;Africa/Nairobi&#39;))<br>    <strong><em>if</em></strong> <strong>now</strong>.hour &gt;= 19:<br>       <em>return</em> &quot;run_task&quot;<br>    <strong><em>else</em></strong>:<br>       <em>return</em> &quot;sleep&quot;</pre><pre>branch = <strong>BranchPythonOperator</strong>(<br>                 task_id=&#39;check_time&#39;,<br>                 python_callable=decide_path,<br>                 dag=dag)</pre><pre>...</pre><h4>Connections</h4><p>One of the best things about Airflow is security. Airflow stores and encrypts your credentials for you, so you never have to manage them yourself or include them in your code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-5UjSXmgeW6WmdIweEOd4w.png" /><figcaption>Airflow connections tab</figcaption></figure><p>Once you add and save the credentials under Connections, you can access them simply by referencing the connection id in your operator when defining your task.</p><h3>Conclusion</h3><p>Airflow is a game changer when it comes to scheduling and monitoring workflows. In this article, I have only covered the basic concepts that helped me get started, but there’s a lot <a href="https://airflow.apache.org/concepts.html"><strong><em>more</em></strong></a> that I haven’t covered. 
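</p><p>A nice property of the branching pattern shown earlier is that the callable you hand to BranchPythonOperator is plain Python, so its logic can be tested without a running Airflow instance. Below is a sketch of such a test; as an assumption, it swaps pytz’s ‘Africa/Nairobi’ for a fixed UTC+3 offset from the standard library to stay dependency-free:</p>

```python
from datetime import datetime, timezone, timedelta

# Nairobi is UTC+3 year-round; this fixed offset stands in for
# pytz.timezone('Africa/Nairobi') used in the article (an assumption).
NAIROBI = timezone(timedelta(hours=3))

def decide_path(now=None):
    """Return the task_id the BranchPythonOperator should follow."""
    now = now or datetime.now(NAIROBI)
    return "run_task" if now.hour >= 19 else "sleep"

# Exercised with fixed datetimes, no Airflow required:
assert decide_path(datetime(2019, 9, 6, 20, 0, tzinfo=NAIROBI)) == "run_task"
assert decide_path(datetime(2019, 9, 6, 8, 0, tzinfo=NAIROBI)) == "sleep"
```

<p>Since the strings the callable returns must match real task_ids in the DAG, a quick check like this catches a renamed task before it silently skips a branch.</p><p>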
Feel free to dig deeper and reach out.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ac73007aa28e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Set up Uptime Monitoring with Kibana, Heartbeat and Slack (Part 2)]]></title>
            <link>https://fatmali.medium.com/set-up-uptime-monitoring-with-kibana-heartbeat-and-slack-part-2-cf88bc14fed6?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/cf88bc14fed6</guid>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[kibana]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[elasticsearch]]></category>
            <category><![CDATA[slack]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Tue, 16 Apr 2019 16:22:51 GMT</pubDate>
            <atom:updated>2019-04-16T16:22:51.838Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l83g3QxKqLbMyMX7RyNtJA.jpeg" /></figure><p>In the <a href="https://medium.com/@fatmali/set-up-uptime-monitoring-with-kibana-heartbeat-and-slack-part-1-fa157d35071e?source=friends_link&amp;sk=c7abb3f6bac3e75bff994f179c04c8ab">first</a> part of this series, we set up Heartbeat to monitor two HTTP servers and send the data to Elasticsearch. We also configured Kibana to automatically create the uptime dashboards for us. In this second part, we are going to add Slack notifications to alert us in case a server goes down.</p><p>Before we begin, we have to get the webhook URL for our Slack channel. Follow the instructions below:</p><ol><li><em>Log in to </em><a href="http://slack.com/"><em>slack.com</em></a><em> as a team administrator.</em></li><li><em>Go to </em><a href="https://my.slack.com/services/new/incoming-webhook/"><em>https://my.slack.com/services/new/incoming-webhook/</em></a><em>.</em></li><li><em>Select a default channel for the integration.</em></li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/0*-DWL3GfrXfClpeIg.jpg" /></figure><p><em>4. 
Click Add Incoming Webhook Integration.</em></p><p>After the incoming webhook has been generated, copy the URL and add it, with the following configuration, to the elasticsearch.yml file in the elasticsearch/config/ directory.</p><pre>xpack.notification.slack:<br>  account:<br>    monitoring:<br>      url: &lt;WEB-HOOK-URL&gt;<br>      message_defaults:<br>        from: x-pack<br>        icon: <a href="http://example.com/images/watcher-icon.jpg">http://example.com/images/watcher-icon.jpg</a><br>        attachment:<br>          fallback: &quot;X-Pack Notification&quot;<br>          color: &quot;#36a64f&quot;<br>          title: &quot;X-Pack Notification&quot;<br>          title_link: &quot;<a href="https://www.elastic.co/guide/en/x-pack/current/index.html">https://www.elastic.co/guide/en/x-pack/current/index.html</a>&quot;<br>          text: &quot;One of your watches generated this notification.&quot;<br>          mrkdwn_in: &quot;pretext, text&quot;</pre><p>After you add this to your elasticsearch.yml, you have to restart all the containers for the changes to take effect. Run:</p><pre>$ docker-compose restart</pre><p>Once the containers have restarted, head over to Kibana; under the Management menu, under Elasticsearch, click the Watcher option and create a new threshold alert.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uFeGagoqhLp3sGFr9mP1rw.png" /></figure><h3>Setting Up Watcher</h3><h4><em>1. Using the UI (Kibana 6.6 and above)</em></h4><p>With Kibana 6.6, you can now configure the notification schedule and message from the UI. 
After entering the name, indices and time field, you will be presented with a set of options like the ones below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/875/1*kM5ApSVAfPXiIJWmgWRExA.png" /><figcaption>Watcher Conditions for firing a Slack Notification</figcaption></figure><p>These options make Kibana aware of the conditions required to fire actions such as Slack notifications. For this case, the conditions are:</p><pre>WHEN count() GROUPED OVER top 5 http.response.status_code IS ABOVE 299 FOR THE LAST 5 minutes</pre><p>HTTP codes 200 to 299 represent successful requests, so we want to fire an action when we get responses with status codes greater than 299. Next, we will move to actions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SoHNOzOCu9WvkuUnFYriSQ.png" /><figcaption>Slack notification to be sent once condition is met</figcaption></figure><p>For this case, you will pick the Slack action in the dropdown. (If Slack wasn’t configured properly in elasticsearch.yml, it will be grayed out.) The recipient can be a user or a channel (remember to add a # symbol for channels), and the message can be customized to include the error message and code as below:</p><pre>Encountered  {{ctx.payload.hits.total}} errors in the last 5 minutes: \n {{#ctx.payload.hits.hits}} *Error Message*: _{{_source.error.message}}_, \n *HTTP Status*: _{{_source.http.response.status}}_ \n {{/ctx.payload.hits.hits}}</pre><h4>2. 
Using JSON (Kibana 6.5 and below)</h4><p>The JSON below can be used to create an advanced watch, or used directly if you’re running Kibana version 6.5 or below, where there is no option to use the UI.</p><pre>{<br>  &quot;trigger&quot;: {<br>    &quot;schedule&quot;: {<br>      &quot;interval&quot;: &quot;5m&quot;<br>    }<br>  },<br>  &quot;input&quot;: {<br>    &quot;search&quot;: {<br>      &quot;request&quot;: {<br>        &quot;search_type&quot;: &quot;query_then_fetch&quot;,<br>        &quot;indices&quot;: [<br>          &quot;heartbeat*&quot;<br>        ],<br>        &quot;types&quot;: [],<br>        &quot;body&quot;: {<br>          &quot;query&quot;: {<br>            &quot;bool&quot;: {<br>              &quot;must&quot;: [<br>                {<br>                  &quot;query_string&quot;: {<br>                    &quot;query&quot;: &quot;monitor.status:down&quot;<br>                  }<br>                },<br>                {<br>                  &quot;range&quot;: {<br>                    &quot;@timestamp&quot;: {<br>                      &quot;gte&quot;: &quot;now-5m&quot;<br>                    }<br>                  }<br>                }<br>              ]<br>            }<br>          }<br>        }<br>      }<br>    }<br>  },<br>  &quot;condition&quot;: {<br>    &quot;compare&quot;: {<br>      &quot;ctx.payload.hits.total&quot;: {<br>        &quot;gt&quot;: 0<br>      }<br>    }<br>  },<br>  &quot;actions&quot;: {<br>    &quot;send_trigger&quot;: {<br>      &quot;throttle_period_in_millis&quot;: 3600000,<br>      &quot;slack&quot;: {<br>        &quot;message&quot;: {<br>          &quot;from&quot;: &quot;heartbeat&quot;,<br>          &quot;to&quot;: [<br>            &quot;#heartbeat-monitoring&quot;<br>          ],<br>          &quot;text&quot;: &quot;Heartbeat Monitoring&quot;,<br>          &quot;icon&quot;: &quot;<a 
href="https://raw.githubusercontent.com/elastic/elasticsearch-net/master/build/nuget-icon.png">https://raw.githubusercontent.com/elastic/elasticsearch-net/master/build/nuget-icon.png</a>&quot;,<br>          &quot;attachments&quot;: [<br>            {<br>              &quot;color&quot;: &quot;danger&quot;,<br>              &quot;title&quot;: &quot;Server Down&quot;,<br>              &quot;text&quot;: &quot;Encountered  {{ctx.payload.hits.total}} errors in the last 5 minutes: \n {{#ctx.payload.hits.hits}} *Error Message*: _{{_source.error.message}}_, \n *HTTP Status*: _{{_source.http.response.status}}_ \n {{/ctx.payload.hits.hits}}&quot;<br>            }<br>          ]<br>        }<br>      }<br>    }<br>  }<br>}</pre><p>Further explanations for each of the properties in the above JSON object can be found <a href="https://www.elastic.co/guide/en/x-pack/current/actions-slack.html">here</a>.</p><h4>And that’s it!</h4><p>You can save your watch at this point and try to simulate it using the simulate menu in Kibana. 
You should receive a message like this from slackbot:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/643/1*7HfXogI3GD297QrB2ZYCVQ.png" /></figure><p>Feel free to reach out and ask any questions regarding the above set up!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*fBtHnzYuw20U_WV82t3YFA.jpeg" /></figure><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2F12_BgpvZ2efbmfKytq6E8rsc_Vevb7x27jsC7ahL-PrA%2Fviewform%3Fembedded%3Dtrue&amp;url=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2F12_BgpvZ2efbmfKytq6E8rsc_Vevb7x27jsC7ahL-PrA%2Fviewform%3Fedit_requested%3Dtrue&amp;image=https%3A%2F%2Flh4.googleusercontent.com%2FYZwULYebPa4C259Qv5moOW_f6Mh4n2CiookGuSL156KjwjvL7cmMPm6nqatDvME_JZw%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="760" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/65a42b3831867f07ef25e3bf3bcf6e88/href">https://medium.com/media/65a42b3831867f07ef25e3bf3bcf6e88/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cf88bc14fed6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Set up Uptime Monitoring with Kibana, Heartbeat and Slack (Part 1)]]></title>
            <link>https://fatmali.medium.com/set-up-uptime-monitoring-with-kibana-heartbeat-and-slack-part-1-fa157d35071e?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/fa157d35071e</guid>
            <category><![CDATA[elasticsearch]]></category>
            <category><![CDATA[beats]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[slack]]></category>
            <category><![CDATA[kibana]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Tue, 02 Apr 2019 08:59:36 GMT</pubDate>
            <atom:updated>2019-04-02T09:00:34.452Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/745/1*ADtm_3ZcXH5cU6zmNc9iFQ.jpeg" /></figure><p>Uptime is an important aspect of any online hosted service, as it directly impacts the business. You can have the best software product in the world, but if you experience a lot of downtime, then it’s as good as nothing. In the real world, 100% uptime cannot be guaranteed. Hence the need to detect the moment downtime happens, and act fast.</p><p>Over time, we also need to know more details: for example, what causes downtime, when it occurs, and so on. That’s why at AMPATH, we chose to do it with Kibana and Heartbeat.</p><p>From the official websites:</p><blockquote>Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack, so you can do anything from learning why you’re getting paged at 2:00 a.m. to understanding the impact rain might have on your quarterly numbers.</blockquote><blockquote>Heartbeat monitors services for their availability with active probing. Given a list of URLs, Heartbeat asks the simple question: Are you alive? Heartbeat ships this information and response time to the rest of the Elastic Stack for further analysis.</blockquote><p>Both of these tools are part of the Elastic stack. 
You can find a quick intro to the Elastic stack <a href="https://medium.com/@fatmali/https-medium-com-fatmali-how-to-setup-a-docker-elk-elastic-logstash-kibana-stack-in-a-jiffy-ab56e2660416">here</a>.</p><p>In the example below, we are going to set up Heartbeat to ping an HTTP server to determine if it’s up or down, as well as configure Kibana to automatically create visualizations for the data received from Heartbeat.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/1*lYeQSdn29C9_tjR2aQhQqA.jpeg" /></figure><h3>Step 1: Set up the ELK Stack</h3><p><em>(Make sure you have docker and docker-compose installed before you begin!)</em></p><p>Fork and clone this <a href="https://github.com/deviantony/docker-elk">repo</a>. The docker-compose.yml file has the essential docker images for the ELK stack; we are going to add one more service for Heartbeat.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2fee0f48f6c3c5e1407fc35d0322cf4c/href">https://medium.com/media/2fee0f48f6c3c5e1407fc35d0322cf4c/href</a></iframe><h3>Step 2: Configure Heartbeat</h3><p>As you can see, we have added the heartbeat service to the compose file. But before we start the containers, we have to create the volumes we named in order to map the config file to the containers. So let’s create a folder named heartbeat. Inside the heartbeat folder:</p><p>(i) <strong><em>Create a Dockerfile with the following content:</em></strong></p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/588f55856fc0b3fcadc33b3e06209b31/href">https://medium.com/media/588f55856fc0b3fcadc33b3e06209b31/href</a></iframe><p>(ii) <strong><em>Create a folder called config and inside the config folder, create a file called heartbeat.yml</em></strong></p><p>The heartbeat.yml file is a config file that we’ll use to configure Heartbeat monitors. 
To better understand what it does, here’s how it should be written for our particular case:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c5c78444847776959f29ea429d44cd71/href">https://medium.com/media/c5c78444847776959f29ea429d44cd71/href</a></iframe><p><strong><em>Brief Explanation</em></strong></p><p>This file is responsible for configuring multiple Heartbeat monitors. In our case, we have configured a single monitor of the http type. We have also instructed it to hit <a href="https://fatmali@github.io">https://fatmali@github.io</a> and <a href="https://google.com">https://google.com</a> every 10 seconds with a GET request; if it gets a response with a 200 status code, the server is up, otherwise it’s having some trouble. Finally, we have configured an Elasticsearch output and the Kibana host for automatic dashboard creation.</p><p>To learn more about the http monitor and the other monitor types Heartbeat supports, head over to the official <a href="https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-reference-yml.html">docs</a>!</p><h3><strong>And that’s it, you’re ready to go!</strong></h3><p>Run:</p><pre>$ sudo docker-compose up -d</pre><p>Once the containers have started, head over to Kibana and you should see the new Heartbeat dashboard automatically created. It should look like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UvUHT4ddA0dhwu8Qh-wUTA.png" /></figure><p>The next part will cover how to utilize Elasticsearch Watchers to set up downtime notifications with Slack! 
Happy Monitoring 😉</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2F1O4ZRtZVAoSyV6jkrYB8pXogctXL4RdPX-NOuOuwbeEw%2Fviewform%3Fembedded%3Dtrue&amp;url=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2F1O4ZRtZVAoSyV6jkrYB8pXogctXL4RdPX-NOuOuwbeEw%2Fviewform%3Fedit_requested%3Dtrue&amp;image=https%3A%2F%2Flh5.googleusercontent.com%2FqmSw2rDjYXKEZb4bClI_E3u51Mr5tHihReQg9gWY6p1lL-pGRyhdQTduCV8JBRMm-AE%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="760" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/898d51879e90debe6ef2e740f8263671/href">https://medium.com/media/898d51879e90debe6ef2e740f8263671/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fa157d35071e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to succeed in a male-dominated career]]></title>
            <link>https://fatmali.medium.com/how-to-succeed-in-a-male-dominated-career-8f38b305aca6?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/8f38b305aca6</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[women-in-tech]]></category>
            <category><![CDATA[women]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[science]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Sat, 14 Jul 2018 05:13:42 GMT</pubDate>
            <atom:updated>2020-02-14T16:51:43.304Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*D0jLbMJNT7O3P2zFdBWalQ.jpeg" /></figure><p>When I finally graduated from college and landed my dream job, I was surprised to find myself the only woman in my office. The first few days sucked. Despite my male colleagues being extremely helpful and welcoming, I felt lonely.</p><p>Sure, it can be motivating to be the first/only lady in the office, but that doesn’t come without its own set of challenges. Many times, I was mistaken for a front-end developer. Nothing wrong with being a front-end developer, but the idea that front-end is feminine and back-end is too “tough” is what annoyed me.</p><p>The biggest challenge most women face in male-dominated careers is the lack of female mentors. Without mentors, we often try to push ourselves to perform better than our male colleagues. And that’s okay, but we shouldn’t forget how and why we got there in the first place.</p><p>Here are my top 6 tips to survive and succeed in a male-dominated career environment:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HY-pvYQN-dOCQGq5MyGZIg.jpeg" /></figure><p><strong>Be Yourself</strong></p><p>Always. Don’t apologize for who you are. Understand your strengths and your weaknesses and act on them. Be confident. Remember you are worthy and just as good as everybody else; don’t be afraid to be you.</p><p><strong>Find a mentor and mentees</strong></p><p>It’s necessary to find someone “who’s been there” to be your mentor. Someone who understands the challenges you face. Here’s an <a href="http://fortune.com/2016/07/25/how-to-find-a-female-mentor/">article</a> that can guide you on finding a female mentor. Most importantly, find young girls you can mentor; we need more women in these careers.</p><p><strong>Don’t hold back</strong></p><p>Don’t be afraid to air your opinions. Learn to communicate your ideas clearly. 
When an opportunity comes to lead or to demo your skills, don’t, for heaven’s sake, hold back. Take your place at the <em>freaking</em> table.</p><p><strong>Become indispensable</strong></p><p>Work hard, not to compete but to become irreplaceable. Put in the extra hours and effort to get things done. Always get into the details when working on something. And get things done!</p><p><strong>Don’t be the office messenger (please!)</strong></p><p>Don’t ever allow yourself to become the office coffee-getter or messenger. Well, unless you don’t mind being an unpaid assistant/secretary. Do what you’re paid to do.</p><p><strong>Do not tolerate disrespect/mistreatment</strong></p><p>Finally, if you’re in an environment where you don’t feel respected, you are at liberty to leave and find a place that will treat you better. Never tolerate mistreatment just because you’re female; stand up for yourself or walk out.</p><p>Personally, I try to follow these tips every day to succeed in my career. I’d love to hear from anyone who has comments on this topic.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8f38b305aca6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting started with Docker (Part I)]]></title>
            <link>https://fatmali.medium.com/https-medium-com-fatmali-getting-started-with-docker-part-1-24b7992c8464?source=rss-1ef1eb93c4d3------2</link>
            <guid isPermaLink="false">https://medium.com/p/24b7992c8464</guid>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Fatma Ali]]></dc:creator>
            <pubDate>Wed, 27 Jun 2018 19:42:31 GMT</pubDate>
            <atom:updated>2018-06-27T19:44:09.430Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/568/1*mkahhuUkeOr6YR3oT-uC4Q.jpeg" /></figure><blockquote><a href="https://github.com/docker/docker">Docker</a> is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.</blockquote><p>Unlike VMs, which virtualize an entire guest OS, Docker lets applications share the Linux kernel of the host machine. This makes Docker more lightweight and performant than VMs. It is an open-source tool made to make the lives of developers and system admins a lot easier by vanquishing the “works on my machine” problem.</p><h3>Getting Started</h3><p>In this tutorial, we are going to learn how to set up Docker and the basic commands I use almost every day when working with containers.</p><p><strong>Step 1: Install Docker</strong></p><p>You can either <a href="https://docs.docker.com/install/">install</a> Docker locally, or head over to the <a href="http://play-with-docker.com">Docker playground</a> (make sure you have a Docker Hub account) and start playing with Docker right away!</p><p><strong>Step 2: Basic Commands</strong></p><p>You can confirm that Docker is installed by running the command below; it should print the version of Docker you’re running:</p><pre><strong>$ docker --version</strong></pre><pre>Docker version 18.03.1-ce, build 9ee9f40</pre><p>The next step is to fetch a mysql image from <a href="https://hub.docker.com/_/mysql/">Docker Hub</a>. A container is launched by running an image. 
An <strong>image</strong> is an executable package that includes everything needed to run an application — the code, a runtime, libraries, environment variables, and configuration files.</p><p>To pull the mysql image, run the command below:</p><pre><strong>$ docker pull mysql</strong></pre><pre>Using default tag: latest<br>latest: Pulling from library/mysql<br>07a152489297: Pull complete<br>Digest: sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47<br>Status: Downloaded newer image for mysql:latest</pre><p>To see a list of the images now available on your system:</p><pre><strong>$ docker image ls</strong></pre><pre>REPOSITORY          TAG                 IMAGE ID            SIZE<br>mysql               latest              8d99edb9fd40        445MB</pre><p>To run the mysql image:</p><pre><strong>$ </strong><strong>docker run --name mysql_container -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql</strong></pre><p>The above command runs the mysql image. The --name flag names the container mysql_container; if we had not provided a name, Docker would have assigned a random one. The -e flag sets an environment variable, in this case the MySQL root password, to my-secret-pw.</p><p>The -d flag tells Docker to run this container in the background as a daemon, so closing the shell won’t stop the container (the opposite of this is -it). Finally, we pass the image name, mysql.</p><p>To see the list of all running containers:</p><pre><strong>$ docker ps</strong></pre><p>To log into a running container:</p><pre><strong>$ sudo docker exec -ti mysql_container bash</strong></pre><p>Once inside the container, you can execute commands such as ls, cd, and mkdir, because the container runs a Linux environment. While you’re there, you could try connecting to your MySQL instance using mysql -uroot -pmy-secret-pw. 
To exit the container, just type exit.</p><p>To stop a running container, simply run:</p><pre><strong>$ docker stop mysql_container</strong></pre><p>Now when you run sudo docker ps, you won’t see mysql_container in the list of running containers. To see a list of all containers, both running and stopped:</p><pre><strong>$ sudo docker ps -a</strong></pre><p>The -a flag means all containers. To restart the container:</p><pre><strong>$ sudo docker start mysql_container</strong></pre><p>Removing a container means you cannot restart it again. You can do that with the command below (note that you cannot remove a running container):</p><pre><strong>$ sudo docker rm mysql_container</strong></pre><p>And that concludes the basic Docker commands that will get you started. To see all the options available for a particular command, for example start, you can run:</p><pre><strong>$ docker start --help</strong></pre><p>If you have made it this far, congrats! :-D In the next tutorial, we are going to look at Docker volumes!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=24b7992c8464" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>