<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Kasm Technologies on Medium]]></title>
        <description><![CDATA[Stories by Kasm Technologies on Medium]]></description>
        <link>https://medium.com/@kasm?source=rss-755f84541f54------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*-d9eeMLByUcl3KemR7VWJA.png</url>
            <title>Stories by Kasm Technologies on Medium</title>
            <link>https://medium.com/@kasm?source=rss-755f84541f54------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 02:14:49 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@kasm/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Build a Personal AI Lab on Oracle Cloud in Under an Hour — Powered by Ampere & Kasm Workspaces]]></title>
            <link>https://kasm.medium.com/build-a-personal-ai-lab-on-oracle-cloud-in-under-an-hour-powered-by-ampere-kasm-workspaces-271f39bbcd9c?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/271f39bbcd9c</guid>
            <category><![CDATA[developer]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[arm]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[oracle]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Thu, 23 Apr 2026 20:01:01 GMT</pubDate>
            <atom:updated>2026-04-23T20:01:01.636Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>From OCI Marketplace to a fully running AI Lab with frontier models and quantized CPU inference — no GPU required.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MQ7BRbA9kgSF-xmr4Dm_Gw.png" /></figure><h3>Introduction</h3><p>What if you could stand up a fully isolated, browser-delivered AI development lab — running frontier models, quantized CPU inference, and collaborative workspaces — in less than an hour, on infrastructure that costs a fraction of traditional cloud VMs?</p><p>That’s exactly what this guide walks you through. We’re going to use <strong>Oracle Cloud Infrastructure (OCI)</strong>, <strong>Ampere compute</strong>, and <strong>Kasm Workspaces</strong> (deployed straight from the OCI Marketplace) to build a production-grade AI lab that you can access from any browser, on any device, anywhere in the world.</p><p>By the end, you’ll have:</p><ul><li>✅ Kasm Workspaces running on an Ampere instance (OCI)</li><li>✅ Workspace registries added for <strong>Calliope.AI</strong> and <strong>Scully’s Workspace Registry</strong></li><li>✅ Frontier model workspaces ready to launch</li><li>✅ Quantized CPU model inference via <strong>Ollama</strong> + <strong>Hugging Face</strong> + the <strong>Ampere Model Zoo</strong> — no GPU required</li></ul><p>Let’s build it.</p><h3>Why Ampere + Kasm?</h3><p>Before we dive in, here’s the short version of why this stack is worth your time:</p><p><strong>Kasm Workspaces</strong> delivers fully isolated, ephemeral containerized desktops and applications through your browser. No VPN. No client software. No data left on your endpoint. Every session spins up clean and destroys itself on logout — which makes it ideal for AI development where you want controlled, reproducible environments.</p><p><strong>Ampere on OCI</strong> gives you access to some of the best price-to-performance compute in public cloud. 
The A1 through A4 shapes offer up to 160 OCPUs and 1TB of RAM — purpose-built for cloud-native, multi-threaded workloads. For CPU-based LLM inference with quantized models (which is 90%+ of real-world AI workloads), Ampere’s architecture is genuinely excellent: INT8 and FP16 optimized, efficient per-core pricing, and no noisy-neighbor issues.</p><p>Combined: you get 4–8x container density vs. traditional VMs, 70–80% lower infrastructure cost vs. legacy VDI, and an AI lab that’s secure by design.</p><h3>Navigate to the OCI Marketplace and Find Kasm</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_PraA5TJZHw%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_PraA5TJZHw&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_PraA5TJZHw%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a5b73b4d9022c06708909c11098752cc/href">https://medium.com/media/a5b73b4d9022c06708909c11098752cc/href</a></iframe><p><strong>1.1 — Log into your OCI Console</strong></p><p>Head to <a href="https://cloud.oracle.com">cloud.oracle.com</a> and sign in. If you don’t have an account, Oracle’s Always Free tier includes <strong>4 OCPUs + 24GB RAM</strong> on Ampere A1 — more than enough to start.</p><p><strong>1.2 — Open the OCI Marketplace</strong></p><p>From the main OCI Console navigation:</p><ul><li>Click the <strong>☰ hamburger menu</strong> (top-left)</li><li>Navigate to <strong>Marketplace → All Applications</strong></li></ul><p>Or use the search bar at the top of the Console and type <strong>Marketplace</strong>.</p><p><strong>1.3 — Search for Kasm</strong></p><p>In the Marketplace search bar, type:</p><pre>Kasm</pre><p>You’ll see several Kasm listings appear. 
Look for the <strong>Kasm Workspaces ARM</strong> listing — this is the one optimized for Ampere compute.</p><blockquote><em>💡 </em><strong><em>Pro tip:</em></strong><em> Filter by </em><strong><em>ARM-compatible</em></strong><em> or </em><strong><em>Arm64</em></strong><em> in the category/filter panel on the left to narrow results quickly.</em></blockquote><p><strong>1.4 — Select the Kasm ARM Instance</strong></p><p>Click on the <strong>Kasm Workspaces (ARM)</strong> listing. Review the overview page — you’ll see the supported shapes, licensing model (Kasm has a free tier and paid tiers), and the “Launch Instance” button.</p><p>When you’re ready, click <strong>Launch Instance</strong>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FrqoR6OFwTYg%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DrqoR6OFwTYg&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FrqoR6OFwTYg%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/fd2ff17737a9bb0dfa11505ed72ba857/href">https://medium.com/media/fd2ff17737a9bb0dfa11505ed72ba857/href</a></iframe><h3>Step 2: Configure Your Instance — Shape, Placement &amp; Details</h3><p>This is where you define what your Kasm server actually looks like. Take your time here — the choices you make affect performance and cost.</p><p><strong>2.1 — Name Your Instance</strong></p><p>Give your instance a meaningful name:</p><pre>kasm-ai-lab-01</pre><p><strong>2.2 — Choose Your Availability Domain and Fault Domain</strong></p><p>Under <strong>Placement</strong>, select:</p><ul><li><strong>Availability Domain</strong> — OCI regions have multiple ADs. For a single-node lab, AD-1 is fine. 
If you’re in a region with limited Ampere capacity, try AD-2 or AD-3.</li><li><strong>Fault Domain</strong> — Leave as default unless you have specific HA requirements.</li></ul><blockquote><em>💡 Ampere A1 shapes are in high demand in some regions. If you get a capacity error on one AD, try another — availability varies by region and time of day.</em></blockquote><p><strong>2.3 — Choose Your Ampere Shape</strong></p><p>Under <strong>Shape</strong>, click <strong>Change Shape</strong>. You’ll see the <strong>Flexible Shapes</strong> section. Select:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/694/1*6IubvaWdifHE6V33exRs0A.png" /></figure><blockquote><strong><em>Always Free eligibility:</em></strong><em> </em><em>VM.Standard.A1.Flex with up to </em><strong><em>4 OCPUs and 24 GB RAM</em></strong><em> qualifies for OCI&#39;s Always Free tier — enough to stand up Kasm and test a small quantized model. No credit card charges for staying within those limits.</em></blockquote><blockquote><strong><em>OCPU ≠ vCPU:</em></strong><em> On A2 and A4, one OCPU equals </em><strong><em>2 physical cores</em></strong><em> of the AmpereOne/AmpereOne M processor. 
When you configure 8 OCPUs on an A2 instance, you’re getting 16 physical ARM64 cores — keep this in mind when sizing for model inference throughput.</em></blockquote><p><strong>Recommended starting points by use case:</strong></p><ul><li><strong>Just exploring / free tier:</strong> VM.Standard.A1.Flex — 4 OCPUs / 24 GB</li><li><strong>Personal AI lab with Ollama:</strong> VM.Standard.A2.Flex — 8 OCPUs / 64 GB</li><li><strong>Multi-user lab or larger models (13B–34B):</strong> VM.Standard.A2.Flex — 16 OCPUs / 128 GB</li><li><strong>Production / latest silicon:</strong> VM.Standard.A4.Flex — check regional availability first</li></ul><p>OCI’s Ampere instances use <strong>per-OCPU pricing</strong> — you pay for exactly what you configure, and you can resize the OCPU and RAM counts independently without redeployment.</p><h3>Step 3: Build Out the Network and Sandbox</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FMrlRRGZ6gy0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DMrlRRGZ6gy0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FMrlRRGZ6gy0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/d81f7c6ee136227436172a4df84a7a2d/href">https://medium.com/media/d81f7c6ee136227436172a4df84a7a2d/href</a></iframe><p>Don’t skip past this section in the wizard — networking is configured here, <strong>before</strong> you set your boot volume and SSH key, and getting it right now saves you from having to dig into security rules after the fact.</p><p><strong>3.1 — VCN (Virtual Cloud Network) Setup</strong></p><p>The OCI Marketplace launcher will offer to create a new VCN automatically — you can accept that for a quick start. 
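</p><p>If you prefer scripting, the same scaffolding can be built with the OCI CLI. The sketch below is a hedged outline rather than a complete recipe: the compartment and VCN OCIDs are placeholder variables, and you still need to attach the route table rule and security list to the subnet afterwards.</p>

```shell
# Sketch: scaffold a VCN for Kasm with the OCI CLI.
# $COMPARTMENT_ID and $VCN_ID are placeholders you must supply.
oci network vcn create \
  --compartment-id "$COMPARTMENT_ID" \
  --cidr-block "10.0.0.0/16" \
  --display-name "kasm-vcn"

# Internet gateway for public ingress/egress
oci network internet-gateway create \
  --compartment-id "$COMPARTMENT_ID" \
  --vcn-id "$VCN_ID" \
  --is-enabled true \
  --display-name "kasm-igw"

# Public subnet the Kasm instance will live in
oci network subnet create \
  --compartment-id "$COMPARTMENT_ID" \
  --vcn-id "$VCN_ID" \
  --cidr-block "10.0.1.0/24" \
  --display-name "kasm-public-subnet"
```

<p>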
If you’re deploying into an existing VCN, make sure it has:</p><ul><li>A <strong>public subnet</strong> with an Internet Gateway attached</li><li>A <strong>route table</strong> with a default route to the Internet Gateway (0.0.0.0/0)</li><li>A <strong>DHCP Options</strong> set pointing to OCI’s DNS resolvers</li></ul><p><strong>3.2 — Security List / Network Security Group Rules</strong></p><p>Open the <strong>Security List</strong> attached to your subnet and add the following <strong>Ingress Rules</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*ZECv81V-BJnHAwGI_xov6g.png" /></figure><blockquote><em>🔒 </em><strong><em>Security note:</em></strong><em> For a personal lab, consider restricting port 443 to your IP range rather than the open internet (0.0.0.0/0). For team access, you’ll want to keep it open or place OCI’s Load Balancer in front.</em></blockquote><p><strong>3.3 — Now Configure Boot Volume</strong></p><p>With networking sorted, scroll down to the <strong>Boot Volume</strong> section:</p><ul><li>Set boot volume size to at least <strong>100 GB</strong> — Kasm pulls container images and you’ll want the headroom</li><li>Enable <strong>In-Transit Encryption</strong> if you have compliance requirements</li><li>Keep <strong>VPU (Volume Performance Units)</strong> at Balanced unless you need higher IOPS for heavy model workloads</li></ul><p><strong>3.4 — SSH Key Pair</strong></p><p>Upload or paste in your public SSH key. You’ll need this for initial shell access and any post-deployment configuration. Store the private key securely — this is your admin lifeline if you ever need to get into the instance directly.</p><blockquote><em>💡 If you don’t have a key pair yet, the OCI Console can generate one for you and download the private key. Keep that </em><em>.pem file somewhere safe.</em></blockquote><p><strong>3.5 — Review and Launch</strong></p><p>Click <strong>Create</strong>. OCI will provision the instance in 3–5 minutes. 
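</p><p>If you’d rather watch from a terminal, you can poll the lifecycle state with the OCI CLI instead. A hedged sketch; the instance OCID is a placeholder:</p>

```shell
# Poll until the instance reaches RUNNING ($INSTANCE_ID is a placeholder OCID).
while true; do
  state=$(oci compute instance get \
            --instance-id "$INSTANCE_ID" \
            --query 'data."lifecycle-state"' --raw-output)
  echo "lifecycle-state: $state"
  [ "$state" = "RUNNING" ] && break
  sleep 10
done
```

<p>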
Watch the state move from <strong>Provisioning → Running</strong> — once it’s green you’re ready.</p><p><strong>3.6 — Firewall Check on the Instance</strong></p><p>OCI security lists handle cloud-level traffic, but the OS has its own firewall as a second layer. SSH into your instance once it’s running and verify:</p><pre>sudo firewall-cmd --list-all<br># or if using iptables:<br>sudo iptables -L -n</pre><p>Kasm’s Marketplace image typically handles this automatically, but it’s worth a 30-second confirmation.</p><p><strong>3.7 — First Login to the Kasm Console</strong></p><p>Open your browser and navigate to:</p><pre>https://&lt;YOUR-OCI-INSTANCE-PUBLIC-IP&gt;</pre><p>You’ll land on the Kasm login page. Default credentials are set during Marketplace deployment — check your OCI instance <strong>Console Connection</strong> or the Marketplace deployment output for the initial admin password.</p><p><strong>3.8 — Harden the Sandbox</strong></p><p>Before adding any workspaces, do a quick hardening pass:</p><ul><li><strong>Change the default admin password</strong> immediately</li><li>Set up a <strong>second admin account</strong> as backup</li><li>Under <strong>Settings → Authentication</strong>, enable MFA if this instance is internet-exposed</li><li>Review <strong>Kasm’s built-in DLP controls</strong> — clipboard restrictions, watermarking, and download blocking are available out of the box and worth enabling for any AI workspaces that will touch sensitive data</li></ul><h3>Step 4: Add Workspace Registries and Build Your AI Lab</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FB_PhF0qdNJk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DB_PhF0qdNJk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FB_PhF0qdNJk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/b125bc3800158a56f1b7bb3c513bc661/href">https://medium.com/media/b125bc3800158a56f1b7bb3c513bc661/href</a></iframe><p>This is where it gets exciting. Kasm’s <strong>Workspace Registry</strong> system lets you pull in curated catalogs of containerized environments — including AI/ML workspaces, secure browsers, development tools, and more — with a single URL.</p><p>We’re going to add two key registries:</p><ul><li><strong>Calliope.AI Workspace Registry</strong> — AI-focused workspaces, frontier model interfaces, and intelligent tooling (also supports Ollama)</li><li><strong>Scully’s Workspace Registry</strong> — Community-curated workspaces including development environments, AI tools, and utilities</li></ul><p><strong>4.1 — Navigate to Workspace Registries</strong></p><p>In the Kasm Admin Console:</p><ul><li>Go to <strong>Workspaces → Workspace Registry</strong></li><li>Click <strong>Add Registry</strong></li></ul><p><strong>4.2 — Add the Calliope.AI Registry</strong></p><p><a href="https://calliopeai.github.io/calliope-kasm/">https://calliopeai.github.io/calliope-kasm/</a></p><p>Paste the Calliope.AI Registry URL above and click <strong>Add</strong>.</p><p><strong>4.3 — Add Scully’s Workspace Registry (OpenClaw Workspace + Unreal)</strong></p><p><a href="https://sullyschoice.github.io/kasm-registry/">https://sullyschoice.github.io/kasm-registry/</a></p><p>Paste Scully&#39;s Registry URL above and click <strong>Add</strong>.</p><p><strong>4.4 — Browse and Install Workspaces</strong></p><p>Once both registries are added, click <strong>View Workspaces</strong> on each registry. You’ll see a catalog of available environments. 
Install the ones relevant to your AI lab:</p><p><strong>From the Calliope workspaces, or Kasm&#39;s Ubuntu DinD (Docker-in-Docker) workspaces:</strong></p><ul><li><strong>Frontier model interfaces</strong> (Claude, ChatGPT, Gemini browser-isolated workspaces)</li><li><strong>Jupyter/VS Code</strong> environments pre-configured for AI development</li><li><strong>Ollama</strong> workspace for local quantized inference</li><li><strong>Open-WebUI</strong> or equivalent chat UI workspaces</li></ul><p>Click <strong>Install</strong> on each workspace you want available to your users.</p><p><strong>4.5 — Configure Workspace Permissions (Critical for Path B)</strong></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F050TyuXcNIs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D050TyuXcNIs&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F050TyuXcNIs%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/8ce28af24519c6742a68eb96ceb8400f/href">https://medium.com/media/8ce28af24519c6742a68eb96ceb8400f/href</a></iframe><blockquote><em>⚠️ </em><strong><em>Don’t skip this step if you’re going the Ollama or Docker route.</em></strong><em> Kasm workspaces run as unprivileged users by default — which is exactly what you want for browser isolation and frontier model access. 
But if you’re planning to install Ollama, run Docker containers inside your workspace, or pull and serve quantized models, you need to explicitly grant elevated permissions before the workspace will cooperate.</em></blockquote><p>In the Kasm Admin Console, go to <strong>Workspaces</strong> and click the <strong>Edit</strong> (pencil) icon on any workspace you intend to use for AI inference.</p><p><strong>Enable Sudo / Root Access:</strong></p><p>Scroll to the <strong>Advanced</strong> tab inside the workspace editor. Under <strong>Docker Run Config Override</strong>, make sure the following is set:</p><pre>{<br>  &quot;user&quot;: &quot;root&quot;<br>}</pre><p>Or alternatively, under the <strong>Permissions</strong> section, toggle <strong>Allow Sudo</strong> to <strong>ON</strong>. This grants the workspace user sudo access inside the container session — required for Ollama&#39;s install script and any package installs (apt, pip, curl-based installers).</p><blockquote><em>🔒 </em><strong><em>Security note:</em></strong><em> Only enable sudo/root on workspaces explicitly designated for AI development. Keep your browser-isolation and frontier model workspaces locked down with default unprivileged settings. 
Scope the elevated access — don’t apply it globally.</em></blockquote><p><strong>Enable Docker-in-Docker (for easier model deployment):</strong></p><p>For workspaces where you want to run Docker containers inside the Kasm session — for example, pulling Open-WebUI, running a llama.cpp server container, or spinning up a full inference stack — you need to enable privileged mode and bind the Docker socket.</p><p>In the same <strong>Advanced → Docker Run Config Override</strong> field:</p><pre>{<br>  &quot;user&quot;: &quot;root&quot;,<br>  &quot;privileged&quot;: true,<br>  &quot;volumes&quot;: [&quot;/var/run/docker.sock:/var/run/docker.sock&quot;]<br>}</pre><p>This mounts the host Docker socket into the workspace container, giving you full docker CLI access from inside the session. You can now run docker pull, docker run, and docker compose directly — making it dramatically easier to deploy multi-container AI stacks like Ollama + Open-WebUI without leaving the workspace.</p><blockquote><em>💡 </em><strong><em>Why Docker-in-Docker matters here:</em></strong><em> Instead of manually installing Ollama via shell script and managing the process yourself, you can use pre-built Docker Compose stacks from the community — pull them into your workspace, </em><em>docker compose up, and have a full inference UI running in under two minutes. The Ampere Model Zoo and many Hugging Face model repos ship with ready-made Compose files that just work.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*3jaH5DO7Th1rTXzUjE3e6w.png" /></figure><h3>Running AI Workloads: Two Paths</h3><p>Once your workspaces are live, you have two main approaches for running AI models in your lab:</p><h3>Path A — Frontier Models via Workspace Isolation</h3><p>Use Kasm workspaces as <strong>isolated, controlled containers</strong> for accessing frontier AI services (Claude, ChatGPT, Gemini, Perplexity, etc.). 
Use Calliope Workspaces or Kasm’s Ubuntu DinD workspaces:</p><ul><li>Every session is <strong>ephemeral</strong> — no data persists after logout</li><li><strong>DLP controls</strong> prevent accidental data leakage to external APIs</li><li><strong>Session recording</strong> gives you an audit trail of all AI interactions</li><li>Works perfectly on ARM64 since these are browser-delivered services — no local compute needed</li></ul><p>This is ideal for teams who need governed access to commercial AI tools without the risk of proprietary data being sent to external services unchecked.</p><h3>Path B — Quantized CPU Inference on Ampere (No GPU Required)</h3><p>This is where Ampere ARM64 genuinely shines. Modern quantized models (GGUF format, INT4/INT8 quantization) run surprisingly well on high-core-count ARM CPUs — and Ampere is purpose-built for exactly this workload.</p><p><strong>Using Ollama:</strong></p><p>Ollama makes pulling and running quantized models trivially easy. From within your Kasm workspace:</p><pre># Install Ollama (ARM64 binary available)<br>curl -fsSL https://ollama.com/install.sh | sh</pre><pre># Pull a quantized model<br>ollama pull llama3.2:3b          # 2GB, great for fast inference<br>ollama pull mistral:7b-instruct  # 4.1GB, strong general assistant<br>ollama pull qwen2.5:14b          # 9GB, excellent reasoning</pre><pre># Run it<br>ollama run llama3.2:3b</pre><blockquote><em>💡 On an A2.Flex with 8 OCPUs / 64GB RAM, you can comfortably run 7B–14B parameter models at INT4/INT8 quantization with solid token/sec throughput — all on CPU, zero GPU cost.</em></blockquote><p>Or bundle these steps into a file mapping or startup script:</p><pre>cat &lt;&lt; &#39;EOF&#39; &gt; start_ollama.sh<br>#!/bin/bash<br><br># 1. Install Ollama<br>echo &quot;Installing Ollama...&quot;<br>curl -fsSL https://ollama.com/install.sh | sh<br><br># 2. Start Ollama server in the background<br>echo &quot;Starting Ollama server...&quot;<br># OLLAMA_HOST=0.0.0.0 allows connections from outside the container if needed<br>OLLAMA_HOST=0.0.0.0 ollama serve &gt; ollama.log 2&gt;&amp;1 &amp;<br><br># 3. Wait for the server to be ready<br>echo &quot;Waiting for server to initialize...&quot;<br>until curl -s http://localhost:11434/api/tags &gt; /dev/null; do<br>  sleep 2<br>done<br>echo &quot;Ollama is up and running!&quot;<br><br># 4. Pull the models<br>echo &quot;Pulling qwen3:8b...&quot;<br>ollama pull qwen3:8b<br><br>echo &quot;Pulling llama3.2:latest...&quot;<br>ollama pull llama3.2:latest<br><br>echo &quot;---------------------------------------&quot;<br>echo &quot;All set! Models are ready to use.&quot;<br>ollama list<br>EOF<br><br># Make it executable and run it<br>chmod +x start_ollama.sh<br>./start_ollama.sh</pre><p><strong>From the Ampere Model Zoo:</strong></p><p>Ampere maintains an optimized model zoo at <a href="https://solutions.amperecomputing.com">solutions.amperecomputing.com</a> with models specifically benchmarked and tuned for their silicon. 
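</p><p>Before pulling anything large, sanity-check that the quantized weights will fit in RAM. As a rough, illustrative heuristic (not an official Ampere or Ollama formula): weight memory ≈ parameters × bits-per-weight ÷ 8, plus roughly 20% headroom for KV cache and runtime:</p>

```shell
# Rough RAM estimate for a quantized model (illustrative heuristic only).
# Usage: est_gb <params_in_billions> <bits_per_weight>
est_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 * 1.2 }'
}

est_gb 7 4    # 7B at INT4  -> ~4.2 GB
est_gb 14 8   # 14B at INT8 -> ~16.8 GB
```

<p>On an 8 OCPU / 64 GB A2.Flex, both fit with room to spare for the OS and Kasm itself.</p><p>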
Look for:</p><ul><li><strong>LLaMA family</strong> — Optimized GGUF builds</li><li><strong>Mistral / Mixtral</strong> — Excellent performance per core</li><li><strong>Phi-3 / Phi-4</strong> — Microsoft’s small, efficient models that punch above their weight on ARM</li><li><strong>Qwen2.5</strong> — Strong multilingual and coding performance</li></ul><p><strong>From Hugging Face (with llama.cpp / Ollama backend):</strong></p><pre># Using the HuggingFace CLI to pull GGUF models directly<br>pip install huggingface_hub<br>huggingface-cli download \<br>  bartowski/Llama-3.2-3B-Instruct-GGUF \<br>  --include &quot;*.Q4_K_M.gguf&quot; \<br>  --local-dir ./models</pre><p>Then point Ollama or a llama.cpp server at the downloaded model file for inference.</p><p><strong>Putting it together — Open-WebUI as your AI Lab frontend:</strong></p><pre># Run Open-WebUI connected to your local Ollama instance<br>docker run -d \<br>  --network=host \<br>  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \<br>  -v open-webui:/app/backend/data \<br>  --name open-webui \<br>  ghcr.io/open-webui/open-webui:main</pre><p>Access it at http://localhost:8080 from within your Kasm workspace — you now have a full ChatGPT-style interface running entirely on your Ampere instance, with no API keys, no usage limits, and no data leaving your OCI environment.</p><h3>What You’ve Built</h3><p>Let’s take stock of what’s running:</p><pre>OCI Ampere ARM64 Instance (VM.Standard.A2.Flex or better)<br>    └── Kasm Workspaces (from OCI Marketplace)<br>            ├── Calliope.AI Registry Workspaces<br>            │       ├── Frontier Model Interfaces (isolated)<br>            │       └── AI Development Environments<br>            ├── Scully&#39;s Registry Workspaces<br>            │       ├── Dev Tools &amp; Utilities<br>            │       └── Custom AI Workspaces<br>            └── Local Inference Stack<br>                    ├── Ollama (ARM64)<br>                    ├── Quantized Models (GGUF / INT4/INT8)<br>                    │       ├── Ampere Model Zoo Optimized Builds<br>                    │       └── Hugging Face GGUF Downloads<br>                    └── Open-WebUI (Chat Interface)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/697/1*x-biuZluteJYrMH5TwQLRQ.png" /></figure><h3>Next Steps &amp; Resources</h3><ul><li>🔗 <strong>Kasm Workspaces Docs:</strong> <a href="https://kasmweb.com/docs">kasmweb.com/docs</a></li><li>🔗 <strong>OCI Ampere Always Free:</strong> <a href="https://oracle.com/cloud/free">oracle.com/cloud/free</a></li><li>🔗 <strong>Ampere Model Zoo:</strong> <a href="https://solutions.amperecomputing.com">solutions.amperecomputing.com</a></li><li>🔗 <strong>Ollama Model Library:</strong> <a href="https://ollama.com/library">ollama.com/library</a></li><li>🔗 <strong>Hugging Face GGUF Models:</strong> <a href="https://huggingface.co/models?library=gguf">huggingface.co/models?library=gguf</a></li><li>🔗 <strong>Calliope.AI Registry:</strong> <a href="https://calliopeai.github.io/calliope-kasm/">Calliope.AI Workspace Registry</a></li><li>🔗 <strong>Scully’s Workspace Registry:</strong> <a href="https://sullyschoice.github.io/kasm-registry/">Scully’s Workspace Registry</a></li><li>🔗 <strong>Kasm Helm Chart (for Kubernetes deployments):</strong> <a href="https://github.com/kasmtech/kasm-helm">github.com/kasmtech/kasm-helm</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=271f39bbcd9c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using F5 as Forward Proxy for Seamless Browser Isolation with Kasm]]></title>
            <link>https://kasm.medium.com/using-f5-as-forward-proxy-for-seamless-browser-isolation-with-kasm-e367b91ef422?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/e367b91ef422</guid>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Thu, 12 Mar 2026 13:27:26 GMT</pubDate>
            <atom:updated>2026-03-12T13:27:26.978Z</atom:updated>
            <content:encoded><![CDATA[<iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FiO_8PZO1AdM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DiO_8PZO1AdM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FiO_8PZO1AdM%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/20eeafeda40ef64a2663193d0c5f0c22/href">https://medium.com/media/20eeafeda40ef64a2663193d0c5f0c22/href</a></iframe><p>Direct internet access from user endpoints is one of the highest-impact attack vectors in enterprise environments. A single malicious link, clicked from a corporate laptop, can be the entry point for ransomware, credential theft, or a supply chain compromise. Traditional defenses (proxies, URL filtering, endpoint AV) reduce the likelihood of that click causing harm, but they don’t eliminate the underlying problem: the browser executes untrusted code directly on the endpoint.</p><p>Browser isolation takes a different architectural approach. Instead of trying to make the local browser safe, you move it off the endpoint entirely.</p><p>This post covers how to combine <strong>F5 BIG-IP as an explicit forward proxy</strong> with <a href="https://docs.kasm.com/docs/develop/guide/browser_isolation"><strong>Kasm Workspaces browser isolation</strong></a> to achieve seamless, transparent redirection of all web traffic through isolated, containerized browser sessions, without modifying how users browse.</p><h3>What is Kasm Workspaces?</h3><p><a href="https://kasm.com">Kasm Workspaces</a> is a platform that streams browser sessions and desktops to users directly from isolated Docker containers running on a server. Instead of executing web content locally, the user’s device receives only a pixel stream of the remote session. 
All browsing activity, including JavaScript execution, file downloads, and media rendering, happens inside the container, completely separated from the endpoint. Sessions are ephemeral by default, meaning each session starts from a clean state and leaves nothing behind when it ends.</p><h3>The Architecture at a Glance</h3><p>Before jumping into configuration, it’s worth understanding what this setup actually does and why each component exists.</p><h3>What Is Browser Isolation?</h3><p>With Kasm Workspaces, users don’t browse the internet from their local device. Instead, they interact with a remote, containerized browser session running on a Kasm server. The local browser renders a pixel stream of that session. All web content (JavaScript, CSS, media, executables) is processed inside the container, never on the endpoint.</p><p>The user experience is essentially unchanged. But the execution environment is completely different.</p><h3>What Is Seamless Browsing?</h3><p>Seamless browsing is the redirection mechanism that makes this transparent to the user. 
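</p><p>Before walking through the flow, it helps to see the core move: every intercepted request gets rewritten to Kasm’s /go endpoint with the destination passed as a URL-encoded parameter. You can construct and inspect that target from a shell; this is a sketch, and kasm.company.local is a placeholder hostname:</p>

```shell
# Build the Kasm /go redirect target for a requested URL.
# kasm.company.local is a placeholder; python3 is used only for URL-encoding.
encode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

dest="https://example.com/page?q=1"
echo "https://kasm.company.local/#/go?kasm_url=$(encode "$dest")"
# -> https://kasm.company.local/#/go?kasm_url=https%3A%2F%2Fexample.com%2Fpage%3Fq%3D1
```

<p>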
The flow looks like this:</p><ol><li>A user enters a URL in their local browser.</li><li>The request is intercepted by the F5 forward proxy.</li><li>F5 applies an iRule that redirects the request to Kasm’s /go endpoint, passing the original URL as a parameter.</li><li>The user authenticates to Kasm (if not already logged in).</li><li>Kasm launches the user’s default workspace (like Chrome or Firefox).</li><li>The isolated browser automatically navigates to the originally requested site.</li></ol><p>From the user’s perspective, they clicked a link and a webpage opened.</p><h3>The Kasm /go Endpoint</h3><p>Kasm provides a special endpoint that acts as the entry point for seamless redirection:</p><pre>https://kasm.company.local/#/go?kasm_url=&lt;destination-url&gt;</pre><p>When a request hits this URL, Kasm automatically launches the configured default workspace for that user or group and opens the destination URL inside the containerized session.</p><p>To configure the default workspace on Kasm, navigate to <strong>Access Management → Groups → Edit Group → Settings</strong>. Click <strong>Add Settings</strong> and add the default_image setting, selecting the workspace you want launched for that group. Once set, any request to the /go endpoint will automatically spin up that workspace and navigate to the specified URL.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F4Zrdsk_Z-gCR2huIPYzig.png" /></figure><h3>F5 BIG-IP Architecture for Forward Proxy</h3><p>The F5 deployment uses several cooperating components. Understanding how they fit together matters before you touch the configuration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*M6gdZKUjY3bHND4B3xoTPg.png" /></figure><p><strong>Key points:</strong></p><ul><li><strong>Explicit Forward Proxy Virtual Server</strong>: The main entry point. Listens on port 3128 on an internal IP. 
Clients configure their browser proxy settings to point here.</li><li><strong>DNS Resolver</strong>: Handles domain name resolution for all proxied traffic. Configured with a forward zone to send all DNS queries to an upstream resolver.</li><li><strong>Internal TCP Tunnel</strong> (tcp-forward profile): An internal tunnel interface that carries proxied traffic from the forward proxy VS to the wildcard virtual servers for further processing.</li><li><strong>Wildcard HTTP Virtual Server</strong>: Handles plain HTTP traffic on port 80 from within the tunnel. Applies the redirect iRule.</li><li><strong>Wildcard SSL Virtual Server</strong>: Handles HTTPS traffic on port 443 from within the tunnel. SSL Forward Proxy is enabled here, allowing F5 to decrypt, inspect, and redirect HTTPS traffic.</li></ul><p>The SSL virtual server is the most important piece for HTTPS. Without SSL Forward Proxy, F5 cannot inspect the contents of encrypted requests, and seamless redirection to Kasm will not work for HTTPS sites.</p><h3>SSL Forward Proxy: How It Works</h3><p>When an HTTPS request arrives at the Wildcard SSL VS, F5:</p><ol><li>Presents a dynamically generated certificate to the client, signed by a local CA you create on BIG-IP. The SSL connection is then established, provided that the client already trusts the BIG-IP’s CA.</li><li>Decrypts the client traffic using the <strong>Client SSL profile</strong>.</li><li>Inspects the plaintext HTTP request and applies the iRule redirect logic.</li><li>Redirects the client to the Kasm /go endpoint.</li></ol><p>Because F5 is generating certificates on the fly and signing them with its own CA, <strong>clients must trust that CA</strong>. This is a hard requirement. Any client that does not trust the F5 CA will receive SSL certificate errors for every HTTPS site.</p><p>In Active Directory environments, the CA certificate can be distributed via Group Policy. 
In smaller deployments, it can be installed manually.</p><h3>Step-by-Step Configuration</h3><h3>Step 1: Create the CA Certificate for SSL Forward Proxy</h3><p>Navigate to: <strong>System → Certificate Management → Traffic Certificate Management → SSL Certificate List</strong></p><p>Click <strong>Create</strong> and generate a new self-signed certificate. This certificate will be used by F5 to sign dynamically generated certificates for intercepted HTTPS traffic.</p><p>After creation, <strong>download the certificate</strong>. You’ll need to install it as a trusted root CA on all client machines.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hLVRO3h0NV5-0c1TaBJ7Pg.png" /></figure><h3>Step 2: Create the DNS Resolver</h3><p>Navigate to: <strong>Network → DNS Resolvers → DNS Resolver List</strong></p><p>Click <strong>Create</strong>, give the resolver a name, and leave defaults.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qQqNz980E2ESrOBM3PHGuw.png" /></figure><p>After creating the resolver, click into it and go to the <strong>Forward Zones</strong> tab. Click <strong>Add</strong>:</p><ul><li><strong>Name</strong>: . (a single dot; this forwards all DNS queries)</li><li><strong>Nameserver Address</strong>: Your preferred DNS server (e.g., 8.8.8.8)</li><li><strong>Service Port</strong>: 53</li></ul><p>Click <strong>Finished</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1017/1*dTTLdMAyh-qZ_-Hg3lbOtg.png" /></figure><h3>Step 3: Create the TCP Forward Tunnel</h3><p>Navigate to: <strong>Network → Tunnels → Tunnel List</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Name</strong>: tcp_forward_tunnel</li><li><strong>Profile</strong>: tcp-forward</li></ul><p>Leave all other settings at defaults and click <strong>Finished</strong>. 
This creates the internal tunnel interface that carries traffic from the explicit proxy VS to the wildcard virtual servers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/978/1*KCXoOkKKpxJKvwa59a3XVw.png" /></figure><h3>Step 4: Create the HTTP Explicit Proxy Profile</h3><p>Navigate to: <strong>Local Traffic → Profiles → Services</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Name</strong>: Choose a descriptive name (e.g., http_explicit_proxy)</li><li><strong>Parent Profile</strong>: http-explicit</li><li><strong>Proxy Mode</strong>: Explicit</li></ul><p>Enable the <strong>Custom</strong> checkbox and scroll to the <strong>Explicit Proxy</strong> section:</p><ul><li><strong>DNS Resolver</strong>: Select the resolver you just created</li><li><strong>Tunnel Name</strong>: tcp_forward_tunnel</li></ul><p>Click <strong>Finished</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fvzro6okP-lsnaIOgtDKcQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zxXsSp-R76o3L7NlNDxeRw.png" /></figure><h3>Step 5: Create the Client SSL Forward Proxy Profile</h3><p>Navigate to: <strong>Local Traffic → Profiles → SSL → Client</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Name</strong>: Choose a descriptive name</li><li><strong>Parent Profile</strong>: clientssl</li></ul><p>Scroll to <strong>SSL Forward Proxy</strong> and enable the <strong>Custom</strong> checkbox:</p><ul><li><strong>SSL Forward Proxy</strong>: Enabled</li><li><strong>CA Certificate</strong>: Select the CA certificate created in Step 1</li><li><strong>CA Key</strong>: Select the corresponding private key</li><li><strong>Certificate Lifespan</strong>: 30 (days, or adjust per your policy)</li><li><strong>SSL Forward Proxy Bypass</strong>: Enabled</li><li><strong>Bypass Default Action</strong>: Intercept</li></ul><p>Click <strong>Finished</strong>.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*od0uHreOivseIjiJnVW-3A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y-g3BVlu0a8neMsixuYrgg.png" /></figure><h3>Step 6: Create the Server SSL Forward Proxy Profile</h3><p>Navigate to: <strong>Local Traffic → Profiles → SSL → Server</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Name</strong>: Choose a descriptive name</li><li><strong>Parent Profile</strong>: serverssl</li></ul><p>Enable the <strong>Custom</strong> checkbox under <strong>Configuration</strong>:</p><ul><li><strong>Certificate</strong>: Select the CA certificate</li><li><strong>Key</strong>: Select the corresponding key</li><li><strong>SSL Forward Proxy</strong>: Enabled</li><li><strong>SSL Forward Proxy Bypass</strong>: Enabled</li></ul><p>Click <strong>Finished</strong>.</p><blockquote><em>Note: The Server SSL profile is required by BIG-IP when SSL Forward Proxy is in use, even though it doesn’t perform significant processing in this flow. It must be attached to the SSL virtual server or the configuration will not be valid.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Xx8pOJueQ5BPBMynO4BNLw.png" /></figure><h3>Step 7: Create the Kasm Redirect iRule</h3><p>Navigate to: <strong>Local Traffic → iRules → iRule List</strong></p><p>Click <strong>Create</strong> and name it Kasm_Redirect_iRule. Paste the following iRule:</p><pre>when HTTP_REQUEST {<br> # Extract the original URL<br> set scheme &quot;https&quot;<br> if { [TCP::local_port] == 80 } {<br> set scheme &quot;http&quot;<br> }<br> <br> set original_url &quot;${scheme}://[HTTP::host][HTTP::uri]&quot;<br> <br> # Log for debugging (optional)<br> log local0. 
&quot;Redirecting: $original_url to Kasm&quot;<br> <br> # Redirect to Kasm workspace<br> HTTP::redirect &quot;https://kasm.company.local/#/go?kasm_url=[URI::encode $original_url]&quot;<br>}</pre><p><strong>What this does:</strong></p><ul><li>Triggers on every inbound HTTP request.</li><li>Determines the original scheme based on the local port (80 = HTTP, anything else = HTTPS).</li><li>Reconstructs the full original URL from the host header and URI.</li><li>Issues an HTTP redirect to the Kasm /go endpoint with the original URL URI-encoded as a query parameter. The URI::encode call matters: without it, a destination URL that contains its own query string would truncate the kasm_url value.</li></ul><p>Replace kasm.company.local with your actual Kasm FQDN.</p><p>This is a minimal implementation. In production, you’ll likely want to add bypass logic for internal domains, trusted SaaS applications, or other traffic that should not be redirected to Kasm.</p><p>Click <strong>Finished</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oDVKP57PzG0GLBtQ5wcmZQ.png" /></figure><h3>Step 8: Create the Explicit Forward Proxy Virtual Server</h3><p>Navigate to: <strong>Local Traffic → Virtual Servers → Virtual Server List</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Type</strong>: Standard</li><li><strong>Destination Address/Mask</strong>: Your internal subnet (e.g., 10.0.0.0/24)</li><li><strong>Service Port</strong>: 3128</li><li><strong>Protocol</strong>: TCP</li><li><strong>Protocol Profile (Client)</strong>: tcp</li><li><strong>Protocol Profile (Server)</strong>: tcp</li><li><strong>HTTP Profile (Client)</strong>: Your HTTP Explicit Proxy profile</li><li><strong>VLAN and Tunnel Traffic</strong>: Enabled on: tcp_forward_tunnel</li><li><strong>Source Address Translation</strong>: Auto Map</li><li><strong>iRules</strong>: Kasm_Redirect_iRule</li></ul><p>Click <strong>Finished</strong>.</p><blockquote><em>Using an internal subnet range (not a public IP) for the destination address is strongly recommended. 
Exposing an explicit forward proxy on a public IP attracts automated abuse traffic.</em></blockquote><h3>Step 9: Create the Wildcard SSL Virtual Server</h3><p>Navigate to: <strong>Local Traffic → Virtual Servers → Virtual Server List</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Type</strong>: Standard</li><li><strong>Destination Address/Mask</strong>: 0.0.0.0/0</li><li><strong>Service Port</strong>: 443</li><li><strong>Protocol</strong>: TCP</li><li><strong>Protocol Profile (Client)</strong>: tcp</li><li><strong>Configuration</strong>: Advanced</li><li><strong>HTTP Profile (Client)</strong>: http</li><li><strong>SSL Profile (Client)</strong>: Your Client SSL profile</li><li><strong>SSL Profile (Server)</strong>: Your Server SSL profile</li><li><strong>VLAN and Tunnel Traffic</strong>: Enabled on: tcp_forward_tunnel</li><li><strong>Source Address Translation</strong>: Auto Map</li><li><strong>Address Translation</strong>: Disabled</li><li><strong>iRules</strong>: Kasm_Redirect_iRule</li></ul><p>Click <strong>Finished</strong>.</p><blockquote><em>The </em><em>0.0.0.0/0 destination combined with tunnel-only VLAN binding ensures this virtual server matches all HTTPS destinations forwarded through the internal tunnel, without being reachable from outside.</em></blockquote><h3>Step 10: Create the Wildcard HTTP Virtual Server</h3><p>Navigate to: <strong>Local Traffic → Virtual Servers → Virtual Server List</strong></p><p>Click <strong>Create</strong>:</p><ul><li><strong>Type</strong>: Standard</li><li><strong>Destination Address/Mask</strong>: 0.0.0.0/0</li><li><strong>Service Port</strong>: 80</li><li><strong>Protocol</strong>: TCP</li><li><strong>Protocol Profile (Client)</strong>: tcp</li><li><strong>Configuration</strong>: Advanced</li><li><strong>HTTP Profile (Client)</strong>: http</li><li><strong>VLAN and Tunnel Traffic</strong>: Enabled on: tcp_forward_tunnel</li><li><strong>Source Address Translation</strong>: Auto Map</li><li><strong>Address 
Translation</strong>: Disabled</li><li><strong>iRules</strong>: Kasm_Redirect_iRule</li></ul><p>Click <strong>Finished</strong>.</p><p>F5 configuration is now complete.</p><h3>Client Configuration</h3><h3>Install the CA Certificate</h3><p>The F5 CA certificate must be trusted by every client that will use the forward proxy for HTTPS traffic.</p><p><strong>Manual installation (Windows):</strong></p><ol><li>Press Win + R, type certmgr.msc, and press Enter.</li><li>Navigate to <strong>Trusted Root Certification Authorities → Certificates</strong>.</li><li>Right-click and select <strong>Import</strong>.</li><li>Browse to the CA certificate downloaded from F5 and complete the wizard.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/532/1*LZoCMemMFQrrefI1V4z-nw.png" /></figure><p><strong>For domain-joined environments</strong>, you can distribute the certificate via Group Policy (GPO). This ensures all domain machines automatically trust the F5 CA without manual intervention.</p><h3>Configure Windows/Browser Proxy Settings</h3><p>Navigate to <strong>Settings → Network &amp; Internet → Proxy → Manual proxy setup</strong>:</p><ul><li><strong>Proxy IP</strong>: The internal IP address of your F5 BIG-IP (check <strong>Network → Self IPs</strong> in the F5 management interface)</li><li><strong>Port</strong>: 3128</li><li><strong>Exceptions</strong>: Add your Kasm FQDN and any other addresses that should bypass the proxy (e.g., internal management interfaces)</li></ul><p>Click <strong>Save</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iST0RLohu2pVOiw97QCygA.png" /></figure><p>All outbound web traffic from the browser is now routed through the F5 forward proxy.</p><p><strong>For enterprise deployments</strong>, manually configuring proxy settings on each machine is not practical. 
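</p><p>Where a scripted rollout is preferred over clicking through Settings, the same per-user proxy configuration can be written directly to the standard WinINET registry values (the proxy IP below is an example; substitute your F5 self IP):</p><pre>reg add &quot;HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings&quot; /v ProxyEnable /t REG_DWORD /d 1 /f<br>reg add &quot;HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings&quot; /v ProxyServer /t REG_SZ /d &quot;10.0.0.5:3128&quot; /f<br>reg add &quot;HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings&quot; /v ProxyOverride /t REG_SZ /d &quot;kasm.company.local;&lt;local&gt;&quot; /f</pre><p>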
A common approach is to use a PAC (Proxy Auto-Config) file hosted on an internal web server and distribute it to clients automatically via DHCP option 252. When a client receives this DHCP option, the browser picks up the PAC file URL and applies the proxy configuration without any manual steps. This also gives you a centralized place to define bypass rules for internal domains, rather than managing exceptions per machine.</p><h3>Verifying the Setup</h3><p>With the proxy configured and the CA certificate trusted, open any browser and navigate to an external URL. The sequence of events:</p><ol><li>The browser sends the request to F5 on port 3128.</li><li>F5 decrypts the traffic (for HTTPS), applies the iRule, and issues a redirect to https://kasm.company.local/#/go?kasm_url=&lt;original-url&gt;.</li><li>The browser follows the redirect to Kasm.</li><li>After authentication, Kasm launches the default workspace for the user’s group.</li><li>The isolated browser session opens the originally requested URL.</li></ol><p>From the user’s perspective: they navigated to a URL and it opened. The redirect, authentication, and container launch all happen in the background.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dMUipJvCaVJjrEWAmwA0Cg.png" /></figure><h3>Security Considerations</h3><p><strong>Proxy exposure</strong>: Keep the forward proxy virtual server bound to an internal IP. Avoid exposing port 3128 on a public IP, as it will be abused.</p><p><strong>CA certificate distribution</strong>: The F5 CA must be trusted before SSL traffic will work correctly. Plan your deployment method (GPO vs. manual) based on your environment before rollout.</p><p><strong>iRule bypass logic</strong>: The example iRule redirects everything. 
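</p><p>A bypass is typically an early return at the top of the HTTP_REQUEST event, before the redirect logic runs. As a minimal sketch (the hostnames are placeholders; substitute your actual Kasm FQDN and internal domain suffix):</p><pre>when HTTP_REQUEST {<br> # Never redirect requests destined for Kasm itself (prevents redirect loops)<br> if { [string tolower [HTTP::host]] equals &quot;kasm.company.local&quot; } {<br> return<br> }<br> # Let internal domains pass through without isolation<br> if { [string tolower [HTTP::host]] ends_with &quot;.company.local&quot; } {<br> return<br> }<br> # ... redirect logic from the iRule above continues here ...<br>}</pre><p>Anything that falls through these checks proceeds to the redirect. 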
In practice, you should add bypass conditions for:</p><ul><li>Internal domains and SaaS applications that don’t require isolation</li><li>Kasm itself (to avoid redirect loops)</li><li>Any URLs that cannot function correctly inside an isolated container</li></ul><p><strong>Session stickiness</strong>: If users need persistent sessions in Kasm (e.g., authenticated sessions on specific sites), configure Kasm workspace persistence accordingly. Ephemeral containers are more secure but require users to re-authenticate to sites on each session.</p><h3>Conclusion</h3><p>Integrating F5 BIG-IP’s explicit forward proxy with Kasm Workspaces browser isolation is a defensible, operationally practical approach to eliminating direct endpoint internet access. The architecture is straightforward: intercept all web traffic at F5, redirect it to Kasm’s /go endpoint, and let Kasm handle execution in isolated containers.</p><p>The configuration involves several cooperating components (DNS resolver, TCP tunnel, explicit proxy profile, SSL Forward Proxy profiles, wildcard virtual servers, and a redirect iRule), but each serves a specific role and the overall design is coherent once the data flow is understood.</p><p>For organizations looking to move beyond perimeter filtering toward genuine endpoint isolation, this stack provides a solid foundation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e367b91ef422" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Your Browser Is the Attack Surface: Here’s How to Rethink It]]></title>
            <link>https://kasm.medium.com/your-browser-is-the-attack-surface-heres-how-to-rethink-it-eea7e8d8fd05?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/eea7e8d8fd05</guid>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 21:05:38 GMT</pubDate>
            <atom:updated>2026-03-05T21:05:38.843Z</atom:updated>
            <content:encoded><![CDATA[<h3>CVE-2026–2441 and the Case for Browser Isolation with Kasm Workspaces</h3><p>In early 2026, Google issued an emergency patch for <a href="https://nvd.nist.gov/vuln/detail/CVE-2026-2441">CVE-2026–2441</a>, a zero-day vulnerability buried deep inside Chrome’s CSS rendering engine. The exploit required nothing unusual from the victim. A user visited a page. Chrome rendered it. The CSS engine did the rest, triggering memory corruption during what looked like ordinary browsing.</p><p>That framing matters. This wasn’t an attack that relied on a user making a mistake. It operated at a layer most enterprise defenses don’t reach.</p><p>But before you reach for the next layer of endpoint tooling, consider whether the architecture itself (not just the tooling) needs to change.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*gy299d8P38ZwZ6dH" /></figure><h3>When the Runtime Becomes the Threat</h3><p>Modern browsers are not simple applications. They are effectively operating systems, parsing and executing HTML, CSS, JavaScript, WebAssembly, fonts, media codecs, and GPU-accelerated rendering pipelines in real time, against content pulled from untrusted corners of the internet.</p><p>CVE-2026–2441 exploited this reality. The vulnerability lived inside the browser’s native rendering engine, and a valid-looking page from an otherwise legitimate compromised website was enough to trigger it. The traffic looked normal. The domain may have been trusted. The browser just rendered the page and the exploit ran.</p><p>That’s the nature of engine-level zero-days. The browser’s own code becomes the weapon.</p><h3>Why Your Existing Stack Couldn’t See It Coming</h3><p>Most enterprise browser security stacks are built around a shared assumption: that the browser runtime itself remains trustworthy.</p><ul><li><strong>Secure Web Gateways</strong> evaluate domains, IPs, and traffic signatures. 
CVE-2026–2441 was delivered as valid HTML and CSS, with nothing anomalous to catch.</li><li><strong>Browser extensions</strong> operate within the browser’s extension APIs and have no visibility into memory allocation or rendering engine internals, by design.</li><li><strong>EDR agents</strong> can detect post-exploitation behavior, but exploitation here happens inside a trusted process executing complex code all day long.</li></ul><p>These are good tools solving real problems. They just operate within the same failure domain as the browser itself. If the browser is compromised, they’re all riding that same sinking ship.</p><p>The deeper issue isn’t that these tools failed. It’s that they were never designed to handle a compromise <em>of the platform they run on</em>.</p><h3>The Architectural Answer: Isolation</h3><p>If you can’t reliably prevent engine-level vulnerabilities (and history tells us you can’t, given the patch cadence of every major browser vendor), the next best thing is to contain them.</p><p>This is the premise behind browser isolation architectures: <strong>assume the browser will be compromised, and design so that compromise doesn’t matter much.</strong></p><p>In an isolated model:</p><ul><li>The browser executes in a segmented environment away from the user’s physical endpoint.</li><li>Network access from that environment is explicitly controlled.</li><li>Sensitive internal systems aren’t directly reachable from the browsing session.</li><li>Sessions are ephemeral, reset to a known-good state after each use.</li></ul><p>Arbitrary code execution in the browser becomes a contained event rather than a beachhead into your environment.</p><p>This is where <strong>Kasm Workspaces</strong> enters the picture.</p><h3>How Kasm Workspaces Reframes the Browser Security Problem</h3><p><a href="https://kasm.com">Kasm Workspaces</a> is a container-native streaming platform that delivers browser sessions (and full desktops) from isolated containers directly to 
users’ screens. The user sees and interacts with a browser. But that browser runs inside a Docker container on a server, not on the user’s device.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*gnKI8eUChIGVeKJGTSsesg.gif" /><figcaption>Chrome Session powered by Kasm delivered to your browser</figcaption></figure><p>The user’s endpoint receives only a pixel stream. The entire browser stack runs server-side: the DOM, the JavaScript engine, the CSS renderer. None of it touches the endpoint.</p><p>Here’s why that matters for CVE-2026–2441 specifically:</p><h4>The Exploit Stays in the Container</h4><p>With Kasm, the Chrome instance processing that malicious CSS runs inside an ephemeral container on a Kasm server, not on your employee’s laptop or corporate desktop. Even if CVE-2026–2441 achieves arbitrary code execution, the attacker is running code inside a container with no access to:</p><ul><li>The user’s local file system</li><li>Cached credentials or session tokens on the endpoint</li><li>Corporate VPN connections</li><li>Internal network segments not explicitly routed to the container</li></ul><p>The blast radius is fundamentally smaller because the execution environment is fundamentally separated.</p><h4>Sessions Are Ephemeral by Design</h4><p>Kasm’s container model is inherently disposable. When a browsing session ends, the container is destroyed. Any foothold the attacker established, whether modified files, planted malware, or injected processes, evaporates with it. There is no persistent state to harvest later.</p><p>This directly addresses one of the core CISO questions raised by CVE-2026–2441: <em>Can browsing environments be quickly reset or destroyed to eliminate persistence?</em> With Kasm, the answer is yes, by default, every session.</p><h4>Network Segmentation Is Architectural, Not Bolted On</h4><p>Kasm allows administrators to define granular network policies for each workspace. 
A general internet browsing container can be firewalled away from internal resources entirely. An employee browsing the open web for research doesn’t need a path to your ERP system or Active Directory, and with Kasm, they won’t have one.</p><p>This aligns directly with the least-privilege and segmentation principles that zero-trust frameworks demand but that traditional browser deployments consistently fail to deliver. The browser has always had too much access to too many things. Kasm makes that access explicit and controllable.</p><h4>Flexible Deployment for Enterprise Needs</h4><p>Kasm can be deployed on-premises, in your own cloud environment, or as a managed service, which matters for organizations with data residency requirements or compliance obligations. It supports a range of workspace types:</p><ul><li><strong>Isolated browser sessions</strong> for general web access</li><li><strong>Full containerized desktops</strong> for sensitive workflows</li><li><strong>Application streaming</strong> for specific apps that need web access without exposing the endpoint</li></ul><p>Want to see it in action? This demo link opens a live Kasm Chrome session right in your browser, no account or install needed, try it out: <a href="https://app.kasm.com/#/cast/chrome-casting">https://app.kasm.com/#/cast/chrome-casting</a></p><h3>Evaluating Your Posture</h3><p>CVE-2026–2441 is not an outlier. Browser engines receive frequent high-severity patches. Active exploitation in the wild has become routine. 
The time between vulnerability disclosure and weaponization continues to compress.</p><p>Security leaders should honestly assess their browser security posture against three questions:</p><ol><li><strong>Assumption of Compromise:</strong> Does your architecture account for the possibility that the browser is already exploited?</li><li><strong>Blast Radius Control:</strong> If exploitation occurs today, what systems and credentials are immediately reachable from the browser process?</li><li><strong>Session Containment and Reset:</strong> Can you destroy a potentially compromised browsing environment and restore a clean one in minutes?</li></ol><p>If the honest answers are “no,” “too many,” and “not really,” the gap isn’t in your tooling. It’s in your architecture.</p><h3>Conclusion</h3><p>CVE-2026–2441 is a useful lens for evaluating a truth the security industry has long understood but often underweighted: browsers are exposed execution environments processing untrusted content, and they will have exploitable vulnerabilities.</p><p>The organizations that will weather the next engine-level zero-day most effectively are not necessarily those with the most tools. They’re the ones whose architecture assumes browsers can be compromised and designs accordingly: limiting what a compromised browser can reach, ensuring sessions leave no lasting foothold, and separating browsing execution from the endpoint entirely.</p><p>Kasm Workspaces is one of the most operationally practical ways to get there. It doesn’t ask you to replace your existing security stack. It asks you to rethink where the browser runs and what it can touch, and provides the infrastructure to do that at enterprise scale.</p><p>Kasm’s community edition is free to self-host. 
If you want to stand up your own instance, the single-server install gets you a working environment in well under 20 minutes: <a href="https://docs.kasm.com/docs/install/single_server_install">https://docs.kasm.com/docs/install/single_server_install</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=eea7e8d8fd05" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Installing Kasm Workspaces on Rancher Using the Official Helm Chart]]></title>
            <link>https://kasm.medium.com/installing-kasm-workspaces-on-rancher-using-the-official-helm-chart-a4c4ef918e35?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/a4c4ef918e35</guid>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Tue, 20 Jan 2026 15:48:42 GMT</pubDate>
            <atom:updated>2026-01-20T15:48:42.636Z</atom:updated>
<content:encoded><![CDATA[<p>Running Kasm Workspaces on Kubernetes makes it easier to integrate secure, browser-based workspaces into an existing cloud-native environment. In this post, we walk through how to deploy the Kasm control plane on a Rancher-managed Kubernetes cluster using the <a href="https://github.com/kasmtech/kasm-helm">official Helm chart</a>, and then explore multiple ways to deliver sessions, including connecting existing Windows VMs for traditional VDI-style workloads as well as deploying Kasm Agents for container-based sessions.</p><p>When browsing Rancher’s Apps and Charts catalog, you will see two Kasm-related charts available: Kasm and Kasm Demo. For any evaluation, testing, or real-world use, the Kasm chart is the recommended option. The Kasm Demo chart is an older variant and is not recommended.</p><p>In this walkthrough, we will use the standard Kasm Helm chart.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_r68tBfLKXWoFD5cjeX5kw.png" /></figure><h3>Prerequisites</h3><p>The Kasm Helm chart expects an SSL certificate to be provided for ingress. If you already have a certificate signed by a trusted certificate authority, you can use that. Otherwise, a self-signed certificate can be generated using OpenSSL on any VM or local machine.</p><p>Use the following OpenSSL command to generate your own self-signed certificate and private key. Refer to the SSL cert instructions in our <a href="https://github.com/kasmtech/kasm-helm/blob/release/1.18.1/docs/upload-certs-to-k8s.md">Helm Chart GitHub repo</a> for more information.</p><pre>openssl req -x509 -nodes -days 365 -newkey rsa:2048 \<br>  -keyout tls.key -out tls.crt \<br>  -subj &quot;/CN=kasm.example.com/O=Kasm Self-Signed&quot;</pre><p>Once the certificate and private key are generated, the next step is to create a dedicated <strong>Kubernetes namespace</strong> for Kasm. This namespace is required before creating secrets. 
Open a shell on Rancher and execute the following command to create a <strong>kasm</strong> namespace.</p><pre>kubectl create ns kasm</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/597/1*0p-aWQ289wC6_JDOz-Ch2Q.png" /></figure><p>Once the namespace is created, go to “Storage” → “Secrets” → “Create” and select “TLS Certificate”.</p><p>Here, paste the generated Private Key and Certificate content and click “Create”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UiKKZ0fiQLtgQV2BZoQutw.png" /></figure><h3>Installing the Kasm Helm Chart</h3><p>With the namespace and TLS secret in place, the Kasm chart can now be installed from Rancher.</p><p>Go to the Kasm Chart on Rancher, and click “Install”.<br>During installation, a few required fields must be configured:</p><ul><li>The public address where the Kasm control plane will be accessible</li><li>The name of the TLS certificate secret</li><li>Ingress settings, typically using a ClusterIP service when ingress is enabled</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T1wukPnsD0uU9Z8tb_AjRQ.png" /></figure><p>These values are the minimum required to get the chart up and running. For more advanced configuration and additional options, go to the <strong>“Edit YAML”</strong> tab. All available settings are documented in the <a href="https://github.com/kasmtech/kasm-helm/blob/release/1.18.1/charts/kasm/values.yaml">values.yaml file on our GitHub repo</a>, along with descriptions and example configurations.</p><p>Finally, click “Install” and the Kasm pods will initialize. It is normal for some pods to remain in an <strong>init</strong> state for a few minutes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/957/1*Lzdj6pOQ5wWIUtvZEk1lbw.png" /></figure><h3>Accessing the Kasm Control Plane</h3><p>After all pods are running, the Kasm web interface becomes accessible at the configured hostname. 
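</p><p>You can confirm the rollout from the Rancher shell before browsing to the interface (using the kasm namespace created earlier):</p><pre>kubectl get pods -n kasm<br># each pod should eventually report a Running status</pre><p>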
If a self-signed certificate was used, the browser will display a security warning. This does not occur when using a properly signed certificate.</p><p>Administrative credentials and other secret values can be retrieved directly from the Rancher shell. The Kasm Helm Chart’s release notes on Rancher provide commands to extract the admin password and other credentials.</p><p>For example, you can run this command to extract your Kasm Admin Password:</p><pre>kubectl get secret --namespace kasm kasm-secrets -o jsonpath=&quot;{.data.admin-password}&quot; | base64 -d</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1002/1*v-48DzDNHmp0WgZpqTClxw.png" /></figure><p>You can then log in to your Kasm deployment with the default Admin username <strong>admin@kasm.local</strong> and the retrieved password.</p><p>Once the Kasm control plane is online, sessions cannot be launched until compute resources are connected. Kasm supports multiple resource types, allowing administrators to choose the delivery model that best fits their environment.</p><p>Kasm can broker sessions to existing infrastructure such as machines running SSH, VNC, RDP, or KasmVNC. This makes it easy to integrate Kasm with environments that already have virtual desktops or remote-access servers deployed.</p><p>In addition to existing systems, Kasm can also deploy container-based sessions by connecting Linux systems running the Kasm Agent. 
For Windows-based workloads, Windows VMs or servers can be added to Kasm and accessed through the browser using RDP, without requiring a separate client.</p><p>Both Kasm Agents and Windows servers can later be configured for autoscaling, allowing resources to be dynamically provisioned and removed based on user demand.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7xgkOjnWrgiQNojIQRElGA.png" /></figure><h3>Connecting an Existing Windows VM to Kasm</h3><p>Kasm can deliver Windows desktops and applications by connecting to existing Windows virtual machines or servers. This is a common approach for organizations looking to provide traditional VDI-style workloads through a browser-based interface.</p><p>To connect a Windows system, first ensure the Windows VM is running and reachable over RDP. On the Windows system, install the <strong>Kasm Desktop Service</strong> using the installer provided in the <a href="https://docs.kasm.com/docs/1.18.1/guide/windows/windows_service">Kasm documentation</a>.</p><p>The installer requires three values:</p><ul><li>The hostname or IP address of the Kasm API</li><li>The Kasm API port (443 by default)</li><li>A registration token generated by the Kasm control plane</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/797/1*7rFVJc9m2dkJ0Pk9vhduPA.png" /></figure><p>The registration token is generated from the Kasm admin interface by navigating to <strong>Infrastructure → Servers</strong> and selecting <strong>Add Server</strong>. 
After defining basic server details, setting the connection type to RDP, and enabling “<em>Kasm Desktop Service Installed”</em>, Kasm generates a registration token.</p><p>Once the token is entered into the installer, the Windows system securely registers with Kasm and appears in the admin interface as a managed server.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/404/1*prfdj_GGM7163vB_WiPcGw.png" /></figure><p>A server-based workspace can then be created and assigned to this system, allowing users to launch Windows sessions directly from the browser using Kasm’s Web Native client or a traditional RDP client.</p><h3>Installing an Agent</h3><p>For container-based sessions, at least one Kasm Agent must be connected to the deployment. Installing Kasm Agents inside Kubernetes is not currently supported, so an external Linux VM or bare-metal system is required.</p><p>The agent installation command is available in the <a href="https://docs.kasm.com/docs/install/multi_server_install#install-agent-server-roles">Kasm documentation</a> under the multi-server installation section. 
The command requires three key values:</p><ul><li>The agent hostname or IP address</li><li>The manager hostname where the Kasm control plane is running</li><li>A manager token retrieved from the Kasm deployment</li></ul><pre># Kasm Agent installation commands<br>cd /tmp<br>curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.18.1.tar.gz<br>tar -xf kasm_release_1.18.1.tar.gz<br>sudo bash kasm_release/install.sh --role agent --public-hostname [AGENT_HOSTNAME] --manager-hostname [MANAGER_HOSTNAME] --manager-token [MANAGER_TOKEN]</pre><p>To retrieve the Manager Token, once again browse to the Release Notes of the Kasm chart on Rancher and find the appropriate command.</p><pre>kubectl get secret --namespace kasm kasm-secrets -o jsonpath=&quot;{.data.manager-token}&quot; | base64 -d</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1002/1*9WafDTLTF-XCQGBD16KaaA.png" /></figure><p>Then, substitute your retrieved values into the agent installation command and execute it.</p><p>Once the command is executed on the VM and the license is accepted, the agent installs and automatically registers with the Kasm control plane.</p><p>After installation completes, the agent appears in the Kasm admin interface and can be enabled.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1001/1*KC9HQO0rX-iQS1gj_LmL9A.png" /></figure><p>With the agent enabled, workspaces can now be installed from the registry. Installing a workspace image may take a few minutes on first use while the image is downloaded to the agent.</p><p>Once the image is ready, a session can be launched. This confirms that the control plane, agent, and workspace infrastructure are all functioning correctly. 
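As a convenience, the value-substitution step described above can be sketched as a single snippet. All values here are placeholders (the real token comes from the kasm-secrets command shown earlier), and the final command is echoed for review rather than executed:

```shell
# Placeholders -- replace with your agent/manager hostnames and the
# manager token retrieved from the kasm-secrets secret.
AGENT_HOSTNAME="agent1.example.com"
MANAGER_HOSTNAME="kasm.example.com"
MANAGER_TOKEN="PASTE_MANAGER_TOKEN_HERE"

# Print the fully substituted installer invocation for review;
# remove the `echo` to actually run it on the agent VM.
echo "sudo bash kasm_release/install.sh --role agent \
  --public-hostname $AGENT_HOSTNAME \
  --manager-hostname $MANAGER_HOSTNAME \
  --manager-token $MANAGER_TOKEN"
```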
From here, users can launch desktop environments, browsers, and other applications supported by Kasm.</p><h3>Autoscaling in Kasm</h3><p>In this walkthrough, resources were connected manually by registering a Windows VM and installing a Kasm Agent on a Linux system. In production environments, Kasm does not require these resources to be managed by hand.</p><p>Kasm supports <a href="https://docs.kasm.com/docs/1.18.1/how-to/autoscale/">autoscaling for both Windows servers and Kasm Agents</a>. Autoscaling allows Kasm to automatically provision and deprovision resources based on real-time user demand or scheduling policies. This functionality is supported across major cloud providers as well as on-prem hypervisors, including <a href="https://docs.kasm.com/docs/1.18.1/how-to/autoscale/autoscale_providers/harvester">Harvester</a>.</p><p>With autoscaling enabled, Kasm can dynamically create Windows VMs or agent hosts, automatically register them with the control plane, and remove them when demand decreases. 
Detailed autoscaling configuration is covered in provider-specific documentation and <a href="https://www.youtube.com/playlist?list=PLGVRoK_5yweRIyFJjejDW1kzjlDb7C5ba">videos</a>.</p><h3>Links</h3><ul><li>Kasm Helm Chart GitHub Repo: <a href="https://github.com/kasmtech/kasm-helm">https://github.com/kasmtech/kasm-helm</a></li><li>Kasm Kubernetes Documentation: <a href="https://docs.kasm.com/docs/install/kubernetes">https://docs.kasm.com/docs/install/kubernetes</a></li><li>Rancher Partner Chart for Kasm: <a href="https://github.com/rancher/partner-charts/tree/main-source/charts/kasm">https://github.com/rancher/partner-charts/tree/main-source/charts/kasm</a></li><li>Kasm Autoscaling YouTube playlist: <a href="https://www.youtube.com/playlist?list=PLGVRoK_5yweRIyFJjejDW1kzjlDb7C5ba">https://www.youtube.com/playlist?list=PLGVRoK_5yweRIyFJjejDW1kzjlDb7C5ba</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a4c4ef918e35" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Pulumi Automation for Kasm on Kubernetes in GCP and AWS]]></title>
            <link>https://kasm.medium.com/pulumi-automation-for-kasm-on-kubernetes-in-gcp-and-aws-75c70116ae7e?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/75c70116ae7e</guid>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Mon, 04 Aug 2025 15:31:01 GMT</pubDate>
            <atom:updated>2025-08-04T15:31:01.468Z</atom:updated>
            <content:encoded><![CDATA[<p>We are pleased to announce the release of Kasm <a href="https://github.com/kasmtech/kasm-pulumi">Pulumi automation scripts</a> for large-scale automated deployments into GCP and AWS environments.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fg7FI11wkBA6u8G-E-L2KA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LT6UU31taH0fPdjlaGFKNw.png" /></figure><p><strong>What is Kasm?</strong></p><p>Kasm allows you to stream Windows desktops and containerized Linux desktops to any modern web browser. It can be used for everything from Zero-Trust Browser Isolation to VDI.</p><p>Kasm consists of several containerized services that govern authentication, session management, and collaboration. In the simplest case, Kasm can be deployed on a single server and can power dozens of concurrent sessions. Tens of thousands of users run Kasm like this every day in their homelabs and in the cloud.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mTWbaAsxbp2pUaxl-Xazeg.png" /></figure><p><strong>Scaling challenges</strong></p><p>Kasm customers also deploy in environments that must support thousands of concurrent sessions, sometimes with users distributed across the globe. To meet such scenarios, Kasm can be deployed in a multi-server configuration (or under Kubernetes with sessions powered by VMs). It can be spread across different data centres or cloud regions to support a globally distributed user base.</p><p>Such large-scale deployments will always require a degree of planning. Administrators must understand access patterns and identify where their users will be accessing workspaces from around the world. Kasm also has built-in autoscaling for on-premise virtualisation platforms as well as for major cloud providers. 
Autoscaling can dynamically create VMs to deal with elastic usage patterns cost-effectively.</p><p>These deployments typically fall into two categories:</p><ol><li>All users in a single geography: Kasm can be deployed with autoscaling in a single data centre or cloud region.</li><li>Users across geographies: Kasm can be deployed in one primary region with satellite installations in other data centres or cloud regions.</li></ol><figure><img alt="AWS multi-zone / multi-region architecture" src="https://cdn-images-1.medium.com/max/1024/1*nq10z-Pt2BXpiYmYxqXwUQ.png" /><figcaption>Multi-region AWS architecture</figcaption></figure><p><strong>Kubernetes</strong></p><p>The Kasm control plane is deployed into a managed Kubernetes environment with the Postgres database configured as an RDS database (for AWS) or CloudSQL (for GCP).</p><p>Kasm workspaces run in Docker containers on EC2 instances or Cloud VMs that are configured for the Kasm Agent role. The reason these don’t run inside Kubernetes today is that Kasm requires specific host-level configuration which are not universally achievable on Kubernetes nodes.</p><p><strong>Kasm Autoscaling</strong></p><p>Kasm agent VMs can be autoscaled using the built-in Kasm Autoscaler. This gives customers control over the number of VMs that are available as Kasm Agents in each zone — improving efficiency, end-user experience, and minimising cost.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*i1JdWfhUE6jsujjli6tsZg.png" /></figure><p>The Pulumi integration pre-configures these, so all an administrator needs to do is set up an IAM role (AWS) or Service Account (GCP) with appropriate permissions.</p><p><strong>Pulumi automation</strong></p><p>Deploying Kasm into a cloud environment for large-scale use can be time-consuming to set up. Administrators will need to set up networking, routing, load balancers, SSL certificates, and other related components. 
There can easily be hundreds of cloud resources that need to be created and configured. Aside from the amount of work involved, it is also prone to human error, and any architectural adjustments can require time-consuming planning efforts.</p><p>Pulumi is an Infrastructure-as-Code (IaC) tool similar to Terraform and Ansible. It enables infrastructure engineers to define cloud resources through code in multiple programming languages. This allows for consistently reproducible infrastructure and provides an effective audit trail for infrastructure configuration.</p><p>Our Pulumi scripts are developed in Python to make them accessible for engineers of all levels. With Pulumi, administrators can define infrastructure requirements in YAML format and run a command to deploy it into either GCP or AWS cloud environments.</p><p><strong>Planning</strong></p><p>At the outset, administrators need to decide which cloud region(s) they want to deploy Kasm into, as well as the initial sizing of Kasm zones and agents within them. They will also need to decide which region will serve as primary (this is where a Kubernetes cluster will be configured and will serve as the control plane for all zones). Finally, a domain name for the Kasm deployment is also needed. For more details, look at the <a href="https://github.com/kasmtech/kasm-pulumi/blob/develop/gcp/Pulumi.dev.yaml.example">GCP configuration example</a> or the <a href="https://github.com/kasmtech/kasm-pulumi/blob/develop/aws/Pulumi.dev.yaml.example">AWS configuration example</a>. We populate the key information into a spreadsheet at Kasm, but how you do this is up to you:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/754/1*LJb2YuvhYr_WKlbFwgDCcw.png" /><figcaption>Example plan for single-zone GCP environment</figcaption></figure><p><strong>Setup guide</strong></p><p>Our GitHub repo includes comprehensive setup guides. 
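For orientation only, the stack configuration file captures the kind of values gathered during planning. The key names below are hypothetical, not the real schema — the Pulumi.dev.yaml.example files linked above are authoritative:

```yaml
# Illustrative sketch only -- key names are hypothetical placeholders,
# not the actual kasm-pulumi configuration schema.
config:
  kasm:domain: kasm.example.com     # domain name the deployment will use
  kasm:primaryRegion: us-central1   # hosts the Kubernetes control plane
  kasm:zones:
    - name: zone-1
      region: us-central1
      agentCount: 2                 # initial Kasm Agent VMs in this zone
```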
Refer to the <a href="https://github.com/kasmtech/kasm-pulumi/tree/develop/aws#prerequisites">AWS README</a> or <a href="https://github.com/kasmtech/kasm-pulumi/tree/develop/gcp#prerequisites">GCP README</a> for full details. The key points to summarise are:</p><ol><li>Install a Python environment and create a virtualenv per the documentation</li><li>Install GCP or AWS CLI tools</li><li>Clone our repo and install Python requirements, including Pulumi (<strong>note</strong>: a Pulumi cloud account <em>is not required</em>)</li><li>Populate the configuration YAML file with the information from your plan.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*B3ds6CIIgZQeBmMOA3cAuA.png" /></figure><p><strong>Summary</strong></p><p>With the Pulumi scripts provided, customers can provision and maintain production-grade Kasm environments within hours, whereas previously it could take days or weeks to plan and set up. Being written in Python makes these scripts accessible to engineers who want to customise or add their own cloud resources into a deployment. Ultimately, the entire process can be integrated into a CI/CD flow to achieve a high degree of automation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=75c70116ae7e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scaling Windows: Kasm Autoscaling in Microsoft Azure]]></title>
            <link>https://kasm.medium.com/scaling-windows-kasm-autoscaling-in-microsoft-azure-75991108b6b0?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/75991108b6b0</guid>
            <category><![CDATA[scalability]]></category>
            <category><![CDATA[kasm]]></category>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[windows]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Fri, 13 Jun 2025 20:08:33 GMT</pubDate>
            <atom:updated>2025-06-13T20:08:33.709Z</atom:updated>
            <content:encoded><![CDATA[<iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FKLqg0zOxG_A%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKLqg0zOxG_A&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FKLqg0zOxG_A%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b30cf4b397be6b287c7f5d19be78b2eb/href">https://medium.com/media/b30cf4b397be6b287c7f5d19be78b2eb/href</a></iframe><p><strong>AutoScaling is a powerful feature that dynamically provisions and destroys Kasm system resources based on a predefined schedule or real-time user demand.</strong> This means you can automatically scale your infrastructure up to meet peak demand and scale it back down during quiet periods, ensuring you only pay for what you use.</p><p>Kasm Workspaces offers robust AutoScale capabilities for both full-stack virtual machines (VMs) and containerized environments. It can dynamically provision and integrate full-stack VMs into a Server Pool to support RDP, KasmVNC, VNC, and SSH sessions for Windows or Linux desktops. Additionally, Kasm can AutoScale Docker Agent VMs to efficiently manage and deploy container-based workspaces, ensuring optimal resource utilization and scalability.</p><p>By leveraging a cloud provider like Microsoft Azure, you can build a highly efficient, secure, and cost-effective Kasm deployment that adapts to your needs in real time.</p><h3>The Benefits of Cloud-Native Autoscaling</h3><ul><li><strong>Maximize Cost-Efficiency:</strong> Instead of paying for a large fleet of idle VMs 24/7, autoscaling ensures you only spin up — and pay for — resources when they are actively needed. When demand subsides, resources are automatically destroyed, slashing your infrastructure costs.</li><li><strong>Ultimate Scalability:</strong> Handle sudden traffic spikes with ease. 
Whether it’s the start of the workday or the launch of a new training program, Kasm and Azure can automatically provision new agents or servers to meet user demand without manual intervention.</li><li><strong>Enhanced Security:</strong> Autoscaling promotes the use of ephemeral, or temporary, instances. When a user session ends, the underlying VM can be destroyed. This practice significantly reduces the attack surface, as any potential compromises, malware, or misconfigurations are wiped away with the instance, and a fresh, clean VM is provisioned for the next user.</li></ul><h3>Overview of Autoscaling:</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FBaDBtZl_j3g%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DBaDBtZl_j3g&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FBaDBtZl_j3g%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/ac3104d53b1edc6ce86591e341e78b8d/href">https://medium.com/media/ac3104d53b1edc6ce86591e341e78b8d/href</a></iframe><p><a href="https://kasmweb.com/docs/latest/how_to/infrastructure_components/autoscale.html">AutoScale - Kasm 1.17.0 documentation</a></p><h3>Technical Guide: Configuring Kasm Autoscaling in Azure</h3><p>This technical section, based on the Kasm Workspaces demonstration video, walks through configuring autoscaling for both Docker Agents (for containerized workspaces) and Windows Servers (for RDP sessions).</p><p>For the detailed configuration guidance, please refer to our documentation:</p><p><a href="https://kasmweb.com/docs/latest/guide/compute/pools.html#azure-settings">Pools - Kasm 1.17.0 documentation</a></p><h3>Part 1: Autoscaling Docker Agents</h3><p>This setup will automatically create and destroy Linux VMs to run containerized Kasm Workspaces like Chrome, Firefox, or Slack.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/0*02Nuhpb3lpSEjUV4.png" /></figure><h4>Step 1: Create an Azure App Registration</h4><p>Kasm needs an identity in Azure to manage resources. We’ll create an App Registration for this.</p><ol><li>Navigate to <strong>Microsoft Entra ID</strong> in the Azure portal.</li><li>Go to <strong>App registrations</strong> and select <strong>+ New registration</strong>.</li><li>Give it a descriptive name, like kasm_autoscale_app. The other defaults are fine. Click <strong>Register</strong>.</li><li>On the app’s overview page, copy the <strong>Application (client) ID</strong>. You will need this for the Kasm configuration.</li><li>Go to <strong>Certificates &amp; secrets</strong> and click <strong>+ New client secret</strong>.</li><li>Add a description and choose an expiry period suitable for your organization’s security policy. Click <strong>Add</strong>.</li><li>Immediately copy the <strong>Value</strong> of the secret. This is your only chance to see it. This is the client secret key.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cr4NrUsqYkDw0fF_.png" /></figure><h4>Step 2: Assign Permissions</h4><p>Now, give this new application identity permissions to manage resources.</p><ol><li>Navigate to the <strong>Resource Group</strong> where you want Kasm to create the VMs.</li><li>Go to <strong>Access control (IAM)</strong> and click <strong>+ Add</strong> -&gt; <strong>Add role assignment</strong>.</li><li>Assign the necessary roles. 
The <strong>Contributor</strong> role is sufficient for it to manage VMs and networking, but for more granular control, refer to the official Kasm Workspaces documentation for the minimum required permissions.</li><li>Select the app registration you created (kasm_autoscale_app) as the member and save the assignment.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/807/0*6R_-j1QwBi_h3C1W.png" /></figure><h4>Step 3: Configure Kasm Workspaces</h4><p>Log in to your Kasm admin dashboard to link it with Azure.</p><ol><li><strong>Create a Pool:</strong> Go to <strong>Infrastructure -&gt; Pools -&gt; Add Pool</strong>.</li></ol><ul><li><strong>Name:</strong> Azure Autoscale Pool</li><li><strong>Pool Type:</strong> Docker Agent</li><li>Click <strong>Submit</strong>.</li></ul><ol start="2"><li><strong>Create an Autoscale Config:</strong> Go to <strong>Infrastructure -&gt; Autoscale Configs -&gt; Add Autoscale Config</strong>.</li></ol><ul><li><strong>Name:</strong> Azure Autoscale Config</li><li><strong>Pool:</strong> Select the Azure Autoscale Pool you just created.</li><li><strong>Downscale Backoff (Seconds):</strong> Set how long an unused agent should wait before being considered for destruction (e.g., 60).</li><li><strong>Standby Resources:</strong> Define the minimum resources you want available at all times (e.g., 1 Standby Core, 1024 Standby Memory). 
Kasm will automatically create VMs to meet this minimum.</li></ul><ol start="3"><li><strong>Configure the VM Provider:</strong></li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UtAiwkJOsXWzZLkwnFIh5A.png" /></figure><ul><li>Under <strong>VM Provider Details</strong>, click <strong>Add Provider</strong>.</li><li><strong>Provider Type:</strong> Azure</li><li><strong>Name:</strong> Azure Docker Provider</li><li>Fill in the Azure details:</li><li><strong>Subscription ID:</strong> Your Azure Subscription ID.</li><li><strong>Resource Group Name:</strong> The name of the Resource Group you configured in IAM.</li><li><strong>Tenant ID:</strong> Your Azure Tenant ID (found under Microsoft Entra ID -&gt; Properties).</li><li><strong>Client ID:</strong> The <strong>Application (client) ID</strong> you copied earlier.</li><li><strong>Client Secret:</strong> The <strong>Value</strong> of the client secret you created.</li><li><strong>Azure Authority:</strong> Select the appropriate cloud (e.g., Azure Public Cloud).</li><li><strong>Region:</strong> The Azure region to deploy VMs in (e.g., Sweden Central).</li><li><strong>Max Instances:</strong> A cap on the number of VMs Kasm can create (e.g., 3).</li><li><strong>VM Size:</strong> The Azure VM size (e.g., Standard_B2ms).</li><li><strong>OS Disk Type &amp; Size:</strong> Standard_LRS and 100 GB.</li><li><strong>OS Image Reference:</strong> This requires a specific JSON blob. You can get this using the Azure CLI. 
For an Ubuntu image, it will look something like this:</li></ul><pre>{<br>  &quot;publisher&quot;: &quot;canonical&quot;,<br>  &quot;offer&quot;: &quot;0001-com-ubuntu-server-jammy&quot;,<br>  &quot;sku&quot;: &quot;22_04-lts-gen2&quot;,<br>  &quot;version&quot;: &quot;latest&quot;<br>}</pre><ul><li><strong>Network Security Group/Subnet:</strong> Navigate to an existing VM in your Azure portal, go to its networking settings, and use the <strong>JSON View</strong> to get the full resource ID for the NSG and Subnet. Paste these full IDs into the Kasm config.</li><li><strong>SSH Public Key:</strong> Paste your public SSH key to allow for administrative access.</li><li><strong>Agent Startup Script:</strong> Copy the agent startup script from the <a href="https://github.com/kasmtech/kasm-workspaces/tree/develop/src/install/agent_startup_scripts">Kasm Workspaces GitHub repository</a>. <strong>Important:</strong> Uncomment the section for Azure Private IP so the agent registers correctly.</li></ul><p>Click <strong>Submit</strong>. Kasm will now begin provisioning VMs in Azure to meet your standby requirements.</p><h3>Part 2: Autoscaling Windows Servers (RDP)</h3><p>This process is similar but tailored for full-stack Windows VMs that users will connect to via RDP.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PH5egFJvy_6BrDl62AYnPA.png" /></figure><h4>Prerequisite: Azure Compute Gallery Image</h4><p>You must first have a generalized Windows VM image (Sysprepped) available in an <strong>Azure Compute Gallery</strong>. 
This will be your template.</p><h4>Step 1: Configure Kasm Workspaces</h4><ol><li><strong>Create a Pool:</strong> Go to <strong>Infrastructure -&gt; Pools -&gt; Add Pool</strong>.</li></ol><ul><li><strong>Name:</strong> Azure Windows Pool</li><li><strong>Pool Type:</strong> Server</li></ul><ol start="2"><li><strong>Create an Autoscale Config:</strong> Go to <strong>Infrastructure -&gt; Autoscale Configs -&gt; Add Autoscale Config</strong>.</li></ol><ul><li><strong>Name:</strong> Azure Windows Config</li><li><strong>Pool:</strong> Select the Azure Windows Pool.</li><li><strong>Connection Type:</strong> RDP</li><li><strong>Credentials:</strong> Choose your desired method (e.g., <strong>Static</strong> for a shared account).</li><li><strong>Minimum Available Sessions:</strong> Set the number of standby Windows VMs you want ready (e.g., 1).</li></ul><ol start="3"><li><strong>Configure the VM Provider:</strong></li></ol><ul><li>Create a new Azure VM Provider config as before.</li><li>Fill in the same credentials and resource details.</li><li><strong>OS Image Reference:</strong> Navigate to your image version in the Azure Compute Gallery and go to the <strong>JSON view</strong>. Copy the id and place it in a JSON blob like this:</li></ul><pre>{<br>  &quot;id&quot;: &quot;/subscriptions/YOUR_SUB_ID/resourceGroups/YOUR_RG/providers/Microsoft.Compute/galleries/YOUR_GALLERY/images/YOUR_IMAGE_DEF/versions/YOUR_VERSION&quot;<br>}</pre><ul><li><strong>Agent Startup Script:</strong> Use the appropriate Windows agent startup script from the Kasm GitHub.</li><li><strong>Config Override:</strong> If your Windows image uses Secure Boot and Trusted Launch (common for modern images), you must add a JSON override to enable it. 
This is detailed in the Kasm documentation.</li></ul><pre>{<br>  &quot;properties&quot;: {<br>    &quot;securityProfile&quot;: {<br>      &quot;uefiSettings&quot;: {<br>        &quot;secureBootEnabled&quot;: true,<br>        &quot;vTpmEnabled&quot;: true<br>      },<br>      &quot;securityType&quot;: &quot;TrustedLaunch&quot;<br>    }<br>  }<br>}</pre><p>Click <strong>Submit</strong>. Kasm will now begin provisioning full Windows Server VMs from your template, ready to accept RDP sessions. When users connect, Kasm will scale up to meet demand, and when they disconnect, it will deprovision the costly Windows VMs, saving you money.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BQr2mcbV1zlaBfFodb6NTg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*-MRrGq8HYQfU6yuW" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=75991108b6b0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[LearnLinuxTV Guide for Deploying Kasm on Proxmox, Linode and Digital Ocean]]></title>
            <link>https://kasm.medium.com/learnlinuxtv-guide-for-deploying-kasm-on-proxmox-linode-and-digital-ocean-75dc4e7543c9?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/75dc4e7543c9</guid>
            <category><![CDATA[kasm]]></category>
            <category><![CDATA[proxmox]]></category>
            <category><![CDATA[linode]]></category>
            <category><![CDATA[digital-ocean-droplet]]></category>
            <category><![CDATA[linux]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Wed, 11 Jun 2025 00:44:18 GMT</pubDate>
            <atom:updated>2025-06-11T00:44:18.281Z</atom:updated>
            <content:encoded><![CDATA[<iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FlkQerIu1Ndc%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DlkQerIu1Ndc&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FlkQerIu1Ndc%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/948aafcc18f93cdcf091df5f99c457a4/href">https://medium.com/media/948aafcc18f93cdcf091df5f99c457a4/href</a></iframe><p><a href="https://kasmweb.com/">Kasm Workspaces</a> is a powerful, container-based solution that lets you stream applications and even entire desktop environments directly to your web browser. It’s a fantastic tool for creating secure, isolated workspaces, setting up a centralized browser for your team, or just tinkering in a homelab. Since you can self-host Kasm, you have full control over your data and infrastructure.</p><p>This guide by LearnLinuxTV explores several ways to deploy Kasm, from a manual installation on a Proxmox VM to a one-click deployment on DigitalOcean. Whether you’re using on-premise hardware or a cloud provider, there’s a method here for you.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WW48Mh5Ldv7zE27R3xVVWA.png" /></figure><h3>Method 1: Manual Installation on a Proxmox VM</h3><p>This method is perfect for homelabs or if you’re running your own physical server with Proxmox. The steps are also applicable to any bare-metal Linux installation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*X2afPrwzeX5WZpvB-W0-wQ.png" /></figure><h3>1. Prepare Your Virtual Machine</h3><p>First, create a new virtual machine in Proxmox. A Debian or Ubuntu base is a great choice. 
Once the VM is running, SSH into it and perform some initial server setup:</p><ul><li><strong>Create a non-root user:</strong> It’s a security best practice to avoid operating as the root user.</li></ul><pre># Create the user (replace &#39;jay&#39; with your desired username) <br>adduser jay <br># Add the user to the sudo group <br>usermod -aG sudo jay</pre><ul><li><strong>Set the hostname:</strong> Give your server a unique identity. Edit the /etc/hostname and /etc/hosts files to set your desired name (e.g., kasm or kasm.yourdomain.com).</li><li><strong>Update your system:</strong> Ensure all packages are up to date before installing new software.</li></ul><pre># For Debian/Ubuntu <br>sudo apt update &amp;&amp; sudo apt dist-upgrade -y  <br># For Fedora/AlmaLinux <br>sudo dnf update -y</pre><ul><li><strong>Reboot:</strong> A quick reboot ensures all updates are applied.</li></ul><pre>sudo reboot</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DM3Y9t5F0xY5pCpbbJoEBw.png" /></figure><h3>2. Install Kasm Workspaces</h3><p>Now you’re ready to install Kasm.</p><ol><li><strong>Download the latest release:</strong> Find the latest version on the <a href="https://github.com/kasmtech/kasm-workspaces/releases">Kasm releases page</a>. Right-click the link for your architecture (usually amd64) and copy the URL.</li><li><strong>Download and extract the package:</strong> Use curl to download the file into your server&#39;s /tmp directory and then extract it.</li></ol><pre># Navigate to the temporary directory <br>cd /tmp  <br><br># Download the release (replace with the URL you copied) <br>curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.17.0.7f020d.tar.gz  <br><br># Extract the archive <br>tar -xf kasm_release_*.tar.gz</pre><ol start="3"><li><strong>Run the installer:</strong> Execute the installation script.</li></ol><pre>sudo bash kasm_release/install.sh</pre><p>The script will take a few minutes. 
Once it’s finished, it will display your login credentials. <strong>Save these in a secure location!</strong></p><p>You can now access your Kasm instance by navigating to https://&lt;your-server-ip-or-hostname&gt;.</p><h3>Method 2: Deploying on a Cloud VPS (Linode)</h3><p>Running Kasm on a cloud provider like Linode (now Akamai) gives you a publicly accessible instance without needing to configure your home firewall. This process involves a manual installation similar to the Proxmox method but adds the crucial step of securing it with an SSL certificate.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*49GdpxUHFN0d6KpxdmWHPg.png" /></figure><h3>1. Create and Configure Your Instance</h3><ol><li><strong>Launch a VPS:</strong> In your cloud provider’s dashboard, create a new instance. A Debian or Ubuntu image is a reliable choice.</li><li><strong>Set up DNS:</strong> Once the instance is created, copy its public IP address. Go to your domain’s DNS settings and create an <strong>A record</strong> pointing a subdomain (e.g., kasm.yourdomain.com) to that IP.</li><li><strong>Initial Server Setup:</strong> SSH into your new cloud server and perform the same initial setup as in Method 1: create a non-root user, set the hostname to your fully qualified domain name (FQDN), and update all system packages.</li><li><strong>Install Kasm:</strong> Follow the exact same steps for downloading and installing Kasm as you did in the Proxmox method.</li></ol><h3>2. Secure Your Instance with Let’s Encrypt 🔐</h3><p>A public-facing server should always use a valid SSL certificate. 
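</p><p>By default, a fresh Kasm install serves a self-signed certificate, which browsers will flag. As a quick sanity check (a hedged example; substitute your own hostname for kasm.yourdomain.com), you can inspect what the server is currently presenting:</p><pre>echo | openssl s_client -connect kasm.yourdomain.com:443 2&gt;/dev/null | openssl x509 -noout -issuer -subject -dates</pre><p>If the issuer and subject lines are identical, you are still on the self-signed default.</p><p>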
We’ll use Certbot and Let’s Encrypt to get a free one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OBxnpF07qzsZGC65RUFF8g.png" /></figure><ol><li><strong>Check DNS Propagation:</strong> Before requesting a certificate, make sure your DNS record has propagated.</li></ol><pre># Replace with your domain <br>nslookup kasm.yourdomain.com</pre><ol><li>If it returns the correct IP address, you can proceed. If not, wait a bit longer.</li><li><strong>Install Certbot:</strong></li></ol><pre>sudo apt install certbot -y</pre><ol><li><strong>Request the Certificate:</strong> Stop the Kasm proxy container so Certbot can bind to the necessary port, then request the certificate.</li></ol><pre># Stop the proxy <br>sudo docker stop kasm_proxy  <br><br># Request the certificate <br>sudo certbot certonly --standalone --agree-tos --preferred-challenges http -d kasm.yourdomain.com</pre><ol><li><strong>Link the New Certificate:</strong> Kasm needs to be told to use the new certificate. We’ll back up the default self-signed certs and create symbolic links to the ones from Let’s Encrypt.</li></ol><pre># Navigate to the certs directory <br>cd /opt/kasm/current/certs  <br><br># Back up the old certs <br>sudo mv kasm_nginx.crt kasm_nginx.crt.bak <br>sudo mv kasm_nginx.key kasm_nginx.key.bak  <br><br># Link the new certs (replace with your domain) <br>sudo ln -s /etc/letsencrypt/live/kasm.yourdomain.com/privkey.pem kasm_nginx.key <br>sudo ln -s /etc/letsencrypt/live/kasm.yourdomain.com/fullchain.pem kasm_nginx.crt</pre><ol><li><strong>Reboot:</strong></li></ol><pre>sudo reboot</pre><p>Once the server restarts, you should be able to access your Kasm instance at your domain over HTTPS without any browser warnings.</p><h3>Method 3: The Easy Way with DigitalOcean Marketplace 🚀</h3><p>DigitalOcean offers a Marketplace App for Kasm, which automates the entire installation process.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*AbsWFi-uRwI5-SkHGgUsvA.png" /></figure><p><a href="https://marketplace.digitalocean.com/apps/kasm-workspaces">Kasm Workspaces | DigitalOcean Marketplace 1-Click App</a></p><ol><li><strong>Navigate to the Marketplace:</strong> In your DigitalOcean dashboard, go to the <strong>Marketplace</strong>.</li><li><strong>Find Kasm:</strong> Search for “Kasm” and select the Kasm Workspaces app.</li><li><strong>Create Droplet:</strong> Click <strong>“Create Kasm Workspaces Droplet”</strong>.</li><li><strong>Configure the Droplet:</strong> Choose a region and a plan size. The marketplace app will pre-select a recommended size. Finalize your settings and click <strong>“Create Droplet”</strong>.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LeqCVqUmYd0Ss4zE1YhIuQ.png" /></figure><ol><li><strong>Get Your Credentials:</strong> After the Droplet is created, you’ll need to get your admin password. Access the Droplet’s <strong>Console</strong> from the DigitalOcean dashboard. A script is provided to display your credentials.</li></ol><pre># Run this in the Droplet console <br>./show_kasm_credentials.sh</pre><ol><li><strong>Log In:</strong> Use the provided credentials to log in to your new Kasm instance. You will be prompted to change your password immediately. That’s it!</li></ol><h3>Advanced: Kasm on Proxmox with Autoscaling</h3><p>For a more robust Proxmox setup, you can configure Kasm to automatically create and destroy worker nodes based on demand. 
This is a more complex setup but provides incredible scalability.</p><p><a href="https://kasmweb.com/docs/latest/how_to/infrastructure_components/autoscale_providers/proxmox.html">Proxmox AutoScale - Kasm 1.17.0 documentation</a></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FnXIBGs_WJcs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DnXIBGs_WJcs&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FnXIBGs_WJcs%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/c19beb9e71d0eae4488c8ed9997e5f84/href">https://medium.com/media/c19beb9e71d0eae4488c8ed9997e5f84/href</a></iframe><h3>1. Prepare Proxmox</h3><ol><li><strong>Create a Resource Pool:</strong> In Proxmox, go to <strong>Datacenter -&gt; Permissions -&gt; Pools</strong> and create a new pool (e.g., KasmPool).</li><li><strong>Create a Dedicated User:</strong> Create a new user (kasm-autoscale-user) under <strong>Datacenter -&gt; Permissions -&gt; Users</strong>. Use &quot;Proxmox VE Authentication&quot;.</li><li><strong>Create an API Token:</strong> Under <strong>Datacenter -&gt; Permissions -&gt; API Tokens</strong>, add a token for the user you just created. Uncheck “Privilege separation” and save the <strong>Token ID</strong> and <strong>Secret</strong>.</li><li><strong>Create a Role:</strong> Create a new role (kasm-autoscale-role) with the specific permissions Kasm needs to manage VMs.</li><li><strong>Create a VM Template:</strong> Build a fresh Debian VM, install the qemu-guest-agent, generalize the machine ID, and then convert it into a Proxmox template. This template will be the base for all new worker nodes.</li></ol><h3>2. 
Configure Kasm for Autoscaling</h3><ol><li><strong>Log in to your primary Kasm server’s</strong> admin dashboard.</li><li><strong>Configure Pools:</strong> Go to <strong>Autoscale Configs</strong>, give the configuration a name, and set the deployment zone.</li><li><strong>Set Scaling Parameters:</strong></li></ol><ul><li><strong>Standby Cores/Memory:</strong> Define the minimum resources you always want available.</li><li><strong>Downscale Backoff:</strong> Set a delay (in seconds) before Kasm removes an idle worker node.</li></ul><ol><li><strong>Configure VM Provider Details:</strong></li></ol><ul><li><strong>Provider:</strong> Select <strong>Proxmox</strong>.</li><li><strong>API Info:</strong> Enter your Proxmox server URL, the user you created (kasm-autoscale-user), the API Token ID, and the Secret.</li><li><strong>VM Details:</strong> Specify the VM ID range for new nodes, the exact name of your Proxmox template, the resource pool name, and the CPU/memory to allocate to new nodes.</li></ul><ol><li><strong>Add the Startup Script:</strong> In the “Startup Script” box, paste the official Kasm startup script for agent nodes. This script will run on each new VM created by the autoscaler, installing the Kasm agent and registering it with your primary server.</li></ol><p>Once you save the configuration, Kasm will connect to Proxmox and, if your standby settings require it, immediately begin cloning new worker nodes from your template.</p><p>Try this out on another hypervisor or Cloud:</p><p><a href="https://kasmweb.com/downloads">Downloads | Kasm Workspaces</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=75dc4e7543c9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[NVIDIA GPU Accelerated Workspaces in Kasm on Harvester]]></title>
            <link>https://kasm.medium.com/nvidia-gpu-accelerated-workspaces-in-kasm-on-harvester-8e4be366beb3?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e4be366beb3</guid>
            <category><![CDATA[gpu]]></category>
            <category><![CDATA[kasm]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[nvidia]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Tue, 03 Jun 2025 13:43:22 GMT</pubDate>
            <atom:updated>2025-06-03T13:43:22.835Z</atom:updated>
<content:encoded><![CDATA[<p>This article provides a comprehensive guide on configuring GPU PCI passthrough on a Harvester hypervisor, installing Nvidia drivers within the Kasm-hosting Virtual Machine, configuring the Nvidia container runtime, and verifying GPU visibility within Kasm Workspaces. We will also showcase ready-to-use AI and ML-ready workspaces from the Kasm AI Registry.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F3tMfc0fUvk4%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D3tMfc0fUvk4&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F3tMfc0fUvk4%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/6e97308555490f36e34569b38b1f2038/href">https://medium.com/media/6e97308555490f36e34569b38b1f2038/href</a></iframe><p>While this guide focuses on Harvester, the principles are applicable to other hypervisors like Proxmox, vSphere, cloud environments, or even bare-metal Linux deployments.</p><p><strong>Important Considerations Before You Begin:</strong></p><ul><li><strong>Dedicated GPU:</strong> When you pass a GPU through to a hypervisor via PCI passthrough (and subsequently to a VM), that GPU becomes unavailable to the host operating system. It’s recommended to have at least two GPUs in your system if your host OS requires a GPU: one for the host and another dedicated to Harvester for Kasm Workspaces.</li><li><strong>vGPU as an Alternative:</strong> An alternative to PCI passthrough is using virtual GPUs (vGPUs). 
However, this typically requires enterprise-grade Nvidia GPUs and licensed drivers, which is beyond the scope of this article.</li></ul><h3>Step 1: Configuring GPU PCI Passthrough in Harvester</h3><p>This section details how to pass a physical Nvidia GPU directly to a Harvester Virtual Machine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gHEPGZs8R7Whs9vTWpiQiA.png" /></figure><p><strong>Enable PCI Controller in Harvester:</strong></p><ul><li>Navigate to <strong>Advanced → PCI Devices</strong> in your Harvester UI.</li><li>If not already enabled, enable the PCI controller. This allows Harvester to manage and pass through PCI devices.</li></ul><p><strong>Create an Ubuntu Server VM Image:</strong></p><ul><li>Go to <strong>Images → Create</strong>.</li><li>Provide a name for the image.</li><li>Upload an ISO image file or paste a URL from which the image can be downloaded. The recommended image is <strong>Ubuntu Noble Server Cloud Image (Ubuntu 24.04 LTS)</strong>.</li><li>Click <strong>Create</strong>.</li></ul><p><strong>Create a Virtual Machine Network:</strong></p><ul><li>Navigate to <strong>Networks → Virtual Machine Networks</strong>.</li><li>Click <strong>Create</strong>.</li><li><strong>Name:</strong> Give your network a descriptive name (e.g., kasm-vm-network).</li><li><strong>Type:</strong> Choose UntaggedNetwork.</li><li><strong>Cluster Network:</strong> Select your management cluster network (e.g., mgmt).</li><li>Click <strong>Create</strong>.</li></ul><p><strong>Create an SSH Key (Recommended):</strong></p><ul><li>Go to <strong>Advanced → SSH Keys → Create</strong>.</li><li><strong>Name:</strong> Give your SSH key a name.</li><li><strong>Public Key:</strong> Paste your SSH public key.</li><li>Click <strong>Create</strong>. 
This will allow passwordless SSH access to your VM.</li></ul><p><strong>Create and Configure the Virtual Machine for Kasm:</strong></p><ul><li>Navigate to <strong>Virtual Machines → Create</strong>.</li></ul><p><strong>Basics Tab:</strong></p><ul><li><strong>Name:</strong> Assign a name to your VM (e.g., kasm-gpu-server).</li><li><strong>CPU:</strong> Define the number of vCPUs.</li><li><strong>Memory:</strong> Allocate sufficient RAM.</li><li><strong>SSH Key:</strong> Select the SSH key you created.</li></ul><p><strong>Volumes Tab:</strong></p><ul><li>Click <strong>Add Volume → Existing</strong>.</li><li><strong>Image:</strong> Select the Ubuntu Noble Server image you uploaded.</li><li><strong>Size:</strong> Set an appropriate disk size (e.g., 100 GB or more, depending on your Kasm image storage needs).</li></ul><p><strong>Networks Tab:</strong></p><ul><li>Choose the network you created earlier (e.g., kasm-vm-network)</li><li>Click <strong>Add Network.</strong></li></ul><p><strong>Advanced Options → PCI Devices Tab:</strong></p><ul><li>In the PCI Devices section, search for your Nvidia GPU. You might see multiple entries related to your GPU (e.g., the GPU itself and its associated audio device).</li><li>Identify and select the GPU and its audio device you wish to dedicate to Kasm.</li><li>Click <strong>Enable Passthrough</strong>.</li><li>Once passthrough is enabled for the desired devices, attach them to your VM.</li><li>Click <strong>Create</strong> to provision the Virtual Machine.</li></ul><p><strong>Verify PCI Passthrough in Harvester:</strong></p><ul><li>Once the VM is in the “Running” state, select it.</li><li>Check the VM’s annotations or details page. 
You should see your PCI devices listed as allocated to this VM, confirming the passthrough from Harvester’s perspective.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kNXmjc0pFdj6QYVUxKVjMg.png" /></figure><h3>Step 2: Install Kasm Workspaces on the VM</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oAmwsU-vtR-6tXVZIF1G9A.png" /></figure><ol><li><strong>SSH into your VM:</strong> Use the SSH key and the IP address assigned to your VM to log in.</li><li><strong>Install Kasm Workspaces:</strong> Follow the official Kasm Workspaces documentation for a single-server installation. Typically, this involves downloading the installation script and running a few commands:</li></ol><pre>cd /tmp <br>curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.17.0.3bf277.tar.gz <br># Check for the latest version <br>tar -xf kasm_release_1.17.0.3bf277.tar.gz <br>sudo bash kasm_release/install.sh</pre><p>Upon completion, the installer will output randomly generated credentials for your Kasm deployment. Store these securely. 
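</p><p>Before logging in, it can help to confirm that the Kasm containers all came up (a hedged sketch; exact container names vary between Kasm releases):</p><pre>sudo docker ps --format &#39;{{.Names}}&#39; | grep kasm</pre><p>You should see several containers listed, including kasm_proxy.</p><p>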
Log in to your Kasm instance via a web browser using these credentials.</p><h3>Step 3: Install Nvidia Drivers and Container Toolkit in the VM</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GT2OxvNcm4yow1BhZPe_HA.png" /></figure><p>Even though the GPU is passed to the VM, you need to install Nvidia drivers and the Nvidia container toolkit within the VM’s operating system (Ubuntu Noble in this case) for Docker containers to utilize the GPU.</p><p><strong>Verify GPU Visibility in the VM:</strong> Inside your VM, run the following command to check if the OS can see the Nvidia PCI device:</p><pre>lspci | grep -i nvidia</pre><p>You should see your Nvidia GPU listed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/914/1*V7LycN4GBJlAYjyL727Mvg.png" /></figure><p><strong>Install Nvidia Drivers and Container Toolkit:</strong> <a href="https://kasmweb.com/docs/latest/how_to/gpu.html">Kasm Workspaces documentation</a> provides a script for this, especially for recommended distributions like Ubuntu Noble. This script typically performs the following actions:</p><ul><li>Checks for an available Nvidia card.</li><li>Adds the official PPA for graphics drivers on Ubuntu.</li><li>Installs the latest available Nvidia drivers compatible with your GPU.</li><li>Installs the nvidia-container-toolkit.</li><li>Configures Docker to use the nvidia runtime.</li></ul><p>Save the following script (or one provided by Kasm documentation) to a file (e.g., gpu_install.sh) on your VM:</p><pre>#!/bin/bash<br><br># Check for NVIDIA cards<br>if ! lspci | grep -i nvidia &gt; /dev/null; then<br>    echo &quot;No NVIDIA GPU detected&quot;<br>    exit 0<br>fi<br><br>add-apt-repository -y ppa:graphics-drivers/ppa<br><br>curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \<br>  &amp;&amp; curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \<br>    sed &#39;s#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g&#39; | \<br>    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list<br><br>apt update<br>apt install -y ubuntu-drivers-common<br><br># Run ubuntu-drivers and capture the output<br>DRIVER_OUTPUT=$(ubuntu-drivers list 2&gt;/dev/null)<br># Extract server driver versions using grep and regex<br># Pattern looks for nvidia-driver-XXX-server<br>SERVER_VERSIONS=$(echo &quot;$DRIVER_OUTPUT&quot; | grep -o &#39;nvidia-driver-[0-9]\+-server&#39; | grep -o &#39;[0-9]\+&#39; | sort -n)<br># Check if any server versions were found<br>if [ -z &quot;$SERVER_VERSIONS&quot; ]; then<br>    echo &quot;Error: No NVIDIA server driver versions found.&quot; &gt;&amp;2<br>    exit 1<br>fi<br># Find the highest version number<br>LATEST_VERSION=$(echo &quot;$SERVER_VERSIONS&quot; | tail -n 1)<br># Validate that the version is numeric<br>if ! [[ &quot;$LATEST_VERSION&quot; =~ ^[0-9]+$ ]]; then<br>    echo &quot;Error: Invalid version number: $LATEST_VERSION&quot; &gt;&amp;2<br>    exit 2<br>fi<br># Output only the version number<br>echo &quot;Latest version is: $LATEST_VERSION&quot;<br>ubuntu-drivers install &quot;nvidia:$LATEST_VERSION-server&quot;<br>apt install -y &quot;nvidia-utils-$LATEST_VERSION-server&quot;<br># Install NVIDIA toolkit + configure for docker<br>apt-get install -y nvidia-container-toolkit<br>nvidia-ctk runtime configure --runtime=docker</pre><p>Make the script executable and run it with sudo privileges:</p><pre>chmod +x gpu_install.sh <br>sudo ./gpu_install.sh</pre><p><strong>Note:</strong> A reboot of the VM might be necessary after the drivers are installed.</p><p><strong>Verify Driver Installation:</strong> After rebooting, SSH back into your VM and run:</p><pre>nvidia-smi</pre><p>This command should output details about your Nvidia GPU, the installed driver version (e.g., 570.133.20), and the CUDA version (e.g., 12.8).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/807/1*kn0cpysCxOUEFq14Nf9IQg.png" /></figure><p><strong>Verify Nvidia Docker Runtime:</strong> Execute the following command to list all configured Docker runtimes:</p><pre>sudo docker info | grep -i runtime</pre><p>You should see nvidia listed among the runtimes (e.g., Runtimes: io.containerd.runc.v2 nvidia runc).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/623/1*Deyz821_xw1drWJVoSGgEw.png" /></figure><h3>Step 4: Configure and Verify GPU Support in Kasm Workspaces</h3><p>Now, let’s configure Kasm to recognize and use the GPU.</p><p><strong>Verify GPU Detection in Kasm UI:</strong></p><ul><li>Log in to your Kasm Workspaces admin dashboard.</li><li>Navigate to <strong>Infrastructure → Docker Agents</strong>.</li><li>Select your agent and scroll down to the <strong>GPU Info</strong> section. Extend it. 
You should see your Nvidia GPU listed here.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qeHLmvk1eBkhdjbHD56jGA.png" /></figure><ul><li>Scroll further to <strong>Docker Info</strong> and extend it. Verify that the nvidia runtime is recognized by Kasm.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-9qeJNlJa3ii0N2MRx2Dmg.png" /></figure><p><strong>Install the </strong><a href="https://registry.kasmweb.com/"><strong>Kasm AI Registry</strong></a><strong> (Optional, but Recommended for AI/ML workflows):</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/1*pvxIPtXWsN6EAwGYf8csTQ.png" /></figure><ul><li>Go to the <strong>Registries</strong> tab in the Kasm admin UI.</li><li>If you are on Kasm version 1.17 or newer, the <strong>Kasm AI Registry</strong> should be visible in the “Registry Spotlight.” Click <strong>Install</strong>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uMBzyIPq0ik4VWpFuWEbuQ.png" /></figure><ul><li>This registry contains pre-configured workspaces with GPU support for various AI/ML tools.</li></ul><p><strong>Enable GPU for Workspaces:</strong></p><p>By default, all images from the Kasm AI Registry have GPU support enabled. For other workspaces (e.g., a standard Ubuntu desktop or Blender from the default registry), you need to enable GPU access manually:</p><ul><li>Go to <strong>Workspaces</strong>.</li><li>Find the workspace you want to configure, click its menu (three dots), and select <strong>Edit</strong>.</li><li>In the workspace settings, set the <strong>GPU Count</strong> to 1. 
You can increase this if you want to pass through multiple GPUs to a single session (though this implies multiple physical GPUs passed to the VM or vGPU configurations).</li><li>Save the changes.</li></ul><p><strong>Agent GPU Override (Advanced):</strong> You can configure the Kasm Agent to allow multiple container sessions to share the same physical GPU.</p><ul><li>Go to <strong>Infrastructure → Docker Agents</strong>.</li><li>Edit your agent.</li><li>Set the <strong>GPU Override</strong> setting. For example, setting it to 4 means up to four container sessions can attempt to use the same GPU.</li><li><strong>Note:</strong> This does not evenly distribute GPU resources. For fine-grained resource allocation, vGPU solutions are required.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/1*VYUOy9oeik1TGjvPJZCsXw.png" /></figure><h3>Step 5: Launching and Verifying GPU-Accelerated Workspaces</h3><p><strong>Basic GPU Test in a Standard Workspace:</strong></p><ul><li>Launch a workspace for which you’ve enabled GPU (e.g., a Kasm Ubuntu Desktop).</li><li>Open a terminal within the Kasm session and run nvidia-smi. You should see the GPU details, confirming the GPU is accessible within the container.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gQCHxo5ijK_7CHGI6Hv9Fw.png" /></figure><ul><li>You can also launch Firefox within the workspace and run WebGL browser tests (e.g., webglsamples.org/aquarium/aquarium.html) to confirm GPU-accelerated rendering. (Note: Chrome/Chromium-based browsers in Kasm might not have full GPU acceleration support for rendering at the time of writing; this is being worked on.)</li></ul><p><strong>Showcasing Kasm AI Registry Workspaces:</strong></p><p><strong>Easy Diffusion:</strong></p><ul><li>Install the “Easy Diffusion” workspace from the Kasm AI Registry.</li><li>Launch it. 
This workspace comes pre-loaded with all necessary tools.</li><li>The environment will set up, and Easy Diffusion’s web interface will automatically launch in a browser within the Kasm session. You can start generating images from text prompts, leveraging the GPU for fast processing.</li></ul><p><strong>CUDA-enabled PyTorch:</strong></p><ul><li>Install and launch the “CUDA-enabled PyTorch” workspace from the Kasm AI Registry.</li><li>This workspace includes PyTorch pre-installed.</li><li>Open a terminal or Python environment and verify PyTorch can detect the GPU:</li></ul><pre>import torch<br>print(f&quot;PyTorch version: {torch.__version__}&quot;)<br>print(f&quot;CUDA available: {torch.cuda.is_available()}&quot;)<br>if torch.cuda.is_available():<br>    print(f&quot;CUDA version: {torch.version.cuda}&quot;)<br>    print(f&quot;GPU Name: {torch.cuda.get_device_name(0)}&quot;)</pre><ul><li>This output will confirm that PyTorch is successfully utilizing the GPU.</li></ul><p><strong>Blender with GPU Acceleration:</strong></p><ul><li>The standard “Blender” workspace may not have GPU enabled by default.</li><li>Edit its settings as described in Step 4.3 to set “GPU Count” to 1.</li><li>Launch Blender.</li><li>Inside Blender, navigate to <strong>Edit → Preferences → System</strong>.</li><li>Under <strong>Cycles Render Devices</strong>, select <strong>CUDA</strong> or <strong>OptiX</strong> and ensure your Nvidia GPU is checked.</li><li>You can now use Blender for your VFX workflows with GPU-accelerated rendering in Cycles.</li></ul><h3>GPU Workloads</h3><p>By following these steps, you have successfully configured PCI passthrough for an Nvidia GPU on your Harvester hypervisor, installed the necessary drivers and runtime in your Kasm-hosting VM, and enabled 
GPU acceleration for Kasm Workspaces. Your Kasm deployment is now equipped to handle demanding GPU-accelerated workloads, providing a powerful and flexible container streaming solution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A90r-lR3J_HbBiAxQnsPNw.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8e4be366beb3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Kasm Smartcard Pass-Through for ChromeOS]]></title>
            <link>https://kasm.medium.com/kasm-smartcard-pass-through-for-chromeos-1ffba142c3f7?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/1ffba142c3f7</guid>
            <category><![CDATA[chrome-os]]></category>
            <category><![CDATA[vdi]]></category>
            <category><![CDATA[kasm]]></category>
            <category><![CDATA[chrome-extension]]></category>
            <category><![CDATA[chrome]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Sun, 18 May 2025 17:23:15 GMT</pubDate>
            <atom:updated>2025-05-18T17:23:15.812Z</atom:updated>
            <content:encoded><![CDATA[<iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fc2wBcbi3HaQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dc2wBcbi3HaQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fc2wBcbi3HaQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/45d92151ba1287fa1e8851e4ab62ea1e/href">https://medium.com/media/45d92151ba1287fa1e8851e4ab62ea1e/href</a></iframe><h3><strong>Seamless Security and Enhanced Productivity: Kasm Workspaces Now Offers Smartcard Passthrough for ChromeOS to Windows</strong></h3><p>Kasm Technologies is excited to announce a powerful new feature for Kasm Workspaces: Smartcard Passthrough for Windows Workspaces accessed from ChromeOS endpoints. This enhancement significantly boosts security and productivity for users who rely on smart cards for authentication, digital signing, and certificate-based operations within their virtualized Windows environments.</p><p>Leveraging RDP-based Kasm sessions, Kasm Workspaces now supports passing through physical smart card devices directly from a ChromeOS device into the Windows workspace. This means users can utilize their existing smart cards and readers just as they would on a local machine, eliminating the need for complex workarounds or reduced functionality in their virtual sessions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rYGaUtSutKbjhyCbXvVM7A.png" /></figure><p>The integration of smartcard access into your virtual desktop infrastructure (VDI) offers tangible productivity gains. Users can seamlessly perform tasks requiring smart card authentication, such as logging into secure applications, digitally signing documents, or accessing sensitive systems, directly from their Kasm session. 
This streamlined workflow saves time and reduces frustration, making the virtual workspace a more complete and efficient work environment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6OwGongnsYSlYjlHIR0QhQ.png" /></figure><p>Enabling this feature involves straightforward configuration steps on both the Kasm platform and the ChromeOS endpoint. Administrators can enable smartcard passthrough for specific user groups within the Kasm admin interface by adding the allow_kasm_rdp_smartcard_passthrough setting for the desired group (e.g., the default administrators group) and setting it to &#39;true&#39;.</p><p>On the ChromeOS device, users or administrators need to install a few key extensions from the Chrome Web Store to facilitate the passthrough:</p><ol><li><strong>DriveLock smart card middleware:</strong> This provides the necessary middleware for smart card interaction on ChromeOS.</li><li><strong>Smart Card Connector app:</strong> This app manages the connection between the smart card reader and the ChromeOS device. Users will need to grant the requested permissions upon installation.</li><li><strong>Kasm Smart Card Bridge extension:</strong> This extension specifically facilitates the communication between the ChromeOS smart card services and the Kasm RDP session. Again, users should allow the necessary permissions.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gfQ939lEpeTddf_DgSGLiQ.png" /></figure><p>Once these steps are completed, connecting to a Windows workspace in Kasm from the configured ChromeOS device should result in the automatic detection of the smart card within the Windows session. Users can then interact with their smart card as needed, including entering their PIN for authentication or operations. 
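</p><p>For a quick check from inside the Windows workspace, Windows’ built-in certutil utility can enumerate smart card readers and the certificates on the inserted card (run it in a command prompt; output varies by reader and middleware):</p><pre>certutil -scinfo</pre><p>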
The functionality can be easily tested using tools or websites designed to detect smart cards and their associated certificates within the Windows environment.</p><p>By enabling direct smartcard access, Kasm Workspaces removes a common barrier for organizations and users who require this layer of security for daily tasks. This feature underscores Kasm’s commitment to providing a flexible, secure, and productive virtual workspace experience across a variety of endpoints.</p><p>Detailed documentation is available at: <a href="https://www.kasmweb.com/docs/latest/guide/smartcard_passthrough.html">https://www.kasmweb.com/docs/latest/guide/smartcard_passthrough.html</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*-qn3aK6UTNcaA2du" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1ffba142c3f7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Launching Secure AI Innovation with the Kasm AI Workspace Registry]]></title>
            <link>https://kasm.medium.com/launching-secure-ai-innovation-with-the-kasm-ai-workspace-registry-5dca57607fdf?source=rss-755f84541f54------2</link>
            <guid isPermaLink="false">https://medium.com/p/5dca57607fdf</guid>
            <category><![CDATA[kasm]]></category>
            <category><![CDATA[llm-applications]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-experts]]></category>
            <category><![CDATA[ai-assisted-coding]]></category>
            <dc:creator><![CDATA[Kasm Technologies]]></dc:creator>
            <pubDate>Tue, 29 Apr 2025 16:02:45 GMT</pubDate>
            <atom:updated>2025-04-29T16:02:45.260Z</atom:updated>
            <content:encoded><![CDATA[<p>AI innovation needs more than just new models — it needs safe, fast, and scalable environments where experimentation can happen without risking data, security, or compliance.</p><p>That’s why we’re excited to introduce the <strong>Kasm AI Workspace Registry</strong>: a curated set of secure, containerized AI workspaces that make it easy to move fast without sacrificing control.</p><p>In this article, I’ll show you exactly <strong>how</strong> to set up the Kasm AI Workspace Registry inside your Kasm Workspaces 1.17+ environment and how to start deploying AI-ready workspaces like <strong>AnythingLLM</strong> and <strong>Easy Diffusion</strong> in minutes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N6UO7mJtM6G3Upe0Ta66tQ.png" /></figure><h3>About the New AI Workspaces: Noble Numbat and Jammy Jellyfish</h3><p>The new AI Workspaces are based on two Ubuntu Long Term Support (LTS) releases:</p><p><strong>Noble Numbat (24.04 LTS):</strong> The latest, featuring updated software packages, enhanced security, and a longer support lifespan.</p><p><strong>Jammy Jellyfish (22.04 LTS):</strong> The previous LTS release, still supported but with a shorter remaining lifecycle.</p><p>By leveraging these bases, our AI Workspaces offer <strong>newer features, more modern software versions</strong>, and a <strong>longer runway</strong> for enterprise deployments.</p><h3>What’s Inside the Kasm AI Workspace Registry?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/950/1*iDRgH0NkOhVPMdig6buUTQ.png" /></figure><p>Today, we’re kicking off the Registry with three initial AI Workspace types:</p><h3>AnythingLLM Kasm Workspace</h3><p>A powerful, private AI agent framework:</p><p><strong>Custom models:</strong> Bring your own models — run them locally or connect to OpenAI, Azure, AWS, and others.</p><p><strong>Document ingestion:</strong> PDFs, Word files, CSVs, even online docs.</p><p><strong>Privacy by 
design:</strong> Local defaults for LLMs, embedding, vector databases, and storage. Nothing leaves your environment unless you choose.</p><h3>Easy Diffusion Kasm Workspace</h3><p>A one-click install of <strong>Stable Diffusion</strong> for secure, local, text-to-image generation:</p><ul><li>Full installation of all necessary components.</li><li>Simple, browser-based UI to generate stunning AI images.</li></ul><h3>CUDA-Enabled Workspaces</h3><p>Lightweight, secure, PyTorch- and TensorFlow-configured Linux-based workspaces for development and experimentation.</p><h3>How to Install the Kasm AI Workspace Registry</h3><p>First, make sure you have <strong>Kasm Workspaces version 1.17</strong> installed. (This version introduces important updates for Workspace Registries.)</p><p>Here’s the step-by-step:</p><h3>1. Access the Workspace Registry</h3><ul><li>Log in to your Kasm Workspaces Admin Panel.</li><li>Navigate to <strong>Workspaces</strong> → <strong>Workspace Registry</strong>.</li><li>Click <strong>“Install”</strong> on the new Kasm AI Registry.</li></ul><p>Here’s what it looks like once the Registry is installed:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N1xxIXjsM3zIHSsSmzN--w.png" /></figure><h3>2. 
Browse and Deploy Workspaces</h3><p>Once added:</p><ul><li>Use the filter buttons under the Registry name to view available AI Workspaces.</li><li>Select <strong>AnythingLLM</strong>, <strong>Easy Diffusion</strong>, or <strong>KasmOS</strong>.</li><li>Click <strong>Deploy</strong> to add them to your Kasm Workspaces environment.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GCrifcBIG0p9gWbKxRWS7w.png" /></figure><h3>Configuring AnythingLLM with a Google Gemini API Key</h3><p>After deploying <strong>AnythingLLM</strong>, you can enhance its capabilities by integrating it with a <strong>Google Gemini API Key</strong>.</p><p>Here’s a quick guide:</p><ol><li>Log in to <a href="https://aistudio.google.com/prompts/new_chat">Google AI Studio</a> and get a Gemini API Key.</li><li>Launch your AnythingLLM workspace.</li><li>Navigate to the API configuration section inside the web interface.</li><li>Paste your <strong>Google Gemini API Key</strong> and choose your model.</li><li>Save the settings.</li></ol><p>You’re now ready to use a powerful, privacy-first AI agent with access to custom and public models!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FthQcyNX58Ng%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DthQcyNX58Ng&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FthQcyNX58Ng%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/bf5bcfc3e5e4d06830cb6a8c60e0ec2c/href">https://medium.com/media/bf5bcfc3e5e4d06830cb6a8c60e0ec2c/href</a></iframe><h3>Let’s Get You Experimenting</h3><p>By installing the Kasm AI Workspace Registry, you’re giving your developers, researchers, and knowledge workers a secure sandbox to:</p><p>✅ Train private LLMs and use AI-assisted development or CUA tools.<br> ✅ Explore text-to-image generation.<br> ✅ Build document-based AI applications. 
Use A2A frameworks or MCP.<br> ✅ Stay fully compliant and secure — at cloud speed.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5dca57607fdf" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>