<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://dfederm.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://dfederm.com/" rel="alternate" type="text/html" /><updated>2026-03-31T04:13:29+00:00</updated><id>https://dfederm.com/feed.xml</id><title type="html">David’s Blog</title><subtitle>Some guy&apos;s programming blog.</subtitle><entry><title type="html">Migrating my Homelab from TrueNAS to Proxmox</title><link href="https://dfederm.com/migrating-my-homelab-from-truenas-to-proxmox/" rel="alternate" type="text/html" title="Migrating my Homelab from TrueNAS to Proxmox" /><published>2026-03-30T00:00:00+00:00</published><updated>2026-03-30T00:00:00+00:00</updated><id>https://dfederm.com/migrating-my-homelab-from-truenas-to-proxmox</id><content type="html" xml:base="https://dfederm.com/migrating-my-homelab-from-truenas-to-proxmox/"><![CDATA[<p>For a while now I’ve been considering ditching <a href="https://www.truenas.com/truenas-community-edition/" target="_blank">TrueNAS Community Edition</a> (formerly TrueNAS Scale) for my homelab in favor of running on <a href="https://www.proxmox.com/" target="_blank">Proxmox</a>. While the <a href="https://forums.truenas.com/t/scale-build-git-repo-going-closed-source/64313" target="_blank">recent controversy</a> and <a href="https://forums.truenas.com/t/clearing-the-air-on-build-scripts/64357" target="_blank">dismissive response</a> were the final push I needed, I’ve been pulled in this direction for some time. One issue I have is with the instability and confusion around its VM and container story. Things which should be easy aren’t, like GPU passthrough to Jellyfin. And things which should be simple are way too complex, like SMB permissions. 
Honestly I’m probably not the right user for TrueNAS in the first place; I am trying to run my whole homelab on the hardware, not “just” a NAS with some extras.</p>

<p>Table of contents:</p>
<ul>
  <li><a href="#the-goal">The goal</a></li>
  <li><a href="#prepare-services">Prepare services</a></li>
  <li><a href="#back-up-everything">Back up everything</a></li>
  <li><a href="#install-proxmox-and-import-zfs">Install Proxmox and import ZFS</a></li>
  <li><a href="#set-up-infrastructure">Set up infrastructure</a></li>
  <li><a href="#deploy-and-restore-services">Deploy and restore services</a></li>
</ul>

<h2 id="the-goal">The goal</h2>

<p>The philosophy is that everything becomes scriptable and declarative. I don’t know how many times I’ve made some change to my homelab and then months or years later I can’t remember the specifics and have to figure out what I did to either extend or re-do it. This is why having the configuration be declarative is so important. This is a hobby not a job, and I will absolutely forget things in the sometimes months between tinkering sessions.</p>

<p>I also wanted everything to live in a git repo, both for sharing and to make sure it lived somewhere external to the homelab itself. The repo for my homelab is <a href="https://github.com/dfederm/homelab" target="_blank">hosted on GitHub</a>, so feel free to use it as a base or inspiration for your own homelab.</p>

<p>As for the high-level end goal, I decided to go with <a href="https://www.proxmox.com/en/products/proxmox-virtual-environment/overview" target="_blank">Proxmox VE</a> as the hypervisor, and <a href="https://en.wikipedia.org/wiki/LXC" target="_blank">LXC containers</a> for services instead of VMs. The ZFS pool is directly attached to the Proxmox host, and the LXCs use bind mounts to directly access the ZFS pool (no SMB middleman). Both the Docker host and the SMB server are LXCs. For <a href="https://www.home-assistant.io/" target="_blank">Home Assistant</a>, I used a full-blown VM since it requires its own OS, Home Assistant OS.</p>

<h2 id="prepare-services">Prepare services</h2>

<p>Proxmox has basic system monitoring built-in, but I wanted service-level monitoring and alerting too. I wanted to set this up <em>before</em> the migration so that I’d have visibility immediately after migrating.</p>

<p>I chose to add <a href="https://beszel.dev/" target="_blank">Beszel</a> for system monitoring and <a href="https://dozzle.dev/" target="_blank">Dozzle</a> for viewing Docker logs. I know there is some overlap between the two, but Beszel seems better tuned for monitoring and Dozzle for log viewing, so I decided to just have both. Together they use only ~55 MB of RAM in my setup, so there’s really no harm in running both. I was previously using <a href="https://www.portainer.io/" target="_blank">Portainer</a> for log viewing, but Dozzle is much more lightweight and simpler.</p>

<p>I also decided to add <a href="https://uptimekuma.org/" target="_blank">Uptime Kuma</a> to help monitor whether services were actually responding to requests, as opposed to simply running. The configuration is very simple and easy to set up, so it’s a nice addition to the homelab.</p>

<p>While I was adding and replacing services, I decided to take care of a few other things I had been meaning to do for a while but hadn’t gotten around to:</p>
<ul>
  <li>I added <a href="https://gethomepage.dev/" target="_blank">homepage</a> as a dashboard.</li>
  <li>I replaced a custom-built reverse proxy using <a href="https://github.com/dotnet/yarp" target="_blank">YARP</a> (why did I think that was a good idea?) with <a href="https://caddyserver.com/" target="_blank">Caddy</a>.</li>
  <li>I pinned all my docker image versions for determinism and stability and configured <a href="https://docs.renovatebot.com/" target="_blank">Renovate</a> to keep them updated.</li>
</ul>

<h2 id="back-up-everything">Back up everything</h2>

<p>As they say, safety first. My critical data is already <a href="/backing-up-truenas-scale-to-onedrive/">backed up properly</a>, so this is mostly about getting back into a good state if something goes wrong and I need to turn back.</p>

<p>First, save off the TrueNAS configuration. To do this, navigate to System -&gt; Advanced Settings -&gt; Manage Configuration and click Download File. Save the file to a safe location, <em>not</em> on the NAS. I just saved it to the laptop I was using to do the migration.</p>

<p><img src="/assets/images/migrating-homelab/truenas-save-config.png" alt="Saving TrueNAS configuration" class="center" /></p>

<p>Additionally, take note of the structure of your ZFS datasets, most importantly your ZFS pool name (the default is <code class="language-plaintext highlighter-rouge">tank</code>).</p>

<p>To ensure that SMB clients continue to work after migration, make sure you have all active SMB usernames and passwords, as you will need to set them manually on the new system as part of the migration.</p>

<p>I was running a VM on TrueNAS which was the Docker host for all my containers. As the goal is to move that to an LXC, I needed to back up all docker volumes I was using. Docker volumes live in the VM’s filesystem, not directly in ZFS. For the most part I used bind mounts to the NAS, which would survive the migration, but I did have a few volumes to keep. Going forward I plan on using bind mounts for (almost) everything, and the migration plan includes that.</p>

<p>You can list all volumes by SSH’ing into the VM and running <code class="language-plaintext highlighter-rouge">docker volume ls</code>. To back up a volume, you can use this command:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--rm</span> <span class="se">\</span>
  <span class="nt">-v</span> VOLUME_NAME:/source:ro <span class="se">\</span>
  <span class="nt">-v</span> /path/to/backup:/backup <span class="se">\</span>
  alpine <span class="nb">tar </span>czf /backup/VOLUME_NAME.tar.gz <span class="nt">-C</span> /source <span class="nb">.</span>
</code></pre></div></div>

<p>Replace <code class="language-plaintext highlighter-rouge">VOLUME_NAME</code> with the volume name and <code class="language-plaintext highlighter-rouge">/path/to/backup</code> with where you want your backups to go. A path within ZFS is fine as we will be importing the ZFS pool later in the migration and will be able to restore state from there.</p>

<p>Or use <a href="https://gist.github.com/dfederm/cb11bc9281d2af602d6058bceca0920e#file-backup-docker-volumes-sh" target="_blank">this script</a> to back up all volumes (usage: <code class="language-plaintext highlighter-rouge">./backup-docker-volumes.sh &lt;backup-dir&gt;</code>).</p>
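<p>The linked script essentially wraps the command above in a loop over every volume. Here’s a simplified sketch (the helper name is mine, and the real script differs in the details):</p>

```sh
# Simplified sketch: back up every named Docker volume to tarballs.
# Usage: back_up_all_volumes BACKUP_DIR
back_up_all_volumes() {
  backup_dir="$1"
  mkdir -p "$backup_dir"
  for volume in $(docker volume ls --quiet); do
    echo "Backing up $volume..."
    docker run --rm \
      -v "$volume":/source:ro \
      -v "$backup_dir":/backup \
      alpine tar czf "/backup/$volume.tar.gz" -C /source .
  done
}
```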

<p>I was also running a VM for Home Assistant. To take a backup, in the Home Assistant UI navigate to Settings -&gt; System -&gt; Backups and click Backup Now. Save it to the machine you’re orchestrating the migration from, for ease of restoring it later.</p>

<p>As a final backup step, create a ZFS snapshot. This allows you to roll back easily if something gets messed up during migration, like bad permissions or a service gets misconfigured and writes bad data.</p>

<p>To do this, open the TrueNAS shell by going to System Settings -&gt; Shell and type:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>zfs snapshot <span class="nt">-r</span> tank@migration
</code></pre></div></div>

<p>Replace <code class="language-plaintext highlighter-rouge">tank</code> with your ZFS pool name. <code class="language-plaintext highlighter-rouge">migration</code> can be any snapshot name you like. Note that taking a snapshot is instant and does not duplicate data, so this is a quick and easy operation.</p>
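<p>If you do end up needing the snapshot later, rolling back is one command per dataset. A hedged sketch (the helper is mine; be aware that <code class="language-plaintext highlighter-rouge">-r</code> here also destroys any snapshots taken <em>after</em> the one you roll back to):</p>

```sh
# Sketch: roll every dataset in the pool back to a snapshot.
# WARNING: destroys all changes (and any newer snapshots) made since then.
# Usage: roll_back_pool POOL SNAPSHOT_NAME
roll_back_pool() {
  pool="$1" snap="$2"
  for ds in $(zfs list -H -o name -r "$pool"); do
    zfs rollback -r "$ds@$snap"
  done
}
```

<p>For example, <code class="language-plaintext highlighter-rouge">roll_back_pool tank migration</code>.</p>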

<p>If you want to be extra safe, you can also do a scrub to verify data integrity and that no drives are currently failing. It takes a very long time (75 TiB took 7.5 hours for me), and TrueNAS regularly scrubs anyway, so this step is optional. If your last scrub was recent, I’d say just skip it.</p>

<p>To perform a scrub, in the TrueNAS shell run:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>zpool scrub <span class="nt">-w</span> tank
<span class="nb">sudo </span>zpool status tank
</code></pre></div></div>

<p>Replace <code class="language-plaintext highlighter-rouge">tank</code> with your ZFS pool name. Then wait until the scrub finishes. The status is visible in the web UI as well so you don’t have to keep the shell active.</p>

<p>That’s all the backing up I needed to do, but make sure anything else that’s not in the ZFS pool is completely backed up. I also cannot stress enough how important it is to have an external backup of all your critical data.</p>

<h2 id="install-proxmox-and-import-zfs">Install Proxmox and import ZFS</h2>

<p>This is it, the point of no return! The NAS will be offline from this point on until you finish the migration.</p>

<p>Download the Proxmox ISO and flash it to a USB drive using <a href="https://etcher.balena.io/" target="_blank">balenaEtcher</a> or similar USB flashing application. Shut down your VMs and TrueNAS and then install Proxmox on the boot drive. Make sure when installing Proxmox that you leave your storage drives with ZFS untouched!</p>

<p>The Proxmox graphical installer is pretty straightforward.</p>
<ul>
  <li>For the host name put something like <code class="language-plaintext highlighter-rouge">proxmox.local</code> or whatever you want to call the Proxmox host.</li>
  <li>Use a static IP with something like <code class="language-plaintext highlighter-rouge">192.168.1.2/24</code>, or whatever you prefer.</li>
  <li>Set both the gateway and DNS to your router’s IP address, usually <code class="language-plaintext highlighter-rouge">192.168.1.1</code>.</li>
  <li>Check “pin network interface names” so that your network interface names stay consistent across reboots. Without it, a name could change if you add/remove hardware, which would break the network bridge.</li>
  <li>The default names for the network interfaces are fine; mine were <code class="language-plaintext highlighter-rouge">nic0</code> (Ethernet) and <code class="language-plaintext highlighter-rouge">nic1</code> (WiFi). If you have both, definitely choose the wired connection as the management interface.</li>
</ul>

<p>After those inputs, it should take a few more minutes to finish installing Proxmox.</p>

<p>Once installed and booted up, you should immediately be able to ssh into it using the password you set up during the install:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh root@192.168.1.2
</code></pre></div></div>

<blockquote class="note">
  <p><strong>Note:</strong> I prepared for this quite a bit. I had a plan and scripts to help make the process smoother and faster, but I still ran into some issues along the way. You’ll want to account for extra time in debugging, and ensure your scripts are idempotent so that if you need to make fixes you can just run the whole script again instead of trying to figure out what needs to be done and what needs to be skipped. For example, git refused to operate on the repo because it was created on a different system, so I needed to add <code class="language-plaintext highlighter-rouge">git config --global --add safe.directory /path/to/repo</code> to the setup script. If you’re using my scripts, I did fix the issues I ran into but you’ll still want to be prepared to troubleshoot issues unique to your setup.</p>
</blockquote>

<p>Once SSH’d in, you will need to import the ZFS pool and fix some ZFS properties.</p>

<p>To import the pool, run:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>zpool import <span class="nt">-f</span> tank
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">-f</code> flag is needed because TrueNAS marked the pool as in use. As TrueNAS is shut down, it is safe to force the operation.</p>

<p>Verify the import worked by listing files on the pool. If you’re following along with my homelab repo run:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">ls</span> /tank/homelab/config/
</code></pre></div></div>

<p>This should list the <code class="language-plaintext highlighter-rouge">.env</code> files you expect to be there, one of which should match the name of your Proxmox host.</p>

<p>At this point you should run the <a href="https://gist.github.com/dfederm/c6b6fd886d55e48ccaa4291beeed3c45" target="_blank">migration script</a>:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash /tank/homelab/repo/scripts/migrate.sh
</code></pre></div></div>

<p>Adjust the path to match your pool and repo layout. The migration script performs some one-time migration steps, calls the setup script internally, then handles a few final one-time tasks. Although the migration script handles the full remainder of the migration, we’ll walk through what it does step-by-step.</p>

<p>The first thing the script does is run these three commands for <em>each</em> dataset:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>zfs <span class="nb">set </span><span class="nv">acltype</span><span class="o">=</span>posixacl tank/dataset
zfs <span class="nb">set </span><span class="nv">aclmode</span><span class="o">=</span>passthrough tank/dataset
zfs <span class="nb">set </span><span class="nv">xattr</span><span class="o">=</span>sa tank/dataset
</code></pre></div></div>

<p>This is because TrueNAS uses NFSv4-style ACLs and its own permission management. Standard Linux tools (<code class="language-plaintext highlighter-rouge">chmod</code>, <code class="language-plaintext highlighter-rouge">setfacl</code>, <code class="language-plaintext highlighter-rouge">chown</code>) expect POSIX ACLs. Without all three, you’ll hit mysterious permissions failures like I did initially (fixed in the script now).</p>
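<p>In script form, that amounts to a loop over the datasets, roughly like this (a sketch; the real script may enumerate datasets differently):</p>

```sh
# Sketch: make every dataset in the pool friendly to standard Linux tooling.
# Usage: fix_acl_properties POOL
fix_acl_properties() {
  for ds in $(zfs list -H -o name -r "$1"); do
    zfs set acltype=posixacl "$ds"     # use POSIX ACLs instead of NFSv4-style
    zfs set aclmode=passthrough "$ds"  # don't let chmod discard the ACLs
    zfs set xattr=sa "$ds"             # store xattrs (where ACLs live) efficiently
  done
}
```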

<h2 id="set-up-infrastructure">Set up infrastructure</h2>
<p>Now that Proxmox is installed and ZFS is mounted, it’s time to create and configure the LXCs and VMs, including setting up SMB.</p>

<p>My <a href="https://github.com/dfederm/homelab/blob/main/scripts/setup/setup.sh" target="_blank">setup script</a> (which is called by the migration script above) runs a list of modules declared in a per-host <code class="language-plaintext highlighter-rouge">.env</code> file. My Proxmox host’s looks like:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">HOMELAB_SETUP_MODULES</span><span class="o">=</span><span class="s2">"configure-proxmox-repos install-tools configure-amdgpu configure-ssh create-lxcs create-vms"</span>
</code></pre></div></div>

<p>First up is configuring the Proxmox apt repositories (<code class="language-plaintext highlighter-rouge">configure-proxmox-repos</code>). By default, Proxmox is configured to use enterprise repos which require a paid subscription. <code class="language-plaintext highlighter-rouge">apt-get update</code> will return a 401 Unauthorized error, meaning installing or updating any packages is completely blocked. This is an unfortunate default, and every free Proxmox user has to reconfigure the apt repositories to replace the paid enterprise repositories with the free community ones.</p>
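<p>For reference, the fix boils down to something like this sketch (the paths and suite assume Proxmox VE 8 on Debian bookworm; adjust for your release, and the optional directory argument is only for illustration, defaulting to the real apt directory):</p>

```sh
# Sketch: replace the paid enterprise repos with the free no-subscription repo.
# Assumes Proxmox VE 8 on Debian bookworm.
configure_no_subscription_repos() {
  apt_dir="${1:-/etc/apt/sources.list.d}"
  rm -f "$apt_dir/pve-enterprise.list" "$apt_dir/ceph.list"
  echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > "$apt_dir/pve-no-subscription.list"
  apt-get update
}
```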

<p>The base Proxmox installation is pretty barebones, by design. Most functionality should be in an LXC or VM. However, we do need a small set of common tools for the homelab scripts and basic maintenance work when SSH’d into the host. The <code class="language-plaintext highlighter-rouge">install-tools</code> module installs these.</p>

<p>My machine has an AMD iGPU. Proxmox already includes the <code class="language-plaintext highlighter-rouge">amdgpu</code> kernel driver, so the <code class="language-plaintext highlighter-rouge">configure-amdgpu</code> module loads the kernel module and ensures it stays loaded across reboots.</p>

<p>Initially we used a password to SSH into the Proxmox host, but a more secure option is to use key-based authentication. The <code class="language-plaintext highlighter-rouge">configure-ssh</code> module turns off password authentication and deploys an authorized key file from ZFS. The reason for this file being on ZFS is so that it can be shared across all your hosts and can survive re-paves.</p>

<p>Next is what Proxmox is best at: creating LXCs and VMs (<code class="language-plaintext highlighter-rouge">create-lxcs</code> and <code class="language-plaintext highlighter-rouge">create-vms</code>). Like the modules list, LXC and VM definitions also live in the per-host <code class="language-plaintext highlighter-rouge">.env</code> file. My Proxmox host’s <code class="language-plaintext highlighter-rouge">.env</code> configures a NAS LXC, a Docker LXC, and a Home Assistant VM, and looks similar to this:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">####################</span>
<span class="c"># LXC Containers</span>
<span class="c">####################</span>
<span class="nv">HOMELAB_LXCS</span><span class="o">=</span><span class="s2">"NAS_LXC DOCKER_LXC"</span>

<span class="c"># NAS LXC</span>
<span class="nv">NAS_LXC_VMID</span><span class="o">=</span>100
<span class="nv">NAS_LXC_HOSTNAME</span><span class="o">=</span>nas
<span class="nv">NAS_LXC_IP</span><span class="o">=</span>192.168.1.4
<span class="nv">NAS_LXC_MEMORY_MIB</span><span class="o">=</span>512
<span class="nv">NAS_LXC_CORES</span><span class="o">=</span>1
<span class="nv">NAS_LXC_ROOTFS_GIB</span><span class="o">=</span>4
<span class="nv">NAS_LXC_NESTING</span><span class="o">=</span>0
<span class="nv">NAS_LXC_MP0</span><span class="o">=</span>/tank/homelab,mp<span class="o">=</span>/mnt/homelab
<span class="nv">NAS_LXC_MP1</span><span class="o">=</span>/tank/media,mp<span class="o">=</span>/mnt/media
<span class="nv">NAS_LXC_MP2</span><span class="o">=</span>/tank/share,mp<span class="o">=</span>/mnt/share

<span class="c"># Docker LXC</span>
<span class="nv">DOCKER_LXC_VMID</span><span class="o">=</span>101
<span class="nv">DOCKER_LXC_HOSTNAME</span><span class="o">=</span>docker
<span class="nv">DOCKER_LXC_IP</span><span class="o">=</span>192.168.1.5
<span class="nv">DOCKER_LXC_MEMORY_MIB</span><span class="o">=</span>16384
<span class="nv">DOCKER_LXC_CORES</span><span class="o">=</span>12
<span class="nv">DOCKER_LXC_ROOTFS_GIB</span><span class="o">=</span>32
<span class="nv">DOCKER_LXC_NESTING</span><span class="o">=</span>1
<span class="nv">DOCKER_LXC_GPU</span><span class="o">=</span>1
<span class="nv">DOCKER_LXC_MP0</span><span class="o">=</span>/tank/homelab,mp<span class="o">=</span>/mnt/homelab
<span class="nv">DOCKER_LXC_MP1</span><span class="o">=</span>/tank/media,mp<span class="o">=</span>/mnt/media
<span class="nv">DOCKER_LXC_MP2</span><span class="o">=</span>/tank/share,mp<span class="o">=</span>/mnt/share

<span class="c">####################</span>
<span class="c"># VMs</span>
<span class="c">####################</span>
<span class="nv">HOMELAB_VMS</span><span class="o">=</span><span class="s2">"HAOS_VM"</span>

<span class="c"># Home Assistant VM (hestia)</span>
<span class="nv">HAOS_VM_VMID</span><span class="o">=</span>102
<span class="nv">HAOS_VM_HOSTNAME</span><span class="o">=</span>homeassistant
<span class="nv">HAOS_VM_IP</span><span class="o">=</span>192.168.1.3
<span class="nv">HAOS_VM_MEMORY_MIB</span><span class="o">=</span>2048
<span class="nv">HAOS_VM_BIOS</span><span class="o">=</span>ovmf
<span class="nv">HAOS_VM_MACHINE</span><span class="o">=</span>q35
<span class="nv">HAOS_VM_OSTYPE</span><span class="o">=</span>l26
<span class="nv">HAOS_VM_AGENT</span><span class="o">=</span>1
<span class="nv">HAOS_VM_CORES</span><span class="o">=</span>2
<span class="nv">HAOS_VM_IMAGE</span><span class="o">=</span>/tank/homelab/images/HomeAssistant/haos_ova-17.1.qcow2
</code></pre></div></div>
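<p>Under the hood, each LXC definition becomes a <code class="language-plaintext highlighter-rouge">pct create</code> invocation. Here’s roughly what that looks like for the NAS LXC above, wrapped in a function for illustration (the template and storage names <code class="language-plaintext highlighter-rouge">local</code> and <code class="language-plaintext highlighter-rouge">local-zfs</code> are assumptions about your setup):</p>

```sh
# Sketch: roughly what create-lxcs runs for the NAS LXC defined above.
# Template and storage names are illustrative; adjust to your setup.
create_nas_lxc() {
  pct create 100 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname nas \
    --memory 512 \
    --cores 1 \
    --rootfs local-zfs:4 \
    --net0 name=eth0,bridge=vmbr0,ip=192.168.1.4/24,gw=192.168.1.1 \
    --mp0 /tank/homelab,mp=/mnt/homelab \
    --mp1 /tank/media,mp=/mnt/media \
    --mp2 /tank/share,mp=/mnt/share \
    --onboot 1
  pct start 100
}
```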

<p>Note the <code class="language-plaintext highlighter-rouge">DOCKER_LXC_GPU=1</code>, which causes the setup script to pass the GPU through to the LXC. It essentially adds these two lines to the LXC config:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
</code></pre></div></div>

<p>The first line allows the LXC to access the DRI devices (character devices with major number 226), and the second line bind-mounts <code class="language-plaintext highlighter-rouge">/dev/dri</code> from the host into the container. This is so much simpler than messing around with IOMMU/VFIO on TrueNAS, which I never actually got working.</p>

<p>Now that the LXCs are created, the setup script recursively sets them up using their host-specific <code class="language-plaintext highlighter-rouge">.env</code> files. My NAS’s has:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">HOMELAB_SETUP_MODULES</span><span class="o">=</span><span class="s2">"create-users install-tools configure-ssh install-samba set-share-permissions"</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">create-users</code> module creates users and groups for the homelab based on config. For configuration which applies to the entire homelab and is therefore shared across multiple hosts, there is a <code class="language-plaintext highlighter-rouge">common.env</code> which the scripts use in addition to the per-host files.</p>

<p>My <code class="language-plaintext highlighter-rouge">common.env</code> has something like this for the user config:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">HOMELAB_GROUPS</span><span class="o">=</span><span class="s2">"ADMIN ADULTS KIDS FAMILY"</span>
<span class="nv">ADMIN_GID</span><span class="o">=</span>1099
<span class="nv">ADULTS_GID</span><span class="o">=</span>1100
<span class="nv">KIDS_GID</span><span class="o">=</span>1101
<span class="nv">FAMILY_GID</span><span class="o">=</span>1102

<span class="nv">HOMELAB_USERS</span><span class="o">=</span><span class="s2">"DAVID SPOUSE KID1 KID2 KID3"</span>
<span class="nv">DAVID_UID</span><span class="o">=</span>1001
<span class="nv">DAVID_GROUPS</span><span class="o">=</span><span class="s2">"admin,adults,family"</span>
<span class="nv">SPOUSE_UID</span><span class="o">=</span>1002
<span class="nv">SPOUSE_GROUPS</span><span class="o">=</span><span class="s2">"adults,family"</span>
<span class="nv">KID1_UID</span><span class="o">=</span>1003
<span class="nv">KID1_GROUPS</span><span class="o">=</span><span class="s2">"kids,family"</span>
<span class="nv">KID2_UID</span><span class="o">=</span>1004
<span class="nv">KID2_GROUPS</span><span class="o">=</span><span class="s2">"kids,family"</span>
<span class="nv">KID3_UID</span><span class="o">=</span>1005
<span class="nv">KID3_GROUPS</span><span class="o">=</span><span class="s2">"kids,family"</span>
</code></pre></div></div>

<p>Note that in my case, my spouse is not technical so she’s not in the admin group. It’s not a trust issue so much as avoiding granting privileges where they aren’t needed and won’t be used.</p>
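<p>The module itself reduces to standard <code class="language-plaintext highlighter-rouge">groupadd</code>/<code class="language-plaintext highlighter-rouge">useradd</code> calls with pinned IDs so they match on every host. A minimal sketch (the helper names are mine, and SMB passwords are set separately later via <code class="language-plaintext highlighter-rouge">smbpasswd</code>):</p>

```sh
# Sketch: create a group and a user with pinned IDs (values come from common.env).
create_homelab_group() { groupadd --gid "$2" "$1"; }
create_homelab_user()  {
  # No login shell: these accounts exist for file ownership and SMB only.
  useradd --uid "$2" --groups "$3" --create-home --shell /usr/sbin/nologin "$1"
}
```

<p>For example, <code class="language-plaintext highlighter-rouge">create_homelab_group adults 1100</code> and <code class="language-plaintext highlighter-rouge">create_homelab_user david 1001 admin,adults,family</code>.</p>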

<p><code class="language-plaintext highlighter-rouge">install-tools</code> and <code class="language-plaintext highlighter-rouge">configure-ssh</code> work exactly as they did on the Proxmox host. <code class="language-plaintext highlighter-rouge">install-samba</code> installs <a href="https://www.samba.org/" target="_blank">Samba</a> and configures which shares exist and who can see them. <code class="language-plaintext highlighter-rouge">set-share-permissions</code> then defines permissions by setting POSIX ACLs on each user’s directories. The way I have it set up is:</p>

<ul>
  <li>Each user has full control in their own directory</li>
  <li>Adults have read-only access to other adults’ directories</li>
  <li>Adults have full control in kids’ directories (parental oversight)</li>
  <li>There is an <code class="language-plaintext highlighter-rouge">adults</code> dir where all adults have full access</li>
  <li>There is a <code class="language-plaintext highlighter-rouge">family</code> dir where everyone has full access</li>
  <li>Admins have full control everywhere</li>
</ul>
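<p>Expressed as POSIX ACLs, one adult’s personal directory ends up with something like the following (a sketch; the user/group names and exact permission bits are illustrative). The <code class="language-plaintext highlighter-rouge">-d</code> variant sets default ACLs so newly created files inherit the same rules:</p>

```sh
# Sketch: ACLs for one adult's personal directory.
# Usage: set_adult_dir_acls DIR OWNER
set_adult_dir_acls() {
  dir="$1" owner="$2"
  acl="u:$owner:rwx,g:adults:rX,g:admin:rwx,o::---"
  setfacl -R -m "$acl" "$dir"      # apply to existing files
  setfacl -R -d -m "$acl" "$dir"   # default ACLs: new files inherit
}
```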

<p>My docker host’s <code class="language-plaintext highlighter-rouge">.env</code> file has:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">HOMELAB_SETUP_MODULES</span><span class="o">=</span><span class="s2">"create-users install-tools configure-ssh install-docker configure-macvlan-bridge"</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">create-users</code>, <code class="language-plaintext highlighter-rouge">install-tools</code>, and <code class="language-plaintext highlighter-rouge">configure-ssh</code> all do the same as before. <code class="language-plaintext highlighter-rouge">install-docker</code> simply installs docker using the <a href="https://docs.docker.com/engine/install/debian/#install-using-the-repository" target="_blank">official method</a>, configures it to start on boot, and also starts it immediately. I’ll describe deploying the docker services and restoring the backed-up volumes later.</p>

<p>One of my docker services, <a href="https://adguard.com/en/adguard-home/overview.html" target="_blank">AdGuard Home</a>, needs a dedicated IP address since it’s a DNS resolver and needs to avoid conflicting with the host’s own DNS resolution. It does this by using a macvlan network, but due to how macvlan works at the Linux kernel level, a workaround is needed to ensure that the docker host can talk to its own macvlan container. This is what the <code class="language-plaintext highlighter-rouge">configure-macvlan-bridge</code> module does.</p>
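<p>The standard workaround is to give the host its own macvlan interface and route the container’s IP through it. A sketch of what the module does (interface names and IPs are illustrative):</p>

```sh
# Sketch: let the host reach a container on a macvlan network.
# Usage: setup_macvlan_shim PARENT_NIC SHIM_IP CONTAINER_IP
setup_macvlan_shim() {
  parent="$1" shim_ip="$2" container_ip="$3"
  ip link add macvlan-shim link "$parent" type macvlan mode bridge
  ip addr add "$shim_ip/32" dev macvlan-shim
  ip link set macvlan-shim up
  # Route traffic for the container through the shim instead of the parent NIC.
  ip route add "$container_ip/32" dev macvlan-shim
}
```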

<p>If you also have a DNS server running as a Docker service, one important thing to watch out for is to point your infrastructure (e.g., your LXCs and VMs) to your router for DNS instead of your DNS server. I have my router configured to use my DNS server as its primary, falling back to <code class="language-plaintext highlighter-rouge">1.1.1.1</code> when my DNS server is unavailable. Otherwise you hit a chicken-and-egg situation when bootstrapping the host that runs the DNS server.</p>

<p>The Home Assistant VM is also created by the script, but since HAOS needs to be restored from backup, we’ll cover that in the next section. One important gotcha I ran into is that the default SCSI controller type (<code class="language-plaintext highlighter-rouge">lsi</code>) doesn’t work with UEFI boot, which HAOS uses, and so <code class="language-plaintext highlighter-rouge">virtio-scsi-pci</code> is needed instead. The setup script uses this for all VMs since it has better performance and compatibility than the legacy <code class="language-plaintext highlighter-rouge">lsi</code> one.</p>
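<p>For illustration, creating the HAOS VM with the settings above looks roughly like this (a sketch; the storage name <code class="language-plaintext highlighter-rouge">local-zfs</code> and the imported disk’s name are assumptions about your setup):</p>

```sh
# Sketch: create the HAOS VM with UEFI boot and the virtio-scsi-pci controller.
# Storage name and the imported disk's name ("vm-102-disk-1") are illustrative.
create_haos_vm() {
  qm create 102 --name homeassistant \
    --memory 2048 --cores 2 \
    --bios ovmf --machine q35 --ostype l26 --agent 1 \
    --efidisk0 local-zfs:1 \
    --scsihw virtio-scsi-pci \
    --net0 virtio,bridge=vmbr0
  # Import the downloaded HAOS image and attach it as the boot disk.
  qm importdisk 102 /tank/homelab/images/HomeAssistant/haos_ova-17.1.qcow2 local-zfs
  qm set 102 --scsi0 local-zfs:vm-102-disk-1 --boot order=scsi0
}
```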

<h2 id="deploy-and-restore-services">Deploy and restore services</h2>

<p>With the infrastructure in place, now it’s time to get the services running and restore data.</p>

<p>The last thing the setup script does is deploy the docker services. Like with the other configs, the list of docker services to deploy is also config-driven. My docker host’s <code class="language-plaintext highlighter-rouge">.env</code> file has:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">HOMELAB_SERVICES</span><span class="o">=</span>dns,reverse-proxy,jellyfin,photos,files,backup,monitoring,homepage,dozzle,webhook
</code></pre></div></div>

<p>Each of those services corresponds to a folder in the homelab repo containing a <code class="language-plaintext highlighter-rouge">docker-compose.yml</code> file which defines the service.</p>

<p>The setup scripts are idempotent and should be re-run any time configuration changes. The rest of the <code class="language-plaintext highlighter-rouge">migrate.sh</code> script finishes off the one-time migration operations.</p>

<p>Next, the migration script fixes file ownership. TrueNAS uses its own UID numbering starting at 3000, while my config followed the Linux convention of starting at 1001. The files in ZFS still had the old TrueNAS UIDs baked into their metadata, so the script does a one-time recursive <code class="language-plaintext highlighter-rouge">chown</code> on each user’s data to remap from the old UIDs to the new ones. Until this is done, SMB users will not have proper access to any existing data.</p>
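<p>The remapping itself is essentially a <code class="language-plaintext highlighter-rouge">find</code> per user. A sketch (the helper is mine, and the specific old/new IDs are illustrative):</p>

```sh
# Sketch: re-own one user's files from their old TrueNAS UID to the new one.
# Usage: remap_uid OLD_UID NEW_UID DIR
remap_uid() {
  # -h re-owns symlinks themselves rather than their targets.
  find "$3" -uid "$1" -exec chown -h "$2" {} +
}
```

<p>For example, <code class="language-plaintext highlighter-rouge">remap_uid 3001 1001 /tank/share/david</code>, and similarly for groups with <code class="language-plaintext highlighter-rouge">-gid</code> and <code class="language-plaintext highlighter-rouge">chgrp</code>.</p>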

<p>Next the migration script stops all docker services and restores the volumes that were backed up earlier. It calls the <a href="https://gist.github.com/dfederm/cb11bc9281d2af602d6058bceca0920e#file-restore-docker-volumes-sh" target="_blank">restore-volumes.sh</a> script, which essentially does this on every backed up volume:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--rm</span> <span class="se">\</span>
  <span class="nt">-v</span> VOLUME_NAME:/target <span class="se">\</span>
  <span class="nt">-v</span> /path/to/backup:/backup:ro <span class="se">\</span>
  alpine sh <span class="nt">-c</span> <span class="s2">"find /target -mindepth 1 -delete &amp;&amp; tar xzf /backup/VOLUME_NAME.tar.gz -C /target"</span>
</code></pre></div></div>

<p>It then re-deploys all docker services.</p>

<p>Finally, there are a couple of manual steps to finish off the migration. The first is to restore SMB passwords for all users. This requires typing in each password, so it must be done manually. For each user, SSH into the NAS LXC and run <code class="language-plaintext highlighter-rouge">smbpasswd -a &lt;user&gt;</code> (or just use <code class="language-plaintext highlighter-rouge">pct exec</code> from the Proxmox SSH session, e.g. <code class="language-plaintext highlighter-rouge">pct exec &lt;lxc-id&gt; -- smbpasswd -a &lt;user&gt;</code>).</p>

<p>To restore Home Assistant, navigate to <code class="language-plaintext highlighter-rouge">http://&lt;ip&gt;:8123</code> and upload the backup you saved earlier. This is why we saved the backup to the machine we’re using to orchestrate the migration.</p>

<p>After that, the migration is finally complete! Test everything out and ensure it’s all working as expected. Once validated, you can do some post-migration cleanup to reclaim some space, such as deleting the TrueNAS system datasets (~35 GB in my case).</p>
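
<p>For the cleanup step, a sketch of what that looks like with ZFS (the pool name is a placeholder and the <code class="language-plaintext highlighter-rouge">.system</code> dataset path is an assumption based on TrueNAS defaults; list and verify before destroying anything):</p>

```shell
#!/bin/sh
# Inspect, then remove, the leftover TrueNAS system datasets.
# "tank" is a placeholder pool name; verify the output before destroying.
POOL=tank
if command -v zfs >/dev/null 2>&1; then
  # List candidates first -- TrueNAS keeps its system data under the pool.
  zfs list -r "$POOL" | grep -i system
  # Uncomment only after double-checking the list above:
  # zfs destroy -r "$POOL/.system"
else
  echo "zfs not found"
fi
```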

<p>So far I’ve found Proxmox much easier to manage day-to-day as everything is scriptable and declarative. I have full control and I can easily repave from scratch if needed. I’ve already made some additional post-migration changes like moving named volumes to ZFS bind mounts (for persistence) and setting up CI/CD deployments, both of which were much easier with the new system. Next up I’d like to beef up monitoring and improve my backup and recovery strategy.</p>]]></content><author><name></name></author><category term="Self Hosting" /><category term="Self Hosting" /><category term="NAS" /><category term="TrueNAS" /><category term="Proxmox" /><category term="ZFS" /><category term="Docker" /><summary type="html"><![CDATA[For a while now I’ve been considering ditching TrueNAS Community Edition (formerly TrueNAS Scale) for my homelab in favor of running on Proxmox. While the recent controversy and dismissive response were the final push I needed, I’ve been pulled in this direction for some time. One issue I have is with the instability and confusion around its VM and container story. Things which should be easy aren’t, like GPU passthrough to Jellyfin. And things which should be simple are way too complex, like SMB permissions. 
Honestly I’m probably not the right user for TrueNAS in the first place; I am trying to run my whole homelab on the hardware, not “just” a NAS with some extras.]]></summary></entry><entry><title type="html">Managing Home Assistant Config From GitHub</title><link href="https://dfederm.com/managing-home-assistant-config-from-github/" rel="alternate" type="text/html" title="Managing Home Assistant Config From GitHub" /><published>2026-02-17T00:00:00+00:00</published><updated>2026-02-17T00:00:00+00:00</updated><id>https://dfederm.com/managing-home-assistant-config-from-github</id><content type="html" xml:base="https://dfederm.com/managing-home-assistant-config-from-github/"><![CDATA[<p>Recently I wanted to do some refactoring of my collection of automations in <a href="https://www.home-assistant.io/" target="_blank">Home Assistant</a> which have grown and changed in various ways over the years and needed some TLC, including modernizing the structure like <a href="https://www.home-assistant.io/blog/2024/10/02/release-202410/#improved-yaml-syntax-for-automations" target="_blank"><code class="language-plaintext highlighter-rouge">triggers:</code>, <code class="language-plaintext highlighter-rouge">actions:</code>, and other syntax changes</a>, as well as consolidating and simplifying some similar triggers across multiple automations, for example creating a “Home Mode” concept. Historically I have just used the <a href="https://github.com/home-assistant/addons/blob/master/configurator/README.md" target="_blank">File editor app</a> (<a href="https://www.home-assistant.io/blog/2026/02/04/release-20262#add-ons-are-now-called-apps" target="_blank">apps are the new name for add-ons</a>), but that doesn’t scale for larger refactors, and also doesn’t provide the safety of being able to roll back without having to fully restore from backup.</p>

<h2 id="install-the-studio-code-server-app">Install the Studio Code Server App</h2>

<p>The first thing to do, and really what makes a huge difference on its own by greatly improving the editing experience, is installing the <a href="https://github.com/hassio-addons/addon-vscode" target="_blank">Studio Code Server</a> app. This app lets you run <a href="https://code.visualstudio.com/" target="_blank">Visual Studio Code</a> directly in the browser to edit your config files.</p>

<p>Installing the app is straightforward: navigate to Settings -&gt; Apps, click “Install app”, and search for “studio code server”. It should be under the Home Assistant Community Apps. No additional configuration of the app is required; just install it and start it.</p>

<p>The code server opens to <code class="language-plaintext highlighter-rouge">/config</code> which is your configuration folder, exactly the folder you want to manage.</p>

<p><img src="/assets/images/home-assistant-config-git/install-studio-code-server.png" alt="Studio Code Server in the App Store" class="center" /></p>

<blockquote class="note">
  <p><strong>Note:</strong> I recommend turning the app off when not using it as it can hog quite a bit of memory and is even reported to have <a href="https://community.home-assistant.io/t/studio-code-server-add-on-memory-usage/829842" target="_blank">memory leaks</a>. Hopefully these memory issues get fixed eventually, but for right now I find it easy enough to just turn it on when I need it and off when I am finished.</p>
</blockquote>

<h2 id="setting-up-git">Setting up Git</h2>

<p>Now to set up the git repo you’ll use to version control your changes.</p>

<blockquote class="warning">
  <p><strong>Warning:</strong> Avoid the Git pull app. I tried it and ended up having to restore from a backup. The app appears to completely replace your config directory with the git repo contents. This is not only risky, but you shouldn’t be hosting the entire config directory in git anyway (files like <code class="language-plaintext highlighter-rouge">secrets.yaml</code> and Home Assistant internal storage should be avoided), so I’m not entirely sure how this app is intended to be used.</p>
</blockquote>

<p>First, create a new git repo on whatever hosting platform you prefer. Technically this step is optional as you can completely manage the git repo on your Home Assistant server locally, but I prefer to have additional tools at my disposal and not always work within a browser. I chose to host my configuration on GitHub.</p>

<p>Next, back in Studio Code Server, generate an SSH key to authenticate to GitHub. The reason for using a deploy key instead of another authentication mechanism such as a Personal Access Token (PAT) is to scope the credentials to this one repo, following the principle of least privilege. To generate the key, run:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-keygen <span class="nt">-t</span> ed25519 <span class="nt">-C</span> <span class="s2">"home-assistant"</span> <span class="nt">-f</span> /data/.ssh/id_ed25519
</code></pre></div></div>

<p>Use an empty passphrase. This will create <code class="language-plaintext highlighter-rouge">/data/.ssh/id_ed25519</code> and <code class="language-plaintext highlighter-rouge">/data/.ssh/id_ed25519.pub</code>. Note that these files are being generated in <code class="language-plaintext highlighter-rouge">/data/.ssh/</code> instead of the default location of <code class="language-plaintext highlighter-rouge">~/.ssh/</code>. This is because apps run in Docker containers, so anything written to the container filesystem would be lost on app restart. However, the Studio Code Server app symlinks <code class="language-plaintext highlighter-rouge">/data/.ssh/</code> (which is persistent) to <code class="language-plaintext highlighter-rouge">/root/.ssh/</code>, effectively persisting the SSH keys.</p>

<blockquote class="note">
  <p><strong>Note:</strong> Unfortunately, as of writing the Studio Code Server app has a <a href="https://github.com/hassio-addons/addon-vscode/issues/1066" target="_blank">bug</a> where <code class="language-plaintext highlighter-rouge">/data/.ssh</code> is mistakenly mapped to <code class="language-plaintext highlighter-rouge">/root/.ssh/.ssh</code> instead of the intended location. The workaround is to run <code class="language-plaintext highlighter-rouge">cp /root/.ssh/.ssh/* /root/.ssh/</code> every time the app restarts. I <a href="https://github.com/hassio-addons/addon-vscode/pull/1098" target="_blank">submitted a PR</a> to fix this issue.</p>
</blockquote>

<p>Now add the public key to GitHub. Go to your repository’s Settings -&gt; Deploy keys, and click “Add deploy key”. Paste in the public key from running the following command:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cat</span> /data/.ssh/id_ed25519.pub 
</code></pre></div></div>

<p>Ensure “Allow write access” is checked if you’d like to push changes from Home Assistant to the GitHub repository.</p>

<p><img src="/assets/images/home-assistant-config-git/github-add-deploy-key.png" alt="Adding a Deploy Key in GitHub" class="center" /></p>

<p>To ensure you’ve set up authentication correctly, run the following command:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh <span class="nt">-T</span> git@github.com
</code></pre></div></div>

<p>The first time you run it you will be asked to confirm that you want to connect. Once confirmed, you should see a message like “Hi repoName! You’ve successfully authenticated, but GitHub does not provide shell access.”</p>

<p>Now the local git repo needs to be set up. From <code class="language-plaintext highlighter-rouge">/config</code>, run:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git init
git branch <span class="nt">-m</span> main
git remote add origin git@github.com:yourname/yourrepo.git
git config user.name <span class="s2">"Your Name"</span>
git config user.email <span class="s2">"your.email@example.com"</span>
</code></pre></div></div>

<p>Replace the placeholders as appropriate. Note that the <code class="language-plaintext highlighter-rouge">git config</code> commands intentionally omit <code class="language-plaintext highlighter-rouge">--global</code> so the settings persist in the repo’s <code class="language-plaintext highlighter-rouge">.git/config</code> rather than the container’s home directory, which isn’t preserved across container restarts.</p>

<p>I would not recommend committing the entirety of your config directory to GitHub, so you will want to add a <code class="language-plaintext highlighter-rouge">.gitignore</code> file:</p>

<pre><code class="language-gitignore"># Home Assistant
.HA_VERSION
.ha_run.lock
.storage/
.cloud/
tts/

# Databases
*.db
*.db-shm
*.db-wal
pyozw.sqlite

# Logs
*.log
*.log.fault
OZW_Log.txt

# Python
__pycache__/
deps/

# Node
node_modules/

# macOS
.DS_Store
._*

# NFS
.nfs*

# Trash
.Trash-0/

# Secrets and credentials
secrets.yaml

# HACS
custom_components/
</code></pre>

<p>Double-check that only the files you want tracked are tracked and adjust the <code class="language-plaintext highlighter-rouge">.gitignore</code> as needed. Then commit and push the changes:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git add <span class="nb">.</span>
git commit <span class="nt">-m</span> <span class="s2">"Initial commit"</span>  
git push <span class="nt">-u</span> origin main
</code></pre></div></div>

<p>From there, just commit, push, and pull as you would any other git repo.</p>

<h2 id="migrating-dashboards-to-yaml-optional">Migrating Dashboards to yaml (optional)</h2>

<p>If you’d like to get the same benefits of version control for your dashboards, you can migrate them to yaml as well. However, using yaml-based dashboards does impose some limitations. Notably, you can no longer edit dashboards through the UI. Because of this, migrating dashboards to yaml may not be for everyone.</p>

<p>If you would like to migrate your dashboards, add the following to your <code class="language-plaintext highlighter-rouge">configuration.yaml</code>:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">lovelace</span><span class="pi">:</span>
  <span class="c1"># Use resource_mode to load resources from YAML</span>
  <span class="na">resource_mode</span><span class="pi">:</span> <span class="s">yaml</span>
  <span class="na">dashboards</span><span class="pi">:</span>
    <span class="na">dashboard-overview</span><span class="pi">:</span>
      <span class="na">mode</span><span class="pi">:</span> <span class="s">yaml</span>
      <span class="na">filename</span><span class="pi">:</span> <span class="s">dashboards/overview.yaml</span>
      <span class="na">title</span><span class="pi">:</span> <span class="s">Overview</span>
      <span class="na">icon</span><span class="pi">:</span> <span class="s">mdi:view-dashboard</span>
      <span class="na">show_in_sidebar</span><span class="pi">:</span> <span class="no">true</span>
</code></pre></div></div>

<p>Note that the dashboard keys must contain a <code class="language-plaintext highlighter-rouge">-</code> for some reason. I only have one dashboard, but if you have multiple, just add more siblings to <code class="language-plaintext highlighter-rouge">dashboard-overview</code>. To get the yaml for individual dashboards which don’t currently use yaml, edit them in the UI and then, under the “…” menu, there should be an option for the Raw configuration editor. Copy and paste that yaml into a yaml file. I put mine under <code class="language-plaintext highlighter-rouge">dashboards/</code> as in the example above.</p>

<p>Refer to the <a href="https://www.home-assistant.io/dashboards/dashboards/#adding-yaml-dashboards" target="_blank">Home Assistant docs</a> for a full list of configuration values.</p>]]></content><author><name></name></author><category term="Home Automation" /><category term="automation" /><category term="git" /><category term="home assistant" /><category term="home automation" /><category term="smart home" /><summary type="html"><![CDATA[Recently I wanted to do some refactoring of my collection of automations in Home Assistant which have grown and changed in various ways over the years and needed some TLC, including modernizing the structure like triggers:, actions:, and other syntax changes, as well as consolidating and simplifying some similar triggers across multiple automations, for example creating a “Home Mode” concept. Historically I have just used the File editor app (apps are the new name for add-ons), but that doesn’t scale for larger refactors, and also doesn’t provide the safety of being able to roll back without having to fully restore from backup.]]></summary></entry><entry><title type="html">How to rip Blu-rays and watch on Jellyfin</title><link href="https://dfederm.com/how-to-rip-blurays-and-watch-on-jellyfin/" rel="alternate" type="text/html" title="How to rip Blu-rays and watch on Jellyfin" /><published>2026-01-21T00:00:00+00:00</published><updated>2026-01-21T00:00:00+00:00</updated><id>https://dfederm.com/how-to-rip-blurays-and-watch-on-jellyfin</id><content type="html" xml:base="https://dfederm.com/how-to-rip-blurays-and-watch-on-jellyfin/"><![CDATA[<p>Previously I wrote about <a href="/building-a-nas-and-media-server-for-under-500/">building a NAS and Media Server for under $500</a> which explains how to set up a NAS and configure a Jellyfin server (although I’ve since moved away from the TrueNAS “App” in favor of a docker container). 
This post is a complete guide for how to populate your media library by ripping your Blu-ray discs onto your NAS so that you can watch on Jellyfin.</p>

<p>Table of Contents:</p>
<ul>
  <li><a href="#why">Why?</a></li>
  <li><a href="#where-to-get-blu-rays">Where to get Blu-rays</a></li>
  <li><a href="#optical-drive">Optical Drive</a></li>
  <li><a href="#makemkv">MakeMKV</a>
    <ul>
      <li><a href="#basics-of-ripping">Basics of Ripping</a></li>
      <li><a href="#extras-and-special-features">Extras and Special Features</a></li>
    </ul>
  </li>
  <li><a href="#re-encoding">Re-encoding</a></li>
  <li><a href="#library-organization">Library Organization</a></li>
  <li><a href="#parental-controls">Parental Controls</a></li>
  <li><a href="#legal-disclaimer-and-ethics">Legal Disclaimer and Ethics</a></li>
</ul>

<h2 id="why">Why?</h2>
<p>The first question I usually get is: why? Why rip your Blu-ray discs in the first place? Why not just stream? I have two primary reasons: ownership and quality.</p>

<p>For my point on ownership, if you’ve heard the phrase <a href="https://en.wikipedia.org/wiki/You'll_own_nothing_and_be_happy" target="_blank">“You’ll own nothing and be happy”</a>, you know what I mean. Essentially, when you stream, you don’t own any of the movies or shows. The streaming platform may choose at their discretion to remove anything they want, or, more likely, their contracts expire. Owning physical media means that you, well, own it. No one can take it away from you, and you can watch it whenever you want. There are some fringe benefits to this as well, such as avoiding data usage (for those with data caps), avoiding buffering, and being able to watch media when you don’t have internet, whether during an outage or, for example, on a road trip, although the latter requires some extra steps. I would also argue that this saves cost over time by not paying for <a href="https://www.latimes.com/entertainment-arts/business/story/2025-11-21/why-do-streaming-prices-keep-rising-disney-netflix-paramount-what-to-know" target="_blank">streaming subscriptions</a>, and at the risk of sounding like a privacy nut, it also shields you from the <a href="https://privacy.commonsense.org/evaluation/Netflix" target="_blank">tracking of the media you consume</a>.</p>

<p>My personal library that I’ve built up over the last 2 years has over 400 movies and over 1000 episodes of TV. At that size, there is plenty of variety and I never find myself without something to watch. In fact, I’ve only actually watched around one-third of the movies I currently have. I have a lot to catch up on!</p>

<p><img src="/assets/images/ripping/jellyfin-stats.png" alt="My Jellyfin Library stats" class="center" /></p>

<p>In terms of quality, Blu-ray discs, and especially <a href="https://en.wikipedia.org/wiki/Ultra_HD_Blu-ray" target="_blank">UHD Blu-ray</a>, are <a href="https://www.reddit.com/r/Bluray/comments/18tzajt/how_much_better_is_bluray_from_streaming_on_a_4k/" target="_blank">much higher quality</a>. It mostly comes down to bitrate, as the streaming services try to keep costs down by compressing the video and/or using a lower bitrate to avoid having to send tens of gigabytes over the internet just for one user watching one movie. For reference, most Blu-ray movies are around 40 GB and most UHD Blu-ray movies are around 70 GB. It’s totally infeasible to stream that much data, so there clearly must be compromises on quality. Additionally, if you have a surround sound setup for your home theater, the audio quality is significantly better with physical media.</p>

<h2 id="where-to-get-blu-rays">Where to get Blu-rays</h2>
<p>I find the best place to get cheap Blu-rays is from thrift stores like <a href="https://www.goodwill.org/" target="_blank">Goodwill</a> or <a href="https://www.valuevillage.com/" target="_blank">Value Village</a>. I’ve had even better luck at local thrift stores. I can typically get Blu-rays for around $3 each, but I do find that pricing varies wildly from store to store so your mileage may vary. Obviously selection is another problem, but in my opinion the hunt is part of the fun. My wife and I seek out nearby thrift stores whenever we’re in a different area of town, and we check back periodically at places we’ve already been to see if there are some new gems.</p>

<p>Anecdotally, the stars aligned one day for me at a local thrift store which had a half-off sale combined with a seemingly huge drop of Blu-rays they got, so I walked out with 30 Blu-rays for around $70. One was even a 4k Blu-ray! I’m still chasing that high to this day.</p>

<p><img src="/assets/images/ripping/thrifting-haul.jpeg" alt="Thrifting Haul" class="center" /></p>

<p>One thing to watch out for when thrifting though is that you will want to check the condition of the disc before purchasing. Sometimes there will be smudges which can simply be cleaned, but other times the disc may be quite scratched or even missing entirely.</p>

<p>If you’re not into thrifting or you really want a specific piece of media you haven’t been able to find elsewhere, <a href="https://www.blu-ray.com/" target="_blank">Blu-ray.com</a> is a good source, as well as <a href="https://amzn.to/4aGWXYu" target="_blank" rel="nofollow">Amazon</a> of course. Paying retail prices can be justified if you don’t have streaming services. Just put the money you’d typically spend on subscriptions towards buying physical media instead, and over time you can build up quite the library.</p>

<p>Here are some of my favorites which are relatively cheap, at least at the time of writing:</p>
<ul>
  <li><a href="https://amzn.to/4pFHdti" target="_blank" rel="nofollow">The Lord of the Rings Trilogy</a></li>
  <li><a href="https://amzn.to/3XIn5dZ" target="_blank" rel="nofollow">Harry Potter Collection</a></li>
  <li><a href="https://amzn.to/44ifYfX" target="_blank" rel="nofollow">Dark Knight Trilogy</a></li>
  <li><a href="https://amzn.to/4rOP2hY" target="_blank" rel="nofollow">Interstellar</a></li>
  <li><a href="https://amzn.to/3KIkU7g" target="_blank" rel="nofollow">The Matrix</a></li>
  <li><a href="https://amzn.to/48KqSwa" target="_blank" rel="nofollow">Edge of Tomorrow</a></li>
  <li><a href="https://amzn.to/49PJooN" target="_blank" rel="nofollow">Jurassic Park</a></li>
  <li><a href="https://amzn.to/4q3XBUw" target="_blank" rel="nofollow">Oppenheimer</a></li>
  <li><a href="https://amzn.to/4pxPe3J" target="_blank" rel="nofollow">Terminator 2</a></li>
  <li><a href="https://amzn.to/44Ud1T1" target="_blank" rel="nofollow">Saving Private Ryan</a></li>
  <li><a href="https://amzn.to/3MQy16R" target="_blank" rel="nofollow">1917</a></li>
  <li><a href="https://amzn.to/4pB7RDY" target="_blank" rel="nofollow">Pulp Fiction</a></li>
  <li><a href="https://amzn.to/4pVEls3" target="_blank" rel="nofollow">Inglourious Basterds</a></li>
</ul>

<h2 id="optical-drive">Optical Drive</h2>
<p>To rip Blu-ray discs you need an optical drive for your computer. For regular Blu-ray discs, or DVDs for that matter, any old optical drive that can read those discs will do. However, for UHD Blu-ray AKA 4k Blu-ray you need special firmware flashed to the drive. While there is a detailed <a href="https://forum.makemkv.com/forum/viewtopic.php?f=16&amp;t=19634" target="_blank">flashing guide</a>, personally I recommend buying a pre-flashed drive for the peace of mind and convenience. I got a Pioneer BDR-212 from <a href="https://forum.makemkv.com/forum/viewtopic.php?f=20&amp;t=17831" target="_blank">Billycar11</a> and can attest to his legitimacy, professionalism, and promptness.</p>

<h2 id="makemkv">MakeMKV</h2>
<p>The best software to use for ripping is <a href="https://www.makemkv.com/" target="_blank">MakeMKV</a>. It extracts the video files from the disc and remuxes them into <a href="https://en.wikipedia.org/wiki/Matroska" target="_blank">mkv</a> files. This file format is widely supported and because it’s a remux as opposed to re-encoding, the video files are the exact same quality as on the disc.</p>

<p>Once you have a few rips under your belt, I highly recommend <a href="https://www.makemkv.com/buy/" target="_blank">buying MakeMKV</a>. It’s not strictly necessary, but it’s a lifetime license as opposed to a subscription, and I think it’s important to support the software you use and enjoy to ensure that it continues to receive updates and support.</p>

<h3 id="basics-of-ripping">Basics of Ripping</h3>
<p>First you need to open the disc in MakeMKV and select the video files you want. For some background, Blu-ray discs contain smaller video segments and then playlists of those segments which represent one semantic video file. This allows for deduplication of content for video files with identical parts. For example, a disc containing both theatrical and extended versions doesn’t need two complete copies of the film, which wouldn’t fit at full quality anyway, because most of the content is shared and only certain scenes differ between versions.</p>

<p>The reason why this matters when ripping is that many of the files you can select from in MakeMKV are irrelevant or not useful, so you should only select what you want. Some people may want just the movie, but personally I also rip the special features. This does make selection slightly more involved, but I’ll go into that more in a later section.</p>

<p><img src="/assets/images/ripping/makemkv-selection.png" alt="Selecting video files to rip in MakeMKV" class="center" /></p>

<p>In some cases, studios try to deter ripping by creating several, or sometimes even hundreds of, playlists with different segment maps. The bad playlists produce an incomplete and/or out-of-order video. Sometimes you can figure out the correct one by comparing the movie’s runtime to the playlist duration, but most of the time it’s easier to just do an online search, as the community has already figured out the correct one. Usually a search for the movie name with “MakeMKV” at the end is sufficient to find what you need. Anecdotally, I’ve also noticed that for some discs which have only a couple of options (presumably not deliberately spamming playlists), if one of those options is <code class="language-plaintext highlighter-rouge">00800.mpls</code> and the other is similar to <code class="language-plaintext highlighter-rouge">00801.mpls</code>, the 800 one is the correct one.</p>

<p>If you only want the movie, then selecting the correct single file is all you need to do and then you can start the rip! A short while later you will have the video file(s) ready to copy to your NAS. A Blu-ray takes around 45 minutes to rip, while a 4k Blu-ray can take up to an hour and a half, but these times can vary based on drive speed.</p>

<p><img src="/assets/images/ripping/makemkv-progress.png" alt="Progress indicator while MakeMKV is ripping" class="center" /></p>

<h3 id="extras-and-special-features">Extras and Special Features</h3>

<p>If you’re like me and want to also rip and organize the special features, then selection becomes a bit more involved, and you’ll need to identify what each video file actually is.</p>

<p>When selecting what to rip, there are many useless video files on the disc that you will want to skip. Blank segments, videos related to the Blu-ray menus, piracy warnings, and production logos are all worth filtering out. However, because you can’t see the videos as part of the selection, usually I just filter out very short videos (less than 10 seconds) and alternate versions of the movie, and sort the rest out after the rip. MakeMKV has a setting which can help filter out short videos, and you may even want to set it as high as a minute.</p>
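
<p>If you prefer the command line, MakeMKV also ships a CLI, <code class="language-plaintext highlighter-rouge">makemkvcon</code>, whose <code class="language-plaintext highlighter-rouge">--minlength</code> option does the same short-title filtering. A rough sketch (the disc index and output path are placeholders for your setup):</p>

```shell
#!/bin/sh
# Rip all titles longer than MINLEN seconds from the first optical drive.
# disc:0 and the output directory are placeholders; adjust as needed.
MINLEN=60
OUTDIR=/mnt/rips/staging
if command -v makemkvcon >/dev/null 2>&1; then
  makemkvcon --minlength="$MINLEN" mkv disc:0 all "$OUTDIR"
else
  echo "makemkvcon not found; install MakeMKV first"
fi
```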

<p>Once the rip is complete and you have a superset of the video files you want, you can use a combination of two approaches to identify what the various files are. The first step is to simply play the video on your computer with <a href="https://www.videolan.org/vlc/" target="_blank">VLC</a> or another player that supports MKV files. Many of the junk files will be immediately obvious and can be deleted.</p>

<p>Once most of the junk is filtered out, you can use <a href="https://www.dvdcompare.net/" target="_blank">DVDCompare</a> to help identify the special features. You can search for a movie and then it will show the special feature titles and lengths which you can then match with the durations on the video files you have and rename your files to match the name of the special feature. It’s not always perfect and may take some manual scrubbing of the videos to figure out which is which, but with some experience it shouldn’t take more than a few minutes to sort through.</p>

<h2 id="re-encoding">Re-encoding</h2>
<p>I prefer to store the full quality videos, however that can take up quite a bit of storage. My current library of around 400 movies takes about 15 TB. One option to save on storage costs would be to re-encode the videos into another format, e.g. HEVC, using a tool like <a href="https://handbrake.fr/" target="_blank">HandBrake</a>.</p>
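
<p>For those who do want to trade quality for space, a minimal HandBrake CLI sketch (the file names and RF quality value are assumptions; lower RF means higher quality and larger files):</p>

```shell
#!/bin/sh
# Re-encode a remuxed rip to HEVC (x265) to save space.
# File names and the RF quality value are assumptions; tune to taste.
IN="Movie (1999).mkv"
OUT="Movie (1999) - hevc.mkv"
if command -v HandBrakeCLI >/dev/null 2>&1; then
  HandBrakeCLI -i "$IN" -o "$OUT" \
    --encoder x265 --quality 20 \
    --all-audio --all-subtitles
else
  echo "HandBrakeCLI not found"
fi
```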

<p>As I prefer the highest possible quality, I do not re-encode and so I won’t go into more detail for that.</p>

<h2 id="library-organization">Library Organization</h2>
<p>I have a root media share on my NAS with 2 subdirectories, Movies and TV. I have these mapped into the Jellyfin docker container as separate folders, <code class="language-plaintext highlighter-rouge">/movies</code> and <code class="language-plaintext highlighter-rouge">/shows</code>, and each is added as its own library in Jellyfin with the associated content type.</p>

<p><img src="/assets/images/ripping/jellyfin-add-library.png" alt="Adding a library to Jellyfin" class="center" /></p>

<p>Jellyfin does have good <a href="https://jellyfin.org/docs/general/server/media/movies" target="_blank">documentation</a> for file organization within a library, but here’s the gist.</p>

<p>For movies, I have a folder for each with the name of the movie and the year of its release, e.g. “Willy Wonka and the Chocolate Factory (1971)”. This naming is human-readable while still being enough information for Jellyfin to find the metadata for it. Inside the folder I have the video file for the movie itself with the same name as the folder. If I have multiple versions of a movie, for example a 1080p version and a 2160p (4k) version, I’ll add a suffix to the name, so for example “Willy Wonka and the Chocolate Factory (1971) - 2160p.mkv”. I then put the special features in subdirectories which correspond to their type. I find classifying special features to be a judgement call in many cases, and ultimately whatever organization works for you is best.</p>

<p>A complete example for how I organize a complete title is as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>movies/
└── Willy Wonka and the Chocolate Factory (1971)/
    ├── Willy Wonka and the Chocolate Factory (1971) - 1080p.mkv
    ├── Willy Wonka and the Chocolate Factory (1971) - 2160p.mkv
    ├── behind the scenes/
    │   ├── Pure Imagination - The Story of Willy Wonka &amp; the Chocolate Factory.mkv
    │   └── Tasty Vintage.mkv
    ├── extras/
    │   └── 4 Scrumptious sing-along songs.mkv
    └── trailers/
        └── Theatrical Trailer.mkv
</code></pre></div></div>

<p>For TV Shows, I have a folder for each show with the show name, then season subfolders containing episodes named with the standard <code class="language-plaintext highlighter-rouge">ShowName_S##E##_EpisodeName</code> format. Show-level extras go in an <code class="language-plaintext highlighter-rouge">extras</code> folder at the show level, while season-specific extras can go in an <code class="language-plaintext highlighter-rouge">extras</code> folder within each season folder.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shows/
└── Batman - The Animated Series/
    ├── extras/
    │   ├── Arkham Asylum.mkv
    │   ├── Batman - The Legacy Continues.mkv
    │   ├── Concepting Harley Quinn.mkv
    │   ├── The Heart of Batman.mkv
    │   └── ...
    ├── Season 1/
    │   ├── Batman - The Animated Series_S01E01_On Leather Wings.mkv
    │   ├── Batman - The Animated Series_S01E02_Christmas with the Joker.mkv
    │   ├── Batman - The Animated Series_S01E03_Nothing to Fear.mkv
    │   ├── ...
    │   └── extras/
    │       ├── A Conversation with the Director - On Leather Wings.mkv
    │       ├── A Conversation with the Director - Christmas with the Joker.mkv
    │       └── ...
    ├── Season 2/
    │   ├── Batman - The Animated Series_S02E01_Sideshow.mkv
    │   ├── Batman - The Animated Series_S02E02_A Bullet for Bullock.mkv
    │   └── ...
    └── Season 3/
        ├── Batman - The Animated Series_S03E01_Holiday Knights.mkv
        ├── Batman - The Animated Series_S03E02_Sins of the Father.mkv
        └── ...
</code></pre></div></div>
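<p>Episode names in the <code class="language-plaintext highlighter-rouge">ShowName_S##E##_EpisodeName</code> format can likewise be generated rather than typed by hand. A minimal sketch (the <code class="language-plaintext highlighter-rouge">episode_path</code> helper is hypothetical):</p>

```shell
#!/bin/sh
# Sketch: build an episode path in the ShowName_S##E##_EpisodeName format
# used above. "episode_path" is a hypothetical helper for illustration.
episode_path() {
  show="$1"; season="$2"; episode="$3"; name="$4"
  # %02d zero-pads season and episode numbers to two digits (S01E02)
  printf '%s/Season %d/%s_S%02dE%02d_%s.mkv\n' \
    "$show" "$season" "$show" "$season" "$episode" "$name"
}

episode_path "Batman - The Animated Series" 1 2 "Christmas with the Joker"
# prints: Batman - The Animated Series/Season 1/Batman - The Animated Series_S01E02_Christmas with the Joker.mkv
```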

<h2 id="parental-controls">Parental Controls</h2>
<p>I have small children and not all content in my media library is appropriate for them. No one wants their kids getting nightmares from accidentally stumbling across <a href="https://amzn.to/44Ud1T1" target="_blank" rel="nofollow">Saving Private Ryan</a>.</p>

<p>Luckily it’s very easy to set up parental controls in Jellyfin. When editing the settings for their user, there is a Parental Controls tab which allows you to select from different sets of ratings.</p>

<p><img src="/assets/images/ripping/jellyfin-parental-controls.png" alt="Setting parental controls" class="center" /></p>

<p>You can also fine-tune by overriding the ratings for specific content. For example, the PG-13 rating <a href="https://www.syfy.com/syfy-wire/how-gremlins-helped-change-movie-ratings-forever-with-pg-13" target="_blank">did not exist until the 80’s</a> so some movies were rated PG at the time but would be rated PG-13 now. To do this, you navigate to the movie, click the three dots, and select “edit metadata”. You can then set a custom rating.</p>

<p><img src="/assets/images/ripping/jellyfin-edit-metadata.png" alt="Editing Media Metadata" class="center" /></p>

<p>You can even take this to the extreme and use the “Approved” custom rating to manually curate every piece of media you want to make available. Personally I find that approach to be overkill; I trust the MPA for the most part, and I generally supervise my kids when they’re watching TV anyway.</p>

<h2 id="legal-disclaimer-and-ethics">Legal Disclaimer and Ethics</h2>
<p>Skip or ignore this section if you don’t care about or disagree with my opinion on this subject. It is, after all, just my <em>opinion</em>.</p>

<p>I am not a lawyer, so I cannot comment on the legality of ripping Blu-ray discs where you live. Even in terms of ethics, I do not claim to be the arbiter of what is right or wrong, nor can I judge others for their choices or what they believe to be ethical. I can, however, explain how I personally operate within my own ethics.</p>

<p>I have two primary rules for myself. First, I only make digital copies of physical media that I own, for personal use within my own home; I make sure that I do in fact own a disc for every movie I have on my NAS. Second, I choose not to make my Jellyfin server accessible to anyone outside my household; only myself and my immediate family members access these copies. These rules meet my personal bar for what is ethical, but again it is a personal choice and I am not one to judge others for having a different opinion.</p>

<h2 id="conclusion">Conclusion</h2>

<p>That’s it! With this setup, you’ll have your own personal streaming service with content you truly own, at quality that rivals or exceeds any streaming platform. Happy ripping!</p>]]></content><author><name></name></author><category term="Self Hosting" /><category term="Self Hosting" /><category term="NAS" /><category term="Media Server" /><category term="Jellyfin" /><category term="MakeMKV" /><summary type="html"><![CDATA[Previously I wrote about building a NAS and Media Server for under $500 which explains how to set up a NAS and configure a Jellyfin server (although I’ve since moved away from the TrueNAS “App” in favor of a docker container). This post is a complete guide for how to populate your media library by ripping your Blu-ray discs onto your NAS so that you can watch on Jellyfin.]]></summary></entry><entry><title type="html">Migrating pictures from OneDrive to Immich on TrueNAS Scale</title><link href="https://dfederm.com/migrating-pictures-from-onedrive-to-immich-on-truenas-scale/" rel="alternate" type="text/html" title="Migrating pictures from OneDrive to Immich on TrueNAS Scale" /><published>2024-07-17T00:00:00+00:00</published><updated>2024-07-17T00:00:00+00:00</updated><id>https://dfederm.com/migrating-pictures-from-onedrive-to-immich-on-truenas-scale</id><content type="html" xml:base="https://dfederm.com/migrating-pictures-from-onedrive-to-immich-on-truenas-scale/"><![CDATA[<p>As I continue on my <a href="/categories/#self%20hosting">self-hosting</a> journey, I decided to migrate my photos and videos from OneDrive to ensure my photos are stored safely, privately, and securely on my own server. After exploring various solutions, I chose <a href="https://immich.app/" target="_blank">Immich</a> for its extensive features and, perhaps more importantly, its active development community.</p>

<h2 id="installing-immich">Installing Immich</h2>

<p><a href="https://www.truenas.com/truenas-scale/" target="_blank">TrueNAS Scale</a> supports installing Apps, so the first step is to install the Immich app. Immich has <a href="https://immich.app/docs/install/truenas/" target="_blank">full instructions</a>, but I’ll go over the specific configuration I used. In TrueNAS Scale, go to the Apps tab, click “Discover Apps”, search for “Immich” and click “Install”.</p>

<p><img src="/assets/images/truenas-immich/discover-applications.png" alt="Searching for the Immich app" class="center" /></p>

<p>For the configuration, I mostly used the defaults, including for the “Immich Library Storage” section. Because I’m migrating my existing photos from OneDrive and using the NAS storage as the “source of truth”, I didn’t plan on actually storing anything directly in Immich, to avoid it reorganizing the files itself. Instead, Immich has a notion of an “External Library”, which I’ll discuss more later once Immich is installed. To set that up though, a folder on the NAS holding all the photos will need to be shared with Immich. This is done in the “Additional Storage” section, and I mounted the host path <code class="language-plaintext highlighter-rouge">/mnt/Default/federshare/David/Pictures</code> (path on the NAS) to <code class="language-plaintext highlighter-rouge">/pictures</code> inside the app.</p>

<p><img src="/assets/images/truenas-immich/configuration-additional-storage.png" alt="Configuring Additional Storage for the Immich app" class="center" /></p>

<p>Finish up the app install and once running, you can navigate to the Immich Web Portal and go through the configuration process to create an admin user.</p>

<p>To import your pictures, you will create an External Library. Click “Administration” in the top right, then in the “External Libraries” tab, click “Create Library” and select an owner. The owner is the Immich account the pictures will be associated with.</p>

<h2 id="importing-pictures">Importing Pictures</h2>

<p><img src="/assets/images/truenas-immich/create-external-library.png" alt="Creating an external library" class="center" /></p>

<p>Then click the “…” and rename the External Library to whatever you desire; I chose “David’s Pictures (NAS)”. Then click the “…” again, click “Edit Import Paths”, then “App path”, type “/pictures” (or whatever mount path you used earlier), and save.</p>

<p><img src="/assets/images/truenas-immich/add-import-path.png" alt="Adding an import path to the external library" class="center" /></p>

<p>You can then click “Scan All Libraries” to import the pictures from that folder on demand at any time. To configure this to happen on a schedule, you can go to the “Settings” tab, expand the “External Library” section, and configure the “Library Watching” and “Periodic Scanning” values. Personally, I chose to enable Library Watching and scan the library every hour. Anecdotally, I found the Library Watching feature to not work very well, so if you want a picture to appear immediately and don’t want to wait for the next scheduled scan, you’ll want to manually scan the external library.</p>

<p><img src="/assets/images/truenas-immich/library-watching.png" alt="Configuring Library watching" class="center" /></p>

<p><strong>NOTE! Deleting a picture in Immich (and emptying the trash) deletes it from the NAS!</strong> That is, the External Library syncs both ways.</p>

<p>At this point you can manually copy all your pictures from OneDrive to your NAS. After scanning the External Library (manually or otherwise), your photos should be viewable in Immich!</p>

<p><img src="/assets/images/truenas-immich/imported-pictures.png" alt="Imported pictures" class="center" /></p>

<p>If you <a href="/backing-up-truenas-scale-to-onedrive/">back up your NAS to OneDrive</a>, the pictures will be copied back to OneDrive (ironic!), so you can safely use the NAS as the source of truth going forward. There are Android and iOS mobile apps for Immich, which you should install at this point as well.</p>

<h2 id="syncing-pictures-from-android">Syncing Pictures from Android</h2>

<p>Speaking of the NAS being the source of truth and mobile apps, the final step to complete the process is to sync your phone’s pictures to the NAS instead of OneDrive. There are many ways to do this, but as an Android user I chose to use <a href="https://play.google.com/store/apps/details?id=dk.tacit.android.foldersync.lite" target="_blank">FolderSync</a>.</p>

<p>First, add an Account to sync to; in this case, at the bottom of the list you’ll find SMB. Configure it with your SMB share name, SMB credentials, and the IP of your NAS. TrueNAS Scale supports SMB3, so be sure to select that for the fastest syncing.</p>

<p><img src="/assets/images/truenas-immich/foldersync-smb.jpg" alt="Adding an SMB share to FolderSync" class="center" /></p>

<p>Next, add one or more “folder pairs”. These are a pair of folders to sync. In this case you’ll configure the left to be your phone’s camera storage (e.g. <code class="language-plaintext highlighter-rouge">/storage/emulated/0/DCIM/Camera</code>), and the right will be the SMB share under the path where you want to place the pictures, in particular the same directory or a subdirectory of what you mounted in the Immich app (<code class="language-plaintext highlighter-rouge">/David/Pictures/Camera</code> in my case). I configured the sync to be “to right folder”, which means it will sync only one-way from the phone to the NAS.</p>

<p><img src="/assets/images/truenas-immich/foldersync-folderpair.jpg" alt="Adding a folder pair to FolderSync" class="center" /></p>

<p>There are additional configurations to play around with for a folder pair. For instance, I added a regular sync nightly at midnight, and also configured it to monitor the device folder so that changes sync immediately. Note that the immediate sync fails when you’re not on your local network, which is why I use a scheduled sync in addition.</p>

<p>I was also hitting some file conflicts, very likely due to my own poor file management originally on OneDrive, so I ended up configuring the left file to always “win” in the case of a conflict.</p>

<h2 id="looking-forward">Looking Forward</h2>

<p>At this point, the migration was complete as all my primary requirements were met. However, Immich has many features which make the experience after migration even better than it was before. For example, <a href="https://immich.app/docs/features/facial-recognition/" target="_blank">Facial Recognition</a>, which analyzes the photos to identify faces and associate them with distinct people. This all happens locally, which is a relief, as allowing a remote third party to analyze pictures of my family, including my young children, is not something I’m terribly comfortable with. Relatedly, the <a href="https://immich.app/docs/features/smart-search" target="_blank">Smart Search</a> feature allows for searching your pictures without having to tag them specifically; for example, I can search for “&lt;wife’s name&gt; holding &lt;son’s name&gt;” and it does a reasonably good job at finding relevant photos. It’s certainly not perfect, but impressive for running locally, and it should improve over time.</p>

<p>Lastly, I really enjoy the ability to create links to share specific photos externally. This does require some extra work though to expose Immich externally, the details of which I won’t go into here, but it makes sharing full-quality pictures with family who I still primarily talk to over SMS way easier.</p>]]></content><author><name></name></author><category term="Self Hosting" /><category term="Self Hosting" /><category term="NAS" /><category term="TrueNAS" /><category term="immich" /><summary type="html"><![CDATA[As I continue on my self-hosting journey, I decided to migrate my photos and videos from OneDrive to ensure my photos are stored safely, privately, and securely on my own server. After exploring various solutions, I chose Immich for its extensive features and, perhaps more importantly, its active development community.]]></summary></entry><entry><title type="html">Backing up TrueNAS Scale to OneDrive</title><link href="https://dfederm.com/backing-up-truenas-scale-to-onedrive/" rel="alternate" type="text/html" title="Backing up TrueNAS Scale to OneDrive" /><published>2024-03-20T00:00:00+00:00</published><updated>2024-03-20T00:00:00+00:00</updated><id>https://dfederm.com/backing-up-truenas-scale-to-onedrive</id><content type="html" xml:base="https://dfederm.com/backing-up-truenas-scale-to-onedrive/"><![CDATA[<p>Recently OneDrive was <a href="https://github.com/truenas/middleware/pull/11143" target="_blank">removed</a> as a CloudSync provider for <a href="https://www.truenas.com/truenas-scale/" target="_blank">TrueNAS Scale</a>. As I <a href="/building-a-nas-and-media-server-for-under-500/">built my first NAS</a> and use OneDrive for cloud storage, I was looking for alternate means to back up my NAS to OneDrive. 
I found individual pieces of possible solutions on the <a href="https://www.truenas.com/community/" target="_blank">TrueNAS forums</a>, but nothing approaching an end-to-end solution, so I decided to do a write-up of what I ended up doing in hopes others may find it helpful as well.</p>

<p>TrueNAS Scale allows <a href="https://www.truenas.com/docs/scale/scaletutorials/apps/usingcustomapp/" target="_blank">custom docker containers</a>, which they call “custom apps”, so the overall idea is just to use <a href="https://rclone.org/" target="_blank">rclone</a> in a Docker container. I like this solution because it’s decoupled from anything specific to TrueNAS, so it’s very generic and easy to support, and there’s no “magic” involved. It’s very straightforward and understandable.</p>

<p>The first step is to create a new dataset which will contain your <a href="https://rclone.org/docs/#config-config-file" target="_blank">rclone configuration file</a>. I named mine “rclone” in my root “Default” dataset. I used the SMB share type, since that’s what I plan on using, but left the rest of the settings as default.</p>

<p>Next you’ll need to configure the SMB share for the dataset so that you can manage the config file from other machines. For mine, I just added the SMB share to the <code class="language-plaintext highlighter-rouge">/mnt/Default/rclone</code> path and used the default settings. When creating a new share it’ll ask to restart the SMB service.</p>

<p>Connect to the new SMB share and create a single file inside called <code class="language-plaintext highlighter-rouge">rclone.conf</code>. This file should be in the <a href="https://en.wikipedia.org/wiki/INI_file#Format" target="_blank">INI</a> format and look like this:</p>

<div class="language-ini highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">[onedrivedavid]</span>
<span class="py">type</span> <span class="p">=</span> <span class="s">onedrive</span>
<span class="py">drive_type</span> <span class="p">=</span> <span class="s">personal</span>
<span class="py">drive_id</span> <span class="p">=</span> <span class="s">&lt;your-drive-id&gt;</span>
<span class="py">token</span> <span class="p">=</span> <span class="s">&lt;your-token&gt;</span>
</code></pre></div></div>

<p>All configuration can be found in the <a href="https://rclone.org/onedrive/" target="_blank">rclone docs for OneDrive</a>, but the boilerplate should be enough for most people, so you will just need to fill in the two placeholders.</p>
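<p>Before mounting the file into a container, it can be worth sanity-checking that all the expected keys are present. This is a minimal shell sketch; the <code class="language-plaintext highlighter-rouge">/tmp</code> path and the placeholder values are illustrative, not the real file:</p>

```shell
#!/bin/sh
# Sketch: verify an rclone.conf has the keys this setup needs.
# The file path and placeholder values below are examples only.
conf=/tmp/rclone.conf
cat > "$conf" <<'EOF'
[onedrivedavid]
type = onedrive
drive_type = personal
drive_id = <your-drive-id>
token = <your-token>
EOF

# Each required key must appear at the start of a line, "key = value" style.
for key in type drive_type drive_id token; do
  grep -q "^$key = " "$conf" || { echo "missing: $key"; exit 1; }
done
echo "rclone.conf looks complete"
# prints: rclone.conf looks complete
```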

<p>The section header is the name of the remote, so I used “onedrivedavid” since I plan to back up my wife’s data on the NAS to her OneDrive separately and wanted to disambiguate.</p>

<p>For <code class="language-plaintext highlighter-rouge">drive_id</code>, I found the easiest way is to use the <a href="https://developer.microsoft.com/en-us/graph/graph-explorer">Microsoft Graph Explorer</a>. There you’ll log in (by default you’ll see mock data), and execute the query <code class="language-plaintext highlighter-rouge">https://graph.microsoft.com/v1.0/me/drive</code>. The first time you do this you’ll see an error that says <code class="language-plaintext highlighter-rouge">Unauthorized - 401</code>. You can easily grant access to Graph Explorer by clicking the “Modify permissions” tab and consenting to <code class="language-plaintext highlighter-rouge">Files.Read</code>.</p>

<p><img src="/assets/images/truenas-onedrive/graph-explorer-permissions.png" alt="Consenting to Graph Explorer permissions" class="center" /></p>

<p>Run the query again and you should see the JSON response in the bottom pane. Use the <code class="language-plaintext highlighter-rouge">id</code> field of the response as your <code class="language-plaintext highlighter-rouge">drive_id</code>. You can also confirm that your <code class="language-plaintext highlighter-rouge">drive_type</code> is “personal” from the same response.</p>

<p><img src="/assets/images/truenas-onedrive/graph-explorer-response.png" alt="Graph Explorer response" class="center" /></p>

<p>For the <code class="language-plaintext highlighter-rouge">token</code>, you can follow the rclone <a href="https://rclone.org/remote_setup/" target="_blank">instructions</a>, but basically you just download the rclone executable from the website and run <code class="language-plaintext highlighter-rouge">rclone authorize onedrive</code>. This will pop up a browser window for you to authenticate in, and once completed it will spit out JSON content which you will copy and paste in its entirety into the <code class="language-plaintext highlighter-rouge">rclone.conf</code>. The value should be of the form: <code class="language-plaintext highlighter-rouge">{"access_token":"...}</code>.</p>

<p>Save your <code class="language-plaintext highlighter-rouge">rclone.conf</code> file and it’s time to create the docker container or “custom app”. Go to the Apps tab, click “Discover Apps” and then “Custom App”. I named mine “rclone-david” since again I wanted to disambiguate with another user’s rclone backups.</p>

<p>I found <a href="https://github.com/robinostlund/docker-rclone-sync" target="_blank">robinostlund/docker-rclone-sync</a> on GitHub, which performs an <code class="language-plaintext highlighter-rouge">rclone sync</code> command on a schedule. That’s exactly the scenario I’m targeting, so for the Image repository use <code class="language-plaintext highlighter-rouge">ghcr.io/robinostlund/docker-rclone-sync</code>.</p>

<p>As per the docs for that image, a few environment variables need to be set to configure it. Under the “Container Environment Variables” section, add the following environment variables:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">SYNC_SRC=/rclone-data</code> - This can be any path, as long as it matches what you use below in the Storage section.</li>
  <li><code class="language-plaintext highlighter-rouge">SYNC_DEST=onedrivedavid:/nas-backup</code> - The left-hand side of the value needs to match the section header in the ini file, while the right-hand side is a path within OneDrive you’d like to back up to.</li>
  <li><code class="language-plaintext highlighter-rouge">CRON=0 0 * * *</code> - To schedule the sync daily at midnight.</li>
  <li><code class="language-plaintext highlighter-rouge">CRON_ABORT=0 6 * * *</code> - Schedules an abort in case the sync is taking too long.</li>
  <li><code class="language-plaintext highlighter-rouge">FORCE_SYNC=1</code> - This syncs on container startup, which makes for easier testing.</li>
  <li><code class="language-plaintext highlighter-rouge">SYNC_OPTS=-v --create-empty-src-dirs --metadata</code> - Additional options to pass to <code class="language-plaintext highlighter-rouge">rclone sync</code>. These are the options I prefer, but all options can be found in the <a href="https://rclone.org/commands/rclone_sync/" target="_blank">rclone docs</a>.</li>
</ul>

<p>Under the “Networking” section, add an interface so it can reach out to OneDrive properly.</p>

<p>Under the “Storage” section, add:</p>
<ol>
  <li>Config
    <ul>
      <li>Host path: <code class="language-plaintext highlighter-rouge">/mnt/Default/rclone</code>, or whatever yours is configured to be.</li>
      <li>Mount path: <code class="language-plaintext highlighter-rouge">/config</code>, which is what the image expects.</li>
      <li>Read Only: <em>unchecked</em>. rclone will write to the file, in particular to update the access token as it refreshes it.</li>
    </ul>
  </li>
  <li>Data
    <ul>
      <li>Host path: Whatever path on your NAS you’d like to back up</li>
      <li>Mount path: <code class="language-plaintext highlighter-rouge">/rclone-data</code>, or whatever you chose for <code class="language-plaintext highlighter-rouge">SYNC_SRC</code> above.</li>
      <li>Read Only: <em>checked</em>. rclone only needs to sync from the NAS, so it only needs read access to the data.</li>
    </ul>
  </li>
</ol>
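<p>Outside of the TrueNAS UI, the “custom app” configured above boils down to a single <code class="language-plaintext highlighter-rouge">docker run</code> invocation. This is a hedged sketch rather than how TrueNAS actually launches it; the command is echoed instead of executed, and <code class="language-plaintext highlighter-rouge">/path/to/data-to-back-up</code> is a placeholder for whatever you want backed up:</p>

```shell
#!/bin/sh
# Sketch: the plain-docker equivalent of the "custom app" configured above.
# Echoed rather than executed; /path/to/data-to-back-up is a placeholder,
# and the container name and config path match this post's examples.
docker_cmd="docker run -d --name rclone-david \
  -e SYNC_SRC=/rclone-data \
  -e SYNC_DEST=onedrivedavid:/nas-backup \
  -e 'CRON=0 0 * * *' \
  -e 'CRON_ABORT=0 6 * * *' \
  -e FORCE_SYNC=1 \
  -e 'SYNC_OPTS=-v --create-empty-src-dirs --metadata' \
  -v /mnt/Default/rclone:/config \
  -v /path/to/data-to-back-up:/rclone-data:ro \
  ghcr.io/robinostlund/docker-rclone-sync"
echo "$docker_cmd"
```

Seeing the flags laid out this way also makes it clear how the Storage mounts map to <code class="language-plaintext highlighter-rouge">-v</code> flags and the environment variables to <code class="language-plaintext highlighter-rouge">-e</code> flags.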

<p>Leave everything else as the defaults and click Install. Now you’ll need to wait for the container to deploy, which may take a few moments.</p>

<p><img src="/assets/images/truenas-onedrive/container-deploying.png" alt="Docker Container Deployment" class="center" /></p>

<p>Once the container is deployed, you can click on it and under “Workloads” there should be an icon to click on to show the logs for the container. You can use this to ensure the sync is happening properly.</p>

<p><img src="/assets/images/truenas-onedrive/container-logs.png" alt="Docker Container Logs" class="center" /></p>

<p>And that’s all there is to it! You can now have the benefits of storing your data locally in your NAS, while having the peace of mind of a remote backup.</p>

<p>First, why build a NAS rather than buy a prebuilt one like the <a href="https://www.amazon.com/Synology-5-bay-DiskStation-DS1522-Diskless/dp/B0B4DFBRZV?ref_=ast_sto_dp&amp;th=1&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=f33bfb661653b318306e415a46022fa3&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Synology DS1522+</a>? There are pros and cons to each approach, but the DIY route was attractive to me due to the better expandability and cost effectiveness, and subjectively I just have fun building PCs.</p>

<p>As this is a <em>long</em> post, here is a table of contents if you’d like to skip to a specific section:</p>
<ul>
  <li><a href="#parts">Parts</a>
    <ul>
      <li><a href="#parts-summary">Parts Summary</a></li>
    </ul>
  </li>
  <li><a href="#build">Build</a></li>
  <li><a href="#configuration">Configuration</a>
    <ul>
      <li><a href="#configuring-truenas-scale">Configuring TrueNAS Scale</a></li>
      <li><a href="#configuring-jellyfin">Configuring Jellyfin</a></li>
    </ul>
  </li>
</ul>

<h2 id="parts">Parts</h2>

<p>The first part of building your own NAS is purchasing the various components needed to build it. This is very similar to building any other PC, and this guide assumes you’re reasonably comfortable with building a PC. If this is your first time building a PC, I highly recommend checking out Linus Tech Tips’ <a href="https://www.youtube.com/watch?v=BL4DCEp7blY" target="_blank">How to build a PC, the last guide you’ll ever need!</a>.</p>

<p>Note that I built my NAS for just over $400, although I did not include tax since that very much depends on your location, nor did I include the cost of storage since that depends on your specific needs. However, that is still leaps and bounds more economical than the prebuilt I mentioned above, which at the time of writing is $700, not to mention much more powerful and flexible.</p>

<p>As prices and availability are always fluctuating, obviously any prices you see here may be different than what I saw, so feel free to change things up.</p>

<p>When selecting parts, I strongly recommend using <a href="https://pcpartpicker.com/" target="_blank">PCPartPicker</a>, as it will help identify compatibility issues as well as filter out incompatible parts based on what you’ve already chosen, which massively helps with the tremendous number of options out there.</p>

<h3 id="case">Case</h3>

<p><a href="https://www.amazon.com/dp/B09WZLHCZG?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6a45f181eff3d34fa5e03a77397ae970&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/jonsbo-n1.jpg" alt="JONSBO N1" class="right" /></a></p>

<p>At risk of copying every other NAS build guide out there, I recommend the <a href="https://www.amazon.com/dp/B09WZLHCZG?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6a45f181eff3d34fa5e03a77397ae970&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">JONSBO N1</a>. It’s specifically designed for NAS machines, so it’s pretty small. It requires a Mini-ITX motherboard and an SFX power supply, and has room for five 3.5” HDDs as well as one 2.5” drive.</p>

<h3 id="cpu">CPU</h3>

<p><a href="https://www.amazon.com/dp/B0876Y2TMZ?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=e1a84d017c81f41998a2aa49ec85eecb&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/amd-ryzen-3-3100.jpg" alt="AMD Ryzen 3 3100" class="right" /></a></p>

<p>A NAS doesn’t need to be that powerful, even one which doubles as a media server, so I opted to go for something a few years older. I went with an <a href="https://www.amazon.com/dp/B0876Y2TMZ?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=e1a84d017c81f41998a2aa49ec85eecb&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">AMD Ryzen 3 3100</a> which I found used on eBay. Because it’s fairly low-end, it’s also pretty power-efficient, which is great for a NAS which is intended to be powered 24/7. Do make sure that if you buy it used, it comes with the stock cooler so that you don’t have to buy one separately.</p>

<p>One thing to note is that this CPU does <em>not</em> have integrated graphics. This causes a bit of pain during the build, at least for me, but it’s something that can be easily worked past. If you want a smoother experience though, go with something with integrated graphics.</p>

<h3 id="motherboard">Motherboard</h3>

<p><a href="https://www.amazon.com/dp/B08G1WLVR2?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=065a45c5e6f116a06ddc3d1de46e028c&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/asrock-a520m-itx-ac.jpg" alt="ASRock A520M-ITX/AC" class="right" /></a></p>

<p>With the Mini-ITX form factor and AM4 CPU socket selected, there weren’t really a ton of options for the motherboard. I ended up going with the <a href="https://www.amazon.com/dp/B08G1WLVR2?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=065a45c5e6f116a06ddc3d1de46e028c&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">ASRock A520M-ITX/AC</a> because, well, it was cheap and compatible. It also has 4 SATA ports, which at the moment is good enough for me, and eventually I can just buy a cheap HBA for future expansion. The AM4 socket should also allow an upgrade to a better CPU in the future, if that becomes necessary.</p>

<p>The only downside is that it “only” has gigabit ethernet and not the faster 2.5 gigabit, but honestly plain old gigabit is good enough for me. If you need 2.5 gigabit since a NAS is a <em>network</em> attached storage and thus needs only the finest of network connectivity, you can spend a bit more for something like a <a href="https://www.amazon.com/dp/B089FWWN62?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=9d7a1d37eafff1976396b58599019cb7&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">GIGABYTE B550I AORUS PRO AX</a> or an <a href="https://www.amazon.com/dp/B089VRZ6JX?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=53d19b6896962008f4421eb292310139&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">ASRock B550 Phantom Gaming-ITX/ax</a>.</p>

<h3 id="ram">RAM</h3>

<p><a href="https://www.amazon.com/dp/B07XJLDHW4?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=8c421106902c4182c99eac95622cc036&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/gskill-ripjaws-v-16gb.jpg" alt="G.SKILL Ripjaws V 16GB (2x8 GB)" class="right" /></a></p>

<p>For RAM I went with <a href="https://www.amazon.com/dp/B07XJLDHW4?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=8c421106902c4182c99eac95622cc036&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">G.SKILL Ripjaws V 16GB (2x8 GB)</a>. I basically just wanted something cheap and DDR4. I was debating going up to 32GB of RAM, but I found a good deal on eBay and so just went with 16 GB for now. In practice that’s more than enough for my usage at least.</p>

<h3 id="boot-drive">Boot Drive</h3>

<p><a href="https://www.amazon.com/dp/B0781Z7Y3S?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=9da5f6a4512c640bcf105b32724ed374&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/samsung-860-evo-500gb.jpg" alt="Samsung 860 EVO 500GB" class="right" /></a></p>

<p>I was lucky enough to have a <a href="https://www.amazon.com/dp/B0781Z7Y3S?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=9da5f6a4512c640bcf105b32724ed374&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Samsung 860 EVO 500GB</a> lying around from when I used it in my desktop several years ago. If I had needed to buy something though, I would have just gone with the cheapest SSD I could find, like this <a href="https://www.amazon.com/dp/B07XHMB5GP?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=ba71f4077bbab438b9d30b661be6b260&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">TEAMGROUP 256GB NVMe</a>, or, if you want to save the NVMe slot for an HBA, perhaps this <a href="https://www.amazon.com/dp/B07G3YNLJB?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6a2f5b516a2b563307d30813f9f165cd&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Crucial BX500 240GB SATA SSD</a>.</p>

<h3 id="psu">PSU</h3>

<p><a href="https://www.amazon.com/dp/B075M5FRQS?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6febaed03ccf8df6112eb0c50ac4311a&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/silverstone-sx500-g.jpg" alt="Silverstone SX500-G" class="right" /></a></p>

<p>The NAS is pretty low power, so nothing super special is needed here. Personally though I wanted a fully modular power supply for ease of use and an 80+ Gold rating for efficiency; combined with the need for the SFX form factor, that basically left me with the <a href="https://www.amazon.com/dp/B075M5FRQS?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6febaed03ccf8df6112eb0c50ac4311a&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Silverstone SX500-G</a>. It’s a bit overkill, as PCPartPicker estimates that the system only needs ~149W, but this should give plenty of headroom for future expansion. Plus, as I mentioned, with all those requirements I didn’t really have much of an option anyway.</p>

<h3 id="storage">Storage</h3>

<p><a href="https://www.amazon.com/dp/B09RMRKC9P?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=ac0461f9f76a81934b182e2532d88336&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow"><img src="/assets/images/nas/seagate-ironwolf-4tb.jpg" alt="Seagate IronWolf 4 TB" class="right" /></a></p>

<p>This will very much be dependent on your needs. I don’t have a massive amount of data to store (yet!) but I did want some resiliency, so I put a pair of <a href="https://www.amazon.com/dp/B09RMRKC9P?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=ac0461f9f76a81934b182e2532d88336&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Seagate IronWolf 4 TB</a> drives in. This should give me plenty of room to drop in a few more drives in the future as my storage needs increase.</p>

<h3 id="parts-summary">Parts Summary</h3>

<p>To summarize my parts list and the prices I was able to get, here’s my build on <a href="https://pcpartpicker.com/list/pr8pQP" target="_blank">PCPartPicker</a> as well as a cost breakdown.</p>

<table>
<thead>
<tr class="header">
<th>Part</th>
<th>Name</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Case</td>
<td><a href="https://www.amazon.com/dp/B09WZLHCZG?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6a45f181eff3d34fa5e03a77397ae970&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">JONSBO N1</a></td>
<td>$120 ($130 with $10 promotional gift card w/ purchase)</td>
</tr>
<tr>
<td>CPU</td>
<td><a href="https://www.amazon.com/dp/B0876Y2TMZ?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=e1a84d017c81f41998a2aa49ec85eecb&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">AMD Ryzen 3 3100</a></td>
<td>$45.15 (used)</td>
</tr>
<tr>
<td>Motherboard</td>
<td><a href="https://www.amazon.com/dp/B08G1WLVR2?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=065a45c5e6f116a06ddc3d1de46e028c&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">ASRock A520M-ITX/AC</a></td>
<td>$104.99</td>
</tr>
<tr>
<td>RAM</td>
<td><a href="https://www.amazon.com/dp/B07XJLDHW4?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=8c421106902c4182c99eac95622cc036&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">G.SKILL Ripjaws V 16GB (2x8 GB)</a></td>
<td>$33 (used)</td>
</tr>
<tr>
<td>Boot Drive</td>
<td><a href="https://www.amazon.com/dp/B0781Z7Y3S?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=9da5f6a4512c640bcf105b32724ed374&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Samsung 860 EVO 500GB</a></td>
<td>$0 (already owned)</td>
</tr>
<tr>
<td>PSU</td>
<td><a href="https://www.amazon.com/dp/B075M5FRQS?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=6febaed03ccf8df6112eb0c50ac4311a&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Silverstone SX500-G</a></td>
<td>$101.99</td>
</tr>
<tr>
<td colspan="2"><strong>Total (without storage)</strong></td>
<td><strong>$405.13</strong></td>
</tr>
<tr>
<td>Storage</td>
<td>2 x <a href="https://www.amazon.com/dp/B09RMRKC9P?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=ac0461f9f76a81934b182e2532d88336&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Seagate IronWolf 4 TB</a></td>
<td>2 x $93.99 = $187.98</td>
</tr>
<tr>
<td colspan="2"><strong>Total (including storage)</strong></td>
<td><strong>$593.11</strong></td>
</tr>
</tbody>
</table>

<h2 id="build">Build</h2>

<p><img src="/assets/images/nas/parts.jpg" alt="All parts unassembled" class="center" /></p>

<p>Now it’s time to build! As mentioned earlier, this won’t be super detailed, but I will go through most of the steps and point out specific pain points I ran into with the specific parts I used.</p>

<p>The first thing to do is take the motherboard out and place it onto the box it came in. Socketing the CPU is pretty straightforward; you just undo the latch, line up the triangle on the socket and the CPU, gently drop it in, and reengage the latch.</p>

<p><img src="/assets/images/nas/cpu-install.jpg" alt="CPU Install" class="center" /></p>

<p>Next, put a pea-sized blob of thermal paste on and install the cooler. I had some <a href="https://www.amazon.com/gp/product/B00ZJSF5LM?&amp;_encoding=UTF8&amp;tag=dfederm-20&amp;linkCode=ur2&amp;linkId=15557ff22797009ef77f522832564dee&amp;camp=1789&amp;creative=9325" target="_blank" rel="nofollow">Thermal Grizzly Kryonaut Thermal Paste</a> left over from when I built my desktop, so I just used that, but I imagine anything will do.</p>

<p>A tip when installing the cooler, or really anything with 4 or more screws, is to use a star pattern when tightening. Also tighten in two passes such that the first pass is a gentle tightening and the second pass is where you tighten to the final torque. That way any alignment issues can still be corrected before the screws are too tight.</p>

<p><img src="/assets/images/nas/cooler-install.jpg" alt="Cooler Install" class="center" /></p>

<p>Install the RAM next, which is as simple as just aligning the sticks the right way since they’re keyed and firmly pressing them in until you hear the click.</p>

<p>Looking back, you should actually wait to install the RAM until you mount the motherboard in the case and connect the power switch and reset switch wires. I found it extremely difficult to do so while the RAM was installed, but perhaps I just don’t have the nimblest fingers.</p>

<p><img src="/assets/images/nas/ram-install.jpg" alt="RAM Install" class="center" /></p>

<p>Next bring out the case and take off the outer shell. I had never worked with the Mini-ITX form factor previously, so I don’t have much reference to go on, but I found that the JONSBO N1 was relatively easy to work in. Don’t get me wrong, Mini-ITX was certainly challenging, but I feel like the case wasn’t the problem.</p>

<p><img src="/assets/images/nas/case.jpg" alt="Case" class="center" />
<img src="/assets/images/nas/case-open.jpg" alt="Case opened" class="center" /></p>

<p>Mount the motherboard on the preinstalled standoffs using the same two-pass star pattern described earlier. As you’ll notice from the picture below, I forgot the I/O shield and had to correct that mistake later. I did notice that with the I/O shield in place the motherboard was a bit difficult to get perfectly aligned on the standoffs, so the screw-tightening strategy described earlier really helped ensure things went smoothly anyway.</p>

<p><img src="/assets/images/nas/motherboard-install.jpg" alt="Motherboard Install" class="center" /></p>

<p>Next install the power supply. This, and when you start plugging in cables (which I held off on until a little later), is where the benefits of a fully modular power supply become apparent. I have to imagine a non-modular one would be extremely challenging to work with in a Mini-ITX form factor.</p>

<p><img src="/assets/images/nas/psu-install.jpg" alt="PSU Install" class="center" /></p>

<p>Installing the HDDs is an interesting part of working in the JONSBO N1. It has a hot-swappable backplane for the SATA drives, which is pretty neat. It was certainly a bit nerve-wracking to slide the drives into the bays and hope the connectors aligned properly, but once they were in, it proved to be a really elegant mechanism that I ended up liking a lot.</p>

<p>If you’re using a SATA SSD for your boot drive, you’ll want to install that now as well; its mount is next to the power supply. I found the mounting solution there to be a bit sketchy, but as an SSD has no moving parts, it doesn’t really matter.</p>

<p><img src="/assets/images/nas/hdd-install.jpg" alt="HDD Install" class="center" /></p>

<p>Now to start the messy work of plugging in cables. This is my least favorite part as I can never seem to have the patience (or skill?) to manage the cables properly, so it ends up being just a rat’s nest. Luckily the case ends up hiding the mess in the end, so just don’t let anyone go opening up the machine and discovering your dirty secret.</p>

<p><img src="/assets/images/nas/sata-cables.jpg" alt="Sata Cables" class="center" /></p>

<p>An important note: you will want to route <em>all</em> cables through the middle of the chassis rather than try to go around the outside. I made that mistake with the CPU power cable, and the outer shell of the case ended up not sliding on later. Don’t make my mistakes. Route it through the case the long way, the proper way.</p>

<p><img src="/assets/images/nas/bad-cpu-power-cables.jpg" alt="Bad CPU Power Cables" class="center" /></p>

<p>Ok, all the cables are shoved in there now I guess. Time to put the shell of the case back on and try things out!</p>

<p><img src="/assets/images/nas/cable-mess.jpg" alt="Cable Mess" class="center" /></p>

<h2 id="configuration">Configuration</h2>

<p>I decided to use <a href="https://www.truenas.com/truenas-scale/" target="_blank">TrueNAS Scale</a> as the NAS OS and <a href="https://jellyfin.org/" target="_blank">Jellyfin</a> for the media server. Both are fully open source and well supported, making them great options for the build.</p>

<h3 id="booting">Booting</h3>

<p>To get started with TrueNAS Scale, first you must <a href="https://www.truenas.com/download-truenas-scale/" target="_blank">download the iso</a> from their website and flash it to a USB drive using something like <a href="https://etcher.balena.io/" target="_blank">balenaEtcher</a>.</p>

<p>Because I didn’t choose a CPU with integrated graphics, I decided to put the boot SSD into my desktop computer and use the newly flashed USB install media there. Just be very careful you install TrueNAS to the correct drive or you may accidentally wipe the wrong one.</p>

<p>After installing the OS and putting the boot SSD back into the NAS machine, I tried booting and… nothing happened. Well, the power turned on and the fan was blowing, but without a display it wasn’t clear what was happening.</p>

<p>I had expected the lack of display, but since TrueNAS Scale is supposed to connect to the network via DHCP and be configurable via a web portal, I expected it to at least show up in my router. It didn’t.</p>

<p>This is where it would have helped to have a CPU with integrated graphics, or a cheap spare GPU to temporarily slot in while you do the initial configuration. I had neither, so I ended up doing some mad science…</p>

<p><img src="/assets/images/nas/mad-science.jpg" alt="Mad Science" class="center" /></p>

<p>Yea, wow. So I took the graphics card out of my desktop, leaving its power cables connected since space around the PSU in the NAS was very tight. But a 3070 Ti obviously won’t fit in the NAS, so I took the NAS motherboard out of its case, leaving everything else I could attached and in place. Now when turning on both machines (remember, my desktop PSU was powering the graphics card) I was able to see the video output.</p>

<p>Frustratingly, it ended up being one simple keystroke: I needed to confirm something about the fTPM, and then it booted properly into TrueNAS Scale. At this point I probably should have configured the BIOS as well, but my desktop machine kept turning itself off after some time, probably because no display device was plugged into the motherboard, and I was just ready to move on.</p>

<p>So I put everything back together, put the NAS in its home, powered it on, and success! I was able to see a new device on my network and was able to hit the web portal for TrueNAS Scale.</p>

<h3 id="configuring-truenas-scale">Configuring TrueNAS Scale</h3>

<p>I was able to connect to the web portal via <code class="language-plaintext highlighter-rouge">http://truenas.local</code>, but depending on your local network you may need to use the IP address instead, which you can get from your router’s web portal.</p>

<p>Personally I prefer my “infrastructure” devices to have static IP addresses, like my Raspberry Pi running <a href="/setting-up-a-security-system-with-home-assistant/">Home Assistant</a>, the Pi running my alarm panel and AdGuard Home instance, and yes, the NAS. That way if something gets messed up with DNS or DHCP, I should always be able to access those devices.</p>

<p>To do this in TrueNAS Scale you click on the Network tab and in the Interfaces section you can edit the ethernet interface. You just need to uncheck DHCP and under “Aliases” add the IP and subnet you want.</p>

<p><img src="/assets/images/nas/truenas-static-ip.png" alt="Configuring a static IP" class="center" /></p>

<p>After this you’ll need to “Test Changes”, which is a convenient feature so you don’t misconfigure anything. It will automatically revert the network configuration if you don’t confirm it after some timeout. So after you make the changes, navigate to the new static IP and confirm the changes. At least for me, using the static IP was required as the host name resolution was stale and still pointing to the old DHCP-based IP.</p>

<p>Next I changed the host name since I wanted to use <code class="language-plaintext highlighter-rouge">nas.local</code> instead of <code class="language-plaintext highlighter-rouge">truenas.local</code> (admittedly, very minor). Since the host name resolution was stale anyway, I figured why not. To do this, you go back to the Network tab and edit the Global Configuration with the desired host name. Because I have <a href="https://adguard.com/en/adguard-home/overview.html" target="_blank">AdGuard Home</a>, I also added that as the Nameserver.</p>

<p><img src="/assets/images/nas/truenas-host-name.png" alt="Configuring the host name" class="center" /></p>

<p>Now that that’s out of the way, it’s time to actually set up the storage aspect of the NAS.</p>

<p>First a pool needs to be set up. A pool organizes your physical disks into virtual devices, or VDEVs. This is where you configure your desired disk layout, for example your RAID settings, or in the case of TrueNAS Scale your RAID-Z settings. RAID-Z is a non-standard RAID implementation built into the ZFS file system. In my case I only have 2 data disks, so I chose to just use a mirror. When I add more disks I’ll end up converting to RAID-Z1, where one drive can fail without data loss.</p>
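<p>To make the layout trade-off concrete, here’s a rough back-of-the-envelope calculation. It ignores ZFS metadata and reservation overhead, so treat the numbers as approximations rather than what TrueNAS will actually report:</p>

```python
def usable_tb(layout: str, disks: int, size_tb: float) -> float:
    """Approximate usable capacity for a pool layout, ignoring ZFS overhead."""
    if layout == "mirror":
        return size_tb  # every disk holds the same data
    if layout == "raidz1":
        return (disks - 1) * size_tb  # one disk's worth of parity
    if layout == "raidz2":
        return (disks - 2) * size_tb  # two disks' worth of parity
    raise ValueError(f"unknown layout: {layout}")
```

<p>So my pair of mirrored 4 TB drives gives about 4 TB usable, while four 4 TB drives in RAID-Z1 would give about 12 TB while still tolerating a single drive failure.</p>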

<p>Note that Mirroring or RAID/RAID-Z are not proper substitutes for backups! They’re mostly to avoid inconvenience and downtime. You should always still take proper backups of your important data.</p>

<p>If you have an extra NVMe drive to use as a cache, you will also configure that here. Note though that it’s <a href="https://www.truenas.com/community/threads/truenas-scale-ssd-cache.96912/" target="_blank">not recommended</a> unless you have over 64GB of RAM, as the RAM cache (called “ARC”) is faster, and the overhead to support the SSD cache (“L2ARC”) itself requires RAM, eating into the size of the ARC. At least for me the network is the bottleneck anyway, although that’s exacerbated by the fact that I stuck with a gigabit interface.</p>

<p>Once the pool is configured, you’ll also need to configure a “dataset”, which is a logical folder within the pool, or perhaps can be thought of as a volume on a drive. Permissions are applied at the dataset level, so if you intend to partition your data, this is where you would do so.</p>

<p>Next you’ll want to set up a share so you can transfer data to and from the NAS. In my case I use Windows for my primary desktop machine, so I set up an SMB share. Once you create the share it’ll prompt you to start the SMB service on the NAS, which is the server process which actually handles SMB traffic.</p>

<p>Finally, you’ll need to set up a user to access the SMB share. Go to the Credentials -&gt; Local Users tab and add a new user. You’ll want to set up additional users for any family members who you want to access the SMB share directly. Note that later when configuring Jellyfin there will be separate user accounts to access the Jellyfin server, so if for example you only want your kids to consume media from the NAS but not directly access the data, you wouldn’t want to set up a user in TrueNAS Scale for them.</p>

<p>Now you should be able to access the share via <code class="language-plaintext highlighter-rouge">\\nas.local\&lt;share-name&gt;</code> from your Windows PC.</p>
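<p>If the share doesn’t open, a quick first check is whether the SMB service is reachable at all. SMB runs over TCP port 445, so a simple connection test narrows down whether the problem is networking or the share configuration. A small Python sketch (the host name matches this build; adjust for yours):</p>

```python
import socket

def smb_reachable(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

<p>If <code class="language-plaintext highlighter-rouge">smb_reachable("nas.local")</code> returns <code class="language-plaintext highlighter-rouge">False</code>, the issue is name resolution, the network, or the SMB service not running, rather than share permissions.</p>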

<p>I recommend mapping the share as a network drive to avoid needing to re-enter credentials:</p>

<p><img src="/assets/images/nas/map-network-drive.png" alt="Map network drive" class="center" /></p>

<p>This allows you to see it as if it were a drive on your machine, in my case <code class="language-plaintext highlighter-rouge">Z:</code>.</p>

<p><img src="/assets/images/nas/mapped-network-drive.png" alt="Mapped network drive" class="center" /></p>
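<p>If you prefer to script the mapping, Windows’ <code class="language-plaintext highlighter-rouge">net use</code> command does the same thing. Here’s a sketch in Python that builds and runs the command; the drive letter and share name are just examples from this build, and the actual mapping only works on Windows:</p>

```python
import subprocess

def net_use_command(drive_letter: str, unc_path: str, persistent: bool = True) -> list[str]:
    """Build the Windows 'net use' command for mapping a share to a drive letter."""
    flag = "/persistent:yes" if persistent else "/persistent:no"
    return ["net", "use", f"{drive_letter}:", unc_path, flag]

def map_share(drive_letter: str, unc_path: str) -> None:
    """Run the mapping; only meaningful on Windows, where 'net use' exists."""
    subprocess.run(net_use_command(drive_letter, unc_path), check=True)
```

<p>For example, <code class="language-plaintext highlighter-rouge">map_share("Z", r"\\nas.local\federshare")</code> maps the share to <code class="language-plaintext highlighter-rouge">Z:</code>; the command construction is separated out so it can be checked on any platform.</p>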

<p>At this point you can copy all your data!</p>

<h3 id="configuring-jellyfin">Configuring Jellyfin</h3>

<p>TrueNAS Scale supports installing “Apps”, which are effectively just Docker containers. One such app is Jellyfin, a media server.</p>

<p>First go to the Apps tab and find the Settings drop down to choose a pool to use for the applications. It will create a dataset inside the selected pool called “ix-applications” to store the application data. TrueNAS recommends using an SSD pool if possible, but in my case I only have 1 pool, the HDD pool, so I just used it.</p>

<p>Now that an application pool is selected, you can install the Jellyfin app. Click “Discover Apps” and search for and install Jellyfin.</p>

<p>You’ll mostly just use the default settings, but there is one key piece you need to configure: giving the Jellyfin app access to your data.</p>

<p>Under the Storage Configuration you should see “Additional Storage”. Click “Add”, and use Type: Host Path. For the Mount Path, use whatever path you want to be visible on the Jellyfin side, eg <code class="language-plaintext highlighter-rouge">/movies</code>. For the Host Path, select the path to the dataset with your movies, eg <code class="language-plaintext highlighter-rouge">/mnt/Default/federshare/Media/Movies</code>. Repeat this process for TV Shows, for example <code class="language-plaintext highlighter-rouge">/shows</code> as the mount path and <code class="language-plaintext highlighter-rouge">/mnt/Default/federshare/Media/TV</code> as the host path.</p>

<p><img src="/assets/images/nas/jellyfin-storage-config.png" alt="Jellyfin storage configuration" class="center" /></p>

<p>It’ll take a minute or two for the Jellyfin app to install and start, but once it’s done you can click the “Web Portal” button, which will take you to the Jellyfin web portal. Here you’ll need to set up Jellyfin users and libraries.</p>

<p>The users you configure here are how people log into a Jellyfin client application to watch media, so these are likely the accounts you’ll need for your family members. I set up separate accounts for each of my family members so that I could apply parental controls.</p>

<p>I did run into a permissions quirk where the Jellyfin app didn’t have permissions to the <code class="language-plaintext highlighter-rouge">/movies</code> and <code class="language-plaintext highlighter-rouge">/shows</code> mount paths I configured, possibly because of the SMB share, but I’m not certain of the reason. I ended up needing to go to the dataset and editing the permissions and granting the <code class="language-plaintext highlighter-rouge">everyone@</code> group read permissions.</p>

<p><img src="/assets/images/nas/truenas-acl.png" alt="TrueNAS ACL" class="center" /></p>

<p>Another quirk I ran into is that my subtitles were not named in a way which Jellyfin was able to automatically pick up. They were named <code class="language-plaintext highlighter-rouge">&lt;Movie Title&gt;-en-us_cc.srt</code> where Jellyfin requires <code class="language-plaintext highlighter-rouge">&lt;Movie Title&gt;.en-us_cc.srt</code>. This was fixed easily enough with <a href="https://learn.microsoft.com/en-us/windows/powertoys/powerrename" target="_blank">PowerRename</a>.</p>
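<p>PowerRename did the job for me, but the same fix can be scripted. A sketch in Python; the suffix constants match my files’ naming and are assumptions you’d adjust for yours:</p>

```python
from pathlib import Path

# My files ended in "-en-us_cc.srt", but Jellyfin expects ".en-us_cc.srt".
OLD_SUFFIX = "-en-us_cc.srt"
NEW_SUFFIX = ".en-us_cc.srt"

def fix_subtitle_names(directory: str) -> list[str]:
    """Rename '<Title>-en-us_cc.srt' files to '<Title>.en-us_cc.srt'; return the new names."""
    renamed = []
    for path in Path(directory).rglob(f"*{OLD_SUFFIX}"):
        target = path.with_name(path.name[: -len(OLD_SUFFIX)] + NEW_SUFFIX)
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

<p>Files already using the dot-separated form don’t match the glob, so the rename only touches the misnamed subtitles.</p>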

<p><img src="/assets/images/nas/powerrename.png" alt="PowerRename" class="center" /></p>

<p>Now you’re ready to install the Jellyfin client on your various devices and enjoy your own personal local media streaming service!</p>]]></content><author><name></name></author><category term="Home Networking" /><category term="Self Hosting" /><category term="Home Networking" /><category term="Self Hosting" /><category term="NAS" /><category term="Media Server" /><category term="Jellyfin" /><category term="TrueNAS" /><summary type="html"><![CDATA[Lately I’ve been realizing that purchased digital media isn’t really yours, and a recent event in particular sparked me into doing something I’ve been wanting to do for a while now: build a NAS to contain all my legally purchased digital media, digital backups of physical media, as well as personal documents and photos.]]></summary></entry><entry><title type="html">Limited Parallelism Work Queue</title><link href="https://dfederm.com/limited-parallelism-work-queue/" rel="alternate" type="text/html" title="Limited Parallelism Work Queue" /><published>2023-12-03T00:00:00+00:00</published><updated>2023-12-03T00:00:00+00:00</updated><id>https://dfederm.com/limited-parallelism-work-queue</id><content type="html" xml:base="https://dfederm.com/limited-parallelism-work-queue/"><![CDATA[<p>In the realm of asynchronous operations within C# applications, maintaining optimal performance often requires a delicate balance between execution and resource allocation. The need to prevent CPU oversubscription while managing numerous concurrent tasks is a common challenge faced by developers.</p>

<p>This blog post delves into a crucial strategy to navigate this challenge: the implementation of a limited parallelism work queue. Rather than allowing unchecked parallelism that might overwhelm system resources, employing a limited parallelism work queue offers a systematic approach to manage asynchronous tasks effectively.</p>

<p>The heart of implementing a work queue is the producer/consumer model. This can be well-represented by <a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/channels" target="_blank">Channels</a>. If you’re not familiar with channels, Stephen Toub has a <a href="https://devblogs.microsoft.com/dotnet/an-introduction-to-system-threading-channels/" target="_blank">great introduction</a>. Essentially a channel stores data from one or more producers to be consumed by one or more consumers.</p>

<p>In our case, the producers will be the components which enqueue work and the consumers will be the workers we spin up to process the work.</p>

<p>If desired, you can skip the explanation and go straight to the <a href="https://gist.github.com/dfederm/445e971abf5340ab4b5b9ec8ef41a460" target="_blank">Gist</a>.</p>

<p>Let’s start with creating the channel and the worker tasks. For now, we don’t know exactly what kind of data we need to store, so we’re just using <code class="language-plaintext highlighter-rouge">object</code>.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">WorkQueue</span>
<span class="p">{</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Channel</span><span class="p">&lt;</span><span class="kt">object</span><span class="p">&gt;</span> <span class="n">_channel</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Task</span><span class="p">[]</span> <span class="n">_workerTasks</span><span class="p">;</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">(</span><span class="kt">int</span> <span class="n">parallelism</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">_channel</span> <span class="p">=</span> <span class="n">Channel</span><span class="p">.</span><span class="n">CreateUnbounded</span><span class="p">&lt;</span><span class="kt">object</span><span class="p">&gt;();</span>

        <span class="c1">// Create a bunch of worker tasks to process the work.</span>
        <span class="n">_workerTasks</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Task</span><span class="p">[</span><span class="n">parallelism</span><span class="p">];</span>
        <span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="p">=</span> <span class="m">0</span><span class="p">;</span> <span class="n">i</span> <span class="p">&lt;</span> <span class="n">_workerTasks</span><span class="p">.</span><span class="n">Length</span><span class="p">;</span> <span class="n">i</span><span class="p">++)</span>
        <span class="p">{</span>
            <span class="n">_workerTasks</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="nf">Run</span><span class="p">(</span>
                <span class="k">async</span> <span class="p">()</span> <span class="p">=&gt;</span>
                <span class="p">{</span>
                    <span class="k">await</span> <span class="k">foreach</span> <span class="p">(</span><span class="kt">object</span> <span class="n">context</span> <span class="k">in</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="nf">ReadAllAsync</span><span class="p">())</span>
                    <span class="p">{</span>
                        <span class="c1">// TODO: Process work</span>
                    <span class="p">}</span>
                <span class="p">});</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>This simply creates the <code class="language-plaintext highlighter-rouge">Channel</code> and multiple worker tasks which continuously read from the channel. <code class="language-plaintext highlighter-rouge">Channel.Reader.ReadAllAsync</code> will yield until there is data to read, so it’s not blocking any threads.</p>

<p>Now we need the producer side of things. This initial implementation will work with <code class="language-plaintext highlighter-rouge">Task</code> and not <code class="language-plaintext highlighter-rouge">Task&lt;T&gt;</code>, so we know the return type for the method needs to be a <code class="language-plaintext highlighter-rouge">Task</code>. The caller needs to provide a factory for actually performing the work, so the parameter can be a <code class="language-plaintext highlighter-rouge">Func&lt;Task&gt;</code>. This leads us to the following signature:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">Task</span><span class="p">&gt;</span> <span class="n">taskFunc</span><span class="p">);</span>
</code></pre></div></div>

<p>As we want to manage the parallelism of the work, we cannot call the <code class="language-plaintext highlighter-rouge">Func&lt;Task&gt;</code> to get the <code class="language-plaintext highlighter-rouge">Task</code>, as that would start execution of the task. The way to return a <code class="language-plaintext highlighter-rouge">Task</code> when we don’t have one is to use <code class="language-plaintext highlighter-rouge">TaskCompletionSource</code>. This allows us to return a <code class="language-plaintext highlighter-rouge">Task</code> which we can later complete with a result, cancellation, or exception, based on what happens with the provided work.</p>

<p>We also know we need to write <em>something</em> to the channel, but we still don’t know what yet, so let’s continue to use <code class="language-plaintext highlighter-rouge">object</code>.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">Task</span><span class="p">&gt;</span> <span class="n">taskFunc</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>
        <span class="kt">object</span> <span class="n">context</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">WriteAsync</span><span class="p">(</span><span class="n">context</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
    <span class="p">}</span>
</code></pre></div></div>

<p>Now that we have the channel reader and writer usages, we can figure out what we actually need to store in the channel. The caller provided a <code class="language-plaintext highlighter-rouge">Func&lt;Task&gt;</code> to perform the work, and we need to capture the <code class="language-plaintext highlighter-rouge">TaskCompletionSource</code> so we can complete the <code class="language-plaintext highlighter-rouge">Task</code> we returned to the caller. So let’s define the context as a simple record struct with those two members:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="k">readonly</span> <span class="n">record</span> <span class="k">struct</span> <span class="nc">WorkContext</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">Task</span><span class="p">&gt;</span> <span class="n">TaskFunc</span><span class="p">,</span> <span class="n">TaskCompletionSource</span> <span class="n">TaskCompletionSource</span><span class="p">);</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">Channel&lt;object&gt;</code> should be updated to use <code class="language-plaintext highlighter-rouge">WorkContext</code> instead, and the reader and writer call sites adjusted accordingly. We now have the following:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">WorkQueue</span>
<span class="p">{</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Channel</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;</span> <span class="n">_channel</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Task</span><span class="p">[]</span> <span class="n">_workerTasks</span><span class="p">;</span>

    <span class="k">private</span> <span class="k">readonly</span> <span class="n">record</span> <span class="k">struct</span> <span class="nc">WorkContext</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">Task</span><span class="p">&gt;</span> <span class="n">TaskFunc</span><span class="p">,</span> <span class="n">TaskCompletionSource</span> <span class="n">TaskCompletionSource</span><span class="p">);</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">(</span><span class="kt">int</span> <span class="n">parallelism</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">_channel</span> <span class="p">=</span> <span class="n">Channel</span><span class="p">.</span><span class="n">CreateUnbounded</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;();</span>

        <span class="c1">// Create a bunch of worker tasks to process the work.</span>
        <span class="n">_workerTasks</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Task</span><span class="p">[</span><span class="n">parallelism</span><span class="p">];</span>
        <span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="p">=</span> <span class="m">0</span><span class="p">;</span> <span class="n">i</span> <span class="p">&lt;</span> <span class="n">_workerTasks</span><span class="p">.</span><span class="n">Length</span><span class="p">;</span> <span class="n">i</span><span class="p">++)</span>
        <span class="p">{</span>
            <span class="n">_workerTasks</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="nf">Run</span><span class="p">(</span>
                <span class="k">async</span> <span class="p">()</span> <span class="p">=&gt;</span>
                <span class="p">{</span>
                    <span class="k">await</span> <span class="k">foreach</span> <span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span> <span class="k">in</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="nf">ReadAllAsync</span><span class="p">())</span>
                    <span class="p">{</span>
                        <span class="c1">// TODO: Process work</span>
                    <span class="p">}</span>
                <span class="p">});</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">Task</span><span class="p">&gt;</span> <span class="n">taskFunc</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>
        <span class="n">WorkContext</span> <span class="n">context</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="n">taskFunc</span><span class="p">,</span> <span class="n">taskCompletionSource</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">WriteAsync</span><span class="p">(</span><span class="n">context</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Now we need to actually process the work. This involves executing the provided <code class="language-plaintext highlighter-rouge">Func&lt;Task&gt;</code> and handling the result appropriately. We simply invoke the <code class="language-plaintext highlighter-rouge">Func</code> and await the resulting <code class="language-plaintext highlighter-rouge">Task</code>. Whether that <code class="language-plaintext highlighter-rouge">Task</code> completes successfully, throws an exception, or is cancelled, we pass that outcome through to the <code class="language-plaintext highlighter-rouge">Task</code> we returned to the caller who enqueued the work.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="k">private</span> <span class="k">static</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">ProcessWorkAsync</span><span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">try</span>
        <span class="p">{</span>
            <span class="k">await</span> <span class="n">context</span><span class="p">.</span><span class="nf">TaskFunc</span><span class="p">();</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetResult</span><span class="p">();</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">ex</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">);</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>
</code></pre></div></div>

<p>Finally, we need to handle shutting down the work queue. This is done by completing the channel and waiting for the worker tasks to drain. Calling <code class="language-plaintext highlighter-rouge">Channel.Writer.Complete</code> disallows additional items from being written and, as a side-effect, causes the <code class="language-plaintext highlighter-rouge">Channel.Reader.ReadAllAsync</code> enumerable to stop awaiting more results and complete. This in turn allows our worker tasks to complete.</p>

<p>For convenience, we will make <code class="language-plaintext highlighter-rouge">WorkQueue : IAsyncDisposable</code> so the <code class="language-plaintext highlighter-rouge">WorkQueue</code> can simply be disposed to shut it down.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="k">public</span> <span class="k">async</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
    <span class="p">{</span>
        <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">Complete</span><span class="p">();</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="n">Completion</span><span class="p">;</span>
        <span class="k">await</span> <span class="n">Task</span><span class="p">.</span><span class="nf">WhenAll</span><span class="p">(</span><span class="n">_workerTasks</span><span class="p">);</span>
    <span class="p">}</span>
</code></pre></div></div>
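<p>With <code class="language-plaintext highlighter-rouge">IAsyncDisposable</code> in place, the queue can be driven with <code class="language-plaintext highlighter-rouge">await using</code>. A sketch using the types defined above:</p>

```csharp
await using (WorkQueue queue = new(parallelism: 4))
{
    await queue.EnqueueWorkAsync(async () =>
    {
        await Task.Delay(100);
        Console.WriteLine("work item done");
    });
} // DisposeAsync completes the channel and waits for the workers to drain.
```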

<p>One thing we’ve left out is cancellation: executing work should be cancelled when the work queue is shut down, and the caller enqueueing a work item should be able to cancel that work item.</p>

<p>To address this, a <code class="language-plaintext highlighter-rouge">CancellationToken</code> should be provided by the caller enqueueing a work item. Additionally, the <code class="language-plaintext highlighter-rouge">WorkQueue</code> itself will need to manage a <code class="language-plaintext highlighter-rouge">CancellationTokenSource</code> which it cancels on <code class="language-plaintext highlighter-rouge">DisposeAsync</code>. Finally, when a work item is enqueued, the two cancellation tokens need to be linked and provided to the work item so it cancels properly when either the caller who enqueued it cancels or the work queue is shut down entirely. Putting all that together:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">WorkQueue</span> <span class="p">:</span> <span class="n">IAsyncDisposable</span>
<span class="p">{</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">CancellationTokenSource</span> <span class="n">_cancellationTokenSource</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Channel</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;</span> <span class="n">_channel</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Task</span><span class="p">[]</span> <span class="n">_workerTasks</span><span class="p">;</span>

    <span class="k">private</span> <span class="k">readonly</span> <span class="n">record</span> <span class="k">struct</span> <span class="nc">WorkContext</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">CancellationToken</span><span class="p">,</span> <span class="n">Task</span><span class="p">&gt;</span> <span class="n">TaskFunc</span><span class="p">,</span> <span class="n">TaskCompletionSource</span> <span class="n">TaskCompletionSource</span><span class="p">,</span> <span class="n">CancellationToken</span> <span class="n">CancellationToken</span><span class="p">);</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">()</span>
        <span class="p">:</span> <span class="k">this</span> <span class="p">(</span><span class="n">Environment</span><span class="p">.</span><span class="n">ProcessorCount</span><span class="p">)</span>
    <span class="p">{</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">(</span><span class="kt">int</span> <span class="n">parallelism</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">_cancellationTokenSource</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">CancellationTokenSource</span><span class="p">();</span>
        <span class="n">_channel</span> <span class="p">=</span> <span class="n">Channel</span><span class="p">.</span><span class="n">CreateUnbounded</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;();</span>

        <span class="c1">// Create a bunch of worker tasks to process the work.</span>
        <span class="n">_workerTasks</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Task</span><span class="p">[</span><span class="n">parallelism</span><span class="p">];</span>
        <span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="p">=</span> <span class="m">0</span><span class="p">;</span> <span class="n">i</span> <span class="p">&lt;</span> <span class="n">_workerTasks</span><span class="p">.</span><span class="n">Length</span><span class="p">;</span> <span class="n">i</span><span class="p">++)</span>
        <span class="p">{</span>
            <span class="n">_workerTasks</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="nf">Run</span><span class="p">(</span>
                <span class="k">async</span> <span class="p">()</span> <span class="p">=&gt;</span>
                <span class="p">{</span>
                    <span class="c1">// Not passing the cancellation token here, as we need to drain the entire channel to ensure we don't leave dangling Tasks.</span>
                    <span class="k">await</span> <span class="k">foreach</span> <span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span> <span class="k">in</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="nf">ReadAllAsync</span><span class="p">())</span>
                    <span class="p">{</span>
                        <span class="k">await</span> <span class="nf">ProcessWorkAsync</span><span class="p">(</span><span class="n">context</span><span class="p">);</span>
                    <span class="p">}</span>
                <span class="p">});</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">CancellationToken</span><span class="p">,</span> <span class="n">Task</span><span class="p">&gt;</span> <span class="n">taskFunc</span><span class="p">,</span> <span class="n">CancellationToken</span> <span class="n">cancellationToken</span> <span class="p">=</span> <span class="k">default</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">cancellationToken</span><span class="p">.</span><span class="nf">ThrowIfCancellationRequested</span><span class="p">();</span>
        <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>
        <span class="n">CancellationToken</span> <span class="n">linkedToken</span> <span class="p">=</span> <span class="n">cancellationToken</span><span class="p">.</span><span class="n">CanBeCanceled</span>
            <span class="p">?</span> <span class="n">CancellationTokenSource</span><span class="p">.</span><span class="nf">CreateLinkedTokenSource</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">,</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">).</span><span class="n">Token</span>
            <span class="p">:</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">;</span>
        <span class="n">WorkContext</span> <span class="n">context</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="n">taskFunc</span><span class="p">,</span> <span class="n">taskCompletionSource</span><span class="p">,</span> <span class="n">linkedToken</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">WriteAsync</span><span class="p">(</span><span class="n">context</span><span class="p">,</span> <span class="n">linkedToken</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
    <span class="p">{</span>
        <span class="k">await</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="nf">CancelAsync</span><span class="p">();</span>
        <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">Complete</span><span class="p">();</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="n">Completion</span><span class="p">;</span>
        <span class="k">await</span> <span class="n">Task</span><span class="p">.</span><span class="nf">WhenAll</span><span class="p">(</span><span class="n">_workerTasks</span><span class="p">);</span>
        <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="nf">Dispose</span><span class="p">();</span>
    <span class="p">}</span>

    <span class="k">private</span> <span class="k">static</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">ProcessWorkAsync</span><span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">context</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">.</span><span class="n">IsCancellationRequested</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">context</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">);</span>
            <span class="k">return</span><span class="p">;</span>
        <span class="p">}</span>

        <span class="k">try</span>
        <span class="p">{</span>
            <span class="k">await</span> <span class="n">context</span><span class="p">.</span><span class="nf">TaskFunc</span><span class="p">(</span><span class="n">context</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">);</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetResult</span><span class="p">();</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">ex</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">);</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
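<p>To exercise the cancellation path, a caller might cancel an in-flight item through its own token (a sketch against the class above):</p>

```csharp
await using WorkQueue queue = new();
using CancellationTokenSource cts = new();

Task work = queue.EnqueueWorkAsync(
    async ct => await Task.Delay(Timeout.Infinite, ct),
    cts.Token);

cts.Cancel();

try
{
    await work;
}
catch (OperationCanceledException)
{
    // The worker observed the linked token and cancelled the work item.
}
```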

<p>If the result of the work is required, this approach may be awkward since the provided <code class="language-plaintext highlighter-rouge">Func&lt;CancellationToken, Task&gt;</code> would need to have side-effects. For example, something like the following:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">string</span> <span class="n">input</span> <span class="p">=</span> <span class="c1">// ...</span>
<span class="kt">int</span> <span class="n">result</span> <span class="p">=</span> <span class="p">-</span><span class="m">1</span><span class="p">;</span>
<span class="k">await</span> <span class="n">queue</span><span class="p">.</span><span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="k">async</span> <span class="n">ct</span> <span class="p">=&gt;</span> <span class="n">result</span> <span class="p">=</span> <span class="k">await</span> <span class="nf">ProcessAsync</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="n">ct</span><span class="p">),</span> <span class="n">cancellationToken</span><span class="p">);</span>

<span class="c1">// Do something with the result here</span>
<span class="c1">// ...</span>
</code></pre></div></div>

<p>An alternate approach would be to have the <code class="language-plaintext highlighter-rouge">WorkQueue</code> take the processing function and then the <code class="language-plaintext highlighter-rouge">EnqueueWorkAsync</code> method could return the result directly. This requires the work queue to process inputs of the same type and in the same way, but can make the calling pattern more elegant:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">string</span> <span class="n">input</span> <span class="p">=</span> <span class="c1">// ...</span>
<span class="kt">int</span> <span class="n">result</span> <span class="p">=</span> <span class="k">await</span> <span class="n">queue</span><span class="p">.</span><span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="n">cancellationToken</span><span class="p">);</span>

<span class="c1">// Do something with the result here</span>
<span class="c1">// ...</span>
</code></pre></div></div>

<p>The change to the implementation is straightforward. <code class="language-plaintext highlighter-rouge">WorkQueue</code> becomes the generic <code class="language-plaintext highlighter-rouge">WorkQueue&lt;TInput, TResult&gt;</code>, and the <code class="language-plaintext highlighter-rouge">Func&lt;CancellationToken, Task&gt;</code> becomes a <code class="language-plaintext highlighter-rouge">Func&lt;TInput, CancellationToken, Task&lt;TResult&gt;&gt;</code>, which moves from <code class="language-plaintext highlighter-rouge">EnqueueWorkAsync</code> to the constructor.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">WorkQueue</span><span class="p">&lt;</span><span class="n">TInput</span><span class="p">,</span> <span class="n">TResult</span><span class="p">&gt;</span> <span class="p">:</span> <span class="n">IAsyncDisposable</span>
<span class="p">{</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Func</span><span class="p">&lt;</span><span class="n">TInput</span><span class="p">,</span> <span class="n">CancellationToken</span><span class="p">,</span> <span class="n">Task</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;&gt;</span> <span class="n">_processFunc</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">CancellationTokenSource</span> <span class="n">_cancellationTokenSource</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Channel</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;</span> <span class="n">_channel</span><span class="p">;</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="n">Task</span><span class="p">[]</span> <span class="n">_workerTasks</span><span class="p">;</span>

    <span class="k">private</span> <span class="k">readonly</span> <span class="n">record</span> <span class="k">struct</span> <span class="nc">WorkContext</span><span class="p">(</span><span class="n">TInput</span> <span class="n">Input</span><span class="p">,</span> <span class="n">TaskCompletionSource</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;</span> <span class="n">TaskCompletionSource</span><span class="p">,</span> <span class="n">CancellationToken</span> <span class="n">CancellationToken</span><span class="p">);</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">TInput</span><span class="p">,</span> <span class="n">CancellationToken</span><span class="p">,</span> <span class="n">Task</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;&gt;</span> <span class="n">processFunc</span><span class="p">)</span>
        <span class="p">:</span> <span class="k">this</span><span class="p">(</span><span class="n">processFunc</span><span class="p">,</span> <span class="n">Environment</span><span class="p">.</span><span class="n">ProcessorCount</span><span class="p">)</span>
    <span class="p">{</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="nf">WorkQueue</span><span class="p">(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">TInput</span><span class="p">,</span> <span class="n">CancellationToken</span><span class="p">,</span> <span class="n">Task</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;&gt;</span> <span class="n">processFunc</span><span class="p">,</span> <span class="kt">int</span> <span class="n">parallelism</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">_processFunc</span> <span class="p">=</span> <span class="n">processFunc</span><span class="p">;</span>
        <span class="n">_cancellationTokenSource</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">CancellationTokenSource</span><span class="p">();</span>
        <span class="n">_channel</span> <span class="p">=</span> <span class="n">Channel</span><span class="p">.</span><span class="n">CreateUnbounded</span><span class="p">&lt;</span><span class="n">WorkContext</span><span class="p">&gt;();</span>

        <span class="c1">// Create a bunch of worker tasks to process the work.</span>
        <span class="n">_workerTasks</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Task</span><span class="p">[</span><span class="n">parallelism</span><span class="p">];</span>
        <span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="p">=</span> <span class="m">0</span><span class="p">;</span> <span class="n">i</span> <span class="p">&lt;</span> <span class="n">_workerTasks</span><span class="p">.</span><span class="n">Length</span><span class="p">;</span> <span class="n">i</span><span class="p">++)</span>
        <span class="p">{</span>
            <span class="n">_workerTasks</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="nf">Run</span><span class="p">(</span>
                <span class="k">async</span> <span class="p">()</span> <span class="p">=&gt;</span>
                <span class="p">{</span>
                    <span class="c1">// Not passing the cancellation token here, as we need to drain the entire channel to ensure we don't leave dangling Tasks.</span>
                    <span class="k">await</span> <span class="k">foreach</span> <span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span> <span class="k">in</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="nf">ReadAllAsync</span><span class="p">())</span>
                    <span class="p">{</span>
                        <span class="k">await</span> <span class="nf">ProcessWorkAsync</span><span class="p">(</span><span class="n">context</span><span class="p">,</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">);</span>
                    <span class="p">}</span>
                <span class="p">});</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;</span> <span class="nf">EnqueueWorkAsync</span><span class="p">(</span><span class="n">TInput</span> <span class="n">input</span><span class="p">,</span> <span class="n">CancellationToken</span> <span class="n">cancellationToken</span> <span class="p">=</span> <span class="k">default</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">cancellationToken</span><span class="p">.</span><span class="nf">ThrowIfCancellationRequested</span><span class="p">();</span>
        <span class="n">TaskCompletionSource</span><span class="p">&lt;</span><span class="n">TResult</span><span class="p">&gt;</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>
        <span class="n">CancellationToken</span> <span class="n">linkedToken</span> <span class="p">=</span> <span class="n">cancellationToken</span><span class="p">.</span><span class="n">CanBeCanceled</span>
            <span class="p">?</span> <span class="n">CancellationTokenSource</span><span class="p">.</span><span class="nf">CreateLinkedTokenSource</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">,</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">).</span><span class="n">Token</span>
            <span class="p">:</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">;</span>
        <span class="n">WorkContext</span> <span class="n">context</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="n">taskCompletionSource</span><span class="p">,</span> <span class="n">linkedToken</span><span class="p">);</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">WriteAsync</span><span class="p">(</span><span class="n">context</span><span class="p">,</span> <span class="n">linkedToken</span><span class="p">);</span>
        <span class="k">return</span> <span class="k">await</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
    <span class="p">{</span>
        <span class="k">await</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="nf">CancelAsync</span><span class="p">();</span>
        <span class="n">_channel</span><span class="p">.</span><span class="n">Writer</span><span class="p">.</span><span class="nf">Complete</span><span class="p">();</span>
        <span class="k">await</span> <span class="n">_channel</span><span class="p">.</span><span class="n">Reader</span><span class="p">.</span><span class="n">Completion</span><span class="p">;</span>
        <span class="k">await</span> <span class="n">Task</span><span class="p">.</span><span class="nf">WhenAll</span><span class="p">(</span><span class="n">_workerTasks</span><span class="p">);</span>
        <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="nf">Dispose</span><span class="p">();</span>
    <span class="p">}</span>

    <span class="k">private</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">ProcessWorkAsync</span><span class="p">(</span><span class="n">WorkContext</span> <span class="n">context</span><span class="p">,</span> <span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">cancellationToken</span><span class="p">.</span><span class="n">IsCancellationRequested</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
            <span class="k">return</span><span class="p">;</span>
        <span class="p">}</span>

        <span class="k">try</span>
        <span class="p">{</span>
            <span class="n">TResult</span> <span class="n">result</span> <span class="p">=</span> <span class="k">await</span> <span class="nf">_processFunc</span><span class="p">(</span><span class="n">context</span><span class="p">.</span><span class="n">Input</span><span class="p">,</span> <span class="n">cancellationToken</span><span class="p">);</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetResult</span><span class="p">(</span><span class="n">result</span><span class="p">);</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">ex</span><span class="p">.</span><span class="n">CancellationToken</span><span class="p">);</span>
        <span class="p">}</span>
        <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="n">context</span><span class="p">.</span><span class="n">TaskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>This blog post shows two alternate approaches for implementing a limited parallelism work queue to manage many tasks while avoiding overscheduling. These two implementations are best suited to different usage patterns and can be further customized or optimized for your specific use-cases.</p>]]></content><author><name></name></author><category term=".NET" /><category term=".NET" /><summary type="html"><![CDATA[In the realm of asynchronous operations within C# applications, maintaining optimal performance often requires a delicate balance between execution and resource allocation. The need to prevent CPU oversubscription while managing numerous concurrent tasks is a common challenge faced by developers.]]></summary></entry><entry><title type="html">Scripting Machine Setup</title><link href="https://dfederm.com/scripting-machine-setup/" rel="alternate" type="text/html" title="Scripting Machine Setup" /><published>2023-05-23T00:00:00+00:00</published><updated>2023-05-23T00:00:00+00:00</updated><id>https://dfederm.com/scripting-machine-setup</id><content type="html" xml:base="https://dfederm.com/scripting-machine-setup/"><![CDATA[<p>Lately, I’ve found myself setting up multiple computers, and with <a href="https://aka.ms/devbox" target="_blank">Microsoft DevBox</a> on the horizon, I anticipate working with “fresh” machines more frequently. Like many developers, I thrive in a familiar environment with my preferred tools and settings, as muscle memory kicks in and I can efficiently tackle any task. Unfortunately, the process of setting up a new machine can be quite cumbersome. To address this challenge, I took matters into my own hands and developed a script that streamlines the entire setup process for me.</p>

<p>Note that I previously wrote about a <a href="/setting-up-a-roaming-developer-console/">roaming developer console</a>, but it was not as robust as I needed, and a lot has changed since then, for example the release of <code class="language-plaintext highlighter-rouge">winget</code>.</p>

<p>You can find my completed script which I use for my personal setup on <a href="https://github.com/dfederm/MachineSetup" target="_blank">GitHub</a>. I’d recommend forking it and tuning it for your own personal preferences.</p>

<h2 id="requirements">Requirements</h2>

<p>A key requirement for this project, especially since I expected to iterate on it quite a bit at first, was to ensure the script was idempotent. The goal was to run and re-run the script multiple times while consistently achieving the desired machine state. This flexibility allowed me to make changes and easily apply them. As a result, I could even schedule the script to automatically incorporate any modifications I had made.</p>

<p>To enhance user-friendliness, I aimed for the script to skip unnecessary actions. Instead of blindly setting a registry key, I designed it to first check if the key was already in the desired state. This approach served two purposes: it provided valuable logging information to indicate what the script actually changed and avoided unnecessary elevation prompts.</p>

<p>Furthermore, I prioritized security and implemented a strategy to handle elevation. The script was designed to run unelevated by default, and only specific commands would require elevation if necessary. This adherence to the <a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege" target="_blank">principle of least privilege</a> improved security measures and mitigated potential issues related to file creation as an administrator. Admittedly, this has debatable value since this script is personal and so should be deemed trustworthy before executing it.</p>

<p>Overall, these considerations, including idempotence, skipping unnecessary actions, and managing elevation, played crucial roles in making the script more robust, user-friendly, and secure.</p>

<h2 id="defining-machine-specific-paths">Defining Machine-Specific Paths</h2>

<p>To ensure compatibility with different machine setups, the script begins by defining two crucial paths that might vary based on the drive topography of each machine. For example, on my personal machine I have a separate OS drive and data drive, while on my work machine I have a single drive. Specifically, these paths are the <code class="language-plaintext highlighter-rouge">CodeDir</code> and <code class="language-plaintext highlighter-rouge">BinDir</code>.</p>

<p><code class="language-plaintext highlighter-rouge">CodeDir</code> represents the root directory for all code, where I typically clone git repositories and store project files. <code class="language-plaintext highlighter-rouge">BinDir</code> is the designated location for scripts and standalone tools.</p>

<p>The setup script initiates a prompt to determine the locations of <code class="language-plaintext highlighter-rouge">CodeDir</code> and <code class="language-plaintext highlighter-rouge">BinDir</code>, assuming they haven’t been defined previously. Once the user provides the necessary input, the script proceeds to set these paths as user-wide environment variables. Additionally, <code class="language-plaintext highlighter-rouge">BinDir</code> is added to the user-wide <code class="language-plaintext highlighter-rouge">PATH</code>, ensuring convenient access to scripts and tools from anywhere within the system.</p>
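<p>As a rough sketch of that step (the <code class="language-plaintext highlighter-rouge">D:\Code</code> and <code class="language-plaintext highlighter-rouge">D:\Bin</code> paths are purely illustrative), persisting user-wide environment variables from PowerShell looks like this:</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Persist the variables user-wide (paths are illustrative)
[Environment]::SetEnvironmentVariable("CodeDir", "D:\Code", "User")
[Environment]::SetEnvironmentVariable("BinDir", "D:\Bin", "User")

# Add BinDir to the user-wide PATH if it isn't already there
$UserPath = [Environment]::GetEnvironmentVariable("Path", "User")
if ($UserPath -notlike "*D:\Bin*") {
    [Environment]::SetEnvironmentVariable("Path", "$UserPath;D:\Bin", "User")
}
</code></pre></div></div>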

<h2 id="configuring-windows">Configuring Windows</h2>

<p>Configuring Windows revolves around making modifications to the registry. The setup script encompasses several essential registry tweaks and configuration adjustments, including:</p>

<ul>
  <li>Configuring cmd Autorun to run <code class="language-plaintext highlighter-rouge">%BinDir%\init.cmd</code> (more on that later)</li>
  <li>Showing file extensions in Explorer</li>
  <li>Showing hidden files and directories in Explorer</li>
  <li>Restoring the classic context menu</li>
  <li>Disabling Edge tabs showing in Alt+Tab</li>
  <li>Enabling Developer Mode</li>
  <li>Enabling Remote Desktop</li>
  <li>Enabling Long Paths</li>
  <li>Opting out of Windows Telemetry</li>
  <li>Excluding <code class="language-plaintext highlighter-rouge">CodeDir</code> from Defender</li>
</ul>
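<p>As an example of the idempotent pattern described earlier, the "show file extensions" tweak can be sketched as follows: the script only writes the registry value when it isn’t already in the desired state, which keeps the logging meaningful and avoids unnecessary work.</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Show file extensions in Explorer (0 = show, 1 = hide)
$Key = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"
$Current = (Get-ItemProperty -Path $Key -Name "HideFileExt" -ErrorAction SilentlyContinue)."HideFileExt"
if ($Current -ne 0) {
    Set-ItemProperty -Path $Key -Name "HideFileExt" -Value 0 -Type DWord
    Write-Host "Configured Explorer to show file extensions"
}
</code></pre></div></div>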

<p>I will certainly be adding more to this list as time goes on.</p>

<h2 id="uninstalling-bloatware">Uninstalling Bloatware</h2>

<p>When it comes to debloating scripts and tools, it’s important to strike a balance. I find that many available scripts tend to be overly aggressive, removing applications that might actually be useful or causing unintended harm to the system. In my personal experience, I find it unnecessary to uninstall essential applications like the Edge browser or OneDrive. Additionally, it’s worth noting that Microsoft <a href="https://support.microsoft.com/en-us/topic/microsoft-support-policy-for-the-use-of-registry-cleaning-utilities-0485f4df-9520-3691-2461-7b0fd54e8b3a" target="_blank">discourages</a> the use of registry cleaners due to potential malware risks, and honestly orphaned registry keys take up virtually no disk space and don’t slow the system down in any way.</p>

<p>Nevertheless, I do believe there is value in uninstalling a few specific applications that come bundled with Windows. These include:</p>

<ul>
  <li>Cortana: Personally I don’t find Cortana useful.</li>
  <li>Bing Weather: It’s not my preferred method for checking the weather.</li>
  <li>Get Help: I haven’t found this app useful.</li>
  <li>Get Started: I haven’t found this app useful either.</li>
  <li>Mixed Reality Portal: I don’t use virtual reality experiences on my desktop computer (or at all for that matter).</li>
</ul>
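<p>Removing one of these bundled apps follows the same pattern for each; here is a sketch using Bing Weather (discover the exact package names on your machine with <code class="language-plaintext highlighter-rouge">Get-AppxPackage</code>):</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Remove an app for the current user, but only if it's installed.
# Discover package names with: Get-AppxPackage | Select-Object Name
Get-AppxPackage -Name "Microsoft.BingWeather" -ErrorAction SilentlyContinue |
    Remove-AppxPackage
</code></pre></div></div>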

<p>Beyond that, a clean install of Windows should be relatively free of bloatware applications.</p>

<h2 id="installing-applications">Installing Applications</h2>

<p>Many applications these days can be installed and updated via <a href="https://github.com/microsoft/winget-cli" target="_blank">winget</a>. Winget can easily be scripted to install a list of applications, and for me that list includes:</p>

<ul>
  <li>7zip: A versatile file compression tool.</li>
  <li>DiffMerge: For file or folder comparisons.</li>
  <li>Git: For version control</li>
  <li>HWiNFO: For monitoring CPU/GPU temperatures and clock speeds</li>
  <li>ILSpy: A decompiler for .NET assemblies.</li>
  <li>Microsoft Teams: For work</li>
  <li>MSBuild Structured Log Viewer: A tool for <a href="/debugging-msbuild/">debugging MSBuild</a>.</li>
  <li>.NET 7 SDK: For developing with .NET</li>
  <li>Node.js: For developing with JavaScript</li>
  <li>Notepad++: One of my favored text editors</li>
  <li>NuGet: Package manager for .NET</li>
  <li>NuGet Package Explorer: A UI for inspecting NuGet packages</li>
  <li>PowerShell: Better than Windows PowerShell</li>
  <li>PowerToys: Various useful utilities</li>
  <li>Remote Desktop Client: modern version of <code class="language-plaintext highlighter-rouge">mstsc</code></li>
  <li>Regex Hero: Helpful for working with regular expressions</li>
  <li>SQL Server Management Studio: For working with SQL databases</li>
  <li>Sysinternals Suite: Various useful utilities</li>
  <li>Telegram: Favored communication app</li>
  <li>Visual Studio Code: One of my favored text/code editors</li>
  <li>Visual Studio 2022 Enterprise: Code editor</li>
  <li>Visual Studio 2022 Enterprise Preview: Daily driver code editor</li>
  <li>WinDirStat: For viewing disk usage</li>
  <li>Windows Terminal: Better than the stock one</li>
</ul>
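<p>Scripting winget over a list like this is straightforward. A minimal sketch (the package IDs shown are examples; find the real ones with <code class="language-plaintext highlighter-rouge">winget search</code>) might look like:</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Package IDs are examples; discover them with: winget search &lt;name&gt;
$Packages = @(
    "7zip.7zip",
    "Git.Git",
    "Microsoft.PowerShell",
    "Microsoft.WindowsTerminal"
)
foreach ($Package in $Packages) {
    # winget detects already-installed packages rather than reinstalling them
    winget install --exact --id $Package --silent --accept-package-agreements --accept-source-agreements
}
</code></pre></div></div>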

<p>While most applications can be installed via Winget, there are a few exceptions. In those cases, the script takes care of installing those applications separately. Two such examples are the Azure Artifacts Credential Provider (for Azure Artifacts feeds) and WSL. Note that installing WSL involves enabling some Windows components which require a reboot to fully finish installing.</p>

<h2 id="configuring-applications">Configuring Applications</h2>

<p>Once the applications are installed, the setup script proceeds to configure them. Some applications are configured via the registry, while others use environment variables, and some even use configuration files. The following configurations are performed by the script:</p>

<ul>
  <li>Setting git config and aliases</li>
  <li>Enable WAM integration for Git: Promptless auth for Azure Repos</li>
  <li>Force NuGet to use auth dialogs: Avoid device code auth for Azure Feeds in favor of a browser popup window</li>
  <li>Configure the NuGet cache locations: The defaults are under the user profile but I find a path under the <code class="language-plaintext highlighter-rouge">CodeDir</code> to be more appropriate.</li>
  <li>Opting out of VS Telemetry: Prioritizing privacy</li>
  <li>Opting out of .NET Telemetry: Prioritizing privacy</li>
  <li>Copying Windows Terminal settings (more on this later)</li>
</ul>
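<p>The git portion boils down to a handful of <code class="language-plaintext highlighter-rouge">git config</code> calls. The specific settings and aliases below are illustrative rather than the exact ones from my script:</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Illustrative git settings and aliases
git config --global core.autocrlf true
git config --global pull.rebase true
git config --global alias.st "status"
git config --global alias.co "checkout"
</code></pre></div></div>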

<h2 id="bootstrapping">Bootstrapping</h2>

<p>A keen eye may have noticed that the setup script installs Git, but the script lives on GitHub, so there is a bootstrapping problem. How can we download the script and other assets from GitHub?</p>

<p>Luckily it’s fairly easy to download an entire GitHub repository as a zip file. The following PowerShell will download the zip, extract it, and run it:</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nf">$TempDir</span> <span class="nf">=</span> <span class="nf">"$env:TEMP\MachineSetup"</span>
<span class="nf">Remove-Item</span> <span class="nf">$TempDir</span> <span class="nf">-Recurse</span> <span class="nf">-Force</span> <span class="nf">-ErrorAction</span> <span class="nf">SilentlyContinue</span>
<span class="nf">New-Item</span> <span class="nf">-Path</span> <span class="nf">$TempDir</span> <span class="nf">-ItemType</span> <span class="nf">Directory</span> <span class="p">&gt;</span> <span class="nf">$null</span>
<span class="nf">$ZipPath</span> <span class="nf">=</span> <span class="nf">"$TempDir\bundle.zip"</span>
<span class="nf">$ProgressPreference</span> <span class="nf">=</span> <span class="nf">'SilentlyContinue'</span>
<span class="nf">Invoke-WebRequest</span> <span class="nf">-Uri</span> <span class="nf">https:</span><span class="err">/</span><span class="nv">/github.com/dfederm/MachineSetup/archive/refs/heads/main.zip</span> <span class="nf">-OutFile</span> <span class="nf">$ZipPath</span>
<span class="nf">$ProgressPreference</span> <span class="nf">=</span> <span class="nf">'Continue'</span>
<span class="nf">Expand-Archive</span> <span class="nf">-LiteralPath</span> <span class="nf">$ZipPath</span> <span class="nf">-DestinationPath</span> <span class="nf">$TempDir</span>
<span class="nf">$SetupScript</span> <span class="nf">=</span> <span class="s">(Get-ChildItem -Path $TempDir -Filter setup.ps1 -Recurse)</span><span class="nf">.FullName</span>
<span class="nf">&amp;</span> <span class="nf">$SetupScript</span> <span class="nf">@args</span>
<span class="nf">Remove-Item</span> <span class="nf">$TempDir</span> <span class="nf">-Recurse</span> <span class="nf">-Force</span>
</code></pre></div></div>

<p>That’s a bit much to copy and paste though, so I saved that as a <code class="language-plaintext highlighter-rouge">bootstrap.ps1</code> script in the repo, so the full bootstrapping is a one-liner:</p>

<div class="language-ps highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nf">iex</span> <span class="nf">"&amp;</span> <span class="p">{</span> <span class="nf">$</span><span class="s">(iwr https://raw.githubusercontent.com/dfederm/MachineSetup/main/bootstrap.ps1)</span> <span class="p">}</span><span class="nf">"</span> <span class="nf">|</span> <span class="nf">Out-Null</span>
</code></pre></div></div>

<p>It’s a bit roundabout but the one-liner will download and execute <code class="language-plaintext highlighter-rouge">bootstrap.ps1</code>, which will in turn download the entire repo as a zip file, extract it, and run the setup script.</p>

<h2 id="bindir-and-autorun"><code class="language-plaintext highlighter-rouge">BinDir</code> and autorun</h2>

<p>With the bootstrap process in place, we can finally complete the picture with the aforementioned <code class="language-plaintext highlighter-rouge">init.cmd</code> autorun script and the <code class="language-plaintext highlighter-rouge">BinDir</code>. The repo contains a <code class="language-plaintext highlighter-rouge">bin</code> directory which is copied to <code class="language-plaintext highlighter-rouge">BinDir</code> and contains the <code class="language-plaintext highlighter-rouge">init.cmd</code> autorun and other necessary scripts or files.</p>

<p>The <code class="language-plaintext highlighter-rouge">init.cmd</code> autorun is described in more detail in my <a href="/setting-up-a-roaming-developer-console/">previous blog post</a>, but essentially it’s a script that runs every time <code class="language-plaintext highlighter-rouge">cmd</code> is launched. I use it primarily to set up <code class="language-plaintext highlighter-rouge">DOSKEY</code> macros like <code class="language-plaintext highlighter-rouge">n</code> for launching Notepad++. Note that if you prefer PowerShell, you can set up similar behavior using Profiles (<code class="language-plaintext highlighter-rouge">%UserProfile%\Documents\PowerShell\Profile.ps1</code>).</p>
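<p>A minimal <code class="language-plaintext highlighter-rouge">init.cmd</code> illustrating the idea (the paths and macros here are examples, not my exact file):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@echo off

REM Example DOSKEY macros; the install path is illustrative
DOSKEY n="C:\Program Files\Notepad++\notepad++.exe" $*
DOSKEY gs=git status $*
DOSKEY ..=cd ..
</code></pre></div></div>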

<p>Additionally, the reason why <code class="language-plaintext highlighter-rouge">BinDir</code> is on the <code class="language-plaintext highlighter-rouge">PATH</code> is because any other helpful scripts can be added there and be invoked anywhere.</p>

<p>Finally, a backup of my Windows Terminal <code class="language-plaintext highlighter-rouge">settings.json</code> is in this directory so that the setup script can simply copy it to configure Windows Terminal.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Setting up a new machine doesn’t have to be a cumbersome process. By adopting this setup script and following the outlined steps, you can significantly reduce the time and effort required to configure new machines while ensuring a consistent and optimized working environment. With the power of automation and the flexibility of customization, the setup script presented in this blog post offers a practical solution to streamline the machine setup experience. Embrace the script, tailor it to your preferences, and let it handle the heavy lifting for you, allowing you to focus on what matters most—writing code and building remarkable software.</p>]]></content><author><name></name></author><category term="Developer Environment" /><category term="developer command prompt" /><category term="developer console" /><category term="powershell" /><summary type="html"><![CDATA[Lately, I’ve found myself setting up multiple computers, and with Microsoft DevBox on the horizon, I anticipate working with “fresh” machines more frequently. Like many developers, I thrive in a familiar environment with my preferred tools and settings, as muscle memory kicks in and I can efficiently tackle any task. Unfortunately, the process of setting up a new machine can be quite cumbersome. 
To address this challenge, I took matters into my own hands and developed a script that streamlines the entire setup process for me.]]></summary></entry><entry><title type="html">Removing unused dependencies with ReferenceTrimmer</title><link href="https://dfederm.com/removing-unused-dependencies-with-referencetrimmer/" rel="alternate" type="text/html" title="Removing unused dependencies with ReferenceTrimmer" /><published>2023-02-12T00:00:00+00:00</published><updated>2023-02-12T00:00:00+00:00</updated><id>https://dfederm.com/removing-unused-dependencies-with-referencetrimmer</id><content type="html" xml:base="https://dfederm.com/removing-unused-dependencies-with-referencetrimmer/"><![CDATA[<p>It’s <a href="/trimming-unnecessary-dependencies-from-projects/">been a while</a> since I first introduced <a href="https://github.com/dfederm/ReferenceTrimmer" target="_blank">ReferenceTrimmer</a> and a lot has changed.</p>

<p>For background, ReferenceTrimmer is a <a href="https://www.nuget.org/packages/ReferenceTrimmer" target="_blank">NuGet package</a> which helps identify unused dependencies which can be safely removed from your C# projects. Whether it’s old style <code class="language-plaintext highlighter-rouge">&lt;Reference&gt;</code>, other projects in your repository referenced via <code class="language-plaintext highlighter-rouge">&lt;ProjectReference&gt;</code>, or NuGet’s <code class="language-plaintext highlighter-rouge">&lt;PackageReference&gt;</code>, ReferenceTrimmer will help determine what isn’t required and simplify your dependency graph. This can lead to faster builds, smaller outputs, and better maintainability for your repository.</p>

<p>Most notably among the changes are that it’s now implemented as a combination of an <a href="https://learn.microsoft.com/en-us/visualstudio/msbuild/msbuild-tasks" target="_blank">MSBuild task</a> and a <a href="https://learn.microsoft.com/en-us/visualstudio/code-quality/roslyn-analyzers-overview" target="_blank">Roslyn analyzer</a> which seamlessly hook into your build process. A close second, and very related to the first, is that it uses the <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.codeanalysis.compilation.getusedassemblyreferences" target="_blank"><code class="language-plaintext highlighter-rouge">GetUsedAssemblyReferences</code></a> Roslyn API to determine exactly which references the compiler used during compilation.</p>

<h2 id="getting-started">Getting started</h2>

<p>Because of the implementation being in an MSBuild task and Roslyn analyzer, the bulk of the work to use ReferenceTrimmer is to simply add a <code class="language-plaintext highlighter-rouge">PackageReference</code> to the <a href="https://www.nuget.org/packages/ReferenceTrimmer" target="_blank">ReferenceTrimmer NuGet package</a>. That will automatically enable its logic as part of your build. It’s recommended to add this to your <code class="language-plaintext highlighter-rouge">Directory.Build.props</code> or <code class="language-plaintext highlighter-rouge">Directory.Build.targets</code>, or if you’re using NuGet’s <a href="https://learn.microsoft.com/en-us/nuget/consume-packages/Central-Package-Management" target="_blank">Central Package Management</a>, which I highly recommend, your <code class="language-plaintext highlighter-rouge">Directory.Packages.props</code> file at the root of your repo.</p>

<p>For better results, <a href="https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/style-rules/ide0005" target="_blank">IDE0005</a> (Remove unnecessary using directives) should also be enabled, and unfortunately to enable this rule you need to enable <a href="https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/xmldoc/" target="_blank">XML documentation comments</a> (xmldoc) due to <a href="https://github.com/dotnet/roslyn/issues/41640" target="_blank">dotnet/roslyn issue 41640</a>. Enabling xmldoc causes many new analyzers to kick in, which may surface violations that then need to be fixed or suppressed. To enable xmldoc, set the <code class="language-plaintext highlighter-rouge">&lt;GenerateDocumentationFile&gt;</code> property to <code class="language-plaintext highlighter-rouge">true</code>.</p>
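<p>Putting that together, a minimal setup in a <code class="language-plaintext highlighter-rouge">Directory.Build.props</code> could look like the following sketch (the version and the <code class="language-plaintext highlighter-rouge">PrivateAssets</code> attribute are illustrative; check the package page for current guidance):</p>

<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;Project&gt;
  &lt;PropertyGroup&gt;
    &lt;!-- Enables xmldoc, which IDE0005 requires to be reported at build time --&gt;
    &lt;GenerateDocumentationFile&gt;true&lt;/GenerateDocumentationFile&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;!-- Version is illustrative; PrivateAssets keeps the package out of consumers --&gt;
    &lt;PackageReference Include="ReferenceTrimmer" Version="3.*" PrivateAssets="all" /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre></div></div>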

<p>And that’s it, ReferenceTrimmer should now run as part of your build!</p>

<h2 id="how-it-works">How it works</h2>

<p>ReferenceTrimmer consists of two parts: an MSBuild task and a Roslyn analyzer.</p>

<p>The task is named <code class="language-plaintext highlighter-rouge">CollectDeclaredReferencesTask</code> and as you can guess, its job is to gather the declared references. It gathers the list of references passed to the compiler and associates each of them with the <code class="language-plaintext highlighter-rouge">&lt;Reference&gt;</code>, <code class="language-plaintext highlighter-rouge">&lt;ProjectReference&gt;</code>, or <code class="language-plaintext highlighter-rouge">&lt;PackageReference&gt;</code> from which they originate. It also filters out references which are unavoidable, such as implicitly defined references from the .NET SDK, as well as packages which contain build logic, since that may be the true purpose of that package as opposed to providing a referenced library.</p>

<p>This information from the task is dumped into a file <code class="language-plaintext highlighter-rouge">_ReferenceTrimmer_DeclaredReferences.json</code> under the project’s intermediate output folder (usually <code class="language-plaintext highlighter-rouge">obj\Debug</code> or <code class="language-plaintext highlighter-rouge">obj\Release</code>) and this path is added as a <code class="language-plaintext highlighter-rouge">AdditionalFiles</code> item to <a href="https://github.com/dotnet/roslyn/blob/main/docs/analyzers/Using%20Additional%20Files.md#in-a-project-file" target="_blank">pass it to the analyzer</a>.</p>

<p>Next, as part of compilation, the analyzer named <code class="language-plaintext highlighter-rouge">ReferenceTrimmerAnalyzer</code> will call the <code class="language-plaintext highlighter-rouge">GetUsedAssemblyReferences</code> API as previously mentioned to get the used references and compare them to the declared references provided by the task. Any declared references which are not used will cause a warning to be raised.</p>

<p>The warning code raised will depend on the originating reference type. It will be <code class="language-plaintext highlighter-rouge">RT0001</code> for <code class="language-plaintext highlighter-rouge">&lt;Reference&gt;</code> items, <code class="language-plaintext highlighter-rouge">RT0002</code> for <code class="language-plaintext highlighter-rouge">&lt;ProjectReference&gt;</code> items, or <code class="language-plaintext highlighter-rouge">RT0003</code> for <code class="language-plaintext highlighter-rouge">&lt;PackageReference&gt;</code> items. These are treated like any other compilation warning and so can be suppressed on a per-project basis with <code class="language-plaintext highlighter-rouge">&lt;NoWarn&gt;</code>. Additionally, ReferenceTrimmer can be disabled entirely for a project by setting <code class="language-plaintext highlighter-rouge">$(EnableReferenceTrimmer)</code> to false.</p>
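<p>For example, suppressing the package-reference rule or opting a project out entirely is a small project-file change:</p>

<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;PropertyGroup&gt;
  &lt;!-- Suppress only the unused PackageReference warnings --&gt;
  &lt;NoWarn&gt;$(NoWarn);RT0003&lt;/NoWarn&gt;

  &lt;!-- Or disable ReferenceTrimmer for this project entirely --&gt;
  &lt;EnableReferenceTrimmer&gt;false&lt;/EnableReferenceTrimmer&gt;
&lt;/PropertyGroup&gt;
</code></pre></div></div>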

<p>Note that because the warnings are raised as part of compilation, projects with other language types like C++ or even <a href="https://github.com/microsoft/MSBuildSdks/blob/main/src/NoTargets/README.md" target="_blank">NoTargets projects</a> will not cause warnings to be raised nor need to be explicitly excluded from ReferenceTrimmer.</p>

<h2 id="future-work">Future work</h2>

<p>Ideas for future improvement include:</p>

<ul>
  <li>Better identifying <code class="language-plaintext highlighter-rouge">&lt;ProjectReference&gt;</code> which are required for runtime only. Or at least documenting explicit guidance around how to reference those properly such that the compiler doesn’t “see” them but the outputs are still copied.</li>
  <li>Add the ability to exclude specific references from being warned on by adding some metadata to the reference. This would allow for more granular control rather than having to disable an entire rule for an entire project.</li>
  <li>Add support for C++ projects.</li>
  <li>Find and fix more edge-case bugs!</li>
</ul>

<p>Contributions and bug reports are always welcome on <a href="https://github.com/dfederm/ReferenceTrimmer" target="_blank">GitHub</a>, and I’m hopeful ReferenceTrimmer can be helpful in detangling your repos!</p>]]></content><author><name></name></author><category term=".NET" /><category term="MSBuild" /><category term=".NET" /><category term="dependencies" /><category term="msbuild" /><category term="references" /><summary type="html"><![CDATA[It’s been a while since I first introduced ReferenceTrimmer and a lot has changed.]]></summary></entry><entry><title type="html">Async Mutex</title><link href="https://dfederm.com/async-mutex/" rel="alternate" type="text/html" title="Async Mutex" /><published>2022-11-03T00:00:00+00:00</published><updated>2022-11-03T00:00:00+00:00</updated><id>https://dfederm.com/async-mutex</id><content type="html" xml:base="https://dfederm.com/async-mutex/"><![CDATA[<p>The <a href="https://learn.microsoft.com/en-us/dotnet/standard/threading/mutexes" target="_blank"><code class="language-plaintext highlighter-rouge">Mutex</code></a> class in .NET helps manage exclusive access to a resource. When given a name, this can even be done across processes which can be extremely handy.</p>

<p>Though if you’ve ever used a <code class="language-plaintext highlighter-rouge">Mutex</code> you may have found that it cannot be used in conjunction with <code class="language-plaintext highlighter-rouge">async</code>/<code class="language-plaintext highlighter-rouge">await</code>. More specifically, from the documentation:</p>

<blockquote>
  <p>Mutexes have thread affinity; that is, the mutex can be released only by the thread that owns it.</p>
</blockquote>

<p>This can make the <code class="language-plaintext highlighter-rouge">Mutex</code> class hard to use at times and may require use of ugliness like <code class="language-plaintext highlighter-rouge">GetAwaiter().GetResult()</code>.</p>

<p>For in-process synchronization, <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphoreslim" target="_blank"><code class="language-plaintext highlighter-rouge">SemaphoreSlim</code></a> can be a good choice as it has a <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphoreslim.waitasync" target="_blank"><code class="language-plaintext highlighter-rouge">WaitAsync()</code></a> method. However, semaphores aren’t ideal for managing exclusive access (<code class="language-plaintext highlighter-rouge">new SemaphoreSlim(1)</code> works but is less clear) and do not support system-wide synchronization, e.g. <code class="language-plaintext highlighter-rouge">new Mutex(initiallyOwned: false, @"Global\MyMutex")</code>.</p>
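
<p>As an aside, here’s what the in-process pattern can look like with <code class="language-plaintext highlighter-rouge">SemaphoreSlim</code>. This is just an illustrative sketch; the member names are made up:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// In-process only: at most one holder at a time
private static readonly SemaphoreSlim Gate = new(initialCount: 1, maxCount: 1);

public static async Task DoExclusiveWorkAsync(CancellationToken cancellationToken)
{
    await Gate.WaitAsync(cancellationToken);
    try
    {
        // Do async work while holding the semaphore...
        await Task.Delay(100, cancellationToken);
    }
    finally
    {
        Gate.Release();
    }
}
</code></pre></div></div>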

<p>Below I’ll explain how to implement an async mutex, but the full code can be found at the bottom or in the <a href="https://gist.github.com/dfederm/35c729f6218834b764fa04c219181e4e" target="_blank">Gist</a>.</p>

<p><strong>EDIT:</strong> Based on a bunch of feedback, it’s clear to me that I over-generalized this post. This implementation was <em>specifically</em> for synchronizing across processes, <strong>not</strong> within a process. The code below is absolutely not thread-safe. So think of this more as an “Async Global Mutex” and stick with <code class="language-plaintext highlighter-rouge">SemaphoreSlim</code> for synchronization across threads.</p>

<h2 id="how-to-use-a-mutex">How to use a Mutex</h2>

<p>First, some background on how to properly use a <code class="language-plaintext highlighter-rouge">Mutex</code>. The simplest example is:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Create the named system-wide mutex</span>
<span class="k">using</span> <span class="nn">Mutex</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="s">@"Global\MyMutex"</span><span class="p">);</span>

<span class="c1">// Acquire the Mutex</span>
<span class="n">mutex</span><span class="p">.</span><span class="nf">WaitOne</span><span class="p">();</span>

<span class="c1">// Do work...</span>

<span class="c1">// Release the Mutex</span>
<span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
</code></pre></div></div>

<p>As <code class="language-plaintext highlighter-rouge">Mutex</code> derives from <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.waithandle" target="_blank"><code class="language-plaintext highlighter-rouge">WaitHandle</code></a>, <code class="language-plaintext highlighter-rouge">WaitOne()</code> is the mechanism to acquire it.</p>

<p>However, if a <code class="language-plaintext highlighter-rouge">Mutex</code> is not properly released when the thread holding it exits, the next <code class="language-plaintext highlighter-rouge">WaitOne()</code> will throw an <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.abandonedmutexexception" target="_blank"><code class="language-plaintext highlighter-rouge">AbandonedMutexException</code></a>. The reason for this is explained in the documentation:</p>

<blockquote>
  <p>An abandoned mutex often indicates a serious error in the code. When a thread exits without releasing the mutex, the data structures protected by the mutex might not be in a consistent state. The next thread to request ownership of the mutex can handle this exception and proceed, if the integrity of the data structures can be verified.</p>
</blockquote>

<p>So the next thread to acquire the <code class="language-plaintext highlighter-rouge">Mutex</code> is responsible for verifying data integrity, if applicable. Note that a thread can exit without properly releasing the <code class="language-plaintext highlighter-rouge">Mutex</code> if, for example, the user kills the process, so <code class="language-plaintext highlighter-rouge">AbandonedMutexException</code> should always be caught when trying to acquire a <code class="language-plaintext highlighter-rouge">Mutex</code>.</p>

<p>With this our new example becomes:</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Create the named system-wide mutex</span>
<span class="k">using</span> <span class="nn">Mutex</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="s">@"Global\MyMutex"</span><span class="p">);</span>
<span class="k">try</span>
<span class="p">{</span>
    <span class="c1">// Acquire the Mutex</span>
    <span class="n">mutex</span><span class="p">.</span><span class="nf">WaitOne</span><span class="p">();</span>
<span class="p">}</span>
<span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
<span class="p">{</span>
    <span class="c1">// Abandoned by another process, we acquired it.</span>
<span class="p">}</span>

<span class="c1">// Do work...</span>

<span class="c1">// Release the Mutex</span>
<span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
</code></pre></div></div>

<p>However, what if the work we want to do while holding the <code class="language-plaintext highlighter-rouge">Mutex</code> is async?</p>

<h2 id="asyncmutex"><code class="language-plaintext highlighter-rouge">AsyncMutex</code></h2>
<p>First let’s define what we want the shape of the class to look like. We want to be able to acquire and release the mutex asynchronously, so the following seems reasonable:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">AsyncMutex</span> <span class="p">:</span> <span class="n">IAsyncDisposable</span>
<span class="p">{</span>
    <span class="k">public</span> <span class="nf">AsyncMutex</span><span class="p">(</span><span class="kt">string</span> <span class="n">name</span><span class="p">);</span>

    <span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">);</span>

    <span class="k">public</span> <span class="n">Task</span> <span class="nf">ReleaseAsync</span><span class="p">();</span>

    <span class="k">public</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">();</span>
<span class="p">}</span>
</code></pre></div></div>

<p>And so the intended usage would look like:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Create the named system-wide mutex</span>
<span class="k">await</span> <span class="k">using</span> <span class="nn">AsyncMutex</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span><span class="p">(</span><span class="s">@"Global\MyMutex"</span><span class="p">);</span>

<span class="c1">// Acquire the Mutex</span>
<span class="k">await</span> <span class="n">mutex</span><span class="p">.</span><span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>

<span class="c1">// Do async work...</span>

<span class="c1">// Release the Mutex</span>
<span class="k">await</span> <span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseAsync</span><span class="p">();</span>
</code></pre></div></div>
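
<p>In real code you’d likely want the release in a <code class="language-plaintext highlighter-rouge">finally</code> block so the mutex is released even if the work throws. A sketch of that usage:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>await using AsyncMutex mutex = new(@"Global\MyMutex");

await mutex.AcquireAsync(cancellationToken);
try
{
    // Do async work...
}
finally
{
    // Released even if the work above throws
    await mutex.ReleaseAsync();
}
</code></pre></div></div>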

<p>Now that we know what we want it to look like, we can start implementing.</p>

<h2 id="acquiring">Acquiring</h2>
<p>Because all <code class="language-plaintext highlighter-rouge">Mutex</code> operations must happen on a single thread, and because we want to return a <code class="language-plaintext highlighter-rouge">Task</code> so the mutex can be acquired asynchronously, we can start a new <code class="language-plaintext highlighter-rouge">Task</code> which owns the <code class="language-plaintext highlighter-rouge">Mutex</code> for its entire lifetime and return that.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
<span class="p">{</span>
    <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>

    <span class="c1">// Putting all mutex manipulation in its own task as it doesn't work in async contexts</span>
    <span class="c1">// Note: this task should not throw.</span>
    <span class="n">Task</span><span class="p">.</span><span class="n">Factory</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">(</span>
        <span class="n">state</span> <span class="p">=&gt;</span>
        <span class="p">{</span>
            <span class="k">try</span>
            <span class="p">{</span>
                <span class="k">using</span> <span class="nn">var</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Mutex</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="n">_name</span><span class="p">);</span>
                <span class="k">try</span>
                <span class="p">{</span>
                    <span class="c1">// Acquire the Mutex</span>
                    <span class="n">mutex</span><span class="p">.</span><span class="nf">WaitOne</span><span class="p">();</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="c1">// Abandoned by another process, we acquired it.</span>
                <span class="p">}</span>

                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetResult</span><span class="p">();</span>

                <span class="c1">// TODO: We need to release the mutex at some point</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
            <span class="p">}</span>
        <span class="p">},</span>
        <span class="n">state</span><span class="p">:</span> <span class="k">null</span><span class="p">,</span>
        <span class="n">cancellationToken</span><span class="p">,</span>
        <span class="n">TaskCreationOptions</span><span class="p">.</span><span class="n">LongRunning</span><span class="p">,</span>
        <span class="n">TaskScheduler</span><span class="p">.</span><span class="n">Default</span><span class="p">);</span>

    <span class="k">return</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<p>So now <code class="language-plaintext highlighter-rouge">AcquireAsync</code> returns a <code class="language-plaintext highlighter-rouge">Task</code> which doesn’t complete until the <code class="language-plaintext highlighter-rouge">Mutex</code> is acquired.</p>

<h2 id="releasing">Releasing</h2>
<p>At some point the code needs to release the <code class="language-plaintext highlighter-rouge">Mutex</code>. Because the mutex must be released on the same thread that acquired it, it must be released in the <code class="language-plaintext highlighter-rouge">Task</code> which <code class="language-plaintext highlighter-rouge">AcquireAsync</code> started. However, we don’t want to actually release the mutex until <code class="language-plaintext highlighter-rouge">ReleaseAsync</code> is called, so the <code class="language-plaintext highlighter-rouge">Task</code> needs to wait until then.</p>

<p>To accomplish this, we can use a <code class="language-plaintext highlighter-rouge">ManualResetEventSlim</code>: the <code class="language-plaintext highlighter-rouge">Task</code> waits on it after acquiring the mutex, and <code class="language-plaintext highlighter-rouge">ReleaseAsync</code> sets it.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="n">Task</span><span class="p">?</span> <span class="n">_mutexTask</span><span class="p">;</span>
<span class="k">private</span> <span class="n">ManualResetEventSlim</span><span class="p">?</span> <span class="n">_releaseEvent</span><span class="p">;</span>

<span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
<span class="p">{</span>
    <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>

    <span class="n">_releaseEvent</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ManualResetEventSlim</span><span class="p">();</span>

    <span class="c1">// Putting all mutex manipulation in its own task as it doesn't work in async contexts</span>
    <span class="c1">// Note: this task should not throw.</span>
    <span class="n">_mutexTask</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="n">Factory</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">(</span>
        <span class="n">state</span> <span class="p">=&gt;</span>
        <span class="p">{</span>
            <span class="k">try</span>
            <span class="p">{</span>
                <span class="k">using</span> <span class="nn">var</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Mutex</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="n">_name</span><span class="p">);</span>
                <span class="k">try</span>
                <span class="p">{</span>
                    <span class="c1">// Acquire the Mutex</span>
                    <span class="n">mutex</span><span class="p">.</span><span class="nf">WaitOne</span><span class="p">();</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="c1">// Abandoned by another process, we acquired it.</span>
                <span class="p">}</span>

                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetResult</span><span class="p">();</span>

                <span class="c1">// Wait until the release call</span>
                <span class="n">_releaseEvent</span><span class="p">.</span><span class="nf">Wait</span><span class="p">();</span>

                <span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
            <span class="p">}</span>
        <span class="p">},</span>
        <span class="n">state</span><span class="p">:</span> <span class="k">null</span><span class="p">,</span>
        <span class="n">cancellationToken</span><span class="p">,</span>
        <span class="n">TaskCreationOptions</span><span class="p">.</span><span class="n">LongRunning</span><span class="p">,</span>
        <span class="n">TaskScheduler</span><span class="p">.</span><span class="n">Default</span><span class="p">);</span>

    <span class="k">return</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
<span class="p">}</span>

<span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">ReleaseAsync</span><span class="p">()</span>
<span class="p">{</span>
    <span class="n">_releaseEvent</span><span class="p">?.</span><span class="nf">Set</span><span class="p">();</span>

    <span class="k">if</span> <span class="p">(</span><span class="n">_mutexTask</span> <span class="p">!=</span> <span class="k">null</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">await</span> <span class="n">_mutexTask</span><span class="p">;</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Now the <code class="language-plaintext highlighter-rouge">Task</code> will acquire the <code class="language-plaintext highlighter-rouge">Mutex</code>, then wait for a signal from the <code class="language-plaintext highlighter-rouge">ReleaseAsync</code> method to release the mutex.</p>

<p>Additionally, <code class="language-plaintext highlighter-rouge">ReleaseAsync</code> awaits the mutex <code class="language-plaintext highlighter-rouge">Task</code>, so the <code class="language-plaintext highlighter-rouge">Task</code> it returns does not complete until the mutex has actually been released.</p>

<h2 id="cancellation">Cancellation</h2>
<p>The caller may not want to wait forever for the mutex acquisition, so we need cancellation support. This is fairly straightforward since <code class="language-plaintext highlighter-rouge">Mutex</code> is a <code class="language-plaintext highlighter-rouge">WaitHandle</code> and <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.cancellationtoken" target="_blank"><code class="language-plaintext highlighter-rouge">CancellationToken</code></a> has a <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.cancellationtoken.waithandle" target="_blank"><code class="language-plaintext highlighter-rouge">WaitHandle</code></a> property, so we can use <code class="language-plaintext highlighter-rouge">WaitHandle.WaitAny()</code> to wait on both at once.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
<span class="p">{</span>
    <span class="n">cancellationToken</span><span class="p">.</span><span class="nf">ThrowIfCancellationRequested</span><span class="p">();</span>

    <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>

    <span class="n">_releaseEvent</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ManualResetEventSlim</span><span class="p">();</span>

    <span class="c1">// Putting all mutex manipulation in its own task as it doesn't work in async contexts</span>
    <span class="c1">// Note: this task should not throw.</span>
    <span class="n">_mutexTask</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="n">Factory</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">(</span>
        <span class="n">state</span> <span class="p">=&gt;</span>
        <span class="p">{</span>
            <span class="k">try</span>
            <span class="p">{</span>
                <span class="k">using</span> <span class="nn">var</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Mutex</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="n">_name</span><span class="p">);</span>
                <span class="k">try</span>
                <span class="p">{</span>
                    <span class="c1">// Wait for either the mutex to be acquired, or cancellation</span>
                    <span class="k">if</span> <span class="p">(</span><span class="n">WaitHandle</span><span class="p">.</span><span class="nf">WaitAny</span><span class="p">(</span><span class="k">new</span><span class="p">[]</span> <span class="p">{</span> <span class="n">mutex</span><span class="p">,</span> <span class="n">cancellationToken</span><span class="p">.</span><span class="n">WaitHandle</span> <span class="p">})</span> <span class="p">!=</span> <span class="m">0</span><span class="p">)</span>
                    <span class="p">{</span>
                        <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
                        <span class="k">return</span><span class="p">;</span>
                    <span class="p">}</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="c1">// Abandoned by another process, we acquired it.</span>
                <span class="p">}</span>

                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetResult</span><span class="p">();</span>

                <span class="c1">// Wait until the release call</span>
                <span class="n">_releaseEvent</span><span class="p">.</span><span class="nf">Wait</span><span class="p">();</span>

                <span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
            <span class="p">}</span>
        <span class="p">},</span>
        <span class="n">state</span><span class="p">:</span> <span class="k">null</span><span class="p">,</span>
        <span class="n">cancellationToken</span><span class="p">,</span>
        <span class="n">TaskCreationOptions</span><span class="p">.</span><span class="n">LongRunning</span><span class="p">,</span>
        <span class="n">TaskScheduler</span><span class="p">.</span><span class="n">Default</span><span class="p">);</span>

    <span class="k">return</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="disposal">Disposal</h2>
<p>To ensure the mutex gets released, we should implement disposal. Disposal should release the mutex if held, and it should also cancel any in-flight acquisition, which requires a linked cancellation token.</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">private</span> <span class="n">CancellationTokenSource</span><span class="p">?</span> <span class="n">_cancellationTokenSource</span><span class="p">;</span>

<span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
<span class="p">{</span>
    <span class="n">cancellationToken</span><span class="p">.</span><span class="nf">ThrowIfCancellationRequested</span><span class="p">();</span>

    <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>

    <span class="n">_releaseEvent</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ManualResetEventSlim</span><span class="p">();</span>
    <span class="n">_cancellationTokenSource</span> <span class="p">=</span> <span class="n">CancellationTokenSource</span><span class="p">.</span><span class="nf">CreateLinkedTokenSource</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>

    <span class="c1">// Putting all mutex manipulation in its own task as it doesn't work in async contexts</span>
    <span class="c1">// Note: this task should not throw.</span>
    <span class="n">_mutexTask</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="n">Factory</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">(</span>
        <span class="n">state</span> <span class="p">=&gt;</span>
        <span class="p">{</span>
            <span class="k">try</span>
            <span class="p">{</span>
                <span class="c1">// Named differently from the method parameter to avoid shadowing it</span>
                <span class="n">CancellationToken</span> <span class="n">linkedToken</span> <span class="p">=</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">;</span>
                <span class="k">using</span> <span class="nn">var</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Mutex</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="n">_name</span><span class="p">);</span>
                <span class="k">try</span>
                <span class="p">{</span>
                    <span class="c1">// Wait for either the mutex to be acquired, or cancellation</span>
                    <span class="k">if</span> <span class="p">(</span><span class="n">WaitHandle</span><span class="p">.</span><span class="nf">WaitAny</span><span class="p">(</span><span class="k">new</span><span class="p">[]</span> <span class="p">{</span> <span class="n">mutex</span><span class="p">,</span> <span class="n">linkedToken</span><span class="p">.</span><span class="n">WaitHandle</span> <span class="p">})</span> <span class="p">!=</span> <span class="m">0</span><span class="p">)</span>
                    <span class="p">{</span>
                        <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetCanceled</span><span class="p">(</span><span class="n">linkedToken</span><span class="p">);</span>
                        <span class="k">return</span><span class="p">;</span>
                    <span class="p">}</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="c1">// Abandoned by another process, we acquired it.</span>
                <span class="p">}</span>

                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetResult</span><span class="p">();</span>

                <span class="c1">// Wait until the release call</span>
                <span class="n">_releaseEvent</span><span class="p">.</span><span class="nf">Wait</span><span class="p">();</span>

                <span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
            <span class="p">}</span>
            <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
            <span class="p">{</span>
                <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
            <span class="p">}</span>
        <span class="p">},</span>
        <span class="n">state</span><span class="p">:</span> <span class="k">null</span><span class="p">,</span>
        <span class="n">cancellationToken</span><span class="p">,</span>
        <span class="n">TaskCreationOptions</span><span class="p">.</span><span class="n">LongRunning</span><span class="p">,</span>
        <span class="n">TaskScheduler</span><span class="p">.</span><span class="n">Default</span><span class="p">);</span>

    <span class="k">return</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
<span class="p">}</span>

<span class="k">public</span> <span class="k">async</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
<span class="p">{</span>
    <span class="c1">// Ensure the mutex task stops waiting for any acquire</span>
    <span class="n">_cancellationTokenSource</span><span class="p">?.</span><span class="nf">Cancel</span><span class="p">();</span>

    <span class="c1">// Ensure the mutex is released</span>
    <span class="k">await</span> <span class="nf">ReleaseAsync</span><span class="p">();</span>

    <span class="n">_releaseEvent</span><span class="p">?.</span><span class="nf">Dispose</span><span class="p">();</span>
    <span class="n">_cancellationTokenSource</span><span class="p">?.</span><span class="nf">Dispose</span><span class="p">();</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="conclusion">Conclusion</h2>
<p><code class="language-plaintext highlighter-rouge">AsyncMutex</code> allows a named, system-wide <code class="language-plaintext highlighter-rouge">Mutex</code> to be used with <code class="language-plaintext highlighter-rouge">async</code>/<code class="language-plaintext highlighter-rouge">await</code>.</p>
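
<p>For example, acquisition can be bounded with a timeout by passing a token from a <code class="language-plaintext highlighter-rouge">CancellationTokenSource</code>. A usage sketch:</p>

<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code>await using AsyncMutex mutex = new(@"Global\MyMutex");

// Give up if the mutex can't be acquired within 10 seconds
using CancellationTokenSource cts = new(TimeSpan.FromSeconds(10));
try
{
    await mutex.AcquireAsync(cts.Token);
}
catch (OperationCanceledException)
{
    // Timed out waiting for the mutex
    return;
}

// Do async work...

await mutex.ReleaseAsync();
</code></pre></div></div>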

<p>Putting the whole thing together (or view the <a href="https://gist.github.com/dfederm/35c729f6218834b764fa04c219181e4e" target="_blank">Gist</a>):</p>
<div class="language-cs highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">sealed</span> <span class="k">class</span> <span class="nc">AsyncMutex</span> <span class="p">:</span> <span class="n">IAsyncDisposable</span>
<span class="p">{</span>
    <span class="k">private</span> <span class="k">readonly</span> <span class="kt">string</span> <span class="n">_name</span><span class="p">;</span>
    <span class="k">private</span> <span class="n">Task</span><span class="p">?</span> <span class="n">_mutexTask</span><span class="p">;</span>
    <span class="k">private</span> <span class="n">ManualResetEventSlim</span><span class="p">?</span> <span class="n">_releaseEvent</span><span class="p">;</span>
    <span class="k">private</span> <span class="n">CancellationTokenSource</span><span class="p">?</span> <span class="n">_cancellationTokenSource</span><span class="p">;</span>

    <span class="k">public</span> <span class="nf">AsyncMutex</span><span class="p">(</span><span class="kt">string</span> <span class="n">name</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">_name</span> <span class="p">=</span> <span class="n">name</span><span class="p">;</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="n">Task</span> <span class="nf">AcquireAsync</span><span class="p">(</span><span class="n">CancellationToken</span> <span class="n">cancellationToken</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="n">cancellationToken</span><span class="p">.</span><span class="nf">ThrowIfCancellationRequested</span><span class="p">();</span>

        <span class="n">TaskCompletionSource</span> <span class="n">taskCompletionSource</span> <span class="p">=</span> <span class="k">new</span><span class="p">();</span>

        <span class="n">_releaseEvent</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ManualResetEventSlim</span><span class="p">();</span>
        <span class="n">_cancellationTokenSource</span> <span class="p">=</span> <span class="n">CancellationTokenSource</span><span class="p">.</span><span class="nf">CreateLinkedTokenSource</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>

        <span class="c1">// Putting all mutex manipulation in its own task as it doesn't work in async contexts</span>
        <span class="c1">// Note: this task should not throw.</span>
        <span class="n">_mutexTask</span> <span class="p">=</span> <span class="n">Task</span><span class="p">.</span><span class="n">Factory</span><span class="p">.</span><span class="nf">StartNew</span><span class="p">(</span>
            <span class="n">state</span> <span class="p">=&gt;</span>
            <span class="p">{</span>
                <span class="k">try</span>
                <span class="p">{</span>
                    <span class="n">CancellationToken</span> <span class="n">cancellationToken</span> <span class="p">=</span> <span class="n">_cancellationTokenSource</span><span class="p">.</span><span class="n">Token</span><span class="p">;</span>
                    <span class="k">using</span> <span class="nn">var</span> <span class="n">mutex</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Mutex</span><span class="p">(</span><span class="k">false</span><span class="p">,</span> <span class="n">_name</span><span class="p">);</span>
                    <span class="k">try</span>
                    <span class="p">{</span>
                        <span class="c1">// Wait for either the mutex to be acquired, or cancellation</span>
                        <span class="k">if</span> <span class="p">(</span><span class="n">WaitHandle</span><span class="p">.</span><span class="nf">WaitAny</span><span class="p">(</span><span class="k">new</span><span class="p">[]</span> <span class="p">{</span> <span class="n">mutex</span><span class="p">,</span> <span class="n">cancellationToken</span><span class="p">.</span><span class="n">WaitHandle</span> <span class="p">})</span> <span class="p">!=</span> <span class="m">0</span><span class="p">)</span>
                        <span class="p">{</span>
                            <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
                            <span class="k">return</span><span class="p">;</span>
                        <span class="p">}</span>
                    <span class="p">}</span>
                    <span class="k">catch</span> <span class="p">(</span><span class="n">AbandonedMutexException</span><span class="p">)</span>
                    <span class="p">{</span>
                        <span class="c1">// Abandoned by another process, we acquired it.</span>
                    <span class="p">}</span>

                    <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">SetResult</span><span class="p">();</span>

                    <span class="c1">// Wait until the release call</span>
                    <span class="n">_releaseEvent</span><span class="p">.</span><span class="nf">Wait</span><span class="p">();</span>

                    <span class="n">mutex</span><span class="p">.</span><span class="nf">ReleaseMutex</span><span class="p">();</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">OperationCanceledException</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetCanceled</span><span class="p">(</span><span class="n">cancellationToken</span><span class="p">);</span>
                <span class="p">}</span>
                <span class="k">catch</span> <span class="p">(</span><span class="n">Exception</span> <span class="n">ex</span><span class="p">)</span>
                <span class="p">{</span>
                    <span class="n">taskCompletionSource</span><span class="p">.</span><span class="nf">TrySetException</span><span class="p">(</span><span class="n">ex</span><span class="p">);</span>
                <span class="p">}</span>
            <span class="p">},</span>
            <span class="n">state</span><span class="p">:</span> <span class="k">null</span><span class="p">,</span>
            <span class="n">cancellationToken</span><span class="p">,</span>
            <span class="n">TaskCreationOptions</span><span class="p">.</span><span class="n">LongRunning</span><span class="p">,</span>
            <span class="n">TaskScheduler</span><span class="p">.</span><span class="n">Default</span><span class="p">);</span>

        <span class="k">return</span> <span class="n">taskCompletionSource</span><span class="p">.</span><span class="n">Task</span><span class="p">;</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">ReleaseAsync</span><span class="p">()</span>
    <span class="p">{</span>
        <span class="n">_releaseEvent</span><span class="p">?.</span><span class="nf">Set</span><span class="p">();</span>

        <span class="k">if</span> <span class="p">(</span><span class="n">_mutexTask</span> <span class="p">!=</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">await</span> <span class="n">_mutexTask</span><span class="p">;</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="k">public</span> <span class="k">async</span> <span class="n">ValueTask</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
    <span class="p">{</span>
        <span class="c1">// Ensure the mutex task stops waiting for any acquire</span>
        <span class="n">_cancellationTokenSource</span><span class="p">?.</span><span class="nf">Cancel</span><span class="p">();</span>

        <span class="c1">// Ensure the mutex is released</span>
        <span class="k">await</span> <span class="nf">ReleaseAsync</span><span class="p">();</span>

        <span class="n">_releaseEvent</span><span class="p">?.</span><span class="nf">Dispose</span><span class="p">();</span>
        <span class="n">_cancellationTokenSource</span><span class="p">?.</span><span class="nf">Dispose</span><span class="p">();</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>]]></content><author><name></name></author><category term=".NET" /><summary type="html"><![CDATA[The Mutex class in .NET helps manage exclusive access to a resource. When given a name, this can even be done across processes which can be extremely handy.]]></summary></entry></feed>