<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://jacobvanorder.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jacobvanorder.github.io/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-02-25T01:52:15+00:00</updated><id>https://jacobvanorder.github.io/feed.xml</id><title type="html">It’s Just Text Files, People</title><subtitle>Jacob&apos;s Learnings While Developing</subtitle><author><name>Jacob</name></author><entry><title type="html">UIKit Gestures in a SwiftUI Context</title><link href="https://jacobvanorder.github.io/uikit-gestures-in-a-swiftui-context/" rel="alternate" type="text/html" title="UIKit Gestures in a SwiftUI Context" /><published>2026-02-20T15:54:00+00:00</published><updated>2026-02-20T15:54:00+00:00</updated><id>https://jacobvanorder.github.io/uikit-gestures-in-a-swiftui-context</id><content type="html" xml:base="https://jacobvanorder.github.io/uikit-gestures-in-a-swiftui-context/"><![CDATA[<p>The <a href="https://en.wikipedia.org/wiki/Multi-touch">multi-touch</a> capacitive touch interface is so inherently crucial to the entire iPhone, iPad, and Apple Watch experience that it was one of the key parts of the keynote.</p>

<p><img src="/assets/images/2026-02-20-uikit-gestures-in-a-swiftui-context/iphone-multi-touch-slide.jpg" alt="a slide from the 2007 iphone reveal" /></p>

<p>Fast-forward nearly 20 years and SwiftUI is the preferred way to build interfaces on iOS devices. Yet, despite being nearly seven years old, it still can’t match the level of capability that shipped in iPhoneOS 3.2 way back in 2010.</p>

<p>In this entry, I’ll be talking about the difference between UIKit’s <code class="language-plaintext highlighter-rouge">UIGestureRecognizer</code> and its concrete subclasses, SwiftUI’s <code class="language-plaintext highlighter-rouge">Gesture</code>, and the bridge between them: <code class="language-plaintext highlighter-rouge">UIGestureRecognizerRepresentable</code>. At the end, I’ll introduce a Swift Package Manager library and sample app that I wrote to help bridge the gap.</p>

<h2 id="uigesturerecognizer">UIGestureRecognizer</h2>

<p>Before iPhoneOS 3.2, developers had to manually keep track of the state of touches by utilizing the functions on <code class="language-plaintext highlighter-rouge">UIView</code> around touches began, moved, ended, and cancelled. To address this, Apple introduced <code class="language-plaintext highlighter-rouge">UIGestureRecognizer</code> with concrete subclasses for specific, common behaviors: tapping, panning, swiping, rotation, etc. These were an absolute godsend, given that keeping track of state, doing the associated math, and reusing the code had been a nightmare.</p>

<p>As a high-level overview, gestures follow a typical target/action pattern but also mix in delegation for whether a gesture should begin, which gestures to block or interact with, and whether the gesture should receive a touch. Additionally, when the action is triggered, you are passed the gesture itself, which carries its current location as well as its state: possible, began, changed, ended, cancelled, or failed.</p>
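
<p>As a rough sketch (the controller and handler names here are mine, purely for illustration), the target/action and delegation pieces look something like this:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>final class PanViewController: UIViewController, UIGestureRecognizerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        pan.delegate = self // optionally gate beginning, simultaneity, and touch delivery
        view.addGestureRecognizer(pan)
    }

    // The action is handed the recognizer itself, so location and state come along for free.
    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        let location = gesture.location(in: view)
        switch gesture.state {
        case .began:
            print("began at \(location)")
        case .changed:
            print("moved to \(location)")
        case .ended, .cancelled, .failed:
            print("finished")
        default:
            break
        }
    }

    // One of the delegate hooks mentioned above.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -&gt; Bool {
        true
    }
}
</code></pre></div></div>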

<p>As someone who has utilized this code multiple times in various ways, it is one of my favorite APIs in that it works consistently, is easy to understand, and abstracts the right amount of complexity away from the day-to-day development cycle. Plus, if you want to drop down to a more complex use case, you can make your own <code class="language-plaintext highlighter-rouge">UIGestureRecognizer</code> subclass or fall back to touches began, moved, ended, and cancelled.</p>

<h2 id="swiftui-gesture">SwiftUI Gesture</h2>

<p>As with some SwiftUI views (<code class="language-plaintext highlighter-rouge">ScrollView</code> comes to mind), SwiftUI’s gestures offer only a subset of UIKit’s functionality.</p>

<p>Let’s take a look at <code class="language-plaintext highlighter-rouge">DragGesture</code>. Its counterpart in UIKit is <code class="language-plaintext highlighter-rouge">UIPanGestureRecognizer</code>. Because <code class="language-plaintext highlighter-rouge">DragGesture</code> is continuous, it has <code class="language-plaintext highlighter-rouge">.onChanged</code> and <code class="language-plaintext highlighter-rouge">.onEnded</code> modifiers, but even right there we see a limitation: what about <code class="language-plaintext highlighter-rouge">.onBegan</code>? Additionally, velocity and the ability to require more than one touch are missing.</p>
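
<p>As a minimal sketch (the view here is my own invention), the drag API looks like this, with “began” having to be inferred from the first <code class="language-plaintext highlighter-rouge">.onChanged</code> call:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct DraggableCircle: View {
    @State private var offset: CGSize = .zero
    @State private var isDragging = false

    var body: some View {
        Circle()
            .frame(width: 80, height: 80)
            .offset(offset)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        isDragging = true // stands in for the missing "began" state
                        offset = value.translation
                    }
                    .onEnded { _ in
                        isDragging = false
                        offset = .zero
                    }
            )
    }
}
</code></pre></div></div>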

<p>It is true that there is shared functionality, like the ability to run gestures simultaneously and to get the touch’s location and translation within a view.</p>

<p>Regardless, there are gaps, and that means that when converting a UIKit app to SwiftUI, you’re going to run into them.</p>

<h2 id="uigesturerecognizerrepresentable"><code class="language-plaintext highlighter-rouge">UIGestureRecognizerRepresentable</code></h2>

<p>If you were a 4-trillion-dollar company, what would you do? Pour resources into bringing your new technology to feature parity with your old technology? How about adding a bridge instead!</p>

<p>That’s what <code class="language-plaintext highlighter-rouge">UIGestureRecognizerRepresentable</code> is. Just like <code class="language-plaintext highlighter-rouge">UIViewRepresentable</code> and <code class="language-plaintext highlighter-rouge">UIViewControllerRepresentable</code>, this is a bridge that allows you to interop UIKit code with SwiftUI. Just like the aforementioned representables, <code class="language-plaintext highlighter-rouge">UIGestureRecognizerRepresentable</code> has a few specific methods to manage the lifecycle of the UIKit gesture within a SwiftUI view:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">makeUIGestureRecognizer(context:)</code>: where you instantiate your gesture (e.g., a <code class="language-plaintext highlighter-rouge">UIPanGestureRecognizer</code>).</li>
  <li><code class="language-plaintext highlighter-rouge">updateUIGestureRecognizer(_:context:)</code>: when the SwiftUI state changes, this is where you sync properties, like the number of touches, from your SwiftUI view to the UIKit gesture.</li>
  <li><code class="language-plaintext highlighter-rouge">Coordinator</code>: you can optionally use a coordinator to handle delegation or other target/action events.</li>
</ul>
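
<p>As a bare skeleton of those requirements (the pan-specific details are my illustration, not code from any sample):</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct PanRepresentable: UIGestureRecognizerRepresentable {
    var minimumTouches: Int = 1

    func makeCoordinator(converter: CoordinateSpaceConverter) -&gt; Coordinator {
        Coordinator()
    }

    func makeUIGestureRecognizer(context: Context) -&gt; UIPanGestureRecognizer {
        let pan = UIPanGestureRecognizer()
        pan.delegate = context.coordinator
        return pan
    }

    func updateUIGestureRecognizer(_ recognizer: UIPanGestureRecognizer, context: Context) {
        // Sync SwiftUI state down to the UIKit gesture.
        recognizer.minimumNumberOfTouches = minimumTouches
    }

    func handleUIGestureRecognizerAction(_ recognizer: UIPanGestureRecognizer, context: Context) {
        // Target/action events land here; read state and location off the recognizer.
    }

    final class Coordinator: NSObject, UIGestureRecognizerDelegate {
        func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                               shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -&gt; Bool {
            true
        }
    }
}
</code></pre></div></div>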

<p>You can read more in the docs <a href="https://developer.apple.com/documentation/swiftui/uigesturerecognizerrepresentable">here</a>.</p>

<p>I am not going to write <em>exactly</em> how this is used (you’ll see why in a moment) but <a href="https://swiftwithmajid.com/2024/12/17/introducing-uigesturerecognizerrepresentable-protocol-in-swiftui/">here</a> is a good overview by Swift With Majid.</p>

<p>This is <em>way</em> better than what people were doing before, which was using <code class="language-plaintext highlighter-rouge">UIViewRepresentable</code>, attaching UIKit gestures to <em>that</em>, and overlaying the representable view over a SwiftUI view.</p>

<h3 id="theres-always-a-catch">There’s Always a Catch</h3>

<p>Using these bridged UIKit gestures is as easy as applying the <code class="language-plaintext highlighter-rouge">func gesture(_ representable: some UIGestureRecognizerRepresentable) -&gt; some View</code> modifier, but you’ll notice that the argument isn’t a <code class="language-plaintext highlighter-rouge">Gesture</code> but a <code class="language-plaintext highlighter-rouge">UIGestureRecognizerRepresentable</code>.</p>

<p>The catch is that you can’t use <code class="language-plaintext highlighter-rouge">func simultaneousGesture&lt;T&gt;(_ gesture: T, including mask: GestureMask = .all) -&gt; some View where T : Gesture</code> because the representable <em>isn’t</em> a <code class="language-plaintext highlighter-rouge">Gesture</code>. The same goes for anything else mentioned in <a href="https://developer.apple.com/documentation/swiftui/composing-swiftui-gestures">this article</a> about composing gestures.</p>

<p>You <em>could</em> possibly use the <code class="language-plaintext highlighter-rouge">UIGestureRecognizerDelegate</code> method <code class="language-plaintext highlighter-rouge">func gestureRecognizer(UIGestureRecognizer, shouldRecognizeSimultaneouslyWith: UIGestureRecognizer) -&gt; Bool</code>, but when you do, you get this unhelpful type:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;SwiftUI.UIKitResponderGestureRecognizer: 0x10560bb00; id = 95; baseClass = UIGestureRecognizer; state = Possible; delaysTouchesEnded = NO; view = &lt;_TtCGC7SwiftUI32NavigationStackHostingControllerVS_7AnyView_P10$1dadfd8f011HostingView: 0x105605940&gt;; target= &lt;(action=flushActions, target=&lt;SwiftUI.UIKitResponderEventBindingBridge 0x600000cb34b0&gt;)&gt;&gt;
</code></pre></div></div>

<p>So, you just have to use the modifier you get and hope for the best, which is not ideal, but it’s better than nothing!</p>

<h2 id="uikitgesturesforswiftui">UIKitGesturesForSwiftUI</h2>

<p>While exploring this at my J.O.B., I thought that perhaps there could be a library that wraps each <code class="language-plaintext highlighter-rouge">UIGestureRecognizer</code> subclass so that you don’t have to write all of that boilerplate yourself, and so that’s what I did. <a href="https://github.com/jacobvanorder/UIKitGesturesForSwiftUI">You can find the Swift Package Manager package here</a>.</p>

<p>I also wrote a companion app which utilizes the library that you can <a href="https://github.com/jacobvanorder/UIKitGesturesForSwiftUIPlayground">find here</a>.</p>

<p>I’d love it if you’d check it out and, if you have any feedback, let me know on Mastodon <a href="https://mastodon.social/@jacobvo">here</a>.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[The multi-touch capacitive touch interface is so inherently crucial to the entire iPhone, iPad, and Apple Watch experience that it was one of the key parts of the keynote. Fast-forward nearly 20 years and SwiftUI is the preferred way to build interfaces on iOS devices but, despite being nearly 7 years old, we are not to a point where we can accomplish the same level of capability that was released in iPhoneOS 3.2 way back in 2010. In this entry, I’ll be talking about the difference between UIKit’s UIGestureRecognizer and concrete subclasses, SwiftUI’s Gesture, and the bridge between with UIGestureRecognizerRepresentable. At the end, I’ll introduce a Swift Package Manager library and sample app that I wrote to help bridge the gap. UIGestureRecognizer Before iPhoneOS 3.2, developers had to manually keep track of the state of touches by utilizing the functions on UIView around touches began, moved, ended, and cancelled. As a result, Apple introduced UIGestureRecognizer with concrete subclasses for specific, common behaviors: tapping, panning, swiping, rotation, etc…. These were absolutely a godsend given the load of keeping track of state, doing associated math, and reusing the code was a nightmare. As a high overview, the gestures follow a typical target/action pattern but also have delegation in the mix for whether to a gesture should begin, which gestures to block or interact with, and if the gesture should receive a touch. Additionally, when the action is triggered, you are passed the gesture which contains its current location as well as the state such as possible, started, changed, ended, cancelled, and failed. 
As someone who has utilized the code multiple times in various ways, it is one of my favorite APIs in that it works consistently, is easy to understand, and abstracts the right amount of complexity away from the day-to-day development cycle. Plus, if you want to drop down to a more complex use-case, you can make your own UIGestureRecognizer subclass or fall back to touches began, changed, ended, or failed. SwiftUI Gesture As with some SwiftUI views (ScrollView comes to mind), there is definitely a subset of functionality within SwiftUI’s gestures when compared to UIKit’s. Let’s take a look at DragGesture. It’s counterpart in UIKit is UIPanGestureRecognizer. Because DragGesture in continuous, it has a .onChanged and .onEnded modifier but even right there, we see a limitation: what about .onBegan? Additionally, velocity and the ability to set more than one touch is missing. It is true that there is shared functionality like ability to set a gesture simultaneously and getting the touch’s location and translation within a view. Regardless, there are gaps and this means that when converting a UIKit app to SwiftUI, you’re going to run into these. UIGestureRecognizerRepresentable If you were a 4 trillion dollar company, what would you do? Pour resources into making your new technology at feature parity with your old technology? How about you add a bridge! That’s what UIGestureRecognizerRepresentable is. Just like UIViewRepresentable and UIViewControllerRepresentable, this is a bridge that allows you to interop UIKit code with SwiftUI. Just like the aforementioned representables, UIGestureRecognizerRepresentable has a few specific methods to manage the lifecycle of the UIKit gesture within a SwiftUI view: makeUIGestureRecognizer(context:):; where you instantiate your gesture (e.g., a UIPanGestureRecognizer). 
updateUIGestureRecognizer(_:context:):; when the SwiftUI state changes, you need to sync properties, like number of touches, from your SwiftUI view to the UIKit gesture. Coordinator: You can optionally use a coordinator to handle delegation or other target-action events You can read more in the docs here. I am not going to write exactly how this is used (you’ll see why in a moment) but here is a good overview by Swift With Majid. This is way better than what people were doing which is using UIViewRepresentable, attaching UIKit gestures to that and overlaying the representable view over a SwiftUI view. There’s Always a Catch When usings these bridged UIKit gestures, it is as easy as using func gesture(_ representable: some UIGestureRecognizerRepresentable) -&gt; some View modifier but you’ll notice that the argument isn’t Gesture but UIGestureRecognizerRepresentable. The catch is that you can’t use func simultaneousGesture&lt;T&gt;(_ gesture: T, including mask: GestureMask = .all) -&gt; some View where T : Gesture because T isn’t Gesture. Same for func simultaneousGesture&lt;T&gt;(T, including: GestureMask) -&gt; some View and anything else mentioned in this article about composing gestures. You could possibly use the UIGestureRecognizerDelegate method of func gestureRecognizer(UIGestureRecognizer, shouldRecognizeSimultaneouslyWith: UIGestureRecognizer) -&gt; Bool but when you do, you get this unhelpful type: &lt;SwiftUI.UIKitResponderGestureRecognizer: 0x10560bb00; id = 95; baseClass = UIGestureRecognizer; state = Possible; delaysTouchesEnded = NO; view = &lt;_TtCGC7SwiftUI32NavigationStackHostingControllerVS_7AnyView_P10$1dadfd8f011HostingView: 0x105605940&gt;; target= &lt;(action=flushActions, target=&lt;SwiftUI.UIKitResponderEventBindingBridge 0x600000cb34b0&gt;)&gt;&gt; So, you just have to use the modifier you get and hope for the best which is not ideal but it’s better than nothing! 
UIKitGesturesForSwiftUI While I explore this at my J.O.B., I thought that perhaps there could be a library that is a wrapper for each UIGestureRecognizer subclass so that you don’t have to write all of that boilerplate yourself and so that’s what I did. You can find a Swift Package Manager package here. I also wrote a companion app which utilizes the library that you can find here. I’d love it if you’d check it out and, if you have any feedback, let me know on Mastodon here.]]></summary></entry><entry><title type="html">Structured Concurrency Conversion (Part 5)</title><link href="https://jacobvanorder.github.io/structured-concurrency-conversion-part-5/" rel="alternate" type="text/html" title="Structured Concurrency Conversion (Part 5)" /><published>2025-08-08T16:52:00+00:00</published><updated>2025-08-08T16:52:00+00:00</updated><id>https://jacobvanorder.github.io/structured-concurrency-conversion-part-5</id><content type="html" xml:base="https://jacobvanorder.github.io/structured-concurrency-conversion-part-5/"><![CDATA[<p>Part 5! Just as a recap: <a href="/structured-concurrency-conversion-part-1">part 1</a>, I laid out the code structure for Apple’s <a href="https://developer.apple.com/documentation/uikit/asynchronously-loading-images-into-table-and-collection-views">Async Image Loading</a> sample code. In <a href="/structured-concurrency-conversion-part-2">part 2</a>, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await. In <a href="/structured-concurrency-conversion-part-3">part 3</a> we converted the <code class="language-plaintext highlighter-rouge">ImageCache</code>, responsible for fetching and caching image data to an <code class="language-plaintext highlighter-rouge">Actor</code> in order to safely use and mutate across contexts. 
<a href="/structured-concurrency-conversion-part-4">Part 4</a> featured me getting rid of the <code class="language-plaintext highlighter-rouge">URLProtocol</code> subclass in favor of the Repository Pattern, which chooses between mock and live data vended using Structured Concurrency.</p>

<h2 id="swiftui">SwiftUI</h2>

<p>Introduced in 2019, SwiftUI is the “best way to build apps for Apple platforms” <a href="https://developer.apple.com/news/?id=zqzlvxlm">according</a> to Apple. So, let’s utilize our new, modern version of the image loading in SwiftUI.</p>

<p>I merged the commit into <code class="language-plaintext highlighter-rouge">main</code> and you can follow along <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/SwiftUIView.swift">here</a>.</p>

<h3 id="implementation">Implementation</h3>

<p>Because we want the image to only load when it is on screen, we will use a <code class="language-plaintext highlighter-rouge">LazyVGrid</code> with five columns.</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="k">var</span> <span class="nv">body</span><span class="p">:</span> <span class="kd">some</span> <span class="kt">View</span> <span class="p">{</span>
        <span class="kt">ScrollView</span> <span class="p">{</span>
            <span class="kt">LazyVGrid</span><span class="p">(</span><span class="nv">columns</span><span class="p">:</span> <span class="n">columns</span><span class="p">,</span> <span class="nv">spacing</span><span class="p">:</span> <span class="mi">0</span><span class="p">)</span> <span class="p">{</span>
                <span class="kt">ForEach</span><span class="p">(</span><span class="n">items</span><span class="p">)</span> <span class="p">{</span> <span class="n">item</span> <span class="k">in</span>
                    <span class="kt">ItemView</span><span class="p">(</span><span class="nv">item</span><span class="p">:</span> <span class="n">item</span><span class="p">)</span>
                <span class="p">}</span>
            <span class="p">}</span>
        <span class="p">}</span>
    <span class="p">}</span>
</code></pre></div></div>

<p>Nothing too wild here. Instead of using <code class="language-plaintext highlighter-rouge">AsyncImage</code>, let’s write our own <code class="language-plaintext highlighter-rouge">ItemView</code> that uses our <code class="language-plaintext highlighter-rouge">ImageCacheActor</code>.</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="kd">private</span> <span class="kd">struct</span> <span class="kt">ItemView</span><span class="p">:</span> <span class="kt">View</span> <span class="p">{</span>
        <span class="kd">static</span> <span class="k">let</span> <span class="nv">imageCacheActor</span><span class="p">:</span> <span class="kt">ImageCacheActor</span> <span class="o">=</span> <span class="kt">ImageCacheActor</span><span class="o">.</span><span class="n">publicCache</span>
        <span class="k">let</span> <span class="nv">item</span><span class="p">:</span> <span class="kt">Item</span>
        <span class="kd">@State</span> <span class="kd">private</span> <span class="k">var</span> <span class="nv">image</span><span class="p">:</span> <span class="kt">UIImage</span><span class="p">?</span>

        <span class="k">var</span> <span class="nv">body</span><span class="p">:</span> <span class="kd">some</span> <span class="kt">View</span> <span class="p">{</span>
            <span class="k">if</span> <span class="k">let</span> <span class="nv">image</span> <span class="p">{</span>
                <span class="kt">Image</span><span class="p">(</span><span class="nv">uiImage</span><span class="p">:</span> <span class="n">image</span><span class="p">)</span>
                    <span class="o">.</span><span class="nf">resizable</span><span class="p">()</span>
                    <span class="o">.</span><span class="nf">aspectRatio</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="nv">contentMode</span><span class="p">:</span> <span class="o">.</span><span class="n">fit</span><span class="p">)</span>
            <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
                <span class="kt">Image</span><span class="p">(</span><span class="nv">uiImage</span><span class="p">:</span> <span class="kt">ImageCacheActor</span><span class="o">.</span><span class="n">placeholderImage</span><span class="p">)</span>
                    <span class="o">.</span><span class="nf">resizable</span><span class="p">()</span>
                    <span class="o">.</span><span class="nf">aspectRatio</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="nv">contentMode</span><span class="p">:</span> <span class="o">.</span><span class="n">fill</span><span class="p">)</span>
                    <span class="o">.</span><span class="nf">scaleEffect</span><span class="p">(</span><span class="mf">0.5</span><span class="p">)</span>
                    <span class="o">.</span><span class="n">task</span> <span class="p">{</span>
                        <span class="k">do</span> <span class="p">{</span>
                            <span class="k">self</span><span class="o">.</span><span class="n">image</span> <span class="o">=</span> <span class="k">try</span> <span class="k">await</span> <span class="k">Self</span><span class="o">.</span><span class="n">imageCacheActor</span><span class="o">.</span><span class="nf">load</span><span class="p">(</span><span class="nv">imageAtURL</span><span class="p">:</span> <span class="n">item</span><span class="o">.</span><span class="n">imageURL</span><span class="p">)</span>
                        <span class="p">}</span> <span class="k">catch</span> <span class="p">{</span>
                            <span class="k">self</span><span class="o">.</span><span class="n">image</span> <span class="o">=</span> <span class="kt">UIImage</span><span class="p">(</span><span class="nv">systemName</span><span class="p">:</span> <span class="s">"wifi.slash"</span><span class="p">)</span>
                        <span class="p">}</span>
                    <span class="p">}</span>
            <span class="p">}</span>
        <span class="p">}</span>
    <span class="p">}</span>
</code></pre></div></div>

<p>The code above is vastly easier to write than its UIKit counterpart! Effectively, we have a static variable that all of the cells will use: our <code class="language-plaintext highlighter-rouge">publicCache</code> singleton. We have an <code class="language-plaintext highlighter-rouge">Item</code> to provide the URL for the image and an optional <code class="language-plaintext highlighter-rouge">@State</code> variable to hold onto the image data if it is present.</p>

<p>Within the body, we check to see if that image has been loaded and, if so, use it as is. If the image <em>is not loaded</em>, we use our placeholder image and then use a <code class="language-plaintext highlighter-rouge">.task</code> modifier to fetch the image via our <code class="language-plaintext highlighter-rouge">imageCacheActor</code>. The <code class="language-plaintext highlighter-rouge">.task</code> modifier has two advantages: it allows asynchronous code within <em>and</em> has built-in cancellation if the view is no longer needed.</p>

<h3 id="easy-breezy">Easy Breezy</h3>

<p>And that’s it! In 42 lines of code, we have done something that took many more lines in UIKit. Most importantly, we utilized the same mechanism for fetching the image that we used in UIKit with no modification, which is a sign of a useful API.</p>

<p>Some improvements could be injecting the <code class="language-plaintext highlighter-rouge">ImageCacheActor</code> so we aren’t so tied to a particular fetching mechanism but I left it this was for the sake of simplicity.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Part 5! Just as a recap: part 1, I laid out the code structure for Apple’s Async Image Loading sample code. In part 2, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await. In part 3 we converted the ImageCache, responsible for fetching and caching image data to an Actor in order to safely use and mutate across contexts. Part 4 featured me getting rid of the URLProtocol subclass in order to utilize the Repository Pattern in order to chose between mock and live data being vended using Structured Concurrency. SwiftUI Introduced in 2019, SwiftUI is the “best way to build apps for Apple platforms” according to Apple. So, let’s utilize our new, modern version of the image loading in SwiftUI. I merged the commit into main and you can follow along here. Implementation Because we want the image to only load when it is on screen, we will use a LazyVGrid with five columns. var body: some View { ScrollView { LazyVGrid(columns: columns, spacing: 0) { ForEach(items) { item in ItemView(item: item) } } } } Nothing too wild here. Instead of using AsyncImage, let’s write our own ItemView that uses our ImageCacheActor. private struct ItemView: View { static let imageCacheActor: ImageCacheActor = ImageCacheActor.publicCache let item: Item @State private var image: UIImage? 
var body: some View { if let image { Image(uiImage: image) .resizable() .aspectRatio(1, contentMode: .fit) } else { Image(uiImage: ImageCacheActor.placeholderImage) .resizable() .aspectRatio(1, contentMode: .fill) .scaleEffect(0.5) .task { do { self.image = try await Self.imageCacheActor.load(imageAtURL: item.imageURL) } catch { self.image = UIImage(systemName: "wifi.slash") } } } } } The code above is vastly easier to write than UIKit! Effectively, we have a static variable that all of the cells will use: our publicCache singleton. We have an Item to provide the URL for the image and then an optional @State variable to hold onto the image data if it is present. Within the body, we check to see if that image has been loaded and, if so, use it as is. If the image is not loaded then we use our placeholder image and then use a .task modifier to fetch the image using our imageCacheActor. The .task modifier has two advantages: it allows for asynchronous code within and has built-in cancelation if the view is no longer needed. Easy Breezy And that’s it! In 42 lines of code, we have done something that took many lines in UIKit. Most importantly, we utlized the same mechanism for fetching the image that we used in UIKit with no modification which is a sign of a useful API. 
Some improvements could be injecting the ImageCacheActor so we aren’t so tied to a particular fetching mechanism but I left it this was for the sake of simplicity.]]></summary></entry><entry><title type="html">Structured Concurrency Conversion (Part 4)</title><link href="https://jacobvanorder.github.io/structured-concurrency-conversion-part-4/" rel="alternate" type="text/html" title="Structured Concurrency Conversion (Part 4)" /><published>2025-07-02T23:18:00+00:00</published><updated>2025-07-02T23:18:00+00:00</updated><id>https://jacobvanorder.github.io/structured-concurrency-conversion-part-4</id><content type="html" xml:base="https://jacobvanorder.github.io/structured-concurrency-conversion-part-4/"><![CDATA[<p>To catch you up, in <a href="/structured-concurrency-conversion-part-1">part 1</a>, I laid out the code structure for Apple’s <a href="https://developer.apple.com/documentation/uikit/asynchronously-loading-images-into-table-and-collection-views">Async Image Loading</a> sample code. In <a href="/structured-concurrency-conversion-part-2">part 2</a>, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await. In <a href="/structured-concurrency-conversion-part-3">part 3</a> we converted the <code class="language-plaintext highlighter-rouge">ImageCache</code>, responsible for fetching and caching image data to an <code class="language-plaintext highlighter-rouge">Actor</code> in order to safely use and mutate across contexts.</p>

<p>Today, we are going to finish the last bit of infrastructure that will get the images for the Table/Collection Views.</p>

<p>Again, my code is located <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews">here</a>.</p>

<h2 id="ios-2-was-a-long-time-ago">iOS 2 was a Long Time Ago</h2>

<p><a href="https://en.wikipedia.org/wiki/IPhone_OS_2">iPhoneOS 2.0</a> was released on July 11, 2008, which was 17 years ago.</p>

<p>The mechanism that Apple’s sample code uses to bypass the network and, instead, go to the bundle is <a href="https://developer.apple.com/documentation/foundation/urlprotocol"><code class="language-plaintext highlighter-rouge">URLProtocol</code></a>. What’s amazing to think about is that <code class="language-plaintext highlighter-rouge">URLProtocol</code> predates <code class="language-plaintext highlighter-rouge">URLSession</code>, which came out five years later with iOS 7.0. Before that, you’d use <a href="https://developer.apple.com/documentation/foundation/nsurlconnection"><code class="language-plaintext highlighter-rouge">NSURLConnection</code></a>.</p>

<p>Needless to say, this does not support Swift Concurrency.</p>

<h2 id="imageurlprotocol">ImageURLProtocol</h2>

<p>Let’s do our best to change this class that uses <code class="language-plaintext highlighter-rouge">DispatchSerialQueue</code> into something that can use <code class="language-plaintext highlighter-rouge">Task</code>. You can follow along in <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/b7d43e50f9e338289f988c7b40068052d5dd125d">this commit</a>.</p>

<p>In the <code class="language-plaintext highlighter-rouge">startLoading()</code> function, instead of creating a <code class="language-plaintext highlighter-rouge">DispatchWorkItem</code>, we will create a <code class="language-plaintext highlighter-rouge">Task</code> that we need to hold on to in case we need to cancel it later in <code class="language-plaintext highlighter-rouge">stopLoading()</code>. Within this <code class="language-plaintext highlighter-rouge">Task</code>, we sleep for a random duration between 0.5 and 3.0 seconds. The rest of the code is basically the same, with the only other difference being that we wrap it all in a <code class="language-plaintext highlighter-rouge">do/catch</code> in order to catch the errors, log them, and fail with an error.</p>
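<p>As a rough sketch (compiled in Swift 5 mode), the converted shape looks something like this. <code class="language-plaintext highlighter-rouge">Loader</code>, <code class="language-plaintext highlighter-rouge">loadingTask</code>, and <code class="language-plaintext highlighter-rouge">delivered</code> are illustrative stand-ins for the <code class="language-plaintext highlighter-rouge">URLProtocol</code> subclass and its client callbacks, not the actual code in the repo:</p>

```swift
import Foundation

// Illustrative sketch only: `Loader`, `loadingTask`, and `delivered` are
// stand-ins for the URLProtocol subclass and its client callbacks.
final class Loader {
    private var loadingTask: Task<Void, Never>?
    var delivered: Result<Data, Error>?

    func startLoading() {
        // Hold on to the Task so stopLoading() can cancel it later.
        loadingTask = Task {
            do {
                // Mimic the sample's artificial delay of 0.5–3.0 seconds.
                let seconds = Double.random(in: 0.5...3.0)
                try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
                self.delivered = .success(Data()) // stands in for the client "did load" callbacks
            } catch {
                // Log and fail with the error; cancellation lands here too.
                self.delivered = .failure(error)
            }
        }
    }

    func stopLoading() {
        loadingTask?.cancel()
    }
}
```

<p>Note that the closure captures <code class="language-plaintext highlighter-rouge">self</code>, just like the real conversion does.</p>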

<p>Job done! Right? Right?</p>

<h3 id="those-damn-errors">Those Damn Errors</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>❌ Passing closure as a 'sending' parameter risks causing data races between code in the current task and concurrent execution of the closure; this is an error in the Swift 6 language mode
	ℹ️ Closure captures 'self' which is accessible to code in the current task
</code></pre></div></div>

<p>That’s right, <code class="language-plaintext highlighter-rouge">self</code> is not safe to send between contexts because it is neither Sendable nor isolated. If you try to make it <code class="language-plaintext highlighter-rouge">Sendable</code>, that’s a no-no because this class inherits from <code class="language-plaintext highlighter-rouge">URLProtocol</code> and <code class="language-plaintext highlighter-rouge">'Sendable' class 'ImageURLAsyncProtocol' cannot inherit from another class other than 'NSObject'</code>. Besides, you have a property holding the <code class="language-plaintext highlighter-rouge">Task</code> that is not safe to mutate from various contexts.</p>

<p>You can’t take <code class="language-plaintext highlighter-rouge">self</code> out of the equation because you need to send <code class="language-plaintext highlighter-rouge">self</code> to all of the <code class="language-plaintext highlighter-rouge">URLProtocolClient</code> functions you need to call in order to signal to the client that a response was received or that the request failed.</p>

<p>If we weren’t using <code class="language-plaintext highlighter-rouge">self</code>, we could pass only the actual information, after making sure it was <code class="language-plaintext highlighter-rouge">Sendable</code>, by capturing only what we need in an <a href="https://forums.swift.org/t/swift-6-concurrency-error-of-passing-sending-closure/77048/3">explicit capture list</a>. But, again, <code class="language-plaintext highlighter-rouge">self</code> is being used.</p>
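<p>For illustration, here’s what that capture-list technique looks like in a context where <code class="language-plaintext highlighter-rouge">self</code> isn’t required. All of the names here are hypothetical, not from the sample project:</p>

```swift
import Foundation

// Hypothetical example: `Request` and `Fetcher` are made up to show the
// technique. The capture list `[request]` copies the Sendable value into
// the closure, so `self` never crosses the concurrency boundary.
struct Request: Sendable {
    let url: URL
}

final class Fetcher {
    let request = Request(url: URL(string: "https://example.com/images/cat.png")!)

    func requestedFileName(completion: @escaping @Sendable (String) -> Void) {
        Task { [request] in
            // Only the copied `request` is visible here; the non-Sendable
            // `Fetcher` instance was never captured.
            completion(request.url.lastPathComponent)
        }
    }
}
```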

<h3 id="preconcurrency-to-the-rescue-"><code class="language-plaintext highlighter-rouge">@preconcurrency</code> to the Rescue (?)</h3>

<p>A former me would have thought, “Well, this API definitely predates Swift Concurrency, so let’s mark it <code class="language-plaintext highlighter-rouge">@preconcurrency</code>.” And that’s what I did in this commit, but it’s not that simple. As <a href="https://www.massicotte.org/preconcurrency">Matt Massicotte points out</a>, <code class="language-plaintext highlighter-rouge">@preconcurrency</code> <em>“alters how definitions are interpreted by the compiler. In general, it relaxes some rules that might otherwise make a definition difficult or even impossible to use.”</em> Notice he said <em>“relaxes some rules”</em> and not <em>“fixes the underlying issues”</em>.</p>

<p>The code Apple provided has a <code class="language-plaintext highlighter-rouge">static</code> <code class="language-plaintext highlighter-rouge">URLSession</code> that it uses to make network calls. <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/b7d43e50f9e338289f988c7b40068052d5dd125d#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R47-R49">We use that</a> as well, but we have no guarantee that the <code class="language-plaintext highlighter-rouge">ImageURLAsyncProtocol</code> instance will be unique, even though printing out <code class="language-plaintext highlighter-rouge">self</code> each time <code class="language-plaintext highlighter-rouge">startLoading()</code> is called shows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c912c0&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91380&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c17360&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91410&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91320&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91200&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c12f10&gt;
&lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c19ec0&gt;
</code></pre></div></div>

<p>But we can’t <em>guarantee</em> that. There is a very real chance that <code class="language-plaintext highlighter-rouge">startLoading()</code> could be called multiple times, with <code class="language-plaintext highlighter-rouge">self</code> captured in the <code class="language-plaintext highlighter-rouge">Task</code> and, before the request is fulfilled, another call to <code class="language-plaintext highlighter-rouge">startLoading()</code> made while that <code class="language-plaintext highlighter-rouge">Task</code> still holds <code class="language-plaintext highlighter-rouge">self</code>. Thus <code class="language-plaintext highlighter-rouge">self</code> does indeed carry the risk the compiler warns about: it <code class="language-plaintext highlighter-rouge">risks causing data races between code in the current task and concurrent execution of the closure</code>.</p>

<p><em>FINE.</em></p>

<h2 id="an-alternative">An Alternative</h2>

<p>What are we <em>really</em> trying to do here? Apple is trying to show how to fetch data from <em>somewhere</em> and they build in a delay to mimic an <a href="https://en.wikipedia.org/wiki/EDGE_(telecommunication)">EDGE</a> connection. That <em>somewhere</em> tips me off that we might be able to do something better.</p>

<h3 id="repository-pattern">Repository Pattern</h3>

<p>Surprisingly, there is no Wikipedia entry on the <a href="https://www.geeksforgeeks.org/system-design/repository-design-pattern/">Repository Pattern</a>, but it’s something I’ve been using since 2011, when it was introduced to me in a code base I was hired to update.</p>

<p>Effectively, you provide an interface that dictates how you will provide data. You can then create concrete implementations that fulfill the promise of that interface in a specific way. For instance, you may want to provide your data from the network, but you could also provide it from an in-memory store or mock data in your bundle. Each one of those could be structures that adhere to the repository protocol but have distinct internal workings in order to provide the data.</p>

<p>Back to our issue at hand, we have a need to fetch images from <em>somewhere</em> and build in a delay.</p>

<h3 id="the-interface">The Interface</h3>

<p>We will set up a protocol that serves as an API of sorts, telling the consumer what this provider will provide. Simply, it looks like <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/ImageURLRepository.swift#L12-L14">this</a>:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">public</span> <span class="kd">protocol</span> <span class="kt">ImageURLRepository</span><span class="p">:</span> <span class="kt">Actor</span> <span class="p">{</span>
    <span class="kd">func</span> <span class="nf">loadImage</span><span class="p">(</span><span class="n">atURL</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="p">)</span> <span class="k">async</span> <span class="k">throws</span> <span class="o">-&gt;</span> <span class="kt">UIImage</span>
<span class="p">}</span>
</code></pre></div></div>

<p>You’ll notice a couple things:</p>

<ul>
  <li>Whatever adheres to the protocol must be an <code class="language-plaintext highlighter-rouge">Actor</code></li>
  <li>It loads an image at a URL</li>
  <li>It is <code class="language-plaintext highlighter-rouge">async</code>, <code class="language-plaintext highlighter-rouge">throw</code>ing, and returns a <code class="language-plaintext highlighter-rouge">UIImage</code></li>
</ul>

<h3 id="mock">Mock</h3>

<p>Now that we have a contract of what we need to do, let’s make our mock version. Thinking about what we’ll want this mock provider to do, we’ll want something that:</p>

<ul>
  <li>Has a delay that could be random</li>
  <li>Goes to a <code class="language-plaintext highlighter-rouge">Bundle</code> and looks for the file name at the end of that <code class="language-plaintext highlighter-rouge">URL</code></li>
  <li>Returns the image</li>
</ul>

<p><a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/ImageURLMockRepository.swift#L13-L31">Here</a> is the code:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">actor</span> <span class="kt">ImageURLMockRepository</span><span class="p">:</span> <span class="kt">ImageURLRepository</span> <span class="p">{</span>

    <span class="k">let</span> <span class="nv">delayRange</span><span class="p">:</span> <span class="kt">ClosedRange</span><span class="o">&lt;</span><span class="kt">Double</span><span class="o">&gt;</span>
    <span class="k">let</span> <span class="nv">bundle</span><span class="p">:</span> <span class="kt">Bundle</span>

    <span class="kd">func</span> <span class="nf">loadImage</span><span class="p">(</span><span class="n">atURL</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="p">)</span> <span class="k">async</span> <span class="k">throws</span> <span class="o">-&gt;</span> <span class="kt">UIImage</span> <span class="p">{</span>
        <span class="k">try</span> <span class="k">await</span> <span class="kt">Task</span><span class="o">.</span><span class="nf">sleep</span><span class="p">(</span><span class="nv">for</span><span class="p">:</span> <span class="o">.</span><span class="nf">randomSeconds</span><span class="p">(</span><span class="nv">in</span><span class="p">:</span> <span class="n">delayRange</span><span class="p">))</span>
        
        <span class="k">let</span> <span class="nv">name</span> <span class="o">=</span> <span class="n">url</span><span class="o">.</span><span class="n">lastPathComponent</span>
        <span class="k">guard</span> <span class="k">let</span> <span class="nv">bundleURL</span> <span class="o">=</span> <span class="n">bundle</span><span class="o">.</span><span class="nf">url</span><span class="p">(</span><span class="nv">forResource</span><span class="p">:</span> <span class="n">name</span><span class="p">,</span> <span class="nv">withExtension</span><span class="p">:</span> <span class="s">""</span><span class="p">)</span> <span class="k">else</span> <span class="p">{</span> <span class="k">throw</span> <span class="kt">ImageURLRepositoryError</span><span class="o">.</span><span class="n">imageDataNotFound</span> <span class="p">}</span>
        <span class="k">let</span> <span class="nv">data</span> <span class="o">=</span> <span class="k">try</span> <span class="kt">Data</span><span class="p">(</span><span class="nv">contentsOf</span><span class="p">:</span> <span class="n">bundleURL</span><span class="p">)</span>
        <span class="k">return</span> <span class="k">try</span> <span class="k">Self</span><span class="o">.</span><span class="nf">image</span><span class="p">(</span><span class="nv">fromData</span><span class="p">:</span> <span class="n">data</span><span class="p">)</span>
    <span class="p">}</span>

    <span class="nf">init</span><span class="p">(</span><span class="n">delayedBetween</span> <span class="nv">start</span><span class="p">:</span> <span class="kt">Double</span><span class="p">,</span> <span class="n">and</span> <span class="nv">end</span><span class="p">:</span> <span class="kt">Double</span><span class="p">,</span> <span class="nv">bundle</span><span class="p">:</span> <span class="kt">Bundle</span> <span class="o">=</span> <span class="o">.</span><span class="n">main</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">self</span><span class="o">.</span><span class="n">delayRange</span> <span class="o">=</span> <span class="n">start</span><span class="o">...</span><span class="n">end</span>
        <span class="k">self</span><span class="o">.</span><span class="n">bundle</span> <span class="o">=</span> <span class="n">bundle</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>We have our delay range, from which we choose a random number when we ask <code class="language-plaintext highlighter-rouge">Task</code> to <code class="language-plaintext highlighter-rouge">sleep</code> for a number of seconds. We also don’t want to hard-code the <code class="language-plaintext highlighter-rouge">Bundle</code> that we pull from, as we might need a different one for testing or other scenarios.</p>

<p>Then it’s a matter of sleeping, pulling out the image name, getting the data, and returning the image.</p>
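<p>One small thing to note: <code class="language-plaintext highlighter-rouge">.randomSeconds(in:)</code> isn’t a built-in <code class="language-plaintext highlighter-rouge">Duration</code> API; it’s a small helper in the project. A plausible sketch of it (the repo’s version may differ):</p>

```swift
// Plausible sketch of the `.randomSeconds(in:)` helper the mock calls;
// the project's actual implementation may differ.
extension Duration {
    static func randomSeconds(in range: ClosedRange<Double>) -> Duration {
        .seconds(Double.random(in: range))
    }
}
```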

<h3 id="network">Network</h3>

<p>With the same contract as the Mock source, let’s make our network version. Thinking about what we’ll want this network provider to do, we’ll want something that:</p>

<ul>
  <li>Takes in a <code class="language-plaintext highlighter-rouge">URLSession</code> to use for fetching</li>
  <li>Loads the data asynchronously</li>
</ul>

<p>This is more clear-cut and the code looks like <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/ImageURLNetworkRepository.swift#L12-L25">this</a>:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">actor</span> <span class="kt">ImageURLNetworkRepository</span><span class="p">:</span> <span class="kt">ImageURLRepository</span> <span class="p">{</span>

    <span class="k">let</span> <span class="nv">urlSession</span><span class="p">:</span> <span class="kt">URLSession</span>

    <span class="kd">func</span> <span class="nf">loadImage</span><span class="p">(</span><span class="n">atURL</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="p">)</span> <span class="k">async</span> <span class="k">throws</span> <span class="o">-&gt;</span> <span class="kt">UIImage</span> <span class="p">{</span>
        <span class="k">let</span> <span class="p">(</span><span class="nv">data</span><span class="p">,</span> <span class="nv">response</span><span class="p">)</span> <span class="o">=</span> <span class="k">try</span> <span class="k">await</span> <span class="n">urlSession</span><span class="o">.</span><span class="nf">data</span><span class="p">(</span><span class="nv">from</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span>
        <span class="nf">guard</span> <span class="p">(</span><span class="n">response</span> <span class="k">as?</span> <span class="kt">HTTPURLResponse</span><span class="p">)?</span><span class="o">.</span><span class="n">statusCode</span> <span class="o">!=</span> <span class="mi">404</span> <span class="k">else</span> <span class="p">{</span> <span class="k">throw</span> <span class="kt">ImageURLRepositoryError</span><span class="o">.</span><span class="n">imageDataNotFound</span> <span class="p">}</span>
        <span class="k">return</span> <span class="k">try</span> <span class="k">Self</span><span class="o">.</span><span class="nf">image</span><span class="p">(</span><span class="nv">fromData</span><span class="p">:</span> <span class="n">data</span><span class="p">)</span>
    <span class="p">}</span>

    <span class="nf">init</span><span class="p">(</span><span class="nv">urlSession</span><span class="p">:</span> <span class="kt">URLSession</span> <span class="o">=</span> <span class="o">.</span><span class="n">shared</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">self</span><span class="o">.</span><span class="n">urlSession</span> <span class="o">=</span> <span class="n">urlSession</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>This is even less complicated because we aren’t delaying. We fetch the data, check the response code, and return the image. You’ll notice a helper for instantiating the <code class="language-plaintext highlighter-rouge">UIImage</code> from data. Both the mock and network versions need that, so I moved it into a <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/ImageURLRepository.swift#L16-L21">protocol extension</a>.</p>

<h3 id="why-an-actor">Why an Actor?</h3>

<p>If you were to get rid of the condition that the protocol must be an <code class="language-plaintext highlighter-rouge">Actor</code> and make the mock and network versions a <code class="language-plaintext highlighter-rouge">final class</code>, you’d initially get a <code class="language-plaintext highlighter-rouge">Sending 'self.repository' risks causing data races; this is an error in the Swift 6 language mode</code> error when you wrap <code class="language-plaintext highlighter-rouge">ImageCacheActor</code>’s usage in a <code class="language-plaintext highlighter-rouge">Task</code> <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/blob/main/AsynchronouslyLoadingImagesIntoTableAndCollectionViews/Async%20Image%20Loading/ImageCacheActor.swift#L52">here</a>. That might seem like a deal breaker, but what it’s mad about isn’t <code class="language-plaintext highlighter-rouge">repository</code> but the <code class="language-plaintext highlighter-rouge">self.</code> part. To get around this, you capture only what you need:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Task { [repository] in 
	…
}  
</code></pre></div></div>

<p>I ran into something similar with a SwiftUI view and found <a href="https://forums.swift.org/t/whats-the-best-way-to-resolve-the-concurrency-warning-capture-of-self-with-non-sendable-type-loadingview-in-a-sendable-closure/63631">this</a> Swift Forums post. In our case, it actually <a href="https://forums.swift.org/t/task-capture-list-silence-data-race-error/78275">might be a bug</a>, so, to be safe, I’ll change my repositories back to an Actor. Additionally, when using this pattern in production, I have been keeping some internal state in my repositories: things like <code class="language-plaintext highlighter-rouge">next_page</code> or how many times something has been accessed.</p>

<h2 id="in-conclusion">In Conclusion</h2>

<p>This is my final piece of the puzzle in changing the Apple code over to something more modern. Ultimately, because <code class="language-plaintext highlighter-rouge">URLProtocol</code> is a byproduct of a different era of Apple development, it is too incompatible with Swift strict concurrency, and a different approach is needed.</p>

<p>So, that wraps this all up! If you see any issues or have any corrections, <a href="mailto:jacob@sushigrass.com">please reach out</a>. I’d like to send a very special thanks out to <a href="https://www.massicotte.org">Matt Massicotte</a> for sanity-checking early versions of this series of blog posts.</p>

<h2 id="whats-next">What’s Next?</h2>

<p>I have some ideas about where to take this. How would I use the same code in SwiftUI? What impact do the changes in Swift 6.2 have towards this code base? Anything else you’d like to see? Reach out on <a href="https://mastodon.social/@jacobvo">Mastodon</a>.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[To catch you up, in part 1, I laid out the code structure for Apple’s Async Image Loading sample code. In part 2, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await. In part 3 we converted the ImageCache, responsible for fetching and caching image data to an Actor in order to safely use and mutate across contexts. Today, we are going to finish the last bit of infrastructure that will get the images for the Table/Collection Views. Again, my code is located here. iOS 2 was a Long Time Ago iPhoneOS 2.0 was released on July 11, 2008 which was 17 years ago. The mechanism that uses Apple’s sample code to bypass the network and, instead, go to the bundle is URLProtocol. What’s amazing to think about is that URLProtocol uses URLSession which came out five years later with iOS 7.0. Before that, you’d use NSURLConnection. Needless to say, this does not support Swift Concurrency. ImageURLProtocol Let’s do our best to change this class that uses DispatchSerialQueue into something that can use Task. You can follow along in this commit. In the startLoading() function, instead of creating a DispatchWorkItem, we will create a Task that we need to hold on in case we need to cancel it later in stopLoading(). Within this Task, we sleep for a random amount between 0.5 and 3.0 seconds. 
The rest of the code is basically the same with the only other difference being that we wrap it all in a do/catch in order to catch the errors, log them, and fail with an error. Job done! Right? Right? Those Damn Errors ❌ Passing closure as a 'sending' parameter risks causing data races between code in the current task and concurrent execution of the closure; this is an error in the Swift 6 language mode` ℹ️ Closure captures 'self' which is accessible to code in the current task That’s right, self is not safe to send between contexts because it is neither Sendable nor isolated. If you try to make it Sendable that’s a no-no because this class inherits from URLProtocol and 'Sendable' class 'ImageURLAsyncProtocol' cannot inherit from another class other than 'NSObject'. Besides, you have a property of the Task that is not safe to mutate from various contexts. You can’t take self out of the equation because you need to send self to all of the URLProtocolClient functions you need to call in order to signal to the client that a response was received or if you failed. If we weren’t using self, we could pass only the actual information, after making sure it was Sendable, by only capturing what you need in an explicit capture list but, again, self is being used. @preconcurrency to the Rescue (?) A former me would have thought, “Well, this API is definitely before Swift Concurrency.”, let’s mark it @preconcurrency. And that’s what I did in this commit but it’s not that simple. As Matt Massicote points out, @preconcurrency “alters how definitions are interpreted by the compiler. In general, it relaxes some rules that might otherwise make a definition difficult or even impossible to use.”. Notice he said “relaxes some rules” and not “fixes the underlying issues”. The code Apple provided has a static URLSession that it uses to make network calls. 
We use that to make network calls but we have no guarantee that the ImageURLAsyncProtocol will be unique despite me printing out self each time startLoading() is called and see: &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c912c0&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91380&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c17360&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91410&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91320&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c91200&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c12f10&gt; &lt;Async_Image_Loading.ImageURLAsyncProtocol: 0x600000c19ec0&gt; But we can’t guarantee that. There is a very real chance that our class could be used multiple times to startLoading() and then self is captured in the Task and, before the request is fulfilled, another call to startLoading() is made and replaces the reference to self. Thus self does risks causing data races between code in the current task and concurrent execution of the closure. FINE. An Alternative What are we really trying to do here? Apple is trying to show how to fetch data from somewhere and they build in a delay to mimic an EDGE connection. That somewhere tips me off that we might be able to do something better. Repository Pattern Surprisingly, there is no wiki entry on the Repository Pattern but it’s been something I’ve been using since 2011 when it was introduced to me in a code base I was hired to update. Effectively, you provide an interface that dictates how you will provide data. You can then determine concrete functionality that will fulfill the promise of that interface in a specific way. For instance, you may want to provide your data from the network but you could also provide it from in-memory store or mock data in your bundle. 
Each one of those could be structures that adhere to the repository protocol but have distinct internal workings in order to provide the data. Back to our issue at hand, we have a need to fetch images from somewhere and build in a delay. The Interface We will set up a protocol that gives the API of sorts to the consumer of what this provider will provide. Simply, it looks like this:]]></summary></entry><entry><title type="html">Structured Concurrency Conversion (Part 3)</title><link href="https://jacobvanorder.github.io/structured-concurrency-conversion-part-3/" rel="alternate" type="text/html" title="Structured Concurrency Conversion (Part 3)" /><published>2025-07-01T15:49:00+00:00</published><updated>2025-07-01T15:49:00+00:00</updated><id>https://jacobvanorder.github.io/structured-concurrency-conversion-part-3</id><content type="html" xml:base="https://jacobvanorder.github.io/structured-concurrency-conversion-part-3/"><![CDATA[<p>Alright, in <a href="/structured-concurrency-conversion-part-1">part 1</a>, I laid out the code structure for Apple’s <a href="https://developer.apple.com/documentation/uikit/asynchronously-loading-images-into-table-and-collection-views">Async Image Loading</a> sample code. In <a href="/structured-concurrency-conversion-part-2">part 2</a>, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await.</p>

<p>Just as a reminder my code is located <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews">here</a>.</p>

<h4 id="a-disclosure">A Disclosure</h4>

<p>Before we start, I want to reference back to Swift’s release in 2014. There was a tendency for developers with years of Objective-C experience to write Swift code using the same patterns, which often resulted in the fuzzy, subjective judgment that the code wasn’t “Swifty” enough. Something similar is happening again with Structured Concurrency. After years of writing code that explicitly manages threading and handles locks, we now have a handy tool that allows us to catch potential data races before compilation. However, as we learn this new approach, our code might initially mimic older patterns to some degree. The reason I mention this is if you squint at the old code in the exercise, you should be able to see how it translates to something more modern. The underlying patterns and structure are somewhat present, which might help you bridge from the old way of doing things to the new, even if it doesn’t fully embrace all the new paradigm has to offer.</p>

<h2 id="imagecacheactor"><code class="language-plaintext highlighter-rouge">ImageCacheActor</code></h2>

<p>Let’s begin by making an Actor called <code class="language-plaintext highlighter-rouge">ImageCacheActor</code>.</p>

<h3 id="properties">Properties</h3>

<p>We’ll have a <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R16">singleton static variable</a> that can be accessed from various places just like the old version.</p>

<p>The old version uses <code class="language-plaintext highlighter-rouge">NSCache&lt;NSURL, UIImage&gt;</code> to cache the images, but is that <code class="language-plaintext highlighter-rouge">Sendable</code> or thread-safe? The documentation does say:</p>

<blockquote>
  <p>You can add, remove, and query items in the cache from different threads without having to lock the cache yourself.</p>
</blockquote>

<p>Let’s <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R21-R22">roll with it</a>! We are making it <code class="language-plaintext highlighter-rouge">@MainActor</code> because we will want to access it later from the Table/CollectionView in order to determine if we even <em>need</em> to fetch the image. This will need to be done from the Main Actor.</p>

<p>The next and final property used to be a Dictionary where the key was the <code class="language-plaintext highlighter-rouge">NSURL</code> and the value was <code class="language-plaintext highlighter-rouge">[(Item, UIImage?) -&gt; Swift.Void]</code> <em>(note, that’s an Array of closures)</em>. The purpose of this Dictionary was to handle the case where the image was already loading but a newly dequeued cell requested it again, passing in a new completion closure. When the image loaded, the original closure would be called, along with all subsequent closures from other callers that requested the same URL.</p>

<p>What we’ll be doing is converting that to a Dictionary where the key is still <code class="language-plaintext highlighter-rouge">NSURL</code> but the value will be the <strong>first</strong> task (<a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R25">See here</a>). <strong>This is the first big shift in thinking because of the top-down approach of async/await.</strong> We’ll get into what that means in the implementation described down below.</p>
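<p>To illustrate that shift, here is a minimal, self-contained sketch of the “first task wins” idea. <code class="language-plaintext highlighter-rouge">InFlightImageLoader</code> and its <code class="language-plaintext highlighter-rouge">fetch(_:)</code> are invented for this example; the real <code class="language-plaintext highlighter-rouge">ImageCacheActor</code> also caches results and loads real images:</p>

```swift
import Foundation

// Minimal sketch of de-duplicating in-flight loads by storing the *first*
// Task per URL. `InFlightImageLoader` and `fetch(_:)` are invented for this
// example and stand in for the real ImageCacheActor.
actor InFlightImageLoader {
    private var inFlight: [URL: Task<Data, Error>] = [:]

    func load(url: URL) async throws -> Data {
        if let existing = inFlight[url] {
            // A later caller simply awaits the first caller's Task instead
            // of registering another completion closure.
            return try await existing.value
        }
        let task = Task { try await self.fetch(url) }
        inFlight[url] = task
        defer { inFlight[url] = nil } // clean up once the load finishes
        return try await task.value
    }

    private func fetch(_ url: URL) async throws -> Data {
        try await Task.sleep(nanoseconds: 10_000_000) // pretend to hit the network
        return Data(url.absoluteString.utf8)
    }
}
```

<p>Every caller gets the same result from the one in-flight <code class="language-plaintext highlighter-rouge">Task</code>, which is the top-down async/await replacement for the old array of completion closures.</p>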

<p>Okay, the properties are out of the way so let’s get to the meat of it.</p>

<h3 id="functions">Functions</h3>

<p>Our first function follows what we had before with a public interface for our <code class="language-plaintext highlighter-rouge">NSCache</code>. This works, though, because, well, <em>“You can add, remove, and query items in the cache from different threads without having to lock the cache yourself.”</em> If you say so! We are annotating it as <code class="language-plaintext highlighter-rouge">@MainActor</code> for reasons explained above.</p>

<h4 id="load-url">Load URL</h4>

<p>The real work gets done in <code class="language-plaintext highlighter-rouge">final func load(url: URL) async throws -&gt; UIImage</code>. You’ll notice that we changed the <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R34">signature</a> to take in a <code class="language-plaintext highlighter-rouge">URL</code> but no <code class="language-plaintext highlighter-rouge">Item</code>, as I don’t want to tie this utilitarian functionality to a specific model object type. Also, we replace the completion handler with a function that is <code class="language-plaintext highlighter-rouge">async</code>, <code class="language-plaintext highlighter-rouge">throws</code> an error, and returns a <code class="language-plaintext highlighter-rouge">UIImage</code>. Before, the function gave no indication that something went wrong; it just returned <code class="language-plaintext highlighter-rouge">nil</code> for the image, which is better than returning the placeholder image, I guess.</p>

<p>Let’s get to the meat of the function!</p>

<p>I’m actually going to defer my explanation of the <code class="language-plaintext highlighter-rouge">defer</code> to the end. Moving on…</p>

<p>After the <code class="language-plaintext highlighter-rouge">defer</code>, the first step is to check the cache using the function outlined above and return the cached image if there is one.</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>        <span class="c1">// Check for a cached image.</span>
        <span class="k">if</span> <span class="k">let</span> <span class="nv">cachedImage</span> <span class="o">=</span> <span class="k">await</span> <span class="nf">image</span><span class="p">(</span><span class="nv">url</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span> <span class="p">{</span>
            <span class="k">return</span> <span class="n">cachedImage</span>
        <span class="p">}</span>
</code></pre></div></div>
<p>We need to <code class="language-plaintext highlighter-rouge">await</code> because we are changing contexts between our Actor and the <code class="language-plaintext highlighter-rouge">@MainActor</code>. Not a huge deal as <code class="language-plaintext highlighter-rouge">NSCache</code> is safe and our function is <code class="language-plaintext highlighter-rouge">async</code> anyway.</p>

<p>Remember that Dictionary where the key was the <code class="language-plaintext highlighter-rouge">NSURL</code> and the value was a <code class="language-plaintext highlighter-rouge">Task</code>? Time to shine, <code class="language-plaintext highlighter-rouge">loadingResponses</code>!</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>        <span class="c1">// In case there is more than one requestor for the image, we wait for the previous request and</span>
        <span class="c1">// return the image (or throw)</span>
        <span class="k">if</span> <span class="k">let</span> <span class="nv">previousTask</span> <span class="o">=</span> <span class="n">loadingResponses</span><span class="p">[</span><span class="n">url</span><span class="p">]</span> <span class="p">{</span>
            <span class="k">return</span> <span class="k">try</span> <span class="k">await</span> <span class="n">previousTask</span><span class="o">.</span><span class="n">value</span>
        <span class="p">}</span>
</code></pre></div></div>

<p>This code checks the Dictionary for a previous request stored as a <code class="language-plaintext highlighter-rouge">Task</code> under that <code class="language-plaintext highlighter-rouge">URL</code>. If we did make a request earlier, we tell any subsequent caller loading the image at that <code class="language-plaintext highlighter-rouge">URL</code> to hold on for the result of that first call. <em>This is quite the shift in thinking!</em>
Previously, we <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/5b0169831551202154c78c20da0e81cde67f3ee2#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R30-R35">held each request’s completion handler in an array</a> and then <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/5b0169831551202154c78c20da0e81cde67f3ee2#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R48-R54">iterated over each stored completion closure</a> once the image came back and was valid.</p>

<p>Next up, we make a <code class="language-plaintext highlighter-rouge">Task&lt;(UIImage), any Error&gt;</code> and save it to a local <code class="language-plaintext highlighter-rouge">let</code> variable.</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>        <span class="c1">// Go fetch the image.</span>
        <span class="k">let</span> <span class="nv">currentTask</span> <span class="o">=</span> <span class="kt">Task</span> <span class="p">{</span>
            <span class="k">let</span> <span class="p">(</span><span class="nv">data</span><span class="p">,</span> <span class="nv">_</span><span class="p">)</span> <span class="o">=</span> <span class="k">try</span> <span class="k">await</span> <span class="kt">ImageURLAsyncProtocol</span><span class="o">.</span><span class="nf">urlSession</span><span class="p">()</span><span class="o">.</span><span class="nf">data</span><span class="p">(</span><span class="nv">from</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span>
            <span class="c1">// Try to create the image. If not, throw bad image data error.</span>
            <span class="k">guard</span> <span class="k">let</span> <span class="nv">image</span> <span class="o">=</span> <span class="kt">UIImage</span><span class="p">(</span><span class="nv">data</span><span class="p">:</span> <span class="n">data</span><span class="p">)</span> <span class="k">else</span> <span class="p">{</span>
                <span class="k">throw</span> <span class="kt">LoadingError</span><span class="o">.</span><span class="n">badImageData</span>
            <span class="p">}</span>
            <span class="c1">// Cache the image.</span>
            <span class="k">await</span> <span class="nf">setCachedImage</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="nv">atUrl</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span>
            <span class="k">return</span> <span class="n">image</span>
        <span class="p">}</span>
</code></pre></div></div>

<p>In the task, we asynchronously fetch the data from the <code class="language-plaintext highlighter-rouge">URL</code>. When that is done, we make sure it’s a valid <code class="language-plaintext highlighter-rouge">UIImage</code> or <code class="language-plaintext highlighter-rouge">throw</code> an error. If it is valid, we use a <em>function</em> (more on that in a second) to set the image to the cache, and, finally, return the image as the <code class="language-plaintext highlighter-rouge">Task</code>’s value.</p>

<p>This is the part of the code that lines up well with the old way of fetching <code class="language-plaintext highlighter-rouge">URLSession</code> data, getting a completion closure, and handling the result. In fact, it lines up so well that it probably doesn’t go far enough in transforming the code to the new way of working, which <a href="#a-disclosure">I apologized for before</a>. Much like Apple probably looks at their original code sample and cringes, so will I in five years.</p>

<p>After that <code class="language-plaintext highlighter-rouge">Task</code> is made, we store it in the <code class="language-plaintext highlighter-rouge">loadingResponses</code> Dictionary under the <code class="language-plaintext highlighter-rouge">URL</code> and then asynchronously return the eventual value of the task or throw any error.</p>
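<p>Reconstructed from that description (a sketch; the repo’s actual code may differ slightly), those two steps are likely little more than:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>        // Register the in-flight task so later callers can await it...
        loadingResponses[url] = currentTask
        // ...then wait for the image (or rethrow whatever the task threw).
        return try await currentTask.value
</code></pre></div></div>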

<p>Back to the <code class="language-plaintext highlighter-rouge">defer</code> at the top of the function. This one:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>	<span class="k">defer</span> <span class="p">{</span> <span class="n">loadingResponses</span><span class="o">.</span><span class="nf">removeValue</span><span class="p">(</span><span class="nv">forKey</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span> <span class="p">}</span>
</code></pre></div></div>

<p>If you think about it, we have a shared singleton <code class="language-plaintext highlighter-rouge">ImageCacheActor</code> which has an <code class="language-plaintext highlighter-rouge">NSCache</code> and a <code class="language-plaintext highlighter-rouge">[URL: Task&lt;UIImage, any Error&gt;]</code> Dictionary. That <code class="language-plaintext highlighter-rouge">Task</code> will hold on to its value as long as we keep it around. In essence, it could be our own cache if we wanted it to be, but <code class="language-plaintext highlighter-rouge">NSCache</code> has some nice features, such as flushing memory if needed, that we get for free. To hold less memory, we remove the <code class="language-plaintext highlighter-rouge">Task</code> from the dictionary so it gets freed up.</p>

<p>This is the end of our big <code class="language-plaintext highlighter-rouge">func load(url: URL) async throws -&gt; UIImage</code> function!</p>

<p><em>But why a function for setting the image to the cache?</em></p>

<p>If you try to set the object directly on the <code class="language-plaintext highlighter-rouge">NSCache</code> via <code class="language-plaintext highlighter-rouge">cachedImages.setObject(image, forKey: url as NSURL)</code>, you will get the helpful message of <code class="language-plaintext highlighter-rouge">Non-sendable type 'NSCache&lt;NSURL, UIImage&gt;' of property 'cachedImages' cannot exit main actor-isolated context; this is an error in the Swift 6 language mode</code>. Calling with an <code class="language-plaintext highlighter-rouge">await</code> doesn’t matter. This is why we come up with <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-5d11c3dfe5f4c8ab2ce5f1c202dc079762f709dbb4a243d9193489eb77b721d7R63-R66">this function</a>:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="kd">@MainActor</span>
    <span class="kd">private</span> <span class="kd">func</span> <span class="nf">setCachedImage</span><span class="p">(</span><span class="n">_</span> <span class="nv">cachedImage</span><span class="p">:</span> <span class="kt">UIImage</span><span class="p">,</span> <span class="n">atUrl</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="p">)</span> <span class="p">{</span>
        <span class="n">cachedImages</span><span class="o">.</span><span class="nf">setObject</span><span class="p">(</span><span class="n">cachedImage</span><span class="p">,</span> <span class="nv">forKey</span><span class="p">:</span> <span class="n">url</span> <span class="k">as</span> <span class="kt">NSURL</span><span class="p">)</span>
    <span class="p">}</span>
</code></pre></div></div>

<p>The property we are trying to alter is <code class="language-plaintext highlighter-rouge">@MainActor</code> so let’s annotate this function the same. Once we do that, we can set the property directly in this function and cross the contexts by <code class="language-plaintext highlighter-rouge">await</code>ing when we call it.</p>

<h2 id="tablecollection-view-implementation">Table/Collection View Implementation</h2>

<p>You can look at the <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-8036ee87cac1d8185a01c5994466a2f90fa62663e984965d58915ae176a4026dL25-R44">diff</a> for this change. Previously, Apple set the placeholder image regardless of state and fetched the image even if it was already loaded. That isn’t a big deal because we pull from the cache, but it seems unnecessary. Now, we have a three-step process:</p>

<ol>
  <li><a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-8036ee87cac1d8185a01c5994466a2f90fa62663e984965d58915ae176a4026dR26-R27">Does our model object have it? Use that</a>.</li>
  <li>Does the shared cache have it in their <code class="language-plaintext highlighter-rouge">NSCache</code>? <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-8036ee87cac1d8185a01c5994466a2f90fa62663e984965d58915ae176a4026dR28-R29">Use that</a>.</li>
  <li>Set a placeholder image and go fetch the image, <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/6a24b8f1a45fbae6b13a2ea1e0110e0c5cff8e48#diff-8036ee87cac1d8185a01c5994466a2f90fa62663e984965d58915ae176a4026dR31-R32">like so</a>.</li>
</ol>

<p>In order to fetch the image, we create a <code class="language-plaintext highlighter-rouge">Task</code> where we use our new <code class="language-plaintext highlighter-rouge">ImageCacheActor</code> to load the image from the <code class="language-plaintext highlighter-rouge">URL</code> asynchronously. If an error is thrown, we now set a broken image. Then it is a matter of setting the image to the <code class="language-plaintext highlighter-rouge">Item</code> that is used to drive the diffable data source and then asynchronously apply the updated snapshot. The cell will reload and it will use the first scenario of the model object’s image.</p>
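<p>As a hedged sketch, with <code class="language-plaintext highlighter-rouge">placeholderImage</code>, <code class="language-plaintext highlighter-rouge">brokenImage</code>, <code class="language-plaintext highlighter-rouge">item.url</code>, and the snapshot helper as stand-in names rather than the sample’s actual identifiers, that cell configuration could read:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cell.imageView.image = placeholderImage
Task {
    do {
        item.image = try await ImageCacheActor.shared.load(url: item.url)
    } catch {
        item.image = brokenImage
    }
    // Reapplying the snapshot reloads the cell, which now hits the first
    // scenario: the model object already has its image.
    await applyUpdatedSnapshot()
}
</code></pre></div></div>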

<h2 id="in-conclusion">In Conclusion</h2>

<p>That was a massive change!</p>

<p>We have some optimizations here: we are no longer holding onto an array of completion handlers, which themselves held a strong reference to the collection views. Additionally, we no longer load the cached version of the image when the item has already loaded it and it is stored in memory in the <code class="language-plaintext highlighter-rouge">Item</code>.</p>

<p>From a code organization standpoint, we had to shift some blocks of the code around now that we were no longer capturing functionality and storing it for later. Instead, if we have a loading task in progress, we ask subsequent requesters to wait for the first call.</p>

<p>Coming up in <a href="/structured-concurrency-conversion-part-4">part 4</a>, we’ll be focusing on that URLProtocol subclass.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Alright, in part 1, I laid out the code structure is for Apple’s Async Image Loading sample code. In part 2, I fixed up the Xcode project—something you’ll need to do when starting a new project because Swift 6 and strict checking aren’t on by default (as of June 2025). Today, we’ll actually convert the dispatch and closure-based code to use Actors and async/await. Just as a reminder my code is located here. A Disclosure Before we start, I want to reference back to Swift’s release in 2014. There was a tendency for developers with years of Objective-C experience to write Swift code using the same patterns, which often resulted in the fuzzy, subjective judgment that the code wasn’t “Swifty” enough. Something similar is happening again with Structured Concurrency. After years of writing code that explicitly manages threading and handles locks, we now have a handy tool that allows us to catch potential data races before compilation. However, as we learn this new approach, our code might initially mimic older patterns to some degree. The reason I mention this is if you squint at the old code in the exercise, you should be able to see how it translates to something more modern. The underlying patterns and structure are somewhat present, which might help you bridge from the old way of doing things to the new, even if it doesn’t fully embrace all the new paradigm has to offer. ImageCacheActor Let’s begin by making an Actor called ImageCacheActor. Properties We’ll have a singleton static variable that can be accessed from various places just like the old version. The old version uses NSCache&lt;NSURL, UIImage&gt; to cache the images but is that Sendable or thread-safe? 
The documentation does say: &gt; You can add, remove, and query items in the cache from different threads without having to lock the cache yourself. Let’s roll with it! We are making it @MainActor because we will want to access it later from the Table/CollectionView in order to determine if we even need to fetch the image. This will need to be done from the Main Actor. The next and final property is a dictionary that used to contain a Dictionary where the key was the NSURL and the value was [(Item, UIImage?) -&gt; Swift.Void] (Note, that’s an Array of closures). The purpose of this Dictionary is for the case the image was already loading but a newly dequeued cell requested the image again which would pass in a new completion closure. When the image is loaded, it will call the original closure but also all subsequent closures if other callers requested the same URL. What we’ll be doing is converting that to a Dictionary where the key is still NSURL but the value will be the first task (See here). This is the first big shift in thinking because of the top-down approach of async/await. We’ll get into what that means in the implementation described down below. Okay, the properties are out of the way so let’s get to the meat of it. Functions Our first function follows what we had before with a public interface for our NSCache. This works, though, because well, “You can add, remove, and query items in the cache from different threads without having to lock the cache yourself.”. If you say so! We are annotating it as @MainActor for reasoned explained above. Load URL The real work gets done in final func load(url: URL) async throws -&gt; UIImage. You’ll notice that we changed the signature to take in a URL but no Item as I don’t want to tie this utilitarian functionality to a specific model object type. Also, we change the completion handler to something that is async, throws an error, and returns UIImage. 
Before, the function gave no indication that something went wrong, it just returned nil for the image which is better than returning the placeholder image, I guess. Let’s get to the meat of the function! I’m actually going to defer my explaination of the defer to the end. Moving on… After the defer, the first step is to check the cache using the function outlined above and returned the cached image if there is one. // Check for a cached image. if let cachedImage = await image(url: url) { return cachedImage } We need to await because we are changing contexts between our Actor and the @MainActor. Not a huge deal as NSCache is safe and our function is async anyway. Remember that Dictionary where the key was the NSURL and the value was a Task? Time to shine loadingResponses! // In case there are more than one requestor for the image, we wait for the previous request and // return the image (or throw) if let previousTask = loadingResponses[url] { return try await previousTask.value } What this code does is check for a previous request (we will discuss shortly) in a Task at that URL which is stored in a Dictionary. If we did make a request earlier, we will tell whatever subsequent caller that is loading the image at that URL to hold on for the result of that first call. This is quite the shift in thinking! Previously, we held each request’s completion handler in an array and then iterate over each completion closure stored once the image comes back and is valid. Next up, we make a Task&lt;(UIImage), any Error&gt; and save it to a local let variable. // Go fetch the image. let currentTask = Task { let (data, _) = try await ImageURLAsyncProtocol.urlSession().data(from: url) // Try to create the image. If not, throw bad image data error. guard let image = UIImage(data: data) else { throw LoadingError.badImageData } // Cache the image. await setCachedImage(image, atUrl: url) return image } In the task, we asynchronously fetch the data from the URL. 
When that is done, we make sure it’s a valid UIImage or throw an error. If it is valid, we use a function (more on that in a second) to set the image to the cache, and, finally, return the image as the Task’s value. This is the part of the code that lines up well with the old way of fetching URLSession data, getting a completion closure, and handling the result. In fact, it lines up so well, it’s probably doesn’t go far enough to transform the code to adjust to the new way of working which I apologized for before. Much like Apple probably looks at their original code sample and might cringe, so will I in five years. After that Task is made, we store it in the loadingResponses Dictionary for the URL and then asynchronously return the eventual value of the task or throwing a possible error. Back to the defer at top of the function. This one: defer { loadingResponses.removeValue(forKey: url) } If you think about it, we have a shared Singleton of ImageCacheActor which has a NSCache and a Dictionary&lt;URL: Task&lt;(UIImage), any Error&gt;]. That Task will hold on to the value as long as we keep it around. In essence, it could be our own cache if we wanted it to but, NSCache has some nice features such as flushing memory, if needed, that we get for free. In order to hold less memory, let’s remove the Task from the dictionary and it will get freed up. This is the end of our big func load(url: URL) async throws -&gt; UIImage function! But why a function for setting the image to the cache? If you try to set the object directly on the NSCache via cachedImages.setObject(image, forKey: url as NSURL), you will get the helpful message of Non-sendable type 'NSCache&lt;NSURL, UIImage&gt;' of property 'cachedImages' cannot exit main actor-isolated context; this is an error in the Swift 6 language mode. Calling with an await doesn’t matter. 
This is why we come up with this function: @MainActor private func setCachedImage(_ cachedImage: UIImage, atUrl url: URL) { cachedImages.setObject(cachedImage, forKey: url as NSURL) } The property we are trying to alter is @MainActor so let’s annotate this function the same. Once we do that, we can set the property directly in this function and cross the contexts by awaiting when we call it. Table/Collection View Implementation You can look at the diff for this change but previously Apple set the image no matter of what it was and, even if the image was loaded, it would fetch the image no matter what. Now, this isn’t a big deal because we pull from the cache but it seems unnecessary. Now, we have a three step process: Does our model object have it? Use that. Does the shared cache have it in their NSCache? Use that. Set a placeholder image and go fetch the image, like so. In order to fetch the image, we create a Task where we use our new ImageCacheActor to load the image from the URL asynchronously. If an error is thrown, we now set a broken image. Then it is a matter of setting the image to the Item that is used to drive the diffable data source and then asynchronously apply the updated snapshot. The cell will reload and it will use the first scenario of the model object’s image. In Conclusion That was a massive change! We have some optimizations done here where we are no longer holding onto an array of completion handlers which themselves held a strong reference to the collection views. Additionally, we do not load the cached version of the image even though the item has loaded it and it is stored in memory in the Item. From a code organization standpoint, we had to shift some blocks of the code around now that we were no longer capturing functionality and storing it for later. Instead, if we have a loading task in progress, we ask subsequent requesters to wait for the first call. 
Coming up in part 4, we’ll be focusing on that URLProtocol subclass.]]></summary></entry><entry><title type="html">Structured Concurrency Conversion (Part 2)</title><link href="https://jacobvanorder.github.io/structured-concurrency-conversion-part-2/" rel="alternate" type="text/html" title="Structured Concurrency Conversion (Part 2)" /><published>2025-05-03T18:54:00+00:00</published><updated>2025-05-03T18:54:00+00:00</updated><id>https://jacobvanorder.github.io/structured-concurrency-conversion-part-2</id><content type="html" xml:base="https://jacobvanorder.github.io/structured-concurrency-conversion-part-2/"><![CDATA[<p>Alright, in <a href="/structured-concurrency-conversion-part-1">part 1</a>, I laid out the code structure is for Apple’s <a href="https://developer.apple.com/documentation/uikit/asynchronously-loading-images-into-table-and-collection-views">Async Image Loading</a> sample code. In this session, we’ll update the settings of the project which is located here:</p>

<p><a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews">Updated Code Repo</a></p>

<h2 id="updating-the-project">Updating the Project</h2>

<p>First, we want to stop capturing <code class="language-plaintext highlighter-rouge">self</code> in closures, add <code class="language-plaintext highlighter-rouge">func urlProtocol(_ protocol: URLProtocol, didReceive response: URLResponse, cacheStoragePolicy policy: URLCache.StoragePolicy)</code>, modernize the Xcode project settings, and, most importantly, turn on Swift 6 and set <code class="language-plaintext highlighter-rouge">Strict Concurrency Checking</code> to <code class="language-plaintext highlighter-rouge">Complete</code>.</p>

<p><img src="/assets/images/2025-05-03-structured-concurrency-conversion-part-2/Xcode_Project_Settings.png" alt="Xcode Project Settings" /></p>

<p>We also want to bump the minimum deployment target to iOS 18.0.</p>

<h2 id="two-bugs">Two Bugs</h2>

<h3 id="weak-self"><code class="language-plaintext highlighter-rouge">[weak self]</code></h3>

<p>When using completion closures and classes, it’s important to make sure you’re not capturing a strong reference to <code class="language-plaintext highlighter-rouge">self</code> in the closure; if the closure is never called, that strong reference to <code class="language-plaintext highlighter-rouge">self</code> would never be released.</p>
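<p>For illustration, with hypothetical <code class="language-plaintext highlighter-rouge">loader</code> and <code class="language-plaintext highlighter-rouge">imageView</code> names, the difference looks like this:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Strong capture: the view controller cannot deallocate until the closure runs.
loader.load(url: url) { image in
    self.imageView.image = image
}

// Weak capture: the view controller can deallocate; the closure then no-ops.
loader.load(url: url) { [weak self] image in
    self?.imageView.image = image
}
</code></pre></div></div>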

<p>Apple does this in a <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/7af3fbdcc3c6970468b3c073173abd34aa20a28d">couple of places</a> and it’s important to fix those even though we’ll probably be replacing the closures with async/await variants.</p>

<h3 id="urlprotocol-did-receive-response">URLProtocol Did Receive Response</h3>

<p>For some reason, this does not crash when using Apple’s code, but when changing to an Actor later, it <em>does</em> crash.</p>

<p>The <a href="https://developer.apple.com/documentation/foundation/urlprotocol">documentation</a> is sparse and gives no indication of what might be required. There is this <a href="https://developer.apple.com/library/archive/samplecode/CustomHTTPProtocol/Listings/Read_Me_About_CustomHTTPProtocol_txt.html#//apple_ref/doc/uid/DTS40013653-Read_Me_About_CustomHTTPProtocol_txt-DontLinkElementID_23">ancient code sample</a>, but it says that the authentication calls are also required, which I’m not seeing. Stack Overflow <a href="https://stackoverflow.com/a/76231740">comes to the rescue</a>.</p>

<p>Well, let’s fix that <a href="https://github.com/jacobvanorder/StructuredConcurrencyAsynchronouslyLoadingImagesIntoTableAndCollectionViews/commit/0d1ff294ad1eafb0c7ac413aa21da878b8a42047#diff-046d5f20089704bd618489bfe91acefefb2cb2c114418b5e855fbfd9601ddc5bR38-R44">here</a>.</p>
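<p>The missing piece boils down to notifying the protocol’s client of the response before delivering data. Inside the <code class="language-plaintext highlighter-rouge">URLProtocol</code> subclass’s <code class="language-plaintext highlighter-rouge">startLoading()</code>, that looks something like this (simplified; see the linked commit for the actual change):</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Tell the client about the response before handing it any data.
client?.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
client?.urlProtocol(self, didLoad: data)
client?.urlProtocolDidFinishLoading(self)
</code></pre></div></div>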

<h2 id="ready-to-go">Ready to Go</h2>

<p>With those changes done, we are ready to make our modern changes so look for that in <a href="/structured-concurrency-conversion-part-3">part 3</a> and <a href="/structured-concurrency-conversion-part-4">part 4</a>.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Alright, in part 1, I laid out the code structure is for Apple’s Async Image Loading sample code. In this session, we’ll update the settings of the project which is located here: Updated Code Repo Updating the Project First thing is that we want not capture self in closures, add func urlProtocol(_ protocol: URLProtocol, didReceive response: URLResponse, cacheStoragePolicy policy: URLCache.StoragePolicy), modernize the Xcode project settings, and, most importantly, turn on Swift 6 and Strict Concurrency Checking to Complete. We also want to bump the minimum deployment target to iOS 18.0. Two Bugs [weak self] When using completion closures and classes, it important to make sure you’re not capturing a reference to self in the closure in case the closure is never called which would cause the strong reference to self to never be released. Apple does this in a could places and it’s important to fix those even though we’ll probably replacing the closures with async/await variants. URLProtocol Did Receive Response For some reason, when using Apple’s code this does not crash but when changing to an Actor later, it does crash. The documentation is sparse and gives no indication what might be required. There is this ancient code sample but it says that the authentication calls are also required but I’m not seeing that. but Stack Overflow comes to the rescue. Well, let’s fix that here. 
Ready to Go With those changes done, we are ready to make our modern changes so look for that in part 3 and part 4.]]></summary></entry><entry><title type="html">Structured Concurrency Conversion (Part 1)</title><link href="https://jacobvanorder.github.io/structured-concurrency-conversion-part-1/" rel="alternate" type="text/html" title="Structured Concurrency Conversion (Part 1)" /><published>2025-04-12T20:24:00+00:00</published><updated>2025-04-12T20:24:00+00:00</updated><id>https://jacobvanorder.github.io/structured-concurrency-conversion-part-1</id><content type="html" xml:base="https://jacobvanorder.github.io/structured-concurrency-conversion-part-1/"><![CDATA[<p>Introduced in <a href="https://github.com/swiftlang/swift-evolution/blob/main/proposals/0304-structured-concurrency.md">Swift 5.5</a> way back in September of 2021, what I’m going to call “Structured Concurrency” is a mixture of <a href="https://github.com/swiftlang/swift-evolution/blob/main/proposals/0296-async-await.md">async/await</a>, <a href="https://github.com/swiftlang/swift-evolution/blob/main/proposals/0304-structured-concurrency.md#task-api">Task</a>, and <a href="https://github.com/swiftlang/swift-evolution/blob/main/proposals/0306-actors.md">Actors</a>. In short, though, it’s the way to accomplish potentially long-running operations in a way that can be checked by the compiler in order to reduce (but not completely eliminate!) race conditions and corruption of data.</p>

<p>For me, these new technologies have been very difficult concepts to grasp. Difficulty grasping new concepts is nothing new, though. I remember struggling with Swift when Objective-C had been the only programming language I used on a daily basis. I remember struggling with SwiftUI after 10 years of using UIKit. The difference with those was that failure was easily visible, and the community was sort of failing and learning together in a relatively short amount of time. Additionally, there was no pressure to adopt either technology: Objective-C interop was there from the beginning, and SwiftUI adoption wasn’t really feasible until recently because so many APIs weren’t at par with UIKit. If anything, with the Swift 2 to 3 conversion being super painful, it actually benefitted you to sort of sit back and wait.</p>

<p>Structured Concurrency is not like either of those transitions because if you want to use Swift 6, odds are you’re going to have to learn these techniques; otherwise you’ll start getting warnings and errors in your code base. Whereas you can still write apps in Objective-C with UIKit, Apple is somewhat forcing us to adopt this model, and if you don’t, someone else on your team might.</p>

<p>So, what am I to do? My idea is to take a piece of Apple sample code and convert it over to something that uses async/await, Tasks, and Actors. I might not get it right, but I seldom do the first time, and that’s okay as long as I try. In this first post in a series, I’m going to walk through the code that Apple posted in order to give an overview of what’s happening before I convert it.</p>

<h2 id="asynchronously-loading-images">Asynchronously Loading Images</h2>

<p>The code from Apple is posted <a href="https://developer.apple.com/documentation/uikit/asynchronously-loading-images-into-table-and-collection-views">here</a> and is from March of 2020.</p>

<h3 id="normal-uikit-setup">Normal UIKit Setup</h3>

<p>There are <code class="language-plaintext highlighter-rouge">UICollectionViewController</code> and <code class="language-plaintext highlighter-rouge">UITableViewController</code> subclasses, each of which utilizes two bespoke mechanisms to fetch images asynchronously.</p>

<p>In each of these subclasses, they access the image within a diffable data source cell registration. Because it uses a diffable data source, it needs an object that is the basis of the snapshots. In this case, they call it <code class="language-plaintext highlighter-rouge">Item</code> and it looks like this:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">class</span> <span class="kt">Item</span><span class="p">:</span> <span class="kt">Hashable</span> <span class="p">{</span>
    
    <span class="k">var</span> <span class="nv">image</span><span class="p">:</span> <span class="kt">UIImage</span><span class="o">!</span>
    <span class="k">let</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="o">!</span>
    <span class="k">let</span> <span class="nv">identifier</span> <span class="o">=</span> <span class="kt">UUID</span><span class="p">()</span>
    
    <span class="kd">func</span> <span class="nf">hash</span><span class="p">(</span><span class="n">into</span> <span class="nv">hasher</span><span class="p">:</span> <span class="k">inout</span> <span class="kt">Hasher</span><span class="p">)</span> <span class="p">{</span>
        <span class="n">hasher</span><span class="o">.</span><span class="nf">combine</span><span class="p">(</span><span class="n">identifier</span><span class="p">)</span>
    <span class="p">}</span>
    <span class="kd">static</span> <span class="kd">func</span> <span class="o">==</span> <span class="p">(</span><span class="nv">lhs</span><span class="p">:</span> <span class="kt">Item</span><span class="p">,</span> <span class="nv">rhs</span><span class="p">:</span> <span class="kt">Item</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="kt">Bool</span> <span class="p">{</span>
        <span class="k">return</span> <span class="n">lhs</span><span class="o">.</span><span class="n">identifier</span> <span class="o">==</span> <span class="n">rhs</span><span class="o">.</span><span class="n">identifier</span>
    <span class="p">}</span>
    
    <span class="nf">init</span><span class="p">(</span><span class="nv">image</span><span class="p">:</span> <span class="kt">UIImage</span><span class="p">,</span> <span class="nv">url</span><span class="p">:</span> <span class="kt">URL</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">self</span><span class="o">.</span><span class="n">image</span> <span class="o">=</span> <span class="n">image</span>
        <span class="k">self</span><span class="o">.</span><span class="n">url</span> <span class="o">=</span> <span class="n">url</span>
    <span class="p">}</span>

<span class="p">}</span>
</code></pre></div></div>

<p>That’s right, it’s a class that has a <code class="language-plaintext highlighter-rouge">var image: UIImage!</code>. Within both the collection and table view controllers, they instantiate the <code class="language-plaintext highlighter-rouge">Item</code>s with a placeholder image, which is shown initially. Later, we will asynchronously fetch the correct image at the URL and replace that placeholder. We’ll be taking a look at the <code class="language-plaintext highlighter-rouge">UITableViewController</code> version, which does that when it makes the cell registration for the data source.</p>
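<p>As a quick aside before that: a hypothetical, safer shape for this model object (my naming, not Apple’s) would be a value type whose equality and hashing hang off the identifier, with the image made genuinely optional instead of implicitly unwrapped. The generic <code class="language-plaintext highlighter-rouge">ImageType</code> stands in for <code class="language-plaintext highlighter-rouge">UIImage</code> just to keep the sketch platform-neutral.</p>

```swift
import Foundation

// A hypothetical, safer take on the sample's `Item`; names here are mine.
// Equality and hashing hang off a stable identifier, so swapping in the
// fetched image later does not change an item's identity.
struct SafeItem<ImageType>: Hashable {
    let identifier = UUID()
    let url: URL
    var image: ImageType?

    func hash(into hasher: inout Hasher) {
        hasher.combine(identifier)
    }

    static func == (lhs: SafeItem, rhs: SafeItem) -> Bool {
        lhs.identifier == rhs.identifier
    }
}
```

<p>Because identity drives equality, updating the image leaves snapshot diffing stable. Anyway, back to the sample’s cell registration.</p>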

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">dataSource</span> <span class="o">=</span> <span class="kt">UITableViewDiffableDataSource</span><span class="o">&lt;</span><span class="kt">Section</span><span class="p">,</span> <span class="kt">Item</span><span class="o">&gt;</span><span class="p">(</span><span class="nv">tableView</span><span class="p">:</span> <span class="n">tableView</span><span class="p">)</span> <span class="p">{</span>
    <span class="p">(</span><span class="nv">tableView</span><span class="p">:</span> <span class="kt">UITableView</span><span class="p">,</span> <span class="nv">indexPath</span><span class="p">:</span> <span class="kt">IndexPath</span><span class="p">,</span> <span class="nv">item</span><span class="p">:</span> <span class="kt">Item</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="kt">UITableViewCell</span><span class="p">?</span> <span class="k">in</span>
    <span class="k">let</span> <span class="nv">cell</span> <span class="o">=</span> <span class="n">tableView</span><span class="o">.</span><span class="nf">dequeueReusableCell</span><span class="p">(</span><span class="nv">withIdentifier</span><span class="p">:</span> <span class="s">"cell"</span><span class="p">,</span> <span class="nv">for</span><span class="p">:</span> <span class="n">indexPath</span><span class="p">)</span>
    <span class="c1">/// - Tag: update</span>
    <span class="k">var</span> <span class="nv">content</span> <span class="o">=</span> <span class="n">cell</span><span class="o">.</span><span class="nf">defaultContentConfiguration</span><span class="p">()</span>
    <span class="n">content</span><span class="o">.</span><span class="n">image</span> <span class="o">=</span> <span class="n">item</span><span class="o">.</span><span class="n">image</span>
    <span class="kt">ImageCache</span><span class="o">.</span><span class="n">publicCache</span><span class="o">.</span><span class="nf">load</span><span class="p">(</span><span class="nv">url</span><span class="p">:</span> <span class="n">item</span><span class="o">.</span><span class="n">url</span> <span class="k">as</span> <span class="kt">NSURL</span><span class="p">,</span> <span class="nv">item</span><span class="p">:</span> <span class="n">item</span><span class="p">)</span> <span class="p">{</span> <span class="p">(</span><span class="n">fetchedItem</span><span class="p">,</span> <span class="n">image</span><span class="p">)</span> <span class="k">in</span>
        <span class="k">if</span> <span class="k">let</span> <span class="nv">img</span> <span class="o">=</span> <span class="n">image</span><span class="p">,</span> <span class="n">img</span> <span class="o">!=</span> <span class="n">fetchedItem</span><span class="o">.</span><span class="n">image</span> <span class="p">{</span>
            <span class="k">var</span> <span class="nv">updatedSnapshot</span> <span class="o">=</span> <span class="k">self</span><span class="o">.</span><span class="n">dataSource</span><span class="o">.</span><span class="nf">snapshot</span><span class="p">()</span>
            <span class="k">if</span> <span class="k">let</span> <span class="nv">datasourceIndex</span> <span class="o">=</span> <span class="n">updatedSnapshot</span><span class="o">.</span><span class="nf">indexOfItem</span><span class="p">(</span><span class="n">fetchedItem</span><span class="p">)</span> <span class="p">{</span>
                <span class="k">let</span> <span class="nv">item</span> <span class="o">=</span> <span class="k">self</span><span class="o">.</span><span class="n">imageObjects</span><span class="p">[</span><span class="n">datasourceIndex</span><span class="p">]</span>
                <span class="n">item</span><span class="o">.</span><span class="n">image</span> <span class="o">=</span> <span class="n">img</span>
                <span class="n">updatedSnapshot</span><span class="o">.</span><span class="nf">reloadItems</span><span class="p">([</span><span class="n">item</span><span class="p">])</span>
                <span class="k">self</span><span class="o">.</span><span class="n">dataSource</span><span class="o">.</span><span class="nf">apply</span><span class="p">(</span><span class="n">updatedSnapshot</span><span class="p">,</span> <span class="nv">animatingDifferences</span><span class="p">:</span> <span class="kc">true</span><span class="p">)</span>
            <span class="p">}</span>
        <span class="p">}</span>
    <span class="p">}</span>
    <span class="n">cell</span><span class="o">.</span><span class="n">contentConfiguration</span> <span class="o">=</span> <span class="n">content</span>
    <span class="k">return</span> <span class="n">cell</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Walking through this, the cell gets dequeued from the table view and we make a content configuration. It immediately sets the content configuration’s image property to the item’s image. This <em>could</em> be the placeholder but, later, it could <em>also</em> be the real image. At this point, even if it isn’t the placeholder image, we still go and fetch the image at the URL. We’ll talk about that block of code in just a minute. While that asynchronous work is being done, the cell’s content configuration is set and we return the cell.</p>

<p>In the closure, when the image comes back after an undetermined amount of time, we check to make sure the image is not <code class="language-plaintext highlighter-rouge">nil</code> and that the fetched image is <strong>not</strong> the image previously set. This seems inefficient since we make a network call (or hit the cache) no matter what but then again, we are also capturing <code class="language-plaintext highlighter-rouge">self</code> in the closure without making it <code class="language-plaintext highlighter-rouge">weak</code>, but pobody’s nerfect<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>. Moving on, we make a mutable snapshot, get the index of the object we want to alter, and get a reference to the item, which we are storing in an array as a property. (In my opinion, we shouldn’t have two sources of truth with the diffable data source <strong>and</strong> this array.) We set the item’s image, tell the snapshot to reload the item, and apply the snapshot. The data source then runs through this cell registration again and sets the cell configuration’s image to the updated image.</p>
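<p>For what it’s worth, the <code class="language-plaintext highlighter-rouge">[weak self]</code> fix is cheap. Here is a minimal, platform-neutral sketch of why it matters, with all names being illustrative stand-ins rather than the sample’s: with a weak capture, a completion that arrives after the controller is gone becomes a no-op instead of extending the controller’s lifetime.</p>

```swift
// `Fetcher` stands in for the image loader; it holds completions the way a
// pending network call would.
final class Fetcher {
    var pending: [(String) -> Void] = []

    func load(completion: @escaping (String) -> Void) {
        pending.append(completion) // fires later, like a network response
    }
}

// `Controller` stands in for the view controller doing the capturing.
final class Controller {
    static var liveCount = 0
    var imageName: String?

    init() { Controller.liveCount += 1 }
    deinit { Controller.liveCount -= 1 }

    func start(with fetcher: Fetcher) {
        fetcher.load { [weak self] name in
            // If the controller was deallocated before the image arrived,
            // this safely does nothing.
            self?.imageName = name
        }
    }
}
```

<p>With a strong capture, releasing the controller would not actually deallocate it until the pending completion fired.</p>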

<h3 id="image-cache">Image Cache</h3>

<p>The <code class="language-plaintext highlighter-rouge">ImageCache</code> is an object that holds onto an <code class="language-plaintext highlighter-rouge">NSCache</code> with the key being the <code class="language-plaintext highlighter-rouge">NSURL</code> and the value being <code class="language-plaintext highlighter-rouge">UIImage</code>. The <code class="language-plaintext highlighter-rouge">ImageCache</code> also has a Dictionary where the key is <code class="language-plaintext highlighter-rouge">NSURL</code> again but the value is an Array of closures that has the arguments of <code class="language-plaintext highlighter-rouge">(Item, UIImage?)</code> and returns <code class="language-plaintext highlighter-rouge">Void</code>. They look like this:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">private</span> <span class="k">let</span> <span class="nv">cachedImages</span> <span class="o">=</span> <span class="kt">NSCache</span><span class="o">&lt;</span><span class="kt">NSURL</span><span class="p">,</span> <span class="kt">UIImage</span><span class="o">&gt;</span><span class="p">()</span>
<span class="kd">private</span> <span class="k">var</span> <span class="nv">loadingResponses</span> <span class="o">=</span> <span class="p">[</span><span class="kt">NSURL</span><span class="p">:</span> <span class="p">[(</span><span class="kt">Item</span><span class="p">,</span> <span class="kt">UIImage</span><span class="p">?)</span> <span class="o">-&gt;</span> <span class="kt">Swift</span><span class="o">.</span><span class="kt">Void</span><span class="p">]]()</span>
</code></pre></div></div>

<p>There is a simple function for returning an optional image from the cache using the URL. I’m not entirely sure why it’s there or why it’s <code class="language-plaintext highlighter-rouge">public</code>, given that the only caller is the <code class="language-plaintext highlighter-rouge">ImageCache</code> itself.</p>

<p>The meat of the work is done in this big function:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">final</span> <span class="kd">func</span> <span class="nf">load</span><span class="p">(</span><span class="nv">url</span><span class="p">:</span> <span class="kt">NSURL</span><span class="p">,</span> <span class="nv">item</span><span class="p">:</span> <span class="kt">Item</span><span class="p">,</span> <span class="nv">completion</span><span class="p">:</span> <span class="kd">@escaping</span> <span class="p">(</span><span class="kt">Item</span><span class="p">,</span> <span class="kt">UIImage</span><span class="p">?)</span> <span class="o">-&gt;</span> <span class="kt">Swift</span><span class="o">.</span><span class="kt">Void</span><span class="p">)</span> <span class="p">{</span>
    <span class="c1">// Check for a cached image.</span>
    <span class="k">if</span> <span class="k">let</span> <span class="nv">cachedImage</span> <span class="o">=</span> <span class="nf">image</span><span class="p">(</span><span class="nv">url</span><span class="p">:</span> <span class="n">url</span><span class="p">)</span> <span class="p">{</span>
        <span class="kt">DispatchQueue</span><span class="o">.</span><span class="n">main</span><span class="o">.</span><span class="k">async</span> <span class="p">{</span>
            <span class="nf">completion</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="n">cachedImage</span><span class="p">)</span>
        <span class="p">}</span>
        <span class="k">return</span>
    <span class="p">}</span>
    <span class="c1">// In case there are more than one requestor for the image, we append their completion block.</span>
    <span class="k">if</span> <span class="n">loadingResponses</span><span class="p">[</span><span class="n">url</span><span class="p">]</span> <span class="o">!=</span> <span class="kc">nil</span> <span class="p">{</span>
        <span class="n">loadingResponses</span><span class="p">[</span><span class="n">url</span><span class="p">]?</span><span class="o">.</span><span class="nf">append</span><span class="p">(</span><span class="n">completion</span><span class="p">)</span>
        <span class="k">return</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
        <span class="n">loadingResponses</span><span class="p">[</span><span class="n">url</span><span class="p">]</span> <span class="o">=</span> <span class="p">[</span><span class="n">completion</span><span class="p">]</span>
    <span class="p">}</span>
    <span class="c1">// Go fetch the image.</span>
    <span class="kt">ImageURLProtocol</span><span class="o">.</span><span class="nf">urlSession</span><span class="p">()</span><span class="o">.</span><span class="nf">dataTask</span><span class="p">(</span><span class="nv">with</span><span class="p">:</span> <span class="n">url</span> <span class="k">as</span> <span class="kt">URL</span><span class="p">)</span> <span class="p">{</span> <span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">response</span><span class="p">,</span> <span class="n">error</span><span class="p">)</span> <span class="k">in</span>
        <span class="c1">// Check for the error, then data and try to create the image.</span>
        <span class="k">guard</span> <span class="k">let</span> <span class="nv">responseData</span> <span class="o">=</span> <span class="n">data</span><span class="p">,</span> <span class="k">let</span> <span class="nv">image</span> <span class="o">=</span> <span class="kt">UIImage</span><span class="p">(</span><span class="nv">data</span><span class="p">:</span> <span class="n">responseData</span><span class="p">),</span>
            <span class="k">let</span> <span class="nv">blocks</span> <span class="o">=</span> <span class="k">self</span><span class="o">.</span><span class="n">loadingResponses</span><span class="p">[</span><span class="n">url</span><span class="p">],</span> <span class="n">error</span> <span class="o">==</span> <span class="kc">nil</span> <span class="k">else</span> <span class="p">{</span>
            <span class="kt">DispatchQueue</span><span class="o">.</span><span class="n">main</span><span class="o">.</span><span class="k">async</span> <span class="p">{</span>
                <span class="nf">completion</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="kc">nil</span><span class="p">)</span>
            <span class="p">}</span>
            <span class="k">return</span>
        <span class="p">}</span>
        <span class="c1">// Cache the image.</span>
        <span class="k">self</span><span class="o">.</span><span class="n">cachedImages</span><span class="o">.</span><span class="nf">setObject</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="nv">forKey</span><span class="p">:</span> <span class="n">url</span><span class="p">,</span> <span class="nv">cost</span><span class="p">:</span> <span class="n">responseData</span><span class="o">.</span><span class="n">count</span><span class="p">)</span>
        <span class="c1">// Iterate over each requestor for the image and pass it back.</span>
        <span class="k">for</span> <span class="n">block</span> <span class="k">in</span> <span class="n">blocks</span> <span class="p">{</span>
            <span class="kt">DispatchQueue</span><span class="o">.</span><span class="n">main</span><span class="o">.</span><span class="k">async</span> <span class="p">{</span>
                <span class="nf">block</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="n">image</span><span class="p">)</span>
            <span class="p">}</span>
            <span class="k">return</span>
        <span class="p">}</span>
    <span class="p">}</span><span class="o">.</span><span class="nf">resume</span><span class="p">()</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Whew! The comments do a good job of explaining what’s going on: we check for a cached image and call the completion if one exists. Next, we either append the closure to the array for that URL in the dictionary and return, or create an entry in the dictionary for the URL with an array containing this first completion.</p>
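<p>That append-or-create dance is a general request-coalescing pattern, and it may be easier to see with the networking stripped away. Here is a hedged, synchronous sketch; the type and method names are mine, not the sample’s:</p>

```swift
// The first request for a key tells the caller to start the real work; later
// requests just wait, and everyone is called back when the value lands.
final class RequestCoalescer<Key: Hashable, Value> {
    private var waiting: [Key: [(Value) -> Void]] = [:]

    /// Returns `true` when the caller should kick off the underlying fetch.
    func enqueue(_ key: Key, completion: @escaping (Value) -> Void) -> Bool {
        if waiting[key] != nil {
            waiting[key]?.append(completion)
            return false
        }
        waiting[key] = [completion]
        return true
    }

    /// Delivers the value to every waiter and clears the entry.
    func complete(_ key: Key, with value: Value) {
        waiting.removeValue(forKey: key)?.forEach { $0(value) }
    }
}
```

<p>The sample’s <code class="language-plaintext highlighter-rouge">load(url:item:completion:)</code> is this shape with the cache check in front and the <code class="language-plaintext highlighter-rouge">URLSession</code> fetch behind it. Note that the version shown above never removes the entry from <code class="language-plaintext highlighter-rouge">loadingResponses</code> after delivery, which <code class="language-plaintext highlighter-rouge">complete(_:with:)</code> does here.</p>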

<p>Moving on, we use the <code class="language-plaintext highlighter-rouge">ImageURLProtocol.urlSession()</code> (more on this later) to create a data task with a completion that takes optional data, response, and error arguments, and we immediately resume that data task. When the data task finishes, its closure gets executed. Again, no <code class="language-plaintext highlighter-rouge">[weak self]</code> here, but we first check that we have data, that a <code class="language-plaintext highlighter-rouge">UIImage</code> can be created from that data, that there is an array of closures to call, and that the error is <code class="language-plaintext highlighter-rouge">nil</code>; otherwise we call the completion with no image (on the main thread). With those pieces, we store the image in the cache and then go through each completion closure, executing it with the image (on the main thread).</p>

<h3 id="imageurlprotocol">ImageURLProtocol?</h3>

<p>Did you know you can sort of override <code class="language-plaintext highlighter-rouge">URLSession</code> to have the same API but act differently? You need to create a subclass of <a href="https://developer.apple.com/documentation/foundation/urlprotocol"><code class="language-plaintext highlighter-rouge">URLProtocol</code></a>. The work is largely done in this overridden function:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">final</span> <span class="k">override</span> <span class="kd">func</span> <span class="nf">startLoading</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">guard</span> <span class="k">let</span> <span class="nv">reqURL</span> <span class="o">=</span> <span class="n">request</span><span class="o">.</span><span class="n">url</span><span class="p">,</span> <span class="k">let</span> <span class="nv">urlClient</span> <span class="o">=</span> <span class="n">client</span> <span class="k">else</span> <span class="p">{</span>
        <span class="k">return</span>
    <span class="p">}</span>
    
    <span class="n">block</span> <span class="o">=</span> <span class="kt">DispatchWorkItem</span><span class="p">(</span><span class="nv">block</span><span class="p">:</span> <span class="p">{</span>
        <span class="k">if</span> <span class="k">self</span><span class="o">.</span><span class="n">cancelledOrComplete</span> <span class="o">==</span> <span class="kc">false</span> <span class="p">{</span>
            <span class="k">let</span> <span class="nv">fileURL</span> <span class="o">=</span> <span class="kt">URL</span><span class="p">(</span><span class="nv">fileURLWithPath</span><span class="p">:</span> <span class="n">reqURL</span><span class="o">.</span><span class="n">path</span><span class="p">)</span>
            <span class="k">if</span> <span class="k">let</span> <span class="nv">data</span> <span class="o">=</span> <span class="k">try</span><span class="p">?</span> <span class="kt">Data</span><span class="p">(</span><span class="nv">contentsOf</span><span class="p">:</span> <span class="n">fileURL</span><span class="p">)</span> <span class="p">{</span>
                <span class="n">urlClient</span><span class="o">.</span><span class="nf">urlProtocol</span><span class="p">(</span><span class="k">self</span><span class="p">,</span> <span class="nv">didLoad</span><span class="p">:</span> <span class="n">data</span><span class="p">)</span>
                <span class="n">urlClient</span><span class="o">.</span><span class="nf">urlProtocolDidFinishLoading</span><span class="p">(</span><span class="k">self</span><span class="p">)</span>
            <span class="p">}</span>
        <span class="p">}</span>
        <span class="k">self</span><span class="o">.</span><span class="n">cancelledOrComplete</span> <span class="o">=</span> <span class="kc">true</span>
    <span class="p">})</span>
    
    <span class="kt">ImageURLProtocol</span><span class="o">.</span><span class="n">queue</span><span class="o">.</span><span class="nf">asyncAfter</span><span class="p">(</span><span class="nv">deadline</span><span class="p">:</span> <span class="kt">DispatchTime</span><span class="p">(</span><span class="nv">uptimeNanoseconds</span><span class="p">:</span> <span class="mi">500</span> <span class="o">*</span> <span class="kt">NSEC_PER_MSEC</span><span class="p">),</span> <span class="nv">execute</span><span class="p">:</span> <span class="n">block</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<p>What is happening here is that we make sure we have a URL and a client, then set up a closure that will get the data from the URL and call the client’s functions signaling that the work is done<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>. That closure is sent to a <code class="language-plaintext highlighter-rouge">DispatchSerialQueue</code> to be executed 0.5 seconds later. There is also a property (<code class="language-plaintext highlighter-rouge">cancelledOrComplete</code>) on the class that signifies that it is done.</p>

<p>This <code class="language-plaintext highlighter-rouge">cancelledOrComplete</code> is used in case the data task is cancelled.</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">final</span> <span class="k">override</span> <span class="kd">func</span> <span class="nf">stopLoading</span><span class="p">()</span> <span class="p">{</span>
    <span class="kt">ImageURLProtocol</span><span class="o">.</span><span class="n">queue</span><span class="o">.</span><span class="k">async</span> <span class="p">{</span>
        <span class="k">if</span> <span class="k">self</span><span class="o">.</span><span class="n">cancelledOrComplete</span> <span class="o">==</span> <span class="kc">false</span><span class="p">,</span> <span class="k">let</span> <span class="nv">cancelBlock</span> <span class="o">=</span> <span class="k">self</span><span class="o">.</span><span class="n">block</span> <span class="p">{</span>
            <span class="n">cancelBlock</span><span class="o">.</span><span class="nf">cancel</span><span class="p">()</span>
            <span class="k">self</span><span class="o">.</span><span class="n">cancelledOrComplete</span> <span class="o">=</span> <span class="kc">true</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
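<p>For context, here is a hedged sketch of how a custom <code class="language-plaintext highlighter-rouge">URLProtocol</code> subclass typically gets wired into a session, which is presumably what <code class="language-plaintext highlighter-rouge">ImageURLProtocol.urlSession()</code> does. The class name and canned bytes are mine, and this version includes the <code class="language-plaintext highlighter-rouge">didReceive</code> callback that footnote 2 notes the sample skipping.</p>

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// An illustrative URLProtocol subclass that serves canned bytes instead of
// touching the network; the name and data are placeholders, not the sample's.
final class StubbedImageProtocol: URLProtocol {
    override class func canInit(with request: URLRequest) -> Bool { true }

    override class func canonicalRequest(for request: URLRequest) -> URLRequest { request }

    override func startLoading() {
        guard let url = request.url, let urlClient = client else { return }
        let data = Data("stub-image-bytes".utf8)
        // The response callback that the sample omits (see footnote 2).
        let response = URLResponse(url: url, mimeType: nil,
                                   expectedContentLength: data.count,
                                   textEncodingName: nil)
        urlClient.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
        urlClient.urlProtocol(self, didLoad: data)
        urlClient.urlProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

// Registering the class on a configuration routes matching requests through it.
func makeStubbedSession() -> URLSession {
    let configuration = URLSessionConfiguration.ephemeral
    configuration.protocolClasses = [StubbedImageProtocol.self]
    return URLSession(configuration: configuration)
}
```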

<p>This class also uses the deprecated <code class="language-plaintext highlighter-rouge">OS_dispatch_queue_serial</code> type, which has since been renamed <code class="language-plaintext highlighter-rouge">DispatchSerialQueue</code>.</p>
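<p>If you were writing this piece today, the modern spelling is just a labeled serial <code class="language-plaintext highlighter-rouge">DispatchQueue</code>, and <code class="language-plaintext highlighter-rouge">DispatchWorkItem</code> already tracks the cancelled state that the sample mirrors by hand. A quick sketch, with the label being an illustrative placeholder:</p>

```swift
import Dispatch

// The modern spelling of a serial queue: DispatchQueue with a label.
let queue = DispatchQueue(label: "com.example.image-url-protocol")

// DispatchWorkItem carries the cancel/complete state that the sample tracks
// manually with `cancelledOrComplete`.
let work = DispatchWorkItem {
    // load the data from disk here
}
work.cancel()
```

<p>After <code class="language-plaintext highlighter-rouge">cancel()</code>, <code class="language-plaintext highlighter-rouge">isCancelled</code> is <code class="language-plaintext highlighter-rouge">true</code> and a queue will skip the body if it has not yet started.</p>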

<h2 id="in-conclusion">In Conclusion</h2>

<p>Okay, we have three pieces that need to be addressed.</p>

<ul>
  <li>We have our table/collection view call sites that have an asynchronous closure to update the model driving the diffable data source.</li>
  <li>We have the object that asynchronously returns the cached image or fetches the image data and manages all of the requests to do so.</li>
  <li>We have an override for <code class="language-plaintext highlighter-rouge">URLSession</code> that gets the image off disk and returns it after a half second.</li>
</ul>

<p>In the following parts of this conversion, I’ll be working from the bottom of this list up to the table/collection view. As a bonus, I’ll be using this modern mechanism to drive a SwiftUI equivalent view. Up next is <a href="/structured-concurrency-conversion-part-2">part 2</a>!</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>Normally, I would not dunk on someone else’s code but Apple should really know better here. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>Apple missed a protocol function call of <code class="language-plaintext highlighter-rouge">func urlProtocol(_ protocol: URLProtocol, didReceive response: URLResponse, cacheStoragePolicy policy: URLCache.StoragePolicy)</code> before calling <code class="language-plaintext highlighter-rouge">func urlProtocol(_ protocol: URLProtocol, didLoad data: Data)</code>. With that missing, the app would crash when I later translate over to Structured Concurrency. Fun! <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Introduced in Swift 5.5 way back in September of 2021, what I’m going to call “Structured Concurrency” is a mixture of async/await, Task, and Actors. In short, though, it’s the way to accomplish potentially long-running operations in a way that can be checked by the compiler in order to reduce (but not completely eliminate!) race conditions and corruption of data. For me, these new technologies have been very difficult concepts to grasp. New concepts to grasp being difficult is nothing new. I remember struggling with Swift after Objective-C being the only programming language I used on a daily basis ever. I remember struggling with SwiftUI after 10 years of using UIKit. The difference with these was that if you failed, it was easily visible but also the community was sort of failing and learning together in a relatively short amount of time. Additionally, there was no pressure to adopt either technology as Objective-C interop was there from the beginning and SwiftUI adoption wasn’t really feasible until recently just because so many APIs weren’t at par with UIKit. If anything, with Swift 2 to 3 conversion being super painful, it actually benefitted you to sort of sit back and wait. Structured Concurrency is not similar to either Swift or SwiftUI because if you want to use Swift 6, odds are you’re going to want to learn the techniques to make it correct otherwise you’ll start getting warnings and errors in your code base. Whereas you can still write apps in Objective-C in UIKit, Apple is somewhat forcing us to adopt this model and if you don’t, someone else on your team might. So, what am I to do? My idea is to take a piece of Apple sample code and convert it over to something that uses async/await, Tasks, and Actors. I might not get it right but I seldom do the first time and that’s okay as long as I try. 
In this first post in a series, I’m going to talk about the code that is posted by Apple in order to give an overview of what’s happening before I convert it. Asynchronously Loading Images The code from Apple is posted here and is from March of 2020. Normal UIKit Setup There is a UICollectionViewController and UITableViewController subclass each which utilize two bespoke mechanisms to fetch images asynchronously. In each of these subclasses, they access the image within a diffable data source cell registration. Because it uses a diffable data source, it needs an object that is the basis of the snapshots. In this case, they call it Item and it looks like this: class Item: Hashable { var image: UIImage! let url: URL! let identifier = UUID() func hash(into hasher: inout Hasher) { hasher.combine(identifier) } static func == (lhs: Item, rhs: Item) -&gt; Bool { return lhs.identifier == rhs.identifier } init(image: UIImage, url: URL) { self.image = image self.url = url } } That’s right, it’s a class that has a var image: UIImage!. Within both the collection and table view controllers, they instantiate the Items with a placeholder image which is initially shown. Later, we will asynchronously fetch the correct image at the URL and replace that image. We’ll be taking a look at the UITableViewController version that does that when it makes the cell registration for the data source. dataSource = UITableViewDiffableDataSource&lt;Section, Item&gt;(tableView: tableView) { (tableView: UITableView, indexPath: IndexPath, item: Item) -&gt; UITableViewCell? 
in let cell = tableView.dequeueReusableCell(withIdentifier: "cell", for: indexPath) /// - Tag: update var content = cell.defaultContentConfiguration() content.image = item.image ImageCache.publicCache.load(url: item.url as NSURL, item: item) { (fetchedItem, image) in if let img = image, img != fetchedItem.image { var updatedSnapshot = self.dataSource.snapshot() if let datasourceIndex = updatedSnapshot.indexOfItem(fetchedItem) { let item = self.imageObjects[datasourceIndex] item.image = img updatedSnapshot.reloadItems([item]) self.dataSource.apply(updatedSnapshot, animatingDifferences: true) } } } cell.contentConfiguration = content return cell } Walking through this, the cell gets dequeued from the table view and we make a content configuration. It immediately sets the content configuration’s image property to the item’s image. This could be the placeholder but, later, it could also be the real image. At this point, even if it isn’t the placeholder image, we still go and fetch the image at the URL. We’ll talk about that block of code in just a minute. While that asynchronous work is being done, the cell’s content configuration is set and we return the cell. In the closure, when the image comes back after an undetermined amount of time, we check to make sure the image is not nil and that the fetched image is not the image previously set. This seems inefficient since we are making a network call (or getting the cached data) no matter what but then again, we are also capturing self in the closure and not making it weak but pobody’s nerfect1. Moving on, we make a mutable snapshot, get the index of the object we want to alter and get a reference to the item which we are storing in an array as a property. (In my opinion, I don’t think we should have two sources of truth with the diffable data source and this array.) We set the item’s image and then tell the snapshot to reload the item and apply the snapshot. 
It will then run through this cell registration again and set the cell configuration’s image to the updated image. Image Cache The ImageCache is an object that holds onto an NSCache with the key being the NSURL and the value being UIImage. The ImageCache also has a Dictionary where the key is NSURL again but the value is an Array of closures that has the arguments of (Item, UIImage?) and returns Void. They look like this: private let cachedImages = NSCache&lt;NSURL, UIImage&gt;() private var loadingResponses = [NSURL: [(Item, UIImage?) -&gt; Swift.Void]]() There is a simple function for returning an optional image from the cache using the url. Not entirely sure why it’s there and why it’s public given the only caller is the ImageCache itself. The meat of the work is done in this big function: final func load(url: NSURL, item: Item, completion: @escaping (Item, UIImage?) -&gt; Swift.Void) { // Check for a cached image. if let cachedImage = image(url: url) { DispatchQueue.main.async { completion(item, cachedImage) } return } // In case there are more than one requestor for the image, we append their completion block. if loadingResponses[url] != nil { loadingResponses[url]?.append(completion) return } else { loadingResponses[url] = [completion] } // Go fetch the image. ImageURLProtocol.urlSession().dataTask(with: url as URL) { (data, response, error) in // Check for the error, then data and try to create the image. guard let responseData = data, let image = UIImage(data: responseData), let blocks = self.loadingResponses[url], error == nil else { DispatchQueue.main.async { completion(item, nil) } return } // Cache the image. self.cachedImages.setObject(image, forKey: url, cost: responseData.count) // Iterate over each requestor for the image and pass it back. for block in blocks { DispatchQueue.main.async { block(item, image) } return } }.resume() } Whew! 
The comments do a good job of explaining what’s going on there but we check for the cached image and call the completion if it exists. Next up, they add the closure to the array for that URL in the dictionary and return OR create an entry in the dictionary for the URL and an array with the first completion. Moving on, we use the ImageURLProtocol.urlSession() (more on this later) data task with a completion that has the arguments of optional data, response, and error. We immediately resume that data task. When the data task is complete, the data task’s closure gets executed. Again, no [weak self] here but we first check the data, that a UIImage can be created with the data, that there is an array of closures to call, and that the error is nil; otherwise we call the completion with no image (on the main thread). With those pieces, we then store the image in the cache and also go through each completion closure and execute it with the image (on the main thread). ImageURLProtocol? Did you know you can sort of override URLSession to have the same API but act differently? You need to create a subclass of URLProtocol. This is largely done in this protocol function: final override func startLoading() { guard let reqURL = request.url, let urlClient = client else { return } block = DispatchWorkItem(block: { if self.cancelledOrComplete == false { let fileURL = URL(fileURLWithPath: reqURL.path) if let data = try? Data(contentsOf: fileURL) { urlClient.urlProtocol(self, didLoad: data) urlClient.urlProtocolDidFinishLoading(self) } } self.cancelledOrComplete = true }) ImageURLProtocol.queue.asyncAfter(deadline: DispatchTime(uptimeNanoseconds: 500 * NSEC_PER_MSEC), execute: block) } What is happening here is that we make sure we have a url and client but then set up a closure that will get the data from the URL and call the protocol’s functions signaling that the work is done2. That closure is sent to a DispatchSerialQueue to be executed in 0.5 seconds. 
There is also a property (cancelledOrComplete) on the class that signifies that it is done. This cancelledOrComplete is used in case the data task is cancelled. final override func stopLoading() { ImageURLProtocol.queue.async { if self.cancelledOrComplete == false, let cancelBlock = self.block { cancelBlock.cancel() self.cancelledOrComplete = true } } } This class also uses OS_dispatch_queue_serial, a deprecated name that has since been renamed DispatchSerialQueue. In Conclusion Okay, we have three pieces that need to be addressed. We have our table/collection view call sites that have an asynchronous closure to update the model driving the diffable data source. We have the object that asynchronously returns the cached image or fetches the image data and manages all of the requests to do so. We have an override for URLSession that gets the image off disk and returns it after a half second. In the following parts of this conversion, I’ll be working from the bottom of this list up to the table/collection view. As a bonus, I’ll be using this modern mechanism to drive a SwiftUI equivalent view. Up next is part 2! Normally, I would not dunk on someone else’s code but Apple should really know better here. &#8617; Apple missed a protocol function call of func urlProtocol(_ protocol: URLProtocol, didReceive response: URLResponse, cacheStoragePolicy policy: URLCache.StoragePolicy) before calling func urlProtocol(_ protocol: URLProtocol, didLoad data: Data). With that missing, the app would crash when I later translate over to Structured Concurrency. Fun! 
&#8617;]]></summary></entry><entry><title type="html">SwiftUI Bindings: Digging a Little Deeper</title><link href="https://jacobvanorder.github.io/swiftui-bindings-digging-a-little-deeper/" rel="alternate" type="text/html" title="SwiftUI Bindings: Digging a Little Deeper" /><published>2025-03-23T18:26:00+00:00</published><updated>2025-03-23T18:26:00+00:00</updated><id>https://jacobvanorder.github.io/swiftui-bindings-digging-a-little-deeper</id><content type="html" xml:base="https://jacobvanorder.github.io/swiftui-bindings-digging-a-little-deeper/"><![CDATA[<h2 id="bindings">@Bindings</h2>

<p>In his <a href="https://chris.eidhof.nl/post/binding-with-get-set/">post about Bindings</a>, the delightful Chris Eidhof gives an overview about how synthesizing SwiftUI Bindings might not turn out how you expect. The tl;dr is using <code class="language-plaintext highlighter-rouge">Binding(get:set:)</code> might be convenient, but can introduce performance bottlenecks, especially in complex views or when creating member bindings. Chris recommends against using this in production code.</p>

<p>As someone who <strong>has</strong> used a <code class="language-plaintext highlighter-rouge">Binding(get:set:)</code> in production, I wanted to investigate a little further as well as give a little backstory as to <em>why</em> you’d want to use <code class="language-plaintext highlighter-rouge">Binding(get:set:)</code> in the first place.</p>

<h3 id="swiftui-alert">SwiftUI Alert</h3>

<p>I’m going to use SwiftUI Alert as an example but there are other instances in SwiftUI where the <a href="https://developer.apple.com/documentation/swiftui/view/alert(_:ispresented:presenting:actions:message:)-8584l">view modifier</a> that dictates whether something is to be shown takes a <code class="language-plaintext highlighter-rouge">Binding&lt;Bool&gt;</code>. This Binding’s <code class="language-plaintext highlighter-rouge">get</code> determines whether the alert is shown, and its <code class="language-plaintext highlighter-rouge">set</code> gets called when the alert is dismissed.</p>

<p>The result is that you might have a <code class="language-plaintext highlighter-rouge">@State var shouldShowAlert: Bool = false</code> declared at the top of your view and you’re ready to go! Very few apps are simply there to show an alert. In fact, the decision to show an alert usually depends on a condition or state change with actual real data, e.g., a response object came back as nil. This means that you’ll probably have logic in your view that controls flipping your <code class="language-plaintext highlighter-rouge">shouldShowAlert</code> bool based on certain conditions. If you’ve been around, you might have read “logic in your view” and replaced it in your mind with “logic in your view that is not easily unit tested”.</p>

<p>So, you might write something like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Binding(get: { return yourObject == nil }, 
	    set: { if $0 { resetState() } } )
</code></pre></div></div>
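
<p>To make that concrete, here’s a minimal sketch of the synthesized binding wired into the alert modifier. The <code class="language-plaintext highlighter-rouge">yourObject</code> optional and <code class="language-plaintext highlighter-rouge">resetState()</code> helper are hypothetical stand-ins rather than code from a real project, and the <code class="language-plaintext highlighter-rouge">set</code> here reacts to dismissal, since SwiftUI writes <code class="language-plaintext highlighter-rouge">false</code> to the binding when the alert goes away:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct SynthesizedAlertSketch: View {
    @State private var yourObject: String? = "loaded"

    var body: some View {
        Button("Clear object") { yourObject = nil }
            .alert("Something went wrong",
                   isPresented: Binding(get: { yourObject == nil },
                                        set: { isPresented in
                                            // SwiftUI sets this to false on dismissal.
                                            if isPresented == false { resetState() }
                                        })) {
                Button("OK", role: .cancel) { }
            }
    }

    private func resetState() {
        yourObject = "loaded"
    }
}
</code></pre></div></div>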

<h3 id="when-in-doubt-measure">When In Doubt, Measure</h3>

<p>Fortunately, we have a tool at our disposal that might give some clarity as to whether it’s as bad as we suspect: Good Ol’ Instruments. As I learned at the delightful <a href="https://developer.apple.com/events/view/DA5NDP29C3/dashboard">Bring SwiftUI to Your App</a> workshop, Instruments has a <a href="https://www.hackingwithswift.com/quick-start/swiftui/how-to-use-instruments-to-profile-your-swiftui-code-and-identify-slow-layouts">SwiftUI template</a> to measure how many layouts are occurring and how long they take.</p>

<p>You can find the code I used to measure here: <a href="https://github.com/jacobvanorder/BooBindings">https://github.com/jacobvanorder/BooBindings</a>. My procedure is to put all examples in a <code class="language-plaintext highlighter-rouge">TabView</code> and then select the tab, present the alert, dismiss the alert, and wait five seconds before I try the next option. Also, I converted all of the timings to microseconds.</p>

<h4 id="option-number-one-a-state-property">Option Number One: A State Property</h4>

<p>In <a href="https://github.com/jacobvanorder/BooBindings/blob/main/BooBindings/ViewLogicBindingView.swift">this option</a>, you manually control the boolean for showing the alert. The button gets tapped and we set the model object and flip the bool. The alert is driven by a <code class="language-plaintext highlighter-rouge">@State</code> bool variable that you’ll have to remember to flip in every scenario that can show or hide the alert, including any you add in the future.</p>
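
<p>As a rough sketch of what that looks like (with illustrative names rather than the actual code from the linked file):</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct ViewLogicSketch: View {
    @State private var yourObject: String? = "loaded"
    @State private var shouldShowAlert = false

    var body: some View {
        Button("Clear object") {
            yourObject = nil
            shouldShowAlert = true // must be flipped in every path that should alert
        }
        .alert("Something went wrong", isPresented: $shouldShowAlert) {
            Button("OK", role: .cancel) { yourObject = "loaded" }
        }
    }
}
</code></pre></div></div>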

<p><img src="/assets/images/2025-03-23-swiftui-bindings-digging-a-little-deeper/ViewLogic.png" alt="An Instruments Result for the View Logic Option" /></p>

<p>We have 50 total layouts with a duration of 897.88 microseconds. Two of the layouts are for the <code class="language-plaintext highlighter-rouge">ViewLogicBindingView</code> itself.</p>

<h4 id="option-number-two-synthesized">Option Number Two: Synthesized</h4>

<p><a href="https://github.com/jacobvanorder/BooBindings/blob/main/BooBindings/SynthesizedBindingView.swift">Here</a> we use the <code class="language-plaintext highlighter-rouge">Binding(get:set:)</code> option. No extra <code class="language-plaintext highlighter-rouge">@State</code> property and no logic to maintain.</p>

<p><img src="/assets/images/2025-03-23-swiftui-bindings-digging-a-little-deeper/Synthesized.png" alt="An Instruments Result for the Synthesized Option" /></p>

<p>We have 71 total layouts with a duration of 1,250 microseconds. Three of the layouts are for the <code class="language-plaintext highlighter-rouge">SynthesizedBindingView</code> itself.</p>

<h4 id="option-number-three-view-model-driven-option">Option Number Three: View Model Driven Option</h4>

<p>At the “Bring SwiftUI to Your App” workshop, they also talked about how they preferred using <code class="language-plaintext highlighter-rouge">@Observable</code> classes when the logic within a view gets unwieldy or difficult to manage. In this case, I create a <a href="https://github.com/jacobvanorder/BooBindings/blob/main/BooBindings/ViewModelDrivingView.swift#L34-L41">view model</a> that has both the model object and a <code class="language-plaintext highlighter-rouge">var</code> boolean. When the object gets changed, so does the boolean, via a <code class="language-plaintext highlighter-rouge">willSet</code> on the object property. This class is then used as a <code class="language-plaintext highlighter-rouge">@State</code> var on the view itself and will trigger a view update when its variables change. The plus side to this is that you can unit test this class independently and fairly easily.</p>
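
<p>Roughly, such a view model and its view might look like this (a sketch with illustrative names; see the linked file for the real thing):</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Observable
final class AlertViewModel {
    var yourObject: String? = "loaded" {
        willSet { shouldShowAlert = (newValue == nil) }
    }
    var shouldShowAlert = false
}

struct ViewModelSketch: View {
    @State private var viewModel = AlertViewModel()

    var body: some View {
        Button("Clear object") { viewModel.yourObject = nil }
            .alert("Something went wrong", isPresented: $viewModel.shouldShowAlert) {
                Button("OK", role: .cancel) { viewModel.yourObject = "loaded" }
            }
    }
}
</code></pre></div></div>

<p>A unit test can now set <code class="language-plaintext highlighter-rouge">yourObject</code> and assert on <code class="language-plaintext highlighter-rouge">shouldShowAlert</code> without any view involved.</p>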

<p><img src="/assets/images/2025-03-23-swiftui-bindings-digging-a-little-deeper/ViewModel.png" alt="An Instruments Result for the View Model Option" /></p>

<p>We have 56 total layouts with a duration of 2,620 microseconds. Three of the layouts are for the <code class="language-plaintext highlighter-rouge">ViewModelDrivingView</code> itself. That is considerably slower, though.</p>

<h4 id="option-number-four-side-effect-on-the-view">Option Number Four: Side Effect on the View</h4>

<p>What if we got rid of the view model but had <a href="https://github.com/jacobvanorder/BooBindings/blob/main/BooBindings/SideEffectView.swift#L11-L16">similar logic</a> on the view, where you have both the model object and a <code class="language-plaintext highlighter-rouge">var</code> boolean? Again, when the object gets changed, so does the boolean in a <code class="language-plaintext highlighter-rouge">willSet</code>. Can’t easily unit test but thems the breaks.</p>
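
<p>Sketched out (again with illustrative names rather than the linked code), the observer now lives directly on the view’s own state:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>struct SideEffectSketch: View {
    @State private var shouldShowAlert = false
    @State private var yourObject: String? = "loaded" {
        willSet { shouldShowAlert = (newValue == nil) }
    }

    var body: some View {
        Button("Clear object") { yourObject = nil }
            .alert("Something went wrong", isPresented: $shouldShowAlert) {
                Button("OK", role: .cancel) { yourObject = "loaded" }
            }
    }
}
</code></pre></div></div>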

<p><img src="/assets/images/2025-03-23-swiftui-bindings-digging-a-little-deeper/SideEffect.png" alt="An Instruments Result for the Side Effect Option" /></p>

<p>We have 56 total layouts with a duration of 944.21 microseconds. Three of the layouts are for the <code class="language-plaintext highlighter-rouge">SideEffectView</code> itself.</p>

<h2 id="in-conclusion">In Conclusion</h2>

<p>From a purely numbers aspect, the simple solution is the winner and the view model class is the loser but there are other factors to consider. This was the easiest example I could cobble together on a Sunday. Real world apps have complex scenarios that should be unit tested and are maintained by teams of people with varying skill levels.</p>

<p>The real answer to whether you should use <code class="language-plaintext highlighter-rouge">Binding(get:set:)</code> is to consider the trade-offs of doing so. Run it through Instruments and then consider whether the logic you’re introducing is easily testable and maintainable.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[@Bindings In his post about Bindings, the delightful Chris Eidhof gives an overview about how synthesizing SwiftUI Bindings might not turn out how you expect. The tl;dr is using Binding(get:set:) might be convenient, but can introduce performance bottlenecks, especially in complex views or when creating member bindings. Chris recommends against using this in production code. As someone who has used a Binding(get:set:) in production, I wanted to investigate a little further as well as give a little backstory as to why you’d want to use Binding(get:set:) in the first place. SwiftUI Alert I’m going to use SwiftUI Alert as an example but there are other instances in SwiftUI where the view modifier that dictates whether something is to be shown takes a Binding&lt;Bool&gt;. This Binding’s get determines whether the alert is shown, and its set gets called when the alert is dismissed. The result is that you might have a @State var shouldShowAlert: Bool = false declared at the top of your view and you’re ready to go! Very few apps are simply there to show an alert. In fact, the decision to show an alert usually depends on a condition or state change with actual real data, e.g., a response object came back as nil. This means that you’ll probably have logic in your view that controls flipping your shouldShowAlert bool based on certain conditions. If you’ve been around, you might have read “logic in your view” and replaced it in your mind with “logic in your view that is not easily unit tested”. 
So, you might write something like: Binding(get: { return yourObject == nil }, set: { if $0 { resetState() } } ) When In Doubt, Measure Fortunately, we have a tool at our disposal that might give some clarity as to whether it’s as bad as we suspect: Good Ol’ Instruments. As I learned at the delightful Bring SwiftUI to Your App workshop, Instruments has a SwiftUI template to measure how many layouts are occurring and how long they take. You can find the code I used to measure here: https://github.com/jacobvanorder/BooBindings. My procedure is to put all examples in a TabView and then select the tab, present the alert, dismiss the alert, and wait five seconds before I try the next option. Also, I converted all of the timings to microseconds. Option Number One: A State Property In this option, you manually control the boolean for showing the alert. The button gets tapped and we set the model object and flip the bool. The alert is driven by a @State bool variable that you’ll have to remember to flip in every scenario that can show or hide the alert, including any you add in the future. We have 50 total layouts with a duration of 897.88 microseconds. Two of the layouts are for the ViewLogicBindingView itself. Option Number Two: Synthesized Here we use the Binding(get:set:) option. No extra @State property and no logic to maintain. We have 71 total layouts with a duration of 1,250 microseconds. Three of the layouts are for the SynthesizedBindingView itself. Option Number Three: View Model Driven Option At the “Bring SwiftUI to Your App” workshop, they also talked about how they preferred using @Observable classes when the logic within a view gets unwieldy or difficult to manage. In this case, I create a view model that has both the model object and a var boolean. When the object gets changed, so does the boolean, via a willSet on the object property. This class is then used as a @State var on the view itself and will trigger a view update when its variables change. 
The plus side to this is that you can unit test this class independently and fairly easily. We have 56 total layouts with a duration of 2,620 microseconds. Three of the layouts are for the ViewModelDrivingView itself. That is considerably slower, though. Option Number Four: Side Effect on the View What if we got rid of the view model but had similar logic on the view, where you have both the model object and a var boolean? Again, when the object gets changed, so does the boolean in a willSet. Can’t easily unit test but thems the breaks. We have 56 total layouts with a duration of 944.21 microseconds. Three of the layouts are for the SideEffectView itself. In Conclusion From a purely numbers aspect, the simple solution is the winner and the view model class is the loser but there are other factors to consider. This was the easiest example I could cobble together on a Sunday. Real world apps have complex scenarios that should be unit tested and are maintained by teams of people with varying skill levels. The real answer to whether you should use Binding(get:set:) is to consider the trade-offs of doing so. Run it through Instruments and then consider whether the logic you’re introducing is easily testable and maintainable.]]></summary></entry><entry><title type="html">Estimation is a Trap</title><link href="https://jacobvanorder.github.io/estimation-is-a-trap/" rel="alternate" type="text/html" title="Estimation is a Trap" /><published>2024-08-25T00:04:00+00:00</published><updated>2024-08-25T00:04:00+00:00</updated><id>https://jacobvanorder.github.io/estimation-is-a-trap</id><content type="html" xml:base="https://jacobvanorder.github.io/estimation-is-a-trap/"><![CDATA[<h1 id="estimation-is-a-trap">Estimation is a Trap</h1>

<p>As a software developer, you’ll often be given a list of requirements, immediately followed by, “When will it be done?” It’s a perfectly reasonable question! The person asking probably has a boss and needs to provide an answer when they are asked the same question, because, well, their boss is asking the same thing. Additionally, knowing when something will be done helps others prepare for the next step in the software’s lifecycle. Again, all reasonable!</p>

<p>We, as problem solvers, have tried to come up with systems to predict the future using Agile methodologies, points, and burndown charts, as if a million different variables aren’t at play all at once. We’ve even turned it into a <a href="https://en.wikipedia.org/wiki/Planning_poker">nice game</a>! Or, instead of a number, we can use <a href="https://asana.com/resources/t-shirt-sizing">garment vernacular</a>! How fun!</p>

<h2 id="so-why-is-it-a-trap">So, why is it a trap?</h2>

<p>Let’s discuss underpromising and overdelivering. It’s a combination that often goes well together. However, when tasked with estimation, it’s very easy to slip into the realm of overpromising and underdelivering. It’s not that you mean to; you had the best intentions after all. It is a perfectly natural tendency for humans to do this, not just with time but also with time’s close relative, money. Sometimes, both time and money are improperly estimated, especially in <a href="https://en.m.wikipedia.org/wiki/Big_Dig">government projects</a>. If you maintain a home, you know how an expert can come in, give an estimate, and then blow right past it significantly.</p>

<p>Some people are well aware of the <a href="https://en.wikipedia.org/wiki/Sunk_cost">sunk cost</a> fallacy and will use it to their advantage. In Robert Caro’s <a href="https://en.wikipedia.org/wiki/The_Power_Broker">“The Power Broker: Robert Moses and the Fall of New York”</a>, there are stories where Moses would estimate a fraction of the true cost of a public works project. He’d start the project knowing full well that legislators wouldn’t want an empty hole where a highway should be when the initial funds ran out.</p>

<h2 id="well-can-i-just-not-give-an-estimate">Well, can I just not give an estimate?</h2>

<p>Ah, yes! The only way to win is not to play! Unfortunately, you will be viewed as stubborn and unhelpful. I once worked with someone who would cross his arms, put his nose in the air, and state that he <em>simply wouldn’t estimate</em>. My dude, it’s not like the person asking is doing it for fun! Show some empathy and work with your comrade, alright? If you don’t provide the estimate, sometimes the person asking will just come up with their own, which is usually based on fantasy, but there’s still an expectation that you’ll meet that made-up deadline. No one wants that.</p>

<h2 id="but-why-are-we-so-bad-at-it">But why are we so bad at it?</h2>

<p><strong>Bias, ego, and a willingness to please.</strong></p>

<p>For those of us with a breadth of experience, when given a task, we instantly reach back into our memory banks to remember if we’ve dealt with a similar task before to use as a baseline. Our <a href="https://en.wikipedia.org/wiki/Rosy_retrospection">own biases</a> color this measurement, and other factors might have changed, impacting your progress. Plus, we think we’re really good at our jobs. So good that this time will be a <em>breeze</em>.</p>

<p>If you don’t have that breadth of experience or if you feel uncertain, you might blurt out an estimate that you think will impress the person asking. It’s okay! We all do it. Being fast is often equated with superior skill. If you give a quick estimate and then meet it, it just proves how awesome you are.</p>

<p>These factors lead to you not being able to say the truth, which is that you don’t know for certain. If you’re just starting out or feel insecure in your job, showing those cards feels like a weakness.</p>

<h2 id="so-whats-the-plan">So, what’s the plan?</h2>

<p>There are a few tools I use to try to give the best possible answer when put on the spot, but some of them won’t work if you don’t work in a safe and trusting environment. That’s a whole other issue I haven’t solved yet but fortunately don’t have to deal with currently.</p>

<h3 id="confer">Confer</h3>

<p>The act of pointing tickets is essentially meaningless. We all say it’s not based on time, but you and everyone else knows that it is. The real value in pointing is discussing with your peers what you need to accomplish and how you might approach it. They might have a better solution, know of potential traps, or ask clarifying questions. As a more seasoned developer tasked with estimating as part of being a lead, your guess is based on what it would take <strong>you</strong> to accomplish the task, not accounting for various levels of experience and skill. Having team members of varying skill levels do the pointing reminds you of that, as long as they feel comfortable expressing their real number. The downside of this approach is that it takes time and requires context switching for your team. It also obviously doesn’t work if you’re alone on your team.</p>

<h3 id="spike">Spike</h3>

<p>If you work in an environment where you can say, “I’m not sure, but I can quickly gather some info for a better guess,” this can also lead to a better estimate. If someone is asking you to do something you’re not certain about, determine a set time to do some research to gain knowledge about whether the task is even possible. This requires looking into your own code, documentation, institutional knowledge, and resources like Google, Stack Overflow, and blogs. Try to gauge how complex it is based on what others have gone through to accomplish a similar task, bearing in mind that their experiences may also be biased.</p>

<h3 id="discuss-tradeoffs">Discuss Tradeoffs</h3>

<p>Perhaps there are three easy parts of the task, but one part is unknown, such as a wild animation or a new navigation style or technology you aren’t familiar with. Have a discussion with the person asking. Maybe they are flexible about it. Don’t just say, “No, I won’t do that.” Explain that it’s an unknown but offer a solution. Perhaps the wild animation is not crucial, or a time-tested navigation style might suffice. Just discuss it and let the person asking know the cost associated with what they’re requesting. After all, they don’t know, which is why they are <a href="https://xkcd.com/1425/">asking you</a>.</p>

<h3 id="fudge">Fudge</h3>

<p>If you don’t have any of these luxuries or if you work somewhere that doesn’t understand that you’re making your best guess, then take the time that is in your head and <strong>double it</strong>. This gives you a buffer for a busted code base, sudden requirements, sickness, and other unknown obstacles. I understand this takes confidence because you might fear that stating a longer time will result in being replaced by someone who can meet the original expectation. However, if they balk at the extended time, try discussing what could be trimmed so you can be more confident in meeting the revised deadline. <em>“But what happens if you finish in half the time? Won’t that hurt your integrity?”</em> If it’s extremely egregious, yes, but generally, the person asking will be thrilled that it’s ready, and that will outweigh any concerns. Most of the time, issues arise, and you’ll be glad to have that padding.</p>

<h3 id="constantly-communicatedocument">Constantly Communicate/Document</h3>

<p>Even if you have the luxury of using any of the aforementioned tools, but especially if you don’t, it’s in your best interest to communicate your current status to the requesting party. Show progress, communicate roadblocks, discuss how incoming requirement changes might impact the timeline, and if something that seemed easy turns out to be challenging, explain why and offer alternatives. Document everything because what tends to happen is that the person asking stops listening after receiving an estimate, <strong>especially</strong> when new requirements come in after the estimation. Beware! Even after doing all that copious communication, I have been bitten when I didn’t meet a date I set months ago. If you have the documentation, it can come off as “I told you so” or blamey, so tread lightly!</p>

<h2 id="embrace-the-suck">Embrace the Suck</h2>

<p>In my opinion, estimation should be a combination of how long you think it will take and how certain you are about it. If you don’t feel confident and the requesting party wants a concrete date, you should be given time to shore up any blind spots.</p>

<p>I like to use the analogy of cooking when explaining this to people who insist on a specific date:</p>

<p>I ask them how long it would take to make a peanut butter and jelly sandwich and usually get an answer like “five minutes.” Then I ask, “What if your house were on fire, you were missing butter knives, and the bread was moldy? What if, instead of a peanut butter and jelly sandwich, you had to make <a href="https://en.wikipedia.org/wiki/Hákarl">fermented shark</a>? What’s the estimate then?” This is somewhat analogous to being a software developer. But to extend the analogy, the person asking is usually the waitstaff, and the customer is really hungry, so understand that they are just doing their job and it’s all part of the system.</p>

<p>In a completely new app with a logical and sane API to interface with, I could spin up a table view in iOS within twenty minutes. But that’s not the world we live in. We work within imperfect code bases, interfacing with wacky APIs, dealing with scenarios not anticipated, while working with underdeveloped requirements, and handling shifting desires. I wish we could not only state how long it would take but also how certain we are about it to provide an over/under estimate. But people generally care about the date and not about how we feel about it. In the meantime, use the techniques above to highlight the hazards and complexities you’ll need to work around. If you know any other techniques, drop me a line!</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Estimation is a Trap As a software developer, you’ll often be given a list of requirements, immediately followed by, “When will it be done?” It’s a perfectly reasonable question! The person asking probably has a boss and needs to provide an answer when they are asked the same question, because, well, their boss is asking the same thing. Additionally, knowing when something will be done helps others prepare for the next step in the software’s lifecycle. Again, all reasonable! We, as problem solvers, have tried to come up with systems to predict the future using Agile methodologies, points, and burndown charts, as if a million different variables aren’t at play all at once. We’ve even turned it into a nice game! Or, instead of a number, we can use garment vernacular! How fun! So, why is it a trap? Let’s discuss underpromising and overdelivering. It’s a combination that often goes well together. However, when tasked with estimation, it’s very easy to slip into the realm of overpromising and underdelivering. It’s not that you mean to; you had the best intentions after all. 
It is a perfectly natural tendency for humans to do this, not just with time but also with time’s close relative, money. Sometimes, both time and money are improperly estimated, especially in government projects. If you maintain a home, you know how an expert can come in, give an estimate, and then blow right past it significantly. Some people are well aware of the sunk cost fallacy and will use it to their advantage. In Robert Caro’s “The Power Broker: Robert Moses and the Fall of New York”, there are stories where Moses would estimate a fraction of the true cost of a public works project. He’d start the project knowing full well that legislators wouldn’t want an empty hole where a highway should be when the initial funds ran out. Well, can I just not give an estimate? Ah, yes! The only way to win is not to play! Unfortunately, you will be viewed as stubborn and unhelpful. I once worked with someone who would cross his arms, put his nose in the air, and state that he simply wouldn’t estimate. My dude, it’s not like the person asking is doing it for fun! Show some empathy and work with your comrade, alright? If you don’t provide the estimate, sometimes the person asking will just come up with their own, which is usually based on fantasy, but there’s still an expectation that you’ll meet that made-up deadline. No one wants that. But why are we so bad at it? Bias, ego, and a willingness to please. For those of us with a breadth of experience, when given a task, we instantly reach back into our memory banks to remember if we’ve dealt with a similar task before to use as a baseline. Our own biases color this measurement, and other factors might have changed, impacting your progress. Plus, we think we’re really good at our jobs. So good that this time will be a breeze. If you don’t have that breadth of experience or if you feel uncertain, you might blurt out an estimate that you think will impress the person asking. It’s okay! We all do it. 
Being fast is often equated with superior skill. If you give a quick estimate and then meet it, it just proves how awesome you are. These factors lead to you not being able to say the truth, which is that you don’t know for certain. If you’re just starting out or feel insecure in your job, showing those cards feels like a weakness. So, what’s the plan? There are a few tools I use to try to give the best possible answer when put on the spot, but some of them won’t work if you don’t work in a safe and trusting environment. That’s a whole other issue I haven’t solved yet but fortunately don’t have to deal with currently. Confer The act of pointing tickets is essentially meaningless. We all say it’s not based on time, but you and everyone else knows that it is. The real value in pointing is discussing with your peers what you need to accomplish and how you might approach it. They might have a better solution, know of potential traps, or ask clarifying questions. As a more seasoned developer tasked with estimating as part of being a lead, your guess is based on what it would take you to accomplish the task, not accounting for various levels of experience and skill. Team members of varying skills pointing allows you to remember that, as long as they feel comfortable expressing their real number. The downside of this approach is that it takes time and requires context switching for your team. It also obviously doesn’t work if you’re alone on your team. Spike If you work in an environment where you can say, “I’m not sure, but I can quickly gather some info for a better guess,” this can also lead to a better estimate. If someone is asking you to do something you’re not certain about, determine a set time to do some research to gain knowledge about whether the task is even possible. This requires looking into your own code, documentation, institutional knowledge, and resources like Google, Stack Overflow, and blogs. 
Try to gauge how complex it is based on what others have gone through to accomplish a similar task, bearing in mind that their experiences may also be biased. Discuss Tradeoffs Perhaps there are three easy parts of the task, but one part is unknown, such as a wild animation or a new navigation style or technology you aren’t familiar with. Have a discussion with the person asking. Maybe they are flexible about it. Don’t just say, “No, I won’t do that.” Explain that it’s an unknown but offer a solution. Perhaps the wild animation is not crucial, or a time-tested navigation style might suffice. Just discuss it and let the person asking know the cost associated with what they’re requesting. After all, they don’t know, which is why they are asking you. Fudge If you don’t have any of these luxuries or if you work somewhere that doesn’t understand that you’re making your best guess, then take the time that is in your head and double it. This gives you a buffer for a busted code base, sudden requirements, sickness, and other unknown obstacles. I understand this takes confidence because you might fear that stating a longer time will result in being replaced by someone who can meet the original expectation. However, if they balk at the extended time, try discussing what could be trimmed so you can be more confident in meeting the revised deadline. “But what happens if you finish in half the time? Won’t that hurt your integrity?” If it’s extremely egregious, yes, but generally, the person asking will be thrilled that it’s ready, and that will outweigh any concerns. Most of the time, issues arise, and you’ll be glad to have that padding. Constantly Communicate/Document Even if you have the luxury of using any of the aforementioned tools, but especially if you don’t, it’s in your best interest to communicate your current status to the requesting party. 
Show progress, communicate roadblocks, discuss how incoming requirement changes might impact the timeline, and if something that seemed easy turns out to be challenging, explain why and offer alternatives. Document everything because what tends to happen is that the person asking stops listening after receiving an estimate, especially when new requirements come in after the estimation. Beware! Even after doing all that copious communication, I have been bitten when I didn’t meet a date I set months ago. If you have the documentation, it can come off as “I told you so” or blamey so tread lightly! Embrace the Suck In my opinion, estimation should be a combination of how long you think it will take and how certain you are about it. If you don’t feel confident and the requesting party wants a concrete date, you should be given time to shore up any blind spots. I like to use the analogy of cooking when explaining this to people who insist on a specific date: I ask them how long it would take to make a peanut butter and jelly sandwich and usually get an answer like “five minutes.” Then I ask, “What if your house were on fire, you were missing butter knives, and the bread was moldy? What if, instead of a peanut butter and jelly sandwich, you had to make fermented shark? What’s the estimate then?” This is somewhat analogous to being a software developer. But to extend the analogy, the person asking is usually the waitstaff, and the customer is really hungry, so understand that they are just doing their job and it’s all part of the system. In a completely new app with a logical and sane API to interface with, I could spin up a table view in iOS within twenty minutes. But that’s not the world we live in. We work within imperfect code bases, interfacing with wacky APIs, dealing with scenarios not anticipated, while working with underdeveloped requirements, and handling shifting desires. 
I wish we could not only state how long it would take but also how certain we are about it to provide an over/under estimate. But people generally care about the date and not about how we feel about it. In the meantime, use the techniques above to highlight the hazards and complexities you’ll need to work around. If you know any other techniques, drop me a line!]]></summary></entry><entry><title type="html">Presenting 3D Assets on Vision Pro</title><link href="https://jacobvanorder.github.io/presenting-3d-assets-on-vision-pro/" rel="alternate" type="text/html" title="Presenting 3D Assets on Vision Pro" /><published>2024-03-23T16:36:00+00:00</published><updated>2024-03-23T16:36:00+00:00</updated><id>https://jacobvanorder.github.io/presenting-3d-assets-on-vision-pro</id><content type="html" xml:base="https://jacobvanorder.github.io/presenting-3d-assets-on-vision-pro/"><![CDATA[<h1 id="vision-pro">Vision Pro!</h1>

<p>Let’s not get into how $3,500 could be better spent, if this is <a href="https://en.wikipedia.org/wiki/Microsoft_Tablet_PC">really the best time for the release</a> of this hardware, or if iPadOS was the best choice of a platform to base “spatial computing” on.</p>

<p>It is what it is.</p>

<p>I truly believe that a form of what the Vision Pro <strong>is</strong> <em>will be</em> integral to computing in the future. I don’t think it’s as good as the first iPhone in terms of hitting the target right at the start but I hope it’s not like the iPad where a promising beginning is hampered by being tied to an operating system that limits it.</p>

<h2 id="presenting-3d-models">Presenting 3D Models</h2>

<p>I don’t think that, unlike the phone, the compelling mode of the Vision Pro is looking at an endless scroll view of content. Instead, being able to see a 3D asset in stereo 3D gives the most bang for the buck. Watching a movie on a screen the size of a theater is cool but watching truly immersive material wherever you are is that much more special and worth the tradeoffs of having a hunk of metal and glass strapped to your face.</p>

<h3 id="different-methods-of-presenting-a-3d-asset">Different Methods of Presenting a 3D Asset</h3>

<p>As I <a href="https://jacobvanorder.github.io/a-glimpse-into-the-future/">posted before</a>, there are ways to generate 3D assets using your phone. As a brief update, Apple has released this functionality now <a href="https://developer.apple.com/videos/play/wwdc2023/10191/">completely on your phone</a> and the results are spectacular. You can generate a 3D model using your iPhone but, grumble, grumble, not on your $3,500 Vision Pro.</p>

<p>Unlike on the phone, though, visionOS provides very easy ways to present 3D content to the user, whether embedded within the user interface or in a more freeform manner. In this post, we’ll be touching on the simpler form of presenting 3D content. The more complicated form, <code class="language-plaintext highlighter-rouge">RealityView</code>, could fill a series of blog posts, which I’ll be tackling later.</p>

<p>The sample code for these examples is <a href="https://github.com/jacobvanorder/VisionProPlacement">here</a>.</p>

<h4 id="model3d">Model3D</h4>

<p><code class="language-plaintext highlighter-rouge">Model3D</code> is a <a href="https://developer.apple.com/documentation/realitykit/model3d/">SwiftUI view</a> that, in the words of the documentation, “asynchronously loads and displays a 3D model”. This, though, undersells its capability. It can load from a local file OR a URL. Because both of these methods can be time consuming, it works in a way similar to how the <a href="https://developer.apple.com/documentation/swiftui/asyncimage"><code class="language-plaintext highlighter-rouge">AsyncImage</code></a> view handles images loaded from the network.</p>

<p>This means that you have the view itself but then, when the 3D asset is loaded, you are presented with a <a href="https://developer.apple.com/documentation/realitykit/resolvedmodel3d"><code class="language-plaintext highlighter-rouge">ResolvedModel3D</code></a> that you can then alter.</p>
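<p>To make that pattern concrete, here’s a rough, untested sketch of the async flow; the asset name <code class="language-plaintext highlighter-rouge">toy_robot</code> is a hypothetical placeholder:</p>

```swift
import SwiftUI
import RealityKit

// A minimal sketch: load a bundled 3D asset asynchronously.
// "toy_robot" is a hypothetical asset name in the app bundle.
struct RobotModelView: View {
    var body: some View {
        Model3D(named: "toy_robot") { resolved in
            // `resolved` is the ResolvedModel3D you can then alter.
            resolved
                .resizable()
                .scaledToFit()
        } placeholder: {
            // Shown while the asset loads, just like AsyncImage's placeholder.
            ProgressView()
        }
    }
}
```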

<h5 id="animation">Animation</h5>

<p>Given we have this 3D asset in our space, we can animate the content just like we would a normal SwiftUI view but, again, this will need to be done to the <code class="language-plaintext highlighter-rouge">ResolvedModel3D</code> content. The traditional way to do a continuous animation is to add a <code class="language-plaintext highlighter-rouge">@State</code> property that tracks whether the content has appeared and use that as the basis for the before and after values of the animation. Then it is a matter of using the new <code class="language-plaintext highlighter-rouge">rotation3DEffect</code> on the resolved content. Alternatively, you can use the new <code class="language-plaintext highlighter-rouge">PhaseAnimator</code> and do away with the <code class="language-plaintext highlighter-rouge">@State</code> property.</p>
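<p>A sketch of the <code class="language-plaintext highlighter-rouge">@State</code> approach might look like this (again, the asset name is hypothetical and the code is untested):</p>

```swift
import SwiftUI
import RealityKit

// A sketch of a continuous spin driven by a @State flag that flips on appear.
struct SpinningModelView: View {
    @State private var hasAppeared = false

    var body: some View {
        Model3D(named: "toy_robot") { resolved in
            resolved
                .resizable()
                .scaledToFit()
                // Animate from 0 to 360 degrees around the y-axis.
                .rotation3DEffect(.degrees(hasAppeared ? 360 : 0),
                                  axis: (x: 0, y: 1, z: 0))
                .animation(.linear(duration: 6).repeatForever(autoreverses: false),
                           value: hasAppeared)
                .onAppear { hasAppeared = true }
        } placeholder: {
            ProgressView()
        }
    }
}
```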

<p>Whichever way you go, beware that the layout frame might not be what you expect. Because the layout is based on the width and the height of the model <em>but not the depth</em>, when you rotate along the y-axis, the depth becomes the width and your layout might look wrong. You can utilize the new <code class="language-plaintext highlighter-rouge">GeometryReader3D</code> to gather the height, width, and now depth of the view and adjust accordingly.</p>
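<p>A quick sketch of reading those metrics, assuming the <code class="language-plaintext highlighter-rouge">GeometryReader3D</code> proxy exposes a 3D size as described:</p>

```swift
import SwiftUI

// A sketch: GeometryReader3D hands you a proxy whose size
// includes depth as well as width and height.
struct MeasuredView: View {
    var body: some View {
        GeometryReader3D { proxy in
            // proxy.size is a Size3D, so depth is available for layout math
            // after, say, a y-axis rotation swaps depth and width.
            let size = proxy.size
            Text("w: \(size.width, specifier: "%.0f"), d: \(size.depth, specifier: "%.0f")")
        }
    }
}
```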

<h5 id="gestures">Gestures</h5>

<p>For both examples, we’ll be modifying the views with <code class="language-plaintext highlighter-rouge">.gesture</code> but, whichever gesture we choose, we need to tell the view that it applies not to the view but to the entity contained within via the <code class="language-plaintext highlighter-rouge">.targetedToAnyEntity()</code> modifier on the gesture. You can also specify which entity you want to attach the gesture to by using <code class="language-plaintext highlighter-rouge">.targetedToEntity(entity: Entity)</code> or <code class="language-plaintext highlighter-rouge">.targetedToEntity(where: QueryPredicate&lt;Entity&gt;)</code>. The <code class="language-plaintext highlighter-rouge">.onChanged</code> and <code class="language-plaintext highlighter-rouge">.onEnded</code> modifiers will now have 3D-specific types passed in.</p>
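<p>As a sketch, targeting a tap at whatever entity is inside the view might look like this; note that, in practice, the entity needs collision and input-target components to be hit-testable, and the asset name is hypothetical:</p>

```swift
import SwiftUI
import RealityKit

// A sketch of an entity-targeted gesture on a Model3D.
struct TappableModelView: View {
    var body: some View {
        Model3D(named: "toy_robot") { resolved in
            resolved
                .gesture(
                    TapGesture()
                        // Route the gesture to the entity inside, not the view.
                        .targetedToAnyEntity()
                        .onEnded { value in
                            // `value.entity` is the RealityKit entity that was hit.
                            print("Tapped entity: \(value.entity.name)")
                        }
                )
        } placeholder: { ProgressView() }
    }
}
```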

<h6 id="drag-gestures">Drag Gestures</h6>

<p>We can use a traditional <code class="language-plaintext highlighter-rouge">DragGesture</code> to rotate the content using the <code class="language-plaintext highlighter-rouge">rotation3DEffect</code> we used for the animation. In the <a href="https://github.com/jacobvanorder/Presenting3DContent/blob/main/Presenting3DContent/Examples/Model3DDragGestureView.swift">example</a>, we keep track of a <code class="language-plaintext highlighter-rouge">startSpinValue</code> and a <code class="language-plaintext highlighter-rouge">spinValue</code>. The difference between the two is that <code class="language-plaintext highlighter-rouge">startSpinValue</code> is the baseline value we hold on to while the drag gesture is happening. We get the delta of the drag by calculating the difference between the start and current position and applying that <em>plus</em> the <code class="language-plaintext highlighter-rouge">startSpinValue</code> to set the <code class="language-plaintext highlighter-rouge">spinValue</code>. Without the <code class="language-plaintext highlighter-rouge">startSpinValue</code>, rotating the entity a second time would begin from 0.0 and not from the previous value we rotated to.</p>
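<p>That bookkeeping can be sketched like so (untested; the asset name is hypothetical, and I’m mapping the horizontal drag distance directly to degrees for simplicity):</p>

```swift
import SwiftUI
import RealityKit

// A sketch of the startSpinValue / spinValue pattern described above.
struct DragSpinView: View {
    @State private var spinValue: Double = 0       // current rotation, in degrees
    @State private var startSpinValue: Double = 0  // baseline carried over from previous drags

    var body: some View {
        Model3D(named: "toy_robot") { resolved in
            resolved
                .rotation3DEffect(.degrees(spinValue), axis: (x: 0, y: 1, z: 0))
                .gesture(
                    DragGesture()
                        .targetedToAnyEntity()
                        .onChanged { value in
                            // Delta of the drag plus the baseline.
                            let delta = Double(value.translation.width)
                            spinValue = startSpinValue + delta
                        }
                        .onEnded { _ in
                            // Keep the final value as the next drag's baseline.
                            startSpinValue = spinValue
                        }
                )
        } placeholder: { ProgressView() }
    }
}
```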

<h6 id="rotate-gesture-3d">Rotate Gesture 3D</h6>

<p>Because this is the Vision Pro, you can rotate an item by pinching both of your hands and acting like you’re turning a wheel in space. This lets you save your drag gesture for when you want to move your item while still reserving the ability to rotate it. The <a href="https://github.com/jacobvanorder/Presenting3DContent/blob/main/Presenting3DContent/Examples/Model3DRotateGestureView.swift">code</a> for this example is different because we don’t store a value for the amount we are spinning the entity; instead, we have an optional value that acts as the baseline of the rotation that has happened so far. Additionally, we don’t use <code class="language-plaintext highlighter-rouge">rotation3DEffect</code> and instead change the entity’s <code class="language-plaintext highlighter-rouge">transform</code> value by <em>multiplying</em> the baseline value by the gesture’s rotation value.  I added <code class="language-plaintext highlighter-rouge">Model3DDragGestureAltView</code> to show how you might achieve this style of rotation using the drag gesture.</p>
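<p>A hedged, untested sketch of that baseline-multiplication approach, converting the gesture’s rotation into a quaternion for the entity’s orientation (asset name hypothetical):</p>

```swift
import SwiftUI
import RealityKit
import Spatial

// A sketch: capture a baseline orientation when the rotate gesture begins,
// then multiply it by the gesture's running rotation.
struct RotateModelView: View {
    @State private var baseOrientation: simd_quatf? = nil

    var body: some View {
        Model3D(named: "toy_robot") { resolved in
            resolved
                .gesture(
                    RotateGesture3D()
                        .targetedToAnyEntity()
                        .onChanged { value in
                            let entity = value.entity
                            if baseOrientation == nil {
                                baseOrientation = entity.orientation
                            }
                            // Convert the gesture's Rotation3D into a simd_quatf...
                            let q = value.rotation.quaternion
                            let delta = simd_quatf(ix: Float(q.imag.x),
                                                   iy: Float(q.imag.y),
                                                   iz: Float(q.imag.z),
                                                   r: Float(q.real))
                            // ...and multiply the baseline by it.
                            entity.orientation = baseOrientation! * delta
                        }
                        .onEnded { _ in baseOrientation = nil }
                )
        } placeholder: { ProgressView() }
    }
}
```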

<h5 id="gotchas">Gotchas</h5>

<p>Because you have a container of the <code class="language-plaintext highlighter-rouge">Model3D</code> and then the actual content of the <code class="language-plaintext highlighter-rouge">ResolvedModel3D</code>, you can get into a situation where the layout frame of the container might not be what you expect it to be based on the actual content.</p>

<h6 id="sizing">Sizing</h6>

<p>Just like <code class="language-plaintext highlighter-rouge">AsyncImage</code>, the view doesn’t know the resulting content size. Usually, it “just works” but if you start animating or altering the resolved 3D content, be aware that you’re dealing not only with width and height but also depth.</p>

<p>For instance, because the back of the model is placed, by default, against the front of the view containing it, perspective and the depth of the model might hide the other content in the <code class="language-plaintext highlighter-rouge">VStack</code> or <code class="language-plaintext highlighter-rouge">HStack</code>, so be mindful.</p>

<p><img src="/assets/images/2024-03-23-presenting-3d-assets-on-vision-pro/hidden-text.png" alt="Hidden text under a 3D Model" /></p>

<h6 id="view-modifiers">View Modifiers</h6>

<p>Because these are all extensions on <code class="language-plaintext highlighter-rouge">View</code> that throw a view modifier into the great next responder chain that is <code class="language-plaintext highlighter-rouge">@environment</code>, view modifiers such as <code class="language-plaintext highlighter-rouge">blur(radius:)</code> or <code class="language-plaintext highlighter-rouge">blendMode(_:)</code> don’t work but <code class="language-plaintext highlighter-rouge">.opacity(_:)</code> <em>does</em> (grumble, grumble, grumble).</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[Vision Pro! Let’s not get into how $3,500 could be better spent, if this is really the best time for the release of this hardware, or if iPadOS was the best choice of a platform to base “spatial computing” on. It is what it is. I truly believe that a form of what the Vision Pro is will be integral to computing in the future. I don’t think it’s as good as the first iPhone in terms of hitting the target right at the start but I hope it’s not like the iPad where a promising beginning is hampered by being tied to an operating system that limits it. Presenting 3D Models I don’t think that, unlike the phone, the compelling mode of the Vision Pro is looking at an endless scrollview of content. Instead, being able to see a 3D asset in stereo 3D gives the most bang for the buck. Watching a movie on a place on screen the size of a theater is cool but watching truly immersive material wherever you are is that much more special and worth the tradeoffs of having a hunk of metal and glass strapped to your face. Different Methods of Presenting a 3D Asset As I posted before, there are ways to generate 3D assets using your phone. As a brief update, Apple has released this functionality now completely on your phone and the results are spectacular. You can generate a 3D model using your iPhone but, grumble, grumble, not on your $3,500 Vision Pro. 
Unlike on the phone, though, VisionOS provides very easy ways to present the 3D content to the user whether embedded within the user interface or within a more freeform manner. In this post, we’ll be touching on the more simpler form of presenting 3D content. The more complicated form, RealityView could fill a series of blog posts which I’ll be tackling. The sample code for these examples is here. Model3D Model3D is a SwiftUI view that, in the words of the documentation: “asynchronously loads and displays a 3D model”. This, though, undersells it’s capability. It can do it from a local file OR a URL. Because both of these methods can be time consuming, it is done in a way similar to the AsyncImage view does for images loaded from the network. This means that you have the view itself but then, when the 3D asset is loaded, you are presented with a ResolvedModel3D that you can then alter. Animation Given we have this 3D asset in our space, we can then animate the content just like we would a normal SwiftUI view but, again, this will need to be done to the ResolvedModel3D content. The traditional way of a continuous animation would be where you add a @State property that keeps track of if the content has appeared and uses that to use as the basis for the before and after values for the animation. Then it is a matter of using the new rotation3DEffect on the resolved content. Alternatively, you can use the new PhaseAnimation and not have a need for the @State property. For either way you go, beware that the layout frame might not be what you expect. Alternatively, because the layout is based on the width and the height of the model but not the depth, when you rotate along the y-axis, the depth will now become the width and you layout might look wrong. You can utilize the new GeometryReader3D in order to gather the height, width, and now depth of the view and adjust accordingly. 
Gestures For both examples, we’ll be modifying the views with .gesture but whichever gesture we choose, we need to tell the view that these will apply not to the view but to the entity contained within via the .targetedToAnyEntity() modifier on the gesture. You can also specify which entity you want to attach the gesture to by using .targetedToEntity(entity: Entity) or .targetedToEntity(where: QueryPredicate&lt;Entity&gt;). The .onChanged and .onEnded modifier will now have 3D-specific types passed in. Drag Gestures We can use a traditional DragGesture in order to rotate the content using the rotation3DEffect we used for the animation. In the example, we have a startSpinValue and a spinValue that we’ll be keeping track of. The difference between the two is that startSpinValue is sort of the baseline value that we keep track of while the drag gesture is happening. We get the delta of the drag by calculating the difference between the start and current position and applying that plus the startSpinValue to set the spinValue. If we did not have the startSpinValue, if we were to rotate the entity for a second time, it would begin rotating from 0.0 and not from the previous value we rotated to. Rotate Gesture 3D Because this is the Vision Pro, you can rotate by pinching both of your hands and acting like you’re turning a wheel in space in order to rotate an item. This will allow you to save your drag gesture for when you want to move your item but still reserve the ability to rotate it. The code for this example is different because we don’t store a value of amount we are spinning the entity but we do have an optional value that is meant to be the baseline value of the rotation that has happened. Additionally, we don’t use the rotation3DEffect and instead change the entity’s transform value by multiplying the baseline value times the gesture’s rotation value. 
I added Model3DDragGestureAltView in order to show how you might do this way of rotating the item using the drag gesture. Gotchas Because you have a container of the Model3D and then the actual content of the ResolvedModel3D, you can get into a situation where the layout frame of the container might not be what you expect it to be based on the actual content. Sizing Just like AsyncImage, the view doesn’t know the resulting content size. Usually, it “just works” but if you start animating or altering the resolved 3D content, be aware that you’re not dealing with both width and height but also depth. For instance, because it defaults to placing the content where the back is placed against the front of the view you are placing it in, perspective and the depth of the model might hide the other content in the VStack or HStack so be mindful. View Modifiers Because these are all extensions on View that throw a view modifier into the great next responder chain that is @environment, view modifiers such as blur(radius:) or blendMode(_:) don’t work but .opacity(_:) does (grumble, grumble, grumble).]]></summary></entry><entry><title type="html">I Built a Keyboard (Part 2)</title><link href="https://jacobvanorder.github.io/i-built-a-keyboard-part-2/" rel="alternate" type="text/html" title="I Built a Keyboard (Part 2)" /><published>2023-06-29T14:56:00+00:00</published><updated>2023-06-29T14:56:00+00:00</updated><id>https://jacobvanorder.github.io/i-built-a-keyboard-part-2</id><content type="html" xml:base="https://jacobvanorder.github.io/i-built-a-keyboard-part-2/"><![CDATA[<h1 id="i-built-a-keyboard-part-2">I Built a Keyboard (Part 2)</h1>

<h2 id="previously">Previously</h2>

<p>In <a href="/i-built-a-keyboard/">the last post</a>, I talked about my ability to solder, my progression of keyboards, and how I rolled the dice on a pre-made Sofle RGB from AliExpress.</p>

<h2 id="keyboard-number-two">Keyboard (Number Two)</h2>

<h3 id="procurement">Procurement</h3>

<p>With the first keyboard in a box and ready to go to China, I figured I’d give this whole thing a crack the conventional way. Previously, I explained that because the keyboard is open source, it’s possible to have the PCB fabricated and to source the parts yourself. I wanted to speed that process up and found that there <em>are</em> websites dedicated to making the process easier but, again, only to the extent that you might still need a spreadsheet to determine if you have all of the items on the Bill of Materials. What I mean by that is that some vendors would have the kit but not a microcontroller. Some would include the microcontroller but not the rotary encoders. Also, these things might be listed for sale on a website but not actually be in stock.</p>

<p>What I ended up going with was a kit from <a href="https://www.diykeyboards.com/parts/pcbs/product/sofle-pcbs">diykeyboards.com</a>. Again, the microcontrollers and rotary encoders were missing from the kit but they <em>did</em> have them on their site and they were not sold out. Shipping was relatively fast since they’re based in Pennsylvania.</p>

<h3 id="actual-building">Actual Building</h3>

<p>I followed a combination of the <a href="https://josefadamcik.github.io/SofleKeyboard/build_guide_rgb.html">official</a> instructions and the better <a href="https://docs.beekeeb.com/build-guide/sofle-rgb-v2.1-soflekeyboard-build-log-guide-with-photos">instructions from another vendor</a>. Between the two, I was on my way.</p>

<h4 id="first-couple-steps">First Couple Steps</h4>

<p>I needed to flash the microcontrollers with the firmware from the Beekeeb site. Next was soldering the SMD diodes onto the board. SMD parts are usually difficult but this was no problem as the diodes were not super small. I had to bridge some pads so that the PCB, which is double sided and ambidextrous, knows which side to use. Remember how I fried my previous microcontroller? I wanted to avoid that by adding sockets to the board and pins to the microcontroller. Unfortunately, the pins that the site included with the microcontroller were too large for the socket holes and I had to improvise by using 24AWG wire and soldering them in, one by one. I had to do this later with the OLED display as well.</p>

<p>At this point, I could test it by plugging it in and touching the contact point with tweezers. Luckily, everything worked and I could continue on. The Kailh switch sockets were next and that went seamlessly too. TRS jack, reset button, rotary encoder, OLED display? Check, check, check, check. I had to jumper some pads which determine which lighting configuration I’ll be using. No problem there.</p>

<h4 id="leds-from-hell">LEDs from Hell</h4>

<p>Now come the LEDs. The keyboard has 72 LEDs and the kit included 80 of them. For the status indicator and backlights, these are surface mounted which means that you need to douse the board with flux and pray that the LED is flat enough and that the solder wicked up from the pad, which you can see, to the underside of the LED, which you can’t. Oh, and because each LED has a little bit of logic in it, you can’t overheat it otherwise it will die. This was challenging but I got those first 7 LEDs on with little to no problem.</p>

<p>In order to do the per-key LEDs, I needed to place each one in a hole in the PCB in the correct orientation.</p>

<p><img src="https://josefadamcik.github.io/SofleKeyboard/images/build_guide_rgb/led-pinout.jpg" alt="Picture of the LED resting in the hole" /></p>

<p>The tolerance here was so tight that you had to gently nestle that LED in <em>just</em> right and in the middle, then solder it with absolutely no gaps. So, I followed the instructions of soldering the LED and testing, but it wouldn’t work and it was super frustrating. Did I have an air gap in the solder? Did I overheat the LED? Was the LED faulty? I would remove it, toss the LED, and try again with no luck.</p>

<p>It is here I’d like to stop and point something out. Take a moment to look at this image:</p>

<p><img src="https://josefadamcik.github.io/SofleKeyboard/images/build_guide_rgb/board-both.png" alt="The documentation image for LEDs" /></p>

<p>Does anything strike you about this? This is from the official documentation and <em>not</em> the much more, up to this point, comprehensive Beekeeb documentation. It turns out that the LEDs (SK6812) are the kind commonly used in those strips of LED lights, and each one is addressable. This means they need to be wired in a chain, which is how the circuit board is configured, but going off of the Beekeeb documentation alone, this is not clear.</p>

<p>Once I figured that out, it made more sense and was easier to debug, as I could trace down the LED that was suffering from any of the issues I just mentioned. By the time I figured this out, though, it was too late and I had blown through my extra LEDs. I wouldn’t have enough to finish the entire thing.</p>

<h4 id="assembly">Assembly</h4>

<p>The rest went super smoothly. Sockets went in without ripping pads off, unlike the first keyboard. I didn’t have a case yet but I was able to test things out without much worry.</p>

<h3 id="software">Software</h3>

<p>There is an open-source keyboard software package called <a href="https://docs.qmk.fm/#/">QMK</a> that I mentioned in the first part. The gist of it is that the keyboard makers add their keyboard to this repo and it is up to the user to clone this repo and compile it using the <code class="language-plaintext highlighter-rouge">qmk</code> tool. Configuration is done by editing a <a href="https://github.com/qmk/qmk_firmware/blob/master/keyboards/sofle/keymaps/rgb_default/keymap.c">configuration</a> locally and then recompiling.</p>

<p>This is done with the command <code class="language-plaintext highlighter-rouge">qmk compile -kb sofle/rev1</code></p>

<p>Luckily, people have mercifully written software to go on top of QMK that enables altering the keyboard on the fly. The first keyboard used <a href="https://get.vial.today">VIAL</a> but when I went to try to install that, there was much thrashing about and nothing seemed to work.</p>

<p>It turns out that there is <strong>another</strong> GUI for this type of thing called <a href="https://www.caniusevia.com">VIA</a> but it wasn’t clear how to get my keyboard recognized by VIA. The QMK firmware from the BeeKeeb tutorial <em>was</em> recognized but the QMK firmware I compiled myself <em>wasn’t</em>. There must have been something going on.</p>

<p>According to the <a href="https://www.caniusevia.com/docs/configuring_qmk">docs</a>, the steps were adding <code class="language-plaintext highlighter-rouge">VIA_ENABLE = yes</code> to the <code class="language-plaintext highlighter-rouge">rules.mk</code> file and then recompiling. When compiling the QMK firmware, I needed to change the keymap to via with <code class="language-plaintext highlighter-rouge">qmk compile -kb sofle/rev1 -km via</code>.</p>

<p>After I did this, I was able to see the keyboard in VIA in order to change the keys &amp; lighting and test the keys.</p>

<h3 id="what-the-heck">What the Heck</h3>

<p>By this point, I had gotten another batch of LEDs from China and soldered them all in. It went smoothly but the last two on the right side would not work. I took them out, replaced them, and tested with the multimeter. I went to reflash the firmware when I noticed that, when I plugged in the right side (in order to flash it), those LEDs which weren’t working <strong>were</strong> working and the <strong>last two on the left side</strong> wouldn’t.</p>

<p>It turns out that the QMK keymap for VIA is incorrect and has a constant of 70 LEDs when it should have 72. Luckily, someone <a href="https://github.com/qmk/qmk_firmware/commit/2750e031c1ad9e2f90fcd94f445efcfd8b41bf1c">has fixed this</a>.</p>

<p>Still, the LEDs will glitch from time to time and I already broke one that I tapped a little too hard to see if there was a cold solder joint.</p>

<h3 id="nice-cozy-home">Nice Cozy Home</h3>

<p>The last component was a <a href="https://www.thingiverse.com/thing:4837481">case</a> for it which I found on Thingiverse. I ordered the hardware from AliExpress and they came with the LEDs. I had to find some <a href="https://www.homedepot.com/p/Everbilt-10-Black-Rubber-Screw-Protectors-2-Piece-812788/204275995">thread protectors</a> at Home Depot to act as rubber feet for the adjustable hex screws on the bottom of the legs.</p>

<h3 id="all-done">All Done?</h3>

<p><img src="/assets/images/2023-06-29-i-built-a-keyboard-part-2/black_keyboard.jpeg" alt="Finished Keyboard" /></p>

<p>I’m pretty pleased with it! Sometimes the LEDs glitch out and, when the computer is asleep, I’ll come back to the keyboard jamming on the “V” and Enter keys, but I just unplug it and it’s fine when plugged back in. Maybe it’ll be fixed in the future.</p>

<p>It was a fun project that was maddening with the LEDs but pretty rewarding overall.</p>]]></content><author><name>Jacob</name></author><summary type="html"><![CDATA[I Built a Keyboard (Part 2) Previously In the last post, I talked about my ability to solder, my progression of keyboards, and how I rolled the dice on a pre-made Sofle RGB from AliExpress. Keyboard (Number Two) Procurement With the first keyboard in a box and ready to go to China, I figured I’d give this whole thing a crack the conventional way. Previously, I explained that because the keyboard is open source, it’s possible to have the pcb fabricated and to source the parts yourself. I wanted to speed that process up and found that there are websites dedicated towards making the process easier but, again, to the extent that you might need a spreadsheet to determine if you have all of the items on the Bill of Materials. What I mean by that is that some vendors would have the kit but not a microcontroller. Some would include the microcontroller but not the rotary encoders. Also, these things would be for sale on their website but maybe not. What I ended up going with was a kit from diykeyboards.com. Again, they were missing microcontrollers and rotary encoders from the kit but they did have it on their site and it was not sold out. Shipping was relatively fast with it being in Pennsylvania. Actual Building Following a combination of the official but also better instructions from another vendor. Between the combination of these, I was on my way. First Couple Steps I needed to flash the microcontrollers with the firmware from the Beekeeb site. Next was the step of soldering the SMD diodes on to the board. SMD are usually difficult but this was no problem as the diodes were not super small. I had to bridge some pads so that the pcb, which is double sided and ambidextrous, knows which side to use. Remember how I fried my previous microcontroller? 
I wanted to avoid that by adding sockets to the board for the microcontroller and pins for the microcontroller. Unfortunately, the pins that the site included with the microcontroller were too large for the socket holes and I had to improvise by using 24AWG wire and soldering them in, one by one. I had to do this later with the OLED display as well. At this point, I could test it by plugging it in and touching the contact point with tweezers. Luckily, everything worked and I could continue on. The Kailh switch sockets were next and that went seamlessly too. TRS jack, reset button, rotary encoder, OLED display? Check, check, check, check. I had to jumper some pads which determine which lighting configuration I’ll be using. No problem there. LEDs from Hell Now come the LEDs. The keyboard has 72 LEDs and the kit included 80 of them. For the status indicator and backlights, these are surface mounted which means that you need to douse the board with flux and pray that the LED is flat enough and that the solder wicked up from the pad, which you can see, to the underside of the LED, which you can’t. Oh, and because each LED has a little bit of logic in it, you can’t overheat it otherwise it will die. This was challenging but I got those first 7 LEDs on with little to no problem. In order to do the per-light LEDs, I needed to place it in a hole in the PCB in the correct orientation. The tolerance here was so tight but you had to gently nestle that LED in just right and in the middle. Then solder it with absolutely no gaps. So, I followed the instructions of soldering the LED and testing but it wouldn’t work and was super frustrated. Did I have an air gap in the solder? Did I overheat the LED? Was the LED faulty? I would remove it and toss the LED and try again with no luck. It is here I’d like to stop and point something out. Take a moment to look at this image: Does anything strike you about this? 
This is from the official documentation and not the much more, up to this point, comprehensive Beekeeb documentation. It turns out that the LEDs (SK6812) are commonly used in those strips of LED lights and each one is addressable. This means that they need to be in a chain, and that’s how the circuit board is configured, but going off of the Beekeeb documentation, this is not clear. Once I figured that out, it made more sense and made it easier to debug as I could trace down the LED that was prone to any of the issues I just mentioned. By the time I figured this out, though, it was too late and I had blown through my extra LEDs. I wouldn’t have enough to finish the entire thing. Assembly The rest went super smooth. Sockets went in without ripping pads off, unlike the first keyboard. I didn’t have a case yet but I was able to test things out without much worry. Software There is an open-source keyboard software package called QMK that I mentioned in the first part. The gist of it is that the keyboard makers add their keyboard to this repo and it is up to the user to clone this repo and compile it using the qmk tool. Configuration is done by editing a configuration locally and then recompiling. This is done with the command qmk compile -kb sofle/rev1 Luckily, people have mercifully written software to go on top of QMK that enables altering the keyboard on the fly. The first keyboard used VIAL but when I went to try to install that, there was much thrashing about and nothing seemed to work. It turns out that there is another GUI for this type of thing called VIA but it wasn’t obvious how to get my keyboard recognized by it. The QMK firmware from the BeeKeeb tutorial was recognized but the QMK firmware I compiled myself wasn’t. There must have been something going on. According to the docs, the fix was adding VIA_ENABLE = yes to the rules.mk file. 
Then, when compiling the QMK firmware, I needed to switch the keymap to via with qmk compile -kb sofle/rev1 -km via. After I did this, I was able to see the keyboard in VIA, change the keys &amp; lighting, and test the keys. What the Heck By this point, I had gotten another batch of LEDs from China and soldered them all in. It went smoothly but the last two on the right side would not work. I took them out, replaced them, and tested them with the multimeter. I went to reflash the firmware when I noticed that when I plugged in the right side (in order to flash it), those LEDs which weren’t working were and the last two on the left side wouldn’t work. It turns out that the QMK keymap for VIA is incorrect and has a constant of 70 LEDs when it should have 72. Luckily, someone has fixed this. Still, the LEDs glitch from time to time and I already broke one which I tapped a little too hard to see if there was a cold solder joint. Nice Cozy Home The last component was a case for it which I found on Thingiverse. I ordered the hardware from AliExpress and it arrived along with the LEDs. I had to find some thread protectors at Home Depot to act as rubber feet for the adjustable hex screws on the bottom of the legs. All Done? I’m pretty pleased with it! Sometimes the LEDs glitch out and, when the computer is asleep, I’ll come back to the keyboard jamming on the “V” and Enter keys but I just unplug it and it’s fine when plugged back in. Maybe it’ll be fixed in the future. It was a fun project that was maddening with the LEDs but pretty rewarding overall.]]></summary></entry></feed>