<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Alex Cohen on Medium]]></title>
        <description><![CDATA[Stories by Alex Cohen on Medium]]></description>
        <link>https://medium.com/@alexandercohen?source=rss-11158e34626c------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*qk_riNHtHsmOXptW.jpg</url>
            <title>Stories by Alex Cohen on Medium</title>
            <link>https://medium.com/@alexandercohen?source=rss-11158e34626c------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 24 Apr 2026 03:31:20 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@alexandercohen/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Notes on Memory for Products]]></title>
            <link>https://medium.com/@alexandercohen/notes-on-memory-for-products-db15cdc17083?source=rss-11158e34626c------2</link>
            <guid isPermaLink="false">https://medium.com/p/db15cdc17083</guid>
            <dc:creator><![CDATA[Alex Cohen]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 22:12:10 GMT</pubDate>
            <atom:updated>2025-12-16T22:12:10.852Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*V-eoOqYgmSBRP95x" /><figcaption>Photo by <a href="https://unsplash.com/@jeffreyblum?utm_source=medium&amp;utm_medium=referral">Jeffrey Blum</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p><em>I wrote this a while back when I was working at one of the larger social networks. I’ve stripped out all the internal bits, but the core message holds up: memory is a resource worth understanding and using well. The goal was to help teams stop treating it as an afterthought.</em></p><p>This article covers app terminations and the unintended consequences of the code we write. In most cases, there’s a trace that leads directly to the cause. A task is filed, the issue is fixed, and we move on; sometimes we even build tools to ensure whole classes of issues don’t happen again. Other cases are more complicated: all we have are the breadcrumbs we decided to leave behind. One of these is Foreground Out-of-Memory App Deaths (a.k.a. FOOM or OOM). This note covers how these occur, why you should track what leads to them, and how to do that. The topic: memory.</p><p>The persistence of memory issues brings to mind a quote:</p><blockquote><em>“…nothing can be said to be certain, except </em><strong><em>death and taxes</em></strong><em>.”</em></blockquote><blockquote><em>— Benjamin Franklin (and 2 others before him).</em></blockquote><p><strong>Death</strong> comes in the simplest form, a SIGKILL signal; you can put it off, but no doubt it’s waiting at the next opportunity. Death keeps a prioritized list of targets, so there are things you can do to stay off its radar.</p><p><strong>Taxes</strong> are the cost paid to stay up and kicking; trade-offs. 
This is usually seen by the user as performance regressions, animation hitches, longer loads, UI stalls, and other annoyances like that.</p><h3>Why Should You Care</h3><p>OOMs make up a significant share of all foreground app deaths. But here’s the thing: <strong>OOMs are simply a symptom of extreme memory consumption</strong>. They are mostly preventable.</p><p>Users can often sense when an app is struggling — slowness creeps in, something feels off. They develop coping habits: force-quitting from the app switcher, avoiding certain flows. You don’t want that. It doesn’t have to be this way.</p><h3>Limits and Pressure</h3><p>There are two areas to be mindful of when it comes to memory: <strong>limits</strong> and <strong>pressure</strong>. The former is under your complete control and affects both foreground and background execution. The latter, not so much, and it is much more relevant in the background. Limits revolve around your use of memory through allocations. There are undocumented hard limits; getting too close to them is not recommended. Pressure relates to how other apps handle their limits.</p><p>The orchestration between limits and pressure is what allows the OS to ensure the foreground app survives as long as possible, giving users the best experience with their Apple device. It does this by sending memory warning notifications. Apps respond by reducing their memory consumption. 
If that is not enough, and the OS still needs to reclaim memory in order to keep chugging along, it will start terminating apps.</p><h3>Dirty, Compressed and Clean</h3><p>There are three important kinds of memory: dirty, compressed and clean.</p><pre>┌─────────────────────────────────┐<br>│           Footprint             │<br>│  ┌───────────────────────────┐  │<br>│  │                           │  │<br>│  │          Dirty            │  │<br>│  │                           │  │<br>│  ├───────────────────────────┤  │<br>│  │        Compressed         │  │<br>│  └───────────────────────────┘  │<br>└─────────────────────────────────┘<br>┌─────────────────────────────────┐<br>│            Clean                │<br>└─────────────────────────────────┘</pre><p>Dirty and compressed can be combined into what’s called the app’s memory <strong>footprint</strong>. These go hand-in-hand. The OS will take dirty memory that has not been used in a while, squeeze it and move it to compressed memory (basically in-memory swap). This in turn reduces the memory footprint. Apple is on your side and will do its damnedest to never intervene in a way that is unfavorable to your app.</p><p>Dirty memory is accumulated through allocations and object creation (class instances, large arrays, image buffers, etc.). It <em>is</em> the memory you can control. Dirty memory is what Apple is most interested in when an app is in the foreground, and it is what counts towards the memory limit; get too close to or exceed that limit and the app is terminated, no questions asked. Termination is, in effect, how the OS purges dirty memory it cannot reclaim any other way. These are FOOMs.</p><p>Before this termination occurs, the OS will notify the app that it’s getting close to the limit by way of memory warnings. It is your duty to ensure that when this occurs you take action. 
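</p><p>As a concrete sketch of hooking into these warnings (assuming UIKit; the responder class and cache here are illustrative, not any particular SDK’s API):</p>

```swift
import UIKit

// Illustrative sketch: listen for memory warnings and shed state
// that can be rebuilt later. Names here are hypothetical.
final class MemoryWarningResponder {
    // A hypothetical cache of reconstructible objects.
    let imageCache = NSCache<NSString, UIImage>()
    private var observer: NSObjectProtocol?

    init() {
        // Posted by UIKit when the OS asks the app to reduce memory.
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            // Respond synchronously and cheaply: drop what can be rebuilt.
            self?.imageCache.removeAllObjects()
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }
}
```

<p>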
Here are a few simple things you can do:</p><ul><li>Compact caches.</li><li>Clean up strongly referenced objects that can easily be recreated.</li><li>Sometimes you might even take a cue from the warnings and change the strategies used for choosing video bitrates, network requests, image sizes and prefetching.</li></ul><p>Just as there are go-to things you should do, there are also things that you <em>don’t</em> want to do when a memory warning comes in. Here are a few of them:</p><ul><li>Don’t run code that has side effects. This includes any UI work, dispatching to other queues and/or calling out into functions that might have unknown repercussions.</li><li>Definitely don’t call DispatchQueue.async or start a Task { }. The OS wants action now, not later.</li><li>Never iterate collections (Array, Set, Dictionary, ...) during a memory warning. The OS compresses unused memory—including collection contents. Iterating forces decompression, which increases memory use at the worst possible time.</li></ul><p><strong>Clean memory</strong> is pretty interesting. It’s basically memory the OS can drop and reload later without consequences. Paging in libraries has a huge effect on clean memory. As soon as you write to that memory (swizzling is one way), it is dirtied and can’t be purged anymore; the best that can be done is compression.</p><h3>Being a Better Memory Citizen</h3><p>What can you do to keep your app’s memory footprint hovering in the sweet spot? Here are a few pointers that might help.</p><ul><li>Don’t cheat; let the OS do its thing. Follow its guidance and you’ll be OK. Don’t optimize because you think it’s right — profile first.</li><li>Allocate on the stack as much as possible and keep the stack short. In Swift, prefer value types (struct, enum) when you can.</li><li>If you’re still using C or Objective-C: use const, always use const. It&#39;s not magic but will help you write better code down the line.</li><li>Profile often. 
Apple Instruments is your best friend.</li><li>Use the Xcode Memory Graph. Pause in it once in a while to see what objects are in memory. It will surprise you, and you’ll save yourself a lot of grief if you take this simple step.</li><li>Override didReceiveMemoryWarning() on view controllers and/or observe UIApplication.didReceiveMemoryWarningNotification to manage your memory usage when the OS asks you to. Keep it to larger objects and make sure you do it fast.</li><li>Make meticulous use of caching. Use NSCache with objects that conform to NSDiscardableContent for a thread-safe way of keeping heavier, often-used objects in memory. NSCache will purge objects when needed, so make sure they&#39;re either not mission critical or can be recreated.</li><li>Be careful with image sizes when loading large images. Use CGImageSource, buffer caching properties and thumbnail creation to keep this under control.</li><li>Track your memory consumption with metrics and dashboards.</li></ul><h3>Tracking Memory Consumption</h3><p>When tracking memory consumption, there are two key metrics to consider: <strong>Peak Consumption</strong> and <strong>Product Accumulation</strong>.</p><p><strong>Peak Consumption</strong> is the maximum memory footprint attained throughout the lifetime of a session. It’s best to keep this as low as possible — the higher it goes, the more likely the app will be terminated.</p><p><strong>Product Accumulation</strong> is the amount of memory footprint left behind by a surface after a user navigates away from it. This includes leaks, but also objects that are still strongly referenced through live code paths but aren’t accessed anymore. This is summed up throughout the session and tells you which surfaces are not cleaning up after themselves. 
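</p><p>As a sketch, peak consumption can be tracked by periodically sampling the process footprint. The phys_footprint field of task_vm_info is what the OS counts against the limit; the tracker class below is a hypothetical name, not a standard API:</p>

```swift
import Darwin

// Sample the current memory footprint (dirty + compressed) using
// task_vm_info's phys_footprint field. Returns nil on failure.
func currentFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count)
        }
    }
    return kr == KERN_SUCCESS ? info.phys_footprint : nil
}

// Hypothetical peak tracker: call sample() on a timer and log `peak`
// (the session's Peak Consumption) when the session ends.
final class PeakFootprintTracker {
    private(set) var peak: UInt64 = 0
    func sample() {
        if let now = currentFootprint() { peak = max(peak, now) }
    }
}
```

<p>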
Keep this as low as possible.</p><h3>Related</h3><ul><li><a href="https://medium.com/@alexandercohen/the-case-for-another-cache-116bf28c189e">The Case for Another Cache</a></li><li><a href="https://medium.com/@alexandercohen/reducing-memory-terminations-in-ios-apps-3e76797ca5bd">Reducing Memory Terminations in iOS Apps</a></li><li><a href="https://github.com/naftaly/Footprint">Footprint</a> — memory tracking library</li></ul><h3>References</h3><ul><li><a href="https://developer.apple.com/videos/play/wwdc2018/416/">iOS Memory Deep Dive</a> (WWDC 2018)</li><li><a href="https://developer.apple.com/videos/play/wwdc2018/219/">Image and Graphics Best Practices</a> (WWDC 2018)</li><li><a href="https://developer.apple.com/videos/play/wwdc2019/417/">Improving Battery Life and Performance</a> (WWDC 2019)</li><li><a href="https://developer.apple.com/videos/play/wwdc2019/411/">Getting Started with Instruments</a> (WWDC 2019)</li><li><a href="https://developer.apple.com/videos/play/wwdc2020/10078/">Why is my app getting killed?</a> (WWDC 2020)</li><li><a href="https://developer.apple.com/videos/play/wwdc2021/10180/">Detect and diagnose memory issues</a> (WWDC 2021)</li><li><a href="https://developer.apple.com/videos/play/wwdc2024/10217/">Explore Swift performance</a> (WWDC 2024)</li><li><a href="https://developer.apple.com/documentation/metal/resource_fundamentals/reducing_the_memory_footprint_of_metal_apps">Reducing the Memory Footprint of Metal Apps</a></li><li><a href="https://www.mikeash.com/pyblog/friday-qa-2010-01-15-stack-and-heap-objects-in-objective-c.html">Stack and Heap Objects in Objective-C</a> (Mike Ash)</li><li><a href="https://developer.apple.com/documentation/foundation/nscache">NSCache</a> and <a href="https://developer.apple.com/documentation/foundation/nsdiscardablecontent">NSDiscardableContent</a> documentation</li></ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automatic SwiftUI View Tracing with Swift Macros]]></title>
            <link>https://medium.com/@alexandercohen/how-we-built-a-swift-macro-that-automatically-wraps-any-swiftui-view-no-more-manual-f5761376f923?source=rss-11158e34626c------2</link>
            <guid isPermaLink="false">https://medium.com/p/f5761376f923</guid>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[swiftui]]></category>
            <category><![CDATA[observability]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[embrace]]></category>
            <dc:creator><![CDATA[Alex Cohen]]></dc:creator>
            <pubDate>Wed, 28 May 2025 01:17:19 GMT</pubDate>
            <atom:updated>2025-06-29T18:24:48.236Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zOV02iBiKboCzo83" /><figcaption>Photo by <a href="https://unsplash.com/@matthiasspeicher?utm_source=medium&amp;utm_medium=referral">Matthias Speicher</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>For the past month or so, I’ve been working with one of the best observability platforms out there — <a href="https://embrace.io">Embrace</a>. The team has been absolutely fantastic, and I’ve been having a blast collaborating with them. They gave me an interesting challenge: figure out how to automatically instrument SwiftUI views without all the usual boilerplate and manual work.</p><p><strong><em>Liner notes: This article discusses an unreleased feature of the Embrace SDK. It may or may not make it into a final release. The SDK is open source, so you’re free to peruse the code.</em></strong></p><p>As any iOS developer knows, adding performance monitoring, analytics tracking, or debugging instrumentation to SwiftUI views can quickly become a maintenance nightmare. You start with good intentions, but before you know it, you’re drowning in repetitive code and constantly forgetting to add instrumentation to new views.</p><p>Here’s the solution I came up with using Swift macros. It’s cleaner than I expected, and honestly, I wish I’d figured this out years ago.</p><h3>Manual is Hard</h3><p>Usually when we want to trace SwiftUI views, we end up with one of these approaches:</p><p><strong>Wrapping everything:</strong></p><pre>TraceView(&quot;MyView&quot;) {<br>  Text(&quot;Hello World&quot;)<br>}</pre><p><strong>Or adding view modifiers:</strong></p><pre>Text(&quot;Hello World&quot;)<br>  .onAppear { startTrace(&quot;MyView&quot;) }<br>  .onDisappear { endTrace(&quot;MyView&quot;) }</pre><p>Both work, but they’re cumbersome and a lot of busywork if you ask me (you did, right?). 
Your view code gets mixed up with tracing code, you forget to add it half the time, and refactoring becomes a disaster due to past you.</p><h3>Our Solution</h3><p>A macro. Boring things should be easy. Hard things should be made easy. So we figured a macro should do it. Simply add the @EmbraceTrace macro to your views and you’re good to go.</p><pre>@EmbraceTrace<br>struct MyView: View {<br>    var body: some View {<br>        Text(&quot;Hello World&quot;)<br>            .foregroundColor(.blue)<br>            .padding()<br>    }<br>}</pre><p>That’s it. One attribute, and your view gets automatic performance tracing without any of the usual mess.</p><h3>What Did We Actually Solve?</h3><p>The macro does some clever stuff behind the scenes. Check it out.</p><p>First, it makes sure you’re using it on a View; otherwise you get errors flagged directly in Xcode. This catches mistakes at compile time, which is nice.</p><p>Next, instead of modifying your body implementation, the macro creates a private copy and adds it to the same View; this way anything it refers to will still work as usual:</p><pre>private var _embraceOriginalBody: some View {<br>    // Exact copy of your original body<br>    \(raw: declaration.description)<br>}</pre><p><em>TBH, I couldn’t find a way to directly reference the original body implementation, so copying it seemed like the best option</em>.</p><p>This body duplication is obviously a small hiccup; it has a binary-size impact. It’s negligible, so we run with it. And I’m sure we’ll find a way around that soon enough. <em>Worried, I am not</em>.</p><p>Next is the meat of the macro. SwiftUI views have their body type defined, so you can’t just return something different from the same view. I got a lot of help from the community figuring this part out.</p><p>The first solution was to redefine the Body to AnyView. 
But AnyView causes performance issues with SwiftUI&#39;s layout system due to type erasure and other things I haven’t explored too much, so I tried to stay away from it.</p><p>Instead, the macro creates a container view:</p><pre>struct _EmbraceBodyContainer: View {<br>    let view: MyView<br>    <br>    var body: some View {<br>        view._embraceOriginalBody<br>    }<br>}</pre><p>This container holds a reference to your view and returns your original body. It avoids circular references and keeps the type system happy.</p><p>Then we redefine your view’s body type:</p><pre>typealias Body = EmbraceTraceView&lt;_EmbraceBodyContainer&gt;</pre><pre>@_implements(View, body)<br>@inline(never)<br>@ViewBuilder<br>var _embraceTracedBody: Self.Body {<br>    EmbraceTraceView(&quot;MyView&quot;) {<br>        _EmbraceBodyContainer(view: self)<br>    }<br>}</pre><p>More interesting details: the @_implements attribute tells the compiler that this new body satisfies the View protocol&#39;s body requirement instead of the original one.</p><p>By using the container approach instead of AnyView, we keep all the original type information intact. SwiftUI can still optimize your view hierarchy the same way, so you get the performance benefits plus automatic instrumentation.</p><h3>Beyond Instrumentation</h3><p>This idea isn’t limited to performance monitoring. You could use the same technique for a bunch of other things, like…</p><ul><li>@AnalyticsTrack - automatic user interaction tracking</li><li>@ErrorBoundary - error catching with fallback UI</li><li>@AccessibilityAudit - automatic accessibility improvements</li><li>@MemoryWatch - memory usage monitoring</li></ul><p>Anytime you’re adding the same boilerplate across multiple views, you could potentially build a macro for it.</p><h3>Why This Approach Works</h3><p>The main benefit is that your view code stays focused on what it’s supposed to do. 
All the monitoring happens automatically.</p><p>Once you add the @EmbraceTrace attribute, you&#39;re done. No more futzing around. No more hard-coded span names.</p><p>Since everything happens at compile time, before you even hit run, you catch mistakes immediately. And because we kicked AnyView to the curb, your views perform exactly the same as before.</p><p>You can always expand the macro to see what code is being generated, and it won’t mess with your dSYMs or debugging metadata like some runtime instrumentation systems do.</p><h3>Wrapping Up</h3><p>The @EmbraceTrace macro solves a real problem in a clean way. For those of us who&#39;ve spent way too much time adding (and forgetting to add) tracing code throughout our apps, this is a major improvement.</p><p>The real value lies in keeping your code focused and maintainable. With this approach, instrumentation becomes a one-time annotation rather than an ongoing maintenance burden. Your views stay clean, your monitoring stays consistent, and your development workflow becomes more efficient.</p><p>The Embrace SDK’s implementation of this technique is available as <a href="https://github.com/embrace-io/embrace-apple-sdk">open source</a>, so you can explore the full implementation details and adapt the approach for your own use cases.</p><p>I hope you enjoy it and find new and useful ways to use and improve it.</p><p>Cheers!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Case for Another Cache]]></title>
            <link>https://medium.com/@alexandercohen/the-case-for-another-cache-116bf28c189e?source=rss-11158e34626c------2</link>
            <guid isPermaLink="false">https://medium.com/p/116bf28c189e</guid>
            <category><![CDATA[developer]]></category>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[memory-improvement]]></category>
            <dc:creator><![CDATA[Alex Cohen]]></dc:creator>
            <pubDate>Mon, 16 Dec 2024 03:40:39 GMT</pubDate>
            <atom:updated>2024-12-16T03:48:31.955Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3KWiMtDXX_bK0ST_" /><figcaption>Photo by <a href="https://unsplash.com/@perfectmirror?utm_source=medium&amp;utm_medium=referral">PerfectMirror</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>When I say <em>“another cache,”</em> I’m not suggesting we create one. Instead, I’m proposing we acknowledge that another cache likely already exists, and we should account for it rather than ignore it.</p><p>This “other cache” can be any type of reusable object container. For instance, consider the trusty UIImageView. It has an .image property that, when set, strongly retains the image—usually an instance of UIImage that&#39;s also cached in-memory or on-disk.</p><h3>The Problem</h3><p>A user interacts with an app, loading, caching, and displaying images repeatedly. The developer diligently follows best practices to minimize memory usage while maintaining performance. Over time, memory usage grows until the OS flags the app with warnings. At this point, the app trims its caches, often by removing items based on a least recently used (LRU) pattern.</p><p>The twist? Views may still hold onto data that has been evicted from the cache. If the app survives subsequent memory warnings, it eventually needs to reload resources that are no longer in the cache but are still retained by views or containers.</p><p>This is especially common with media objects like images or videos, though it can happen with any type of object.</p><p>As a result, the same object exists in memory twice:</p><ul><li>One instance in a container (like a view).</li><li>Another instance in the cache (if reloaded).</li></ul><p>This duplication, occurring repeatedly, is a major cause of excessive memory consumption in applications.</p><h3>How Do We Fix It?</h3><p>The solution is simpler than it might seem. 
There are two approaches:</p><ol><li><strong>From the Container’s Perspective</strong><br>We could ensure containers properly release their data when no longer needed. However, this approach is error-prone and doesn’t scale well across complex systems.</li><li><strong>From the Cache’s Perspective</strong><br>A better approach is to clean up unreferenced objects before compacting the cache. This ensures we’re not holding onto anything unnecessary.</li></ol><h3>The Weak/Strong Round-Trip Solution</h3><p>To handle this, I recommend a <em>weak/strong round-trip.</em></p><p>Here’s the basic idea:</p><ol><li>Take your cached object.</li><li>Create a weak reference to it.</li><li>Clear the strong reference.</li><li>Reassign the weak reference back to a strong reference.</li></ol><p>If the object is no longer needed, it will be cleared. If it’s still in use, it remains. Then, proceed with your LRU trimming.</p><p>Here’s an example implementation:</p><pre>// A structure that holds a weak reference to an object.<br>struct WeakReference {<br>    weak var value: AnyObject?<br>}<br><br>// Start with an array of objects.<br>var items: [AnyObject] = [UIImage(), UIImage(), UIImage()]<br><br>// Create weak references for all objects.<br>var weaks = items.map { WeakReference(value: $0) }<br><br>// Clear the original container.<br>items.removeAll()<br><br>// Rebuild the array by removing nil objects.<br>// The result contains only objects still in use.<br>items = weaks.compactMap(\.value)</pre><h3>Results and Benefits</h3><p>This technique can significantly lower your app’s memory footprint and reduce the risk of termination due to high memory usage. It ensures your cache stays efficient and prevents objects from being duplicated unnecessarily.</p><p>If you’d like to see this approach in action, check out the open-source caching library <a href="https://github.com/naftaly/Arsenal"><strong>Arsenal</strong></a>. 
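</p><p>To see the round trip behave as described, here is a small self-contained demonstration; a plain class stands in for an image or other cached resource:</p>

```swift
// A plain class standing in for a cached resource (an image, say).
final class Resource {}

// Weak wrapper, as in the example above.
struct WeakRef { weak var value: Resource? }

var cache: [Resource] = [Resource(), Resource(), Resource()]

// Something outside the cache (a view, for instance) still uses one item.
let inUse = cache[1]

// The round trip: weaken, clear the strong references, re-strengthen.
var weaks = cache.map { WeakRef(value: $0) }
cache.removeAll()                  // unreferenced items deallocate here
cache = weaks.compactMap(\.value)  // only items still in use survive

print(cache.count)         // 1
print(cache[0] === inUse)  // true
```

<p>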
And if you found this useful, don’t forget to <strong>clap, follow, and subscribe!</strong></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reducing Memory Terminations in iOS Apps]]></title>
            <link>https://medium.com/@alexandercohen/reducing-memory-terminations-in-ios-apps-3e76797ca5bd?source=rss-11158e34626c------2</link>
            <guid isPermaLink="false">https://medium.com/p/3e76797ca5bd</guid>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[swiftui]]></category>
            <category><![CDATA[memory-management]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Alex Cohen]]></dc:creator>
            <pubDate>Fri, 24 May 2024 03:15:38 GMT</pubDate>
            <atom:updated>2024-05-28T00:57:37.181Z</atom:updated>
<content:encoded><![CDATA[<p>You might recognize the premise. About 10 years ago, two engineers who then worked at Facebook wrote <a href="https://engineering.fb.com/2015/08/24/ios/reducing-fooms-in-the-facebook-ios-app/">Reducing FOOMs in the Facebook iOS app</a>. It was a demonstration of incredible deductive tactics used to better understand the lifecycle of apps on Apple platforms. And it worked really well.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*hf1wtLtWRNVMRFCx" /><figcaption>Photo by <a href="https://unsplash.com/@sharonmccutcheon?utm_source=medium&amp;utm_medium=referral">Alexander Grey</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>What is important to users</h3><p>Today, we know more. We know that users care about more than what is in the foreground. For example, you might be using the phone app in the background while you sift through something in the foreground. If it came down to it, which should be terminated first: the phone app or the foreground app? I’ll go for the foreground app every time. So the OS has priorities, and the foreground app is not always at the top.</p><p>There’s another small particularity about foreground that is kind of interesting. Most apps are considered foreground when their state is active, which makes total sense. Consider what the app state is when the user taps the icon on the SpringBoard — it’s in the background until the open animation is complete. This means that most products don’t consider the user interaction of opening (or foregrounding) an app to be a foreground issue and simply disregard it. We need a new way to discuss the state of an app, or how the user perceives its state, which is much more important. 
So let’s call that <strong>User Perceptible</strong>.</p><p>User Perceptible is a binary state: did the user know about it or not (tbd if we need a care level to go with this switch — let’s not complicate things yet). This works for all sorts of situations, from traditional crashes to terminations, all the way to much more complex functional issues. From an observability standpoint, any logged span could include an extra flag specifying the perceptibility of the event being logged. A really simple example is the span from when the user starts the action of foregrounding an app to when they begin the action of changing to something else (backgrounding the app). Anything that happens within that span is important and should be considered User Perceptible. To add to that example, let’s start playing a live podcast in an app; that should begin a span that is important to the user. That span completes when the podcast ends or when the user stops listening to it, not necessarily when the app is sent to the background. In other words, the user perceives our app as important to them for as long as it’s in the foreground or that podcast is playing. Any issues that occur within a User Perceptible span are considered important to the user and should be reported so teams can address them.</p><p>I hear you. I just expanded the scope of where you need to collect issues in your product. But your users will be happier and will love your app more for it. 
You’ll also be able to prioritize issues much more effectively and spend time in areas with a much larger ROI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*1DLJkbKSdEc5ei9Y" /><figcaption>Photo by <a href="https://unsplash.com/@tolga__?utm_source=medium&amp;utm_medium=referral">Tolga Ulkan</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>OOMs</h3><p>Many issues occur during transitions from one state to another, which led to the concept of User Perceptible. It just so happens that these areas are known to be prone to OOMs; at least, we thought they were. What is actually going on is the major flaw in <a href="https://engineering.fb.com/2015/08/24/ios/reducing-fooms-in-the-facebook-ios-app/">Reducing FOOMs in the Facebook iOS app</a>. I say this with all the love in the world, because without that article the reliability of iOS apps would be far from what it is today.</p><p>That article makes OOMs the falloff, the default in a switch, the one that always gets the blame, the scapegoat. Truth is, they aren’t. We need to examine various factors during startup to understand why the app is starting, such as why the previous session was terminated. That’s all for another post (or partly that Facebook post); today it’s all about Memory Terminations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HHaC7zGWQNMBElaH" /><figcaption>Photo by <a href="https://unsplash.com/@simmerdownjpg?utm_source=medium&amp;utm_medium=referral">Jackson Simmer</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>Memory Limit and Memory Pressure</h3><p>There’s a reason an OOM was the falloff case: we didn’t have tools to know if memory was actually the real reason for a termination. 
So instead of just saying we didn’t know, we blamed memory. It was a good bet and worked most times, but it also led to many days and nights of trying to solve an issue that was just not going to be solved in that specific way. I’ve been there, I know. My wife Sarah knows, my dogs know. So let’s fix that.</p><p><strong>Memory Pressure</strong>. This has been around for a while and has undergone changes in its implementation by Apple. Today, it’s basically a mix of pressure put on the app by the OS and the other apps running (remember that music player that’s not in the foreground? It’s putting pressure on your foreground app). It is rarely going to be an issue for the foreground app, but when an app is in the background, watch out: memory pressure is what is going to cause your app to be jettisoned (terminated by the OS). Now keep in mind, this is normal and a fact of life on iOS. If your app is in the background and doesn’t have any priority enhancers (playing music, for example), it will be terminated; it’s how things work.</p><p>I always emphasize that an app should have a very small memory footprint when backgrounded. But if for any reason it is terminated, as long as it has an optimized and efficient startup and handles app restoration flawlessly, users won’t be aware of background terminations and you should be good to go. What comes next is what really counts.</p><p><strong>Memory Limit.</strong> If there’s one thing you need to be aware of, and it’s likely you aren’t, because Apple has yet to give us a way to handle this, it’s <em>how close you are to being terminated in the foreground due to memory.</em> A few years ago they provided a way <a href="https://developer.apple.com/documentation/os/3191911-os_proc_available_memory">to query how much memory is remaining</a>; that was the start of something great. 
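</p><p>The arithmetic this enables can be sketched as follows (hedged: the footprint parameter is assumed to come from task_vm_info’s phys_footprint, and the ~90% threshold is the heuristic discussed in this article, not an Apple-documented constant):</p>

```swift
import os

// os_proc_available_memory() returns the headroom the process has left
// before a foreground memory termination (iOS 13+). Adding the current
// footprint gives an approximation of the hard limit the OS enforces.
func approximateMemoryLimit(footprint: UInt64) -> UInt64 {
    footprint + UInt64(os_proc_available_memory())
}

// Hypothetical level check: past ~90% of the limit, a subsequent
// termination is very likely a memory termination.
func nearMemoryLimit(footprint: UInt64) -> Bool {
    let limit = approximateMemoryLimit(footprint: footprint)
    return limit > 0 && Double(footprint) / Double(limit) > 0.9
}
```

<p>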
We finally knew the reality of when an app would be terminated.</p><p>The <em>eureka</em> moment comes when we realize that we can add that to the app footprint (the dirty memory used within the process) and determine the real <strong>memory limit</strong>. This changes <em>everything</em>. To clarify, this means we know the absolute limit of heap our apps can use before the OS simply sends them a SIGKILL. For those who don’t know, from within an app you cannot intercept a SIGKILL signal; the app just dies and no one tells us why. Hence the falloff case in the Facebook article: it was just the last possible case… until now. This is HUGE.</p><p>I’m really excited, in case you couldn’t tell. Furthermore, we can split the memory limit into distinct levels and react to those levels at runtime. This allows us to act on user behavior as it happens, rather than when it’s already too late and the system sends out memory warnings.</p><p>With this information, and many tests done over the last few years, I can say with near certainty that when an app has used roughly 90–95% of its memory limit and is then terminated, it was terminated for hitting that limit.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wuplzBqdB-dC8QhX" /><figcaption>Photo by <a href="https://unsplash.com/@clarktibbs?utm_source=medium&amp;utm_medium=referral">Clark Tibbs</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>Just do it</h3><p>Swoosh. I want everyone to be able to use this information. Depending on your role, you either want to be able to make sure things are prioritized efficiently, or to be able to prioritize them efficiently yourself. 
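</p><p>The arithmetic above (footprint plus remaining memory gives the limit, with distinct reaction levels and a ~90% danger zone carved out of it) can be sketched in a few lines of plain Swift. The names and thresholds here are my own illustration, not Footprint’s actual API; on a device, the footprint would come from <em>task_vm_info</em>’s <em>phys_footprint</em> and the headroom from <em>os_proc_available_memory()</em>, both passed in here as plain numbers so the math stands on its own.</p><pre>
// Hypothetical level names for illustration; Footprint defines its own State type.
enum MemoryState { case normal, warning, critical, terminal }

// The real memory limit is simply what the process has already used
// (its footprint) plus what the OS says is still available.
func memoryLimit(footprint: UInt64, available: UInt64) -> UInt64 {
    return footprint + available
}

// Map current usage to a level. Past roughly 90% of the limit,
// a termination is almost certainly due to hitting the memory limit.
func memoryState(footprint: UInt64, limit: UInt64) -> MemoryState {
    guard limit > 0 else { return .normal }
    let used = Double(footprint) / Double(limit)
    switch used {
    case ..<0.50: return .normal
    case ..<0.75: return .warning
    case ..<0.90: return .critical
    default:      return .terminal
    }
}
</pre><p>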
Either way, it all comes down to having the right information in your issues, your observability systems, or built into your apps so everything else works flawlessly.</p><p>To this effect, there are currently a few ways to go about it. I have worked with the <a href="https://github.com/kstenerud/KSCrash/pull/476">KSCrash team to build OOM support directly into their crash reporting systems</a>. KSCrash has been forked by many apps and products out there, so this should be a huge step in reducing OOMs across the Apple ecosystem. If you use KSCrash, you’ll get this soon by default. I have proposed the same solution to <a href="https://github.com/firebase/firebase-ios-sdk/pull/12903">Crashlytics</a> and <a href="https://github.com/getsentry/sentry-cocoa/pull/3870">Sentry</a>, which are also leading error reporting systems. I hope they implement it sooner rather than later.</p><p>I admit I have no insight into what Apple is planning, but I expect that this WWDC (’24) will have some sort of update to MetricKit that brings more memory observability than simply reporting how many sessions were killed due to memory. I have ideas that I’ve shared. Fingers crossed.</p><p>But most importantly, <a href="https://github.com/naftaly/Footprint">Footprint</a> is my own Swift implementation of Memory Limit and Memory Pressure. You can add it to your app using Swift Package Manager and start reacting to memory changes at runtime today. 
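</p><p>For reference, the dependency declaration in <em>Package.swift</em> looks roughly like this (the version requirement is a placeholder; check the repository for the current release):</p><pre>
// Package.swift (fragment): pull in Footprint via Swift Package Manager.
dependencies: [
    .package(url: "https://github.com/naftaly/Footprint", from: "1.0.0")
]
</pre><p>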
Go forth and be a good memory citizen.</p><pre>import SwiftUI<br>import Footprint<br><br>@main<br>struct SplashApp: App {<br><br>    /// Finds a good weightLimit for our memory caches based on the current memory state.<br>    func goodMemoryLimitFor(state: Footprint.Memory.State) -&gt; UInt64 {<br>        <br>        // half the memory limit is a good start for the max usage<br>        let maxValue = Footprint.shared.memory.limit / 2<br>        <br>        let eachSlice = maxValue / Footprint.Memory.State.terminal.rawValue<br>        let currentSlice = Footprint.Memory.State.terminal.rawValue - state.rawValue<br>        return UInt64(currentSlice * eachSlice)<br>    }<br>    <br>    var body: some Scene {<br>        WindowGroup {<br>            ContentView()<br>                .onFootprintMemoryStateDidChange { state, _ in<br>                    Task {<br>                        // Anytime the memory state changes, I update the weightLimit of the<br>                        // memory caches. This does 2 things:<br>                        // 1. Ensures any memory growth we might have in the cache never gets out of bounds<br>                        // by limiting how much the cache can contain based on user behavior.<br>                        // 2. Purges memory caches to newer limits, making memory growth much slower than with<br>                        // usual `memory warnings`.<br>                        let byteLimit = goodMemoryLimitFor(state: state)<br>                        // set your cache max cost to _byteLimit_<br>                    }<br>                }<br>        }<br>    }<br>}</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3e76797ca5bd" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>