Analyze heap memory
Dive into the basis for your app's dynamic memory: the heap! Explore how to use Instruments and Xcode to measure, analyze, and fix common heap issues. We'll also cover some techniques and best practices for diagnosing transient growth, persistent growth, and leaks in your app.
Chapters
- 0:00 - Introduction
- 1:05 - Heap memory overview
- 3:45 - Tools for inspecting heap memory issues
- 7:40 - Transient memory growth overview
- 10:34 - Managing autorelease pool growth in Swift
- 13:57 - Persistent memory growth overview
- 16:00 - How the Xcode memory graph debugger works
- 20:15 - Reachability and ensuring memory is deallocated appropriately
- 21:54 - Resolving leaks of Swift closure contexts
- 24:13 - Leaks FAQ
- 26:51 - Comparing performance of weak and unowned
- 30:44 - Reducing reference counting overhead
- 32:06 - Cost of measurement
- 32:30 - Wrap up
Hello and welcome to Analyze Heap Memory! That’s Ben, and that’s Daniel! Today we're going to talk about heap memory and your app. Heap memory is used directly and indirectly by apps, and it's something that you, as a developer, can control and optimize. It's where your app's reference types are stored, and it matters because it's usually written to and dirty, which makes it count against your app's memory limits. That's why this session's focus will be on measuring and reducing heap memory.
If you're interested in learning more about other types of memory including graphics memory or memory limits, there are some other excellent talks that cover those in more detail.
So, whether your app is using too much memory or you're just curious like me, and want to take a peek under the hood, let's dive in! We're going to cover five topics today: Measuring your heap, dealing with transient growth, tracking persistent growth, fixing memory leaks, and improving runtime performance. So let's get started by asking the question! What is heap memory, and what tools can we use to measure how much our app is using? To understand the heap, we'll need to see where it fits in context within your app's overall virtual memory. When an app starts, it gets its own empty address space of virtual memory.
When the app launches, the system loads its main executable, linked libraries, and frameworks, and maps in regions of read-only resources from disk. When running, the app also uses stack areas for each thread's local and temporary variables, with dynamic and long-lived memory getting placed in memory regions collectively known as the heap. Today, we're going to focus in on this part. Let’s zoom in! The heap isn't just one block of memory; it's made up of multiple virtual memory regions as well.
Zooming in a bit more to the regions level, each region is broken up into individual heap allocations. Under the hood, each of these regions is made up of 16KB memory pages from the operating system, but each allocation can be bigger or smaller. These memory pages can be in one of three states: clean, dirty, or swapped. Clean pages are memory that hasn't been written to. This might be space allocated but unused, or pages that represent files mapped read-only from disk. These are pretty cheap, as the system can discard and fault these pages again at any time.
Dirty pages are memory that's been written to recently by the application. Unlike clean pages, dirty pages can't simply be thrown away when they go unused for a while. If there's memory pressure, the system can swap them, either compressing them or writing them to disk. That way, when they're needed again, the memory can be decompressed or faulted back in from disk.
Of these three, only dirty and swapped pages count towards an application's memory footprint, and in most applications, the heap will be responsible for the majority of that footprint.
Heap regions are memory that's created with the function malloc, or a similar allocation primitive like calloc or realloc. In many cases you don’t call these functions directly, but compilers and runtimes use them a lot, for example when you create an instance of a Swift or Objective-C class. Malloc lets your app dynamically allocate long-lived memory. Allocations stay alive until they’re explicitly freed, meaning they can live past the scope of the code that created them. These functions enforce a few rules: for example, their minimum allocation size and alignment is 16 bytes, which means that if you ask for 4 bytes, your request gets rounded up to 16. And, as a security feature, most small allocations are zeroed when they’re freed. Language runtimes use the heap to allocate long-lived memory. Swift, for example, expands this class initializer to call a series of Swift runtime functions, which end up calling malloc.
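As a quick illustration of that rounding rule, here's a small sketch (not from the session) that asks malloc for 4 bytes and checks the size of the block it actually received:

import Darwin

// Ask for 4 bytes; malloc's minimum allocation size and alignment is 16 bytes.
if let tiny = malloc(4) {
    print(malloc_size(tiny)) // expected to print 16 on Apple platforms
    free(tiny)
}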
Malloc also has a few debugging features. One of them is MallocStackLogging, which records call stacks and timestamps for each allocation. Having MallocStackLogging enabled makes it much easier to track where and when memory was allocated.
In Xcode, you can enable MallocStackLogging with a checkbox in the scheme diagnostics tab. We enabled malloc stack logging like this for all the demos we’ll do today.
For tracking memory usage, the first tool we have available is the Xcode memory report, which shows an application’s footprint over time. An application’s footprint is made up of more than just its heap, but the memory report can show large memory issues and some recent history. Unfortunately, it can’t tell you why memory use is growing. We'll need other tools to help us understand that.
Another tool we'll cover today is also a part of Xcode. The Memory Graph Debugger can capture memory graphs, which are a snapshot of all allocations and the references between them.
With MallocStackLogging, this includes backtraces for each allocation.
It’s a great tool if you need to focus on a specific allocation, and it’s accessible right from Xcode’s debug bar.
Xcode also includes some powerful command-line tools for memory analysis. Leaks, heap, vmmap, and malloc_history can analyze macOS and Simulator processes directly, or investigate issues using already-captured memory graphs. I recommend looking at these tools’ man pages to learn more about their advanced functionality.
For profiling memory use over time, the Instruments application has several templates available.
The Allocations instrument records the history of all allocation and free events over time, aggregating statistics and call-trees to help track these back to your code. The Leaks instrument takes periodic snapshots of your app’s memory to detect memory leaks. Let's take a look at how to use Allocations to investigate an issue in our DestinationVideo example app.
I’m seeing memory issues with a new feature Daniel and I are developing for the DestinationVideo app. It allows users to choose a new background image for a video.
I opened and closed a gallery of background images a few times, then saw the app crash because it used too much memory.
Each time I opened the gallery, the Xcode memory report showed memory use spiking again, up to almost a gigabyte. We can analyze this using the Allocations instrument. And I’ll test this on my device.
To profile in Instruments, I’ll use the Product, Profile menu item. This will perform a release build of my app and open Instruments with it selected as the target. When Instruments opens, it asks us to choose a template for profiling. In this case we want to profile our app’s heap, so I’ll choose Allocations. The Allocations template includes two instruments, Allocations and VM Tracker. Allocations records heap and VM events live, helping us see activity in real-time. VM Tracker can periodically snapshot to measure all virtual memory. I won’t enable it today because I’m focusing specifically on the heap. I’ll start the trace by clicking the Record button at the top left of the trace document.
As the trace begins, data starts streaming in about our app.
The track view shows that the app’s memory use is holding steady now that it’s loaded. I’ll open and close the gallery view to see what’s happening when memory use spikes.
This is a little slower than last time, but that’s expected.
I’m getting stack traces for every malloc and free. This data is going to be super useful in a minute.
I’ll stop the trace by clicking the Stop button at the top left. In the Allocations track, we clearly see the spike pattern reproducing. Now that I’ve got a trace file, I can send it to my partner who loves memory bugs. I’ll save the trace using the menu item File, Save and now, it’s Daniel’s problem.
Daniel, do you want to talk about diagnosing transient memory growth? Sure Ben! Let's take a deeper look at the memory spikes Ben recorded and see if there's something we can do about them. Memory spikes in your app are one type of transient memory growth, and this kind of growth is bad for three reasons.
Memory spikes cause memory pressure, and the system reacts: swapping and compressing dirty memory, discarding read-only memory, and even terminating background tasks. In the worst case, these spikes can mean termination for your app as well. The long-term effect of memory spikes is also bad, as they cause fragmentation, or holes, in heap memory regions.
There are two ways to track it down. We can look at a specific spike to find the allocations Created & Still Living from the low point to the high point. Or, in aggregate we can select a large range and find all allocations that were Created & Destroyed in that range. Let's try it out now with the trace Ben sent me.
I'll select one of the spike intervals in the timeline by clicking and dragging from the low point to the top of the spike in the track view, and the statistics detail below should show me some info on what's causing it. Let me sort these rows by total bytes and look for the top contributors. Even though we're focusing on heap memory in this talk, the top categories look like IOSurface virtual memory. That's a pretty good hint that our temporary memory issue is going to be with how we're handling our background images. If I sort by persistent (in this case, objects that made it to the top of the spike), there's one node that stands out to me: @autoreleasepool content. There were hundreds of these created, which for autorelease pools is a lot. I’ll come back to this in a moment. The other way of looking for our temporary memory problem is to find the code responsible for the objects created and destroyed in a larger range. At the bottom of the window, I'll change the Lifespan filter to Created & Destroyed. And in the timeline, I'll select all three spikes.
Now I can switch the detail view to a call tree using the jump bar in the middle. Call trees are a good way of breaking down the allocations by backtrace, which lets me see the code allocating the most memory.
Looking at the total, there's 8 GB of temporary allocations? Wow. Where's all this coming from? The heaviest stack trace on the right gives me a good clue about where to look. Let me make this a little wider.
The frames from my code are emphasized, and looking at this list, the makeThumbnail() code looks like a good place to start. I can click once on it to quickly disclose the call tree, or I can double-click to view the source.
Oh yeah, these are the image filters we were applying, and one line shows a huge amount of memory, gigabytes, getting created and destroyed. This should be temporary memory, but we're seeing it grow until the top of the spike and then all get freed at once. Let me go up a couple frames by first clicking back to the call tree in the jump bar. Looking up a few frames, this time I think I'll go to my ThumbnailLoader's loadThumbnails code.
It's faulting the thumbnails in a loop, and the memory is growing while the loop runs, and then falling at the end. Combined with the autoreleasepool clue from earlier, I think I know what’s going on.
Even though I'm using Swift, which has automatic reference counting, autorelease pools are a common reason for temporary memory growth. Objective-C uses these pools to extend object lifetimes for return values from functions. Autorelease pools keep these return values alive by delaying a release until later. But this also means Swift can produce autoreleased objects when it calls into frameworks that use or expose Objective-C APIs. This simple example prints the current date, but it also creates an autoreleased string. This string will live on the heap until the end of the current autorelease scope, which can be a while.
Threads usually have a top-level autorelease pool, but it’s not drained very often. This can matter a lot when code fills up the pool with objects, which easily happens in loops.
Every iteration, objects get autoreleased into the same pool, and they can live longer than necessary; in this case, until after the loop has finished. Internally, autorelease pools allocate content pages to reference the objects. Since these are visible in the Allocations instrument, they can be a good way of noticing this kind of issue. Later, when the autorelease pool is drained, the pool sends the delayed releases and lots of objects can get freed at once.
The fix for this is usually to define a nested, local autorelease pool scope to narrow down these lifetimes. In this example, autoreleased objects are held by the inner, per-loop pool and released on each iteration. This means fewer objects accumulate, and fewer content pages are needed to track the references.
Let's jump back in and see if we can fix our problem.
From Instruments, I'll use the menu at the top right of the source view to open this file in Xcode.
To fix this, let's add an autorelease pool scope in the body of the loop, which will drain the objects after each iteration.
Let's see how well this works.
Uh oh, I don't have my development phone. Hey Ben, can I use your phone to test out my fix? No! It's my phone. Why don't you use the Simulator? Fine, good point. With most profiling, it's important to run a release build on a real device for accurate timing. For heap analysis, though, the Simulator environment is a lot closer in behavior, and it's fine to use for memory profiling. I'll switch back to the memory gauge while I try out the feature Ben was showing.
In the Simulator, I'll open the gallery once and dismiss it.
The gauge looks better: memory went up, but no huge spike this time.
The second time is the same, but now I'm starting to see another pattern I don't like.
After bringing up the sheet three times, I can confirm the memory spikes are gone! We never got close to a gigabyte. But now our problem is that memory goes up each time in a stair-step pattern. That's weird, because even if creating the thumbnails is expensive, it should only grow the first time I open the gallery. I’ll push my autorelease pool fix in a minute, but I don't want to steal all the fun.
From Xcode's debug bar, I'm going to pause in the Memory Graph Debugger. This captures every single allocation in my application's heap, and if I already knew what type was causing the growth I could search for it now. Or on the right, I can share it.
While I could import this memory graph right in Instruments, I think I’d rather just airdrop it to Ben.
Maybe he'll have some ideas on finding what seems like a needle in a haystack. Good luck Ben! Nice try, Daniel, but I'm not getting rickrolled again! I'll use my own memory graph to look into this persistent growth. Persistent memory is memory that doesn’t get deallocated. Persistent growth generally looks something like this, with memory increasing over time. This growth is made up of multiple allocations.
The Mark Generation feature in the Allocations instrument can break down the growth by timespan. When I click the Mark Generation button, Instruments creates a new group for allocations. This generation collects all allocations created prior to this point that also persisted to the end of the trace. When I select a later time and click Mark Generation again, Instruments creates a new group. This next generation collects all persisting allocations made after the previous generation, and before the new timestamp. I generated my own memory graph in Xcode, and imported it in Instruments.
Instruments shows data in the Allocations, Leaks, and VM Tracker instruments. I’ll focus on the Allocations track for now. In the track view, I can see the same step pattern Daniel noted in the Xcode memory report. I'll use the generation marking feature to isolate the persistent allocations created during the growth intervals. I’ll select several times between periods of growth and press the Mark Generation button.
Instruments now shows three generations. Generations B and C show persistent growth caused by opening the gallery. I can expand one of these generations to see its allocations, and order by growth size to see what types are responsible for the most growth. It looks like most of the growth is coming from storage for Data. I can expand the entry for this type to see individual allocations and their addresses. Hah! I found the needle! It looks like all of these Data storage allocations were created by our ThumbnailLoader code. So what’s holding onto the Data? We can take one of these addresses from Instruments and put it into the Memory Graph Debugger to see what references it, which should tell us why it still exists after the gallery was closed. I’ll copy the address from the extended detail view and put it into the Memory Graph Debugger’s filter bar, and select the allocation.
To better understand what the Memory Graph Debugger is telling us, let’s talk a little about how it works.
Investigating memory growth is all about asking the question, why does this allocation still exist? What's holding on to it? It's the question the Memory Graph Debugger helps to answer. To get the most out of this tool, we'll need to talk about type information and scanning for references. There are four main types of references: Strong references, which are definitely pointers, in ARC-managed locations with explicit ownership guarantees. Weak and Unowned references, which are also definitely pointers with explicit non-ownership guarantees. Unmanaged references, which are pointers in locations that the runtime knows about, but doesn't manage automatically. These might be manual owning references, but then again, maybe not. And Uncertain, or Conservative references. These are recorded when the tools don’t know the type of the memory they’re scanning, and just see raw memory. If the value looks like a pointer, maybe it is, but without type information there's really no way to be sure. When the tools scan your process' heap, they use the best type information available for each allocation.
For this Swift Swallow example, the first two fields are standard and don't contain anything important for reference scanning. After that, we have a coconut reference to scan! This field does hold a pointer to a heap allocation; it’s a strong reference to a Coconut object! Type information for Swift and Objective-C is great, but for C and C++ there's no reference ownership information, so you'll only see conservative references. The best the tools can do is look up names for C++ types with virtual methods. An instance of this class would be seen as a Coconut.
For types without virtual methods, or other allocations, stack traces can help provide names. With MallocStackLogging data, an instance of this class might be labeled malloc in PalmTree::growCoconut(), which is a pretty good hint about what it might be.
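Going back to the Swift example for a moment, here's a rough sketch of the kind of type that slide describes; the exact declaration isn't in the session's code listings, so everything beyond the Swallow and Coconut names is illustrative:

class Coconut {}

class Swallow {
    // When the tools scan a Swallow instance, the object header fields come
    // first, followed by its stored properties. This property is an
    // ARC-managed strong reference to a heap-allocated Coconut.
    var coconut: Coconut? = Coconut()
}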
Now that we’ve talked about type information and references, let’s go back and look for why our data storage is persisting forever.
In the Memory Graph Debugger we can see that our selected allocation is being held by a __DataStorage object, which is held by a PhotoThumbnail. The PhotoThumbnail is in turn held by a dictionary. And looking all the way back, it looks like it’s being held by the static property ThumbnailLoader.globalImageCache. Because I’m running with MallocStackLogging enabled, I can see allocation backtraces in the Inspector on the right. I’ll use the Inspector to navigate to the source responsible for an allocation.
Let’s pick the PhotoThumbnail holding the data.
It looks like one of my code’s closures is responsible for allocating this. I’ll use the stack trace to jump to that code.
It looks like this faultThumbnail method is caching thumbnails and creating a new one on a cache miss. I bet it’s storing it in that globalImageCache we saw just a moment ago.
From the comment, it looks like we’re caching based on URL and creationDate, which seems reasonable. But there’s a bug! That’s clearly not the creation timestamp for the file! It’s the current time. That means we’ll never find anything in the cache, and we’ll always cache a new PhotoThumbnail each time this method is called. This explains why we’re seeing persistent growth of thumbnails! Let’s fix it by actually caching based on the file’s creation date. I’ll delete the code that uses the wrong timestamp, and now I need to get the creation timestamp of the file. Nice, Xcode suggested the code I wanted. I’ll press Tab to accept it.
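The fixed code (shown in the 19:28 listing below) calls a cacheKeyTimestamp(for:) helper whose implementation isn't part of the session; one plausible version, reading the file's creation date with URLResourceValues, might look like this:

import Foundation

// Hypothetical helper: derive the cache key timestamp from the file's
// creation date rather than the current time.
func cacheKeyTimestamp(for photoURL: URL) -> UInt64 {
    let values = try? photoURL.resourceValues(forKeys: [.creationDateKey])
    guard let creationDate = values?.creationDate else { return 0 }
    return UInt64(creationDate.timeIntervalSince1970)
}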
I’ll run the app again, to verify this fixed our issue, making sure the step pattern doesn’t show up in Xcode’s Memory Report.
Let’s try that feature again.
Okay, we’ve generated our thumbnails, now let’s try it again.
Nice, no growth. I’ll try it one more time just to be sure.
Let’s stop in the Memory Graph Debugger just to make sure there are no other problems.
I see that some memory leaks were found; they’re the allocations with the yellow triangle icons next to them.
Daniel, are you causing leaks in our code again? Yes, I am! I guess it's a good thing leaked memory is next on our list. To understand and fix leaked memory we first need to talk about reachability.
To be used in the future, all memory in your program should be reachable from somewhere through non-weak references. There are three kinds of memory on your heap.
First, useful memory, that's reachable by your program, and will be used again in the future.
Second, abandoned memory, which is reachable and could be used but won't actually ever be used again. This memory counts against your app's footprint and is just wasted. It's easy to create, for example by caching too aggressively or holding expensive data on singletons. The third type of memory in your app is leaked memory: unreachable memory that can't ever be used again. Typically this happens when the last pointer is lost, either to a manually managed allocation or to a reference cycle of objects.
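As a hedged illustration of the abandoned-memory case just mentioned, a cache held by a singleton with no eviction policy might look like this (the type and names are made up for the example):

import Foundation

// Every entry stays reachable through the static property, so it counts
// against the app's footprint even if nothing ever reads it again.
final class ThumbnailDataCache {
    static let shared = ThumbnailDataCache()
    private var storage: [URL: Data] = [:]   // never trimmed or evicted

    func store(_ data: Data, for url: URL) {
        storage[url] = data
    }
}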
For most leaks, our goal will be to find and fix one reference in a cycle. This might be removing an accidental reference or changing an ownership qualifier from strong to weak or unowned. To make investigating these leaks easier I’ll use the Show only leaked allocations button in the filter bar, which also has the triangular icon.
The navigator shows types grouped by the different binaries in our app. It’s possible for your code to leak types from system binaries, but leaks are usually caused directly by issues in the project. I'll filter down to only my project's types by clicking the other filter bar button. That’s more approachable! There are three leaks of the ThumbnailLoader class, and three of the ThumbnailRenderer. I’ll select one of these. This looks like a small reference cycle between a ThumbnailRenderer, a ThumbnailLoader, and a closure context. But what’s the closure context for? Let’s talk about that for a moment.
When Swift closures need to capture values, they allocate memory on the heap to store the captures. The Memory Graph Debugger labels these allocations as closure contexts. Each closure context in your app's heap corresponds 1:1 with a live closure.
Closures capture references strongly by default, making it possible to create reference cycles. You can break these cycles using weak or unowned captures instead.
Let's say I have a Swallow object with a completion handler that’s called when the swallow delivers a coconut. If I'm not careful, this will create a reference cycle by strongly capturing the Swallow itself. The Memory Graph Debugger would show the reference as a strong capture, but closure metadata doesn't include variable names. All references from closure contexts are simply labeled capture. Let's go back and see if we can solve our leak.
I'll open the Inspector and click on a few of these references.
The ThumbnailRenderer has a cacheProvider reference to the loader. The loader has a completionHandler referencing the closure context. And if I select the capture back to the renderer, the inspector shows me that the reference is strong.
To break this reference cycle, we’ll need to find the code that created the closure.
From the closure context’s stack trace I’ll jump to my code in the PhotosView.
This code is creating a ThumbnailLoader object, assigning it a completion handler, and then telling it to begin loading.
But the problem we just saw is that the closure is strongly capturing the ThumbnailRenderer, and this is causing a reference cycle. So what's the fix? Well, we probably should change this code to use Swift Concurrency instead of completion closures. But for now, we can specify a capture list. Either weak or unowned will break the cycle.
I’ll make the renderer capture weak, and because we now have an optional weak reference, I've added this guard let to make sure the destination, the renderer, is still around when we use it.
The fix I just made is for a 3-node cycle, but small changes like this can have big impacts. After trying out our feature again and pausing in the Memory Graph Debugger, now I don't see any of my types leaked. If I turn off the type filter, there's another nice surprise: the other leaks were resolved too! Those other types were referenced by the leaks we just fixed, and now they're getting deallocated as well. In this example, finding and fixing the leak was pretty easy. There are a lot of different ways code can leak, though, and finding leaks is probably the area where the most questions come up. Let's talk through a few of these, starting with: why doesn't leaks checking find everything? Let's say I intentionally create a leak. Why don't the tools always find it? There's a lot of memory the tools don't have type information for, and languages like C permit unmanaged pointers. This means that the tools have to allow for things that look like they might be pointers but maybe aren’t.
When the tools scan conservatively, they walk memory byte by byte, looking for values that appear to be references and then checking them against the list of allocations. If a value matches, the tools record an Uncertain, Conservative reference to the block. But keep in mind that the value could have been a number, flags, or just random bytes that looked like a valid pointer.
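As a rough, purely illustrative sketch of that matching idea (the real tools work on the target process's memory and allocation list, and scan at a finer granularity than this):

// Scan a block of raw memory for pointer-sized values that happen to match
// known allocation addresses. Assumes the buffer is pointer-aligned.
func conservativeReferences(in block: UnsafeRawBufferPointer,
                            knownAllocations: Set<UInt>) -> [UInt] {
    var uncertain: [UInt] = []
    let stride = MemoryLayout<UInt>.stride
    var offset = 0
    while offset + stride <= block.count {
        let value = block.load(fromByteOffset: offset, as: UInt.self)
        if knownAllocations.contains(value) {
            // Might be a pointer, might just be an integer that looks like one.
            uncertain.append(value)
        }
        offset += stride
    }
    return uncertain
}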
So in answer to the question, a real leak can be missed due to conservative references. If I want to create an intentional leak and see it be found, it doesn't hurt to drop it in a loop that does it 100 times. In real apps, leaking code is usually run multiple times, so even if tools don't find every leak, they still catch the bugs that cause them. Another related question, how can the reported number of leaks go up and down over time? Bugs will lead to more leaks over time, but the heap can be pretty noisy and random. This noise makes conservative references non-deterministic so they can appear or disappear. So even if your program leaks 5 objects on launch, the tools might find 5 of them at first, and only find 4 later.
Another common question is why nonreturning functions sometimes appear to leak memory. This may be C functions with the noreturn attribute or Swift functions that return the Never type. Because these functions will never return, compilers can optimize away cleanup they'd normally have to do, including releasing local allocations or references they create. When these kinds of functions are used for fatal asserts, no problem: the program's about to crash anyway! But sometimes they're used to park a thread forever. If you ever see local state being reported as leaked from a call to a noreturn function, such as the Server object in this example, one solution is to explicitly store it into a global.
By storing the object outside the local function scope, the reference ends up somewhere the tools can see. And because the tools can see it, the object will be considered reachable instead of leaked, even though the local variables aren't otherwise preserved by the compiler. Now that we've covered leaks, I'd like to hand it back to Ben to talk about runtime speed, and coconuts. Thanks Daniel! Reducing memory can greatly improve an app's performance, and there are some runtime details to keep in mind that can improve it even further.
weak and unowned are two common tools you'll use in Swift to avoid creating strong reference cycles.
Let’s talk about their differences and when to use them.
Weak references are always optional types, and they become nil after their destinations are deinitialized. You’re always allowed to use a weak reference, regardless of source and destination lifetimes. Consider the case of a swallow, and a coconut. A coconut can be carried by a swallow, but it doesn’t own the swallow.
If we want the coconut to reference the swallow we shouldn’t use a strong reference. We could use a weak reference instead.
This comes with overhead, though. To implement the weak reference, Swift allocates weak reference storage for the destination object the first time it's weakly referenced. This allocation sits between the Swallow and all its incoming weak references. It allows the weak references to be lazily nil'd out after the Swallow goes away.
Unlike weak references, unowned references directly hold their destinations. This means they don’t use any extra memory, and take less time to access than weak references. They can even be non-optional and also constant. However, it's not always valid to use an unowned reference. Let's say we make the Coconut's 'holder' reference unowned instead of weak. What happens if the Swallow goes away before our reference? The Swallow will be deinitialized, but not deallocated. This is what makes unowned references safe. The unowned reference must point to something, so the runtime keeps the ex-parrot around, or swallow, I’m mixing metaphors here. At this point, if I try to access the Swallow using the Coconut’s unowned reference, I’d get a deterministic crash. In this way, unowned references are a lot like force-unwrapping weak references. Even if I don’t access the unowned reference, it’s still bad to leave it around. As long as the unowned reference exists, its destination can’t be deallocated and wastes memory. If you don’t know how long the destination will live, the small overhead of a weak reference is worth it.
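Here's a small sketch of that difference, using the same Swallow and Coconut names; the 27:21 and 27:43 listings below show only the declarations, so the surrounding code here is illustrative:

class Swallow {}

class Coconut {
    weak var holder: Swallow?   // becomes nil once the Swallow is deinitialized
    // With `unowned var holder: Swallow` instead, accessing it after the
    // Swallow goes away would trap deterministically.
}

var swallow: Swallow? = Swallow()
let coconut = Coconut()
coconut.holder = swallow

swallow = nil                    // nothing owns the Swallow anymore, so it's deinitialized
print(coconut.holder == nil)     // prints true: the weak reference safely reads as nil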
If you’re not seeing weak or unowned references reported in your memory graph, you may need to check your project’s Reflection Metadata Level build setting in Xcode. We recommend using the default All level if possible. This setting includes all metadata the tools want and lets the tools provide much better accuracy for Swift.
Let's look at a concrete example.
This ByteProducer class has a generator property, a closure that starts out assigned to its defaultAction method.
The problem is, this creates a strong reference cycle, because the defaultAction method implicitly uses self. Be very careful when using methods as closures.
To fix this, we can define a closure which calls defaultAction(). It still performs a capture of self, but now the capture is explicit and we can use a capture list to keep it from being strong.
We need to specify a reference qualifier, and weak certainly works here as a good default.
Unowned is also fine in this case because the generator closure has the same lifetime as its destination, the ByteProducer instance.
The closure isn’t vended to other code or being dispatched asynchronously, so there's no way it can outlive the captured self.
The performance difference between these choices sometimes adds up. If I allocate a million of those ByteProducers, and export a memory graph, the heap command-line tool can provide a quick summary of the cost. There’s one weak reference storage allocation for each ByteProducer, and they use almost as much memory as the ByteProducers themselves! With unowned, this memory isn’t needed.
The point is weak references are a good default, and unowned references save memory and time when you can guarantee a reference won't outlive its destination.
To find areas where they introduce CPU overhead, profile and look for calls to runtime functions like swift_weakLoadStrong().
You can learn more about Swift’s reference counting in the “Swift Programming Language” chapter on Automatic Reference Counting.
Besides weak and unowned, sometimes automatic retain and release calls can show up as a profiling hotspot. While it might be tempting, don't circumvent ARC. There are better solutions than using unmanaged pointers or moving performance-sensitive code to a memory-unsafe language.
Make sure -whole-module-optimization is enabled, as it can reduce overhead by allowing more inlining. Also, profile and look for generics that may need explicit specialization.
It's also helpful to make sure your most-copied structs have simple fields. Profiling can help identify expensive struct copies. For these structs, try to minimize the use of reference types, copy-on-write types, and uses of any.
For more Swift performance tips, check out "Explore Swift Performance", and "Consume noncopyable types in Swift".
For Objective-C code, there are also a few ways to reduce retain and release overhead.
Again, don't circumvent ARC, as leaks from manual reference counting can be extremely hard to debug.
Mark methods as objc_direct to allow inlining of Objective-C method calls, which helps reduce retain and release traffic.
For cases where inlining isn't possible, the objc_externally_retained attribute is great for letting the compiler know when parameter lifetimes are guaranteed, eliminating retain and release.
Part of performance is being aware of observation cost. MallocStackLogging and Allocations track live data, which requires some memory and CPU to record information about every allocation. Leaks, VM Tracker, and Memory Graphs are snapshot-based, which requires the target app to be suspended during analysis. This can cause your app to briefly stutter or hang during the snapshot process.
To wrap up, today we've shown how to measure the heap with Instruments, and look for patterns of transient and persistent growth. Once you find issues with individual allocations, use Xcode's Memory Graph Debugger and MallocStackLogging to find out why they still exist in your app's heap. But most of all, please be proactive! Analyze and optimize your app's heap memory. Finding leaks and persistent growth will let users enjoy your app for longer. Thanks again for joining us!
-
-
10:01 - ThumbnailLoader.makeThumbnail(from:) implementation
func makeThumbnail(from photoURL: URL) -> PhotoThumbnail {
    validate(url: photoURL)
    var coreImage = CIImage(contentsOf: photoURL)!

    let sepiaTone = CIFilter.sepiaTone()
    sepiaTone.inputImage = coreImage
    sepiaTone.intensity = 0.4
    coreImage = sepiaTone.outputImage!

    let squareSize = min(coreImage.extent.width, coreImage.extent.height)
    coreImage = coreImage.cropped(to: CGRect(x: 0, y: 0, width: squareSize, height: squareSize))

    let targetSize = CGSize(width: 64, height: 64)
    let scalingFilter = CIFilter.lanczosScaleTransform()
    scalingFilter.inputImage = coreImage
    scalingFilter.scale = Float(targetSize.height / coreImage.extent.height)
    scalingFilter.aspectRatio = Float(Double(coreImage.extent.width) / Double(coreImage.extent.height))
    coreImage = scalingFilter.outputImage!

    let imageData = context.generateImageData(of: coreImage)
    return PhotoThumbnail(size: targetSize, data: imageData, url: photoURL)
}
-
10:23 - ThumbnailLoader.loadThumbnails(with:), with autorelease pool growth issues
func loadThumbnails(with renderer: ThumbnailRenderer) {
    for photoURL in urls {
        renderer.faultThumbnail(from: photoURL)
    }
}
-
10:33 - Simple autorelease example
print("Now is \(Date.now)") // Produces autoreleased .description String
-
11:08 - Autorelease pool growth in loop
autoreleasepool {
    // ...
    for _ in 1...1000 {
        // Autoreleases into single pool, causing growth as loop runs
        print("Now is \(Date.now)")
    }
    // ...
}
-
11:50 - Autorelease pool growth in loop, managed by nested pool
autoreleasepool {
    // ...
    for _ in 1...1000 {
        autoreleasepool {
            // Autoreleases into nested pool, preventing outer pool from bloating
            print("Now is \(Date.now)")
        }
    }
    // ...
}
-
12:16 - ThumbnailLoader.loadThumbnails(with:), with nested autorelease pool growth issues fixed
func loadThumbnails(with renderer: ThumbnailRenderer) {
    for photoURL in urls {
        autoreleasepool {
            renderer.faultThumbnail(from: photoURL)
        }
    }
}
-
17:27 - C++ class with virtual method
class Coconut {
    Swallow *swallow;

    virtual void virtualMethod() {}
};
-
17:40 - C++ class without virtual method
class Coconut {
    Swallow *swallow;
};
-
18:41 - ThumbnailRenderer.faultThumbnail(from:), caching thumbnails incorrectly
func faultThumbnail(from photoURL: URL) {
    // Cache the thumbnail based on url + creationDate
    let timestamp = UInt64(Date.now.timeIntervalSince1970) // Bad - caching with wrong timestamp
    let cacheKey = CacheKey(url: photoURL, timestamp: timestamp)
    let thumbnail = cacheProvider.thumbnail(for: cacheKey) {
        return makeThumbnail(from: photoURL)
    }
    images.append(thumbnail.image)
}
-
19:28 - ThumbnailRenderer.faultThumbnail(from:), caching thumbnails correctly
func faultThumbnail(from photoURL: URL) {
    // Cache the thumbnail based on url + creationDate
    let timestamp = cacheKeyTimestamp(for: photoURL) // Fixed - caching with correct timestamp
    let cacheKey = CacheKey(url: photoURL, timestamp: timestamp)
    let thumbnail = cacheProvider.thumbnail(for: cacheKey) {
        return makeThumbnail(from: photoURL)
    }
    images.append(thumbnail.image)
}
-
22:19 - Code creating reference cycle with closure context
let swallow = Swallow()
swallow.completion = {
    print("\(swallow) finished carrying a coconut")
}
-
23:11 - PhotosView image loading code, with leak
// ...
let renderer = ThumbnailRenderer(style: .vibrant)
let loader = ThumbnailLoader(bundle: .main, completionQueue: .main)
loader.completionHandler = {
    self.thumbnails = renderer.images // implicit strong capture of renderer causes strong reference cycle
}
loader.beginLoading(with: renderer)
// ...
-
23:40 - PhotosView image loading code, with leak fixed
// ...
let renderer = ThumbnailRenderer(style: .vibrant)
let loader = ThumbnailLoader(bundle: .main, completionQueue: .main)
loader.completionHandler = { [weak renderer] in
    guard let renderer else { return }
    self.thumbnails = renderer.images
}
loader.beginLoading(with: renderer)
// ...
-
24:24 - Intentional leak of manually-managed allocation
let oops = UnsafeMutablePointer<Int>.allocate(capacity: 16) // intentional mistake: missing `oops.deallocate()`
-
25:12 - Loop over intentional leak of manually-managed allocations
for _ in 0..<100 {
    let oops = UnsafeMutablePointer<Int>.allocate(capacity: 16)
    // intentional mistake: missing `oops.deallocate()`
}
-
26:11 - Nonreturning function in which allocations owned by local variables can be reported as leaks
func beginServer() {
    let singleton = Server(delegate: self)
    dispatchMain() // __attribute__((noreturn))
}
-
26:22 - Fix for reported leak in nonreturning function
static var singleton: Server?

func beginServer() {
    Self.singleton = Server(delegate: self)
    dispatchMain()
}
-
27:21 - Weak reference example
weak var holder: Swallow?
-
27:43 - Unowned reference example
unowned let holder: Swallow
-
29:07 - Implicit use of self by method causes reference cycle
class ByteProducer {
    let data: Data
    private var generator: ((Data) -> UInt8)? = nil

    init(data: Data) {
        self.data = data
        generator = defaultAction // Implicitly uses `self`
    }

    func defaultAction(_ data: Data) -> UInt8 {
        // ...
    }
}
-
29:25 - Break reference cycle caused by implicit use of self by method, using weak
class ByteProducer {
    let data: Data
    private var generator: ((Data) -> UInt8)? = nil

    init(data: Data) {
        self.data = data
        generator = { [weak self] data in
            return self?.defaultAction(data)
        }
    }

    func defaultAction(_ data: Data) -> UInt8 {
        // ...
    }
}
-
29:41 - Break reference cycle caused by implicit use of self by method, using unowned
class ByteProducer {
    let data: Data
    private var generator: ((Data) -> UInt8)? = nil

    init(data: Data) {
        self.data = data
        generator = { [unowned self] data in
            return self.defaultAction(data)
        }
    }

    func defaultAction(_ data: Data) -> UInt8 {
        // ...
    }
}
-
31:14 - Struct with non-trivial init/copy/deinit
struct Nontrivial {
    var number: Int64
    var simple: CGPoint?
    var complex: String // Copy-on-write, requires non-trivial struct init/copy/destroy
}
-