Use HDR for dynamic image experiences in your app
Discover how to read and write HDR images and process HDR content in your app. Explore the new supported HDR image formats and advanced methods for displaying HDR images. Find out how HDR content can coexist with your user interface — and what to watch out for when adding HDR image support to your app.
Chapters
- 0:00 - Introduction
- 1:10 - HDR concepts and technologies
- 9:38 - Adaptive HDR
- 18:24 - Read HDR images
- 20:51 - Edit strategies
- 26:48 - Display tone mapping
- 31:37 - Saving images
Hello and welcome! My name is Davide. This presentation continues last year's session titled "Support HDR Images in Your App." This year, we have created some exceptional technologies related to HDR images and cannot wait to share them with you. In this video, I will first introduce the concepts of headroom and tone mapping. I will then explain Adaptive HDR, a new technology that is backward compatible with SDR applications, decoders, and displays. David will then explain how to read Adaptive HDR images from a file. He will explain the process of editing images while preserving the HDR content.
You will learn that when adopted, the new APIs will enable seamless integration with the operating system and other apps, such as transitioning between background and foreground.
David will then conclude the talk by explaining the recommended ways to write an HDR image to a file.
Let's start with HDR concepts and technologies. First, what is High Dynamic Range? At its core, HDR is a set of technologies created to represent the visual world around us with greater fidelity.
It allows the capture and display of a wider range of light intensity found in real life. Compared to the Standard Dynamic Range, it can display a deeper range of colors. It also defines a set of rules or transformations that enable displaying brighter and deeper content on HDR screens. These transformations are called Tone Mapping.
What is tone mapping, you may ask? In order to understand that, we first need to understand the notion of headroom.
And to understand headroom, we need to talk about dynamic range in photography.
One of the amazing properties of the human visual system is the ability to adapt to a wide range of input stimuli, from the dim light of a night star to the extreme brightness of the sunlight.
The term Dynamic Range refers to the contrast between the brightest and the darkest tones of an image.
On a Standard Dynamic Range display, you can accurately represent only part of real-life light ranges. Even though images look good, there is a tonal compression happening to the range of the image. HDR displays, on the other hand, can represent both dark and bright tones better than SDR with fewer compromises.
For instance, specular highlights or light coming from emissive objects are better preserved.
Now, let’s zoom in on the plot at the bottom of the slide.
As we have seen, the peak brightness that HDR can represent can be much higher than the SDR peak. Per the ISO image standard, the brightest SDR signal is also known as reference white.
Reference white is approximately the brightness of a page of a book in an indoor setting, or the white background of a Keynote presentation.
An HDR display, on the other hand, lets us render specular highlights or light from emissive objects brighter than reference white.
The extra brightness is known as headroom.
In mathematical terms, headroom is the ratio between the HDR peak and the reference white. Headroom can also be represented as the logarithm of that ratio.
When in log form, we would say the headroom is 1 stop or 2 stops above reference white, to indicate that HDR is 2 or 4 times brighter.
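Written out as formulas, the two forms of headroom described above are:

$$\text{headroom} = \frac{\text{HDR peak}}{\text{reference white}}, \qquad \text{headroom}_{\text{stops}} = \log_2\!\left(\frac{\text{HDR peak}}{\text{reference white}}\right)$$

So a headroom of 1 stop means the HDR peak is twice the reference white, and 2 stops means four times.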
Now that we understand the notion of headroom, it is important to make a fundamental distinction, which will be useful later.
When a file is encoded in such a way that the data contains brightness levels above reference white, we will call it Content Headroom.
When we talk about the display's ability to show brightness levels above reference white, we will refer to it as Display Headroom. Let’s now dig deeper into the difference.
In the example above, you can see a high-dynamic-range version of a beautiful landscape. The histogram shows that the content contains approximately 2 stops of headroom above reference white. Now, an editor or a capture device like an iPhone may create a brighter rendition; the example above shows around 3 stops.
At the same time, the image can be displayed on a wide variety of screens. If the display headroom capacity is sufficient, the image will be decoded and rendered with full fidelity. In the example above, the display has 3 stops of headroom and can display all 3 stops of the content headroom. However, there are situations where the display may not be able to show the entire content headroom, for example because of the current screen brightness setting. Later in the video, we will explore other causes of reduced display headroom. In this case, the data must be manipulated first, to preserve the creator's intent and avoid clipping.

How is that done? Through a technique called tone mapping. Tone mapping adjusts the image brightness and color values to fit within the range that the medium or the display can handle, ensuring an accurate representation of the image. Tone mapping in digital photography can occur when a photo is captured or edited, and also when an image is decoded and displayed. The first stage is commonly referred to as artistic or creative adjustment.
For example, an artist would import an image, like an Apple ProRAW asset, analyze it on a reference display and viewing environment, and tone map it to either SDR or a defined HDR headroom, based on the creative intent.
A capture device such as an iPhone could also make automatic tone mapping decisions using machine learning techniques, before saving the image to a file.
The second stage is commonly referred to as display adjustment.
After an HDR image is decoded, it can be displayed on a variety of devices, including some that may have limited headroom.
And each device can be located in a variety of physical environments.
In each of these cases, the display will need to adjust the rendering of the image according to the situation.
Adjusting the content happens for the following reasons:
- The display capabilities, or how many nits the hardware can physically show.
- The current screen brightness setting, and the resulting headroom available.
- The remaining battery life of your device. In a low-battery situation, the display may need to dim to conserve power and keep the device operational.
- Coexistence. For instance, the operating system will promote the image in the foreground to HDR and tone map the images in the background to SDR.
- Or simply because you, as the developer, want to present the HDR effect of a photo in a different manner.
Now that we have introduced headroom and tone mapping, in the next section of the video I will describe three technologies and their respective standards, which are the foundation for the new features and APIs. The first was introduced last year, and it is called ISO HDR. The ISO HDR standard is the first of its kind for HDR photography. Apple played an active role in its development, and it was published in 2023. ISO HDR images have been supported in our ecosystem since last year.
Key aspects of ISO HDR include: the ability to store 10-bit Perceptual Quantizer and Hybrid Log-Gamma encodings, the definition of an HDR reference display, and the metadata needed to maintain the artistic intent of the creator.
ISO HDR images can be stored in various file formats, such as HEIF, AVIF, PNG, JPEG XL and others. For more information, please refer to the document number above.
It is important to note that ISO HDR files need to be adjusted to SDR when viewed on SDR displays. This is usually done using default tone mapping operators, such as those described in the ITU-R specifications BT.2408, BT.2446, and BT.2390. We will come back to the ITU tone mapper later in the video.
The second technology for HDR images is new this year, and we call it Adaptive HDR. You may ask, why do we need a new standard if we already have ISO HDR? Adaptive HDR builds upon ISO HDR and goes even further in three key areas.
The first one is the backward compatibility with SDR systems, decoders and applications.
The second one is the ability to store optimized representations of both HDR and SDR in a single file.
And the third is the ability to easily tone map between HDR and SDR to accommodate for the available display headroom. We will see how Adaptive HDR implements all three.
The fundamental idea behind the Adaptive HDR technology is to store in a file a fully backward-compatible SDR baseline representation of an image, together with specific metadata and a map that preserves the spatial locations of the bright areas of the scene. This map is commonly called a Gain Map, because it allows for gaining up parts of the SDR image to increase the brightness. When the Gain Map is applied to the baseline rendition, it will produce a beautiful HDR output. A sharp eye may have noticed something peculiar in the slide above: the title says Apple Gain Map, and not Adaptive HDR. This is not a mistake. Since 2020, iPhone cameras have been capturing images with embedded gain maps to enhance their appearance. Over a trillion images have been captured in this format! You can find more information about Apple Gain Map on the Apple Developer Portal.
What is new this year is that we are driving an effort to standardize the Gain Map technology.
We are standardizing the mathematical formula for creating and applying the gain map to the baseline SDR. Adaptive HDR encodes the map as the logarithm of the ratio between the HDR and the SDR signal. It also defines new metadata and how to store the new information in common file formats like HEIF and JPEG.
Adaptive HDR has now reached Committee Draft at ISO, and we are working towards the final stage, Draft International Standard. If you require more details and are an ISO member, you can locate the document number on this slide. This standard guarantees a uniform experience across software and hardware platforms, and we expect the imaging industry to adopt it widely.
I will now explain how Adaptive HDR achieves the three improvements over ISO HDR mentioned before. The first, backward compatibility, is guaranteed because the file contains an SDR baseline fully decodable by older systems or SDR-only applications.
Secondly, the file includes dual, uncompromised renditions of both SDR and HDR. This is because the Gain Map contains information for each pixel of the image.
Third, given that the Gain Map is defined as a ratio between SDR and HDR, it allows for easy tone mapping of the input content based on the display headroom.
In fact, any desired output headroom can be achieved by multiplying the SDR input by the gain map with a weight less than 1. It's as easy as that.
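In simplified per-pixel form (leaving out the offset terms the draft standard allows for), the gain map encoding and its weighted application look like this:

$$G = \log_2\!\left(\frac{\text{HDR}}{\text{SDR}}\right), \qquad \text{output} = \text{SDR} \cdot 2^{\,w \cdot G}, \quad 0 \le w \le 1$$

A weight of 1 reproduces the full HDR rendition, a weight of 0 reproduces the SDR baseline, and intermediate weights target any display headroom in between.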
It's worth noting that Adaptive HDR enhances the representation of the Gain Map signal: it can be a three-channel RGB map, providing greater control over the appearance of the image.
The standard also allows for symmetrical transformations. This means that the baseline rendition may be HDR, and the gain map can contain information to tone map to SDR.
With the release of iOS 18, we are transitioning to Adaptive HDR Gain Maps and related metadata. This new generation is based on the latest draft.
The iPhone 15 and 15 Pro will capture HDR images compliant with Adaptive HDR.
Let’s now look at the anatomy of the new Adaptive HDR files.
I want to point out that new HEIC files captured on iOS 18 will still contain only one image. In fact, calling CGImageSourceGetCount on a CGImageSource will return 1, but now a developer can request an alternate look for the image.
By default, images will be decoded to present the Standard Dynamic Range look. This is the backward-compatible look that we are familiar with.
In this case only the metadata necessary for the SDR representation will be reported, and any additional information in the file will be disregarded.
When apps request it, they can obtain the alternate representation of the image, or the HDR look. In this case, the extra information in the file, including the Gain Map, will be used and reported to the app.
In HEIF parlance, this representation is called the TMAP alternate, or the tone-mapped image. It is important to understand that the file does not contain an HDR image at all, but rather a set of ingredients and a recipe that together will create the HDR look.
We have been working with the MPEG body to formalize and standardize the Adaptive HDR file format for HEIF, which is currently included in the working draft of the second amendment of the HEIF specifications.
And we are also working with the International Color Consortium to promote Adaptive HDR as part of ICC profiles.
JPEG files also fully support Adaptive HDR, with a slightly different syntax compared to HEIF. The Adaptive HDR draft standard provides more information.
ProRAW files will also support Adaptive HDR by including the Gain Map and the new metadata in the full-size thumbnail.
As mentioned, the iPhone 15 and iPhone 15 Pro are transitioning to the new Adaptive HDR format starting with iOS 18. I will not review all the specifics of the table above, but you can pause the video to get a complete list of changes and how they may impact you.
Now that you are HDR experts, I have a bonus topic related to how the operating system renders HDR images based on the format.
Depending on the input content, there are 2 ways to tone map an HDR to SDR or a lower display headroom.
For ISO HDR images, since iOS 17 and macOS 14, the operating system has been able to adjust from HDR down to a lower headroom using default ITU global tone mapping techniques.
This year Apple has developed a new Reference White Tone Mapping Operator that preserves the output quality better than the default one. Highlight clipping is greatly reduced, and color reproduction is better maintained. This new global tone mapper will be used for ISO HDR files in the new iOS, macOS, tvOS, watchOS, and visionOS.
Adaptive HDR images, on the other hand, will be adjusted down to the display’s headroom, using a curve optimized according to the Gain Map in the file.
Before David starts sharing the new APIs, I'd like to highlight the system apps we updated this year to utilize the upcoming technologies available to you. In iOS 17 and macOS 14, the Photos app was the only application capable of rendering HDR images using the full display headroom.
On iOS 18 and macOS 15, we are adding Messages, Quick Look, and Preview.
And it is worth noting that we changed the Photos app to use all these new APIs. And now, David, the floor is yours!

Thank you for that great overview, Davide. Next, let's discuss how to convert these concepts to code in your application. When working with HDR images, there are common operations that your app is likely to support.
A full HDR pipeline involves reading, editing, displaying, and writing images. I will discuss all of these in this video.
But the first step is to read the file into memory.
The incredible thing about Adaptive HDR files is the flexibility they provide. Because of the Gain Map and associated metadata, the image can either be loaded as an SDR image for best backward compatibility or as an HDR image for maximum fidelity.
By default, reading a gain map image will load the SDR representation into memory. All you need to do is initialize a CIImage object with a URL or data. But to get the most impact from your images you will want to see them as HDR.
Last year we introduced the CIImage option expandToHDR to support the Apple Gain Map image format. This same API now also works to read Adaptive HDR files. All you need to do is provide this option when initializing the CIImage.
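A minimal sketch of both load paths, assuming a local file at a hypothetical path:

```swift
import CoreImage

// Hypothetical file URL for illustration.
let url = URL(fileURLWithPath: "photo.heic")

// Default: decodes the backward-compatible SDR representation.
let sdrImage = CIImage(contentsOf: url)

// Opt in to the HDR look; the Gain Map is applied at load time.
let hdrImage = CIImage(contentsOf: url, options: [.expandToHDR: true])
```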
This option is also available via the ImageIO API kCGImageSourceDecodeToHDR. As Davide mentioned earlier, it is critical that the image objects have an associated content headroom property. This property is necessary for the subsequent display of an HDR image. This year we have added it to key system image classes.
Querying the headroom of a CIImage is as simple as reading the new contentHeadroom property. For common SDR images, the returned headroom will be just 1. For iPhone HDR photos, the value will be greater than 1 and up to 8 depending on the scene content. For some images the property may return zero to indicate that the headroom is unknown.
Similarly, given a CGImageRef, there is a new CGImageGetContentHeadroom API to get its headroom.
IOSurfaces have an equivalent property.
And to easily get the headroom from a CVPixelBuffer, create a CIImage from it and read its content headroom.
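Sketches of the headroom queries just mentioned, assuming `hdrImage` from the earlier sketch plus a `cgImage` and `pixelBuffer` you already hold:

```swift
import CoreImage
import CoreVideo

// CIImage: 1 for SDR, greater than 1 (up to about 8) for HDR, 0 if unknown.
let ciHeadroom = hdrImage.contentHeadroom

// CGImage: the equivalent Core Graphics query.
let cgHeadroom = CGImageGetContentHeadroom(cgImage)

// CVPixelBuffer: wrap it in a CIImage and read its content headroom.
let bufferHeadroom = CIImage(cvPixelBuffer: pixelBuffer).contentHeadroom
```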
Next, let's discuss some recommendations for how to edit Adaptive HDR images.
Given the flexibility of Adaptive HDR image files, there are several strategies that you can choose from when editing, displaying and saving these images. Perhaps the most obvious is to treat the file as an SDR image. I won’t discuss this today because you are likely familiar with this approach.
Alternatively, you can treat the file as HDR as we described in last year’s session. Or you can treat the file as an SDR image and a coupled Gain Map image. Lastly, your app can treat the file as two images: One SDR and the other HDR. I’ll discuss these three strategies throughout the remainder of this video.
To use the HDR approach, read the image with the expandToHDR option to make an HDR image right when the image is loaded.
Then the HDR image can be edited using filters that preserve HDR range. Please watch “Support HDR Images in Your App” for more information on CIFilters that preserve HDR.
One point to consider is how editing an image will alter its content headroom property. For certain modifications, Core Image knows that the headroom will be unchanged. This is the case if, for example, you scale, crop, warp, or apply certain convolutions. For other modifications, Core Image does not know how the headroom will be affected. In these cases, the resulting headroom property will be zero to indicate that it is unknown.
To use the SDR and Gain approach, load both the SDR and Gain Map components of the file as two in-memory image objects. Use the auxiliaryHDRGainMap option to load the Gain Map from a file as a CIImage object.
Keep in mind that the base image is SDR so its content headroom will be 1.
When editing the SDR image, analogous edits, where appropriate, are applied to the Gain Map.
For example, if the SDR image is cropped, then the Gain Map image should be cropped too. Note that the Gain image is typically half the size of the SDR image, so edits need to account for the scale difference.
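A sketch of such a matched crop, assuming `sdrImage` and `gainImage` were loaded as described above:

```swift
import CoreImage

// Crop the SDR image.
let cropRect = CGRect(x: 100, y: 100, width: 800, height: 600)
let croppedSDR = sdrImage.cropped(to: cropRect)

// The Gain Map is typically half the SDR size, so scale the rect to match.
let gainScale = gainImage.extent.width / sdrImage.extent.width
let croppedGain = gainImage.cropped(to: cropRect.applying(
    CGAffineTransform(scaleX: gainScale, y: gainScale)))
```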
To use the SDR and HDR approach, read both representations of the file as image objects; see the sketch after this explanation.
Keep in mind that the SDR image headroom will be 1, and the HDR image headroom will be greater than 1.
When editing the SDR image, analogous edits are applied to the HDR image. As long as the edits support HDR, they can be applied to both images.
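A sketch of the dual-edit flow, reading both renditions and applying the same filter to each. The exposureAdjust filter is an illustrative choice here, assumed to preserve HDR range since it applies a linear gain:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Read both representations of the file.
let sdrImage = CIImage(contentsOf: url)!                                // headroom == 1
let hdrImage = CIImage(contentsOf: url, options: [.expandToHDR: true])! // headroom > 1

// Apply the same HDR-capable edit to both renditions.
let filter = CIFilter.exposureAdjust()
filter.ev = 0.5
filter.inputImage = sdrImage
let editedSDR = filter.outputImage!
filter.inputImage = hdrImage
let editedHDR = filter.outputImage!
```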
These three strategies have advantages and disadvantages.
The HDR strategy is simpler to implement because your code only needs to track the one image.
This strategy also has the advantage of working with ISO HDR images that don’t have a Gain Map.
On the downside, some edit operations, notably some photo blend modes, don't support HDR, so alternatives need to be used. You can tell if a built-in CIFilter supports HDR by checking the filter attribute categories.
Also, once an edit has been applied in HDR, the original Gain Map can no longer be used for tone mapping down to SDR or to a different display headroom.
The SDR and Gain strategy has the advantage that it preserves the original Gain Map. This is convenient when tone mapping or saving the image with the best backward compatibility.
This strategy works best when doing simple edits such as rotations, warps, and crops. It can even work for some edits that only support SDR.
On the downside, some types of edits cannot be applied to the Gain Map. For example, a filter that strongly alters the brightness of the SDR image won't have an appropriate effect when applied to the Gain Map.
The SDR and HDR strategy has the advantage that it allows the app to tune edits to optimize both SDR and HDR.
Also, given both SDR and HDR edits, it is possible to re-calculate a Gain Map so that the image file can be saved with maximum flexibility.
On the downside, there is the added complexity of editing two images and ensuring that both SDR and HDR edits look good.
Here’s an example of code that uses the HDR edit strategy. First, it loads the image requesting the .expandToHDR option, Next, in this example I’ve chosen to apply the vignetteEffectFilter because it fully supports HDR content. Then this code applies the filter to the image to produce a new edited image.
Here’s an example of code that uses the SDR and Gain edit strategy. First, it loads an image with no options to obtain the SDR image. Then, it loads the image again using the auxiliaryHDRGainMap option to get the Gain Map as a CIImage. This image needs to be scaled to match the size of the base SDR image.
Next, in this example I’ve chosen to apply the stretchCropFilter. This filter warps the image so it should be applied to both the SDR and Gain images.
Here the code applies the filter to the SDR image to produce an edited SDR image. And lastly, it applies the filter to the Gain image to produce an edited Gain image.
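A sketch of the whole SDR and Gain flow just described; the stretchCrop parameter values are illustrative:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Load the SDR base and the Gain Map as separate CIImages.
let sdrImage = CIImage(contentsOf: url)!
var gainImage = CIImage(contentsOf: url, options: [.auxiliaryHDRGainMap: true])!

// The Gain Map is typically half size; scale it to match the SDR extent.
let scale = sdrImage.extent.width / gainImage.extent.width
gainImage = gainImage.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

// stretchCrop warps the image, so apply it to both renditions.
let filter = CIFilter.stretchCrop()
filter.size = CGPoint(x: 1280, y: 720)   // illustrative target size
filter.cropAmount = 0.25
filter.centerStretchAmount = 0.25

filter.inputImage = sdrImage
let editedSDR = filter.outputImage!
filter.inputImage = gainImage
let editedGain = filter.outputImage!
```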
Now that you understand how to load and edit Adaptive HDR images, let’s dive into how to tone map for display.
One key challenge of displaying an HDR image is that, given the wide variety of displays and environments, you may need to tone map the image. To do this optimally, we need to know both the image headroom and the display headroom.
This year we have new system tone mapping APIs so that images look great and consistent across applications.
The great news is that for applications that use a UIImageView or a SwiftUI view, this tone mapping is automatic. Let me describe that code next.
In this example I have an Adaptive HDR image file accessible via a URL and I want to display it using SwiftUI. All I have to do is to create a UIImage using UIImageReader. This reader correctly supports various HDR file formats. Then you can create a SwiftUI Image View and specify the allowedDynamicRange modifier to determine how much of the file’s dynamic range should be displayed. It's as easy as that! Similarly, if your app uses UIKit then you can create a UIImageView with the UIImage. Then set the preferredImageDynamicRange property to specify how much dynamic range should be displayed.
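A sketch of both view-level paths, assuming `url` points at the Adaptive HDR file:

```swift
import SwiftUI
import UIKit

// Decode with UIImageReader, which handles HDR formats including Adaptive HDR.
let uiImage = UIImageReader.default.image(contentsOf: url)!

// SwiftUI: opt the view into displaying the image's full dynamic range.
struct PhotoView: View {
    let image: UIImage
    var body: some View {
        Image(uiImage: image)
            .allowedDynamicRange(.high) // or .standard / .constrainedHigh
    }
}

// UIKit: the equivalent opt-in on a UIImageView.
let imageView = UIImageView(image: uiImage)
imageView.preferredImageDynamicRange = .high
```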
If your app needs performance or control beyond that provided by UIImage, then you can display using Core Image and Metal. This is appropriate when you want to interactively change the image. When displaying Adaptive HDR images with Core Image, the best approach depends on what edit strategy was used.
Regardless of the strategy, the goal is to use the image content headroom and the display headroom to tone map so that it looks optimal for the current display state.
If your app uses the HDR edit strategy, use the new toneMapHeadroomFilter before display. If the file is an ISO HDR file, then the new Reference White Tone Mapping Operator will be used. If the file is an Adaptive HDR file, then the filter will apply a custom tone map function that is optimized according to the file’s unique Gain Map.
Here is an example of code that uses the toneMapHeadroomFilter and displays the result in an MTKView. The first step is to set up the MTKView for extended range content. Next, apply CIFilters to create an edited CIImage. Then get the current display headroom state for the view and apply the new CIToneMapHeadroom filter.
Then render the resulting tone mapped image using the CIRenderDestination API. Please watch "Display EDR Content With Core Image, Metal, and SwiftUI" for more details on how to display CIImages efficiently in an MTKView.
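A sketch of those steps, assuming an existing `metalView` and an `editedImage` from earlier filtering. The Swift spellings of the filter and its headroom properties (toneMapHeadroom, sourceHeadroom, targetHeadroom) follow the session's naming and are assumptions:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import MetalKit
import QuartzCore
import UIKit

// 1. Configure the MTKView for extended-range content.
metalView.colorPixelFormat = .rgba16Float
if let layer = metalView.layer as? CAMetalLayer {
    layer.wantsExtendedDynamicRangeContent = true
    layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
}

// 2. Tone map the edited image to the display's current headroom.
let toneMap = CIFilter.toneMapHeadroom()
toneMap.inputImage = editedImage
toneMap.sourceHeadroom = editedImage.contentHeadroom
toneMap.targetHeadroom = Float(metalView.window?.screen.currentEDRHeadroom ?? 1.0)
let displayImage = toneMap.outputImage!
// 3. Render displayImage in draw(in:) using CIRenderDestination.
```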
Alternatively, your app may choose to use the SDR and Gain edit strategy. If so, when it comes time for display, the two should be combined and tone mapped. This can be done in one operation by using the new imageByApplyingGainMap:headroom: API.
This is an example of code that edits the SDR and Gain Map images separately and displays the combined result in an MTKView. The code to set up the MTKView is the same as in the prior example. Next, apply CIFilters to create an edited CIImage, and apply appropriate filters to create an edited gain image.
Then get the current display headroom for the view, and use the applyGainMap API to combine the two images.
As before, the resulting tone-mapped CIImage can be rendered using CIRenderDestination.
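A sketch of the combine step, assuming `editedSDR`, `editedGain`, and `metalView` from the earlier examples. The Swift spelling of imageByApplyingGainMap:headroom: is assumed here:

```swift
import CoreImage
import UIKit

// Combine the edited SDR and gain images, tone mapped in one operation.
let headroom = Float(metalView.window?.screen.currentEDRHeadroom ?? 1.0)
let displayImage = editedSDR.applyingGainMap(editedGain, headroom: headroom)
// Render displayImage with CIRenderDestination, as in the previous example.
```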
If you prefer, your code can render an HDR CGImage into an EDR CGBitmapContext. The first step is to use ImageIO to read the Adaptive HDR file using the kCGImageSourceDecodeToHDR option. Then you can use the CGImageGetContentHeadroom API if needed; for example, you might want to know this if you are rendering more than one image to a context. Next, create an extended range CGContext with RGBA half-float pixels and an extended color space.
Then you can use CGContextSetEDRTargetHeadroom to tell Core Graphics how much of the context's range to use. The final step is to render the image into the context.
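A sketch of the Core Graphics path end to end, assuming `url` points at the Adaptive HDR file; the exact Swift signature of CGContextSetEDRTargetHeadroom is assumed:

```swift
import ImageIO
import CoreGraphics

// Decode the Adaptive HDR file straight to HDR.
let source = CGImageSourceCreateWithURL(url as CFURL, nil)!
let hdrCG = CGImageSourceCreateImageAtIndex(
    source, 0, [kCGImageSourceDecodeToHDR: true] as CFDictionary)!

// Useful, for example, when compositing several images into one context.
let contentHeadroom = CGImageGetContentHeadroom(hdrCG)

// Extended-range context: RGBA half float with an extended color space.
let space = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)!
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue |
                 CGBitmapInfo.floatComponents.rawValue |
                 CGBitmapInfo.byteOrder16Little.rawValue
let context = CGContext(data: nil, width: hdrCG.width, height: hdrCG.height,
                        bitsPerComponent: 16, bytesPerRow: 0,
                        space: space, bitmapInfo: bitmapInfo)!

// Limit how much of the context's range is used, then draw.
CGContextSetEDRTargetHeadroom(context, contentHeadroom)
context.draw(hdrCG, in: CGRect(x: 0, y: 0,
                               width: hdrCG.width, height: hdrCG.height))
```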
Lastly, let's discuss the recommended ways to save an HDR image to a file. The best practice for saving depends on what strategy was used for loading and editing. First, if you have loaded and edited the image as HDR, then the most modern method is to save a 10-bit HEIF file with a PQ color space. This allows the image to be saved with the best precision and range.

Alternatively, if you have loaded and edited an image as SDR and HDR, then the most compatible method is to save an Adaptive HDR file. To do this you need to call writeHEIFRepresentation with two CIImages: provide the edited SDR image and color space, and pass the edited HDR image using the new HDRImage option.
From these two images, Core Image will calculate the Gain Map and include it as an auxiliary image along with the base SDR image.
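Sketches of those two Core Image save paths, assuming `editedSDR`, `editedHDR`, and an `outputURL`; the `.hdrImage` spelling of the session's "HDRImage option" is an assumption:

```swift
import CoreImage

let context = CIContext()

// HDR-only edit: save a 10-bit HEIF with a PQ color space.
try context.writeHEIF10Representation(
    of: editedHDR,
    to: outputURL,
    colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!,
    options: [:])

// SDR + HDR edits: save an Adaptive HDR file; Core Image derives the Gain Map.
try context.writeHEIFRepresentation(
    of: editedSDR,
    to: outputURL,
    format: .RGBA8,
    colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
    options: [.hdrImage: editedHDR])
```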
If your app uses the SDR and Gain edit strategy, then you should call writeHEIFRepresentation to save both of these images. Just pass the SDR image and colorspace, and also provide the edited gain image using the HDRGainMapImage option. If the Gain Map image has the original metadata properties from when it was loaded, then these will be used when saving to the Adaptive HDR file.

Lastly, your app can use ImageIO to save an SDR CGImage and Gain Map data. The first step is to call CGImageDestinationAddImage with an SDR CGImage with an SDR CGColorSpace. The next step is to create a dictionary that describes the Gain Map. This should contain the actual pixel data of the Gain Map, a sub-dictionary that describes the height, width, and format of that data, and a CGImageMetadata that describes how the pixel data should be converted to linear gain values. In most cases you can re-use the CGImageMetadata from the source file. Then, all that is needed is to call CGImageDestinationAddAuxiliaryDataInfo and pass the new kCGImageAuxiliaryDataTypeISOGainMap key and the info dictionary.
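Sketches of both gain-map save paths. The `.hdrGainMapImage` spelling of the session's "HDRGainMapImage option" is an assumption, and `ciContext`, `editedSDR`, `editedGain`, `sdrCGImage`, `gainPixelData`, `gainDescription`, and `gainMetadata` are placeholders for values your app already holds:

```swift
import CoreImage
import ImageIO
import UniformTypeIdentifiers

// Core Image path: SDR base plus the edited Gain Map.
try ciContext.writeHEIFRepresentation(
    of: editedSDR,
    to: outputURL,
    format: .RGBA8,
    colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
    options: [.hdrGainMapImage: editedGain])

// ImageIO path: add the SDR CGImage, then attach the Gain Map as auxiliary data.
let dest = CGImageDestinationCreateWithURL(
    outputURL as CFURL, UTType.heic.identifier as CFString, 1, nil)!
CGImageDestinationAddImage(dest, sdrCGImage, nil)

let auxInfo: [CFString: Any] = [
    kCGImageAuxiliaryDataInfoData: gainPixelData,              // raw Gain Map pixels
    kCGImageAuxiliaryDataInfoDataDescription: gainDescription, // width, height, format
    kCGImageAuxiliaryDataInfoMetadata: gainMetadata            // CGImageMetadata from source
]
CGImageDestinationAddAuxiliaryDataInfo(
    dest, kCGImageAuxiliaryDataTypeISOGainMap, auxInfo as CFDictionary)
CGImageDestinationFinalize(dest)
```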
This concludes our discussion of the new Adaptive HDR file format. We've discussed in detail the features and principles behind this new format, as well as the API and strategies that will allow you to support it in your application. I hope that this video and others related to this topic will allow your apps to present amazing HDR photos to the user.