Dive deep into volumes and immersive spaces
Discover powerful new ways to customize volumes and immersive spaces in visionOS. Learn to fine-tune how volumes resize and respond to people moving around them. Make volumes and immersive spaces interact through the power of coordinate conversions. Find out how to make your app react when people adjust immersion with the Digital Crown, and use a surrounding effect to dynamically customize the passthrough tint in your immersive space experience.
Chapters
- 0:00 - Introduction
- 2:04 - Volumes
- 2:06 - Volumes: Baseplate
- 4:08 - Volumes: Size
- 6:59 - Volumes: Toolbars
- 8:48 - Volumes: Ornaments
- 11:36 - Volumes: Viewpoints
- 15:34 - Volumes: World alignment
- 16:52 - Volumes: Dynamic scale
- 18:26 - Intermezzo
- 18:42 - Immersive spaces
- 19:38 - Immersive spaces: Coordinate conversions
- 22:40 - Immersive spaces: Immersion styles
- 26:08 - Immersive spaces: Anchored UI interactions
- 29:03 - Immersive spaces: Surroundings effects
- 31:21 - Next steps
Hello! My name is Owen, and I’m a SwiftUI engineer. And I’m Troy. I’m also a SwiftUI engineer. In this video, we’ll be taking a deep dive into working with 3D content in volumes and immersive spaces on visionOS.
visionOS has three scene types: Windows, Volumes, and Immersive Spaces. All three can be used together to create unique and exciting experiences. Today, we’ll be focusing on Volumes and Immersive Spaces. These scene types are unique to visionOS, and used for rich, immersive 3D content. Volumetric apps using these scene types are one of the most exciting and unique features of Apple Vision Pro. They enable a new dimension for apps and games, letting people experience your app existing in the real world.
There are already many delightful spatial experiences available on visionOS. These apps take advantage of the third dimension, displaying information in clever ways and providing fun and playful interactions that can both mimic the real world and do entirely new things.
These apps take advantage of spatial API built for visionOS. And now in visionOS 2, we have added lots of new features to let you bring even more life to your volumetric apps. Troy and I will be going through building a new volumetric app, called Botanist. I’m going to start by adding a volume and building it out using new API to make it really shine. After that, Troy will expand the app to fill the room using an immersive space.
So let’s get started with a Volume! When I create a new volumetric app, or link an existing app against the visionOS 2 SDK, one of the first things to notice is the new baseplate.
It appears automatically when looked at, highlighting the bottom edges of the volume.
Here’s the baseplate in action on my brand new volume. It gently guides me to the edges of the volume, so I know immediately how much space I’m working with.
Baseplates are great for content that sits inside a volume but doesn't fill its bounds. They give people a sense of the edges of the volume even when the content occupies a smaller amount of space inside it. However, for cases where your app's content already pushes out to the bounds of the volume, or if you already draw your own surface, it's better to disable the baseplate so that it doesn't conflict with your app, and let the content itself guide people to the edges.
In visionOS 2, the baseplate is enabled by default, and can be controlled using the volume baseplate visibility modifier. The automatic behavior in visionOS 2 fades in the baseplate when it's looked at. This is the equivalent of writing the modifier with visible. The baseplate can also be explicitly turned off by setting it to hidden.
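As a quick sketch (not one of the session's listings), hiding the baseplate for an app that draws its own surface could look like this, reusing the Botanist scene from the listings below:

// Hiding the baseplate when the content already provides its own surface
WindowGroup(id: "RobotExploration") {
    ExplorationView()
        .volumeBaseplateVisibility(.hidden)
}
.windowStyle(.volumetric)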
Starting with the baseplate gives me an immediate sense of the bounds of my volume, even without any additional content.
And now as I add in the circular level for the Botanist app, the baseplate helps me to find the edges and corners of the volume, where the window controls are. This is especially important in visionOS 2, because volumes now have new resize handles at the corners, just like windows do.
Notice how the baseplate guides me to the resize handle at the corner. But when I try to resize the scene, it just springs back to its original size. What's going on here? Volumes inherit their minimum and maximum sizes from the size of their content by default. In SwiftUI, this is the behavior provided by the windowResizability modifier with the contentSize behavior. The automatic behavior for volumes is as if this modifier were written on them. And this means the volume's minimum size and maximum size are both going to come from the size of its views.
On ExplorationView, I have written a frame with a specific width, height, and depth. Because of this, the volume’s size is fixed to match that view’s frame.
If I change this to instead specify minimum values for the frame, the view is saying it can resize larger. And because the volume inherits the size of its content, it will now also allow resizing.
Specifying a maximum size will also cap the size of the volume.
Now when I drag on the resize handles, the volume smoothly resizes.
That’s pretty neat! This behavior means you can also drive the size of your volume in code. As your app updates its content, any changes to the frames of your views inform the volume about its size as well. This means it’s easy to accommodate changing content without worrying about clipping at the bounds of the scene. And because people look for the resize handles at the corners close to your content, this also ensures those controls are never too far away.
To change the size of my volume programmatically, I add a new state variable for the scale.
When I change the value of my scale variable, it updates the frame values of my view, and the volume will automatically resize itself to fit that new size.
I also scale the RealityKit entity for the circular exploration level by that amount. Now, I need a control to change the scale.
I add a button to toggle the size, and for now I just put it into an overlay.
Now, when I press the button, the level switches between its small and large size, and the volume adjusts its bounds to match. However, the button itself looks a little bit out of place here. It would fit perfectly in a toolbar.
Volumes support toolbars that float below them in an ornament. This is a great and easy way to provide a collection of common app controls.
This toolbar automatically scales as the volume is moved so its content remains accessible regardless of the placement of the volume. A toolbar like this is the best way to group together common app controls, so I’ll go ahead and add one to the app.
To put controls into the toolbar, I write a toolbar modifier on a view of the app.
Inside the toolbar, I can create toolbar items and toolbar item groups.
I specify a .bottomOrnament placement for each item. For visionOS 1 compatibility, this is required. But in visionOS 2 the automatic placement will also resolve to the bottom ornament. If your app only targets visionOS 2, you can leave the argument out.
Inside the toolbar items, I add my buttons to perform different actions in the game.
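Here's a minimal sketch of that toolbar with the explicit placement; the buttons reuse actions from this app, and the full listing appears at 7:39 below. On visionOS 2 alone, the placement argument could be omitted:

ExplorationView()
    .toolbar {
        ToolbarItem(placement: .bottomOrnament) {
            Button("Next Size") {
                levelScale = levelScale == 1.0 ? 2.0 : 1.0
            }
        }
        ToolbarItemGroup(placement: .bottomOrnament) {
            Button("Replay") { resetExploration() }
            Button("Exit Game") { exitExploration() }
        }
    }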
And just like that, my controls appear in the toolbar! And new in visionOS 2, the toolbar automatically moves to the side of the volume I'm viewing it from, along with the window controls. This encourages people to get a new angle on the experience, while keeping all the tools exactly where they're needed. Alongside the toolbar, you can now add additional ornaments to a volume. Ornaments are great for providing extra controls, or more detail about the current content.
They help to reduce clutter in the main window by floating auxiliary information above and around the window of an app. Windows can place ornaments anywhere: not just as a toolbar, but at any position around the window. And now, volumes can do the exact same, including control over how deeply the ornament is positioned on the volume. Ornaments also dynamically scale, keeping them at a comfortable size when the volume is moved further away.
Ornaments offer a lot of flexibility, but it’s important to not overdo it.
Putting many ornaments around the volume can crowd out your incredible content, the real hero of your app. A single ornament makes a great container for a group of controls and information. Additionally, some system-provided controls, such as the toolbar and TabView, are also presented in ornaments, so make sure your custom ornaments don't conflict with these.
In the app, I added this view that shows the player’s progress toward their planting goals. Right now, that view is inside the Volume’s main view.
Because of this it doesn’t update as I walk around the volume, and if I move the volume further away, this view gets smaller and harder to read. It would be great to pull this view out into an ornament, so I get those automatic behaviors. Here, the view is inside the body of the volume. To put it inside an ornament, I move the view into an ornament modifier.
I provide a scene anchor with a UnitPoint3D, which will account for the width, height, and depth of the volume when placing the ornament. In my case, a topBack placement puts the ornament centered above and behind the main level. Let’s try it out in the app.
Here it is, floating behind the main level. Just like the toolbar, all ornaments will also follow around the volume, making sure they’re always accessible from any direction. Its position in the scene is always determined relative to the side I’m viewing the volume from.
Each side of the volume is a viewpoint. As you move around a volume, the window controls and ornaments automatically move to the viewpoint closest to you.
The system automatically updates the positions of my ornaments based on the current viewpoint, but now that I’ve added my little robot friend, it just faces the front. It doesn’t know where I am. It would bring a sense of life to the app if the robot could also face me as I move around.
First, in order to get updates on the viewpoint, I add a new modifier, onVolumeViewpointChange. It gets called whenever the active viewpoint updates. Using this, I set the variable in the app state that tracks the active viewpoint. When the robot updates, it will use this value to move in the world and face the current viewpoint.
I use the squareAzimuth value of the viewpoint. This type normalizes the position around a volume to one of four values, representing the four faces of the volume.
The four sides of the Square Azimuth provide semantic values for front, left, right, and back. These semantic values also include a specific Rotation3D, which can be applied directly as a rotation onto views and entities.
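As an illustrative sketch (not from the session), one way to apply such a rotation to a SwiftUI view is with rotation3DEffect; the specific yaw angle chosen for each azimuth here, and the names RobotStatusView and activeViewpoint, are assumptions for illustration:

// Hypothetical helper: map the active square azimuth to a yaw rotation.
// The angle per azimuth is an assumption for illustration.
func rotation(toFace viewpoint: Viewpoint3D.SquareAzimuth) -> Rotation3D {
    let yaw: Angle2D
    switch viewpoint {
    case .left:  yaw = .degrees(90)
    case .right: yaw = .degrees(-90)
    case .back:  yaw = .degrees(180)
    default:     yaw = .degrees(0) // .front
    }
    return Rotation3D(angle: yaw, axis: .y)
}

// Applying it so a view faces the current viewpoint
RobotStatusView()
    .rotation3DEffect(rotation(toFace: appState.activeViewpoint))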
In the code that handles the robot’s movement, I’ve added a bit of code to turn the robot toward that position when it is in idle mode. Then I fire off a little waving animation.
And now, the robot turns to face me.
And it gives me a little wave! This makes the app really come alive and feel more vibrant.
Not all apps want to support every viewpoint, however. I want to limit the viewpoints only to the front and sides.
To specify the supported viewpoints, I use another new View modifier, supportedVolumeViewpoints. By default all viewpoints are supported.
In my case, I only want to support the front and sides of the volume, and not the back. To do this, I pass in an option set containing the front, left, and right values.
Now, the ornaments and window controls will not move to the back of the volume when I move there. Right now, the robot stops as well. Since the robot has been reactive to my movement up until now, I feel like it should let me know I’m on the wrong side.
In order to still have my volume viewpoint change block get called, even when the new value is not in the set of supported viewpoints, I add a new argument for viewpoint update strategy. By specifying all, I will get updates for all viewpoints, including unsupported ones. I check if the new value is in my supported set, and if it isn’t, I trigger a new animation on the robot, to give an indication that I should move back around to one of the supported viewpoints.
Now as I walk to the back of the volume, my ornaments all stop on the last supported side and the robot gets upset with me, wanting me to move back around.
There we go, that’s better! There are also a couple of new options to control how volumes present themselves within the world. The first one of these is world alignment. This controls whether the volume remains gravity aligned, keeping its base parallel to the floor, or tilts itself down as the volume is raised. In most cases, the adaptive tilting behavior will be most comfortable. This is the default in visionOS 2. The volume begins parallel to the ground, but as it is lifted above the horizon, it begins to tilt itself. This keeps the contents of the volume usable even from a reclined position, which is much more comfortable for interactive content.
However, some volumetric apps don’t require a lot of interaction, or are designed to provide more ambient content.
In these cases, the gravity-aligned behavior is better.
The volumeWorldAlignment modifier allows overriding the adaptive alignment, keeping the volume aligned with the floor.
Also, volumes can now use dynamic scale. Windows in visionOS change their scale as they get moved around in the world. As the window gets moved further away, it scales up to maintain its size in your field of view. This is useful because windows often contain text and controls that would get difficult to use at further distances.
This is the same behavior as the toolbar and ornaments on volumes.
Volumes themselves, on the other hand, default to fixed scale, which helps to increase their sense of presence in the world.
They remain a fixed size as they get moved further away, so their content appears smaller at a distance.
For a lot of volumetric apps, this looks great, because it lets you visualize the virtual content in your room, like it was right there with you.
However, volumetric experiences that rely on dense content with a large number of different interactive areas also benefit from dynamic scale.
To make a volume use dynamic scale, apply the new scene modifier, defaultWorldScalingBehavior. Because Botanist is an interactive game, it makes sense for the level to be dynamic scale, so I enable this behavior with the .dynamic option.
This is a great start. We have a very fun app in a volume. You've certainly spoken volumes about that. Alright, so what's next? I want to bring the robot out into the real world with an immersive space. That sounds out of this world.
With Immersive Spaces, developers are building rich experiences on Apple Vision Pro. I'm really enjoying the progress Owen made to allow the botanist to explore the volume in the shared space. Next, I'm going to go beyond the window and create a rich immersive experience that enables the greenhouse to fill the room.
The very first thing to do is create the immersive space itself. There are options to configure this in the New Project dialog in Xcode, but in this case I add it myself.
I have an immersive space, but it’s empty right now.
When the immersive space is opened, I'll transplant all of the RealityKit content out of the volume and into the immersive space. This should happen seamlessly, giving surprise and delight when the botanist is able to start exploring around in the real world. There's a named coordinate space especially for this, introduced in visionOS 1.1, called Immersive Space.
Coordinate spaces are a tool to precisely specify position relative to a particular frame of reference.
The new immersive coordinate space fits in alongside SwiftUI's existing local and global coordinate spaces.
Local refers to the current view’s coordinate space, with the origin at the top left of the view.
Global refers to the window’s coordinate space, with the origin at the top-left of the window.
For volumes, the origin is at the top-left-back.
Immersive Space sits above Global and has its origin defined as the point on the ground below you while the immersive space is open. I'll use coordinate spaces with RealityView to build the transition to the immersive space. RealityView provides a host of convert functions that I'll use to go between RealityKit and SwiftUI coordinate spaces. First, I'll handle converting the robot's transform from RealityKit scene space for the volume to SwiftUI immersive space. In the update closure of my RealityView, I call out to my first convert function.
Here I convert the robot's transform from local RealityKit scene space for the volume to SwiftUI's immersive space. Then I store the converted transform in the app model for later use.
I then re-parent the robot from the volume to the root entity of the immersive space. With the robot reparented and the transform to take the robot from the volume calculated, I mark the conversions from the volume view as completed. I'll continue the conversions in the immersive space view.
In the immersive space view, I'll call another convert function.
There, I calculate a transform to go from SwiftUI immersive space to RealityKit scene space. Then I compose these two transforms. I do that by multiplying the one I just computed by the one I stored earlier on my app model. The result converts from the local coordinate space in the volume to the one in the immersive space. I update the robot's transform to place the robot in the immersive space to match where it appeared in the volume.
With the robot's transform converted, I start the jump.
The transition is now ready for action! With the power of coordinate conversion APIs, the robot is able to jump out of the volume and into the world of the immersive experience. It stuck the landing! Next, I'll pick an immersion style for the Botanist app. By default, the immersive experience starts with a mixed style, displaying the app in the context of your surroundings. The progressive style provides a bridge between passthrough and fully immersive, using a radial portal to display the app while allowing passthrough around it. In the full style, the immersive app completely replaces the surroundings. I'll pick the progressive style.
It's a great fit for enabling the botanist to explore the world.
When adopting the progressive style in the Botanist app, by default the initial size of the portal covers around half of a person's field of view. The system also defines the supported immersion range by providing fixed minimum and maximum immersion amounts.
I need to make the Botanist app start out more immersive. New in visionOS 2, that can be achieved with custom immersion amounts that let me dial the immersion down, or up, to really showcase the botanist leaping out of the volume into the immersive space. Let's dive into this new API.
I start by using a new initializer to create a progressive immersion style that takes in a custom immersion range as well as a value for the initial amount. When the progressive style is applied to an Immersive Space, the system will use the provided values to define the minimum, maximum and initial value of the progressive effect applied to the scene.
I need the immersive experience for the Botanist app to start out more immersive so I pick an initial immersion amount of 80%. For the custom immersion range in the Botanist app, I specify a minimum of 20% with a maximum of 100% corresponding to full immersion.
Let's check out custom immersion amounts in action. Starting out more immersive really helps highlight the botanist jumping out of the volume. This looks great through the whole range I specified. I can use the Digital Crown to dial in my experience.
Next, I want to make the botanist react as the Digital Crown is used to dial through the supported immersion amounts. Use the onImmersionChange modifier to react to changes to the immersion amount. This provides a context value with the new immersion level.
When the immersion changes, I read the value from the provided context. For Botanist, I store it so I can compare the before and after values.
I use onChange to handle changes to the stored immersion amount, extracting the new and old values from the closure to pass along.
To make the robot react to changes in immersion amounts, I call a function that triggers the robot to move outward when the immersion amount increases. And I call a function that triggers the robot to move inward when the immersion amount decreases. Let's check it out! Now the botanist reacts as the immersion level changes. It moves toward me as I increase the immersion and away from me as I decrease it.
Let's dial the immersion back up again. The botanist now reacts to move out and explore the expanding space as the immersion amount increases, but it feels like the environment is a little sparse at the moment.
To address that, I'll make the floor tappable to enable placing plants relative to the floor of the environment for the botanist to explore. To place plants at a specific location on the floor, I'll need to place the plants relative to a floor anchor and for that I'll need the 3D location of the anchor to be available.
With the new Spatial Tracking Session API in RealityKit, your app can access the 3D locations of anchors, while letting people authorize which anchor capabilities the app can track.
To use this API, first I create a spatial tracking session.
Then I create a task that calls a function to run the spatial tracking session when the immersive space is opened.
To run the session, I first set up a configuration for plane anchor tracking.
Then I run the session with the configuration to prompt for authorization to receive plane anchor transforms.
Now that I’ve registered for plane tracking, I need to add an anchor to track.
I specify a horizontal alignment for a target plane with floor classification to create a floor anchor entity to track.
Then I add the floor anchor to the RealityView content in the immersive space. Finally, I need to use the 3D location of my new anchor to place plants in my room.
I add a spatial tap gesture that can be used to detect taps on targeted entities in the immersive space.
When the gesture ends, I pass the gesture value along to a function that handles the tap.
To handle the tap, I first use the convert function on the gesture value to get the location of the gesture relative to the floor anchor. This step requires the transform of the floor anchor to be available to the app.
Finally, I can place the plant by adding the plant as a child of the floor anchor and using the converted location to set the plant entity's position.
Now, I pick a plant from the list. The hover effect on the floor indicates where the plant will be placed. I just tap, and a plant is placed! I love how placing plants in the room makes the Botanist app feel more alive.
For a deeper dive on the Spatial Tracking Session API, check out the session titled "Build a spatial drawing app with RealityKit".
The immersive greenhouse experience is really starting to come together. Check out how the plants play a growth celebration animation as they're visited by the botanist.
I'd like to add to the celebration. Currently, each plant in our environment is placed in a planter with an associated tint color. Tinting the passthrough to match the planter's tint color is a great way to add to the celebration.
The preferred Surroundings Effect API can be used to tint the surrounding passthrough. I'll update the immersive experience in the Botanist app to tint the passthrough using the tint color of the planter for the actively celebrating plant. First, I'll pick the tint colors to complement the planters.
I add a tint color property to the custom Plant Component. Then I switch over the plant type to choose a tint color. For example, light blue for coffee berry.
To trigger the tint effect, I need to detect when the botanist is near a planter. Collision detection with RealityKit is the right tool for this job.
When handling the robot's movement over time, I use the collision closure to process hit entities. I then pass the collision value to a helper function.
To learn more about collision detection with RealityKit, check out the session titled "Develop your first immersive app".
In my helper function, I first check whether the botanist has collided with a plant, returning if not.
Then I play the celebration animation.
Finally, if I'm in an immersive space, I store the active tint color on the app model for later use.
Back in the immersive view, I create a color multiply Surroundings Effect based on the stored active tint color. And tint the passthrough using that surroundings effect.
Let's check out the updated celebration colors that now tint the passthrough as plants are visited by the botanist.
Magenta for poppy.
Light green for yucca.
Light blue for coffee berry. Nice work, botanist! In this session, we've covered a lot of new ways to build volumes and immersive spaces in your app. Try the new resizability behaviors to fine-tune how volumes resize in your app. Use viewpoints to allow your app to respond to people moving around your volumes. Break out of the volume and into an immersive space with the power of coordinate conversions. Respond to changes in immersion level in your immersive space. Spatial apps are an entirely new frontier for experiences people have never dreamed of. With the powerful, expressive tools of SwiftUI and RealityKit, the possibilities are limitless. And with a bit of whimsy and imagination, you can make something that astounds. Thanks!
-
3:09 - Baseplate
// Baseplate
WindowGroup(id: "RobotExploration") {
    ExplorationView()
        .volumeBaseplateVisibility(.visible) // Default!
}
.windowStyle(.volumetric)
-
4:29 - Enabling resizability
// Enabling resizability
WindowGroup(id: "RobotExploration") {
    let initialSize = Size3D(width: 900, height: 500, depth: 900)
    ExplorationView()
        .frame(minWidth: initialSize.width, maxWidth: initialSize.width * 2,
               minHeight: initialSize.height, maxHeight: initialSize.height * 2)
        .frame(minDepth: initialSize.depth, maxDepth: initialSize.depth * 2)
}
.windowStyle(.volumetric)
.windowResizability(.contentSize) // Default!
-
6:10 - Programmatic resize
// Programmatic resize
struct ExplorationView: View {
    @State private var levelScale: Double = 1.0

    var body: some View {
        RealityView { content in
            // Level code here
        } update: { content in
            appState.explorationLevel?.setScale(
                [levelScale, levelScale, levelScale],
                relativeTo: nil)
        }
        .frame(width: levelSize.value.width * levelScale,
               height: levelSize.value.height * levelScale)
        .frame(depth: levelSize.value.depth * levelScale)
        .overlay {
            Button("Change Size") {
                levelScale = levelScale == 1.0 ? 2.0 : 1.0
            }
        }
    }
}
-
7:39 - Toolbar ornament
// Toolbar ornament
ExplorationView()
    .toolbar {
        ToolbarItem {
            Button("Next Size") {
                levelScale = levelScale == 1.0 ? 2.0 : 1.0
            }
        }
        ToolbarItemGroup {
            Button("Replay") {
                resetExploration()
            }
            Button("Exit Game") {
                exitExploration()
                openWindow(id: "RobotCreation")
            }
        }
    }
-
10:41 - Ornaments
// Ornaments
WindowGroup(id: "RobotExploration") {
    ExplorationView()
        .ornament(attachmentAnchor: .scene(.topBack)) {
            ProgressView()
        }
}
.windowStyle(.volumetric)
-
12:08 - Volume viewpoint
// Volume viewpoint
struct ExplorationView: View {
    var body: some View {
        RealityView { content in
            // Some RealityKit code
        }
        .onVolumeViewpointChange { oldValue, newValue in
            appState.robot?.currentViewpoint = newValue.squareAzimuth
        }
    }
}
-
13:06 - Using volume viewpoint
// Volume viewpoint
class RobotCharacter {
    func handleMovement(deltaTime: Float) {
        if self.robotState == .idle {
            characterModel.performRotation(toFace: self.currentViewpoint, duration: 0.5)
            self.animationState.transition(to: .wave)
        } else {
            // Handle normal movement
        }
    }
}
-
13:43 - Supported viewpoints
// Supported viewpoints
struct ExplorationView: View {
    let supportedViewpoints: Viewpoint3D.SquareAzimuth.Set = [.front, .left, .right]

    var body: some View {
        RealityView { content in
            // Some RealityKit code
        }
        .supportedVolumeViewpoints(supportedViewpoints)
        .onVolumeViewpointChange { _, newValue in
            appState.robot?.currentViewpoint = newValue.squareAzimuth
        }
    }
}
-
14:30 - Viewpoint update strategy
// Viewpoint update strategy
struct ExplorationView: View {
    let supportedViewpoints: Viewpoint3D.SquareAzimuth.Set = [.front, .left, .right]

    var body: some View {
        RealityView { content in
            // Some RealityKit code
        }
        .supportedVolumeViewpoints(supportedViewpoints)
        .onVolumeViewpointChange(updateStrategy: .all) { _, newValue in
            appState.robot?.currentViewpoint = newValue.squareAzimuth
            if !supportedViewpoints.contains(newValue) {
                appState.robot?.animationState.transition(to: .annoyed)
            }
        }
    }
}
-
16:42 - World alignment
// World alignment
WindowGroup {
    ExplorationView()
        .volumeWorldAlignment(.gravityAligned)
}
.windowStyle(.volumetric)
-
18:05 - Dynamic scale
// Dynamic scale
WindowGroup {
    ContentView()
}
.windowStyle(.volumetric)
.defaultWorldScalingBehavior(.dynamic)
-
19:16 - Starting with an empty immersive space
struct BotanistApp: App {
    var body: some Scene {
        // Volume
        WindowGroup(id: "Exploration") {
            VolumeExplorationView()
        }
        .windowStyle(.volumetric)

        // Immersive Space
        ImmersiveSpace(id: "Immersive") {
            EmptyView()
        }
    }
}
-
20:52 - Callout to convert function from volume view
// Coordinate conversions
// Convert from RealityKit entity in volume to SwiftUI space
struct VolumeExplorationView: View {
    @Environment(ImmersiveSpaceAppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(appModel.volumeRoot)
            // ...
        } update: { content in
            guard appModel.convertingRobotFromVolume else { return }

            // Convert the robot transform from RealityKit scene space for
            // the volume to SwiftUI immersive space
            convertRobotFromRealityKitToImmersiveSpace(content: content)
        }
    }
}
-
21:08 - Convert robot's transform to SwiftUI immersive space
// Coordinate conversions
// Convert from RealityKit entity in volume to SwiftUI space
func convertRobotFromRealityKitToImmersiveSpace(content: RealityViewContent) {
    // Convert the robot transform from RealityKit scene space for
    // the volume to SwiftUI immersive space
    appModel.immersiveSpaceFromRobot = content.transform(from: appModel.robot, to: .immersiveSpace)

    // Reparent robot from volume to immersive space
    appModel.robot.setParent(appModel.immersiveSpaceRoot)

    // Handoff to immersive space view to continue conversions.
    appModel.convertingRobotFromVolume = false
    appModel.convertingRobotToImmersiveSpace = true
}
-
21:42 - Callout to convert function from immersive space view
// Coordinate conversions
// Convert from SwiftUI immersive space back to RealityKit local space
struct ImmersiveExplorationView: View {
    @Environment(ImmersiveSpaceAppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(appModel.immersiveSpaceRoot)
        } update: { content in
            guard appModel.convertingRobotToImmersiveSpace else { return }

            // Convert the robot transform from SwiftUI space for the immersive
            // space to RealityKit scene space
            convertRobotFromSwiftUIToRealityKitSpace(content: content)
        }
    }
}
-
21:48 - Compute transform to place robot in matching position in immersive space
// Coordinate conversions
// Calculate transform from SwiftUI to RealityKit scene space
func convertRobotFromSwiftUIToRealityKitSpace(content: RealityViewContent) {
    // Calculate transform from SwiftUI immersive space to RealityKit
    // scene space
    let realityKitSceneFromImmersiveSpace = content.transform(from: .immersiveSpace, to: .scene)

    // Multiply with the robot's transform in SwiftUI immersive space to build a
    // transformation which converts from the robot's local
    // coordinate space in the volume and ends with the robot's local
    // coordinate space in an immersive space.
    let realityKitSceneFromRobot = realityKitSceneFromImmersiveSpace * appModel.immersiveSpaceFromRobot

    // Place the robot in the immersive space to match where it
    // appeared in the volume
    appModel.robot.transform = Transform(realityKitSceneFromRobot)

    // Start the jump!
    appModel.startJump()
}
-
23:54 - Customizing immersion
// Customizing immersion
struct BotanistApp: App {
    // Custom immersion amounts
    @State private var immersionStyle: ImmersionStyle = .progressive(0.2...1.0, initialAmount: 0.8)

    var body: some Scene {
        // Immersive Space
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveSpaceExplorationView()
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}
-
25:17 - Callout to function to handle immersion amount changed
// Reacting to immersion
struct ImmersiveView: View {
    @State var immersionAmount: Double?

    var body: some View {
        ImmersiveSpaceExplorationView()
            .onImmersionChange { context in
                immersionAmount = context.amount
            }
            .onChange(of: immersionAmount) { oldValue, newValue in
                handleImmersionAmountChanged(newValue: newValue, oldValue: oldValue)
            }
    }
}
-
25:39 - Handle function to make robot react to changed immersion amount
// Reacting to immersion
func handleImmersionAmountChanged(newValue: Double?, oldValue: Double?) {
    guard let newValue, let oldValue else { return }

    if newValue > oldValue {
        // Move the robot outward to react to increasing immersion
        moveRobotOutward()
    } else if newValue < oldValue {
        // Move the robot inward to react to decreasing immersion
        moveRobotInward()
    }
}
-
26:57 - Create spatial tracking session
// Create and run spatial tracking session
struct ImmersiveExplorationView: View {
    @State var spatialTrackingSession: SpatialTrackingSession = SpatialTrackingSession()

    var body: some View {
        RealityView { content in
            // ...
        }
        .task {
            await runSpatialTrackingSession()
        }
    }
}
-
27:11 - Run spatial tracking session
// Create and run the spatial tracking session
func runSpatialTrackingSession() async {
    // Configure the session for plane anchor tracking
    let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])

    // Run the session to request plane anchor transforms
    let _ = await spatialTrackingSession.run(configuration)
}
-
27:32 - Create a floor anchor to track
// Create a floor anchor to track
struct ImmersiveExplorationView: View {
    @State var spatialTrackingSession: SpatialTrackingSession = SpatialTrackingSession()

    let floorAnchor = AnchorEntity(
        .plane(.horizontal, classification: .floor,
               minimumBounds: .init(x: 0.01, y: 0.01))
    )

    var body: some View {
        RealityView { content in
            content.add(floorAnchor)
        }
        .task {
            await runSpatialTrackingSession()
        }
    }
}
-
27:54 - Detect taps on entities in immersive space
// Detect taps on entities in immersive space
RealityView { content in
    // ...
}
.gesture(
    SpatialTapGesture(
        coordinateSpace: .immersiveSpace
    )
    .targetedToAnyEntity()
    .onEnded { value in
        handleTapOnFloor(value: value)
    }
)
-
28:09 - Handle tap event to place plant
// Handle tap event
func handleTapOnFloor(value: EntityTargetValue<SpatialTapGesture.Value>) {
    // Convert the tap location to the floor anchor's space
    let location = value.convert(value.location3D, from: .immersiveSpace, to: floorAnchor)

    plantEntity.position = location
    floorAnchor.addChild(plantEntity)
}
-
29:47 - Add tint color to custom plant component
// Add tint color to custom plant component
struct PlantComponent: Component {
    var tintColor: Color {
        switch plantType {
        case .coffeeBerry:
            // Light blue
            return Color(red: 0.3, green: 0.3, blue: 1.0)
        case .poppy:
            // Magenta
            return Color(red: 1.0, green: 0.0, blue: 1.0)
        case .yucca:
            // Light green
            return Color(red: 0.2, green: 1.0, blue: 0.2)
        }
    }
}
-
30:09 - Handle collisions with robot
// Handle collisions with robot
//
// Handle movement of the robot between frames
func handleMovement(deltaTime: Float) {
    // Move character in the collision world
    appModel.robot.moveCharacter(by: SIMD3<Float>(...),
                                 deltaTime: deltaTime,
                                 relativeTo: nil) { collision in
        handleCollision(collision)
    }
}
-
30:29 - Set active tint color when colliding with plant
// Set active tint color when colliding with plant
//
// Handle collision between robot and hit entity
func handleCollision(_ collision: CharacterControllerComponent.Collision) {
    guard let plantComponent = collision.hitEntity.components[PlantComponent.self] else {
        return
    }

    // Play the plant growth celebration animation
    playPlantGrowthAnimation(plantComponent: plantComponent)

    if inImmersiveSpace {
        appModel.tintColor = plantComponent.tintColor
    }
}
-
30:48 - Apply effect to tint passthrough
// Apply effect to tint passthrough
struct ImmersiveExplorationView: View {
    var body: some View {
        RealityView { content in
            // ...
        }
        .preferredSurroundingsEffect(surroundingsEffect)
    }

    // The resolved surroundings effect based on tint color
    var surroundingsEffect: SurroundingsEffect? {
        if let color = appModel.tintColor {
            return SurroundingsEffect.colorMultiply(color)
        } else {
            return nil
        }
    }
}
-