I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, for example when someone moves too quickly or gets too close to a physical object. The content in front of them dims briefly to allow a clearer view of their surroundings. I'd like to know the specific distance at which the system begins to show the physical object, or what criteria are used for this adjustment.
Discuss Spatial Computing on Apple Platforms.
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the Home Screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps
Open app
Open a window via the pushWindow action
Press the digital crown
On the Home Screen, select the app's icon again
The pushed window will now be dismissed.
There is a sample project linked here that shows off the issue, including a video of the bug in progress
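For anyone trying to reproduce this, a minimal sketch of the kind of two-window setup involved (the window IDs and view names here are placeholders, not the actual sample project):

import SwiftUI

@main
struct PushWindowSampleApp: App {
    var body: some Scene {
        // The initial window.
        WindowGroup(id: "Main") {
            MainView()
        }
        // The window that gets pushed on top of "Main".
        WindowGroup(id: "Pushed") {
            Text("Pushed window")
        }
    }
}

struct MainView: View {
    // pushWindow opens the target window and hides the current one.
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            pushWindow(id: "Pushed")
        }
    }
}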
I have been concentrating on developing a visionOS application. I am currently quite familiar with RealityKit, but CompositorServices has also caught my attention, and I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insight into the respective advantages of RealityKit and CompositorServices.
There is a flickering and slight dimming occurring specifically on the skysphere, at the initial load of the scene, when using an Attachment. This is observed both in the simulator and on the real device.
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all swift files with those you will find in the attached texts.
Add the skysphere image asset Skydome_8k from the Apple sample app "Presenting an artist's scene".
Launch the app in debug mode via Xcode on the AVP device or simulator.
Continuously open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
Observe the skysphere flicker and dim when it is displayed.
The current workaround is commented in file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines, and the flickering and dimming do not occur.
Are we using attachments wrongly?
Is this behavior known and documented?
Or, is there really a bug in visionOS?
AppModel
InitialImmersiveView
MainImmersiveView
TestSkysphereAttachmentFlickerApp
ThreeSixtySkysphereRealityView
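For reference, a rough sketch of the general skysphere-plus-attachment pattern the steps above describe (the names and attachment ID here are placeholders, not taken from the attached files):

import SwiftUI
import RealityKit

struct SkysphereView: View {
    var body: some View {
        RealityView { content, attachments in
            // Inward-facing sphere textured with the Skydome_8k image.
            var material = UnlitMaterial()
            if let texture = try? await TextureResource(named: "Skydome_8k") {
                material.color = .init(texture: .init(texture))
            }
            let skysphere = ModelEntity(mesh: .generateSphere(radius: 1000),
                                        materials: [material])
            skysphere.scale *= .init(x: -1, y: 1, z: 1) // flip so the texture faces inward
            content.add(skysphere)

            // Place the SwiftUI attachment in front of the user.
            if let controls = attachments.entity(for: "controls") {
                controls.position = [0, 1.2, -1.5]
                content.add(controls)
            }
        } attachments: {
            Attachment(id: "controls") {
                Button("Close") { /* dismiss the immersive space here */ }
            }
        }
    }
}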
This restriction prevents me from using Metal to generate images while simultaneously using Swift to add UI controls or RealityKit content (without using a window) in immersive mode.
Screenshot:
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
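For what it's worth, the assertion points at the pixel format not being read-write capable on that GPU tier. A hedged sketch of how one might probe support before creating the texture (the format choices below are assumptions, not what the Apple sample uses):

import Metal

// Pick a pixel format the current GPU can use with read_write access.
// Tier 1 guarantees r32Float, r32Uint, and r32Sint; tier 2 adds formats such as rgba8Unorm.
func readWritePixelFormat(for device: MTLDevice) -> MTLPixelFormat {
    switch device.readWriteTextureSupport {
    case .tier2:
        return .rgba8Unorm
    case .tier1:
        return .r32Float
    default:
        fatalError("This GPU does not support read-write textures")
    }
}

func makeReadWriteTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: readWritePixelFormat(for: device),
        width: width,
        height: height,
        mipmapped: false)
    // Read-write access in a compute shader requires .shaderWrite as well as .shaderRead.
    descriptor.usage = [.shaderRead, .shaderWrite]
    return device.makeTexture(descriptor: descriptor)
}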
A popover presented from a view that is used as an attachment is properly displayed in preview mode in the canvas, but not at runtime. I was wondering whether it is supported at all.
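For context, a rough sketch of the kind of setup being described, with placeholder names (an attachment whose SwiftUI view presents a popover):

import SwiftUI
import RealityKit

// A SwiftUI view used as an attachment that tries to present a popover.
struct AttachmentPopoverView: View {
    @State private var showPopover = false

    var body: some View {
        Button("Details") { showPopover = true }
            .popover(isPresented: $showPopover) {
                Text("Popover content")
                    .padding()
            }
    }
}

struct PopoverAttachmentImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 1.5, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                AttachmentPopoverView()
            }
        }
    }
}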
I"m trying to create a simple app for my students that will display .heic images taken with a nikon and them converted to .heic in the photos app. My attempts only result in the QuickLook viewer showing the images in 2d. Any guidance? Here is my ContentView:
import SwiftUI
import QuickLook

struct ContentView: View {
    @State private var showQuickLook = false
    @State private var previewURL: URL? = nil // State to store the URL for Quick Look

    var body: some View {
        VStack {
            Button("See it in 3D") {
                // Set the URL for the file from the bundle and toggle Quick Look presentation
                if let imageURL = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") {
                    previewURL = imageURL // Set the preview URL if the image is found
                    showQuickLook.toggle() // Toggle to trigger Quick Look presentation
                } else {
                    print("File not found") // Print error if the file is missing
                }
            }
            .quickLookPreview($previewURL) // Binding to the URL
        }
    }
}

#Preview {
    ContentView()
}
While trying to control the following two scenes in one ImmersiveSpace, we found a memory leak when we background the app while a stereoscopic video is playing.
ImmersiveView's two scenes:
Scene 1 has 1 toggle button
Scene 2 has the same toggle button, plus a 180-degree skysphere playing a stereoscopic video
Attached are the files and images of the memory leak as captured in Xcode.
To replicate this memory leak, follow these steps:
Create a new visionOS app using Xcode template as illustrated below.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
Replace all swift files with those you will find in the attached texts.
In ImmersiveView, replace the stereoscopic video with a large 3D 180-degree video of your own, bundled in your project.
Launch the app in debug mode via Xcode on the AVP device or simulator.
Display memory use by pressing Command+7 and selecting Memory to view the live memory graph.
Press on the first immersive space's button "Open ImmersiveView"
Press on the second immersive space's button "Show Immersive Video"
Background the app
When the app tray appears, foreground the app by selecting it
The first immersive space should appear
Repeat steps 7, 8, 9, and 10 multiple times
Observe the memory use going up; the graph should look similar to the illustration below.
In ImmersiveView, upon backgrounding the app, I do the following:
call a reset method to clear the video's memory
dismiss the ImmersiveSpace containing the video (even though, upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)
Am I not releasing the memory correctly?
Or, is there really a memory leak issue in either SwiftUI's ImmersiveSpace or in AVFoundation's AVPlayer upon background of an app?
App file TestVideoLeakOneImmersiveView
First ImmersiveSpace file InitialImmersiveView
Second ImmersiveSpace File ImmersiveView
Skysphere Model File Immersive180VideoViewModel
File AppModel
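For reference, a hedged sketch of the kind of reset that could run on backgrounding (the type and property names are placeholders, not taken from the attached files):

import AVFoundation
import RealityKit

@MainActor
final class VideoResetModel {
    private var player: AVPlayer?
    private var videoEntity: Entity?

    // Called when the app moves to the background.
    func reset() {
        player?.pause()
        // Dropping the current item releases the decoded video buffers.
        player?.replaceCurrentItem(with: nil)
        player = nil

        // Remove the skysphere entity so RealityKit can free its resources.
        videoEntity?.removeFromParent()
        videoEntity = nil
    }
}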
At WWDC24, visionOS hand tracking gained a new option that lets an entity track the hand faster (at the expense of a certain degree of accuracy). The session video only explains how to implement this with ARKit, so could you please explain how to implement it with an AnchorEntity in a RealityView?
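In case it helps, a rough sketch of how the RealityKit side might look. This assumes the visionOS 2 trackingMode option on AnchorEntity (with .predicted trading some accuracy for lower latency), so treat it as a sketch rather than confirmed API:

import SwiftUI
import RealityKit

struct HandTrackedView: View {
    var body: some View {
        RealityView { content in
            // Anchor an entity to the left palm.
            // trackingMode .predicted favors latency over accuracy (visionOS 2, assumed API).
            let handAnchor = AnchorEntity(.hand(.left, location: .palm),
                                          trackingMode: .predicted)
            let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                     materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            handAnchor.addChild(marker)
            content.add(handAnchor)
        }
    }
}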
I'm setting:
.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))
In UnityVisionOSSettings.swift before build out in Xcode.
I'm having an issue where this only works on occasion. Seems random. I'll either get no immersion level available (crown dial is greyed out and no changes can be made) or it will only allow 0.5 - 1.0 immersion (dial will go below 0.5 but springs back to 0.5 when released).
With no changes to my setup or how I'm setting immersionStyle, I've been able to get this to work as I would expect. I'm wondering if there is some bug that could be causing this to fail. I've tested a simple native SDK progressive immersion style with the same code for the custom setting, and it works every time, so it seems to be something related to Unity.
Here is the entire UnityVisionOSSettings file that, as far as I can tell, is controlling this:
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters { return .init(
        framebufferWidth: 1830,
        framebufferHeight: 1600,
        leftEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                           rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        rightEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                            rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
        rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .upperLimbVisibility(.automatic)
            .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize) .upperLimbVisibility(.automatic) .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene {
        mainScenePart0
    }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}
There is flickering on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs whether the scenes are loaded directly from the RealityKitContent package or from memory (pre-loaded assets).
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all swift files with those you will find in the attached texts.
In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
Launch the app in debug mode via Xcode on the AVP device or simulator.
Continuously switch immersive spaces by pressing the Show RedSpheres and Show YellowSpheres buttons.
Observe the 3D assets flicker when the immersive spaces open.
AppModel
RedSpheresImmersiveView
YellowSpheresImmersiveView
TestFlickeringBetweenImmersiveSpacesApp
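For reference, a hedged sketch of the switching logic such buttons could use (the space IDs mirror the scene names above, but the attached files may differ):

import SwiftUI

struct SpaceSwitcherView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        HStack {
            Button("Show RedSpheres") { switchTo("RedSpheres") }
            Button("Show YellowSpheres") { switchTo("YellowSpheres") }
        }
    }

    private func switchTo(_ id: String) {
        Task {
            // Dismiss whichever space is open, then open the requested one.
            await dismissImmersiveSpace()
            _ = await openImmersiveSpace(id: id)
        }
    }
}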
Hi, I have a video player app that lost its audio spatialization since the visionOS 2 update. I am using VideoPlayerComponent (https://developer.apple.com/documentation/realitykit/videoplayercomponent) to implement my videos as entities, as I want a custom look and controls for my player.
In visionOS 1, audio spatialization was automatic: depending on where my video entity was, the app enabled head-tracked audio spatialization. Since visionOS 2, however, I cannot get my video entities to play Spatial Audio. I've looked into DestinationVideo and even set up AVAudioSessionSpatialExperience, but Spatial Audio is still not working.
Appreciate any help. Thanks.
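For context, a hedged sketch of a typical VideoPlayerComponent entity setup of the kind described (URL, position, and viewing mode are placeholders), in case it helps narrow down where spatialization is lost:

import AVFoundation
import RealityKit

// Builds an entity that renders video through VideoPlayerComponent.
func makeVideoEntity(url: URL) -> Entity {
    let player = AVPlayer(url: url)
    let videoEntity = Entity()
    var component = VideoPlayerComponent(avPlayer: player)
    component.desiredViewingMode = .stereo // for stereoscopic/spatial content
    videoEntity.components.set(component)
    videoEntity.position = [0, 1.5, -2]    // placed in front of the user
    player.play()
    return videoEntity
}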
I have two Apple Vision Pros so that I can build and test a multiplayer immersive reality game. But I am one developer, so I need to be able to take one Apple Vision Pro off and put the other one on to see what the other device is seeing, and to ensure my game information is correctly being sent over the network with Multipeer Connectivity.
But when I take one off, that Apple Vision Pro immediately goes to sleep. With visionOS 1, I could put a piece of paper inside the headset and it would stay on for hours, and I could take the devices on and off and debug my game. But now with visionOS 2, even with the paper they soon go to sleep.
Is there a setting I can change or override as a developer to stop this auto sleep or auto lock?
I need to check things like:
When two devices are on the network, can I see them both so that players can select each other from a menu?
Can I send object positions back and forth?
Thank you.
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video.
But I found that videos from my app differ from those of the iPhone's built-in Camera app in the following ways:
Videos captured with the iPhone's built-in Camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Are there any APIs or parameters to make my app behave more like the iPhone's built-in Camera app? I have tried whiteBalanceMode and exposureMode, but with no luck.
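For reference, a hedged sketch of the kind of configuration mentioned above (the continuous auto modes are an assumption about what was tried):

import AVFoundation

// Applies the automatic white balance and exposure modes mentioned above.
func configureColorHandling(for device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
        device.whiteBalanceMode = .continuousAutoWhiteBalance
    }
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    }
}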
Does anyone have experience of creating their own EntityActions?
Say for example I wanted one that faded up the opacity of an entity, then once it had completed set another property on one of the entity's components.
I understand that I could use the FromToByAction to control the opacity (and have this working), but I am interested in learning how to create my own dedicated EntityAction, and I'm finding the documentation hard to fathom.
I got as far as creating a struct conforming to the EntityAction protocol:
struct FadeUpAction: EntityAction {
    var animatedValueType: (any AnimatableData.Type)?
}
Subscribing to update events on this:
FadeUpAction.subscribe(to: .updated) { event in
    guard let animationState = event.animationState else {
        return
    }
    // My animation state is always nil, so I never get here!
    let newValue = ... // some calculation
    animationState.storeAnimatedValue(newValue)
}
And setting it up as an animation on an entity:
let action = FadeUpAction()
if let animation = try? AnimationResource.makeActionAnimation(
    for: action,
    duration: 2.0,
    bindTarget: .opacity
) {
    entity.playAnimation(animation)
}
...but haven't been able to understand how to extract the current timeDelta or set the value in the event handler.
Any pointers?
Hi,
I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from LiDAR, but it seems ARKit doesn't offer this directly via the standard APIs that the iPad uses.
I wanted to ask if there are any possible options to access depth data or enhanced camera capabilities using the Enterprise API.
Specifically:
Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
Are there any workarounds or alternative methods to obtain depth data from the camera?
Hi,
When opening an ImmersiveSpace with the .mixed style, is it possible to keep the user's current selected system immersive environment?
Currently, the system immersive environment will be dismissed.
ImmersiveSpace(id: "some id") {
SomeRealityView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
My name is Tom Shannon, a developer with Omnia (d.b.a Aequilibrium Inc.). We were recently approved for some of the Enterprise APIs for the Vision Pro.
You can reference the history through our Case-ID: 9237594
We are contacting you for assistance as we have downloaded the entitlement license provided and added it to our target for an application under the bundle id: com.omnia.spatialbrowser
Then under my project and with my developer account, which is under the Aequilibrium Inc. account (279PV9XKZ2), we tried to add the Barcode Scanner Enterprise API entitlement, but this does not show up as an option for us.
I am on Xcode 16.1 beta (16B5001e) for reference! Any help would be greatly appreciated.
Best,
Hello
I was wondering whether the keyboard awareness feature that came with visionOS 2 would also work for a MacBook keyboard when someone is in an immersive .progressive custom environment, such as the "Garden" environment from "Construct an immersive environment for visionOS", in an app I'm currently developing, so that they can see their keyboard. I haven't managed to achieve this so far.
Thank you very much in advance!