Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Post | Replies | Boosts | Views | Activity

VisionOS slow image tracking and enterprise API questions
To set the stage: I made a prototype of an app for a company; the app is to be used internally for now. The prototype runs perfectly on iOS, so I got a Vision Pro to port the app to its final destination. The first thing I found is that image tracking on the Vision Pro is useless for moving images (and that's the core of my app). The distance at which an image is lost also seems to be much shorter on the Vision Pro. Now I'm trying to figure out whether it's possible to fix or work around this in any way, and I'm wondering if the Enterprise APIs would change anything. So:
Is it possible to request Enterprise API access as a single person with a basic Apple Developer subscription? I looked around the forum and only got more confused.
Does QR code detection and tracking work any better than image detection, or are anchor updates the same?
Does the increased "object detection" frequency affect image/QR tracking in any way, or is it (as the name implies) only for object tracking?
Would increasing the CPU/GPU headroom make any difference to image/QR detection frequency?
Is there something I can disable to make anchor updates more frequent? I don't need complex models, shadows, physics, etc.
Greetings, Michal
4
0
431
Sep ’24
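For the image-tracking question above, this is a minimal sketch of the standard visionOS ARKit flow with ImageTrackingProvider, useful for logging how often anchor updates actually arrive. The resource-group name "AR Resources" and the logging are illustrative assumptions, not details from the post.

```swift
import ARKit

/// Tracks pre-registered reference images and logs each anchor update,
/// which makes it easy to measure how often visionOS delivers updates.
final class ImageTrackingModel {
    private let session = ARKitSession()
    // "AR Resources" is a hypothetical asset-catalog group name.
    private let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    )

    func run() async {
        guard ImageTrackingProvider.isSupported else { return }
        do {
            try await session.run([provider])
            for await update in provider.anchorUpdates {
                let anchor = update.anchor
                // isTracked says whether the image is currently visible;
                // originFromAnchorTransform is the latest pose.
                print(update.event, anchor.isTracked, anchor.originFromAnchorTransform.columns.3)
            }
        } catch {
            print("Failed to run image tracking: \(error)")
        }
    }
}
```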
How to Display Transparent GIF on a 3D Point in AR Environment?
I’m currently working on a project where I need to display a transparent GIF at a specific 3D point within an AR environment. I’ve tried various methods, including using RealityKit with UnlitMaterial and TextureResource, but the GIF still appears with a black background instead of being transparent. Does anyone have experience with displaying animated transparent GIFs on an AR plane or point? Any guidance on how to achieve true transparency for GIFs in ARKit or RealityKit would be appreciated.
2
0
188
Oct ’24
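For the transparent-GIF question above, one possible approach (a sketch, not a confirmed fix) is to decode the GIF frames with ImageIO and cycle them through an UnlitMaterial whose blending mode preserves the alpha channel, rather than relying on the texture alone. This assumes the GIF actually contains an alpha channel; the plane size and frame rate are placeholders.

```swift
import ImageIO
import RealityKit
import SwiftUI

/// Builds a plane entity that cycles through the frames of a GIF with transparency.
func makeAnimatedGIFEntity(gifURL: URL) throws -> ModelEntity {
    // Decode every frame of the GIF into CGImages (alpha is preserved by ImageIO).
    guard let source = CGImageSourceCreateWithURL(gifURL as CFURL, nil) else {
        throw URLError(.fileDoesNotExist)
    }
    let frames: [CGImage] = (0..<CGImageSourceGetCount(source)).compactMap {
        CGImageSourceCreateImageAtIndex(source, $0, nil)
    }

    // Convert frames to RealityKit textures.
    let textures = try frames.map {
        try TextureResource.generate(from: $0, options: .init(semantic: .color))
    }

    // UnlitMaterial with transparent blending keeps the GIF's alpha instead of
    // compositing it onto a black background.
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(textures[0]))
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))

    let entity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.3), materials: [material])

    // Swap the texture on a timer to play the animation (15 fps is a placeholder).
    var frameIndex = 0
    Timer.scheduledTimer(withTimeInterval: 1.0 / 15.0, repeats: true) { _ in
        frameIndex = (frameIndex + 1) % textures.count
        var updated = UnlitMaterial()
        updated.color = .init(tint: .white, texture: .init(textures[frameIndex]))
        updated.blending = .transparent(opacity: .init(floatLiteral: 1.0))
        entity.model?.materials = [updated]
    }
    return entity
}
```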
Synchronize 3D object animation inside a volume with SharePlay
I have been experimenting with some experiences in which I would like to use SharePlay to allow the app to be used by multiple users. Currently I have achieved sharing a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation. So far I have been able to share the volume and its content correctly, with the animation playing without problems, but once I activate SharePlay, different users see different moments of the animation, or no animation at all. Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time whenever someone joins?
0
0
240
Oct ’24
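For the SharePlay question above, a common pattern is for whoever starts the animation to broadcast a start date over GroupSessionMessenger; late joiners compute the elapsed offset and seek their local animation to it. A rough sketch under those assumptions — the message type, the entity wiring, and the seek-by-time step are all illustrative, not from the post:

```swift
import Foundation
import GroupActivities
import RealityKit

/// Message broadcast to every participant: when the animation started (wall-clock time).
struct AnimationSyncMessage: Codable {
    let startDate: Date
}

@MainActor
final class AnimationSyncController {
    private var messenger: GroupSessionMessenger?

    func configure(for session: GroupSession<some GroupActivity>, animatedEntity: Entity) {
        let messenger = GroupSessionMessenger(session: session)
        self.messenger = messenger

        // Receive the shared start time and jump the local animation to the right offset.
        Task {
            for await (message, _) in messenger.messages(of: AnimationSyncMessage.self) {
                let elapsed = Date().timeIntervalSince(message.startDate)
                if let animation = animatedEntity.availableAnimations.first {
                    let controller = animatedEntity.playAnimation(animation.repeat())
                    // Seek so the late joiner lines up with everyone else; you may
                    // want to wrap this by the clip duration for long sessions.
                    controller.time = elapsed
                }
            }
        }
    }

    /// Called by the participant who kicks the animation off.
    func broadcastStart() async {
        try? await messenger?.send(AnimationSyncMessage(startDate: Date()))
    }
}
```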
Video Memory Leak when Backgrounding
While trying to control the following two scenes in one ImmersiveSpace, we found a memory leak when we background the app while a stereoscopic video is playing.
ImmersiveView's two scenes:
Scene 1 has one toggle button.
Scene 2 has the same toggle button plus a 180-degree skysphere playing a stereoscopic video.
Attached are the files and images of the memory leak as captured in Xcode. To replicate this memory leak, follow these steps:
1. Create a new visionOS app using the Xcode template as illustrated below.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video to play with a large 3D 180-degree video of your own bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display the memory use by pressing Command+7 and selecting Memory to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it. The first immersive space should appear.
11. Repeat steps 7, 8, 9, and 10 multiple times.
12. Observe the memory use going up; the graph should look similar to the illustration below.
In ImmersiveView, upon backgrounding the app, I do:
a reset method to clear the video's memory
a dismiss of the ImmersiveSpace containing the video (even though upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)
Am I not releasing the memory correctly? Or is there really a memory leak in either SwiftUI's ImmersiveSpace or AVFoundation's AVPlayer upon backgrounding of an app?
App file: TestVideoLeakOneImmersiveView
First ImmersiveSpace file: InitialImmersiveView
Second ImmersiveSpace file: ImmersiveView
Skysphere Model file: Immersive180VideoViewModel
File: AppModel
3
0
349
Oct ’24
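Regarding the memory-leak report above, one thing worth trying (a sketch, not a confirmed fix for the behavior described) is to tear the player down completely when the scene phase leaves .active, since a paused AVPlayer that still holds its AVPlayerItem keeps the decoder and video buffers alive. The view structure and asset name here are placeholders.

```swift
import AVFoundation
import SwiftUI

struct ImmersiveVideoHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var player: AVPlayer? = nil

    var body: some View {
        // Color.clear stands in for however the skysphere video is actually rendered.
        Color.clear
            .onAppear {
                // "immersive180" is a placeholder asset name.
                if let url = Bundle.main.url(forResource: "immersive180", withExtension: "mov") {
                    player = AVPlayer(url: url)
                    player?.play()
                }
            }
            .onChange(of: scenePhase) { _, newPhase in
                guard newPhase != .active else { return }
                // Dropping the item and the player releases the decoder and its buffers;
                // merely pausing can keep a large 180° stereoscopic asset resident.
                player?.pause()
                player?.replaceCurrentItem(with: nil)
                player = nil
            }
    }
}
```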
Issue with tracking multiple images using ARKit on VisionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images. Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
0
0
206
Oct ’24
Custom Progressive mode --> Unity/Polyspatial not working
I'm setting:

.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

in UnityVisionOSSettings.swift before building out in Xcode. I'm having an issue where this only works on occasion; it seems random. I'll either get no immersion level available (the crown dial is greyed out and no changes can be made) or it will only allow 0.5 to 1.0 immersion (the dial will go below 0.5 but springs back to 0.5 when released). With no changes to my setup or to how I'm setting immersionStyle, I've sometimes been able to get this to work as I would expect. Wondering if there is some bug that would be causing this to fail. I've tested a simple native SDK progressive immersion style with the same code for the custom setting and it works every time, so it's something related to Unity. Here is the entire UnityVisionOSSettings file that, as far as I can tell, controls this:

```swift
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters { return .init(
        framebufferWidth: 1830,
        framebufferHeight: 1600,
        leftEyePose: .init(position: .init(x: 0, y: 0, z: 0), rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        rightEyePose: .init(position: .init(x: 0, y: 0, z: 0), rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
        rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
    ) }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() }
        .upperLimbVisibility(.automatic)
        .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters)
        .windowResizability(.contentSize)
        .upperLimbVisibility(.automatic)
        .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene { mainScenePart0 }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content.onOpenURL(perform: { url in
                UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
            })
        }
    }
}
```
3
0
294
Oct ’24
Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.
Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.
We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
0
0
218
Oct ’24
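For the passthrough-capture post above, here is a rough sketch of the Enterprise API flow built around CameraFrameProvider. It assumes the main-camera-access entitlement and license file are already in place; the format selection and what you do with each pixel buffer are assumptions on my part.

```swift
import ARKit
import CoreVideo

/// Streams frames from the Vision Pro's left main camera and hands each
/// pixel buffer to a processing closure (e.g. convert to JPEG and save).
final class PassthroughCaptureModel {
    private let session = ARKitSession()
    private let provider = CameraFrameProvider()

    func start(process: @escaping (CVPixelBuffer) -> Void) async {
        guard CameraFrameProvider.isSupported else { return }

        // Take the first format offered for the left camera; you could instead
        // pick the highest-resolution entry for "high-resolution image" capture.
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let format = formats.first else { return }

        do {
            try await session.run([provider])
            guard let updates = provider.cameraFrameUpdates(for: format) else { return }
            for await frame in updates {
                if let sample = frame.sample(for: .left) {
                    // sample.pixelBuffer is the raw camera image for this eye.
                    process(sample.pixelBuffer)
                }
            }
        } catch {
            print("Camera access failed: \(error)")
        }
    }
}
```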
Asset flickering when switching ImmersiveSpaces
There is flickering on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs whether the scenes are loaded directly from the RealityKitContent package or from memory (pre-loaded assets). Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
1. Create a new visionOS app using the Xcode template, see image.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
3. Replace all Swift files with those you will find in the attached texts.
4. In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
5. In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
6. Launch the app in debug mode via Xcode onto the AVP device or simulator.
7. Continuously switch immersive spaces by pressing the buttons Show RedSpheres and Show YellowSpheres.
8. Observe the 3D assets flicker upon opening of the immersive spaces.
Attachments: AppModel, RedSpheresImmersiveView, YellowSpheresImmersiveView, TestFlickeringBetweenImmersiveSpacesApp
1
0
220
Oct ’24
How is the hand tracking in Happy Beam avoiding data races?
I am new to learning about concurrency and I am working on an app that uses the HandTrackingProvider class. In the Happy Beam sample code, there is a HeartGestureModel which has a reference to the HandTrackingProvider() and writes to a struct called HandUpdates inside the HeartGestureModel class through the publishHandTrackingUpdates() function. On another thread, there is a function called computeTransformofUserPerformedHeartGesture() which reads the values of HandUpdates to determine whether the user is making the appropriate gesture. My question is: how is the code handling the constant reads and writes to the HandUpdates struct?
1
0
256
Oct ’24
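Not a statement about how the Happy Beam sample itself is implemented, but a sketch of one common way to make that read/write pattern explicit under strict concurrency: confine the shared struct to the main actor, so the tracking loop's writes and the gesture code's reads are serialized on the same executor. The type and method names here are illustrative.

```swift
import ARKit

/// The latest pair of hand anchors: written by the tracking loop, read by gesture code.
struct HandUpdates {
    var left: HandAnchor?
    var right: HandAnchor?
}

@MainActor
final class GestureModel {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    // Because the class is @MainActor, every read and write of this struct is
    // serialized on the main actor; there is no unsynchronized shared access.
    private(set) var latestHandTracking = HandUpdates()

    func publishHandTrackingUpdates() async {
        do {
            try await session.run([handTracking])
            for await update in handTracking.anchorUpdates {
                switch update.anchor.chirality {
                case .left:  latestHandTracking.left = update.anchor
                case .right: latestHandTracking.right = update.anchor
                }
            }
        } catch {
            print("Hand tracking failed: \(error)")
        }
    }

    func isGestureCandidate() -> Bool {
        // Reads also happen on the main actor, so they always see a consistent pair.
        latestHandTracking.left != nil && latestHandTracking.right != nil
    }
}
```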
ARKit delegate code broken by Swift 6
I'm porting over some code that uses ARKit to Swift 6 (with Complete Strict Concurrency Checking enabled). Some methods on ARSCNViewDelegate, namely Coordinator.renderer(_:didAdd:for:) among at least one other, are causing a consistent crash. On Swift 5 this code works absolutely fine.
The above method consistently crashes with _dispatch_assert_queue_fail. My assumption is that in Swift 6 a trap has been inserted by the compiler to validate that my downstream code is running on the main thread.
In Implementing a Main Actor Protocol That’s Not @MainActor, Quinn “The Eskimo!” seems to address scenarios of this nature with 3 proposed workarounds, yet none of them seem feasible here:
For #1, marking ContentView.addPlane(renderer:node:anchor:) nonisolated and using @preconcurrency import ARKit compiles but still crashes :(
For #2, applying @preconcurrency to the ARSCNViewDelegate conformance declaration site just yields this warning: @preconcurrency attribute on conformance to 'ARSCNViewDelegate' has no effect
For #3, as Quinn recognizes, this is a non-starter as ARSCNViewDelegate is out of our control.
The minimal reproducible set of code is below. Simply run the app, scan your camera back and forth across a well-lit environment, and the app should crash within a few seconds. Switch over to Swift Language Version 5 in build settings, retry, and you'll see the current code works fine.

```swift
import ARKit
import SwiftUI

struct ContentView: View {
    @State private var arViewProxy = ARSceneProxy()
    private let configuration: ARWorldTrackingConfiguration
    @State private var planeFound = false

    init() {
        configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravityAndHeading
        configuration.planeDetection = [.horizontal]
    }

    var body: some View {
        ARScene(proxy: arViewProxy)
            .onAddNode { renderer, node, anchor in
                addPlane(renderer: renderer, node: node, anchor: anchor)
            }
            .onAppear {
                arViewProxy.session.run(configuration)
            }
            .onDisappear {
                arViewProxy.session.pause()
            }
            .overlay(alignment: .top) {
                if !planeFound {
                    Text("Slowly move device horizontally side to side to calibrate")
                } else {
                    Text("Plane found!")
                        .bold()
                        .foregroundStyle(.green)
                }
            }
    }

    private func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let device = renderer.device,
              let planeGeometry = ARSCNPlaneGeometry(device: device)
        else { return }

        planeFound = true

        planeGeometry.update(from: planeAnchor.geometry)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIColor.white.withAlphaComponent(0.65)
        planeGeometry.materials = [material]
        let planeNode = SCNNode(geometry: planeGeometry)
        node.addChildNode(planeNode)
    }
}

struct ARScene {
    private(set) var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?
    private let proxy: ARSceneProxy

    init(proxy: ARSceneProxy) {
        self.proxy = proxy
    }

    func onAddNode(
        perform action: @escaping (SCNSceneRenderer, SCNNode, ARAnchor) -> Void
    ) -> Self {
        var view = self
        view.onAddNodeAction = action
        return view
    }
}

extension ARScene: UIViewRepresentable {
    func makeUIView(context: Context) -> ARSCNView {
        let arView = ARSCNView()
        arView.delegate = context.coordinator
        arView.session.delegate = context.coordinator
        proxy.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARSCNView, context: Context) {
        context.coordinator.onAddNodeAction = onAddNodeAction
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }
}

extension ARScene {
    class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate {
        var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            onAddNodeAction?(renderer, node, anchor)
        }
    }
}

@MainActor
class ARSceneProxy: NSObject, @preconcurrency ARSessionProviding {
    fileprivate var arView: ARSCNView!
    @objc dynamic var session: ARSession { arView.session }
}
```

Any help is greatly appreciated!
1
7
337
Oct ’24
App lost audio spatialization after the visionOS 2 update
Hi, I have a video player app that lost its audio spatialization since the visionOS 2 update. I am using VideoPlayerComponent (https://developer.apple.com/documentation/realitykit/videoplayercomponent) to implement my videos as entities, as I want a custom look and custom controls for my player. In visionOS 1 there was automatic audio spatialization: depending on where my video entity was, the app automatically enabled head-tracked audio spatialization. Since visionOS 2, however, I cannot get my video entities to play Spatial Audio. I've looked into DestinationVideo and even set up AVAudioSessionSpatialExperience, but Spatial Audio is still not working. Appreciate any help. Thanks.
1
0
213
Oct ’24
How to set up a PS5 controller to move a character with animation, and a VR camera
Hello! I would like to do exactly this: https://youtu.be/Cun8K7ctKp0?si=TgWvtdw-VdlBVL0R I can't seem to find any documentation on getting a PS5 controller hooked up properly in Reality Composer Pro and Xcode, driving a character with animation (or, moving an object around freely) then over to the Vision Pro. Additionally, I would also like to learn how to use a controller to move a VR camera around a scene, so that we can navigate in custom built spaces - similar to Meta virtual environments, or Steam VR home environments. Typically, I would do this with Unreal Engine, but unfortunately, AVP support is still in its infancy there. So I figured, why not try to do it natively? Any help with concrete tutorials or documentation would be greatly appreciated. Thx!
1
0
248
Oct ’24
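For the controller question above, the GameController framework side is fairly standard; here is a sketch that reads the left thumbstick of a connected DualSense (or any extended gamepad) and applies it to an entity's position each frame. The entity wiring, speed, and per-frame call site are placeholders, not details from the post.

```swift
import GameController
import RealityKit

final class ControllerInput {
    private(set) var moveVector = SIMD2<Float>(0, 0)

    func start() {
        // Fires for controllers that connect while the app is running;
        // also bind any controllers that are already paired.
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] notification in
            guard let controller = notification.object as? GCController else { return }
            self?.bind(controller)
        }
        GCController.controllers().forEach(bind)
    }

    private func bind(_ controller: GCController) {
        // The left thumbstick reports x/y in -1...1 whenever it moves.
        controller.extendedGamepad?.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            self?.moveVector = SIMD2<Float>(x, y)
        }
    }

    /// Call once per frame (for example from a RealityKit scene-update subscription)
    /// to move a character or camera rig; `speed` and `deltaTime` are placeholders.
    func apply(to entity: Entity, deltaTime: Float, speed: Float = 1.0) {
        entity.position += SIMD3<Float>(moveVector.x, 0, -moveVector.y) * speed * deltaTime
    }
}
```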
PlaneDetectionProvider on visionOS seems to limit detection to planes less than 5 m from world origin
When using the PlaneDetectionProvider in visionOS I seem to have hit a limitation: regardless of where the headset is in the space, planes are only detected if they are (as far as I can tell) less than 5 m from the world origin. Mapping a room becomes very tricky as a result, because you often find some walls are outside the radius, even if you're standing two feet away from a ten-foot wall. It just won't see it. I've picked my way through the documentation but I cannot see any way to extend this distance. Am I missing something?
3
0
339
Oct ’24
Keep Apple Vision Pro Awake
I have two Apple Vision Pros so that I can make and test a multiplayer immersive reality game. But I am one developer, so I need to be able to take one Apple Vision Pro off and put the other one on to see what the other device is seeing, and to ensure my game information is correctly being sent over the network with Multipeer Connectivity. But when I take one off, the Apple Vision Pro immediately goes to sleep. With visionOS 1, I could put a piece of paper into the Pro and it would stay on for hours, and I could take the headsets on and off and debug my game. But now with visionOS 2, even with the paper they soon go to sleep. Is there a setting I can change or override as a developer to stop this auto sleep or auto lock? I need to check things like:
When two devices are on the network, can I see them both so that players can select each other from a menu?
Can I send object positions back and forth?
Thank you.
0
1
211
Oct ’24
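For the auto-sleep question above: I'm not aware of a documented override for the lock that happens when the headset is removed, but the iOS idle-timer API does exist on visionOS and may help with idle dimming while the device stays on. A sketch of one possible workaround, not a confirmed fix for the removal-triggered sleep:

```swift
import SwiftUI
import UIKit

struct KeepAwakeModifier: ViewModifier {
    func body(content: Content) -> some View {
        content
            .onAppear {
                // Prevents idle dimming/locking while the app is frontmost.
                // Note: this does not stop the system lock that occurs when
                // the headset is removed; that behavior is system-controlled.
                UIApplication.shared.isIdleTimerDisabled = true
            }
            .onDisappear {
                UIApplication.shared.isIdleTimerDisabled = false
            }
    }
}

extension View {
    /// Keeps the idle timer disabled while this view is visible (debug convenience).
    func keepAwakeWhileVisible() -> some View {
        modifier(KeepAwakeModifier())
    }
}
```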
Spatial video capture quality using the API differs from that of the iPhone's built-in camera app
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video. But I found the video by my app differs from the iPhone build-in camera app in: Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature. In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image. I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app. Is there any api or parameters to make my app more close to the iPhone build-in app? I have tried "whiteBalanceMode" and "exposureMode" but no luck.
3
0
492
Aug ’24
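On the capture question above, there's no single switch that makes a custom AVCaptureSession match the built-in Camera app's processing, but if the main complaint is the cooler tone, one knob worth experimenting with is locking white balance to a warmer target instead of leaving it in continuous auto. A sketch; the temperature and tint values are arbitrary starting points, not recommendations from the post.

```swift
import AVFoundation

/// Locks the device's white balance to a specific color temperature.
/// The 5800 K / zero-tint defaults are arbitrary values to experiment with.
func lockWhiteBalance(on device: AVCaptureDevice, temperature: Float = 5800, tint: Float = 0) {
    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        guard device.isWhiteBalanceModeSupported(.locked) else { return }

        let target = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(
            temperature: temperature, tint: tint)
        var gains = device.deviceWhiteBalanceGains(for: target)

        // Clamp each channel into the device's allowed range before applying.
        let maxGain = device.maxWhiteBalanceGain
        gains.redGain   = min(max(gains.redGain, 1.0), maxGain)
        gains.greenGain = min(max(gains.greenGain, 1.0), maxGain)
        gains.blueGain  = min(max(gains.blueGain, 1.0), maxGain)

        device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
    } catch {
        print("Could not lock white balance: \(error)")
    }
}
```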