Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Post · Replies · Boosts · Views · Activity

Not getting camera frame using enterprise API in Vision Pro
I'm not getting a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why isn't it arriving, and where am I going wrong in this code? Please guide me.

var body: some View {
    VStack {
        image
            .resizable()
            .scaledToFit()
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    print("ARKitSession.run() returned a non-session error: \(error)")
                    return
                }
                print("ARKitSession.run() failed with session error: \(sessionError)")
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                print("Failed to get an async sequence for the first format.")
                return
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(String(describing: cameraFrame.sample(for: .left)))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print("======== PIXEL BUFFER ::: \(String(describing: self.pixelBuffer)) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
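One thing worth ruling out (this is an assumption, not something confirmed by the post): the main camera on Vision Pro is gated behind the Enterprise API entitlement and an explicit authorization request, and without both, cameraFrameUpdates stays silent instead of failing loudly. A minimal sketch of the authorization check, assuming the enterprise entitlement and license file are already in the app:

// A minimal sketch (not from the original post): request main-camera authorization
// before running the provider. Assumes the Enterprise main-camera-access entitlement
// and license file are present in the app bundle.
import ARKit

func startCameraFeed() async {
    guard CameraFrameProvider.isSupported else {
        print("CameraFrameProvider not supported on this device/configuration.")
        return
    }

    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Ask for camera access up front; without .allowed, no frames will be delivered.
    let authorization = await session.requestAuthorization(for: [.cameraAccess])
    guard authorization[.cameraAccess] == .allowed else {
        print("Main camera access was not granted: \(String(describing: authorization[.cameraAccess]))")
        return
    }

    do {
        try await session.run([provider])
    } catch {
        print("ARKitSession.run failed: \(error)")
    }
}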
2
0
247
3w
Running multiple ARKitSessions in the same app?
I would like to implement the following, but I am not sure if this is a supported use case based on the current documentation:

Run one ARKitSession with a WorldTrackingProvider in Swift for mixed-immersion Metal rendering (to get the device anchor for the layer renderer drawable and view matrix).

Run another ARKitSession with a WorldTrackingProvider and a CameraFrameProvider in a different library (that is part of the same app) using the ARKit C API, and use the transforms from the anchors in that session to render objects in the Swift part of the application.

In general, is this a supported use case, or is it necessary to have one shared ARKitSession? Assuming it is supported, will the (device) anchors from both WorldTrackingProviders reference the same world coordinate system? Are there any performance downsides to having multiple ARKitSessions? Thanks
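Not an answer to the supportability question, but for reference, here is a sketch of the single shared-session arrangement the post mentions as the alternative: one ARKitSession runs both providers and each subsystem reads from the provider it needs. The function name and the hand-off comment are placeholders, and CameraFrameProvider is assumed to have its Enterprise entitlement in place.

// Sketch of one shared ARKitSession driving both providers (names are placeholders).
import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
let cameraFrames = CameraFrameProvider()

func startSharedSession() async throws {
    try await session.run([worldTracking, cameraFrames])

    // Metal renderer path: query the device anchor for the current frame time.
    if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        let viewMatrix = deviceAnchor.originFromAnchorTransform.inverse
        _ = viewMatrix // hand off to the layer renderer / C library as needed
    }
}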
1
0
246
3w
RealityKit/ARKit Environment Texturing broken on iOS 18
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing). Instead, just a default IBL is used by RealityKit. This happens with RealityView as well as ARView. It also happens when I explicitly opt in to environment texturing:

let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)

Even the Xcode AR template has this issue. I'm attaching a screenshot of the sample app running on iOS 18, where it's broken, and on iOS 17, where it works as expected. I hope this can get resolved quickly since I see it as a major regression. Feedback ID: FB15091335

UPDATE: It works on my older iPhone XS (iOS 18 22A5282m). Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a)). Maybe it's related to LiDAR? Thank you!

iOS 17 (works): [screenshot]
iOS 18 (broken): [screenshot]
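A small diagnostic sketch, not a fix: logging whether ARKit still produces AREnvironmentProbeAnchors on iOS 18 could help narrow down whether the regression is in ARKit's probe generation or in RealityKit's use of them. The class name is a placeholder, and it assumes the delegate is attached to arView.session.

// Diagnostic sketch: log environment probe anchors as ARKit adds them.
import ARKit

final class ProbeLogger: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for probe in anchors.compactMap({ $0 as? AREnvironmentProbeAnchor }) {
            print("Environment probe added, extent: \(probe.extent), texture: \(String(describing: probe.environmentTexture))")
        }
    }
}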
1
0
259
3w
Xcode 16 – Symbol not found: ShapeResource generateConvex
Hi, I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device the app crashes:

dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
Referenced from: …
Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation

My code looks something like this:

@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency
func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
    ShapeResource.generateConvex(from: meshResource)
}

@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
    // This will only be available for iOS 18 and above
    return try await ShapeResource.generateConvex(from: meshResource)
}

if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
    if #available(visionOS 1.0, iOS 18.0, *) {
        try? await generateConvexShapeAsync(from: meshResource) // await shapeResources.append(.generateConvex(from: meshResource))
    } else {
        try? generateNonAsyncConvexShapeResource(from: meshResource)
    }
}

So I actually do check for the system version and only call the async variant on iOS 18. Any hints how to fix that? Thanks!
2
0
369
3w
RoomCaptureSession custom ARSession missing SceneDepth
Hello,

We are exploring the iOS 17 RoomPlan updates that allow a custom ARSession to be passed into the RoomCaptureSession via the new initializer:

let roomCaptureSession = RoomCaptureSession(arSession: myARSession)

Currently we use our ARSession to extract sceneDepth from the ARFrames via the delegate callback. This works prior to activation of the RoomCaptureSession via session.run(configuration). However, once we call run on the RoomCaptureSession, sceneDepth is no longer present on the incoming ARFrames. Are these mutually exclusive? Should we expect ARFrame depth data to be present when a RoomCaptureSession is running with the shared ARSession?
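A diagnostic sketch, not an answer: it checks whether the scene-depth frame semantic is still present on the shared session's active configuration once RoomCaptureSession takes over, which would show whether RoomPlan is reconfiguring the session without it. It assumes the delegate is set on the same ARSession instance passed to RoomCaptureSession(arSession:).

// Diagnostic sketch: compare the active frame semantics with the per-frame depth data.
import ARKit

final class DepthChecker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let semantics = session.configuration?.frameSemantics ?? []
        print("sceneDepth semantic active: \(semantics.contains(.sceneDepth)), depth present: \(frame.sceneDepth != nil)")
    }
}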
1
0
244
3w
ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with RealityComposer 2 fails to load on iOS 18 and macOS 15 with the following error:

RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)

This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)

RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
0
0
198
3w
Blurred Background (RealityKit) Shader Graph Node not working on iOS/macOS
The Shader Graph node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) – works fine within the Reality Composer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content, it just renders as opaque in a single color (Screenshot 2). Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. when only a background skybox is set.

Expected behavior: It would be great if this node worked the same way as it does on visionOS, since this would allow for really interesting and nice effects in scenes. Feedback ID: FB15081190
0
1
199
3w
OcclusionMaterial renders plain black when custom skybox environment is set
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show the skybox behind it. This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma. Feedback ID: FB15081053
0
0
161
3w
Open Reality Composer Pro scenes as if they were files
Is there a good way to have an app open scenes from Reality Composer Pro without that scene being part of the app, kind of like you would browse any other file? I have a business where I provide walkthroughs of building plans. I do this by building an app containing a Reality Composer Pro scene of the customer's building. For each customer I duplicate the app and change the scene content (one app per customer). In order to scale my business I would love to be able to distribute the scenes to the customers so they could just open them in the app, but I don't see how to do that. If you have any idea how this can be done, that would be great! I have very little experience in coding, so please assume I don't know what you are talking about when explaining.
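One common approach (an assumption, not the only possible answer): export each customer's scene from Reality Composer Pro as a .usdz (or .reality) file and have a single app load that file at runtime from a URL, for example one the customer opened with a file importer or downloaded. A minimal sketch, assuming RealityKit's async Entity(contentsOf:) initializer and a hypothetical fileURL supplied by the rest of the app:

// Minimal sketch of loading a scene file at runtime instead of compiling it into the app.
import SwiftUI
import RealityKit

struct CustomerSceneView: View {
    let fileURL: URL   // hypothetical: provided by a file importer or a download step

    var body: some View {
        RealityView { content in
            do {
                // Load the exported .usdz / .reality file and add it to the scene.
                let sceneEntity = try await Entity(contentsOf: fileURL)
                content.add(sceneEntity)
            } catch {
                print("Failed to load scene: \(error)")
            }
        }
    }
}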
3
0
192
3w
Open the Vision Pro camera using the Enterprise API and view it in the application window
I want to see the Vision Pro camera view in my application window. I wrote some code based on Apple's samples, but I'm stuck on CVPixelBuffer: how do I convert the pixel buffer into a video frame I can display?

Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            var arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?
            await arKitSession.queryAuthorization(for: [.cameraAccess])
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                // ====
                print("=========================")
                print(mainCameraSample.pixelBuffer)
                print("=========================")
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}

I want to convert mainCameraSample.pixelBuffer into video. Could you please guide me?
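For showing the frames in a window, one common path is CVPixelBuffer → CIImage → CGImage → SwiftUI Image; writing an actual video file would instead go through AVAssetWriter, which is not shown here. A minimal sketch of the per-frame conversion, assuming the buffer format is one Core Image can read:

// Sketch of one conversion path (CVPixelBuffer -> CGImage -> SwiftUI Image).
// A single CIContext is reused; creating one per frame would be wasteful.
import CoreImage
import SwiftUI

let ciContext = CIContext()

func makeImage(from pixelBuffer: CVPixelBuffer) -> Image? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return Image(decorative: cgImage, scale: 1.0)
}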
2
0
279
3w
Opening other apps while Immersive space is open?
I am working on a small side project for the Apple Vision Pro. One thing I'm trying to figure out: can I open another app while having the immersive space open from my original app? As an example, I want to present a fully immersive view displaying a 360-degree photo. I then want to allow the user to open Safari or any other app of their choice and use the immersive environment as a background. Is this possible? Everything I've read so far seems to say no, but I wasn't sure if someone has found a way to make this possible.
1
0
143
3w
Casting shadows on the ground
In the visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor. I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor. Do I need to enable something to have my character cast a shadow on the real-world floor?
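One thing that sometimes matters here (this is an assumption, not a confirmed fix): the component may need to sit on the descendant entities that actually own the meshes, not only on the root entity loaded from Reality Composer Pro. A minimal sketch that walks the hierarchy and applies it to every mesh-owning entity:

// Sketch: attach GroundingShadowComponent to every entity that has a ModelComponent.
import RealityKit

func enableGroundingShadows(on root: Entity) {
    if root.components.has(ModelComponent.self) {
        root.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in root.children {
        enableGroundingShadows(on: child)
    }
}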
1
0
214
3w
Modify CapturedRoom Objects
The RoomPlan API makes it possible to serialize and deserialize CapturedRoom objects. This opens up the possibility of modifying a CapturedRoom (e.g. deleting surfaces/objects) in its deserialized state and serializing it as a new CapturedRoom. All modified attributes are loaded accordingly, so far so good. My problem starts with the StructureBuilder and its merge function capturedStructure(): this function ignores any modifications to attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation). If someone has more information or a working solution for modifying CapturedRooms, please let me know. Additionally, if there is documentation about the CoreModel attribute somewhere, please post a link here.
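For readers following along, here is a sketch of the encode / edit / decode round trip the post describes, done on the JSON representation since CapturedRoom does not appear to expose mutable properties for its surfaces and objects. It does not address the StructureBuilder problem above; capturedStructure() apparently reads from data this round trip never touches. The closure parameter and the key names you would edit are assumptions.

// Sketch of the serialize / modify / deserialize round trip on a CapturedRoom.
import Foundation
import RoomPlan

func roundTrip(_ room: CapturedRoom, editing edit: (inout [String: Any]) -> Void) throws -> CapturedRoom {
    let data = try JSONEncoder().encode(room)            // CapturedRoom conforms to Codable
    var json = try JSONSerialization.jsonObject(with: data) as? [String: Any] ?? [:]
    edit(&json)                                          // e.g. remove entries from one of the arrays
    let editedData = try JSONSerialization.data(withJSONObject: json)
    return try JSONDecoder().decode(CapturedRoom.self, from: editedData)
}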
0
0
250
4w
Vision Pro App Stuck on Loading Screen – Works Fine on Simulator
Hi everyone,

I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine on the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates when it loads, as expected, but it never progresses beyond that point.

Issue Details:
The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I've searched through my project, but there's no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device. There's also this warning:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.

What I've Tried:
Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).
Tested the app with minimal UI to isolate potential causes, but the issue persists.
Checked the app's Info.plist configuration to ensure it's properly set up for Vision Pro.
No crashes, just a loading screen hang on the device, while the app works fine in the Vision Pro simulator.

Additional Info:
The app's UI consists of a loading animation (pulsating logo) before transitioning to the main content.
Using Xcode 16.1 Beta, visionOS SDK.
The app is based on SwiftUI, with Vision Pro optimizations for an immersive experience.

Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regards to the exception or potential resource-loading issues specific to the device. Thanks in advance!
1
0
268
Sep ’24
Unity/PolySpatial GameController framework failing to load
I have a simple example of a motion matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input. I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?

error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed

I'm using:
Xcode 16 beta 6
Unity 6000.0.17f1
visionOS 2.0 beta 9
2
0
283
Sep ’24
Using Native ARKit Object Tracking in Unity
Hello,

Has anyone had success implementing object tracking in Unity, or adding native tracking capability to a visionOS project built from Unity? I am working on an application for Vision Pro, mainly in Unity using PolySpatial. The application requires me to track objects and make decisions based on a tracked object's location. I was able to create an object tracking application in native Swift, but I have not yet been able to combine this with my Unity project. Each separate project (the main Unity app using PolySpatial and the native Swift app) builds and deploys successfully onto Vision Pro.

I know that PolySpatial and AR Foundation do not support ARKit's object tracking feature for Vision Pro as of today; they only support image tracking inside Unity. For that reason I have been exploring different ways of creating a bridge for two-way interaction between the native tracking functionality and the rest of the functionality in Unity. Below are the methods I have tried so far, without success:

Package the tracking functionality as a Swift plugin and access it in Unity, then build for Vision Pro: I can create packages and access them for simple exposed variables and methods, but not for outputs and methods from ARKit, which throw dependency errors while trying to build the Swift package.

Build the project from Unity to Vision Pro, expose a boolean to start/stop tracking that can be read by the native code, and then carry the tracking classes into the built project. In this approach I keep getting an error that says _TrackingStateChanged cannot be found, which is the call that passes the bool toggled by the Unity button press:

using System.Runtime.InteropServices;

public class UnityBridge
{
    [DllImport("__Internal")]
    private static extern void TrackingStateChanged(bool isTracking);

    public static void NotifyTrackingState()
    {
        // Call the Swift method
        TrackingStateChanged(TrackingStartManager.IsTrackingActive());
    }
}

This seems to be translated to C++ code in the IL2CPP output from Unity, and even though I made sure that all necessary packages were added to the target, I keep receiving this error from the UnityFramework plugin:

Undefined symbol: _TrackingStateChanged

I have considered extending the current image tracking approach in AR Foundation to include object tracking, but that seems too complicated for my use case and time frame for now. The final resort will be to forego the Unity implementation and do everything in native code. However, I really want to be able to use Unity's conveniences, and I have very limited experience with Swift development.
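One possible cause of the undefined symbol (an assumption, not a confirmed diagnosis): no C-visible function with that exact name is being linked into the app. DllImport("__Internal") expects an unmangled C symbol, which a plain Swift function does not produce. A minimal sketch of the Swift side, assuming the file is compiled into the app/framework target that UnityFramework links against; the function body is a placeholder.

// Sketch: export a C-named entry point so the IL2CPP-generated extern can resolve it.
import Foundation

@_cdecl("TrackingStateChanged")
public func trackingStateChanged(_ isTracking: Bool) {
    // Forward the flag to whatever native tracking code needs it.
    // Note: Bool marshaling between C# and Swift may need attention
    // (e.g. [MarshalAs] on the C# side); this sketch assumes a 1-byte bool.
    print("Tracking state from Unity: \(isTracking)")
}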
0
0
270
Sep ’24
Post Notification to RCP but Timeline won't fire
I am trying to use onNotification in a BehaviorComponent to fire my composed timeline actions, which are made up of one TransformTo action, one Hide action, and a final Notification action indicating the other two actions are finished. With this post, I successfully send a notification to RCP to fire my timeline with the identifier:

NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene,
        "RealityKit.NotificationTrigger.Identifier": "onSomethingStart"
    ]
)

On the other hand, to subscribe to the timeline's Notification action, I append an onReceive below my RealityView and successfully receive the notification:

private let notificationTrigger = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger"))

.onReceive(notificationTrigger) { out in
    guard let entity = out.userInfo?["RealityKit.NotificationTrigger.SourceEntity"] as? Entity,
          let notificationName = out.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
    else { return }
    debugPrint("Received notification: \(notificationName), entity name: \(entity.name)")
}

Which means my timeline is being fired, because I can receive the notification from my timeline. But the other two actions just don't appear to be working. When I play the timeline in RCP it works fine. Anything I missed to make it tick?

Xcode 16.1 beta, visionOS beta 9
2
0
308
Sep ’24
Component with SIMD3<Float>
I want to use SIMD values in a design-time component:

public struct SomeComponent: Component, Codable {
    public var Magnitude: SIMD3<Float> = .zero
}

Is extra work required? I had understood that serialization of simple values, including SIMD, would be handled by Reality Composer Pro. At run time I get the error:

decodeComponent: Unexpected error: keyNotFound(CodingKeys(stringValue: "Magnitude", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: "Magnitude", intValue: nil) ("Magnitude").", underlyingError: nil))
Asset deserialization failed. Asset type "SceneAsset". Details: Failed to deserialize "/container/@shared/17/object". Reason: Failed to deserialize Swift Codable component of type RealityKitContent.SomeComponent.
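For comparison, here is a sketch of how such a component is commonly declared and registered. Whether the missing registration call (or a stale component instance saved in the scene before the property existed) is actually the cause of the keyNotFound error above is an assumption, not a confirmed diagnosis.

// Sketch: Codable component with a SIMD3<Float> property, plus explicit registration.
import RealityKit

public struct SomeComponent: Component, Codable {
    public var Magnitude: SIMD3<Float> = .zero

    public init() {}
}

// Typically called once early in the app's lifetime, e.g. in the App initializer:
// SomeComponent.registerComponent()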
2
0
237
Sep ’24