Discuss Spatial Computing on Apple Platforms.


Not getting camera frames using the Enterprise API in Vision Pro
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why are no frames arriving, and where am I going wrong in the code? Please guide me.

for await cameraFrame in cameraFrameUpdates {
    print("cameraFrame:: \(cameraFrame)")
}

var body: some View {
    VStack {
        image
            .resizable()
            .scaledToFit()
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard error is ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() failed: \(error)")
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(String(describing: cameraFrame.sample(for: .left)))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print(" ======== PIXEL BUFFER ::: \(String(describing: self.pixelBuffer)) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
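One thing worth checking, offered only as a hedged sketch and not a confirmed fix: with the Enterprise API, main-camera access has to be authorized (and the enterprise entitlement/license present) before CameraFrameProvider delivers frames; otherwise the for-await loop can simply never yield. A minimal pre-flight check, reusing the arkitSession and cameraFrameProvider names from the code above:

// Assumption: the app carries the Enterprise main-camera-access entitlement.
let authorization = await arkitSession.requestAuthorization(for: [.cameraAccess])
guard authorization[.cameraAccess] == .allowed else {
    print("Main camera access not granted: \(String(describing: authorization[.cameraAccess]))")
    return
}
try await arkitSession.run([cameraFrameProvider])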
2 replies · 0 boosts · 217 views · 2w
Xcode 16 – Symbol not found: ShapeResource generateConvex
Hi, I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device, the app crashes:

dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
  Referenced from: …
  Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation

My code looks something like this:

@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency
func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
    ShapeResource.generateConvex(from: meshResource)
}

@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
    // This will only be available for iOS 18 and above
    return try await ShapeResource.generateConvex(from: meshResource)
}

if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
    if #available(visionOS 1.0, iOS 18.0, *) {
        try? await generateConvexShapeAsync(from: meshResource) // await shapeResources.append(.generateConvex(from: meshResource))
    } else {
        try? generateNonAsyncConvexShapeResource(from: meshResource)
    }
}

So I do check the OS version and only call the async variant on iOS 18. Any hints on how to fix this? Thanks!
2 replies · 0 boosts · 340 views · 3w
OcclusionMaterial renders plain black when custom skybox environment is set
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show the skybox behind it. This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma. Feedback ID: FB15081053
0 replies · 0 boosts · 151 views · 3w
Open the Vision Pro camera using the Enterprise API and view it in the application window
I want to see the Vision Pro camera view in my application window. I wrote some code based on Apple's samples, but I'm stuck on CVPixelBuffer: how do I convert the pixel buffer into a video frame?

Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            var arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?
            await arKitSession.queryAuthorization(for: [.cameraAccess])
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                print("=========================")
                print(mainCameraSample.pixelBuffer)
                print("=========================")
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}

I want to convert "mainCameraSample.pixelBuffer" into video frames I can display. Could you please guide me?
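A minimal sketch of one possible direction (not from the original post, and the helper name is an assumption): convert each CVPixelBuffer into a CGImage via Core Image so it can be shown in a SwiftUI Image. The CIContext should be created once and reused rather than per frame.

import CoreImage

let ciContext = CIContext()   // create once, reuse for every frame

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}

// Inside the frame loop, the result could drive an @State-backed SwiftUI view, e.g.:
// if let cgImage = makeCGImage(from: mainCameraSample.pixelBuffer) {
//     self.cameraImage = Image(decorative: cgImage, scale: 1.0)   // "cameraImage" is a hypothetical @State property
// }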
2 replies · 0 boosts · 251 views · 3w
Opening other apps while Immersive space is open?
I am working on a small side project for the Apple Vision Pro. One thing I'm trying to figure out is whether I can open another app while the immersive space from my original app is open. As an example, I want to present a fully immersed view displaying a 360-degree photo, then allow the user to open up Safari or any other app of their choice and use the immersive environment as a background. Is this possible? Everything I've read so far seems to say no, but I wasn't sure if someone has found a way to make this possible.
1 reply · 0 boosts · 138 views · 3w
Casting shadows on the ground
In the visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor. I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor. Do I need to enable something to have my character cast a shadow on the real-world floor?
1 reply · 0 boosts · 198 views · 3w
Modify CapturedRoom Objects
The RoomPlan API makes it possible to serialize and de-serialize CapturedRoom objects. This opens up the possibility of modifying a CapturedRoom (e.g. deleting surfaces/objects) in its de-serialized state and serializing it as a new CapturedRoom. All modified attributes are loaded accordingly, so far so good. My problem starts with the StructureBuilder and its merge function capturedStructure(). This function ignores any modifications to the attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation). If someone has more information or a working solution on how to modify CapturedRooms, please let me know. Additionally, if there is documentation about the CoreModel attribute somewhere, please post a link here.
0 replies · 0 boosts · 238 views · 3w
Vision Pro App Stuck on Loading Screen – Works Fine on Simulator
Hi everyone, I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine on the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates as it loads, as expected, but it never progresses beyond that point.

Issue Details:
The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I've searched through my project, but there's no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device. There's also this warning:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.

What I've Tried:
Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).
Tested the app with minimal UI to isolate potential causes, but the issue persists.
Checked the app's Info.plist configuration to ensure it's properly set up for Vision Pro.
No crashes, just a loading-screen hang on the device, while the app works fine in the Vision Pro simulator.

Additional Info:
The app's UI consists of a loading animation (pulsating logo) before transitioning to the main content.
Using Xcode 16.1 beta, visionOS SDK.
The app is based on SwiftUI, with Vision Pro optimizations for an immersive experience.

Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regard to the exception or potential resource-loading issues specific to the device. Thanks in advance!
1 reply · 0 boosts · 256 views · 3w
Unity/PolySpatial GameController framework failing to load
I have a simple example of a motion matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input. I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?

error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed

I'm using:
Xcode 16 beta 6
Unity 6000.0.17f1
visionOS 2.0 beta 9
1 reply · 0 boosts · 268 views · 3w
Issue with losing alignment on multi-room session
Hello! We're having an issue in our app, which implements multi-room scanning via RoomPlan, where the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room).

To clarify a few points:
We are using the RoomCaptureView, starting a new room using
roomCaptureView.captureSession.run(configuration: captureSessionConfig)
and stopping the room scan via
roomCaptureView.captureSession.stop(pauseARSession: false)
We are re-using the same ARSession, which is passed into the RoomCaptureView like so:
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)

Any clue why the AR world origin is reset? I need it to be consistent for storing per-frame camera positions. Thanks!
0 replies · 1 boost · 139 views · 3w
Collision detection between two entities not working
Hi, I've tried to implement collision detection between the left index finger (via a sphere) and a simple 3D rectangular box. The sphere on my left index finger goes right through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code.

App.swift

import SwiftUI

@main
private struct TrackingApp: App {
    public init() {
        ...
    }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                                        materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]),
                                              materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
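One possibility worth checking, offered purely as a hedged sketch and not a confirmed fix: content under an AnchorEntity runs in its own isolated physics simulation by default, so collisions against entities outside that anchor may never be reported. On visionOS 2 the anchoring component exposes a physicsSimulation setting that can opt the hand anchor into the scene's main simulation (this assumes the visionOS 2 API):

// Hedged sketch: let the hand-anchored sphere take part in the scene's main
// physics/collision simulation instead of the anchor's isolated one.
if #available(visionOS 2.0, *) {
    var anchoring = AnchoringComponent(.hand(.left, location: .indexFingerTip))
    anchoring.physicsSimulation = .none
    leftHandIndexFingerEntity.components.set(anchoring)
}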
1 reply · 0 boosts · 291 views · 4w
Speed up the conversion of MV-HEVC to Side-by-side
I have read the article Converting side-by-side 3D video to multi-view HEVC and spatial video, and now I want to convert back to side-by-side 3D video. On an iPhone 15 Pro Max, the conversion time is about 1:1 with the original video length. I do almost the same as the article describes; the only difference is that I read the frames from the spatial video and merge them into side-by-side. My current frame-merging code is below. Are there any suggestions to speed up the process? Or, for the official article, is there anything we can do to speed up the conversion?

// Merge frame
let leftCI = resizeCVPixelBufferFill(bufferLeft, targetSize: targetSize)
let rightCI = resizeCVPixelBufferFill(bufferRight, targetSize: targetSize)

let lbuffer = convertCIImageToCVPixelBuffer(leftCI!)!
let rbuffer = convertCIImageToCVPixelBuffer(rightCI!)!

pixelBuffer = mergeFrames(lbuffer, rbuffer)
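A hedged idea for trimming per-frame work, not taken from the article: since resizeCVPixelBufferFill already yields CIImages, the CIImage-to-CVPixelBuffer round trips (convertCIImageToCVPixelBuffer) could in principle be skipped by compositing both eyes and rendering once into a pooled output buffer. A sketch under those assumptions (outputPool is a hypothetical CVPixelBufferPool whose buffers are twice targetSize wide, and the shared ciContext is created once outside the loop):

import CoreImage
import CoreVideo

let ciContext = CIContext()   // create once, reuse for every frame

func mergeSideBySide(left: CIImage, right: CIImage, pool: CVPixelBufferPool) -> CVPixelBuffer? {
    var output: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &output)
    guard let buffer = output else { return nil }

    // Shift the right eye so it sits next to the left eye, then render the
    // combined image into the destination buffer in a single pass.
    let shiftedRight = right.transformed(by: CGAffineTransform(translationX: left.extent.width, y: 0))
    let sideBySide = shiftedRight.composited(over: left)
    ciContext.render(sideBySide,
                     to: buffer,
                     bounds: CGRect(x: 0, y: 0, width: left.extent.width * 2, height: left.extent.height),
                     colorSpace: CGColorSpace(name: CGColorSpace.itur_709))
    return buffer
}

// Per frame, using the post's variable names:
// pixelBuffer = mergeSideBySide(left: leftCI!, right: rightCI!, pool: outputPool)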
1 reply · 0 boosts · 236 views · 4w
Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) or plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the position in playback to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it is currently on. Any tips on implementation? Thanks!

import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
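One possible direction, offered as a hedged sketch rather than a confirmed recipe (the playback property name is an assumption): playAnimation(_:) returns an AnimationPlaybackController, a reference type that keeps its current playback time, so flipping its speed should reverse the clip from wherever it currently is instead of regenerating a resource on every toggle.

// Hedged sketch: hold on to the controller returned by playAnimation(_:).
@State private var playback: AnimationPlaybackController? = nil

// After the entity is added in the RealityView make closure:
// playback = scene!.playAnimation(scene!.availableAnimations[0], startsPaused: true)

// Replacing the body of .onChange(of: isPlaying):
// playback?.speed = isPlaying ? 1 : -1   // assumes negative speeds play the clip in reverse
// playback?.resume()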
2 replies · 0 boosts · 302 views · 4w
Unable to update IBL at runtime
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS. I have a scene with a model the user interacts with and 360 panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure. The first update works as expected, but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks.

Simplified example:

// Task spun up from the update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation,
       let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)

        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }

        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)

        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
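Purely as a hedged alternative to try (an assumption, not a confirmed fix): rather than removing and recreating the IBL entity and receiver each time, mutate the existing component's source in place and write it back, reusing the names from the snippet above:

if let iblEntity = content.entities.first(where: { $0.name == "ibl" }),
   var iblComponent = iblEntity.components[ImageBasedLightComponent.self] {
    iblComponent.source = .single(resource)
    iblComponent.intensityExponent = information.intensity
    iblEntity.components.set(iblComponent)   // write the modified copy back
    iblEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
    parentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}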
1 reply · 0 boosts · 222 views · Sep ’24
Track hardware input (keyboard, trackpad, etc.) in visionOS app during Mac Virtual Display usage?
Hi, I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space. So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window. Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
0 replies · 0 boosts · 208 views · Sep ’24
Creating Shared Experiences in Physical Locations
Hello everyone, I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical locations. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
3 replies · 0 boosts · 448 views · Aug ’24
Memory Leak using simple app with visionOS
Hello. When displaying a simple app like this:

import SwiftUI

struct ContentView: View {
    var body: some View {
        EmptyView()
    }
}

and running the Leaks app from the developer tools in Xcode, I see a memory leak which I don't see when running the same application on iOS. You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application. Any ideas on what is going on? Thanks!
2 replies · 0 boosts · 336 views · Aug ’24