Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics

Post

Replies

Boosts

Views

Activity

visionOS Enterprise API: failing to get a cameraFrame from cameraFrameUpdates
I am developing a visionOS app and need the main camera access provided by the Enterprise API. I have applied for an enterprise license and added the main camera access capability and the license file in Xcode. In my code, I call await arKitSession.queryAuthorization(for: [.cameraAccess]) to request camera permission. After the permission is granted, I run the cameraFrameProvider on the arKitSession. However, the loop

for await cameraFrame in cameraFrameUpdates {
    print("hello")
    guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
    pixelBuffer = mainCameraSample.pixelBuffer
}

never receives any frames from the camera; even the print("hello") inside the loop does not execute. The app does not crash or throw any errors. Here is my full code:

import SwiftUI
import ARKit

struct cameraTestView: View {
    @State var pixelBuffer: CVPixelBuffer?

    var body: some View {
        VStack {
            Button(action: {
                Task {
                    await loadCameraFeed()
                }
            }) {
                Text("test")
            }
            if let pixelBuffer = pixelBuffer {
                let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
                let context = CIContext(options: nil)
                if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
                    Image(uiImage: UIImage(cgImage: cgImage))
                }
            } else {
                Image("exampleCase")
                    .resizable()
                    .scaledToFill()
                    .frame(width: 400, height: 400)
            }
        }
    }

    func loadCameraFeed() async {
        // Main camera feed access example
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        let cameraFrameProvider = CameraFrameProvider()
        let arKitSession = ARKitSession()

        let cameraAuthorization = await arKitSession.queryAuthorization(for: [.cameraAccess])
        guard cameraAuthorization == [ARKitSession.AuthorizationType.cameraAccess: ARKitSession.AuthorizationStatus.allowed] else {
            return
        }

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            return
        }

        let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0])
        if cameraFrameUpdates != nil {
            print("identify cameraFrameUpdates")
        } else {
            print("fail to get cameraFrameUpdates")
            return
        }

        for await cameraFrame in cameraFrameUpdates! {
            print("hello")
            guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
            pixelBuffer = mainCameraSample.pixelBuffer
        }
    }
}

#Preview(windowStyle: .automatic) {
    cameraTestView()
}

When I click the button, the console prints: identify cameraFrameUpdates. It seems to get stuck waiting for a cameraFrame from cameraFrameUpdates. This occurs on visionOS 2.0 Beta (just updated) with Xcode 16 Beta 6 (just updated). Does anyone have a workaround for this? I would be grateful for any help.
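One thing worth ruling out: ARKit data providers on visionOS generally only deliver data while an immersive space is open, so starting the camera feed from a plain window may never yield frames. Below is a rough, untested sketch of that setup; the "MainCameraSpace" identifier and the app/view names are placeholders, and loadCameraFeed() refers to the function in the post above.

import SwiftUI
import RealityKit

@main
struct CameraTestApp: App {
    var body: some Scene {
        WindowGroup {
            cameraTestView()
        }
        // ARKit data providers are expected to deliver data only while an
        // immersive space like this one is open.
        ImmersiveSpace(id: "MainCameraSpace") {
            RealityView { _ in }
        }
    }
}

struct StartFeedView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Start camera feed") {
            Task {
                // Open the immersive space first, then start the ARKitSession.
                switch await openImmersiveSpace(id: "MainCameraSpace") {
                case .opened:
                    // Now call loadCameraFeed() from the post above; the
                    // cameraFrameUpdates loop should start yielding frames.
                    break
                default:
                    print("Immersive space did not open")
                }
            }
        }
    }
}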
2
1
378
Aug ’24
visionOS system boundary
Hello, I want to ask the visionOS developers at Apple whether it is possible to extend or disable (toggle) the play space boundary, which is currently 1.5 meters (roughly a 10-foot play area). With such a great display and so much computing power, it is a real shame that we can't run any room-scale VR. I'm currently working on an undergraduate thesis that uses the AVP, but I didn't know about this boundary until I had built my room in Unity and deployed it to the device. Is it possible to cut us some slack regarding the boundary? Many thanks.
3
0
293
Aug ’24
Spatial video capture quality using the API differs from that of the iPhone's built-in Camera app
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video. But I found the video by my app differs from the iPhone build-in camera app in: Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature. In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image. I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app. Is there any api or parameters to make my app more close to the iPhone build-in app? I have tried "whiteBalanceMode" and "exposureMode" but no luck.
2
0
369
Aug ’24
Is there a way to make an .objcap file from a .usdz file?
I have designed a 3D object and exported it as a USDZ. I also 3D printed the object. I want to use the physical object as a 3D trigger for an AR experience I am building. My question: is there a process that would let me take the .usdz file and convert it into a .arobject or a .objcap medium/low-density point cloud to use as an AR trigger? Because I do have the 3D print of the object, I did use the "scan" option when setting up my scene, but the resolution/fidelity seems really low and the results I get are mediocre. I would love to take the USDZ I already have and use it to generate a file that can serve as a 3D trigger. Is this possible, or is there a process for doing it? I am able to take the scan from Reality Composer (exported as a .objcap file), send it to Reality Converter on my Mac, and make a USDZ from it; I am looking for a way to go the other direction, .usdz > .objcap or .arobject. I am trying to build an experience that mimics projection mapping, but in AR. I have a 3D object that I built and textured in Substance Painter, and I also printed the object in a base gray color. I want to use the 3D print as an AR trigger that starts a scene placing/overlaying/projection-mapping the textured 3D model over the gray 3D-printed model. Ideally the mapped 3D model would be spatially attached to the 3D print and move with it when the object is handled.
1
0
319
Aug ’24
Update World Anchor using object anchor
Hi. I display buildings in a mixed immersive view. Right now the building appears relative to the person when the view is opened (world anchor). To position the building precisely, I want to use object tracking, so I set up a project following the WWDC object tracking session. That works well, sort of: with an object anchor, the 3D object tied to the anchor disappears as soon as the tracked object is out of view, and with big objects you never get the chance to look around. I figure I need to give my 3D object a world anchor, and only have that world anchor update when a change in the object anchor is detected. How do I do that? Preferably using the tools in Reality Composer Pro (or very well explained, as I am new to code).
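For reference, an untested sketch of one possible approach in code: keep the building on an ordinary entity parented to the RealityView content, and copy the object anchor's pose into it only while the anchor is tracked, so the building simply keeps its last pose when the tracked object leaves the field of view. The file name "building.referenceobject" and buildingEntity are placeholders.

import ARKit
import RealityKit

@MainActor
func trackBuilding(buildingEntity: Entity) async throws {
    // Object tracking, like other ARKit providers, needs an open immersive space.
    let session = ARKitSession()

    guard let url = Bundle.main.url(forResource: "building", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])

    try await session.run([objectTracking])

    for await update in objectTracking.anchorUpdates {
        let anchor = update.anchor
        // Only move the building while tracking is valid; otherwise leave it where it is.
        guard anchor.isTracked else { continue }
        buildingEntity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
    }
}

If the pose also needs to survive across sessions, a WorldAnchor from WorldTrackingProvider could be layered on top, but for keeping the building visible while looking around this may be enough.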
4
0
329
Aug ’24
visionOS AVPlayer issue
I wanted to report an issue I've encountered with the latest Beta 6 update concerning the immersive space feature. Before this update, when I was in an immersive space and clicked a window button to play a video using AVPlayer, I had the option to keep other windows open and accessible within the environment. After updating to Beta 6, that no longer seems to be possible. Could you please investigate this issue? It would be helpful to know whether this is an intentional change or a bug affecting window management in immersive space. Thank you for your attention to this matter. I look forward to your response.
3
1
369
Aug ’24
RealityView in macOS, Skybox, and lighting issue
I am testing RealityView on a Mac, and I am having trouble controlling the lighting. I initially add a red cube, and everything is fine (see figure 1). I then activate a skybox with a star field; the star field appears, and the red cube is now lit only by the star field. Then I deactivate the skybox, expecting the original lighting to return, but the cube continues to be lit by the skybox. The background no longer shows the skybox, but the cube is never lit the way it originally was. Is there a way to return the model's lighting to what it was before adding the skybox? I seem to recall that ARView's environment property had both a lighting.resource and a background, but I don't see both of those properties in RealityViewCameraContent's environment. Sample code for macOS 15.1 Beta (24B5024e), Xcode 16.0 beta (16A5171c):

struct MyRealityView: View {
    @Binding var isSwitchOn: Bool
    @State private var blueNebulaSkyboxResource: EnvironmentResource?

    var body: some View {
        RealityView { content in
            // Create a red cube 10 cm on a side
            let mesh = MeshResource.generateBox(size: 0.1)
            let simpleMaterial = SimpleMaterial(color: .red, isMetallic: false)
            let model = ModelComponent(mesh: mesh, materials: [simpleMaterial])
            let redBoxEntity = Entity()
            redBoxEntity.components.set(model)
            content.add(redBoxEntity)

            // Load skybox
            let blueNeb2Name = "BlueNeb2"
            blueNebulaSkyboxResource = try? await EnvironmentResource(named: blueNeb2Name)
        } update: { content in
            if (blueNebulaSkyboxResource != nil) && (isSwitchOn == true) {
                content.environment = .skybox(blueNebulaSkyboxResource!)
            } else {
                content.environment = .default
            }
        }
        .realityViewCameraControls(CameraControls.orbit)
    }
}

Figure 1 (default lighting before adding the skybox)
Figure 2 (after activating the skybox with the star field; the cube is lit by / reflects the skybox)
Figure 3 (after removing the skybox by setting content.environment to .default; the cube still reflects the skybox and is hard to see)
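One possible workaround, as an untested sketch (assuming ImageBasedLightComponent is available in the macOS 15 SDK): light the cube with an explicit image-based light so its lighting no longer follows content.environment, and toggling the skybox only changes the background. The "Sunlight" resource name is a placeholder.

import RealityKit
import SwiftUI

func applyFixedLighting(to modelEntity: Entity, in content: RealityViewCameraContent) async {
    // A dedicated environment resource used only for lighting, not as a background.
    guard let lightResource = try? await EnvironmentResource(named: "Sunlight") else { return }

    // An entity that carries the image-based light.
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(lightResource)))
    content.add(lightEntity)

    // Tell the model to be lit by that specific light entity, independent of the skybox.
    modelEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}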
1
0
280
Aug ’24
Placing 3D object in video
Is there any way to place 3D objects in a video, perhaps using ARKit or MetalKit? I have tried extracting frames from the video, drawing a cube using an SCNNode, rendering it into a UIImage, then gathering all the images and re-creating the video. But this isn't a feasible solution, as it creates a huge memory spike and ultimately triggers a memory warning. Is there another way to draw 3D objects into a video file?
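For reference, an untested sketch of one alternative (the asset, scene, and URL names are placeholders): let AVFoundation pull frames lazily through an AVMutableVideoComposition, render the SceneKit overlay offscreen per frame with SCNRenderer, and composite with Core Image inside an autoreleasepool, so the whole frame set is never held in memory at once.

import AVFoundation
import SceneKit
import CoreImage
import Metal
import UIKit

func exportVideo(asset: AVAsset, overlayScene: SCNScene, to outputURL: URL) {
    let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
    renderer.scene = overlayScene
    // A transparent scene background lets only the 3D content composite over the frame.
    overlayScene.background.contents = UIColor.clear
    let ciContext = CIContext()

    let composition = AVMutableVideoComposition(asset: asset) { request in
        autoreleasepool {
            let background = request.sourceImage
            // Render the 3D overlay offscreen at this frame's timestamp.
            let overlay = renderer.snapshot(atTime: request.compositionTime.seconds,
                                            with: background.extent.size,
                                            antialiasingMode: .multisampling2X)
            guard let overlayCI = CIImage(image: overlay) else {
                request.finish(with: background, context: ciContext)
                return
            }
            request.finish(with: overlayCI.composited(over: background), context: ciContext)
        }
    }

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }
    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        print("Export finished with status: \(export.status.rawValue)")
    }
}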
1
0
261
Aug ’24
Generating MeshResource.Skeleton for IKComponent in Reality Composer Pro / Xcode
I'm building a visionOS 2.0 app where the AVP user can change the position of the end effector of a robot model, which was generated in Reality Composer Pro (RCP) from primitive shapes. I'd like to use an IKComponent to achieve this, following the example code here. I am able to load my entity and access its MeshResource following the IKComponent example code, but on the line

let modelSkeleton = meshResource.contents.skeletons[0]

I get an error, since my MeshResource does not include a skeleton. Is there some way to generate the skeleton directly with my entities in RCP, or is there a way to add a skeleton generated in Xcode to an existing MeshResource that corresponds to my entities from RCP? I have tried MeshSkeletonCollection.insert() with a skeleton I generated in Xcode, but I cannot figure out how to assign that skeleton collection back to the entity's MeshResource.
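For reference, a rough, unverified sketch of the second half of that question: rebuilding the entity's MeshResource with the Xcode-generated skeleton inserted into its contents. Whether IK then actually drives the geometry still depends on the mesh parts having joint influences bound to that skeleton, which primitive shapes from RCP won't have by default.

import RealityKit

// Unverified sketch: insert a skeleton built in code into an RCP entity's mesh
// so meshResource.contents.skeletons is no longer empty.
@MainActor
func attachSkeleton(_ skeleton: MeshResource.Skeleton, to entity: Entity) async throws {
    guard var model = entity.components[ModelComponent.self] else { return }

    // Copy the mesh contents, add the skeleton, and regenerate the resource.
    var contents = model.mesh.contents
    contents.skeletons.insert(skeleton)
    let skinnedMesh = try await MeshResource(from: contents)

    // Write the rebuilt mesh back onto the entity.
    model.mesh = skinnedMesh
    entity.components.set(model)
}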
4
0
277
Aug ’24
Immersive experience from sample
Hi, we are currently building an app for immersive experiences of our custom content, which is displayed from a video on custom geometry in the immersive space on Vision Pro. I have enabled the AVPlayerViewController system controls that detach when entering the immersive space, as in the sample: https://developer.apple.com/documentation/visionos/building-an-immersive-media-viewing-experience For our case, we do not need the 2D screen to remain visible after entering the immersive space, only the environment. So my question is: how do we remove the window showing the video while keeping the controls, as in the Apple TV app's immersive experiences? Thanks in advance.
2
0
339
Aug ’24
Building a custom render pipeline with RealityKit
Hello experts and question seekers, I have been trying to get Gaussian splats working with RealityKit, but it is not working out for me. The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter My idea was to use the renderer provided by RealityKit (RealityRenderer, https://developer.apple.com/documentation/realitykit/realityrenderer) together with the renderer provided by MetalSplatter (SplatRenderer, https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift). With a custom render pipeline I would then be able to compose the outputs of the two renderers, making it possible, for example, to build immersive scenery with realistic environment scans as Gaussian splats, while RealityKit provides the features to build extra scenery around them, e.g. dynamic 3D models inside Gaussian splats. However, I am not able to do that with the current implementation of RealityRenderer. First, RealityRenderer appears to be an API that only renders color information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil. Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender, due to the following error messages:

Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37

I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, which lets me build a custom rendering pipeline using some of the Metal developer workflows. Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/ Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:

guard let currentDrawable = view.currentDrawable else { return }
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(deltaTime: 0.0,
                                     cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
                                     whenScheduled: { realityRenderer in
                                         print("Rendering scheduled")
                                     }, onComplete: { RealityRenderer in
                                         print("Rendering completed")
                                     })

Can you please tell me what I am doing wrong? Is there any solution that enables me to use RealityKit with, for example, Gaussian splats? Any help is greatly appreciated. All the best, Ethem Kurt
0
1
223
Aug ’24
Object Capture API crashes frequently when starting to generate the model
I have updated the sample code so that the scan starts generating the model once 15 photos are captured. I hope I can catch this error so the app won't crash. I really need help on this, thank you in advance!

Hardware Model: iPhone14,2
OS Version: iPhone OS 17.6.1 (21G93)
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000023363518c
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [525]
Triggered by Thread: 0

Thread 0 name:
Thread 0 Crashed:
0   RealityKit_SwiftUI    0x000000023363518c CoveragePointCloudMiniView.interfaceOrientation.getter + 508 (CoveragePointCloudMiniView.swift:0)
1   RealityKit_SwiftUI    0x0000000233634cdc closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 124 (CoveragePointCloudMiniView.swift:75)
2   RealityKit_SwiftUI    0x000000023363db9c partial apply for closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 20 (:0)
3   SwiftUI               0x0000000195c4bbac closure #1 in withTransaction(::) + 276 (Transaction.swift:243)
4   SwiftUI               0x0000000195c4ba90 partial apply for closure #1 in withTransaction(::) + 24 (:0)
5   libswiftCore.dylib    0x00000001903f8094 withExtendedLifetime<A, B>(::) + 28 (LifetimeManager.swift:27)
6   SwiftUI               0x0000000195b17d78 withTransaction(::) + 72 (Transaction.swift:228)
7   SwiftUI               0x0000000195b17d04 withAnimation(::) + 116 (Transaction.swift:280)
8   RealityKit_SwiftUI    0x0000000233634bfc closure #2 in CoveragePointCloudMiniView.body.getter + 664 (CoveragePointCloudMiniView.swift:73)
9   SwiftUI               0x0000000195bef134 closure #1 in closure #1 in SubscriptionView.Subscriber.updateValue() + 72 (SubscriptionView.swift:66)
10  SwiftUI               0x0000000195b3f57c thunk for @escaping @callee_guaranteed () -> () + 28 (:0)
11  SwiftUI               0x0000000195b3c864 static Update.dispatchActions() + 1140 (Update.swift:151)
12  SwiftUI               0x0000000195b3bedc static Update.end() + 144 (Update.swift:58)
13  SwiftUI               0x0000000195a691fc closure #1 in SubscriptionView.Subscriber.updateValue() + 700 (SubscriptionView.swift:66)
14  SwiftUI               0x0000000195a68eb0 partial apply for thunk for @escaping @callee_guaranteed (@in_guaranteed A.Publisher.Output) -> () + 28 (:0)
15  SwiftUI               0x0000000195a68e78 closure #1 in ActionDispatcherSubscriber.respond(to:) + 76 (SubscriptionView.swift:98)
16  SwiftUI               0x0000000195a68c80 ActionDispatcherSubscriber.respond(to:) + 816 (SubscriptionView.swift:97)
17  SwiftUI               0x0000000195a68938 ActionDispatcherSubscriber.receive(:) + 16 (SubscriptionView.swift:110)
18  SwiftUI               0x0000000195a6786c SubscriptionLifetime.Connection.receive(:) + 100 (SubscriptionLifetime.swift:195)
19  Combine               0x000000019aed29d4 Publishers.Autoconnect.Inner.receive(:) + 52 (Autoconnect.swift:142)
20  Combine               0x000000019aed2928 Publishers.Multicast.Inner.receive(:) + 244 (Multicast.swift:211)
21  Combine               0x000000019aed2828 protocol witness for Subscriber.receive(_:) in conformance Publishers.Multicast<A, B>.Inner + 24 (:0)
....
... (FBSScene.m:812)
46  FrontBoardServices    0x00000001aa892844 __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke_2 + 152 (FBSWorkspaceScenesClient.m:692)
47  FrontBoardServices    0x00000001aa8926cc -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 168 (FBSWorkspace.m:411)
48  FrontBoardServices    0x00000001aa8977fc __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke + 344 (FBSWorkspaceScenesClient.m:691)
49  libdispatch.dylib     0x00000001999aedd4 _dispatch_client_callout + 20 (object.m:576)
50  libdispatch.dylib     0x00000001999b286c _dispatch_block_invoke_direct + 288 (queue.c:511)
51  FrontBoardServices    0x00000001aa893d58 FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK + 52 (FBSSerialQueue.m:285)
52  FrontBoardServices    0x00000001aa893cd8 -[FBSMainRunLoopSerialQueue _targetQueue_performNextIfPossible] + 240 (FBSSerialQueue.m:309)
53  FrontBoardServices    0x00000001aa893bb0 -[FBSMainRunLoopSerialQueue performNextFromRunLoopSource] + 28 (FBSSerialQueue.m:322)
54  CoreFoundation        0x0000000191adb834 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28 (CFRunLoop.c:1957)
55  CoreFoundation        0x0000000191adb7c8 __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2001)
56  CoreFoundation        0x0000000191ad92f8 __CFRunLoopDoSources0 + 340 (CFRunLoop.c:2046)
57  CoreFoundation        0x0000000191ad8484 __CFRunLoopRun + 828 (CFRunLoop.c:2955)
58  CoreFoundation        0x0000000191ad7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
59  GraphicsServices      0x00000001d65251a8 GSEventRunModal + 164 (GSEvent.c:2196)
60  UIKitCore             0x0000000194111ae8 -[UIApplication run] + 888 (UIApplication.m:3713)
61  UIKitCore             0x00000001941c5d98 UIApplicationMain + 340 (UIApplication.m:5303)
62  SwiftUI               0x0000000195ccc294 closure #1 in KitRendererCommon(:) + 168 (UIKitApp.swift:51)
63  SwiftUI               0x0000000195c78860 runApp(:) + 152 (UIKitApp.swift:14)
64  SwiftUI               0x0000000195c8461c static App.main() + 132 (App.swift:114)
65  SoleFit               0x0000000103046cd4 static SoleFitApp.$main() + 24 (SoleFitApp.swift:0)
66  SoleFit               0x0000000103046cd4 main + 36
67  dyld                  0x00000001b52af154 start + 2356 (dyldMain.cpp:1298)
1
0
231
Aug ’24
AVCaptureMovieFileOutput DOES NOT support spatial video capture on iPhone 15 Pro running iOS 18.1?
Hi everyone, I am having trouble implementing spatial video recording to files by following the WWDC24 video "Build compelling spatial photo and video experiences". Specifically, the isSpatialVideoCaptureSupported flag of AVCaptureMovieFileOutput is FALSE when the code is tested on both my physical iPhone 15 Pro (iOS 18.1) and the simulator (iOS 18.0). This is the code I am running:

let movieFileOutput = AVCaptureMovieFileOutput()
print("movieCapture output isSpatialVideoCaptureSupported: \(movieFileOutput.isSpatialVideoCaptureSupported)")

However, one of the formats of the AVCaptureDevice does report TRUE for isSpatialVideoCaptureSupported:

for format in currentDevice.formats {
    if format.isSpatialVideoCaptureSupported {
        print("isSpatialVideoCaptureSupported is true")
        break
    }
}

I am totally confused: why does the camera device support spatial mode while the movie file output does not? Can someone please help? Really appreciate it!! Here is my testing environment: iPhone 15 Pro, iOS 18.1 (US version), Xcode 16.0 beta (16A5171c).
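For reference, a rough sketch of one thing worth trying, based on the behavior described above rather than on confirmed documentation: the flag on AVCaptureMovieFileOutput may only turn true once the output is attached to a session whose device input is using a spatial-capable format, so the configuration order matters. The choice of .builtInDualWideCamera here is an assumption.

import AVFoundation

func configureSpatialCapture() -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.beginConfiguration()

    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
          let spatialFormat = device.formats.first(where: { $0.isSpatialVideoCaptureSupported }),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else {
        return nil
    }
    session.addInput(input)

    // Select a spatial-capable format before querying the output's support flag.
    do {
        try device.lockForConfiguration()
        device.activeFormat = spatialFormat
        device.unlockForConfiguration()
    } catch {
        return nil
    }

    let movieOutput = AVCaptureMovieFileOutput()
    guard session.canAddOutput(movieOutput) else { return nil }
    session.addOutput(movieOutput)

    session.commitConfiguration()

    // With the session configured this way, the output's flag should reflect it.
    print("isSpatialVideoCaptureSupported: \(movieOutput.isSpatialVideoCaptureSupported)")
    if movieOutput.isSpatialVideoCaptureSupported {
        movieOutput.isSpatialVideoCaptureEnabled = true
    }
    return session
}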
1
0
282
Aug ’24
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
2
1
252
Aug ’24