Discuss Spatial Computing on Apple Platforms.


Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
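For reference, this is roughly how the asset is loaded (a simplified sketch; "ProblemAsset" below is a placeholder for the actual USDZ attached to the feedback):

```swift
import SwiftUI
import RealityKit

// Simplified loading sketch; "ProblemAsset" stands in for the actual USDZ.
struct AssetView: View {
    var body: some View {
        Model3D(named: "ProblemAsset") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}
```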
Replies: 2 · Boosts: 1 · Views: 248 · Activity: Aug ’24
Object Capture API crashes frequently when starting model generation
I have updated the sample code so that the scan starts generating the model once 15 photos are captured. I hope I can catch this error so the app won't crash. I really need help on this, and thank you in advance!

```
Hardware Model: iPhone14,2
OS Version: iPhone OS 17.6.1 (21G93)
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000023363518c
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [525]
Triggered by Thread: 0

Thread 0 name:
Thread 0 Crashed:
0 RealityKit_SwiftUI 0x000000023363518c CoveragePointCloudMiniView.interfaceOrientation.getter + 508 (CoveragePointCloudMiniView.swift:0)
1 RealityKit_SwiftUI 0x0000000233634cdc closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 124 (CoveragePointCloudMiniView.swift:75)
2 RealityKit_SwiftUI 0x000000023363db9c partial apply for closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 20 (:0)
3 SwiftUI 0x0000000195c4bbac closure #1 in withTransaction(::) + 276 (Transaction.swift:243)
4 SwiftUI 0x0000000195c4ba90 partial apply for closure #1 in withTransaction(::) + 24 (:0)
5 libswiftCore.dylib 0x00000001903f8094 withExtendedLifetime<A, B>(::) + 28 (LifetimeManager.swift:27)
6 SwiftUI 0x0000000195b17d78 withTransaction(::) + 72 (Transaction.swift:228)
7 SwiftUI 0x0000000195b17d04 withAnimation(::) + 116 (Transaction.swift:280)
8 RealityKit_SwiftUI 0x0000000233634bfc closure #2 in CoveragePointCloudMiniView.body.getter + 664 (CoveragePointCloudMiniView.swift:73)
9 SwiftUI 0x0000000195bef134 closure #1 in closure #1 in SubscriptionView.Subscriber.updateValue() + 72 (SubscriptionView.swift:66)
10 SwiftUI 0x0000000195b3f57c thunk for @escaping @callee_guaranteed () -> () + 28 (:0)
11 SwiftUI 0x0000000195b3c864 static Update.dispatchActions() + 1140 (Update.swift:151)
12 SwiftUI 0x0000000195b3bedc static Update.end() + 144 (Update.swift:58)
13 SwiftUI 0x0000000195a691fc closure #1 in SubscriptionView.Subscriber.updateValue() + 700 (SubscriptionView.swift:66)
14 SwiftUI 0x0000000195a68eb0 partial apply for thunk for @escaping @callee_guaranteed (@in_guaranteed A.Publisher.Output) -> () + 28 (:0)
15 SwiftUI 0x0000000195a68e78 closure #1 in ActionDispatcherSubscriber.respond(to:) + 76 (SubscriptionView.swift:98)
16 SwiftUI 0x0000000195a68c80 ActionDispatcherSubscriber.respond(to:) + 816 (SubscriptionView.swift:97)
17 SwiftUI 0x0000000195a68938 ActionDispatcherSubscriber.receive(:) + 16 (SubscriptionView.swift:110)
18 SwiftUI 0x0000000195a6786c SubscriptionLifetime.Connection.receive(:) + 100 (SubscriptionLifetime.swift:195)
19 Combine 0x000000019aed29d4 Publishers.Autoconnect.Inner.receive(:) + 52 (Autoconnect.swift:142)
20 Combine 0x000000019aed2928 Publishers.Multicast.Inner.receive(:) + 244 (Multicast.swift:211)
21 Combine 0x000000019aed2828 protocol witness for Subscriber.receive(_:) in conformance Publishers.Multicast<A, B>.Inner + 24 (:0)
....
(FBSScene.m:812)
46 FrontBoardServices 0x00000001aa892844 __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke_2 + 152 (FBSWorkspaceScenesClient.m:692)
47 FrontBoardServices 0x00000001aa8926cc -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 168 (FBSWorkspace.m:411)
48 FrontBoardServices 0x00000001aa8977fc __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke + 344 (FBSWorkspaceScenesClient.m:691)
49 libdispatch.dylib 0x00000001999aedd4 _dispatch_client_callout + 20 (object.m:576)
50 libdispatch.dylib 0x00000001999b286c _dispatch_block_invoke_direct + 288 (queue.c:511)
51 FrontBoardServices 0x00000001aa893d58 FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK + 52 (FBSSerialQueue.m:285)
52 FrontBoardServices 0x00000001aa893cd8 -[FBSMainRunLoopSerialQueue _targetQueue_performNextIfPossible] + 240 (FBSSerialQueue.m:309)
53 FrontBoardServices 0x00000001aa893bb0 -[FBSMainRunLoopSerialQueue performNextFromRunLoopSource] + 28 (FBSSerialQueue.m:322)
54 CoreFoundation 0x0000000191adb834 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28 (CFRunLoop.c:1957)
55 CoreFoundation 0x0000000191adb7c8 __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2001)
56 CoreFoundation 0x0000000191ad92f8 __CFRunLoopDoSources0 + 340 (CFRunLoop.c:2046)
57 CoreFoundation 0x0000000191ad8484 __CFRunLoopRun + 828 (CFRunLoop.c:2955)
58 CoreFoundation 0x0000000191ad7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
59 GraphicsServices 0x00000001d65251a8 GSEventRunModal + 164 (GSEvent.c:2196)
60 UIKitCore 0x0000000194111ae8 -[UIApplication run] + 888 (UIApplication.m:3713)
61 UIKitCore 0x00000001941c5d98 UIApplicationMain + 340 (UIApplication.m:5303)
62 SwiftUI 0x0000000195ccc294 closure #1 in KitRendererCommon(:) + 168 (UIKitApp.swift:51)
63 SwiftUI 0x0000000195c78860 runApp(:) + 152 (UIKitApp.swift:14)
64 SwiftUI 0x0000000195c8461c static App.main() + 132 (App.swift:114)
65 SoleFit 0x0000000103046cd4 static SoleFitApp.$main() + 24 (SoleFitApp.swift:0)
66 SoleFit 0x0000000103046cd4 main + 36
67 dyld 0x00000001b52af154 start + 2356 (dyldMain.cpp:1298)
```
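For context, this is roughly the kind of reconstruction call involved (a simplified sketch, not the actual sample code; the URLs and the 15-photo threshold are placeholders). As far as I can tell, a do/catch here cannot catch the EXC_BREAKPOINT above, which fires inside RealityKit_SwiftUI:

```swift
import RealityKit

// Sketch: start model generation once enough shots exist.
// `imagesFolder` and `outputURL` are placeholders for the app's real locations.
func startReconstruction(imagesFolder: URL, outputURL: URL, capturedShotCount: Int) {
    guard capturedShotCount >= 15 else { return } // placeholder threshold from the post

    do {
        var configuration = PhotogrammetrySession.Configuration()
        configuration.featureSensitivity = .normal

        let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
        try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])

        Task {
            for try await output in session.outputs {
                switch output {
                case .processingComplete:
                    print("Reconstruction finished")
                case .requestError(let request, let error):
                    print("Request \(request) failed: \(error)") // catchable failures surface here
                default:
                    break
                }
            }
        }
    } catch {
        // Creation/processing errors are catchable here; the SIGTRAP in the crash log is not.
        print("Could not start reconstruction: \(error)")
    }
}
```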
Replies: 1 · Boosts: 0 · Views: 226 · Activity: Aug ’24
Not all attachments displaying on visionOS 2.0 beta 5
I'm dynamically creating anywhere from 10 to 50 attachments in an immersive view by looping through an array. With 10-20 attachments there's no problem: all of them appear fine. With more than ~40 attachments, about 25% of them never show up. The ones that don't show up are random and change each time the immersive view is loaded. I never had a problem in visionOS 1, so I'm wondering whether this is a bug or what's going on here. There is nothing in the console that would indicate a problem. Thanks
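The setup is essentially the pattern below, just simplified (the IDs, positions, and item array are placeholders):

```swift
import SwiftUI
import RealityKit

// Simplified sketch of dynamically generated attachments; `items` is a placeholder array.
struct ImmersiveAttachmentsView: View {
    let items = (0..<50).map { "item-\($0)" }

    var body: some View {
        RealityView { content, attachments in
            for (index, id) in items.enumerated() {
                if let entity = attachments.entity(for: id) {
                    // Spread the attachments out in front of the viewer.
                    entity.position = [Float(index % 10) * 0.2 - 1.0,
                                       1.0 + Float(index / 10) * 0.2,
                                       -1.5]
                    content.add(entity)
                }
            }
        } attachments: {
            ForEach(items, id: \.self) { id in
                Attachment(id: id) {
                    Text(id)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
    }
}
```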
Replies: 1 · Boosts: 0 · Views: 297 · Activity: Aug ’24
Building a custom render pipeline with RealityKit
Hello experts and question seekers,

I have been trying to get Gaussian splats working with RealityKit, but so far it has not worked out for me. The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter

My idea was to use the renderer provided by RealityKit (RealityRenderer, https://developer.apple.com/documentation/realitykit/realityrenderer) together with the renderer provided by MetalSplatter (SplatRenderer, https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift). With a custom render pipeline I would then compose the outputs of the two renderers. That would make it possible, for example, to build immersive scenery from realistic environment scans rendered as Gaussian splats, with RealityKit providing the features to build extra scenery around the splats, e.g. dynamic 3D models placed inside them.

However, I am not able to do that with the current implementation of RealityRenderer. First, RealityRenderer appears to be an API that only renders color information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil. Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender, due to the following error messages:

```
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37
```

I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, which lets me build a custom rendering pipeline using some of the Metal developer workflows (reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/). Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:

```swift
guard let currentDrawable = view.currentDrawable else { return }

let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    }
)
```

Can you please tell me what I am doing wrong? Is there any solution that enables me to use RealityKit together with, for example, Gaussian splats?

Any help is greatly appreciated.

All the best,
Ethem Kurt
Replies: 0 · Boosts: 1 · Views: 216 · Activity: Aug ’24
Tracked object coordinates in program
Hey, as a follow-up to my earlier posts about object tracking on visionOS 2: I'm doing some experimentation, and my use case requires me to track the coordinates of a digital entity that I attach to my reference object (positioned relative to it). Can something like this be done?

Right now, all I'm doing is placing my reference object in my scene and positioning the 3D content I want to show at the corresponding locations on the reference object. I then load the scene in a RealityView block from my SwiftUI code. What I want to know is whether I can also extract and use the coordinates of the digital entity I have placed (post object tracking) and then do some manipulation in code. For example, if the physical coordinates of the digital entity fall within a certain x, y, z range, trigger a function or bring up an alert message in a tile.

Is something like this possible, and if so, can you help me understand the different aspects of this problem with some sample/reference code? So far I've done most of the object-tracking-related tasks in Reality Composer Pro, but the task I'm trying to implement will require quite a bit of programming as well, and I'm a bit lost as to how to start and go about it. Thanks for any help you can give me!
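To make the goal concrete, here's a rough sketch of the kind of check I'm after (the entity name, bounds, and alert flag are placeholders; I don't know if this is the right way to read the position):

```swift
import RealityKit
import SwiftUI

// Rough sketch: read the world-space position of the tracked entity and
// flip a flag when it enters a region. The name and bounds are placeholders.
@MainActor
func checkTrackedEntity(in sceneRoot: Entity, showAlert: Binding<Bool>) {
    guard let arrow = sceneRoot.findEntity(named: "ArrowEntity") else { return }

    // relativeTo: nil gives the entity's position in world space.
    let worldPosition = arrow.position(relativeTo: nil)

    let xRange: ClosedRange<Float> = -0.5...0.5
    let yRange: ClosedRange<Float> = 0.8...1.5
    let zRange: ClosedRange<Float> = -2.0...(-0.5)

    if xRange.contains(worldPosition.x),
       yRange.contains(worldPosition.y),
       zRange.contains(worldPosition.z) {
        showAlert.wrappedValue = true   // e.g. drive an alert or a tile in SwiftUI
    }
}
```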
Replies: 1 · Boosts: 0 · Views: 248 · Activity: Aug ’24
How to convert a Point3D value obtained by a SpatialTapGesture to an Entity coordinate?
I have a visionOS app that displays a ModelEntity in a RealityView. This entity can be tapped via a SpatialTapGesture, and the gesture calls

```swift
.onEnded { event in
    let point3D = event.location3D
    // …
}
```

I am unable to convert point3D to the local coordinates of the entity. I wrote a little test project to investigate the situation (below). The RealityView shows a box; I can tap the visible faces, but I get point3D values that don't make much sense to me. So, the main question is: how can I get the coordinates of the point tapped on the shown entity? For the SpatialTapGesture I tried all three options for the coordinateSpace (.local, .global, and .immersive) without success. Here is my code:

```swift
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let mesh = MeshResource.generateBox(width: 1, height: 0.5, depth: 0.25, splitFaces: true)
            var frontMaterial = UnlitMaterial()
            frontMaterial.color.tint = .green
            var topMaterial = UnlitMaterial()
            topMaterial.color.tint = .red
            let boxEntity = ModelEntity(mesh: mesh, materials: [frontMaterial, topMaterial])
            boxEntity.components.set(InputTargetComponent(allowedInputTypes: .all))
            boxEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [ShapeResource.generateConvex(from: mesh)])
            boxEntity.transform.translation = [0, 0, -3]
            content.add(boxEntity)
        }
        .gesture(tapGesture)
    }

    var tapGesture: some Gesture {
        SpatialTapGesture(coordinateSpace: .local)
            .targetedToAnyEntity()
            .onEnded { event in
                let point3D = event.location3D
                print(point3D)
            }
    }
}
```
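From the documentation it looks like the targeted gesture value has coordinate-conversion helpers. Is something like this the intended way (an untested sketch on my part)?

```swift
var tapGesture: some Gesture {
    SpatialTapGesture(coordinateSpace: .local)
        .targetedToAnyEntity()
        .onEnded { event in
            // Convert the SwiftUI Point3D into the tapped entity's own space (SIMD3<Float>, meters).
            let pointInEntity = event.convert(event.location3D, from: .local, to: event.entity)
            print(pointInEntity)

            // Or convert into RealityKit scene space instead:
            let pointInScene = event.convert(event.location3D, from: .local, to: .scene)
            print(pointInScene)
        }
}
```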
Replies: 3 · Boosts: 0 · Views: 272 · Activity: Aug ’24
Get Window(Group) position / Track Window? (visionOS)
Hi guys, I'm currently working on a head-tracking application for visionOS and was wondering if there are any properties or ways to access the position of the app's window in an immersive space. I was planning to somehow determine whether the window is within the AVP's orientation (through queryDeviceAnchor()) or "visible space". Or is there a way to access a property or data that tells me whether the app window is within the user's view, for example if the user turns around and the window ends up behind their back? I would be extremely thankful for any helpful input!

```swift
import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup { // Basically getting the spatial coordinates of this
            ContentView()
        }

        ImmersiveSpace(id: "appSpace") {
        }
    }
}
```
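For the head-pose part, this is roughly what I'm planning via queryDeviceAnchor (a simplified sketch; authorization and error handling omitted), but as far as I can tell it only gives me the device transform, not the window's:

```swift
import ARKit
import QuartzCore
import simd

// Simplified sketch: query the device (head) pose with a WorldTrackingProvider.
final class HeadPoseReader {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    func currentDeviceTransform() -> simd_float4x4? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        // Head pose in immersive-space (world) coordinates.
        return anchor.originFromAnchorTransform
    }
}
```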
Replies: 3 · Boosts: 0 · Views: 369 · Activity: Aug ’24
Entity Coordinates in Object Tracking
A second post on the same topic, as I feel I may have overcomplicated the earlier one. Essentially, I am performing object tracking inside Reality Composer Pro and adding a digital entity to the tracked object. I now want to get the coordinates of this digital entity inside Xcode. Secondly, can I track more than one object inside the same scene? For example, if I want to find a spanner and a screwdriver among a bunch of tools laid out on a table, spawn an arrow on top of the spanner and the screwdriver, and then get the coordinates of the arrows that I spawn, how can I go about this?
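For the multi-object part, is this roughly the right direction (a sketch on my side; the .referenceobject file names are placeholders)?

```swift
import ARKit
import RealityKit

// Sketch: track two reference objects in one session and read their world transforms.
func trackTools() async throws {
    guard let spannerURL = Bundle.main.url(forResource: "Spanner", withExtension: "referenceobject"),
          let screwdriverURL = Bundle.main.url(forResource: "Screwdriver", withExtension: "referenceobject") else {
        return
    }

    let spanner = try await ReferenceObject(from: spannerURL)
    let screwdriver = try await ReferenceObject(from: screwdriverURL)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [spanner, screwdriver])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // World transform of the tracked object; an arrow entity could be placed
        // from this, and its coordinates read back with position(relativeTo: nil).
        let transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        print(update.anchor.id, transform.translation)
    }
}
```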
Replies: 3 · Boosts: 0 · Views: 342 · Activity: Aug ’24
Button in the attachment not clickable after adding BillboardComponent
I created some attachments by following the Diorama Apple sample. Things have been working fine. I then wanted to add a BillboardComponent to my attachments, so I added it this way:

```swift
guard let attachmentEntity = attachments.entity(for: component.attachmentTag) else { return }
guard attachmentEntity.parent == nil else { return }

var billBoard = BillboardComponent()
billBoard.rotationAxis = [0, 1, 0]
attachmentEntity.components.set(billBoard)
content.add(attachmentEntity)
attachmentEntity.setPosition([0.0, 0.5, 0.0], relativeTo: entity)
```

My attachment view is like this:

```swift
Text(name)
    .matchedGeometryEffect(id: "Name", in: animation)
    .font(titleFont)
Text(description)
    .font(descriptionFont)
Button("Done") {
    viewModel.arrows.remove(at: 0)
}
```

If I remove the BillboardComponent, the button click works fine. But with the BillboardComponent, the button click doesn't work (it doesn't even highlight when I look at it) from certain directions. How can I resolve this issue?
Replies: 1 · Boosts: 0 · Views: 240 · Activity: Aug ’24
Vision Pro system audio volume is very low after visionOS 2.0 beta 5 update
I updated my Vision Pro to the visionOS 2.0 beta yesterday, and now everything is very quiet even at max volume. I tested with the built-in speakers, Beats Pro, and AirPods Pro (2nd generation) as well, and the problem is the same with all of them. If I turn the volume down to 50%, you can't tell what audio is being played anymore. I tried restarting the headset and it makes no difference. Is there anything else I can try to resolve this issue?
Replies: 0 · Boosts: 1 · Views: 330 · Activity: Aug ’24
Apple Vision Pro stuck at waiting MDM configuration (2.0 beta 5)
Hello all! I received my Apple Vision Pro today. The device is in ABM and assigned to Jamf Pro with a separate PreStage. Out of the box, it did not pick up the configuration (visionOS 1.3). I enabled beta releases, and it installed 2.0 beta 5. At reboot, it regenerated the Persona and is now stuck at "waiting for configuration" (from the MDM, I guess). I cannot reset it. Even with the Developer Strap, Apple Configurator is not able to restore the IPSW (it was not paired yet). Any ideas? Any secret DFU mode?
Replies: 1 · Boosts: 0 · Views: 345 · Activity: Aug ’24
BOT-anist Vision Pro demo not working
I am trying out the BOT-anist demo, compiled for Vision Pro. When you enter the Start Planting module, the app quits with a fatal error in this section of RobotCharacter.swift:

```swift
guard var headOffset = headOffset ?? skeleton.pins["head"]?.position,
      var backpackOffset = backpackOffset ?? skeleton.pins["backpack"]?.position else {
    fatalError("Didn't find expected joint for head or backpack.")
}
```

Thread 1: Fatal error: Didn't find expected joint for head or backpack.

How can I fix this? Thanks for any suggestions.
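Would replacing the guard with a fallback like this be reasonable while I investigate (a sketch on my part; it assumes the pin positions are SIMD3<Float> and that zero offsets are acceptable when the pins are missing), or is there a proper fix?

```swift
import simd

// Sketch: resolve a pin offset with a logged fallback instead of trapping.
// Assumes the pin positions are SIMD3<Float>, as the guard above suggests.
func resolvedOffset(explicit: SIMD3<Float>?, pin: SIMD3<Float>?, label: String) -> SIMD3<Float> {
    if let offset = explicit ?? pin {
        return offset
    }
    print("Warning: expected \(label) pin missing; falling back to a zero offset.")
    return .zero
}

// Usage in place of the failing guard:
// let headOffset = resolvedOffset(explicit: headOffset, pin: skeleton.pins["head"]?.position, label: "head")
// let backpackOffset = resolvedOffset(explicit: backpackOffset, pin: skeleton.pins["backpack"]?.position, label: "backpack")
```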
Replies: 1 · Boosts: 4 · Views: 268 · Activity: Aug ’24
MTKView is now available on visionOS but isn't working on visionOS 1.x
Hello! I noticed that after WWDC 24, support for MTKView was added on visionOS 1.0+. This is great! But when I use an MTKView on anything before visionOS 2.0, it doesn't work and the app ends up crashing. Console error when running on a device on visionOS 1.2:

```
Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC
Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI
```

It looks like MTKView may be using Compositor Services under the hood? Any help would be great. Thank you!
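Would gating the Metal-backed path behind an availability check like this help, or does the symbol get resolved eagerly at launch regardless? (A sketch on my side; the fallback content is a placeholder.)

```swift
import SwiftUI
import MetalKit

// Sketch: a minimal MTKView wrapper, only used on visionOS 2.0 or later.
struct MetalContentView: UIViewRepresentable {
    func makeUIView(context: Context) -> MTKView {
        let view = MTKView()
        view.device = MTLCreateSystemDefaultDevice()
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {}
}

struct RendererContainerView: View {
    var body: some View {
        if #available(visionOS 2.0, *) {
            MetalContentView()
        } else {
            // Placeholder fallback for visionOS 1.x.
            Text("Metal rendering requires visionOS 2.")
        }
    }
}
```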
Replies: 3 · Boosts: 2 · Views: 468 · Activity: Jul ’24
How to add gestures to objects inside other objects
I have a scene with multiple RealityKit entities. There is a blue cube that I want to rotate along with all of its children (it's partly transparent). Inside the cube are a number of child entities (red) that I want to tap. The cube and the red objects all have collision components, as required for gestures to work. But I can't both rotate the blue cube and tap the red objects, because the blue cube's collision component intercepts the taps. Is there a way of accomplishing what I want? I'm targeting visionOS 2, and my scene is in a volume.
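One idea I'm unsure about (a sketch; entity names are placeholders): toggle the cube's InputTargetComponent off while the red objects should be tappable, so the cube stops intercepting hits. Is there a better pattern for this?

```swift
import RealityKit

// Sketch: switch between "rotate the cube" and "tap the contents" modes
// by enabling/disabling the parent's input target.
@MainActor
func setTapThroughMode(_ tapThrough: Bool, cube: Entity) {
    if var inputTarget = cube.components[InputTargetComponent.self] {
        inputTarget.isEnabled = !tapThrough   // disabled -> the cube no longer receives gestures
        cube.components.set(inputTarget)
    }
}

// Usage idea in SwiftUI (placeholders):
// .gesture(RotateGesture3D().targetedToEntity(cubeEntity).onChanged { value in /* rotate the cube */ })
// .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
//     if value.entity.name.hasPrefix("red") { /* handle tap on a red child */ }
// })
```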
Replies: 1 · Boosts: 0 · Views: 338 · Activity: Jul ’24
Integrating Apple Watch Health Data with Vision Pro
Hi everyone, I have a question regarding the integration of Apple Watch and Vision Pro. Is it possible to connect an Apple Watch to Vision Pro to access health data and display it within Vision Pro applications? If so, could you provide some guidance or point me towards relevant resources or APIs that would help in achieving this? Thank you in advance for your assistance!
Replies: 4 · Boosts: 0 · Views: 381 · Activity: Aug ’24
visionOS 2 full immersive space permission change?
Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented?

In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission. openImmersiveSpace would return an error code if the user opted not to grant permission, and it was important to handle this case correctly. The Settings > Developer menu also had an option to reset the user's immersive space permission prompting state, so developers could test this interaction flow.

In visionOS 2, I no longer see the full immersive space permission alert. I can't remember if I saw it once, the first time the visionOS 2.0 beta was installed, or if I never saw it at all. The Settings > Developer menu no longer has an option to reset the permission prompting state, and I can't find any way to test the interaction flow in my app to make sure it will work correctly for users.

Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere. If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly? Thanks for taking the time to answer this question.
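For reference, this is roughly how I handled it on visionOS 1 (the space identifier is a placeholder):

```swift
import SwiftUI

struct EnterImmersionButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter") {
            Task {
                switch await openImmersiveSpace(id: "FullImmersion") {
                case .opened:
                    break                       // the space is showing
                case .userCancelled:
                    print("User declined the full immersive space permission prompt.")
                case .error:
                    print("The immersive space could not be opened.")
                @unknown default:
                    break
                }
            }
        }
    }
}
```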
Replies: 3 · Boosts: 0 · Views: 507 · Activity: Jul ’24