Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Trouble initializing SharePlay and using GroupSessionJournal
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection. I am also having trouble sharing images over SharePlay with GroupSessionJournal: I am unable to transfer any amount of data, or even get the other participants in the SharePlay to recognize that an image is being sent. We have looked at all the information we can find online and are not able to establish a connection. I am not sure if I am missing a step or if I am sending the data through the GroupSessionJournal incorrectly. Here are the steps I take to reproduce the issue:

1. FaceTime another person who has the app.
2. Open the app and click the SharePlay button to SharePlay it with the other person.
3. Establish the SharePlay by making sure that the board states are synchronized across participants. If they aren't, click the close button and open the app again to rejoin the SharePlay. (This is the first bug I would like to fix; this is just a workaround we developed to establish the SharePlay. We would like it to work as soon as the other person joins the session.)
4. Once the SharePlay has been established, change an image by clicking "change 1 image" and selecting a JPG. The image that represents 1 should be "not set"; if you don't see the image, click any of the X's in the squares and it will change to the image.
5. The image should appear for the other participant in the SharePlay. (This does not happen, and it is what we have not been able to figure out.)

Here are the classes for the example project I created: Content View, Game Model class, Activity Manager, main starter class.
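A minimal sketch of the journal flow for comparison, assuming a hedged reading of the GroupActivities API: the journal should be created and observed before session.join() is called, so no attachments are missed, and the payload type (ImageAttachment and MyActivity are hypothetical names here) must be Transferable.

```swift
import GroupActivities
import UniformTypeIdentifiers

// Hypothetical payload type; any Transferable works with GroupSessionJournal.
struct ImageAttachment: Transferable {
    let data: Data
    static var transferRepresentation: some TransferRepresentation {
        DataRepresentation(contentType: .jpeg,
                           exporting: { $0.data },
                           importing: { ImageAttachment(data: $0) })
    }
}

func configure(session: GroupSession<MyActivity>) {  // MyActivity: your GroupActivity
    let journal = GroupSessionJournal(session: session)

    // Observe attachments *before* joining, so nothing is missed.
    Task {
        for await attachments in journal.attachments {
            for attachment in attachments {
                // load(_:) is assumed here; verify the exact loader in your SDK.
                if let image = try? await attachment.load(ImageAttachment.self) {
                    // Update the local board state with image.data.
                }
            }
        }
    }

    session.join()
}

// Sending: upload once; the journal replicates it to all participants.
func send(_ data: Data, via journal: GroupSessionJournal) async throws {
    _ = try await journal.add(ImageAttachment(data: data))
}
```

Joining only after all observers (messengers and the journal) are set up is also a plausible explanation for the leave-and-rejoin workaround being needed.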
Replies: 0 · Boosts: 0 · Views: 99 · Activity: 2d
Palm Menu Button Issue
Hi, we have an immersive space in our app, and we thought the palm menu button was not available in immersive spaces, but when I look at my hand and tap, the menu button appears. Is it possible to keep it hidden? We have a hand-tracking feature on the palm, and when the user tries to press a button that overlaps the palm, it triggers the menu button instead; when the user then presses again by mistake, it sends the application to the background. This is very important for us because we would like to release this hand-tracking feature as soon as possible. Here is a link to a video of the problem: https://drive.google.com/file/d/1cfOcdzF19h_mbmpvkVNCJjXEBJecVeJL/view?usp=sharing
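For reference, SwiftUI has a modifier for requesting that persistent system overlays stay hidden; whether it suppresses the palm menu button inside an immersive space is not documented, so treat this as an experiment rather than a fix (HandTrackingView is a hypothetical name):

```swift
// Experiment: ask the system to hide persistent overlays while the immersive
// content is up. This expresses a preference, not a guarantee, and it is not
// confirmed to affect the palm menu button.
ImmersiveSpace(id: "HandTracking") {
    HandTrackingView()
        .persistentSystemOverlays(.hidden)
}
```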
Replies: 1 · Boosts: 0 · Views: 104 · Activity: 2d
Collisions are not detected if the entity is a child of a hand AnchorEntity
I have created an AnchorEntity for my index fingertip and then created a model entity (a sphere) as a child of it. This model entity has a collision component and a physics body component; I tried both dynamic and kinematic modes for the physics body. I created a plane from a cube that has a collision component and a static physics body. I subscribed to CollisionEvents.Began on this plane and stored the subscription in an EventSubscription state variable:

```swift
@State private var collisionSubscription: EventSubscription?
```

Then I subscribed as follows:

```swift
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self,
                                          on: self.boxTopCollision) { collisionEvent in
    print("something collided with the box top")
}
```

The collision event fires when I put the sphere directly above the plane and let gravity cause the collision, but when the sphere is a child of the anchor entity, the collision events don't happen. I tried adding the collision and physics body components directly to the anchor entity, and that doesn't work either. I also created another sphere with a physics body, a collision component, and an input target component, and manipulated it with a drag gesture: while the drag is in progress and the sphere touches the plane, no events fire, but when the gesture ends with the sphere in contact with the plane, the event fires. I am confused as to why this is happening. All I want is a collider on my fingertip that detects collisions with this plane. How can I make this work? Is there some unstated rule that a physics body which is moved manually cannot trigger collision events? For more context: I am using SpatialTrackingSession with the .hand tracking configuration, and I am successfully able to track the fingertip.
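One thing that may be worth ruling out (an assumption based on RealityKit's anchoring behavior, not a confirmed fix): an AnchorEntity can run its own isolated physics simulation, so its children never interact with bodies outside it. On visionOS 2 the anchoring component exposes a switch for this:

```swift
// Sketch: opt the hand anchor's children into the surrounding physics
// simulation instead of an isolated one (visionOS 2 API; verify in your SDK).
let fingerTip = AnchorEntity(.hand(.right, location: .indexFingerTip))
fingerTip.anchoring.physicsSimulation = .none

let sphere = ModelEntity(mesh: .generateSphere(radius: 0.005))
sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
sphere.components.set(PhysicsBodyComponent(massProperties: .default,
                                           material: nil,
                                           mode: .kinematic))
fingerTip.addChild(sphere)
```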
Replies: 3 · Boosts: 0 · Views: 151 · Activity: 5d
Keyboard awareness during a custom immersive environment of an app
Hello, I was wondering whether the keyboard-awareness feature that came with visionOS 2 also works for the MacBook keyboard when someone is in an immersive .progressive custom environment, such as the "Garden" environment from "Construct an immersive environment for visionOS", in an app I'm currently developing, so that one can see one's keyboard. I haven't managed to achieve it so far. Thank you very much in advance!
Replies: 0 · Boosts: 0 · Views: 59 · Activity: 2d
'Segmentation fault: 11' error after upgrading to the latest macOS version
Today I updated my MacBook Pro to macOS Sequoia, and with it I also downloaded the latest Xcode and visionOS 2 packages. I had a working project that ran on my Vision Pro, which is updated to the latest visionOS 2. But now, whenever I click on Preview in Xcode while editing a Swift file, I receive the following error:

```
(lots of lines here)
Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda
error: Tool terminated by signal 'Segmentation fault: 11'
```

I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this? Thank you!
Replies: 0 · Boosts: 0 · Views: 75 · Activity: 2d
Physics
Given my limited knowledge of physics, I would appreciate insights from individuals with a solid understanding of the subject. I have added a physics component to an entity in Reality Composer Pro, but I am seeking guidance on how to achieve the following:

- Make an object float in the air (with a slight downward drift reminiscent of the moon's surface)
- Enable the object to move at a slow pace
- Implement a strong rebound force

I would be grateful if you could provide appropriate values for these parameters. Thank you for your assistance.
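As a starting point, here is a hedged sketch with plausible values; the exact numbers are a matter of taste, and the gravity override relies on visionOS 2's PhysicsSimulationComponent (an assumption if you target visionOS 1). The names rootEntity and object stand in for your own entities.

```swift
import RealityKit

// Moon-like gravity for the simulation that contains the entity.
var simulation = PhysicsSimulationComponent()
simulation.gravity = [0, -1.62, 0]           // lunar surface gravity in m/s²
rootEntity.components.set(simulation)        // rootEntity: an ancestor of your object

// Slow, floaty motion with a strong rebound on the object itself.
var body = PhysicsBodyComponent(
    massProperties: .default,
    material: .generate(friction: 0.3, restitution: 0.9),  // restitution near 1 = strong bounce
    mode: .dynamic
)
body.linearDamping = 2.0                     // bleeds off velocity so motion stays slow
object.components.set(body)
```

Lowering gravity handles the floating, restitution controls the rebound, and linear damping keeps speeds down; tune each independently.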
Replies: 1 · Boosts: 0 · Views: 133 · Activity: 3d
visionOS App Not Receiving Latest Heart Rate Data from HealthKit
Hello everyone, I'm developing an app for visionOS that utilizes HealthKit to query heart rate data. However, I'm encountering an issue where the app doesn't retrieve the latest heart rate values. Specifically, it fails to get live heart rate data even after the data has been saved to the Health app; the readings my app displays are outdated and do not match the current values shown in the Health app.

Here's what I've tried so far:

- Fetching heart rate samples: used HKSampleQuery and HKAnchoredObjectQuery to fetch the most recent heart rate samples. Despite this, the data retrieved is still not up to date.
- Checking permissions: ensured that all necessary HealthKit permissions are granted. The app has authorization to read heart rate data and write workout data.

My questions are:

- Is there a known issue or limitation with HealthKit on visionOS that prevents apps from accessing the latest heart rate data?
- Are there additional steps or configurations required to access live heart rate data in visionOS apps?
- Has anyone successfully implemented live heart rate monitoring on visionOS, and if so, could you share how you achieved it?
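For what it's worth, a long-running HKAnchoredObjectQuery only keeps delivering new samples if its updateHandler is set before execution; a minimal sketch is below. Note also, as an assumption about the platform rather than a documented limit, that heart-rate samples on visionOS typically come from a paired Apple Watch and may only sync periodically, which could explain stale readings.

```swift
import HealthKit

let healthStore = HKHealthStore()
let heartRate = HKQuantityType(.heartRate)

let query = HKAnchoredObjectQuery(type: heartRate,
                                  predicate: nil,
                                  anchor: nil,
                                  limit: HKObjectQueryNoLimit) { _, samples, _, _, _ in
    // Initial batch of historical samples arrives here once.
}

// Without this handler the query returns once and never delivers live updates.
query.updateHandler = { _, samples, _, _, _ in
    let bpm = HKUnit.count().unitDivided(by: .minute())
    for sample in samples as? [HKQuantitySample] ?? [] {
        print("Heart rate:", sample.quantity.doubleValue(for: bpm))
    }
}
healthStore.execute(query)
```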
Replies: 0 · Boosts: 0 · Views: 96 · Activity: 4d
visionOS: defaultWindowPlacement unavailable?
I'm getting the following error in my Swift build targeting visionOS 2.0: "'defaultDisplay' is unavailable in visionOS". TL;DR: how do I specify an initial window position in visionOS? The docs seem to be off; see below.

The docs say it is available, but it is not, or at least my Xcode (Version 16.0) is throwing errors on it: https://developer.apple.com/documentation/swiftui/scene/defaultwindowplacement(_:)

I know Apple is opinionated about window placement in visionOS, and maybe it will never be available, but the docs say it is in visionOS 2.0+, and it sure would be nice to be able to specify a default position toward the bottom of one's FOV, etc. Side note: the example code in that doc also has the issue that "Window" is not available in visionOS (WindowGroup is).

Example code, barely modified from the doc:

```swift
var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        let displayBounds = context.defaultDisplay.visibleRect
        let size = content.sizeThatFits(.unspecified)
        let position = CGPoint(
            x: displayBounds.midX - (size.width / 2),
            y: displayBounds.maxY - size.height - 140)
        return WindowPlacement(position)
    }
}
```
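The display geometry on the context (defaultDisplay) is indeed macOS-only; on visionOS the closure can return a placement without display coordinates. A sketch using the visionOS 2 semantic position that places a window low in the field of view (treat the exact enum case as an assumption to verify against your SDK):

```swift
WindowGroup("MyLilWindow", id: "MyLilWindow") {
    TestView()
}
.windowResizability(.contentSize)
.defaultWindowPlacement { content, context in
    // No display bounds on visionOS; use a semantic position instead.
    WindowPlacement(.utilityPanel)
}
```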
Replies: 1 · Boosts: 0 · Views: 138 · Activity: 6d
Inserted image not showing up on tab bar on visionOS
Images are not appearing in the tab bar on visionOS, although they show up perfectly on iOS. I tried the rendering-mode API to make the original image visible, and it works fine on iOS, but on visionOS the image stays white, as if masked by the tab bar's default content color. Has anyone managed to solve this problem? I might be able to create a custom ornament to make it look like a tab bar, but I think that's too much coding.
Replies: 2 · Boosts: 0 · Views: 128 · Activity: 1w
Vision Pro app crashes when scene loads in ImmersiveSpace
Hello, I am getting the following errors in the console and my app crashes: the view goes dark, the Apple logo appears, and then the app dies.

```
apply fence tx failed (client=0x61dbbfd7) [0xfffffecc (ipc/mig) server died]
[C:3] Error received: Connection interrupted.
Failed to commit transaction (client=0x94097449) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xe9684b50) [0x10000003 (ipc/send) invalid destination port]
[C:3-1] Error received: Connection interrupted.
Failed to commit transaction (client=0xbcac17e9) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x52392119) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xff841d17) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xdef5c915) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xefdc8bf3) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xd50c1eff) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x15690a46) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xf296f56b) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM"
(the line above repeats nine times)
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
Failed to set override status for bind point component member.
(the line above repeats three times)
Message from debugger: Terminated due to signal 9
```

Could you please tell me the reason and how I can resolve it? After I load the scene two or three times, the app works fine from that point onwards, but this happens from time to time when debugging.
Replies: 0 · Boosts: 0 · Views: 112 · Activity: 5d
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
Hello, I keep running into the warning below when pushing a window of type volumetric. Although pushing the window succeeds, we always get the warning, regardless of whether we push the window via the Attachment button or via the buttons in the ToolbarItemGroup. Illustrated is all the code: app file, first volume, and second volume. You can see in my app file that all volumetric windows are indeed in a WindowGroup. What is wrong? How can I get rid of this warning?

Warning: PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
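For reference, a minimal push between two plain WindowGroup scenes, which is the configuration the warning asks for; if your setup already matches this, the warning may relate to the scene the push originates from rather than the target (an assumption, since the full scene list isn't shown; the id is hypothetical):

```swift
import SwiftUI

struct FirstVolume: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Show second volume") {
            pushWindow(id: "secondVolume")  // must name a WindowGroup scene
        }
    }
}
```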
Replies: 2 · Boosts: 0 · Views: 131 · Activity: 1w
Failed to find and load USDZ from RealityKitContentBundle
I'm developing a visionOS app and I'm trying to load a ModelEntity from a USDZ file that lives inside my custom RealityKit package, R2UVisionOficial, but it keeps giving me a resourceNotFound error.

```swift
import RealityKit
import R2UVisionOficial
import ARKit

/* more code */

do {
    let newEntity: Entity
    //...
    // Loads entity from USDZ inside package
    newEntity = try await ModelEntity(named: "Salas", in: r2UVisionOficialBundle)
    //...
    return newEntity
} catch {
    print("wtManager >>> **** FAILED to load entity:", error.localizedDescription)
    throw error
}
```

I'm sure I have the Salas.usdz file in the root folder of my package and that I'm using the correct paths. However, I keep getting the error:

Failed to find resource with name "Salas" in bundle

Interestingly, when I load USDA scenes from the same package, it works fine, so I suspect it has something to do with ModelEntity or USDZ files. Can you please help me? P.S. This issue is similar to https://developer.apple.com/forums/thread/746842?answerId=780415022#780415022
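Two things that may be worth verifying (assumptions, not confirmed causes): resources generally need to live inside the package's .rkassets folder rather than at the package root to be compiled into the bundle, and the plain Entity initializer sometimes finds resources the ModelEntity one does not:

```swift
// Hypothetical layout; the .rkassets folder name must match your package target:
// Sources/R2UVisionOficial/R2UVisionOficial.rkassets/Salas.usdz
let salas = try await Entity(named: "Salas", in: r2UVisionOficialBundle)
```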
Replies: 1 · Boosts: 0 · Views: 134 · Activity: 1w
Cannot find the entitlement of the Enterprise API for Vision Pro
We are developing a visionOS app and have applied for the Enterprise APIs for visionOS, including Main Camera Access for Vision Pro. We have already received the "Enterprise.license" file in the mail Apple sent us, and we used our developer account to import the license file into Xcode. But in Xcode we cannot find the entitlement for the Enterprise API. If we put com.apple.developer.arkit.main-camera-access.allow into the project's entitlements file manually, Xcode raises an error. We also find that the app itself doesn't have the "Additional Capabilities" section that should include the Enterprise APIs. What should we do to get the entitlement file for the Enterprise API so we can use it?
Replies: 5 · Boosts: 1 · Views: 148 · Activity: 6d
Downgrade to visionOS 1.3 from ipsw file
I'm trying to downgrade my Vision Pro to visionOS 1.3. I downloaded the visionOS 1.3 IPSW file from the Apple Developer site (on September 25, 2024), but I'm unable to restore the device using this file. After checking ipsw.me, I noticed that visionOS 1.3 is no longer signed. This makes me wonder if the 1.3 IPSW file, although available on the developer site, might not be usable anymore. Has anyone else encountered this issue? Is there any official confirmation on whether visionOS 1.3 can still be restored?
Replies: 3 · Boosts: 0 · Views: 159 · Activity: 1w
Potential bug in Anchor updates on visionOS using the ARKit C API
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates. I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant). I register a callback using ar_world_tracking_provider_set_anchor_update_handler_f, and when updates arrive I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f.

Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position, and anchor updates arrive. In most cases the anchor updates return the new transform (via ar_world_anchor_get_origin_from_anchor_transform), but sometimes I get an anchor update that reports the anchor's transform from before the world origin was repositioned, meaning that instead of staying in place in the physical world, the world anchor moves relative to me. I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f, which provides the correct transform, but this async method also adds noticeable delay to the anchor updates. Is this already a known issue?
Replies: 0 · Boosts: 0 · Views: 107 · Activity: 6d
Gesture filtering using .targetedToEntity(where: QueryPredicate<Entity>) is not working
I am trying to apply a drag gesture only to entities that have a specific component. My entities have this component, along with InputTargetComponent and CollisionComponent. The gesture works when I use the .targetedToAnyEntity() modifier, but the .targetedToEntity(where:) modifier fails:

```swift
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(ToyComponent.self))
                .onChanged { value in
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}
```

What could be wrong here?
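Two checks that may help (assumptions, not confirmed fixes): custom components generally need to be registered before they can match queries, and the predicate is evaluated against the entity that actually receives the input, so ToyComponent must sit on the same entity that carries InputTargetComponent and CollisionComponent, not on a parent:

```swift
import SwiftUI
import RealityKit

@main
struct MyApp: App {   // hypothetical app entry point
    init() {
        // Register the custom component once, before any query references it.
        ToyComponent.registerComponent()
    }

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") { ImmersiveView() }
    }
}
```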
Replies: 5 · Boosts: 1 · Views: 216 · Activity: 1w
Attachment layer hidden behind visionOS app window in immersive space
Hello, it is not clear to me why my attachment, no matter how I position it, is always hidden/covered by my visionOS app window. I'm trying to display the attachment one layer above/in front of the window. When my head isn't directed toward the window I can see the attachment, but otherwise it's covered by the window. I appreciate any help!

ContentView.swift

```swift
import SwiftUI
import RealityKit

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    public var body: some View {
        VStack {
            Text("Hello World")
                .font(.largeTitle)
            Button("Start") {
                Task {
                    await openImmersiveSpace(id: "AppSpace")
                }
            }
        }
    }
}
```

ImmersiveView.swift

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var loader: EnvironmentLoader

    public var body: some View {
        RealityView { content, attachments in
            content.add(try! await loader.getEntity())

            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let text = attachments.entity(for: "at01") {
                text.position = [0, 0, -0.25]
                headEntity.addChild(text)
            }
        } attachments: {
            Attachment(id: "at01") {
                Text("Hello World!")
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}
```

App.swift

```swift
import SwiftUI

@main
private struct App: App {
    @State var loader = EnvironmentLoader()

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView(loader: loader)
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
```
Replies: 3 · Boosts: 0 · Views: 153 · Activity: 1w
Creating and Viewing Immersive Video Locally on Vision Pro
We would like to create an Immersive video and store the video file locally on Vision Pro for viewing. By Immersive video, I mean the kind of video that is played at the end of the Vision Pro demo at the Apple Store (LeBron's dunk, Curry's 3-point shot, the tightrope walk, etc.). It is unclear whether a way is currently provided to view Immersive video locally. I can find some information about Spatial video on the developer site, but I can't find any information about Immersive video. My understanding is:

- Spatial video: a video window appears in space and plays video with depth. Up to 4K side-by-side video can be converted to MV-HEVC format using Xcode and played back in the Photos app.
- Immersive video: 180° VR video, but I'm not sure how it is created. As with Spatial video, I converted a side-by-side 180° VR video to MV-HEVC format using Xcode, but it could not be played back in the Photos app as expected.

Vision Pro's Photos app features an Immersive button during video playback, but this appears to zoom a Spatial video to the full field of view, which seems different from Immersive video. The demo video provided by Apple is streamed from Apple TV, and there are no local files available. We are currently considering creating an app that displays different videos to each eye, but we prefer not to go this route due to licensing and distribution issues.
Replies: 3 · Boosts: 0 · Views: 150 · Activity: 1w
What does setWorldOrigin() do?
I stumbled across the function setWorldOrigin(relativeTransform:) on ARSession, which is documented here: https://developer.apple.com/documentation/arkit/arsession/2942278-setworldorigin

I made a custom ARSession where I override this function to print and modify the relativeTransform parameter. The print shows that this function is called with an updated relativeTransform value, but it seems to have no impact, e.g., on the world origin when starting or continuing a scan, on the tiny puppet house in RoomPlan, or on any tracking position I get from ARKit. Does anybody have experience with this method, or know which parts are influenced by setWorldOrigin()?
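For context, the documented use is to shift ARKit's world coordinate system by a relative transform that you pass in, as in the sketch below; overriding the method in a subclass and mutating the parameter would not be expected to change tracking, since the framework calls it to apply the value it computed, not to ask for yours (session stands in for an existing ARSession).

```swift
import ARKit

// Shift the world origin 1 m forward along -Z of the current origin.
var relative = matrix_identity_float4x4
relative.columns.3.z = -1.0
session.setWorldOrigin(relativeTransform: relative)
```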
Replies: 0 · Boosts: 0 · Views: 108 · Activity: 1w