Discuss Spatial Computing on Apple Platforms.


[App synchronization] I have a question about synchronizing Vision Pro app contents.
Hi! I'm creating an app like this: it uses Image Tracking to set a world anchor in the real world first, and the timeline in the Reality Composer Pro scene needs to play at the same time for all the people in the same place using the app. Everyone using the app should see the same content, in the same position, at the same time. I already have the Image Tracking feature working, but the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project. How do I solve this problem technically? If you have ideas, please let me know. I really need help with this.
Replies: 0 · Boosts: 0 · Views: 101 · Activity: 1d
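For the synchronization question above, a minimal sketch of what a shared session built on Group Activities could look like, assuming a hypothetical SharedSceneActivity type and an app-defined StartTimelineMessage. Whether Group Activities (SharePlay) or TabletopKit is the better fit depends on the app; this only illustrates the session/messenger plumbing, not the image-tracking alignment.

    import Foundation
    import GroupActivities

    // Hypothetical activity describing the shared scene session.
    struct SharedSceneActivity: GroupActivity {
        var metadata: GroupActivityMetadata {
            var metadata = GroupActivityMetadata()
            metadata.title = "Shared Scene"
            metadata.type = .generic
            return metadata
        }
    }

    // Hypothetical message telling every participant when to start the timeline.
    struct StartTimelineMessage: Codable {
        let startTime: Date
    }

    // Start sharing from the device that initiates the experience.
    func startSharing() async throws {
        _ = try await SharedSceneActivity().activate()
    }

    // Every participant (including the initiator) listens for sessions and messages.
    func observeSessions() async {
        for await session in SharedSceneActivity.sessions() {
            let messenger = GroupSessionMessenger(session: session)
            session.join()
            for await (message, _) in messenger.messages(of: StartTimelineMessage.self) {
                // Replace with the app's own call that plays the Reality Composer Pro
                // timeline, anchored to the image-tracked world anchor.
                print("Start timeline at \(message.startTime)")
            }
        }
    }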
'Segmentation fault: 11' error after upgrading to the latest macOS version
Today I updated my MacBook Pro to macOS Sequoia, and with it I also downloaded the latest Xcode and visionOS 2 packages. I had a working project that ran on my Vision Pro, which is updated to the latest visionOS 2. But now, whenever I click on Preview in Xcode while editing a Swift file, I receive the following error: (lots of lines here) Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda error: Tool terminated by signal 'Segmentation fault: 11'. I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this? Thank you!
Replies: 8 · Boosts: 0 · Views: 562 · Activity: Sep ’24
Maintain rotation sense while dragging
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel with cards: when I use the drag gesture and drag to the left, I want the carousel to rotate clockwise, and when I drag to the right, I want it to rotate counterclockwise. My problem is that when I rotate my body more than 90 degrees to the left or to the right, the drag gesture changes its value and the carousel rotates in the opposite direction. Do you know how I can maintain the correct direction, taking into account that the user can rotate their body? I've tried reading the user orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is a small range between 70 and 110 degrees where the carousel still rotates in the opposite direction. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
Replies: 0 · Boosts: 0 · Views: 89 · Activity: 2d
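For the carousel question above, one commonly suggested direction (a sketch, not the poster's code): convert the gesture's 3D start and current locations into the carousel parent's space before computing the horizontal delta, so the sign of the rotation no longer depends on how the user's body is oriented. The targetedToAnyEntity setup, the convert overloads used here, and the sensitivity factor are assumptions.

    import SwiftUI
    import RealityKit

    struct CarouselView: View {
        var body: some View {
            RealityView { content in
                // Load and add the carousel root entity here (omitted).
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        guard let parent = value.entity.parent else { return }
                        // Convert the gesture's start and current locations into the
                        // carousel parent's (world-anchored) space, so the horizontal
                        // delta keeps a stable sign no matter how the user's body is turned.
                        let start = value.convert(value.startLocation3D, from: .local, to: parent)
                        let current = value.convert(value.location3D, from: .local, to: parent)
                        let deltaX = current.x - start.x

                        // Drag left (negative X) -> clockwise, drag right -> counterclockwise.
                        // For brevity this overwrites the rotation; a real app would compose it
                        // with the rotation captured at gesture start.
                        value.entity.transform.rotation = simd_quatf(angle: -deltaX * 2.0,
                                                                     axis: [0, 1, 0])
                    }
            )
        }
    }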
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this function, my video player has no back action. We also did not find any method provided by the system like "addChildViewControllerAndView(form)". Referencing this document also did not work: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as I add these lines of code, there is no back button, only full screen and zoom out:

    let playerController = AVPlayerViewController()
    // Enable the multiview experience along with the default recommended set.
    playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
Replies: 6 · Boosts: 0 · Views: 202 · Activity: 1w
Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or higher I have some problems with my app. Sometimes, when I look at buttons or at the tab view (a SwiftUI component that many apps use), the hover effect doesn't trigger anymore; I can tap on an icon in the tab view, but it doesn't expand when I hover over it to see the button's description. It is weird because it doesn't happen all the time, but in visionOS 1.0 it doesn't happen at all. My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier that we created. The buttons are not tappable until I drag the navbar a little; this also never happened in visionOS 1.0 but always happens in visionOS 2.0+ (I've tested in the simulator with different versions). Is it possible that these problems are related to the hand-tracking service we use from ARKit? Sometimes, when we stop the trackers, the app works as intended.
Replies: 0 · Boosts: 0 · Views: 111 · Activity: 2d
Tracking Multiple Instances of the Same Object on the Apple Vision Pro
Hello everyone, I'm currently working through the example project Exploring object tracking with ARKit to learn how to use Object Tracking with visionOS. I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track one instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of one object? For example, I have a USDZ model of a box and trained that box for object tracking; I'm able to overlay a model of a chair over that box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that when I wear the Vision Pro I can see the chairs arranged however I want. I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by just training one object and having copies of it. If it helps, this is my current modification to overlay a virtual object on top of a detected object:

    func loadReferenceObjects() async -> [ReferenceObject] {
        // Get a list of all reference object files in the app's main bundle and attempt to load each.
        var referenceObjectFiles: [String] = []
        if let resourcesPath = Bundle.main.resourcePath {
            print("resource path: \(resourcesPath)")
            try? referenceObjectFiles = FileManager.default.contentsOfDirectory(atPath: resourcesPath).filter { $0.hasSuffix(".referenceobject") }
        }

        await withTaskGroup(of: Void.self) { group in
            for file in referenceObjectFiles {
                let objectURL = Bundle.main.bundleURL.appending(path: file)
                group.addTask {
                    // Get the file name without its extension.
                    let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                    // Load each reference object as a separate task.
                    await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
                }
            }
        }
        return self.referenceObjects
    }

    // Private helper method to load a single reference object and assign an entity to it.
    private func loadSingleReferenceObject(url: URL, fileName: String) async {
        var referenceObject: ReferenceObject
        do {
            print("Loading reference object from \(url)")
            // Load the file as a `ReferenceObject` - this can take a while for larger objects.
            referenceObject = try await ReferenceObject(from: url)
        } catch {
            fatalError("Failed to load reference object with error \(error)")
        }
        // Add the reference object to the reference objects array.
        self.referenceObjects.append(referenceObject)

        // Entity associated with this file.
        var model: Entity = Entity()

        // Choose an entity based on the file name.
        switch fileName {
        // The Box1 reference object binds to chair1.
        case "Box1":
            do {
                model = try await Entity(named: "chair1", in: realityKitContentBundle)
            } catch {
                print("Failed to load chair1")
            }
        // The Box2 reference object binds to chair2.
        case "Box2":
            do {
                model = try await Entity(named: "chair2", in: realityKitContentBundle)
            } catch {
                print("Failed to load chair2")
            }
        default:
            print("no model associated with this file name: \(fileName)")
        }

        // Map the entity to the reference object.
        usdzPerReferenceObject[referenceObject.id] = model
    }

Any help or suggestions would be greatly appreciated. Thank you.
Replies: 2 · Boosts: 0 · Views: 160 · Activity: 4d
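A sketch of the anchor-update side that would pair with the usdzPerReferenceObject dictionary above, assuming the reference-object and anchor IDs are UUIDs as in the sample project. It does not by itself answer the open question: whether ObjectTrackingProvider ever reports more than one simultaneous anchor for the same trained reference object.

    import Foundation
    import ARKit
    import RealityKit

    // Places a clone of the matching entity at each reported object anchor.
    func runObjectTracking(referenceObjects: [ReferenceObject],
                           usdzPerReferenceObject: [UUID: Entity],
                           rootEntity: Entity) async throws {
        let session = ARKitSession()
        let objectTracking = ObjectTrackingProvider(referenceObjects: referenceObjects)
        try await session.run([objectTracking])

        var entityPerAnchor: [UUID: Entity] = [:]

        for await update in objectTracking.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added:
                // Clone the template so each detected anchor gets its own instance.
                guard let template = usdzPerReferenceObject[anchor.referenceObject.id] else { continue }
                let entity = template.clone(recursive: true)
                entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                rootEntity.addChild(entity)
                entityPerAnchor[anchor.id] = entity
            case .updated:
                entityPerAnchor[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
            case .removed:
                entityPerAnchor[anchor.id]?.removeFromParent()
                entityPerAnchor[anchor.id] = nil
            @unknown default:
                break
            }
        }
    }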
How can I access the player’s camera vector in visionOS, specifically using RealityKit?
How can I access the player’s camera vector in visionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
Replies: 1 · Boosts: 0 · Views: 206 · Activity: 6d
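One approach often suggested for the question above (a sketch with a simplified session lifecycle, and asked again in the next post): query the device anchor through ARKit's WorldTrackingProvider and take the negated third column of its transform as the head's forward vector in world space.

    import ARKit
    import QuartzCore
    import simd

    final class HeadPoseTracker {
        private let session = ARKitSession()
        private let worldTracking = WorldTrackingProvider()

        func start() async throws {
            try await session.run([worldTracking])
        }

        /// The head's forward vector in world space, or nil if no pose is available yet.
        func forwardVector() -> SIMD3<Float>? {
            guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                return nil
            }
            let m = anchor.originFromAnchorTransform
            // Column 2 is the device's local +Z axis; the device looks along -Z.
            return simd_normalize(-SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z))
        }
    }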
Need to get the visionOS player's camera forward vector at runtime
How can I access the player’s camera vector in visionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
Replies: 1 · Boosts: 0 · Views: 180 · Activity: 6d
Can visionOS windows be smarter about following users?
I haven't found a way to programmatically position the main view in visionOS apps, which seems intentional. While this aligns with user-controlled window placement, it creates a challenge: as users move, they must constantly reposition the main window manually. A potential solution could be a feature that quickly brings the window to the user, perhaps via a custom gesture. This might improve user experience significantly. Given my current understanding of visionOS, I may be missing something. I'd appreciate any insights or alternative perspectives on this issue. Thoughts?
Replies: 2 · Boosts: 0 · Views: 269 · Activity: Oct ’24
How to use `unproject(_:from:to:ontoPlane:)` on the Content of a RealityView
When using ARView in RealityKit, I can write code like this to get the 3D position of where I tap on a plane:

    let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any)

In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) seems to offer the same functionality, but I don't know how to set the ontoPlane parameter. Can someone help me with some code snippets?
Replies: 1 · Boosts: 0 · Views: 166 · Activity: 1w
Game controller issues in visionOS
Hi, I'm building a windowed game in visionOS which requires a gamepad, and I'm using a PS5 controller for this case. However, I found a few problems:
1. When the user looks at the window's close button and presses X on the gamepad, the window is closed. This can happen accidentally while the user is playing.
2. When the PS button (the home button) is pressed, I want my app to handle it, e.g. take the user to the title screen, but visionOS always captures this input and opens the home screen (App Library).
How do I avoid these two issues? Or, in general, how do I make the gamepad input available only to my app and not to the visionOS system?
Replies: 1 · Boosts: 0 · Views: 188 · Activity: 1w
RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot set topEntity as a child of shapeEntity. From Blender I set the correct origin of topEntity, but when I import the USD model into Reality Composer Pro it does not save the origin point, and there is no way to set the origin in Reality Composer Pro.

    DragGesture()
        .targetedToEntity(where: .has(CustomComponent.self))
        .onChanged({ value in
            let rotation = -Float(value.translation.height)
            let clampedRotation = min(max(rotation, 0), 45)
            if value.entity.name == "grab" {
                if let topEntity = selectedEntity.findEntity(named: "top"),
                   let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                    topEntity.transform.rotation = simd_quatf(
                        angle: clampedRotation * .pi / 180,
                        axis: SIMD3(x: 0, y: 0, z: 1)
                    )
                }
            }
        })
Replies: 1 · Boosts: 0 · Views: 88 · Activity: 1w
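For the rotation-origin question above, a common workaround (a sketch; the entity names come from the post, and the pivot choice is an assumption): rotate the entity's position and orientation around an explicit pivot point instead of relying on the asset's origin.

    import RealityKit
    import simd

    /// Rotates an entity around an arbitrary pivot point.
    /// `pivot` must be expressed in the coordinate space of the entity's parent.
    func rotate(_ entity: Entity, around pivot: SIMD3<Float>, by rotation: simd_quatf) {
        let offset = entity.position - pivot
        entity.position = pivot + rotation.act(offset)      // move the position around the pivot
        entity.orientation = rotation * entity.orientation  // apply the same rotation to the orientation
    }

    // Possible usage inside the drag handler, assuming both entities share the same parent:
    // let pivot = shapeEntity.position(relativeTo: topEntity.parent)
    // rotate(topEntity, around: pivot, by: simd_quatf(angle: delta * .pi / 180, axis: [0, 0, 1]))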
Limited Window Manipulation in Mixed Immersive Style in visionOS
In visionOS versions 2.1 and 2.2, I'm encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, specifically in the mixed immersive style. Here's a breakdown of the behavior:

In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches.

However, in mixed immersive mode, using the exact same layout "inside" a 3D model (which doesn't visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion.

From a usability perspective, this restriction seems unnecessary, as the software should ideally allow for similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a house in which the user is placed during the immersion, which is why I refer to the user being "inside" the scene. Has anyone else experienced this or found a workaround?
Replies: 1 · Boosts: 0 · Views: 160 · Activity: 1w
How does rendering to a higher resolution RenderTarget and then downsampling to a Drawable cause image distortion?
Rendering the scene onto a RenderTarget with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted. The modifications below were made on the Xcode visionOS template.

Foveation should be enabled by default:

    struct ContentStageConfiguration: CompositorLayerConfiguration {
        func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
            configuration.depthFormat = .depth32Float
            configuration.colorFormat = .bgra8Unorm_srgb

            let foveationEnabled = capabilities.supportsFoveation
            configuration.isFoveationEnabled = foveationEnabled

            let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
            let supportedLayouts = capabilities.supportedLayouts(options: options)
            configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
        }
    }

To avoid errors, rasterizationRateMap is not set:

    var renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
    renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
    renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
    renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.storeAction = .store
    renderPassDescriptor.depthAttachment.clearDepth = 0.0
    //renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
    if layerRenderer.configuration.layout == .layered {
        renderPassDescriptor.renderTargetArrayLength = drawable.views.count
    }

The rendering process is as follows:
Replies: 2 · Boosts: 0 · Views: 326 · Activity: Apr ’24
Regarding real-time object tracking and real-time image recognition
We use real-time object tracking, and with enterprise permissions we can improve the update rate to 30 Hz, but there are still noticeable delays. On one hand, we want to know why this delay occurs; is it due to performance considerations? We found that the delay in hand tracking is actually very low. On the other hand, we suspect it may be due to the complexity of the 3D objects, so I considered using image tracking instead. However, we found even more serious delays in image tracking and QR code tracking, and we hope to optimize this. Currently, the frame rate for recognizing images for tracking seems to be about one frame per second, and we hope to increase it, because object recognition and tracking can be very smooth on other Apple platforms, such as iOS. Additionally, could an interface for depth sensing be provided to obtain depth data? We want to know what accuracy the Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering on screen, and whether this is related to hardware such as the LiDAR scanner. Also, what accuracy can we achieve in tracking the movement distance of objects?
Replies: 1 · Boosts: 0 · Views: 166 · Activity: 2w
Parent is changed during gesture interaction, producing incorrect relative translation values
I'm having trouble re-setting the position of a child entity during app reload, even though it appears that I am correctly obtaining and persisting the correct translation values after a drag gesture. The problem occurs when I drag a child element to a new location (persisting those new values), then reload the app to force re-positioning from the persisted translation values. I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem. My thought process: since we care about relative translation values when persisting, if the parent relationship is changed just before persistence, are we persisting and setting the wrong values?

Project Link: Private

STEPS TO REPRODUCE
1. Run the app.
2. Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
3. Tap the button in the timeline to create a new project.
4. Drag the only visible element from the left panel onto the timeline (the element is labeled f_works_entity_1). There should now be a green 3D model added to the stage.
5. Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
6. Re-run the app to see that the green element is offset to a new location, not the last dragged location.
To reset and try again, delete the project canvas next to the project name (trash button), then restart the app.

Areas of concern: RealityKitView is the only file you may need. Line 119 is where we create new child entities. Lines 185-219 are where we persist and apply persisted values. You can also search FIXME in the file to see areas of concern. Tip: I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent, including IDs.
Replies: 1 · Boosts: 0 · Views: 242 · Activity: 1w
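A sketch of one way to make the persisted values independent of whatever parent RealityKit assigns during interaction: capture and restore the pose relative to a stable reference entity (a hypothetical stageRoot here) using RealityKit's relativeTo APIs, rather than reading the entity's transform against its current parent.

    import RealityKit
    import simd

    /// Capture an entity's pose relative to a stable reference entity (e.g. a stage root),
    /// so the persisted values do not depend on whichever parent the entity has at save time.
    func capturePose(of entity: Entity, relativeTo reference: Entity) -> (position: SIMD3<Float>, rotation: simd_quatf) {
        (entity.position(relativeTo: reference), entity.orientation(relativeTo: reference))
    }

    /// Re-apply a persisted pose after reload, again relative to the same reference entity.
    func applyPose(_ pose: (position: SIMD3<Float>, rotation: simd_quatf),
                   to entity: Entity,
                   relativeTo reference: Entity) {
        entity.setPosition(pose.position, relativeTo: reference)
        entity.setOrientation(pose.rotation, relativeTo: reference)
    }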
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But in circumstances where there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
Replies: 2 · Boosts: 0 · Views: 185 · Activity: 1w
Creating and Viewing Immersive Video Locally on Vision Pro
We would like to create an Immersive video and store the video file locally on the Vision Pro for viewing. By Immersive video, I mean the kind of video that is played at the end of the Vision Pro experience at the Apple Store (LeBron's dunk, Curry's 3-point shot, tightrope walk, etc.). It is unclear whether a way is currently provided to view Immersive video locally. I can find some information about Spatial video on the developer site, but I can't find any information about Immersive video. My understanding is:

Spatial video: A video window appears in space and plays video with depth. Up to 4K side-by-side video can be converted to MV-HEVC format using Xcode and played back in the Photos app.

Immersive video: 180° VR video, but I'm not sure how it was created. Similar to Spatial video, I converted a side-by-side 180° VR video to MV-HEVC format using Xcode, but it could not be played back in the Photos app as expected.

The Vision Pro's Photos app features an Immersive button during video playback, but this appears to zoom Spatial video to the full field of view, which seems different from Immersive video. The demo video provided by Apple is streamed from Apple TV, and there are no local files available. We are currently considering creating an app that displays different videos to each eye, but we prefer not to go this route due to licensing and distribution issues.
Replies: 4 · Boosts: 0 · Views: 430 · Activity: Sep ’24