Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.


Presenting immersive content in a UIKit app
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS. I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit. This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?

```
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
```

And in UIKit, how do I make the call to open the ImmersiveView?
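There is no direct UIKit call shown in the question, so here is one hedged workaround sketch rather than an official answer: because ImmersiveSpace is a SwiftUI Scene, one option is to make the visionOS target's entry point a thin SwiftUI App that hosts the existing UIKit hierarchy through UIViewControllerRepresentable and declares the ImmersiveSpace next to it. UIKitRootView and OpenSpaceButton are made-up names, and ImmersiveView is the view from the snippet above.

```
import SwiftUI
import UIKit

// Hosts the app's existing UIKit content inside SwiftUI.
struct UIKitRootView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> UIViewController {
        // Return your real UIKit root view controller here (placeholder below).
        UIViewController()
    }
    func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}
}

@main
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            UIKitRootView()
        }
        .windowStyle(.volumetric)

        // ImmersiveView is the RealityKit view from the snippet above.
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}

// The space is still opened from SwiftUI, e.g. from a small button hosted anywhere in the UI.
struct OpenSpaceButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task { _ = await openImmersiveSpace(id: "ImmersiveSpace") }
        }
    }
}
```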
5 replies · 1 boost · 1.6k views · Jul ’23
How to display stereo images in Apple Vision Pro?
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about stereo 3D images. I guess my question can be asked in a more general way: is there any method to render different content for each eye? This could also be helpful to someone who only has sight in one eye.
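One hedged sketch of a possible approach, assuming a Reality Composer Pro material ("/Root/StereoMaterial" in a "Materials.usda", both placeholder names) whose shader graph routes two texture inputs (here called "LeftImage" and "RightImage", also placeholders) through a Camera Index Switch node so each eye samples a different image; the texture asset names are placeholders too.

```
import RealityKit
import RealityKitContent
import SwiftUI

struct StereoImageView: View {
    var body: some View {
        RealityView { content in
            // Load a Reality Composer Pro material whose graph uses a Camera Index Switch
            // node to pick a different texture per eye.
            guard var material = try? await ShaderGraphMaterial(
                named: "/Root/StereoMaterial",
                from: "Materials.usda",
                in: realityKitContentBundle
            ) else { return }

            // Feed the left/right images into the material's texture parameters.
            if let left = try? TextureResource.load(named: "left_eye"),
               let right = try? TextureResource.load(named: "right_eye") {
                try? material.setParameter(name: "LeftImage", value: .textureResource(left))
                try? material.setParameter(name: "RightImage", value: .textureResource(right))
            }

            // A simple quad to display the stereo pair on.
            let plane = ModelEntity(mesh: .generatePlane(width: 1.0, height: 0.75),
                                    materials: [material])
            plane.position = [0, 1.5, -2]
            content.add(plane)
        }
    }
}
```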
8 replies · 0 boosts · 3.2k views · Jul ’23
Why does this entity appear behind spatial tap collision location?
I am trying to place a world anchor where the user taps a detected plane. How am I trying this? First, I add an entity to a RealityView like so:

```
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))

let interactionEntity = Entity()
interactionEntity.name = "PLANE"

let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())

anchor.addChild(interactionEntity)
content.add(anchor)
```

This:

- Declares an anchor that requires a 2 m by 2 m wall to appear in the scene, with continuous tracking
- Makes an empty entity and gives it a 2 m by 2 m by 2 cm collision box
- Attaches the collision entity to the anchor
- Finally adds the anchor to the scene

It appears in the scene and looks great: it seems to sit right on the wall. I then add a tap gesture recognizer like this:

```
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }

        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))

        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03),
                                materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
```

I ASSUME this:

- Makes a world position from where the tap hits the collision entity
- Combines that position with the collision plane's rotation to create a Pose3D
- Makes a world anchor from that pose (so it can be persisted in a world tracking provider)
- Then makes a basic cube entity and gives it that transform

The weird part: the cube doesn't appear on the plane; it appears behind it. Why? What have I done wrong? The X and Y of the tap location appear spot on, but something is off about the Z position. Also, is there a recommended way to debug this with the available tools? I'm guessing I'll have to file a DTS request about this, because feedback on the forum has been pretty low since the labs started.
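One small debugging sketch, not an answer to the Z question: addDebugMarker is a made-up helper meant to be called from the onEnded closure above with the converted worldPosition and the stored RealityView content. If the marker sits on the wall but the cube does not, the offset is being introduced in the Pose3D/WorldAnchor step rather than in the conversion itself.

```
import RealityKit
import SwiftUI

// Debug aid only: drop a small marker at the raw converted tap point so it can be
// compared visually with the cube that goes through Pose3D / WorldAnchor.
func addDebugMarker(at worldPosition: SIMD3<Float>, in content: RealityViewContent) {
    let marker = ModelEntity(mesh: .generateSphere(radius: 0.01),
                             materials: [SimpleMaterial(color: .red, isMetallic: false)])
    marker.position = worldPosition
    content.add(marker)
}
```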
2 replies · 0 boosts · 1.1k views · Aug ’23
Is ARKit's 3D object detection reliable enough to use?
Hi, I'm doing research on AR using a real-world object as the anchor. For this I'm using ARKit's ability to scan and detect a 3D object. Here's what I found so far after scanning and testing object detection on many objects:

- It works best on 8-10" objects (basically objects you can place on your desk).
- It works best on objects with many visual features/details (which makes sense, just like plane detection).

Although things seem to work well and this is exactly the behavior I need, I noticed issues with detection in these cases:

- Different lighting setups, meaning just the direction of the light. I always try to maintain bright room light, but testing in the morning versus the evening sometimes, if not most of the time, makes detection harder or makes it fail.
- Different environments, meaning simply moving the object from one place to another makes detection fail or become much slower. This isn't about the scanning process; this is purely anchor detection from the same .arobject file on the same real-world object.

These two difficulties make me wonder if scanning and detecting 3D objects will ever be reliable enough for real-world use. For example, you might want to ship an AR app that contains the manual for your product, where the app detects the product and points out the location and explanation of its features. Has anyone tried this before? Does your research show the same behavior as mine? Does using LiDAR help with scanning and detection accuracy? So far there doesn't seem to be much information on what ARKit actually does when scanning and detecting; if anyone knows more, I'd like to learn how to make a better scan. Any help or information you are willing to share will be really appreciated. Thanks
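For reference, a minimal sketch of the detection setup being discussed; "ScannedObjects" is a placeholder for the AR Resource Group name in the asset catalog that holds the .arobject files.

```
import ARKit

// Load the scanned reference objects and run world tracking with object detection enabled.
func runObjectDetection(on session: ARSession) {
    guard let referenceObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "ScannedObjects",
        bundle: nil
    ) else {
        return // missing AR resource group
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = referenceObjects
    session.run(configuration)
}

// Detected objects arrive as ARObjectAnchor instances in session(_:didAdd:).
```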
5 replies · 0 boosts · 1.5k views · Sep ’23
Adding custom Simulated Scenes to Apple Vision Pro Simulator
Greetings, I've been playing around with some of the first documented examples and apps. After a while, I wanted to move to a custom space in the simulator. I scanned through the options in the Simulator and in Xcode but couldn't find how to add custom Simulated Scenes (e.g. my own room) to the Simulator. Could someone point out how to get this done? Even if it has to be done programmatically, some pointers would be welcome. Thanks.
5 replies · 1 boost · 1.2k views · Oct ’23
Texture not applying on RoomPlan wall object (capture data)
We are attempting to update the texture on a node. The code below works correctly when we use a color, but it encounters issues when we attempt to use an image. The image is available in the bundle, and it displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please assist us with this issue.

```
for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform

    let imageMaterial = SCNMaterial()
    node.geometry?.materials = [imageMaterial]
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown

    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}
```
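A hedged sketch of one thing worth trying, assuming the image itself loads: set wrapping and a contents transform on the diffuse slot, since a stretched or degenerate texture mapping can make an image look missing while a solid color still shows. The "wood_texture" asset name and the tiling factor are placeholders.

```
import SceneKit
import UIKit

// Apply an image texture (instead of a color) to a cloned node, with tiling.
func applyImageTexture(to node: SCNNode, imageName: String = "wood_texture") {
    let imageMaterial = SCNMaterial()
    imageMaterial.diffuse.contents = UIImage(named: imageName)
    // Repeat the texture instead of stretching a single copy across the whole surface.
    imageMaterial.diffuse.wrapS = .repeat
    imageMaterial.diffuse.wrapT = .repeat
    imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(4, 4, 1)
    imageMaterial.isDoubleSided = true
    node.geometry?.materials = [imageMaterial]
}
```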
1 reply · 0 boosts · 565 views · Oct ’23
Question about Checkpoint Directory
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote something like:

```
let checkpointDirectoryPath = "/path/to/the/Snapshots/"
let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = checkpointDirectoryURL
} else {
    // Fallback on earlier versions
}
```

But I didn't notice any speed or performance improvements. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is used during the calculations.
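For comparison, a minimal sketch of wiring the exported folders into a PhotogrammetrySession on macOS 14. Both paths are placeholders, and the assumption here is that the checkpoints are only useful when they come from the same capture session as the input images.

```
import Foundation
import RealityKit

@available(macOS 14.0, *)
func makeSession() throws -> PhotogrammetrySession {
    var configuration = PhotogrammetrySession.Configuration()
    // Point the session at the exported "Snapshots" folder from the device capture.
    configuration.checkpointDirectory = URL(fileURLWithPath: "/path/to/the/Snapshots/",
                                            isDirectory: true)

    // The input folder should be the matching "Photos" folder from the same capture.
    let imagesURL = URL(fileURLWithPath: "/path/to/the/Photos/", isDirectory: true)
    return try PhotogrammetrySession(input: imagesURL, configuration: configuration)
}
```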
0 replies · 0 boosts · 452 views · Oct ’23
RoomPlan wall group objects: applying textures
I am seeking assistance with a challenge I am facing in a 3D model rendering project. The problem involves difficulties in displaying textures on both parent and child nodes within the 3D model. Here are the key details:

- The model contains wall_grp objects (doors, windows, and walls).
- We are using RoomPlan data in an SCNView; the code depends on the SceneKit and RoomPlan APIs.
- When we comment out the child-node code it works, but then we don't have windows and doors on the walls.

```
func updateWallObjects() {
    if arch_grp.count > 0 {
        if !arch_grp.isEmpty {
            for obj in arch_grp[0].childNodes {
                let color = UIColor.init(red: 255/255, green: 229/255, blue: 204/255, alpha: 1.0)
                let parentNode = obj.flattenedClone()
                for childObj in obj.childNodes {
                    let childNode = childObj.flattenedClone()
                    let childMaterial = SCNMaterial()
                    childNode.geometry?.materials = [childMaterial]
                    if let name = childObj.name {
                        if (removeNumbers(from: name) != "Wall") {
                            childNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
                        } else {
                            childNode.geometry?.firstMaterial?.diffuse.contents = color
                        }
                    }
                    childObj.removeFromParentNode()
                    parentNode.addChildNode(childObj)
                }
                let material = SCNMaterial()
                parentNode.geometry?.materials = [material]
                parentNode.geometry?.firstMaterial?.diffuse.contents = color
                obj.removeFromParentNode()
                arch_grp[0].addChildNode(parentNode)
            }
        }
    }
}
```

Please suggest what we can do.
0 replies · 0 boosts · 571 views · Oct ’23
Can I "experience" my AR project from Reality Composer Pro on an iPad, instead of the Vision Pro?
Hi all, I don't have a Vision Pro (yet), and I'm wondering if it is possible to preview my Reality Composer Pro project in AR using an iPad Pro or the latest iPhones? I'm also interested in teaching others: I'm a college professor, and I don't believe my students have Vision Pros either. I could always use the iOS version, as I have done in the past, but the Pro version is much more capable and it would be great to be able to use it. Thanks for any comments on this!
2 replies · 0 boosts · 485 views · Oct ’23
Face Anchor in Reality Composer: Enabling Ball Movement Based on Head Tilts
Using the face anchor feature in Reality Composer, I'm exploring the potential for generating content movement based on facial expressions and head movement. In my current project, I've positioned a horizontal wood plane on the user's face, and I've added some dynamic physics-enabled balls on the wood surface. While I've successfully anchored the wood plane to the user's head movements, I'm facing a challenge with the balls. I'm aiming to have these balls respond to the user's head tilts, effectively rolling in the direction of the head movement. For instance, a tilt to the right should trigger the balls to roll right, and likewise for leftward tilts. However, my attempts thus far have not yielded the expected results, as the balls seem to be unresponsive to the user's head movements. The wood plane, on the other hand, follows the head's motion seamlessly. I'd greatly appreciate any insights, guidance, or possible solutions you may have regarding this matter. Are there specific settings or techniques I should be implementing to enable the balls to respond to the user's head movement as desired? Thank you in advance for your assistance.
0 replies · 0 boosts · 680 views · Oct ’23
ARKit: entire scene is drifting away
I'm using ARKit with Metal in my iOS app, without ARSCNView. Most of the time it works fine. However, sometimes the whole scene drifts away (especially in the first few seconds), then it sort of comes back, but far from the original place. I don't see this effect in other model viewers, Unity-based or WebAR-based; surely they use ARKit under the hood (not all, but most of them do). Here is a snippet of my code:

```
private let session = ARSession()

func initialize() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.maximumNumberOfTrackedImages = 10
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.automaticImageScaleEstimationEnabled = true
    configuration.isLightEstimationEnabled = true
    session.delegate = self
    session.run(configuration)
}

func getCameraViewMat(...) -> simd_float4x4 {
    //...
    return session.currentFrame.camera.viewMatrix(for: .portrait)
}

func createAnchor(...) -> Int {
    // ...
    anchors[id] = ARAnchor(transform: mat)
    session.add(anchor: anchors[id]!)
    return id
}

func getAnchorTransform(...) -> simd_float4x4 {
    //...
    return anchors[id]!.transform
}

func onUpdate(...) {
    // draw session.currentFrame.rawFeaturePoints!.points
    // draw all ARPlaneAnchor
}

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    print("TRACKING STATE CHANGED: \(camera.trackingState)")
}
```

I can see it's not just an anchor problem: everything moves, including the point cloud. The tracking state is normal and changes to limited only after the drift has occurred, which is too late. What can I do to prevent the drift?
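A small instrumentation sketch, assuming a separate delegate object fits the Metal setup; it observes the problem rather than fixing it. Two signals worth checking are the per-frame worldMappingStatus during the first seconds and the session's relocalization behavior after interruptions.

```
import ARKit

// Debug aid: log per-frame mapping/tracking signals so the drift can be correlated
// with what ARKit reports, and opt out of relocalization jumps after interruptions.
final class DriftMonitor: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // worldMappingStatus is often .notAvailable or .limited early on, which is when
        // the scene is most likely to shift as ARKit refines its map.
        print("mapping: \(frame.worldMappingStatus), tracking: \(frame.camera.trackingState)")
    }

    func sessionShouldAttemptRelocalization(_ session: ARSession) -> Bool {
        // Returning false restarts tracking after an interruption instead of relocalizing,
        // which can otherwise look like content snapping to a far-off place.
        false
    }
}
```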
0 replies · 0 boosts · 355 views · Oct ’23