Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.


Issue with losing alignment on multi-room session
Hello! We're having an issue in our app, which implements multi-room scanning via RoomPlan: the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room). To clarify a few points:

We are using RoomCaptureView, starting a new room with roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false).

We are re-using the same ARSession, which is passed into the RoomCaptureView like so:

arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)

Any clue why the AR world origin is reset? I need it to be consistent for storing per-frame camera positions. Thanks!
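A minimal sketch of one possible workaround, assuming the shared-session setup described above: place an ARAnchor as a fixed reference before scanning starts and store camera poses relative to that anchor, so a shifted world origin no longer affects the stored positions. The MultiRoomTracker type and its method names are illustrative, not part of RoomPlan.

import ARKit
import simd

final class MultiRoomTracker {
    let arSession: ARSession
    private var referenceAnchor: ARAnchor?

    init(arSession: ARSession) {
        self.arSession = arSession
    }

    // Call once, e.g. when the first room scan starts.
    func placeReferenceAnchor() {
        guard let frame = arSession.currentFrame else { return }
        let anchor = ARAnchor(name: "multiRoomReference", transform: frame.camera.transform)
        arSession.add(anchor: anchor)
        referenceAnchor = anchor
    }

    // Store this instead of the raw camera transform; it is expressed relative
    // to the reference anchor rather than the (possibly shifted) world origin.
    func cameraTransformRelativeToReference(for frame: ARFrame) -> simd_float4x4? {
        guard let anchor = referenceAnchor else { return nil }
        return anchor.transform.inverse * frame.camera.transform
    }
}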
Replies: 0 · Boosts: 1 · Views: 153 · Sep ’24
Collision detection between two entities not working
Hi, I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger passes through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code.

App.swift

import SwiftUI

@main
private struct TrackingApp: App {
    public init() { ... }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                                        materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]),
                                              materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
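One thing that may be worth checking, offered only as a hedged sketch and not a confirmed fix: attach the CollisionComponent to the sphere ModelEntity itself, rather than to the AnchorEntity, so the collision shape travels with the visible geometry, and subscribe to collision events on that entity. The function and entity names below are illustrative.

import SwiftUI
import RealityKit

func makeFingerTipSphere() -> (anchor: AnchorEntity, sphere: ModelEntity) {
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.01),
        materials: [SimpleMaterial(color: .orange, isMetallic: false)]
    )
    // The collision shape lives on the model entity that should actually collide.
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.01)]))
    sphere.name = "LeftHandIndexFingerSphere"

    let anchor = AnchorEntity(.hand(.left, location: .indexFingerTip))
    anchor.addChild(sphere)
    return (anchor, sphere)
}

If that still produces no events, subscribing to CollisionEvents.Began on the sphere instead of the rectangle can help confirm which side is missing its shape.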
Replies: 1 · Boosts: 0 · Views: 307 · Sep ’24
Speed up the conversion of MV-HEVC to Side-by-side
I have read Converting side-by-side 3D video to multi-view HEVC and spatial video, and now I want to convert back to a side-by-side 3D video. On iPhone 15 Pro Max, the conversion time is about 1:1 with the original video length. I do almost the same as the article describes; the only difference is that I get the frames from the spatial video and merge them into side-by-side. My current frame-merging code is below. Any suggestions to speed up the process? Or is there anything in the official article's approach that can be sped up?

// Merge frame
let leftCI = resizeCVPixelBufferFill(bufferLeft, targetSize: targetSize)
let rightCI = resizeCVPixelBufferFill(bufferRight, targetSize: targetSize)
let lbuffer = convertCIImageToCVPixelBuffer(leftCI!)!
let rbuffer = convertCIImageToCVPixelBuffer(rightCI!)!
pixelBuffer = mergeFrames(lbuffer, rbuffer)
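A hedged sketch of one common optimization, not a drop-in replacement for the code above: reuse a single CIContext and a CVPixelBufferPool, and render both eyes straight into one double-width destination buffer instead of converting each CIImage back into its own CVPixelBuffer first. The class and parameter names are illustrative.

import CoreImage
import CoreVideo

final class SideBySideComposer {
    private let context = CIContext()   // reuse one context; creating one per frame is expensive
    private var pool: CVPixelBufferPool?

    func makePool(eyeWidth: Int, height: Int) {
        let attrs: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: eyeWidth * 2,   // side-by-side: double width
            kCVPixelBufferHeightKey as String: height
        ]
        CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs as CFDictionary, &pool)
    }

    func compose(left: CIImage, right: CIImage, eyeWidth: Int) -> CVPixelBuffer? {
        guard let pool else { return nil }
        var output: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &output)
        guard let output else { return nil }

        // Shift the right eye over by one eye-width and composite both halves,
        // rendering into the destination buffer in a single pass.
        let shiftedRight = right.transformed(by: CGAffineTransform(translationX: CGFloat(eyeWidth), y: 0))
        context.render(shiftedRight.composited(over: left), to: output)
        return output
    }
}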
Replies: 1 · Boosts: 0 · Views: 246 · Sep ’24
Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) or plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the playback position to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it was currently on. Any tips on implementation? Thanks!

import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
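A hedged sketch of one way to preserve the playback position, assuming the setup above: keep the AnimationPlaybackController returned by playAnimation(_:), read its time and duration when the direction flips, and start the opposite-direction resource at a matching offset. The exact time semantics under a negative speed may need experimentation, so treat the mirroring below as a starting point; ReversibleAnimator is an illustrative name.

import RealityKit

final class ReversibleAnimator {
    private var controller: AnimationPlaybackController?

    func toggle(on entity: Entity, forward: Bool) throws {
        var definition = entity.availableAnimations[0].definition
        definition.speed = forward ? 1 : -1
        definition.repeatMode = .none
        let resource = try AnimationResource.generate(with: definition)

        // How far into the clip the previous controller was when we flipped.
        let elapsed = controller?.time ?? 0
        let duration = controller?.duration ?? 0

        let newController = entity.playAnimation(resource)
        // Try to resume at the same visual frame; adjust if the reversed clip
        // measures time from the opposite end.
        newController.time = forward ? elapsed : max(duration - elapsed, 0)
        controller = newController
    }
}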
Replies: 2 · Boosts: 0 · Views: 325 · Sep ’24
Unable to update IBL at runtime
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS. I have a scene with a model the user interacts with and 360° panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure. The first update works as expected, but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks.

Simplified example:

// Task spun up from the update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation,
       let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }
        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)
        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
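A hedged alternative worth trying, sketched under the assumption that the entity names match the snippet above: keep one persistent IBL entity for the whole session and only swap the ImageBasedLightComponent's resource on it, rather than removing and re-adding an entity on every update.

import RealityKit

@MainActor
func applyIBL(named name: String,
              intensity: Float,
              to parentEntity: Entity,
              using iblEntity: Entity) async throws {
    let resource = try await EnvironmentResource(named: name)

    var iblComponent = ImageBasedLightComponent(source: .single(resource))
    iblComponent.inheritsRotation = true
    iblComponent.intensityExponent = intensity
    iblEntity.components.set(iblComponent)

    // Re-point the receiver at the same light entity; the entity never changes,
    // only the resource inside its component does.
    parentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}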
Replies: 1 · Boosts: 0 · Views: 231 · Sep ’24
Track hardware input (keyboard, trackpad, etc.) in visionOS app during Mac Virtual Display usage?
Hi, I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space. So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window. Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
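For testing this, here is a hedged sketch using the GameController framework (available on visionOS) to observe whatever hardware keyboard input the system routes to the app. Whether any events arrive while the Mac Virtual Display has focus is exactly the open question in this post, so this only helps verify the behavior; it does not guarantee it. KeyboardMonitor is an illustrative name.

import GameController

final class KeyboardMonitor {
    private var observer: NSObjectProtocol?

    func start() {
        // Handle a keyboard that is already connected.
        if let keyboard = GCKeyboard.coalesced {
            attach(to: keyboard)
        }
        observer = NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { [weak self] notification in
            guard let keyboard = notification.object as? GCKeyboard else { return }
            self?.attach(to: keyboard)
        }
    }

    func stop() {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }

    private func attach(to keyboard: GCKeyboard) {
        // Fires for hardware key events delivered to this app.
        keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
            print("Key \(keyCode) \(pressed ? "down" : "up")")
        }
    }
}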
Replies: 0 · Boosts: 0 · Views: 218 · Sep ’24
Creating Shared Experiences in Physical Locations
Hello everyone, I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical locations. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
Replies: 3 · Boosts: 0 · Views: 457 · Aug ’24
Memory Leak using simple app with visionOS
Hello. When displaying a simple app like this:

import SwiftUI

struct ContentView: View {
    var body: some View {
        EmptyView()
    }
}

and running the Leaks tool from the developer tools in Xcode, I see a memory leak which I don't see when running the same application on iOS. You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application. Any ideas on what is going on? Thanks!
Replies: 2 · Boosts: 0 · Views: 353 · Aug ’24
Main camera not updating the buffer when the app returns from background to foreground
Hi, my camera access method looks like this:

func processCameraUpdates() async {
    print("Process camera called")
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }
    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
        print("Pixel buffer updated")
    }
}

In my ImmersiveSpace I am calling that method this way:

task {
    // Main camera access
    await placeManager.processCameraUpdates()
}

This works fine as long as the app is active / open / in the foreground. Once I close the app and re-open it, I cannot capture any images. What am I missing here? Do I need to do something when the scene becomes active?
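A hedged sketch of one approach, assuming the async frame sequence simply ends when the app is backgrounded: restart the camera task when the scene becomes active again. The CameraStreaming protocol stands in for the post's placeManager type, and the ARKitSession / CameraFrameProvider setup may also need to be re-run in the same place.

import SwiftUI

protocol CameraStreaming {
    func processCameraUpdates() async
}

struct CameraRestartModifier: ViewModifier {
    @Environment(\.scenePhase) private var scenePhase
    let placeManager: any CameraStreaming
    @State private var restartTask: Task<Void, Never>?

    func body(content: Content) -> some View {
        content.onChange(of: scenePhase) { _, newPhase in
            guard newPhase == .active else { return }
            restartTask?.cancel()
            restartTask = Task {
                // Re-run the ARKitSession / provider setup here as well if needed.
                await placeManager.processCameraUpdates()
            }
        }
    }
}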
Replies: 2 · Boosts: 0 · Views: 285 · Aug ’24
Access to ARKit underlying scene mesh in visionOS
Is there any way to reset the scan memory Vision Pro stores on-device, so that every new scan in my application starts from scratch rather than being instantly recognized? In the Apple Vision Pro Privacy Overview (https://www.apple.com/privacy/docs/Apple_Vision_Pro_Privacy_Overview.pdf), it is stated: "visionOS builds a three-dimensional model to map your surroundings on-device. Apple Vision Pro uses a combination of camera and LiDAR data to map the area around you and save that model on-device. The model enables visionOS to alert you about real-life obstacles, as well as appropriately reflect the lighting and shadows of your physical space. visionOS uses audio ray tracing to analyze your room’s acoustic properties on-device to adapt and match sound to your space. The underlying scene mesh is stored on-device and encrypted with your passcode if one is set." How can I access and erase the, and I quote, "underlying scene mesh stored on-device"?
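For the "access" half of the question, a hedged sketch of how an app reads the scene mesh it is allowed to see, via ARKit's SceneReconstructionProvider. To be clear, this surfaces mesh anchors to the app; it does not expose or erase the system's stored on-device model.

import ARKit

func observeSceneMesh() async throws {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Each MeshAnchor covers a chunk of reconstructed real-world geometry.
        print("Mesh anchor \(update.anchor.id): \(update.event)")
    }
}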
Replies: 2 · Boosts: 0 · Views: 350 · Aug ’24
How to Drag Objects Separately with Both Hands
I would like to drag two different objects simultaneously, one with each hand. In the following session (6:44), it was mentioned that such an implementation could be achieved using SpatialEventGesture: https://developer.apple.com/jp/videos/play/wwdc2024/10094/ However, since the location3D obtained from a SpatialEventGesture event is of type Point3D, I'm having trouble converting it for moving objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/ How should I go about converting the coordinates? Additionally, is it even possible to drag different objects with each hand?

.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events {
                if event.phase == .active {
                    switch event.kind {
                    case .indirectPinch:
                        if event.targetedEntity == cube1 {
                            let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) // This doesn't work
                            dragCube(pos, for: cube1)
                        }
                    case .touch, .directPinch, .pointer:
                        break
                    @unknown default:
                        print("unknown default")
                    }
                }
            }
        }
)
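A hedged sketch of the overall shape such a conversion could take, assuming the RealityViewContent instance captured from the RealityView closure (rather than the type itself) is used to call convert, and that in-flight drags are keyed by event ID so each hand moves its own entity. The cube setup is omitted and the names are illustrative.

import SwiftUI
import RealityKit

struct TwoHandDragView: View {
    @State private var realityContent: RealityViewContent?
    @State private var draggedEntities: [SpatialEventCollection.Event.ID: Entity] = [:]

    var body: some View {
        RealityView { content in
            realityContent = content
            // ... add cube1 / cube2 to content here ...
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    guard let content = realityContent else { return }
                    for event in events where event.phase == .active {
                        // Remember which entity this particular event (hand) grabbed.
                        if draggedEntities[event.id] == nil {
                            draggedEntities[event.id] = event.targetedEntity
                        }
                        guard let entity = draggedEntities[event.id] else { continue }
                        // Assumes the cube is parented directly to the RealityView content.
                        let scenePos = content.convert(event.location3D, from: .local, to: .scene)
                        entity.position = scenePos
                    }
                }
                .onEnded { events in
                    for event in events { draggedEntities[event.id] = nil }
                }
        )
    }
}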
Replies: 2 · Boosts: 0 · Views: 230 · Aug ’24
Indicator of how much time is left to fully load a RealityKit 3D model
I have a quite big USDZ file containing my 3D model that I load in a RealityView in a Swift project, and it takes some time before the model appears on screen. I was wondering if there is a way to know how much time is left before the RealityKit/RealityView model finishes loading, or a percentage I can feed into a progress bar, so the user can see how long remains before the full model is visible on screen. And if so, how do I drive a progress bar while loading? Something like that.
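As far as I know there is no built-in progress for Entity(named:) or Entity(contentsOf:), so here is a hedged sketch under the assumption that the USDZ is fetched from a remote URL: report real progress for the download portion, then show an indeterminate spinner while RealityKit parses the file. The function name and callback are illustrative.

import Foundation
import RealityKit

func loadRemoteModel(from url: URL,
                     onProgress: @escaping (Double) -> Void) async throws -> Entity {
    let (bytes, response) = try await URLSession.shared.bytes(from: url)
    let expected = Double(response.expectedContentLength)

    var data = Data()
    for try await byte in bytes {
        data.append(byte)
        // Throttle progress updates; per-byte callbacks would be far too chatty.
        if expected > 0, data.count % 65_536 == 0 {
            onProgress(Double(data.count) / expected)
        }
    }
    onProgress(1.0)

    let fileURL = FileManager.default.temporaryDirectory.appendingPathComponent("model.usdz")
    try data.write(to: fileURL)

    // No percentage is available for this step; an indeterminate ProgressView fits here.
    return try await Entity(contentsOf: fileURL)
}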
Replies: 2 · Boosts: 0 · Views: 213 · Aug ’24
Adding coordinates by looking at the ground and tapping
This effect was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 28:00), where coordinates are added by looking somewhere on the ground and tapping, but I don't understand the explanation very well. I hope you can give me a basic solution. I am very grateful for this!
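A hedged sketch of the usual pattern for this kind of interaction, not taken from the session's sample code: a tappable entity standing in for the floor (InputTargetComponent plus CollisionComponent) receives a SpatialTapGesture, and the tap location is converted into that entity's space, where a marker is added. The view and entity names are illustrative.

import SwiftUI
import RealityKit

struct PlaceOnTapView: View {
    var body: some View {
        RealityView { content in
            // Illustrative stand-in for real floor geometry (e.g. from plane detection).
            let floor = ModelEntity(mesh: .generatePlane(width: 4, depth: 4),
                                    materials: [SimpleMaterial(color: .gray, isMetallic: false)])
            floor.components.set(InputTargetComponent())
            floor.components.set(CollisionComponent(shapes: [.generateBox(width: 4, height: 0.01, depth: 4)]))
            floor.position = [0, 0, -1]
            content.add(floor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap from gesture space into the tapped entity's space.
                    let localPosition = value.convert(value.location3D, from: .local, to: value.entity)
                    let marker = ModelEntity(mesh: .generateSphere(radius: 0.03),
                                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
                    marker.position = localPosition
                    value.entity.addChild(marker)
                }
        )
    }
}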
Replies: 1 · Boosts: 0 · Views: 318 · Aug ’24
Coordinate conversion
Coordinate conversion was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 22:00), where an entity jumps out of a volume into the surrounding space, but I don't understand the explanation very well. I hope you can give me a basic solution. I am very grateful for this!
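A hedged sketch of one way such a hand-off can be wired up, assuming the .immersiveSpace coordinate space discussed around that part of the session: the volume's RealityView converts the entity's position into immersive-space coordinates, and the immersive space's RealityView converts that point back into its own scene space. HandoffModel and the function names are illustrative, and this is not the session's sample code.

import SwiftUI
import RealityKit
import Spatial

final class HandoffModel {
    var positionInImmersiveSpace: Point3D?
}

// Called from the volume's RealityView update closure:
func publishPosition(of entity: Entity, content: RealityViewContent, handoff: HandoffModel) {
    let worldPosition = entity.position(relativeTo: nil)
    handoff.positionInImmersiveSpace = content.convert(worldPosition, from: .scene, to: .immersiveSpace)
}

// Called from the immersive space's RealityView update closure:
func placeJumpedEntity(_ entity: Entity, content: RealityViewContent, handoff: HandoffModel) {
    guard let point = handoff.positionInImmersiveSpace else { return }
    entity.position = content.convert(point, from: .immersiveSpace, to: .scene)
}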
Replies: 1 · Boosts: 0 · Views: 323 · Aug ’24
Draw a physical mesh
In this sample code: https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience there is a physical collision reaction between virtual objects and the real world, which is achieved by creating a mesh with physics components. However, I don't understand the information in the document very well. Can anyone offer a solution? Thank you!
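A hedged sketch of the general idea behind that sample: each scene-reconstruction MeshAnchor becomes an invisible entity carrying a static collision shape and physics body, so virtual objects can rest on or bounce off real surroundings. The function name is illustrative and details such as collision filters and physics materials are omitted.

import ARKit
import RealityKit

func makeCollisionEntity(for meshAnchor: MeshAnchor) async throws -> Entity {
    // Build a static collision shape from the reconstructed real-world mesh.
    let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)

    let entity = Entity()
    entity.name = "SceneMesh-\(meshAnchor.id)"
    entity.setTransformMatrix(meshAnchor.originFromAnchorTransform, relativeTo: nil)
    entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    entity.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))
    return entity
}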
Replies: 1 · Boosts: 0 · Views: 279 · Aug ’24
Is it possible to load Reality Composer Pro scenes from URL? (VisionOS)
My visionOS app has over 150 3D models that work flawlessly via Firebase URLs. I also have some RealityKitContent scenes (stored locally) that are getting pretty large, so I'm looking to move those into Firebase as well and call on them as needed. I can't find any documentation that has worked for this and can't find anyone who has talked about it. Does anyone know if this is possible? I tried exporting the Reality Composer Pro scenes as USDZs and then importing them, but kept getting material errors. Maybe there's a way to call on each 3D model separately and have RealityKit build them into the scene? I'm not really sure. Any help would be much appreciated!
Replies: 1 · Boosts: 0 · Views: 282 · Aug ’24