Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) OR plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the playback position to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it was currently on. Any tips on implementation? Thanks!

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
```
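One way to keep the playback position might be to hold on to the AnimationPlaybackController returned by playAnimation and flip its speed, instead of generating a new resource on every toggle. Below is a minimal, untested sketch along those lines (same "TestAsset" setup as above); note that if the animation has already run to completion you may need to restart it before reversing:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct TogglePlaybackView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var controller: AnimationPlaybackController? = nil

    var body: some View {
        RealityView { content in
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            if let scene { content.add(scene) }
        }
        .onChange(of: isPlaying) {
            guard let scene, let animation = scene.availableAnimations.first else { return }

            // Start the animation once, paused, so there is always a controller to reuse.
            if controller == nil {
                controller = scene.playAnimation(animation, transitionDuration: 0, startsPaused: true)
            }

            // A negative speed plays backwards from wherever playback currently is,
            // so the position is preserved across toggles instead of jumping.
            controller?.speed = isPlaying ? 1 : -1
            controller?.resume()
        }
    }
}
```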
2
0
468
Sep ’24
Issue with losing alignment on multi-room session
Hello! We're having an issue in our app that implements multi-room scanning via RoomPlan, where the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room). To clarify a few points: we are using RoomCaptureView, starting a new room with roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false). We are re-using the same ARSession, which is passed into the RoomCaptureView like so:

```swift
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
```

Any clue why the AR world origin is reset? I need it to be consistent for storing per-frame camera positions. Thanks!
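If the origin shift itself can't be avoided, one possible workaround is to store camera poses relative to an ARAnchor placed at the start of the first scan rather than relative to the world origin; anchors track their physical location for the lifetime of the session, so positions expressed in the anchor's frame may survive a re-based origin. A rough, untested sketch (names are illustrative):

```swift
import ARKit
import simd

// Place one anchor at the very start of the first scan and keep a reference to it.
func addMultiRoomReferenceAnchor(to session: ARSession) -> ARAnchor {
    let referenceAnchor = ARAnchor(name: "multiRoomReference", transform: matrix_identity_float4x4)
    session.add(anchor: referenceAnchor)
    return referenceAnchor
}

// When storing a frame's camera pose, express it in the anchor's frame instead of
// the (possibly re-based) world frame: anchorFromWorld * worldFromCamera.
func cameraTransform(of frame: ARFrame, relativeTo anchor: ARAnchor) -> simd_float4x4 {
    simd_inverse(anchor.transform) * frame.camera.transform
}
```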
0
1
229
Sep ’24
Collision detection between two entities not working
Hi, I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger goes through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code:

App.swift

```swift
import SwiftUI

@main
private struct TrackingApp: App {
    public init() { ... }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}
```

ImmersiveView.swift

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                                        materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]),
                                              materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
```
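One possible cause (not confirmed in the post): the fingertip sphere hangs off a hand AnchorEntity while the box hangs off a world AnchorEntity, and RealityKit may not report collisions between entities that belong to different anchors. A workaround that is often suggested is to drive the fingertip entity yourself from ARKit hand tracking so that both colliders live under the same root. A rough, untested sketch (requires an ImmersiveSpace and hand-tracking permission; the function name is illustrative):

```swift
import ARKit
import RealityKit

let arKitSession = ARKitSession()
let handTracking = HandTrackingProvider()

// Moves `fingertipEntity` (added next to the box, not under a hand anchor) so that
// it follows the left index fingertip; collisions are then between entities that
// share the same root.
func trackLeftIndexFingerTip(moving fingertipEntity: Entity) async {
    do {
        try await arKitSession.run([handTracking])
    } catch {
        print("Hand tracking unavailable: \(error)")
        return
    }

    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        guard handAnchor.chirality == .left,
              let joint = handAnchor.handSkeleton?.joint(.indexFingerTip),
              joint.isTracked else { continue }

        // World transform of the fingertip = world-from-hand * hand-from-joint.
        let worldTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        fingertipEntity.setTransformMatrix(worldTransform, relativeTo: nil)
    }
}
```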
1
0
372
Sep ’24
Update, Content, and Attachments on Vision Pro
I have a scene setup that places images on planes to mimic an RPG-style character interaction. There's a large scene background image and a smaller character image in the foreground. Both are added as content to a RealityView. There's one attachment, a dialogue window for interaction with the character, and it is attached to the character image. When the scene changes, I need the images and the dialogue window to refresh. My current approach has been to remove everything from the scene and add the new content in the update closure.

```swift
@EnvironmentObject var narrativeModel: NarrativeModel
@EnvironmentObject var dialogueModel: DialogueViewModel
@State private var sceneChange = false
private let dialogueViewID = "dialogue"

var body: some View {
    RealityView { content, attachments in
        // At start, generate the background image only and no characters
        if narrativeModel.currentSceneIndex == -1 {
            content.add(generateBackground(image: narrativeModel.backgroundImage!))
        }
    } update: { content, attachments in
        print("update called")
        if narrativeModel.currentSceneIndex != -1 {
            print("sceneChange: \(sceneChange)")
            if sceneChange {
                // Remove old entities
                if narrativeModel.currentSceneIndex != 0 {
                    content.remove(attachments.entity(for: dialogueViewID)!)
                }
                content.entities.removeAll()
                // Generate the background image for the scene
                content.add(generateBackground(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].backgroundImage))
                // Generate the characters for the scene
                let character = generateCharacter(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].characterImage)
                content.add(character)
                print(content)
                if let character_attachment = attachments.entity(for: "dialogue") {
                    print("attachment clause executes")
                    character_attachment.position = [0.45, 0, 0]
                    character.addChild(character_attachment)
                }
            }
        }
    } attachments: {
        Attachment(id: dialogueViewID) {
            DialogueView()
                .environmentObject(dialogueModel)
                .frame(width: 400, height: 600)
                .glassBackgroundEffect()
        }
    }
    // Load scene images
    .onChange(of: narrativeModel.currentSceneIndex) {
        print("SceneView onChange called")
        DispatchQueue.main.async {
            self.sceneChange = true
        }
        print("SceneView onChange toggle - sceneChange = \(sceneChange)")
    }
}
```

If I don't use the dialogue window, this all works just fine. If I do, when I click the next button (in another view), which increments the current scene index, I enter some kind of loop where the sceneChange value gets toggled to true but never gets toggled back to false (even though it's changed in the update closure). The reason I have the sceneChange value is that I need to update the content and attachments whenever the scene index changes, and I need a state variable to trigger the update function to do this. My questions are: Why might I be entering this loop? Why would it only happen if I send a message in the dialogue view attachment, which is a whole separate view? Is there a better way to be doing this?
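One alternative that avoids the flag entirely (a sketch, not the post's code): derive "needs rebuild" by comparing the model's scene index with the index you last built, stored in a plain reference type, so that writing to it inside update: doesn't invalidate the view again the way toggling an @State flag there would. This assumes currentSceneIndex is @Published, so its change alone re-runs the update closure. The SceneBuildTracker and SceneView names are illustrative:

```swift
import SwiftUI
import RealityKit

// Plain reference type: mutating its property inside `update:` does not invalidate
// the view the way writing to an @State value would, so no update loop is triggered.
final class SceneBuildTracker {
    var lastBuiltSceneIndex: Int = -1
}

struct SceneView: View {   // illustrative skeleton, not the full view from the post
    @EnvironmentObject var narrativeModel: NarrativeModel
    @State private var tracker = SceneBuildTracker()

    var body: some View {
        RealityView { content, attachments in
            // initial setup ...
        } update: { content, attachments in
            let index = narrativeModel.currentSceneIndex
            // Rebuild only when the scene actually changed since the last build.
            guard index != -1, index != tracker.lastBuiltSceneIndex else { return }
            tracker.lastBuiltSceneIndex = index

            content.entities.removeAll()
            // ... re-add the background, the character, and the dialogue attachment here ...
        } attachments: {
            // ... same Attachment(id:) block as in the post ...
        }
    }
}
```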
7
0
574
Aug ’24
Creating Shared Experiences in Physical Locations
Hello everyone, I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical locations. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
3
0
567
Aug ’24
Unable to update IBL at runtime
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS. I have a scene with a model the user interacts with and 360° panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure. The first update works as expected, but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks

Simplified example:

```swift
// Task spun up from the update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation,
       let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)

        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }

        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)

        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
```
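An alternative worth trying (a sketch, untested): keep one long-lived IBL entity in the scene and replace only its ImageBasedLightComponent when the skybox changes, rather than removing and re-adding entities each time. The function below and its parameters are illustrative:

```swift
import RealityKit

// Swaps the image-based light source in place on an existing "ibl" entity and
// re-points the receiver at it.
func applySkyboxIBL(named name: String,
                    intensityExponent: Float,
                    rotation: Float,
                    to iblEntity: Entity,
                    receiver parentEntity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: name) else { return }

    var ibl = ImageBasedLightComponent(source: .single(resource))
    ibl.inheritsRotation = true
    ibl.intensityExponent = intensityExponent

    iblEntity.transform.rotation = simd_quatf(angle: rotation, axis: [0, 1, 0])
    iblEntity.components.set(ibl)   // replaces the previous IBL component in place

    // Point the receiver at the same light entity again so it picks up the new source.
    parentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}
```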
1
0
269
Sep ’24
Memory Leak using simple app with visionOS
Hello. When displaying a simple app like this:

```swift
struct ContentView: View {
    var body: some View {
        EmptyView()
    }
}
```

and running the Leaks app from the developer tools in Xcode, I see a memory leak which I don't see when running the same application on iOS. You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application. Any ideas on what is going on? Thanks!
2
0
432
Aug ’24
Track hardware input (keyboard, trackpad, etc.) in visionOS app during Mac Virtual Display usage?
Hi, I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space. So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window. Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
0
0
255
Sep ’24
How to Drag Objects Separately with Both Hands
I would like to drag two different objects simultaneously, one with each hand. In the following session (6:44), it was mentioned that such an implementation could be achieved using SpatialEventGesture(): https://developer.apple.com/jp/videos/play/wwdc2024/10094/ However, since the event.location3D obtained from SpatialEventGesture is of type Point3D, I'm having trouble converting it for moving objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/ How should I go about converting the coordinates? Additionally, is it even possible to drag different objects with each hand?

```swift
.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events {
                if event.phase == .active {
                    switch event.kind {
                    case .indirectPinch:
                        if event.targetedEntity == cube1 {
                            let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) // This doesn't work
                            dragCube(pos, for: cube1)
                        }
                    case .touch, .directPinch, .pointer:
                        break
                    @unknown default:
                        print("unknown default")
                    }
                }
            }
        }
)
```
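A rough, untested sketch of one way to approach both questions: convert(_:from:to:) is an instance method, so it needs to be called on a stored RealityViewContent (or another RealityCoordinateSpaceConverting value) rather than on the type, and each active event has its own id, so keeping a per-id map of grabbed entities lets each hand drag its own object. The TwoHandDragView name and the cube setup are illustrative:

```swift
import SwiftUI
import RealityKit

struct TwoHandDragView: View {
    @State private var realityContent: RealityViewContent? = nil
    @State private var draggedEntities: [SpatialEventCollection.Event.ID: Entity] = [:]

    var body: some View {
        RealityView { content in
            realityContent = content   // keep the converter around for the gesture
            // ... add cube1 and cube2, each with CollisionComponent + InputTargetComponent ...
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    guard let content = realityContent else { return }
                    for event in events where event.phase == .active {
                        // Point3D (SwiftUI space) -> SIMD3<Float> in RealityKit scene space.
                        let scenePosition = content.convert(event.location3D, from: .local, to: .scene)
                        // Remember which entity this particular event (hand) grabbed.
                        if let entity = draggedEntities[event.id] ?? event.targetedEntity {
                            draggedEntities[event.id] = entity
                            entity.setPosition(scenePosition, relativeTo: nil)
                        }
                    }
                }
                .onEnded { events in
                    for event in events { draggedEntities[event.id] = nil }
                }
        )
    }
}
```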
2
0
292
Aug ’24
VisionOS Shareplay - Persona Preview Profile
Hey, I was reading through the Happy Beam intro page (https://developer.apple.com/documentation/visionos/happybeam) and I stumbled upon the info about the Persona Preview Profile, which is supposed to help with testing SharePlay on the device. However, the link from the page points to a 404, and I was curious if anyone knows what the Persona Preview Profile is and how exactly it can help with testing SharePlay. Where can I find more info about it?
1
0
344
Aug ’24
Coordinate conversion
Coordinate conversion is mentioned in https://developer.apple.com/wwdc24/10153 (the effect is demonstrated at 22:00), where the demo shows an entity jumping out of a volume into the surrounding space, but I don't understand the explanation very well. I'm hoping someone can give me a basic solution. I'd be very grateful for any help!
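As a generic starting point (not necessarily the exact technique from the session), RealityKit entities can convert positions between coordinate spaces directly, with nil standing for the scene's world space. A small, untested sketch:

```swift
import RealityKit

// Moves an entity under a new parent while keeping it at the same place in the
// world, by converting its position through world space.
func reparentKeepingWorldPosition(_ entity: Entity, under newParent: Entity) {
    // Current position of the entity, expressed in world space.
    let worldPosition = entity.convert(position: .zero, to: nil)

    // Change the parent (and therefore the coordinate space) ...
    entity.setParent(newParent)

    // ... then restore the same world-space position so the entity doesn't jump.
    entity.setPosition(worldPosition, relativeTo: nil)
}

// Note: entity.setParent(newParent, preservingWorldTransform: true) does the same
// thing in a single call, including rotation and scale.
```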
1
0
409
Aug ’24
coordinates add
This effect is mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 28:00), where you can add a coordinate marker by looking somewhere on the ground and clicking, but I don't understand the explanation very well. I'm hoping someone can give me a basic solution. I'd be very grateful for any help!
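A rough, untested sketch of one way to get this kind of look-and-pinch placement (not necessarily the session's exact approach): in a full immersive space the scene origin sits on the floor, so a thin invisible collider at y = 0 can catch spatial taps, and the tap location can be converted into scene space to place a marker there. The TapToPlaceView name is illustrative:

```swift
import SwiftUI
import RealityKit

struct TapToPlaceView: View {
    var body: some View {
        RealityView { content in
            // Invisible floor-level "catcher" that can receive look-and-pinch taps.
            let floorCatcher = Entity()
            floorCatcher.components.set(CollisionComponent(shapes: [.generateBox(width: 20, height: 0.001, depth: 20)]))
            floorCatcher.components.set(InputTargetComponent())
            content.add(floorCatcher)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // SwiftUI Point3D -> RealityKit scene-space SIMD3<Float>.
                    let scenePosition = value.convert(value.location3D, from: .local, to: .scene)

                    // Drop a small marker sphere where the user looked and pinched.
                    let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                             materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
                    value.entity.addChild(marker)
                    marker.setPosition(scenePosition, relativeTo: nil)
                }
        )
    }
}
```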
1
0
421
Aug ’24
SwiftUI's alert window won't get automatically focus in visionOS
I have three basic elements on this UI page: a View, an Alert, and a Toolbar. I put the Toolbar and Alert on the View; when I click a button on the Toolbar, my alert window shows up. Below is a simplified version of my code:

```swift
@State private var showAlert = false

HStack {
    // ...
}
.alert(Text("Quit the game?"), isPresented: $showAlert) {
    MyAlertWindow()
} message: {
    Text("Description text about this alert")
}
.toolbar {
    ToolbarItem(placement: .bottomOrnament) {
        MyToolBarButton(showAlert: $showAlert)
    }
}
```

In MyToolBarButton I just toggle the bound showAlert variable to open/close the alert window. When running on either the simulator or the device, the behavior is quite strange: when I toggle MyToolBarButton, the alert window takes about 2-3 seconds to show up, and all the elements on the alert window are grayed out, as if the whole window has lost focus. I have to click the window control bar below (with a drag gesture) to bring the window back into focus. And this is not the only issue; I also find that MyToolBarButton cannot be pressed to close the alert window (even though when I click the button on my alert window it closes itself). Oh, by the way, I don't know if this matters, but I open the window with my immersive view opened (though I tested it and it doesn't seem to affect anything here). Any idea what's going on here? Xcode 16.1 / visionOS 2 beta 6
1
0
469
Aug ’24
How to update immersiveSpace from another window
Hi, I have 2 views and an immersive space. The 1st and 2nd views are displayed in a TabView. I open my ImmersiveSpace from a button in the 1st view of the tab. Then, when I go to the 2nd tab view, I want to show an attachment in my immersive space. This attachment should be visible in the immersive space only as long as the user is on the 2nd view. This is what I have done so far:

```swift
struct Second: View {
    @StateObject var sharedImageData = SharedImageData()

    var body: some View {
        VStack {
            // other code
        }
        .onAppear() {
            Task {
                sharedImageData.shouldCameraButtonShouw = true
            }
        }
        .onDisappear() {
            Task {
                sharedImageData.shouldCameraButtonShouw = false
            }
        }
    }
}
```

This is my immersive space:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var sharedImageData: SharedImageData

    var body: some View {
        RealityView { content, attachments in
            // some code
        } update: { content, attachments in
            guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else {
                return
            }
            controlCenterentity.addChild(controlCenterAttachmentEntity)
            content.add(controlCenterentity)
        } attachments: {
            if sharedImageData.shouldCameraButtonShouw {
                Attachment(id: Attachments.controlCenter) {
                    ControlCenter()
                }
            }
        }
    }
}
```

And this is my observable class:

```swift
class SharedImageData: ObservableObject {
    @Published var takenImage: UIImage? = nil
    @Published var shouldCameraButtonShouw: Bool = false
}
```

My problem is, when I am on the Second view my attachment never appears. The attachment appears without this if condition. How can I achieve my goal?
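One likely cause (assuming nothing else injects the object): Second creates its own SharedImageData with @StateObject, so the instance whose flag it flips is not the instance the ImmersiveView observes through the environment. A sketch of one fix is to create a single instance at the App level, inject it into both the window scene and the immersive space, and have Second observe that same instance. The MyApp name and "ImmersiveSpace" id are placeholders:

```swift
import SwiftUI

@main
struct MyApp: App {
    @StateObject private var sharedImageData = SharedImageData()

    var body: some Scene {
        WindowGroup {
            ContentView()   // contains the TabView with the 1st and 2nd views
                .environmentObject(sharedImageData)
        }
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(sharedImageData)
        }
    }
}

struct Second: View {
    @EnvironmentObject var sharedImageData: SharedImageData   // no longer a fresh @StateObject

    var body: some View {
        VStack {
            // other code
        }
        .onAppear { sharedImageData.shouldCameraButtonShouw = true }
        .onDisappear { sharedImageData.shouldCameraButtonShouw = false }
    }
}
```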
1
0
414
Aug ’24