Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

How to Drag Objects Separately with Both Hands
I would like to drag two different objects simultaneously, one with each hand. In the following session (6:44), it was mentioned that this can be achieved using SpatialEventGesture(): https://developer.apple.com/jp/videos/play/wwdc2024/10094/ However, since targetedEntity.location3D obtained from SpatialEventGesture is of type Point3D, I'm having trouble converting it in order to move the objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/ How should I go about converting the coordinates? Additionally, is it even possible to drag a different object with each hand?

.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events {
                if event.phase == .active {
                    switch event.kind {
                    case .indirectPinch:
                        if event.targetedEntity == cube1 {
                            let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) // This doesn't work
                            dragCube(pos, for: cube1)
                        }
                    case .touch, .directPinch, .pointer:
                        break
                    @unknown default:
                        print("unknown default")
                    }
                }
            }
        }
)
2
0
213
Aug ’24
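One possible shape of the fix, sketched below and not verified on-device: `convert(_:from:to:)` is an instance method, so the `RealityViewContent` instance from the RealityView closure has to be kept around (the `SceneHolder` class here is a hypothetical helper, not an API). Since each pinch arrives as a separate event in the collection, two hands targeting two entities should produce two concurrently active events.

```swift
import SwiftUI
import RealityKit

@Observable final class SceneHolder {          // hypothetical helper to retain the content instance
    var content: RealityViewContent?
}

struct TwoHandDragView: View {
    @State private var holder = SceneHolder()
    @State private var cube1 = ModelEntity(mesh: .generateBox(size: 0.2))

    var body: some View {
        RealityView { content in
            content.add(cube1)
            holder.content = content
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    for event in events where event.phase == .active {
                        guard event.kind == .indirectPinch,
                              event.targetedEntity == cube1,
                              let content = holder.content else { continue }
                        // Instance method on the content, not a static call on the type:
                        let pos: SIMD3<Float> = content.convert(event.location3D,
                                                                from: .local, to: .scene)
                        cube1.position = pos
                    }
                }
        )
    }
}
```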
Indicator of How Much Time Is Left to Fully Load a RealityKit 3D Model
I have a quite big USDZ file containing my 3D model, which I load in a RealityView in a Swift project, and it takes some time before the model appears on screen. Is there a way to know how much time is left before the RealityKit/RealityView model finishes loading, or a percentage I could feed into a progress bar so the user can see how long remains before the full model is visible? And if so, how would I drive a progress bar while loading?
2
0
201
Aug ’24
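As far as I know, RealityKit's async loading initializers do not report fractional progress, so a percentage-accurate bar is not directly available; a common fallback is an indeterminate indicator shown until the load completes. A minimal sketch ("Robot" is a placeholder asset name):

```swift
import SwiftUI
import RealityKit

struct ModelLoadingView: View {
    @State private var loaded = false

    var body: some View {
        RealityView { content in
            // Async init keeps the UI responsive while the USDZ is parsed.
            if let model = try? await ModelEntity(named: "Robot") {
                content.add(model)
                loaded = true
            }
        }
        .overlay {
            if !loaded {
                // Indeterminate, since no fractional progress is exposed.
                ProgressView("Loading model…")
            }
        }
    }
}
```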
Adding coordinates with a tap
This effect was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 28:00), where you can add coordinates by looking somewhere on the ground and tapping, but I don't understand the explanation very well. I hope you can give me a basic solution. I would be very grateful!
1
0
309
Aug ’24
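The general pattern behind that demo is a tap gesture targeted to a collision-enabled entity, whose 3D location is converted into entity space to place a marker. A sketch under assumptions (here a hard-coded invisible floor stands in for the plane detection the session uses):

```swift
import SwiftUI
import RealityKit

struct TapToPlaceView: View {
    var body: some View {
        RealityView { content in
            // A large invisible floor to receive taps; in a real app this
            // entity would come from ARKit plane detection.
            let floor = Entity()
            floor.components.set(CollisionComponent(
                shapes: [.generateBox(width: 10, height: 0.001, depth: 10)]))
            floor.components.set(InputTargetComponent())
            content.add(floor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap's Point3D into the tapped entity's space.
                    let pos = value.convert(value.location3D,
                                            from: .local, to: value.entity)
                    let marker = ModelEntity(mesh: .generateSphere(radius: 0.02))
                    marker.position = pos
                    value.entity.addChild(marker)
                }
        )
    }
}
```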
Coordinate conversion
Coordinate conversion was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 22:00), where an entity jumps out of a volume into space, but I don't understand the explanation very well. I hope you can give me a basic solution. I would be very grateful!
1
0
311
Aug ’24
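As background for the question above, the basic entity-to-entity conversion APIs look like this (a small sketch, not the session's exact code; it assumes identity rotation and scale):

```swift
import RealityKit

// Build a tiny hierarchy: parent at x = 1, child 0.5 m above it.
let parent = Entity()
parent.position = [1, 0, 0]
let child = Entity()
child.position = [0, 0.5, 0]
parent.addChild(child)

// Position of `child` expressed in world space (nil = scene root):
let worldPos = child.position(relativeTo: nil)                 // [1, 0.5, 0]

// Convert the child's origin into the parent's coordinate space:
let inParent = child.convert(position: [0, 0, 0], to: parent)  // [0, 0.5, 0]
```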
Update, Content, and Attachments on Vision Pro
I have a scene setup that places images on planes to mimic an RPG-style character interaction. There's a large scene background image and a smaller character image in the foreground. Both are added as content to a RealityView. There's one attachment, a dialogue window for interacting with the character, and it is attached to the character image. When the scene changes, I need the images and the dialogue window to refresh. My current approach has been to remove everything from the scene and add the new content in the update closure.

@EnvironmentObject var narrativeModel: NarrativeModel
@EnvironmentObject var dialogueModel: DialogueViewModel
@State private var sceneChange = false
private let dialogueViewID = "dialogue"

var body: some View {
    RealityView { content, attachments in
        // At start, generate the background image only and no characters
        if narrativeModel.currentSceneIndex == -1 {
            content.add(generateBackground(image: narrativeModel.backgroundImage!))
        }
    } update: { content, attachments in
        print("update called")
        if narrativeModel.currentSceneIndex != -1 {
            print("sceneChange: \(sceneChange)")
            if sceneChange {
                // Remove old entities
                if narrativeModel.currentSceneIndex != 0 {
                    content.remove(attachments.entity(for: dialogueViewID)!)
                }
                content.entities.removeAll()
                // Generate the background image for the scene
                content.add(generateBackground(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].backgroundImage))
                // Generate the characters for the scene
                let character = generateCharacter(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].characterImage)
                content.add(character)
                print(content)
                if let character_attachment = attachments.entity(for: "dialogue") {
                    print("attachment clause executes")
                    character_attachment.position = [0.45, 0, 0]
                    character.addChild(character_attachment)
                }
            }
        }
    } attachments: {
        Attachment(id: dialogueViewID) {
            DialogueView()
                .environmentObject(dialogueModel)
                .frame(width: 400, height: 600)
                .glassBackgroundEffect()
        }
    }
    // Load scene images
    .onChange(of: narrativeModel.currentSceneIndex) {
        print("SceneView onChange called")
        DispatchQueue.main.async {
            self.sceneChange = true
        }
        print("SceneView onChange toggle - sceneChange = \(sceneChange)")
    }
}

If I don't use the dialogue window, this all works just fine. If I do, when I click the next button (in another view), which increments the current scene index, I enter some kind of loop where the sceneChange value gets toggled to true but never gets toggled back to false (even though it's changed in the update closure). The reason I have the sceneChange value is that I need to update the content and attachments whenever the scene index changes, and I need a state variable to trigger the update function to do this. My questions are: Why might I be entering this loop? Why does it only happen if I send a message in the dialogue view attachment, which is a whole separate view? Is there a better way to do this?
7
0
400
Aug ’24
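One common way to avoid this kind of loop is to stop driving scene rebuilds through a @State flag that the update closure both reads and (indirectly) rewrites, and instead mutate a persistent root entity directly from .onChange. A hedged sketch with hypothetical names (the models and generator functions from the post are elided):

```swift
import SwiftUI
import RealityKit

struct SceneSketchView: View {
    @State private var root = Entity()   // persistent container, added once
    @State private var sceneIndex = -1   // stands in for narrativeModel.currentSceneIndex

    var body: some View {
        RealityView { content, attachments in
            content.add(root)
            if let dialogue = attachments.entity(for: "dialogue") {
                dialogue.position = [0.45, 0, 0]
                root.addChild(dialogue)
            }
        } attachments: {
            Attachment(id: "dialogue") {
                Text("Dialogue placeholder")
            }
        }
        .onChange(of: sceneIndex) {
            // Rebuild scene content here, outside the update closure, so no
            // state flag needs to be toggled back and forth.
            root.children.removeAll()
            // ... add the new background/character entities to `root` ...
        }
    }
}
```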
Composing Interactive 3d Content example Build Failure
Hello, I downloaded the most recent Xcode 16.0 beta 6 along with the example project located here. Currently I am experiencing the following build failures:

RealityAssetsCompile ...
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1

I saw that a similar issue has been reported. As a test I downloaded that project, and it compiled as expected.
2
0
261
Aug ’24
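The errors themselves point at the likely fix: the RealityKitContent package in the sample declares visionOS 1.0 in its platforms array, while those components require visionOS 2. A sketch of the manifest change (exact tools version and target layout depend on the sample project):

```swift
// Package.swift of the project's RealityKitContent package
// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // Raised from .v1: BlendShapeWeights, EnvironmentLightingConfiguration,
        // and AudioLibrary are not available on 'xros 1.0'.
        .visionOS(.v2)
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
```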
VisionOS Shareplay - Persona Preview Profile
Hey, I was reading through the Happy Beam intro website (https://developer.apple.com/documentation/visionos/happybeam) and I stumbled upon the info about the Persona Preview Profile, which is supposed to help with testing SharePlay on the device. However, the link on the website points to a 404, and I was curious if anyone knows what the Persona Preview Profile is and how exactly it can help with testing SharePlay. Where can I find more info about it?
1
0
244
Aug ’24
Reference Error for VisionOS
When I try to import CompositorServices, I get the following error:

dyld[596]: Symbol not found: _$sSo13cp_drawable_tV18CompositorServicesE17computeProjection37normalizedDeviceCoordinatesConvention9viewIndexSo13simd_float4x4aSo0A26_axis_direction_conventionV_SitF
Referenced from: /private/var/containers/Bundle/Application/33008953-150D-4888-9860-28F41E916655/VolumeRenderingVision.app/VolumeRenderingVision.debug.dylib
Expected in: <968F7985-72C8-30D7-866C-AD8A1B8E7EE6> /System/Library/Frameworks/CompositorServices.framework/CompositorServices

The app wrongly refers to my Mac's local directory, even though I chose Vision Pro as the run destination. My Mac has been updated to macOS 15 beta 7, and I have not had this issue before.
1
0
217
Aug ’24
How can I completely clear the asset memory?
Background: This is a simple visionOS empty application. After the app launches, the user can enter an ImmersiveSpace by clicking a button. Another button loads a 33.9 MB USDZ model, and a final button exits the ImmersiveSpace. Below is the memory usage for this application:

After the app initializes, the memory usage is 56.8 MB.
After entering the empty ImmersiveSpace, the memory usage increases to 64.1 MB.
After loading a 33.9 MB USDZ model, the memory usage reaches 92.2 MB.
After exiting the ImmersiveSpace, the memory usage only slightly decreases, to 90.4 MB.

Question: While using a memory analysis tool, I noticed that the model's resources are not released after exiting the ImmersiveSpace. How should I address this issue?

struct EmptDemoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appModel)
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
                .onAppear { appModel.immersiveSpaceState = .open }
                .onDisappear { appModel.immersiveSpaceState = .closed }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}

struct ContentView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        HStack {
            VStack {
                ToggleImmersiveSpaceButton()
            }
            if appVM.immersiveSpaceState == .open {
                Button {
                    Task {
                        if let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz"),
                           let model = try? await ModelEntity(contentsOf: url, withName: "Robot") {
                            model.setPosition(.init(x: .random(in: 0...1.0), y: .random(in: 1.0...1.6), z: -1), relativeTo: nil)
                            appVM.root?.add(model)
                            print("Robot: \(Unmanaged.passUnretained(model).toOpaque())")
                        }
                    }
                } label: {
                    Text("Add A Robot")
                }
            }
        }
        .padding()
    }
}

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        RealityView { content in
            appVM.root = content
        }
    }
}

struct ToggleImmersiveSpaceButton: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button {
            Task { @MainActor in
                switch appModel.immersiveSpaceState {
                case .open:
                    appModel.immersiveSpaceState = .inTransition
                    appModel.root = nil
                    await dismissImmersiveSpace()
                case .closed:
                    appModel.immersiveSpaceState = .inTransition
                    switch await openImmersiveSpace(id: appModel.immersiveSpaceID) {
                    case .opened:
                        break
                    case .userCancelled, .error:
                        fallthrough
                    @unknown default:
                        appModel.immersiveSpaceState = .closed
                    }
                case .inTransition:
                    break
                }
            }
        } label: {
            Text(appModel.immersiveSpaceState == .open ? "Hide Immersive Space" : "Show Immersive Space")
        }
        .disabled(appModel.immersiveSpaceState == .inTransition)
        .animation(.none, value: 0)
        .fontWeight(.semibold)
    }
}
2
0
256
Aug ’24
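One thing worth checking in cases like this (a sketch reusing the post's AppModel/root names, not a confirmed fix): any remaining strong reference to the ModelEntity keeps its mesh and texture resources resident, and some remaining footprint can be framework-level caching outside app control. Explicitly emptying the content before dropping the reference makes the intent unambiguous:

```swift
// Sketch: make sure nothing keeps the model alive before dismissing.
@MainActor
func tearDownImmersiveContent(appModel: AppModel) {
    // Remove entities from the scene graph, then drop the strong reference
    // to the RealityViewContent itself; a retained ModelEntity anywhere
    // (view state, closures, the app model) pins its resources in memory.
    appModel.root?.entities.removeAll()
    appModel.root = nil
}
```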
Window to Window container displacement
In Xcode 16 beta 6, we want to start the app with an alert advising the user that they are about to enter an immersive space. To achieve this, I use an empty VStack (let's name it View1) with an alert modifier. Then, in the alert's OK button action, we call openWindow(id: "ContentView"). View1 is in the first WindowGroup in the App file. When pressing OK, the alert and View1 dismiss themselves, then ContentView displays itself shifted vertically towards the top. ContentView is in a secondary WindowGroup. We would expect ContentView to display itself front and center to the user like every other window. What is wrong with my code? Or is there a bug in visionOS? Attached are images of my code and a video illustrating the bad behavior.
5
0
293
Aug ’24
SwiftUI's alert window won't get automatically focus in visionOS
I have three basic elements on this UI page: a View, an Alert, and a Toolbar. I put the Toolbar and Alert on the View; when I click a button on the Toolbar, my alert window shows up. Below is a simplified version of my code:

@State private var showAlert = false

HStack {
    // ...
}
.alert(Text("Quit the game?"), isPresented: $showAlert) {
    MyAlertWindow()
} message: {
    Text("Description text about this alert")
}
.toolbar {
    ToolbarItem(placement: .bottomOrnament) {
        MyToolBarButton(showAlert: $showAlert)
    }
}

In MyToolBarButton I just toggle the bound showAlert variable to open/close the alert window. When running on either the simulator or the device, the behavior is quite strange: when I toggle MyToolBarButton, the alert window takes 2-3 seconds to show up, and all the elements on it are grayed out, as if the whole window has lost focus. I have to drag the window control bar below to bring the window back into focus. And this is not the only issue: MyToolBarButton also cannot be pressed to close the alert window (even though when I click the button on my alert window it closes itself). By the way, I don't know if this matters, but I open the window while my immersive view is open (though I tested that it doesn't affect anything here). Any idea what's going on? Xcode 16.1 / visionOS 2 beta 6
1
0
345
Aug ’24
ModelEntity move duration visionOS 2 issue
The following RealityView ModelEntity animated text works in visionOS 1.0. In visionOS 2.0, when running the same code, the model entity's move duration does not seem to be honored. Are there changes to the way it works that I am missing? Thank you in advance.

RealityView { content in
    let textEntity = generateMovingText()
    content.add(textEntity)
    _ = try? await arkitSession.run([worldTrackingProvider])
} update: { content in
    guard let entity = content.entities.first(where: { $0.name == .textEntityName }) else { return }
    if let pose = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        entity.position = .init(
            x: pose.originFromAnchorTransform.columns.3.x,
            y: pose.originFromAnchorTransform.columns.3.y,
            z: pose.originFromAnchorTransform.columns.3.z
        )
    }
    if let modelEntity = entity as? ModelEntity {
        let rotation = Transform(rotation: simd_quatf(angle: -.pi / 6, axis: [1, 0, 0])) // Adjust angle as needed
        modelEntity.transform = Transform(matrix: rotation.matrix * modelEntity.transform.matrix)
        let animationDuration: Float = 60.0 // Adjust the duration as needed
        let moveUp = Transform(scale: .one, translation: [0, 2, 0])
        modelEntity.move(to: moveUp, relativeTo: modelEntity, duration: TimeInterval(animationDuration), timingFunction: .linear)
    }
}

The source is available at: https://github.com/Sebulec/crawling-text
2
0
295
Aug ’24
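One thing to note about the code above: the update closure rewrites entity.position and modelEntity.transform and re-issues move(to:) every time it runs, which can fight or restart the in-flight animation. A hedged restructure (simplified names, not the repository's code) that starts the move exactly once:

```swift
import SwiftUI
import RealityKit

struct MovingTextView: View {
    @State private var moveStarted = false

    var body: some View {
        RealityView { content in
            let text = ModelEntity(mesh: .generateText("Hello"))
            text.name = "text"
            content.add(text)
        } update: { content in
            guard !moveStarted,
                  let model = content.entities.first(where: { $0.name == "text" }) as? ModelEntity
            else { return }
            // Kick off the animation once; repeatedly rewriting the
            // transform in update can cancel or mask the in-flight move.
            Task { @MainActor in
                moveStarted = true
                let moveUp = Transform(scale: .one, translation: [0, 2, 0])
                model.move(to: moveUp, relativeTo: model,
                           duration: 60, timingFunction: .linear)
            }
        }
    }
}
```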
How to convert FBX to USDZ in-app
Hi everyone, I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated. I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either. Thanks!
0
0
268
Aug ’24
How to update immersiveSpace from another window
Hi, I have two views and an immersive space. The 1st and 2nd views are displayed in a TabView. I open my ImmersiveSpace from a button in the 1st view of the tab. Then, when I go to the 2nd tab, I want to show an attachment in my immersive space. This attachment should be visible in the immersive space only as long as the user is on the 2nd view. This is what I have done so far:

struct Second: View {
    @StateObject var sharedImageData = SharedImageData()

    var body: some View {
        VStack {
            // other code
        }
        .onAppear {
            Task { sharedImageData.shouldCameraButtonShouw = true }
        }
        .onDisappear {
            Task { sharedImageData.shouldCameraButtonShouw = false }
        }
    }
}

This is my immersive space:

struct ImmersiveView: View {
    @EnvironmentObject var sharedImageData: SharedImageData

    var body: some View {
        RealityView { content, attachments in
            // some code
        } update: { content, attachments in
            guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else { return }
            controlCenterentity.addChild(controlCenterAttachmentEntity)
            content.add(controlCenterentity)
        } attachments: {
            if sharedImageData.shouldCameraButtonShouw {
                Attachment(id: Attachments.controlCenter) {
                    ControlCenter()
                }
            }
        }
    }
}

And this is my observable class:

class SharedImageData: ObservableObject {
    @Published var takenImage: UIImage? = nil
    @Published var shouldCameraButtonShouw: Bool = false
}

My problem is that when I am on the Second view, my attachment never appears. The attachment appears without the if condition. How can I achieve my goal?
1
0
317
Aug ’24
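A hedged sketch of an alternative, reusing the post's type names: declare the Attachment unconditionally and toggle the resulting entity's isEnabled flag in update, rather than conditionally omitting the attachment from the builder (which is not guaranteed to re-evaluate just because a published flag flips).

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var sharedImageData: SharedImageData

    var body: some View {
        RealityView { content, attachments in
            if let controlCenter = attachments.entity(for: "controlCenter") {
                content.add(controlCenter)
            }
        } update: { content, attachments in
            // Toggle visibility instead of adding/removing the attachment.
            if let controlCenter = attachments.entity(for: "controlCenter") {
                controlCenter.isEnabled = sharedImageData.shouldCameraButtonShouw
            }
        } attachments: {
            // Always declare the attachment.
            Attachment(id: "controlCenter") {
                ControlCenter()
            }
        }
    }
}
```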
VisionOS App Runs Poorly And Crashes First Time It Launches
Here's a video clearly demonstrating the problem: https://youtu.be/-IbyaaIzh0I This is a major issue for my game, because it's designed to be played only once, so it really ruins the experience if it runs poorly until someone force quits or the game crashes. Does anyone have a solution, or has anyone else encountered this issue of poor initial launch performance? I made this game in Unity, and I'm not sure if this is an Apple issue or a Unity issue.
0
1
251
Aug ’24
Location of tap gesture in VisionOS
Is it possible to get the location where a user performs a tap gesture on the screen, like an x/y coordinate that can be used within my app? I know there is SpatialTapGesture, but from what I can tell it is only linked to content entities like a cube. Does this mean I can only get x/y coordinate data by opening an immersive space and using the tap gesture in relation to some entity? TL;DR: Can I get the location of a tap gesture in visionOS in a regular app, without opening an immersive space?
0
0
193
Aug ’24
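For reference, SwiftUI's SpatialTapGesture also works on plain views in a regular window, where its value exposes a 2D location in the chosen coordinate space; no immersive space or entity targeting is needed. A minimal sketch:

```swift
import SwiftUI

struct TapLocationView: View {
    @State private var lastTap: CGPoint = .zero

    var body: some View {
        Rectangle()
            .fill(.blue.opacity(0.3))
            .frame(width: 400, height: 300)
            .gesture(
                SpatialTapGesture(coordinateSpace: .local)
                    .onEnded { value in
                        // value.location is the 2D tap point in the view's
                        // own coordinate space.
                        lastTap = value.location
                    }
            )
            .overlay(alignment: .bottom) {
                Text("Tapped at \(Int(lastTap.x)), \(Int(lastTap.y))")
            }
    }
}
```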