Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.


To use ARSCNView to capture a 3D model of a scene and obtain the mesh information, how can I retrieve the texture information for the mesh?
```swift
arScnView = ARSCNView(frame: CGRect.zero, options: nil)
arScnView.delegate = self
arScnView.automaticallyUpdatesLighting = true
arScnView.allowsCameraControl = true
addSubview(arScnView)

arSession = arScnView.session
arSession.delegate = self

config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .meshWithClassification
config.environmentTexturing = .automatic

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.forEach { anchor in
        if let meshAnchor = anchor as? ARMeshAnchor {
            let node = meshAnchor.toSCNNode()
            self.arScnView.scene.rootNode.addChildNode(node)
        }
        if let environmentProbeAnchor = anchor as? AREnvironmentProbeAnchor {
            // Can I retrieve the texture map corresponding to the ARMeshAnchor from the AREnvironmentProbeAnchor?
            // Or how else can I retrieve the texture map corresponding to the ARMeshAnchor?
        }
    }
}
```
How can I scan a 3D scene and save it as USDZ? That is the scenario I want to achieve.
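For what it's worth, ARMeshAnchor only exposes geometry (vertices, normals, faces, and classification); ARKit does not hand back a baked texture map for the reconstructed mesh, so color information has to come from somewhere else, for example by projecting captured camera frames onto the mesh yourself. For the "save as USDZ" part, one route is SceneKit's scene export. A minimal sketch, assuming the mesh nodes built from the ARMeshAnchors have already been added to `arScnView.scene`:

```swift
import ARKit
import SceneKit

// Sketch: write whatever is currently in the ARSCNView's scene out as a USDZ file.
// The exported mesh will be untextured unless materials have been assigned to the nodes.
func exportScannedScene(from arScnView: ARSCNView) -> URL? {
    let exportURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("ScannedScene.usdz")
    // SCNScene.write(to:) picks the output format from the file extension,
    // so a ".usdz" destination produces a USDZ archive.
    let didWrite = arScnView.scene.write(to: exportURL,
                                         options: nil,
                                         delegate: nil,
                                         progressHandler: nil)
    return didWrite ? exportURL : nil
}
```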
0 replies · 0 boosts · 266 views · Aug ’24

Having trouble loading audio file resources from an RCP bundle
RealityKitContent bundle resource issue

Recently I keep hitting weird loading bugs with the RealityKitContent bundle. I am trying to load an audio resource as an AudioFileResource or AudioFileGroupResource from a *.usda in the RealityKitContent bundle, with this method. My code is nothing complicated, simply:

```swift
let primPath: String = "/SampleAudios/SE_bounce_audio"
guard let resource = try? AudioFileGroupResource.load(named: primPath, from: "MyScene.usda", in: realityKitContentBundle) else {
    return
}
```

At runtime the program "sometimes" (whenever I change something in RCP it sometimes works again, but the behavior is unpredictable) reports that it "Cannot find MyScene.usda:/SampleAudios/SE_bounce_audio in RealityKitContent.bundle". I put MyScene.usda in the root folder of the RealityKitContent package because I found that RealityKit simply cannot find any *.usda scene unless it sits at the root level (which could be a bug in the way it indexes its files). I even double-checked my .usda file with usdview, and the primPath is absolutely correct. I think there are some unknown issues in how RealityKitContent copies resources and builds the package. I tried playing with the Package.swift file a bit to see if I could manually copy my resources (everything) and let the package carry them, but it just didn't work. So right now I keep the file untouched, as below (I only upgraded the swift-tools-version to 6.0, since only that supports .visionOS(.v2)):

```swift
// swift-tools-version:6.0
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)
    ],
    products: [
        // Products define the executables and libraries a package produces, and make them visible to other packages.
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package. A target can define a module or a test suite.
        // Targets can depend on other targets in this package, and on products in packages this package depends on.
        .target(
            name: "RealityKitContent"
        ),
    ]
)
```

That is just issue one, the RealityKitContent package build issue.

Audio file format issue

The other issue is about the audio file formats RCP supports. I remember somewhere (WWDC?) saying that .wav and .mp4 are supported as audio sources. But when I try to set up Spatial Audio, I find that sometimes *.wav or *.mp3 files can also be imported as an AudioSourceFile, and the behavior is unpredictable. With two *.wav files, SE_ball_hit_01.wav and SE_ball_hit_02.wav, only SE_ball_hit_01.wav is supported; 02 is reported as an unsupported format. Check out my screenshots to see the details of the two files; they have almost the same format (same sample rate and channel count). I understand there might be different requirements for a source file to be used as Spatial or Ambient audio, but I haven't figured them out, and I can't find anything helpful in the Apple documentation. So what are the rules?

Thanks for reading, and any thoughts are welcome.
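As a possible workaround while the bundle behavior is unpredictable (not an explanation of the RCP issue itself): an audio file that is also added to the app target can be loaded straight from a URL instead of going through the scene's prim path. A sketch, assuming SE_bounce_audio.wav is bundled with the app; the exact loading call may differ by SDK version:

```swift
import RealityKit

// Sketch of loading the audio directly from the main bundle rather than from
// MyScene.usda inside RealityKitContent (a workaround, not a fix for the RCP issue).
func loadBounceAudio() -> AudioFileResource? {
    guard let url = Bundle.main.url(forResource: "SE_bounce_audio", withExtension: "wav") else {
        return nil
    }
    // On newer SDKs this may be deprecated in favor of an async AudioFileResource initializer.
    return try? AudioFileResource.load(contentsOf: url)
}
```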
1 reply · 0 boosts · 322 views · Aug ’24

Export USDZ With Unlit Shader From Maya
Hello! I have the great fortune to be working on a joint Apple/(unnamed brand) app. I am teamed up with a photogrammetry vendor and will be assembling a scene in Maya, using a UDIM workflow with retouched assets from Substance Painter. This will be for an immersive environment, similar to the Joshua Tree and Hawaiian environments on AVP, which are great.

We did a first pass, bringing USDZ files into Reality Composer Pro and sending them to the headset - very cool. However, the models have a specular component that is washing everything out. After further research, I learned that I should not be using a physically based shader but an unlit shader instead. I had to create this manually in RCP, and then it viewed properly. The issue is, every time I need to add a new asset, will I now have to manually create numerous custom shaders?

Ideally, I want to either figure out how to specify an unlit shader in Substance Painter so it exports properly, or get it to export correctly from Maya. I tried using a Maya SurfaceShader - which is essentially unlit - but that does not export properly. I found that a Maya StandardSurface does export properly, however it still imports as a physically based shader. The conundrum is that I'll be working with a UDIM workflow and would like to avoid manually creating and hooking up what could be 10-40 textures per USDZ file.

I guess what I'm trying to ask is: what is the preferred shader to use in Maya when exporting to USDZ so that it imports as unlit? Or is there a way to easily switch from physically based to unlit inside RCP? And NOT by doing it in Xcode/Swift, because the files I need to deliver must be USDZ that the developer will assemble themselves.

I'm using Maya because I need to work on my PC for the heavy GPU lifting, and also, it's just easier to assemble everything there vs. Reality Composer Pro, which is like the iMovie of game engines. ;) (Please take that constructively; it really needs to be more industry standard.) I just need to make sure I can use Maya with the proper shaders to export my pieces, quickly send them to RCP (with the proper unlit specification), then over to the headset so I can check everything in real time with my photogrammetry vendor.

Any help or advice is greatly appreciated! I'm really excited to be working on my first Vision Pro app. Thx!
4 replies · 0 boosts · 376 views · Aug ’24

Update, Content, and Attachments on Vision Pro
I have a scene setup that places images on planes to mimic an RPG-style character interaction. There's a large scene background image and a smaller character image in the foreground. Both are added as content to a RealityView. There's one attachment that is a dialogue window for interaction with the character, and it is attached to the character image. When the scene changes, I need the images and the dialogue window to refresh. My current approach has been to remove everything from the scene and add the new content in the update closure.

```swift
@EnvironmentObject var narrativeModel: NarrativeModel
@EnvironmentObject var dialogueModel: DialogueViewModel
@State private var sceneChange = false

private let dialogueViewID = "dialogue"

var body: some View {
    RealityView { content, attachments in
        // At start, generate the background image only and no characters.
        if narrativeModel.currentSceneIndex == -1 {
            content.add(generateBackground(image: narrativeModel.backgroundImage!))
        }
    } update: { content, attachments in
        print("update called")
        if narrativeModel.currentSceneIndex != -1 {
            print("sceneChange: \(sceneChange)")
            if sceneChange {
                // Remove old entities.
                if narrativeModel.currentSceneIndex != 0 {
                    content.remove(attachments.entity(for: dialogueViewID)!)
                }
                content.entities.removeAll()

                // Generate the background image for the scene.
                content.add(generateBackground(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].backgroundImage))

                // Generate the characters for the scene.
                let character = generateCharacter(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].characterImage)
                content.add(character)
                print(content)

                if let character_attachment = attachments.entity(for: "dialogue") {
                    print("attachment clause executes")
                    character_attachment.position = [0.45, 0, 0]
                    character.addChild(character_attachment)
                }
            }
        }
    } attachments: {
        Attachment(id: dialogueViewID) {
            DialogueView()
                .environmentObject(dialogueModel)
                .frame(width: 400, height: 600)
                .glassBackgroundEffect()
        }
    }
    // Load scene images.
    .onChange(of: narrativeModel.currentSceneIndex) {
        print("SceneView onChange called")
        DispatchQueue.main.async {
            self.sceneChange = true
        }
        print("SceneView onChange toggle - sceneChange = \(sceneChange)")
    }
}
```

If I don't use the dialogue window, this all works just fine. If I do, when I click the next button (in another view), which increments the current scene index, I enter some kind of loop where the sceneChange value gets toggled to true but never gets toggled back to false (even though it's changed in the update closure). The reason I have the sceneChange value is that I need to update the content and attachments whenever the scene index changes, and I need a state variable to trigger the update closure to do this.

My questions are:

Why might I be entering this loop?
Why does it only happen if I send a message in the dialogue view attachment, which is a whole separate view?
Is there a better way to be doing this?
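One pattern that avoids the extra @State flag (and the feedback it can cause, since mutating @State from inside the update closure schedules yet another update) is to derive "did the scene change?" from the content itself, for example by naming the background entity after the scene index. A sketch of just the update closure under that assumption, reusing the model and helper names from the post:

```swift
update: { content, attachments in
    let index = narrativeModel.currentSceneIndex
    guard index != -1 else { return }

    // Rebuild only if the content does not already represent this scene.
    let expectedName = "scene-\(index)"
    guard content.entities.first(where: { $0.name == expectedName }) == nil else { return }

    content.entities.removeAll()

    let background = generateBackground(image: narrativeModel.scenes[index].backgroundImage)
    background.name = expectedName
    content.add(background)

    let character = generateCharacter(image: narrativeModel.scenes[index].characterImage)
    content.add(character)

    if let dialogue = attachments.entity(for: dialogueViewID) {
        dialogue.position = [0.45, 0, 0]
        character.addChild(dialogue)
    }
}
```

With this shape, the .onChange handler and the sceneChange state are no longer needed; the update closure re-runs whenever the observed model changes and is otherwise a no-op.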
7 replies · 0 boosts · 418 views · Aug ’24

Object Tracking (moving objects)
From my early testing, it seems like object tracking works best for static objects. For example, if I am holding something in my hand, the object tracker is slow to update. Is there anything that can be modified to decrease the tracking latency? I noticed that the Enterprise APIs have some override features. Is this something that can only be done using the Enterprise APIs?
1 reply · 0 boosts · 325 views · Aug ’24

Composing Interactive 3D Content example build failure
Hello, I downloaded the most recent Xcode 16.0 beta 6 along with the example project located here. Currently I am experiencing the following build failures:

```
RealityAssetsCompile ...
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
```

I saw that a similar issue has been reported. As a test, I downloaded that project and it compiled as expected.
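The diagnostics themselves point at a possible fix: if the sample's RealityKitContent package still declares visionOS 1.0 as its minimum platform, components such as BlendShapeWeights, EnvironmentLightingConfiguration, and AudioLibrary will be rejected for "xros 1.0". A sketch of such a Package.swift with the platform raised, modeled on the one shown in an earlier post on this page (whether this matches the sample's actual package layout is an assumption):

```swift
// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // Raising the deployment target is what the compatibility errors ask for:
        // the listed components are not available on "xros 1.0".
        .visionOS(.v2)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
```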
2 replies · 0 boosts · 280 views · Aug ’24

How to use SpatialTapGesture to pin a SwiftUI view to an entity
My goal is to pin an attachment view precisely at the point where I tap on an entity using SpatialTapGesture. However, the current code doesn't pin the attachment view accurately to the tapped point. Instead, it often appears in space rather than on the entity itself. The issue might be an incorrect conversion of coordinates or values.

My code:

```swift
struct ImmersiveView: View {
    @State private var location: GlobeLocation?

    var body: some View {
        RealityView { content, attachments in
            guard let rootEntity = try? await Entity(named: "Scene", in: realityKitContentBundle) else {
                return
            }
            content.add(rootEntity)
        } update: { content, attachments in
            if let earth = content.entities.first?.findEntity(named: "Earth"),
               let desView = attachments.entity(for: "1") {
                let pinTransform = computeTransform(for: location ?? GlobeLocation(latitude: 0, longitude: 0))
                earth.addChild(desView)
                // desView.transform =
                desView.setPosition(pinTransform, relativeTo: earth)
            }
        } attachments: {
            Attachment(id: "1") {
                DescriptionView(location: location)
            }
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ value in
            value.entity.position = value.convert(value.location3D, from: .local, to: .scene)
        }))
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ value in

        }))
    }

    func lookUpLocation(at value: CGPoint) -> GlobeLocation? {
        return GlobeLocation(latitude: value.x, longitude: value.y)
    }

    func computeTransform(for location: GlobeLocation) -> SIMD3<Float> {
        // Constants for Earth's radius. Adjust this to match the scale of your 3D model.
        let earthRadius: Float = 1.0

        // Convert latitude and longitude from degrees to radians.
        let latitude = Float(location.latitude) * .pi / 180
        let longitude = Float(location.longitude) * .pi / 180

        // Calculate the position in Cartesian coordinates.
        let x = earthRadius * cos(latitude) * cos(longitude)
        let y = earthRadius * sin(latitude)
        let z = earthRadius * cos(latitude) * sin(longitude)
        return SIMD3<Float>(x, y, z)
    }
}

struct GlobeLocation {
    var latitude: Double
    var longitude: Double
}
```
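One way to fill in the empty onEnded handler is to convert the tap location into world space (the same convert call the drag gesture already uses) and then into the tapped entity's local space, and stash that value for the update closure to use when positioning the attachment. A sketch, where tappedLocalPosition is an assumed extra @State property of type SIMD3<Float>?:

```swift
// Assumed additional state on ImmersiveView:
// @State private var tappedLocalPosition: SIMD3<Float>? = nil

.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    // Tap position in the scene's world space.
    let worldPosition = value.convert(value.location3D, from: .local, to: .scene)
    // Express it relative to the tapped entity (e.g. the Earth model) so an
    // attachment added as a child can be placed exactly there.
    tappedLocalPosition = value.entity.convert(position: worldPosition, from: nil)
})
```

In the update closure, `desView.setPosition(tappedLocalPosition ?? .zero, relativeTo: earth)` would then place the attachment at the tapped point instead of at a transform derived from latitude/longitude.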
7 replies · 0 boosts · 318 views · Aug ’24

visionOS SharePlay - Persona Preview Profile
Hey, I was reading through the Happy Beam intro website (https://developer.apple.com/documentation/visionos/happybeam) and I stumbled upon the info about the Persona Preview Profile, which is supposed to help with testing SharePlay on the device. However, the link from the website points to a 404, and I was curious if anyone knows what the Persona Preview Profile is and how exactly it can help with testing SharePlay. Where can I find more info about it?
1 reply · 0 boosts · 257 views · Aug ’24

Add basic story ceilings to RoomPlan model
The structure builder provides walls and floors for each captured story, but not a ceiling. For my use case it is necessary that the scanned geometry is closed, to open up the possibility of placing objects on the ceiling, for example, and therefore it is important that there is an estimated ceiling for the different rooms within a story. Is there any info on whether Apple has something like this on the roadmap? I think it could open up opportunities, especially for industrial applications of the API. If somebody has more insights on this topic, please share. :)
1 reply · 0 boosts · 339 views · Aug ’24

Reference error for visionOS
When I try to import CompositorServices, I get the following error:

```
dyld[596]: Symbol not found: _$sSo13cp_drawable_tV18CompositorServicesE17computeProjection37normalizedDeviceCoordinatesConvention9viewIndexSo13simd_float4x4aSo0A26_axis_direction_conventionV_SitF
  Referenced from: /private/var/containers/Bundle/Application/33008953-150D-4888-9860-28F41E916655/VolumeRenderingVision.app/VolumeRenderingVision.debug.dylib
  Expected in: <968F7985-72C8-30D7-866C-AD8A1B8E7EE6> /System/Library/Frameworks/CompositorServices.framework/CompositorServices
```

The app wrongly refers to my Mac's local directory. However, I chose Vision Pro as the run destination. My Mac has been updated to macOS 15 beta 7, and I have not had this issue before.
1 reply · 0 boosts · 230 views · Aug ’24

How can I completely clear the asset memory?
Background: This is a simple visionOS empty application. After the app launches, the user can enter an ImmersiveSpace by clicking a button. Another button loads a 33.9 MB USDZ model, and a final button exits the ImmersiveSpace.

Below is the memory usage scenario for this application:

After the app initializes, the memory usage is 56.8 MB.
After entering the empty ImmersiveSpace, the memory usage increases to 64.1 MB.
After loading a 33.9 MB USDZ model, the memory usage reaches 92.2 MB.
After exiting the ImmersiveSpace, the memory usage slightly decreases to 90.4 MB.

Question: While using a memory analysis tool, I noticed that the model's resources are not released after exiting the ImmersiveSpace. How should I address this issue?

```swift
struct EmptDemoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appModel)
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
                .onAppear {
                    appModel.immersiveSpaceState = .open
                }
                .onDisappear {
                    appModel.immersiveSpaceState = .closed
                }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}

struct ContentView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        HStack {
            VStack {
                ToggleImmersiveSpaceButton()
            }
            if appVM.immersiveSpaceState == .open {
                Button {
                    Task {
                        if let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz") {
                            if let model = try? await ModelEntity(contentsOf: url, withName: "Robot") {
                                model.setPosition(.init(x: .random(in: 0...1.0), y: .random(in: 1.0...1.6), z: -1), relativeTo: nil)
                                appVM.root?.add(model)
                                print("Robot: \(Unmanaged.passUnretained(model).toOpaque())")
                            }
                        }
                    }
                } label: {
                    Text("Add A Robot")
                }
            }
        }
        .padding()
    }
}

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        RealityView { content in
            appVM.root = content
        }
    }
}

struct ToggleImmersiveSpaceButton: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button {
            Task { @MainActor in
                switch appModel.immersiveSpaceState {
                case .open:
                    appModel.immersiveSpaceState = .inTransition
                    appModel.root = nil
                    await dismissImmersiveSpace()

                case .closed:
                    appModel.immersiveSpaceState = .inTransition
                    switch await openImmersiveSpace(id: appModel.immersiveSpaceID) {
                    case .opened:
                        break
                    case .userCancelled, .error:
                        fallthrough
                    @unknown default:
                        appModel.immersiveSpaceState = .closed
                    }

                case .inTransition:
                    break
                }
            }
        } label: {
            Text(appModel.immersiveSpaceState == .open ? "Hide Immersive Space" : "Show Immersive Space")
        }
        .disabled(appModel.immersiveSpaceState == .inTransition)
        .animation(.none, value: 0)
        .fontWeight(.semibold)
    }
}
```
2 replies · 0 boosts · 266 views · Aug ’24

Window to Window container displacement
In Xcode 16 beta 6, we want to start the app with an alert advising the user that they are about to enter an immersive space. To achieve this, I use an empty VStack (let's name it View1) with an alert modifier. Then, in the alert's OK button action, we call openWindow(id: "ContentView"). View1 is in the first WindowGroup in the App file. When pressing OK, the alert and View1 dismiss themselves, and then ContentView displays itself shifted vertically towards the top. ContentView is in a secondary WindowGroup. We would expect ContentView to display itself front and center to the user like every other window. What is wrong with my code? Or is there a bug in visionOS? Attached are images of my code and a video illustrating the bad behavior.
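Since the attached screenshots don't come through in this listing, here is a minimal reconstruction of the setup as described (view and type names are assumptions), which may help others reproduce the vertical offset:

```swift
import SwiftUI

// Minimal reconstruction of the described setup; names are assumptions.
@main
struct DisplacementDemoApp: App {
    var body: some Scene {
        // First window group: an empty view that only hosts the alert ("View1").
        WindowGroup {
            LaunchAlertView()
        }

        // Secondary window group, opened from the alert's OK action.
        WindowGroup(id: "ContentView") {
            ContentView()
        }
    }
}

struct LaunchAlertView: View {
    @Environment(\.openWindow) private var openWindow
    @State private var showAlert = true

    var body: some View {
        VStack {}
            .alert("You are about to enter an immersive space.", isPresented: $showAlert) {
                Button("OK") {
                    openWindow(id: "ContentView")
                }
            }
    }
}

// Stand-in for the real ContentView from the post.
struct ContentView: View {
    var body: some View {
        Text("Content")
    }
}
```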
5 replies · 0 boosts · 307 views · Aug ’24

SwiftUI's alert window won't automatically get focus in visionOS
I have three basic elements on this UI page: a View, an Alert, and a Toolbar. I put the Toolbar and Alert on the View; when I click a button on the Toolbar, my alert window shows up. Below is a simplified version of my code:

```swift
@State private var showAlert = false

HStack {
    // ...
}
.alert(Text("Quit the game?"), isPresented: $showAlert) {
    MyAlertWindow()
} message: {
    Text("Description text about this alert")
}
.toolbar {
    ToolbarItem(placement: .bottomOrnament) {
        MyToolBarButton(showAlert: $showAlert)
    }
}
```

In MyToolBarButton I just toggle the bound showAlert variable to open/close the alert window.

When running on either the simulator or the device, the behavior is quite strange. When toggling MyToolBarButton, the alert window takes 2-3 seconds to show up, and all the elements on the alert window are grayed out, behaving as if the whole window has lost focus. I have to click the window control bar below (via the drag gesture) to bring the whole window back into focus. And this is not the only issue: I also find that MyToolBarButton cannot be pressed to close the alert window (even though when I click the button on my alert window it closes itself).

Oh, by the way, I don't know if this matters, but I open the window with my immersive view open (though I tested it, and it doesn't seem to affect anything here).

Any idea of what's going on here?

Xcode 16.1 / visionOS 2 beta 6
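One thing that might be worth ruling out: the actions closure of .alert is meant to contain buttons (SwiftUI renders them as the alert's actions), so routing a custom MyAlertWindow view through that slot may not behave like a normal alert. A sketch of the conventional shape, keeping the title and message from the post; the button labels are assumptions:

```swift
.alert(Text("Quit the game?"), isPresented: $showAlert) {
    // Standard alert actions; roles control placement and styling.
    Button("Quit", role: .destructive) {
        // Handle quitting the game here.
    }
    Button("Cancel", role: .cancel) { }
} message: {
    Text("Description text about this alert")
}
```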
1 reply · 0 boosts · 360 views · Aug ’24

ModelEntity move duration visionOS 2 issue
The following RealityView ModelEntity animated text works in visionOS 1.0. In visionOS 2.0, when running the same piece of code, the model entity's move duration does not seem to be honored. Are there changes to the way it works that I am missing? Thank you in advance.

```swift
RealityView { content in
    let textEntity = generateMovingText()
    content.add(textEntity)

    _ = try? await arkitSession.run([worldTrackingProvider])
} update: { content in
    guard let entity = content.entities.first(where: { $0.name == .textEntityName }) else {
        return
    }

    if let pose = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        entity.position = .init(
            x: pose.originFromAnchorTransform.columns.3.x,
            y: pose.originFromAnchorTransform.columns.3.y,
            z: pose.originFromAnchorTransform.columns.3.z
        )
    }

    if let modelEntity = entity as? ModelEntity {
        let rotation = Transform(rotation: simd_quatf(angle: -.pi / 6, axis: [1, 0, 0])) // Adjust angle as needed
        modelEntity.transform = Transform(matrix: rotation.matrix * modelEntity.transform.matrix)

        let animationDuration: Float = 60.0 // Adjust the duration as needed
        let moveUp = Transform(scale: .one, translation: [0, 2, 0])
        modelEntity.move(to: moveUp, relativeTo: modelEntity, duration: TimeInterval(animationDuration), timingFunction: .linear)
    }
}
```

The source is available at the following: https://github.com/Sebulec/crawling-text
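One thing worth ruling out before assuming an OS behavior change: the update closure can run many times, and this code re-applies the rotation and restarts move(to:relativeTo:) on every pass, while also writing entity.position directly from the device anchor, which competes with the running animation. A sketch that kicks the animation off only once, using an assumed marker component (MovementStartedComponent is not part of the original project):

```swift
// Marker type so the animation setup runs only once.
// (You may need to call MovementStartedComponent.registerComponent() once at startup.)
struct MovementStartedComponent: Component {}

// Inside the update closure, replacing the unconditional block above:
if let modelEntity = entity as? ModelEntity,
   modelEntity.components[MovementStartedComponent.self] == nil {
    modelEntity.components.set(MovementStartedComponent())

    let rotation = Transform(rotation: simd_quatf(angle: -.pi / 6, axis: [1, 0, 0]))
    modelEntity.transform = Transform(matrix: rotation.matrix * modelEntity.transform.matrix)

    let moveUp = Transform(scale: .one, translation: [0, 2, 0])
    modelEntity.move(to: moveUp, relativeTo: modelEntity, duration: 60.0, timingFunction: .linear)
}
```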
2 replies · 0 boosts · 300 views · Aug ’24

How to convert FBX to USDZ in-app
Hi everyone, I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated. I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either. Thanks!
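For what it's worth, FBX is a proprietary Autodesk format, and as far as I know Model I/O does not list it among its importable types, so an in-app conversion would likely need a third-party FBX parser on top of whatever writes the USDZ. A quick diagnostic sketch to check what the SDK you're building against can actually import and export:

```swift
import ModelIO

// Ask Model I/O at runtime which formats it can handle; "fbx" is expected to come
// back unsupported, and USD-family export support varies by OS version.
let formats = ["fbx", "obj", "usd", "usdz"]
for ext in formats {
    print("\(ext): import=\(MDLAsset.canImportFileExtension(ext)), export=\(MDLAsset.canExportFileExtension(ext))")
}
```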
0 replies · 0 boosts · 278 views · Aug ’24

How to update an ImmersiveSpace from another window
Hi, I have 2 views and an immersive space. The 1st and 2nd views are displayed in a TabView. I open my ImmersiveSpace from a button in the 1st view of the tab. Then, when I go to the 2nd tab view, I want to show an attachment in my immersive space. This attachment should be visible in the immersive space only as long as the user is on the 2nd view. This is what I have done so far:

```swift
struct Second: View {
    @StateObject var sharedImageData = SharedImageData()

    var body: some View {
        VStack {
            // other code
        }
        .onAppear() {
            Task {
                sharedImageData.shouldCameraButtonShouw = true
            }
        }
        .onDisappear() {
            Task {
                sharedImageData.shouldCameraButtonShouw = false
            }
        }
    }
}
```

This is my immersive space:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var sharedImageData: SharedImageData

    var body: some View {
        RealityView { content, attachments in
            // some code
        } update: { content, attachments in
            guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else {
                return
            }
            controlCenterentity.addChild(controlCenterAttachmentEntity)
            content.add(controlCenterentity)
        } attachments: {
            if sharedImageData.shouldCameraButtonShouw {
                Attachment(id: Attachments.controlCenter) {
                    ControlCenter()
                }
            }
        }
    }
}
```

And this is my observable class:

```swift
class SharedImageData: ObservableObject {
    @Published var takenImage: UIImage? = nil
    @Published var shouldCameraButtonShouw: Bool = false
}
```

My problem is that when I am on the Second view, my attachment never appears. The attachment appears without this if condition. How can I achieve my goal?
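Two things may be worth checking here. First, Second creates its own model via @StateObject while ImmersiveView reads a SharedImageData from the environment; unless the same instance is injected into both scenes, those are two different objects and the flag toggles never reach the immersive space. Second, rather than conditionally declaring the Attachment, it can stay in the builder unconditionally and be shown or hidden in update. A sketch of that second idea, reusing the names from the post:

```swift
// Sketch: keep the attachment declared unconditionally and toggle its
// visibility via isEnabled, so the entity is never torn down and rebuilt.
RealityView { content, attachments in
    // some code
} update: { content, attachments in
    guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else {
        return
    }
    if controlCenterAttachmentEntity.parent == nil {
        controlCenterentity.addChild(controlCenterAttachmentEntity)
        content.add(controlCenterentity)
    }
    // Reading the published flag here also makes the update closure re-run when it changes.
    controlCenterAttachmentEntity.isEnabled = sharedImageData.shouldCameraButtonShouw
} attachments: {
    Attachment(id: Attachments.controlCenter) {
        ControlCenter()
    }
}
```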
1 reply · 0 boosts · 331 views · Aug ’24