Discuss Spatial Computing on Apple Platforms.


How to control the position of windows and volumes in immersive space
My app has a window and a volume. I am trying to display the volume on the right side of the window. I know .defaultWindowPlacement can achieve that, but I want more control over the exact position of my volume in relation to my window. I need the volume to move as I move the window so that it always stays in the same position relative to the window. I think I need a way to track the positions of both the window and the volume. If this can be achieved without immersive space, it would be great. If not, how do I do that in immersive space?

Current code:

import SwiftUI

@main
struct tiktokForSpacialModelingApp: App {
    @State private var appModel: AppModel = AppModel()

    var body: some Scene {
        WindowGroup(id: appModel.launchWindowID) {
            LaunchWindow()
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.mainViewWindowID) {
            MainView()
                .frame(minWidth: 500, maxWidth: 600, minHeight: 1200, maxHeight: 1440)
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.postVolumeID) {
            let initialSize = Size3D(width: 900, height: 500, depth: 900)
            PostVolume()
                .frame(minWidth: initialSize.width, maxWidth: initialSize.width * 4,
                       minHeight: initialSize.height, maxHeight: initialSize.height * 4)
                .frame(minDepth: initialSize.depth, maxDepth: initialSize.depth * 4)
        }
        .windowStyle(.volumetric)
        .windowResizability(.contentSize)
        .defaultWindowPlacement { content, context in
            // Get the WindowProxy from the context based on its id.
            if let mainViewWindow = context.windows.first(where: { $0.id == appModel.mainViewWindowID }) {
                return WindowPlacement(.trailing(mainViewWindow))
            } else {
                return WindowPlacement()
            }
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .onAppear { appModel.immersiveSpaceState = .open }
                .onDisappear { appModel.immersiveSpaceState = .closed }
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
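One way the immersive-space route could start, sketched under assumptions: the button lives somewhere inside MainView, AppModel is the @Observable model already injected with .environment(appModel), and appModel.immersiveSpaceID is the id declared above; openImmersiveSpace is the standard SwiftUI environment action. Inside an immersive space the app positions RealityKit entities itself, so keeping a "volume" glued to other content means updating entity transforms rather than relying on system window placement.

import SwiftUI

struct OpenImmersiveButton: View {
    // Hypothetical helper view; only the id comes from the code above.
    @Environment(AppModel.self) private var appModel
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                let result = await openImmersiveSpace(id: appModel.immersiveSpaceID)
                // The action can fail (for example if another immersive space is open),
                // so check the result before assuming the space is showing.
                if case .error = result {
                    print("Failed to open immersive space")
                }
            }
        }
    }
}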
Replies: 1 · Boosts: 0 · Views: 281 · Jul ’24
Content inside volume gets clipped
I am using the Xcode visionOS debugging tool to visualize the bounds of all the containers, but it shows my Entity is inside the Volume. Then why does it get clipped? Is there something wrong with the debugger, or am I missing something?

import SwiftUI

@main
struct RealityViewAttachmentApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
        .defaultSize(Size3D(width: 1, height: 1, depth: 1), in: .meters)
    }
}

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let earth = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(earth)
                if let earthAttachment = attachments.entity(for: "earth_label") {
                    earthAttachment.position = [0, -0.15, 0]
                    earth.addChild(earthAttachment)
                }
                if let textAttachment = attachments.entity(for: "text_label") {
                    textAttachment.position = [-0.5, 0, 0]
                    earth.addChild(textAttachment)
                }
            }
        } attachments: {
            Attachment(id: "earth_label") {
                Text("Earth")
            }
            Attachment(id: "text_label") {
                VStack {
                    Text("This is just an example")
                        .font(.title)
                        .padding(.bottom, 20)
                    Text("This is just some random content")
                        .font(.caption)
                }
                .frame(minWidth: 100, maxWidth: 300, minHeight: 100, maxHeight: 300)
                .glassBackgroundEffect()
            }
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 243 · Jul ’24
Understanding the Topology of LowLevelMesh in RealityKitDrawingApp Sample Code
I have recently developed an interest in the shader effects commonly found in Apple's UI and have been studying them. Additionally, since I own a Vision Pro, I have a strong desire to understand LowLevelMesh and am currently analyzing the sample code after watching the related session. The part I am completely stuck on and unable to understand is the initializer of CurveExtruder:

/// Initializes the `CurveExtruder` with the shape to sweep along the curve.
///
/// - Parameters:
///   - shape: The 2D shape to sweep along the curve.
init(shape: [SIMD2<Float>]) {
    self.shape = shape

    // Compute topology
    //
    // Triangle fan lists each vertex in `shape` once for each ring, except for vertex `0` of `shape` which
    // is listed twice. Plus one extra index for the end-index (0xFFFFFFFF).
    let indexCountPerFan = 2 * (shape.count + 1) + 1

    var topology: [UInt32] = []
    topology.reserveCapacity(indexCountPerFan)

    // Build triangle fan.
    for vertexIndex in shape.indices.reversed() {
        topology.append(UInt32(vertexIndex))
        topology.append(UInt32(shape.count + vertexIndex))
    }

    // Wrap around to the first vertex.
    topology.append(UInt32(shape.count - 1))
    topology.append(UInt32(2 * shape.count - 1))

    // Add end-index.
    topology.append(UInt32.max)

    assert(topology.count == indexCountPerFan)

I have tried to understand why the capacity reserved for the topology array is 2 * (shape.count + 1) + 1, but I am struggling to figure it out. I also do not understand the principle behind the order in which vertexIndex is added to the topology. The confusion is even greater because, while the comment mentions a triangle fan, the actual creation of the LowLevelMesh.Part object uses the topology: .triangleStrip argument. (Did I misunderstand? I know that the topology options include triangle, but this uses duplicated vertices.) I am feeling very stuck. It's hard to find answers even through search or LLMs. Maybe this requires specialized knowledge of computer graphics, which makes me feel embarrassed to ask. However, I have personally tried various directions without external help but still cannot find a clear path, so I am desperately seeking assistance!

P.S. As Korean is my primary language, I apologize in advance if there are any awkward or rude expressions.
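A worked check of that count, based only on my reading of the quoted code (not on the rest of the sample): the loop appends two indices per entry of shape (2 * shape.count), the wrap-around adds two more, and the end-index sentinel adds one, which gives 2 * shape.count + 2 + 1 = 2 * (shape.count + 1) + 1. The order alternates between a vertex on one ring (indices 0..<shape.count) and the matching vertex on the next ring (indices shape.count..<2*shape.count), which is the vertex order a triangle strip consumes, so the "fan" wording in the comment seems to be a leftover while .triangleStrip matches what the indices actually describe. A tiny stand-alone snippet that mirrors the loop for a 4-vertex cross-section:

// Hypothetical check, not from the sample project: replicate the index
// pattern for shape.count == 4 and print the resulting strip order.
let shapeCount = 4
var topology: [UInt32] = []

for vertexIndex in (0..<shapeCount).reversed() {
    topology.append(UInt32(vertexIndex))               // vertex on ring 0
    topology.append(UInt32(shapeCount + vertexIndex))  // matching vertex on ring 1
}
// Repeat the first pair so the strip wraps around and closes the tube.
topology.append(UInt32(shapeCount - 1))
topology.append(UInt32(2 * shapeCount - 1))
// Primitive-restart sentinel that terminates the strip.
topology.append(UInt32.max)

print(topology)        // [3, 7, 2, 6, 1, 5, 0, 4, 3, 7, 4294967295]
print(topology.count)  // 11 == 2 * (4 + 1) + 1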
Replies: 0 · Boosts: 0 · Views: 321 · Jul ’24
visionOS 2 full immersive space permission change?
Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented?

In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission. openImmersiveSpace would return an error code if the user opted not to grant permission, so it was important to handle this case correctly. The Settings > Developer menu had an option to reset the user's immersive space permission prompting state so developers could test this interaction flow.

In visionOS 2, I no longer see the full immersive space permission alert. I can't remember if I saw it once, the first time the visionOS 2.0 beta was installed, or if I never saw it at all. The Settings > Developer menu no longer has an option to reset the permission prompting state, and I can't find any way to test the interaction flow in my app to make sure it will work correctly for users.

Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere. If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly?

Thanks for taking the time to answer this question.
Replies: 3 · Boosts: 0 · Views: 507 · Jul ’24
TrueDepth map accuracy worse in streaming mode than photo capture
Hi, it seems the accuracy of the TrueDepth map is far worse when streaming (using an iPhone 13), with artefacts similar to those shown in this post: https://forums.developer.apple.com/forums/thread/694147. However, when taking static photos the quality is pretty good, despite the resolution being the same (480x640). This is for an object at <1 m distance. Does anyone know how I can improve the accuracy when streaming?
Replies: 0 · Boosts: 0 · Views: 236 · Jul ’24
Xcode 16 can't target RealityKitContent on macOS 14?
I'm working on a multi-platform app (macOS and visionOS for now). In these early stages it's easier to target the Mac, but I started with a visionOS project. One of the things the template creates is a RealityKitContent package dependency.

I can target macOS 14.5 in Xcode, but when it goes to build the RealityKitContent package, I get this error:

error: Building for 'macosx', but '14.0' must be >= '15.0' [macosx]
info: realitytool ["/Applications/Xcode-beta.app/Contents/Developer/usr/bin/realitytool" "compile" "--platform" "macosx" "--deployment-target" "14.0" …

Unfortunately, I'm unwilling to update this machine to macOS 15, as it's too risky, and running macOS 15 in a VM is not possible (Apple Silicon). This strikes me as a bug, or a severe shortcoming, of realitytool: it was introduced with visionOS 1.0 and should be able to target macOS < 15. It's not really reasonable to stay on Xcode 15, since soon enough Apple will require building with Xcode 16 for submission to the App Store.

Is this a bug, or intentional?
Replies: 2 · Boosts: 1 · Views: 364 · Jul ’24
Window buttons not getting clicked when Scene Colliders Exist
Hi, I am using this function, from an Apple Developer video I found, to create collisions in my scene:

func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }

        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent()
            entity.components.set(InputTargetComponent())
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { fatalError("...") }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}

The code works great. In the same immersive space I am opening a window:

var body: some View {
    RealityView { content in
        // some other code here
        openWindow(id: "mywindowidhere")
        // some other code here
    }
}

The window opens in front of me, but I am not able to click or even hover over its buttons. At first I did not know why that was happening, but then I turned on pointer control and found out that the pointer is actually colliding with the wall (the window is partly inside the wall). That is why the pointer never reaches the window and the buttons never get clicked. I initially thought this was a layering issue, but I was not able to find any documentation related to this. Is this a known issue, and is there any way to fix it? Or am I doing something wrong on my side?
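One thing that might be worth trying, offered as a hedged suggestion rather than a confirmed fix: the reconstruction meshes above are registered as input targets, so indirect input can be hit-tested against them. If the scene meshes only need to exist for physics and collisions, leaving InputTargetComponent off (or adding it with no allowed input types) keeps them out of input targeting. A minimal sketch of that variant of the .added case, with everything else unchanged from the code above:

case .added:
    let entity = ModelEntity()
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
    entity.physicsBody = PhysicsBodyComponent()
    // Either omit InputTargetComponent entirely, or keep it but allow no input
    // types, so the mesh still collides without capturing gaze-and-pinch input.
    entity.components.set(InputTargetComponent(allowedInputTypes: []))
    meshEntities[meshAnchor.id] = entity
    contentEntity.addChild(entity)

Whether this also changes how the system places and hit-tests the window itself is something I'm not sure about; if the window is physically embedded in the wall, repositioning it may still be necessary.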
Replies: 1 · Boosts: 0 · Views: 285 · Jul ’24
API Calls inside a visionOS SwiftUI App
Hi, I'm brainstorming ideas for getting dynamic content into my visionOS app on the Vision Pro. I have some data coming out of a piece of equipment and reaching a cloud hub (something like IoT Hub on Azure). I want to get that data inside the visionOS app, ideally inside an attachment that is attached to some 3D entity inside my RealityView. Is something like this possible? Can someone give me some starting points on how I can enable a pipeline like this, and point me to any resources I could use for reference?
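A minimal sketch of one possible shape for that pipeline, assuming the cloud hub exposes an HTTPS endpoint that returns JSON; the URL, the Reading type, the polling interval, and the attachment id are all made up for illustration, and a production setup might push data over WebSockets or MQTT instead of polling:

import SwiftUI
import RealityKit
import Foundation

// Hypothetical payload the endpoint is assumed to return.
struct Reading: Decodable {
    let temperature: Double
}

struct EquipmentView: View {
    @State private var latest: Reading?

    var body: some View {
        RealityView { content, attachments in
            let anchor = Entity()
            content.add(anchor)
            // Pin the SwiftUI attachment to the entity so the data floats next to it.
            if let panel = attachments.entity(for: "reading_panel") {
                panel.position = [0, 0.2, 0]
                anchor.addChild(panel)
            }
        } attachments: {
            Attachment(id: "reading_panel") {
                Text(latest.map { String(format: "%.1f °C", $0.temperature) } ?? "Waiting for data…")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
        .task {
            // Poll the (hypothetical) endpoint and decode the latest reading.
            while !Task.isCancelled {
                if let url = URL(string: "https://example.com/api/latest-reading"),
                   let (data, _) = try? await URLSession.shared.data(from: url) {
                    latest = try? JSONDecoder().decode(Reading.self, from: data)
                }
                try? await Task.sleep(for: .seconds(5))
            }
        }
    }
}

Since the attachment is plain SwiftUI, any state change driven by the network task re-renders the panel in place, so the 3D side of the app doesn't need to know anything about the transport.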
Replies: 1 · Boosts: 0 · Views: 294 · Jul ’24
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:)

When I construct one using a URL, it doesn't seem to be tracked, and I see the following when I debug print the component:

▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _parseInfo : nil
            - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2

Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
Replies: 1 · Boosts: 0 · Views: 256 · Jul ’24
Can I use the PhotoKit sample on visionOS?
I am a student developer. We are trying to implement an application that allows you to take photos in visionOS mixed-reality (MR) mode and access the photos you took. Can the contents of the link below be used on visionOS? https://developer.apple.com/tutorials/sample-apps/capturingphotos-captureandsave/ I would really appreciate your reply. For reference, we plan to package the methods in Swift and import the framework into Unity to use them.
Replies: 0 · Boosts: 0 · Views: 281 · Jul ’24
How to add gestures to objects inside other objects
I have a scene with multiple RealityKit entities. There is a blue cube, which I want to rotate along with all of its children (it's partly transparent). Inside the cube are a number of child entities (red) that I want to tap. The cube and red objects all have collision components, as required for gestures to work. The problem is that I can't both rotate the blue cube and tap the red objects, because the blue cube's collision component intercepts the taps. Is there a way of accomplishing what I want? I'm targeting visionOS 2, and my scene is in a volume.
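One possible direction, offered as a sketch rather than a confirmed answer: give the tappable children a marker component and scope each gesture with targetedToEntity, so the tap gesture only fires for entities carrying the marker and the rotate gesture only fires for the cube. The component, view, and entity names below are made up, and whether the cube's collider still occludes hits on the children will depend on the collider shapes involved:

import SwiftUI
import RealityKit

// Marker component for the small red entities that should receive taps.
// Call TappableComponent.registerComponent() once at startup if queries need it.
struct TappableComponent: Component {}

struct CubeSceneView: View {
    @State private var cubeEntity = Entity()   // stands in for the blue cube

    var body: some View {
        RealityView { content in
            content.add(cubeEntity)
            // ... build the red children here and call
            // child.components.set(TappableComponent()) on each of them.
        }
        // Taps only target entities that carry the marker component.
        .gesture(
            SpatialTapGesture()
                .targetedToEntity(where: .has(TappableComponent.self))
                .onEnded { value in
                    print("Tapped \(value.entity.name)")
                }
        )
        // Rotation only targets the outer cube (its children move with it).
        .gesture(
            RotateGesture3D()
                .targetedToEntity(cubeEntity)
                .onChanged { value in
                    // Convert the gesture's Rotation3D into a RealityKit orientation.
                    let q = value.rotation.quaternion
                    cubeEntity.orientation = simd_quatf(
                        ix: Float(q.imag.x), iy: Float(q.imag.y),
                        iz: Float(q.imag.z), r: Float(q.real))
                }
        )
    }
}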
Replies: 1 · Boosts: 0 · Views: 339 · Jul ’24
How to move or scale WindowGroup by code in visionOS
WindowGroup(id: "Volumetic") {
    GeometryReader3D { geometry in
        VolumeView()
            .environment(appState)
            .volumeBaseplateVisibility(.visible) // whether the baseplate is shown; defaults to .visible
            .scaleEffect(geometry.size.width / initialVolumeSize.width)
    }
}
.windowStyle(.volumetric)
.windowResizability(.contentSize)
.defaultSize(initialVolumeSize)

I can move the volume with the drag bar that comes with the system UI, and change its size by dragging the edge of the baseplate. I want to achieve the same effect in code. How can I do that?
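A hedged note rather than a definitive answer: I'm not aware of a public API that repositions or rescales a window after it is already on screen, but the initial size and placement can be influenced from code with defaultSize and, on visionOS 2, defaultWindowPlacement (the same modifier used in the first post in this list). A minimal sketch; the "MainWindow" id is hypothetical:

WindowGroup(id: "Volumetic") {
    GeometryReader3D { geometry in
        VolumeView()
            .environment(appState)
            .scaleEffect(geometry.size.width / initialVolumeSize.width)
    }
}
.windowStyle(.volumetric)
.windowResizability(.contentSize)
.defaultSize(initialVolumeSize)
// visionOS 2: choose where the volume first appears, e.g. trailing another
// window, falling back to a system position if that window isn't open.
.defaultWindowPlacement { content, context in
    if let main = context.windows.first(where: { $0.id == "MainWindow" }) {
        return WindowPlacement(.trailing(main))
    }
    return WindowPlacement(.utilityPanel)
}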
Replies: 1 · Boosts: 0 · Views: 255 · Jul ’24
How to Use RealityKitContent Package for App Targets Lower Than iOS 18.0
I am trying to display a 3D model in an iOS app using RealityView. The same 3D model is displayed successfully in the visionOS app. Everything works perfectly only when I set my project's minimum deployment target to iOS 18.0. However, my app's minimum deployment target is iOS 15.0. When I use the RealityKitContent package to load the 3D model, it fails to compile and gives me the following error:

Compiling for iOS 15.0, but module 'RealityKitContent' has a minimum deployment target of iOS 18.0: /Users/Library/Developer/Xcode/DerivedData/RealityViewForiOS-cbfkgimsqngtuegqwvezusvscllf/Index.noindex/Build/Products/Debug-iphonesimulator/RealityKitContent.swiftmodule/arm64-apple-ios-simulator.swiftmodule

I have made the RealityKitContent package optional and tried importing it with the following condition:

#if canImport(RealityKitContent)
import RealityKitContent
#endif

Despite this, it still fails to compile and produces the same error. I have not found a workaround for using the RealityKitContent package with app targets lower than iOS 18.0.

Here is my package definition:

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15),
        .iOS(.v18)
    ],
    products: [
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "RealityKitContent",
            dependencies: []),
    ]
)

Here is the code I am using to load the 3D model with RealityView using the RealityKitContent package:

import SwiftUI
import RealityKit
#if canImport(RealityKitContent)
import RealityKitContent
#endif

struct ContentView: View {
    var body: some View {
        VStack {
            if #available(iOS 18.0, *) {
                RealityView { content in
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                } update: { content in
                    if let scene = content.entities.first {
                        let uniformScale: Float = 3.0
                        scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                    }
                }
            } else {
                // Fallback for earlier versions
            }
        }
    }
}

#Preview {
    ContentView()
}

Any help or guidance on how to use the RealityKitContent package for app targets lower than iOS 18.0 would be greatly appreciated.
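A hedged observation rather than a verified fix: the error comes from the package manifest itself. The platforms declaration sets .iOS(.v18), so the RealityKitContent module is always built with an iOS 18 deployment target regardless of the app's setting, and canImport/#available checks in the app can't change that. If the content inside the package doesn't depend on iOS 18-only API, lowering the declaration may be enough; whether the package's generated sources actually compile for iOS 15 is something to verify. A sketch of the adjusted manifest:

// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15),
        .iOS(.v15)   // was .v18; only valid if the package avoids iOS 18-only API
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"]),
    ],
    targets: [
        .target(name: "RealityKitContent"),
    ]
)

The RealityView code in the app can keep its if #available(iOS 18.0, *) guard, since RealityView on iOS itself requires iOS 18.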
Replies: 0 · Boosts: 2 · Views: 305 · Jul ’24
VideoMaterial to display SBS Stereoscopic 3D video? [VisionOS]
Hi, I love the VideoMaterial API that gives so much power to play video on any mesh, but I am trying to play a side-by-side 3D video using VideoMaterial:

RealityView { content in
    // Generate the mesh.
    let mesh = MeshResource.generatePlane(width: 300.0, height: 300.0, cornerRadius: 0)
    // Create the VideoMaterial.
    let vidMaterial = VideoMaterial(avPlayer: AVPlayer(url: URL(string: "https://someurl/test/master.m3u8")!))
    vidMaterial.controller.preferredViewingMode = .stereo // <-- no idea why it doesn't work for SBS video in the simulator
    vidMaterial.avPlayer?.play()
    // Create a new entity and set a ModelComponent with the video material on it.
    let planeEntity = Entity()
    planeEntity.components.set(ModelComponent(mesh: mesh, materials: [vidMaterial]))
    content.add(planeEntity)
}

This code works well for plain 2D video playback, but how do I display a side-by-side or top-bottom 3D video? I found GeometrySwitchCameraIndex in a custom ShaderGraphMaterial, but if I use an image texture as the input node, how do I pass the video frames into my custom shader as a texture to achieve the 3D effect? Or maybe there is an even better way to deal with this. There also seems to be an additional .preferredViewingMode API on the VideoMaterial's controller that can be set to .stereo, but it doesn't give any stereo effect. Perhaps it's only for MV-HEVC media playback?
Replies: 1 · Boosts: 0 · Views: 456 · Jul ’24
Objects do not behave properly with indirect pinch in a Mixed Reality Environment
In Mixed Reality mode there are strange issues with indirect pinches on objects. If a user uses an indirect pinch to select an object and then walks around, or moves and re-orients their body while maintaining the pinch, the object moves as if some scalar is being applied to it, and it behaves in ways that are extremely counter-intuitive compared to other MR devices.

If a user indirect-pinches an object and then walks forward, the object flies away from the user, faster than they are walking.
If a user indirect-pinches an object and then walks backward, the object flies towards and eventually past the user, faster than they are walking.
If a user indirect-pinches an object and then turns around, the object rotates around some unknown position and with some added scalar, resulting in very strange behavior.

Here are some examples of the issue in action. The first video uses Unity's PolySpatial SDK. The second video uses an entirely native stack of SwiftUI and RealityKit with no Unity at all. For some reason I am not allowed to link videos here from Drive or Gyazo, so I am including the links in plain text for now. If someone could tell me how I can upload video examples of what I am describing directly to these forums, I would appreciate it.

First video showing the issue in Unity with the PolySpatial SDK:
https://i.gyazo.com/95788cf9d4587c167b544db031fbf412.mp4

Second video showing the issue in a native-only stack with RealityKit and SwiftUI:
https://drive.google.com/file/d/1mgt8TXJiopbm6qdJw2rFG0geam0irnMn/view?usp=sharing

Unity forum bug discussion which, after investigation, confirmed this issue is on the native platform:
https://discussions.unity.com/t/objects-do-not-behave-properly-when-manipulated-in-an-mr-space/1482439

For a Mixed Reality environment, where a user may want to move around their space while using indirect pinches to manipulate and "carry" objects with them, this is a big issue. Thank you.
Replies: 4 · Boosts: 0 · Views: 474 · Jul ’24
Spatial Video and/or Photo on Safari
I am a student at Utah Valley University doing a UX research project involving spatial web browsing in Safari. I am trying to determine whether spatial video and photos would be supported on a Safari web page while using the Apple Vision Pro. I am not a developer, so my knowledge on that front is limited, but I am hoping to get any insight into whether that feature could be implemented in a web-based experience. If so, what formats would need to be used? Can the MV-HEVC format be directly embedded, or is there another format that needs to be explored? Any insight is appreciated!
Replies: 3 · Boosts: 1 · Views: 342 · Jul ’24