Discuss Spatial Computing on Apple Platforms.

Post | Replies | Boosts | Views | Activity

DragGesture for RealityView
I'm using the following gesture on a RealityView:

DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        print("DragGesture")
        self.dragOffset = value.translation
        self.startTimer()
    }
    .onEnded { _ in
        self.dragOffset = .zero
        self.direction = "None"
        self.stopTimer()
    }

However, because of how RealityView handles input, the gesture is not detected normally, so I think some modifiers need to be added around value.translation, but I don't know which ones. Can you suggest some? Thank you.
1
0
327
Jul ’24
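A minimal sketch of what usually needs to be in place for the DragGesture question above: the entity must carry both an InputTargetComponent and collision shapes, the gesture modifier goes on the RealityView itself, and the 3D gesture value is converted into the entity's parent space rather than read as a raw 2D translation. The box entity and view names here are hypothetical, and this is only a sketch, not a definitive fix.

import SwiftUI
import RealityKit

struct DragExampleView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            box.components.set(InputTargetComponent())    // without this the entity never receives input
            box.generateCollisionShapes(recursive: true)  // gestures also require collision shapes
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the 3D gesture location from SwiftUI space into the
                    // entity's parent space before applying it to the entity.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}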
Vision Pro in-app purchase
We are developing a visionOS app with in-app purchases. We tried the in-app purchase sample code from iPhone, but it doesn't seem to work; Xcode tells me: "'purchase(options:)' is unavailable in visionOS: use @Environment(\.purchase) to get a PurchaseAction value to call. If your app uses UIKit, use purchase(confirmIn:options:)." Does anybody know how to solve this or offer any help? We have already searched tutorials and this forum with no results. Thank you very much!
1
0
191
Jul ’24
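For what it's worth, the error message above points at StoreKit's SwiftUI path: read a PurchaseAction from the environment and call it with the Product instead of calling product.purchase(options:). A minimal sketch, assuming the Product has already been loaded elsewhere (the button label and error handling are just placeholders):

import SwiftUI
import StoreKit

struct BuyButton: View {
    @Environment(\.purchase) private var purchase   // PurchaseAction supplied by SwiftUI
    let product: Product

    var body: some View {
        Button("Buy \(product.displayName)") {
            Task {
                do {
                    // Call the PurchaseAction rather than product.purchase(options:).
                    let result = try await purchase(product)
                    switch result {
                    case .success(let verification):
                        if case .verified(let transaction) = verification {
                            // Unlock content here, then finish the transaction.
                            await transaction.finish()
                        }
                    case .userCancelled, .pending:
                        break
                    @unknown default:
                        break
                    }
                } catch {
                    print("Purchase failed: \(error)")
                }
            }
        }
    }
}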
Device on 2.0 Beta 4 not recognized by Xcode as run destination if minimum deployment is >= 2.0
Hi all! I'm new to visionOS development, so please excuse my inexperience. I'm trying to run an Xcode project generated by Unity PolySpatial (Unity 6 preview, PolySpatial 2.0.0.pre-9) on my Apple Vision Pro, which is running visionOS 2.0 (22N5286g). However, the device doesn't appear in Xcode's list of run destinations unless I lower the visionOS version in "minimum deployments" to below 2.0. Lowering it to anything below 2.0 makes the device appear as a run destination, but the build fails with errors that I assume are due to targeting a lower OS level. Note that I have been able to successfully build and deploy to my device using a Unity-generated Xcode project that only used visionOS 1 features (built off of Unity 2022.3.35f1) -- the issue appears to be specific to when I'm trying to use 2.0 features. I'm sure I'm just missing something silly here -- why wouldn't the device appear as a valid run destination for visionOS 2.0, when the device is decidedly running 2.0?
1
1
320
Jul ’24
Vision Pro projection matrix
Hello. I am trying to calculate rays from the NDC coordinates of the screen and the inverses of the projection and view matrices provided by the visionOS API. It works perfectly in the simulator, but on device the projected rays do not match the (correct) projection of the raster scene rendered with the same projection and view matrices. Are there some differences between the device and simulator projection matrices that might cause this issue?
0
0
213
Jul ’24
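For reference, the generic unprojection for building such a ray looks like the sketch below; it assumes column-major simd matrices and says nothing about what the device actually does differently (things worth checking include reverse-Z or infinite-far-plane projections and per-eye off-axis frustums, but that is speculation, not a confirmed cause):

import simd

// Builds a world-space ray from an NDC point using the inverse projection and
// inverse view matrices. The choice of ndc z = 1 for the "far" point is an
// assumption; adjust it if the projection uses a different depth convention.
func ray(fromNDC ndc: SIMD2<Float>,
         inverseProjection: float4x4,
         inverseView: float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Unproject the NDC point back into view space and undo the perspective divide.
    var viewPoint = inverseProjection * SIMD4<Float>(ndc.x, ndc.y, 1, 1)
    viewPoint /= viewPoint.w

    // The ray origin is the camera position: the translation column of the inverse view matrix.
    let origin = SIMD3<Float>(inverseView.columns.3.x,
                              inverseView.columns.3.y,
                              inverseView.columns.3.z)

    // Transform the view-space direction into world space (w = 0, so no translation is applied).
    let worldDirection = inverseView * SIMD4<Float>(viewPoint.x, viewPoint.y, viewPoint.z, 0)
    let direction = simd_normalize(SIMD3<Float>(worldDirection.x, worldDirection.y, worldDirection.z))
    return (origin, direction)
}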
SpatialTemplate in Fully Immersive Space
I'm having an issue with Group Activities and Spatial Templates in a fully immersive space. Basically, my app switches between various immersive spaces and sets a SpatialTemplate based on the RealityView it enters. However, whenever a SpatialTemplate is set, it randomly makes the immersive space disappear without dismissing it. The Digital Crown has to be pressed to properly dismiss the immersive space, even though it's not visible. I can get around this by toggling systemCoordinator.configuration.supportsGroupImmersiveSpace to false before entering and then waiting a couple of seconds before setting it to true. However, this doesn't seem like a great solution. Another issue is that sometimes when entering a fully immersive space with an active Group Activity session, it flips the rotation of the RealityKit content. The immersiveSpaceDisplacement values are way off, so setting any offsets based on that is not an option. It seems like when the system is attempting to place the participants in the "appropriate" location, it doesn't understand the fully immersive environment at all. Granted, my RealityKit content is pretty complex, but I don't think it should flip the scene's y-axis upside down. I was wondering if anyone else is experiencing these issues and has any workarounds?
0
0
157
Jul ’24
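In case it helps others hitting the same thing, here is the stopgap described above expressed in code; it is only the workaround from the post (disable group immersive-space support, open the space, re-enable after an arbitrary delay), not a proper fix, and the activity type and open-space closure are placeholders:

import GroupActivities

// Assumes `session` is an active GroupSession for a hypothetical MyActivity,
// and `openSpace` opens the fully immersive space (e.g. via openImmersiveSpace).
func enterImmersiveSpaceWithWorkaround(session: GroupSession<MyActivity>,
                                       openSpace: @escaping () async -> Void) async {
    guard let coordinator = await session.systemCoordinator else { return }

    coordinator.configuration.supportsGroupImmersiveSpace = false
    await openSpace()

    try? await Task.sleep(for: .seconds(2))   // arbitrary delay, as described in the post
    coordinator.configuration.supportsGroupImmersiveSpace = true
}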
Vision Pro preview window looks different than on simulator
Hello, I have a simple SwiftUI view that shows a bottom bar, and I noticed that in SwiftUI previews the 2D window is squared off, while in the simulator it has rounded edges. This affects the bottom bar: as you can see in the simulator, the text is cut off. I am using Xcode 16 beta and visionOS 2 beta. Why do the two windows look different? And I am surprised the text is getting cut off in the rounded window. (Screenshots: SwiftUI preview vs. Vision Pro simulator.)
6
0
409
Jul ’24
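One pattern that sidesteps the corner clipping (regardless of why the preview renders a square window) is to put the bar in a bottom ornament instead of inside the window, so it sits outside the rounded glass background. A small sketch, with placeholder buttons:

import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Main window content")
            .padding()
            // The ornament is drawn outside the window's rounded glass background,
            // so its text cannot be clipped by the window corners.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("First") {}
                    Button("Second") {}
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}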
Spatial Video and/or Photo on Safari
I am a student at Utah Valley University doing a UX research project involving spatial web browsing in Safari. I am trying to determine whether spatial video and photos can be supported on a Safari web page while using the Apple Vision Pro. I am not a developer, so my knowledge on that front is limited, but I am hoping to get any insight into whether that feature could be implemented in a web-based experience. If so, what formats would need to be used? Can the MV-HEVC format be directly embedded, or is there another format that needs to be explored? Any insight is appreciated!
3
1
341
Jul ’24
Objects do not behave properly with indirect pinch in a Mixed Reality Environment
In Mixed Reality mode there are strange issues with indirect pinches on objects. If a user uses an indirect pinch to select an object and then walks around, or moves and re-orients their body while maintaining the pinch, the object moves as if some scalar is being applied to it, and it behaves in ways that are extremely counter-intuitive compared to other MR devices. If a user indirect-pinches an object and then walks forward, the object flies away from the user, faster than they are walking. If a user indirect-pinches an object and then walks backward, the object flies towards and eventually past the user, faster than they are walking. If a user indirect-pinches an object and then turns around, the object rotates around some unknown position and with some added scalar, resulting in very strange behavior. Here are some examples of the issue in action. The first video uses Unity's PolySpatial SDK. The second video uses an entirely native stack of SwiftUI and RealityKit with no Unity at all. For some reason I am not allowed to link videos here from Drive or Gyazo, so I am including them in plain text for now. If someone could tell me how to upload video examples of what I am describing directly to these forums, I would appreciate it. First video showing the issue in Unity with the PolySpatial SDK: https://i.gyazo.com/95788cf9d4587c167b544db031fbf412.mp4 Second video showing the issue in a native-only stack with RealityKit and SwiftUI: https://drive.google.com/file/d/1mgt8TXJiopbm6qdJw2rFG0geam0irnMn/view?usp=sharing Unity forum bug discussion which, after investigation, confirmed this issue is on the native platform: https://discussions.unity.com/t/objects-do-not-behave-properly-when-manipulated-in-an-mr-space/1482439 For a Mixed Reality environment where a user may want to move around their space while using indirect pinches to manipulate and "carry" objects with them, this is a big issue. Thank you
4
0
473
Jul ’24
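Not a fix for the system behavior described above, but the drag pattern below (capturing an offset at gesture start and re-converting the pinch location into the entity's parent space every frame) is the usual way to keep indirectly pinched objects from drifting with an extra scalar; the entity name and sizes are placeholders:

import SwiftUI
import RealityKit

struct PinchDragView: View {
    // Offset between the entity's position and the pinch location, captured at gesture start.
    @State private var grabOffset: SIMD3<Float>? = nil

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the pinch location into the entity's parent space each frame.
                    let location = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                    if grabOffset == nil { grabOffset = value.entity.position - location }
                    value.entity.position = location + grabOffset!
                }
                .onEnded { _ in grabOffset = nil }
        )
    }
}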
VideoMaterial to display SBS stereoscopic 3D video? [visionOS]
Hi, I love the VideoMaterial API that gives so much power to play video on any mesh. But I am trying to play a side-by-side 3D video using VideoMaterial:

RealityView { content in
    let mesh = MeshResource.generatePlane(width: 300.0, height: 300.0, cornerRadius: 0) // generate mesh
    let vidMaterial = VideoMaterial(avPlayer: AVPlayer(url: URL(string: "https://someurl/test/master.m3u8")!)) // VideoMaterial
    vidMaterial.controller.preferredViewingMode = .stereo // <-- no idea why it doesn't work for SBS video in the simulator
    vidMaterial.avPlayer?.play()
    let planeEntity = Entity() // new entity
    planeEntity.components.set(ModelComponent(mesh: mesh, materials: [vidMaterial])) // set a new ModelComponent on the entity
    content.add(planeEntity)
}

This code works well for plain 2D video playback, but how do I display a side-by-side or top-bottom 3D video? I found GeometrySwitchCameraIndex in a custom ShaderGraphMaterial, but if I use an image texture as the input node, how do I pass the video frames as a texture into my custom shader to achieve the 3D effect? Or maybe there is an even better way to deal with this? There is also the .preferredViewingMode API on the VideoMaterial's controller that can be set to .stereo, but it doesn't give any stereo effect. Perhaps it's only for MV-HEVC media playback?
1
0
456
Jul ’24
How to Use RealityKitContent Package for App Targets Lower Than iOS 18.0
I am trying to display a 3D model in an iOS app using RealityView. The same 3D model is displayed successfully in the visionOS app. Everything works perfectly only when I set my project's minimum deployment target to iOS 18.0. However, my app's minimum deployment target is iOS 15.0. When I use the RealityKitContent package to load the 3D model, it fails to compile and gives me the following error:

Compiling for iOS 15.0, but module 'RealityKitContent' has a minimum deployment target of iOS 18.0: /Users/Library/Developer/Xcode/DerivedData/RealityViewForiOS-cbfkgimsqngtuegqwvezusvscllf/Index.noindex/Build/Products/Debug-iphonesimulator/RealityKitContent.swiftmodule/arm64-apple-ios-simulator.swiftmodule

I have made the RealityKitContent package optional and tried importing it using the following condition:

#if canImport(RealityKitContent)
import RealityKitContent
#endif

Despite this, it still fails to compile and produces the same error. I have not found a workaround for using the RealityKitContent package with app targets lower than iOS 18.0. Here is my package definition:

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v1),
        .macOS(.v15),
        .iOS(.v18)
    ],
    products: [
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "RealityKitContent",
            dependencies: []),
    ]
)

Here is the code I am using to load the 3D model with RealityView using the RealityKitContent package:

import SwiftUI
import RealityKit
#if canImport(RealityKitContent)
import RealityKitContent
#endif

struct ContentView: View {
    var body: some View {
        VStack {
            if #available(iOS 18.0, *) {
                RealityView { content in
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                } update: { content in
                    if let scene = content.entities.first {
                        let uniformScale: Float = 3.0
                        scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                    }
                }
            } else {
                // Fallback for earlier versions
            }
        }
    }
}

#Preview {
    ContentView()
}

Any help or guidance on how to use the RealityKitContent package with app targets lower than iOS 18.0 would be greatly appreciated.
0
2
305
Jul ’24
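One workaround worth considering (a sketch under assumptions, not a confirmed recommendation): since RealityView itself requires iOS 18 and the generated RealityKitContent package declares .iOS(.v18), that package cannot be compiled into an iOS 15 target, and canImport does not help because the module exists but has a higher deployment target. Instead, you can export the asset (for example as Scene.usdz) into the app's main bundle and load it directly with RealityKit on the older-OS path, keeping the RealityKitContent bundle only for the iOS 18 / visionOS paths. The file name here is hypothetical:

import RealityKit

// Loads the model from the app's main bundle instead of the RealityKitContent package,
// so no iOS 18-only module is linked. The async Entity(named:) loader has been
// available well before iOS 18.
@MainActor
func loadSceneFromMainBundle() async throws -> Entity {
    try await Entity(named: "Scene")
}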
How to move or scale WindowGroup by code in visionOS
WindowGroup(id: "Volumetic") {
    GeometryReader3D { geometry in
        VolumeView()
            .environment(appState)
            .volumeBaseplateVisibility(.visible) // whether to show the baseplate; defaults to .visible
            .scaleEffect(geometry.size.width / initialVolumeSize.width)
    }
}
.windowStyle(.volumetric)
.windowResizability(.contentSize)
.defaultSize(initialVolumeSize)

I can move the volume with the drag bar that comes with the UI, and change its size by dragging the edge of the baseplate. I want to achieve the same effect in code -- how can I do that?
1
0
254
Jul ’24
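As far as I know there is no public API to reposition an already-open window or volume in code; the drag bar and resize handles are system-controlled, and visionOS 2 only adds control over the initial placement via defaultWindowPlacement. What you can drive from code is the content's own scale, along the same lines as the scaleEffect already used above. A small sketch with a placeholder slider (VolumeView is the content view from the post):

import SwiftUI

struct VolumeRootView: View {
    // Changing this in code rescales the content; the system volume bounds stay system-controlled.
    @State private var contentScale: CGFloat = 1.0

    var body: some View {
        VolumeView()
            .scaleEffect(contentScale)
            .ornament(attachmentAnchor: .scene(.bottom)) {
                Slider(value: $contentScale, in: 0.5...2.0)
                    .frame(width: 300)
            }
    }
}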
How to add gestures to objects inside other objects
I have a scene with multiple RealityKit entities. There is a blue cube which I want to rotate along with all of its children (it's partly transparent). Inside the cube are a number of child entities (red) that I want to tap. The cube and red objects all have collision components as is required for gestures to work. If I want to rotate the blue cube, and also tap the red objects I can't do this as the blue cube's collision component intercepts the taps. Is there a way of accomplishing what I want? I'm targeting visionOS 2, and my scene is in a volume.
1
0
337
Jul ’24
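One approach that may work here (a sketch, not a confirmed recipe): let the tap gesture act only when the hit entity is one of the red children, and have the rotation gesture walk up the hierarchy to the cube so a rotation can start from any child. If the cube's own collider still swallows the taps before they reach the children, removing the cube's InputTargetComponent (while keeping its CollisionComponent) is worth trying, since only input-targetable entities should intercept spatial input. Entity names like "blueCube" and the "red" prefix are placeholders:

import SwiftUI
import RealityKit

struct NestedGestureView: View {
    var body: some View {
        RealityView { content in
            // The blue cube and its red children are assumed to be built elsewhere,
            // each with collision shapes; the red children carry InputTargetComponent.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Only react when the tap actually landed on a red child.
                    if value.entity.name.hasPrefix("red") {
                        print("Tapped \(value.entity.name)")
                    }
                }
        )
        .simultaneousGesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Walk up to the cube so a rotation started on any child rotates the whole assembly.
                    var target: Entity? = value.entity
                    while let entity = target, entity.name != "blueCube" { target = entity.parent }
                    guard let cube = target else { return }
                    let q = value.rotation.quaternion   // simd_quatd from Spatial's Rotation3D
                    cube.orientation = simd_quatf(ix: Float(q.imag.x),
                                                  iy: Float(q.imag.y),
                                                  iz: Float(q.imag.z),
                                                  r: Float(q.real))
                }
        )
    }
}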
Can I use the PhotoKit sample on visionOS?
I am a student developer. We are trying to implement an application that allows you to take photos in visionOS MR mode and access the photos you took. Can the contents of the link below be used on visionOS? https://developer.apple.com/tutorials/sample-apps/capturingphotos-captureandsave/ I would really appreciate your reply. For reference, we plan to package the methods in Swift and import the framework into Unity to use them.
0
0
281
Jul ’24
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:) When I construct one using a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component:

▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _parseInfo : nil
            - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2

Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
1
0
256
Jul ’24
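Below is the resource-group variant the post says does track, written out as a sketch; the group/name arguments mirror the post's description of using an "AR Resources" group, the image name is hypothetical, and the exact initializer signature should be checked against the AnchoringComponent.ImageAnchoringSource documentation. I am not certain what parseInfo the URL-based initializer expects, so treat this only as the working alternative, not as an explanation of the URL path:

import RealityKit

// Anchors an entity to a reference image from the "AR Resources" asset catalog group.
@MainActor
func makeImageAnchoredEntity() -> Entity {
    let entity = Entity()
    let target = AnchoringComponent.Target.referenceImage(
        from: .init(group: "AR Resources", name: "poster")   // hypothetical image name
    )
    entity.components.set(AnchoringComponent(target))
    return entity
}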
API Calls inside a visionOS SwiftUI App
Hi, I'm brainstorming ideas for getting dynamic content inside my visionOS app on the Vision Pro. I have some data coming from a piece of equipment and reaching a cloud hub (something like IoT Hub on Azure). I want to get that data into my visionOS app, ideally inside an attachment that is attached to some 3D entity inside my RealityView. Is something like this possible? Can someone give me some starting points on how I can set up a pipeline like this, and whether there are any resources I could use for reference?
1
0
293
Jul ’24
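This is generally possible. A minimal sketch of one pipeline, with a hypothetical polling endpoint standing in for the IoT hub (in practice you would call your hub's REST API or a small relay service you host): an observable model fetches the latest reading, and a RealityView attachment displays it next to the 3D content.

import SwiftUI
import RealityKit
import Observation

@Observable
final class TelemetryModel {
    var latestReading = "--"

    func startPolling() {
        Task {
            while !Task.isCancelled {
                // Hypothetical endpoint; replace with your cloud hub's API or a relay you control.
                if let url = URL(string: "https://example.com/api/latest"),
                   let (data, _) = try? await URLSession.shared.data(from: url) {
                    latestReading = String(decoding: data, as: UTF8.self)
                }
                try? await Task.sleep(for: .seconds(5))   // simple polling interval
            }
        }
    }
}

struct TelemetryView: View {
    @State private var model = TelemetryModel()

    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 1.2, -1]   // place the attachment near the equipment's 3D entity
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Text(model.latestReading)
                    .padding()
                    .glassBackgroundEffect()
            }
        }
        .task { model.startPolling() }
    }
}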