Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

Device on 2.0 Beta 4 not recognized by Xcode as run destination if minimum deployment is >= 2.0
Hi all! I'm new to visionOS development, so please excuse my inexperience. I'm trying to run an Xcode project generated by Unity PolySpatial (Unity 6 preview, PolySpatial 2.0.0-pre.9) on my Apple Vision Pro, which is running visionOS 2.0 (22N5286g). However, the device doesn't appear in Xcode's list of run destinations unless I lower the visionOS version in "Minimum Deployments" to below 2.0. Lowering it to anything below 2 makes the device appear as a run destination, but the build then fails with errors that I assume come from targeting a lower OS level. Note that I have been able to successfully build and deploy to my device using a Unity-generated Xcode project that only used visionOS 1 features (built off of Unity 2022.3.35f1); the issue appears to be specific to projects that use 2.0 features. I'm sure I'm just missing something silly here. Why wouldn't the device appear as a valid run destination for visionOS 2.0 when the device is decidedly running 2.0?
1
1
395
Jul ’24
Vision Pro in-app purchase
We are developing in-app purchase for a visionOS app. We tried Apple's in-app purchase code example, which works on iPhone, but it doesn't work here; Xcode tells us: "'purchase(options:)' is unavailable in visionOS: Use @Environment(\.purchase) to get a PurchaseAction value to call. If your app uses UIKit, use purchase(confirmIn:options:)." 1. Does anybody know how to solve this and give us any help? We have already searched tutorials and the forums with no result (a sketch follows this entry). Thank you very much!
1
0
226
Jul ’24
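Regarding the StoreKit error above: on visionOS the instance method Product.purchase(options:) is unavailable from SwiftUI, and the compiler message points to the PurchaseAction environment value instead. Below is a minimal sketch of that pattern; the view name and the way the product is supplied are placeholders for illustration, not code from the original post.

import SwiftUI
import StoreKit

struct BuyButton: View {
    // The PurchaseAction the error message refers to, supplied by SwiftUI.
    @Environment(\.purchase) private var purchase
    let product: Product

    var body: some View {
        Button("Buy \(product.displayName)") {
            Task {
                do {
                    // Call the environment's PurchaseAction instead of product.purchase(options:).
                    let result = try await purchase(product)
                    if case .success(let verification) = result,
                       case .verified(let transaction) = verification {
                        // Grant the entitlement here, then finish the transaction.
                        await transaction.finish()
                    }
                } catch {
                    print("Purchase failed: \(error)")
                }
            }
        }
    }
}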
DragGesture for RealityView
I used the following gesture on a RealityView:

DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        print("DragGesture")
        self.dragOffset = value.translation
        self.startTimer()
    }
    .onEnded { _ in
        self.dragOffset = .zero
        self.direction = "None"
        self.stopTimer()
    }

However, due to the special nature of RealityView, the gesture can't be used normally this way, so I think some modifiers or conversion should be applied to value.translation, but I don't know which. Can you give me some pointers? Thank you. (A sketch follows this entry.)
1
0
405
Jul ’24
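A common answer to the question above is that on visionOS the drag values arrive in SwiftUI's coordinate space and need to be converted into RealityKit scene space before they are applied to an entity. The sketch below uses the targeted value's convert(_:from:to:) helper with location3D; it is a minimal illustration, and the view name and the entity setup are placeholders.

import SwiftUI
import RealityKit

struct DraggableModel: View {
    var body: some View {
        RealityView { content in
            // Add an entity with InputTargetComponent and CollisionComponent here
            // so it can receive the drag gesture.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location from SwiftUI space into RealityKit scene space (meters).
                    let scenePosition = value.convert(value.location3D, from: .local, to: .scene)
                    value.entity.setPosition(scenePosition, relativeTo: nil)
                }
        )
    }
}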
Error when using system keyboard alongside custom views in visionOS application
Hello, I am running into a bug when I try to use a TextField in my SwiftUI project. As soon as I click on the TextField to begin entering characters, this warning appears twice:

Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.

followed by this warning:

Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it.
(
    "<NSLayoutConstraint:0x600002241720 'accessoryView.bottom' _TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200.bottom == _UIRemoteKeyboardPlaceholderView:0x1038ef360.top + 86 (active)>",
    "<NSLayoutConstraint:0x60000226d540 'inputView.top' V:[_TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200]-(0)-[_UIRemoteKeyboardPlaceholderView:0x1038ef360] (active)>"
)
Will attempt to recover by breaking constraint
<NSLayoutConstraint:0x60000226d540 'inputView.top' V:[_TtGC7SwiftUIP10$1c7cfcbc018InputAccessoryHostVS_P10$1c7cfcc5417InputAccessoryBar_:0x102973200]-(0)-[_UIRemoteKeyboardPlaceholderView:0x1038ef360] (active)>
Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKitCore/UIView.h> may also be helpful.
Type: Error | Timestamp: 2024-07-31 06:30:56.177554-04:00 | Process: SG-002-Tutorial1 | Library: UIKitCore | Subsystem: com.apple.UIKit | Category: LayoutConstraints | TID: 0xfcd38

This is then followed by a series of error messages (attached as "Errors") and my application freezes. To clarify, in my project's source I am not setting any constraints or converting coordinates between views (at least not knowingly). I am going to attempt to reduce this to a simpler project that replicates the error (a minimal starting point is sketched after this entry), but I'd be thankful for any insights. I tried setting a symbolic breakpoint as suggested in the warning, but it stops in a file of assembly code I am not sure what to do with.
5
0
358
Jul ’24
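As a starting point for the reduced project mentioned above, a bare TextField inside a plain visionOS window is usually enough to check whether the keyboard warnings reproduce without any custom layout code. The view name and state property below are hypothetical scaffolding, not code from the original post.

import SwiftUI

struct KeyboardReproView: View {
    @State private var text = ""

    var body: some View {
        // No custom constraints or coordinate conversion: if the warnings still
        // appear when this field gains focus, the issue is not in app code.
        Form {
            TextField("Type here", text: $text)
        }
        .padding()
    }
}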
visionOS Beta 2 Control Center hijacking important gestures
I was surprised to find the update to the Home button for Control Center in the Beta 2 update to visionOS. It essentially cripples my app (in review), because it hijacks looking at the palm and pinching, which is a very natural position for the hand to be in if, for instance, you're holding something in it, as is done in my app. I can't imagine other apps won't want to do the same. I looked at a few other apps and noticed that Blackbox, for instance, hides the button at the beginning, but as soon as you do the gesture it comes back and there's no way to get rid of it again, so the gesture kicks you out of the app on subsequent uses. I'd like this feedback to reach the Product Development team at Apple, and I'm hoping it makes a difference in how this feature moves forward. If nothing else, I'd like to see a way to disable it in my app. If anyone else feels this way about this feature, please chime in so that we can get some eyes on it. Regards.
1
1
255
Jul ’24
Attachments with Object Tracking on visionOS 2
I have been able to get object tracking working with visionOS 2. In my RealityView, when my reference object is detected, I overlay digital content on top of it. I implement this with a Transform entity, attaching an object anchor to the entity and then placing my digital content in the scene (inside Reality Composer Pro). I now want to know whether it's possible to create attachments and attach them to the digital content (say, modelXYZ) that is spawned when the physical object is detected. If I need to write SwiftUI code that works together with my RCP scene (the one with the object-tracking content), how do I do this? Some sample code or a reference to accomplish this would be extremely helpful. (A sketch follows this entry.)
1
0
339
Jul ’24
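One way to approach the question above is RealityView's attachments closure: declare an Attachment with an identifier, then in the make (or update) closure fetch its entity and add it as a child of the model spawned by the object anchor. Below is a minimal sketch; "modelXYZ" comes from the post, while the attachment id "infoPanel", the scene name "ObjectTrackingScene", and the offset are placeholders.

import SwiftUI
import RealityKit
import RealityKitContent

struct ObjectTrackingView: View {
    var body: some View {
        RealityView { content, attachments in
            // Load the Reality Composer Pro scene that contains the object anchor.
            if let scene = try? await Entity(named: "ObjectTrackingScene", in: realityKitContentBundle) {
                content.add(scene)

                // Find the tracked model and parent the SwiftUI attachment to it.
                if let model = scene.findEntity(named: "modelXYZ"),
                   let panel = attachments.entity(for: "infoPanel") {
                    panel.position = [0, 0.2, 0]   // float the label 20 cm above the model
                    model.addChild(panel)
                }
            }
        } attachments: {
            Attachment(id: "infoPanel") {
                Text("Tracked object")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}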
[vOS2.0beta4] Compositor content always behind SwiftUI windows
Hi there, my app uses the .mixed immersion mode, with an ImmersiveSpace rendering Metal content into a compositor frame while also using windows for SwiftUI content. In the screenshot below, you can see a red outline rendered in Metal; note that the SwiftUI content is always rendered on top, even though the depth of its plane is behind the depth of the Metal content. Is this behaving as expected, or should I be hunting for a bug in my code? Thank you!
1
0
285
Jul ’24
Vision Pro not pairing with MacBook Pro
I'm having trouble pairing my Apple Vision Pro with my MacBook Pro M3. The MacBook Pro is on Sonoma 14.6, and I have tested pairing with both a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the same headsets. These are the steps I've tried on the Vision Pro and the MacBook Pro so far, with no success:
- put both on the same Windows Wi-Fi hotspot
- put both on the same iPhone hotspot
- tried another Wi-Fi hotspot
- cleared remote devices; still not recognized
- turned developer mode off and on; still nothing
- reset network settings
- restarted the headset
- restarted Xcode
- restarted the Mac (just after the restart the headset showed up; I clicked Pair and typed in the code, but the headset stayed "disconnected" and couldn't connect to the Mac)
- restarted both the Mac and the headset
- renamed the headset
- switched Macs
- tried one headset on at a time
- cleaned the build folder
- deleted the contents of ~/Library/Developer/Xcode/DerivedData
- ran sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
- deactivated the firewall
2
0
361
Aug ’24
How to convert a Point3D value obtained by a SpatialTapGesture to an Entity coordinate?
I have a visionOS app that displays a ModelEntity in a RealityView. This entity can be tapped via a SpatialTapGesture, and the gesture calls

.onEnded { event in
    let point3D = event.location3D
    // …
}

I am unable to convert point3D to the local coordinates of the entity. I wrote a little test project to investigate the situation (below). The RealityView shows a box, I can tap the visible faces, and I get point3D values that don't make much sense to me. These values are presented here. So, the main question is: how can I get the coordinates of the point tapped on the shown entity? I tried all three options for the SpatialTapGesture coordinateSpace (.local, .global, and .immersive) without success. (A sketch follows this entry.) Here is my code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let mesh = MeshResource.generateBox(width: 1, height: 0.5, depth: 0.25, splitFaces: true)
            var frontMaterial = UnlitMaterial()
            frontMaterial.color.tint = .green
            var topMaterial = UnlitMaterial()
            topMaterial.color.tint = .red
            let boxEntity = ModelEntity(mesh: mesh, materials: [frontMaterial, topMaterial])
            boxEntity.components.set(InputTargetComponent(allowedInputTypes: .all))
            boxEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [ShapeResource.generateConvex(from: mesh)])
            boxEntity.transform.translation = [0, 0, -3]
            content.add(boxEntity)
        }
        .gesture(tapGesture)
    }

    var tapGesture: some Gesture {
        SpatialTapGesture(coordinateSpace: .local)
            .targetedToAnyEntity()
            .onEnded { event in
                let point3D = event.location3D
                print(point3D)
            }
    }
}
3
0
359
Aug ’24
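For the question above, the targeted gesture value itself carries conversion helpers: rather than interpreting location3D directly, you can convert it from the gesture's coordinate space into the tapped entity's space (or into scene space). Below is a minimal sketch of the .onEnded handler; it assumes the convert(_:from:to:) overload that accepts the tapped entity as the destination space.

var tapGesture: some Gesture {
    SpatialTapGesture(coordinateSpace: .local)
        .targetedToAnyEntity()
        .onEnded { event in
            // Convert the tap location from SwiftUI's local space into the
            // coordinate system of the entity that was hit (meters, entity-relative).
            let localPoint = event.convert(event.location3D, from: .local, to: event.entity)

            // Or convert into RealityKit scene (world) space instead.
            let scenePoint = event.convert(event.location3D, from: .local, to: .scene)

            print("local:", localPoint, "scene:", scenePoint)
        }
}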
Models are too dark in Vision Pro App
I have followed the tutorial below to build a Vision Pro app. https://developer.apple.com/tutorials/develop-in-swift/create-3d-models-in-the-shared-space When I run the app in the Vision Pro simulator, the models are much darker than the sample in the last picture at the link above. How can I fix this? Do I need to create lights to illuminate the model? That was not mentioned in the tutorial. (A sketch follows this entry.)
1
0
338
Aug ’24
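If extra lighting does turn out to be needed, one hedged option is RealityKit's image-based lighting: attach an ImageBasedLightComponent sourced from an EnvironmentResource and mark the model as a receiver. The sketch below assumes an environment image asset named "Sunlight" exists in your app bundle; that name and the intensity are placeholders, and the simulator's rendering may simply differ from the tutorial screenshots.

import RealityKit

// Call this after loading `model` inside the RealityView make closure.
func addImageBasedLight(to model: Entity) async {
    // "Sunlight" is a placeholder asset name; load whatever environment image you have.
    guard let environment = try? await EnvironmentResource(named: "Sunlight", in: Bundle.main) else { return }

    // A separate entity carries the light so its intensity can be tuned independently.
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment),
                                                        intensityExponent: 1))
    model.addChild(lightEntity)

    // The model opts in to being lit by that image-based light.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}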
Developing a keyboard App
Hi guys, is there a possible method or platform for coding a precise, input-intensive virtual tool (a sort of keyboard app) on Vision Pro? I'm a little confused about whether to choose Xcode or Unity, because the app may require plenty of 3D interactions, which, as far as I'm concerned, might be complicated to build in Xcode through either volumes or spaces.
0
0
290
Aug ’24
visionOS Move Bug
I was making a gesture to let the goose (character) walk, but I have two problems. 1: I added collision and physics body components to the goose and to the entities it collides with, but those physics shapes cannot completely block the goose's way. For example, if a tree is in front of it and the goose is blocked, it will still pass through the tree or end up on top of it as long as it moves a little faster. 2: Because my knowledge is still incomplete, I can only control the goose's movement on the z-axis via an up-and-down drag (y-axis), whereas I want the user's back-and-forth drag (z-axis) to drive it. I hope you can give me some guidance (a sketch follows this entry):

gooseOriginalPosition.z + Float(translation.height / 10000)

This is the complete code:

@State var goose: Entity?
@State var isDraggingGoose = false
@State var gooseOriginalPosition = SIMD3<Float>(repeating: 0)

RealityView { content in
    if let model = try? await Entity(named: "WorldScene", in: realityKitContentBundle) {
        content.add(model)
    }
    if let gooseEntity = try? await Entity(named: "Goose", in: realityKitContentBundle) {
        gooseEntity.scale = SIMD3<Float>(repeating: 0.3)
        content.add(gooseEntity)
        goose = gooseEntity
    }
}
.simultaneousGesture(DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        handleDrag(value)
    }
    .onEnded { _ in
        isDraggingGoose = false
        gooseTimer?.invalidate()
    })

func handleDrag(_ value: EntityTargetValue<DragGesture.Value>) {
    guard let goose = goose else { return }
    if !isDraggingGoose {
        isDraggingGoose = true
        gooseOriginalPosition = goose.position(relativeTo: nil)
    }
    let translation = value.gestureValue.translation
    let newPosition = SIMD3<Float>(
        gooseOriginalPosition.x + Float(translation.width / 10000),
        gooseOriginalPosition.y,
        gooseOriginalPosition.z + Float(translation.height / 10000) // I hope the gesture here could be a z-axis drag.
    )
    goose.setPosition(newPosition, relativeTo: nil)
}
0
0
301
Aug ’24
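For the z-axis question above: the 2D translation only carries width and height, but on visionOS the drag value also exposes 3D locations that can be converted into scene space, which yields a usable depth component. Below is a hedged rework of handleDrag under that assumption; it keeps the poster's state properties and only changes how the offset is computed.

func handleDrag(_ value: EntityTargetValue<DragGesture.Value>) {
    guard let goose = goose else { return }
    if !isDraggingGoose {
        isDraggingGoose = true
        gooseOriginalPosition = goose.position(relativeTo: nil)
    }

    // Convert the start and current 3D locations into scene (world) space
    // so the drag delta is expressed in meters, including depth (z).
    let start = value.convert(value.gestureValue.startLocation3D, from: .local, to: .scene)
    let current = value.convert(value.gestureValue.location3D, from: .local, to: .scene)
    let delta = current - start

    // Drive x and z from the drag; keep the goose on the ground (y unchanged).
    let newPosition = SIMD3<Float>(gooseOriginalPosition.x + delta.x,
                                   gooseOriginalPosition.y,
                                   gooseOriginalPosition.z + delta.z)
    goose.setPosition(newPosition, relativeTo: nil)
}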
Vision framework does not work with visionOS 2.0
I'm trying the Vision framework on Vision Pro, but my requests fail only on visionOS 2.0. When I perform the requests, they do not work and the errors below are thrown. The same code works with visionOS 1.2 and with iOS 18.0 beta. I also tried the new beta API (e.g. GenerateForegroundInstanceMaskRequest), but it fails with the same error. Do you have any idea? Is any permission required to use the Vision framework on visionOS 2.0?

This is what I tried:
With visionOS 2.0 beta 4:
- GenerateForegroundInstanceMaskRequest (does not work, error 1)
- VNGenerateForegroundInstanceMaskRequest (does not work, error 1)
- VNRecognizeTextRequest (does not work, error 2)
With visionOS 1.2:
- VNRecognizeTextRequest (works)
With iOS 18 beta:
- GenerateForegroundInstanceMaskRequest (works)

My development environments:
Env 1: Vision Pro on visionOS 2.0 beta 4; Xcode 16.0 beta 4 / 16.0 beta 2; macOS 14.5 (23F79)
Env 2: Vision Pro on visionOS 1.2; Xcode 15.4; macOS 14.5 (23F79)

Error 1:
Error Domain=com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)" UserInfo={NSLocalizedDescription=Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)}

Error 2:
Error Domain=com.apple.Vision Code=11 "VNRecognizeTextRequest produced an internal error" UserInfo={NSLocalizedDescription=VNRecognizeTextRequest produced an internal error, NSUnderlyingError=0x3001f6850 {Error Domain=CRImageReaderErrorDomain Code=-5 "Unknown error" UserInfo={NSLocalizedDescription=Unknown error}}}
8
0
723
Aug ’24
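For reference, this is roughly the classic VNRecognizeTextRequest flow that error 2 above comes from; it requires no special permission on other platforms, which supports the suspicion that the failure is specific to the visionOS 2.0 beta rather than to app code. The CGImage input here is a placeholder.

import Vision

// A minimal text-recognition pass over a CGImage (the image itself is a placeholder).
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Each observation carries ranked candidate strings; take the best one.
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}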
Integrating Apple Watch Health Data with Vision Pro
Hi everyone, I have a question regarding the integration of Apple Watch and Vision Pro. Is it possible to connect an Apple Watch to Vision Pro to access health data and display it within Vision Pro applications? If so, could you provide some guidance or point me towards relevant resources or APIs that would help in achieving this? Thank you in advance for your assistance!
4
0
490
Aug ’24
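Regarding the question above: health samples recorded by Apple Watch are generally read through HealthKit rather than by pairing the watch to the headset directly, and HealthKit support was announced for visionOS 2, so check availability on your target OS version. Below is a hedged sketch of reading the most recent heart-rate sample; it assumes HealthKit is available and that the HealthKit entitlement and usage description are already configured.

import HealthKit

let healthStore = HKHealthStore()
let heartRateType = HKQuantityType(.heartRate)

// Fetch the most recent heart-rate sample (synced from Apple Watch via the Health store).
func latestHeartRate() async throws -> Double? {
    try await healthStore.requestAuthorization(toShare: [], read: [heartRateType])

    return try await withCheckedThrowingContinuation { continuation in
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
        let query = HKSampleQuery(sampleType: heartRateType,
                                  predicate: nil,
                                  limit: 1,
                                  sortDescriptors: [newestFirst]) { _, samples, error in
            if let error {
                continuation.resume(throwing: error)
                return
            }
            let bpm = (samples?.first as? HKQuantitySample)?
                .quantity.doubleValue(for: HKUnit.count().unitDivided(by: .minute()))
            continuation.resume(returning: bpm)
        }
        healthStore.execute(query)
    }
}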
Requesting modelEntity from Photogrammetry complains about missing metallib in bundle
Hi! I've adapted the Mac photogrammetry sample to iOS, and it works great. However, when I request a modelEntity, the completion callback doesn't get called (the other requests, like model file, poses, and point cloud, work fine), and "Could not locate file 'default-binaryarchive.metallib' in bundle." is printed to the console. Are the two related? Should I be getting a modelEntity result at all? I'm using the "Rock" images from the Mac sample code. (A sketch of the request flow follows this entry.)
0
1
262
Aug ’24
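For context on the post above, here is a minimal sketch of how a .modelEntity request is typically structured with PhotogrammetrySession, under the assumption that the request is supported on the target platform; the folder URL and detail level are placeholders. It won't by itself explain the metallib warning, but it can help confirm whether a .requestError is being delivered instead of the completion you expect.

import RealityKit

func buildModelEntity(from imagesFolder: URL) async throws -> ModelEntity? {
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelEntity(detail: .reduced)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .modelEntity(let entity)):
            return entity                      // the in-memory ModelEntity result
        case .requestError(_, let error):
            throw error                        // surfaces failures that never reach the completion path
        default:
            break                              // progress, poses, point cloud, etc.
        }
    }
    return nil
}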