Discuss Spatial Computing on Apple Platforms.

Post

Replies

Boosts

Views

Activity

3DoF Tracking on Vision Pro
Hello everyone, it seems that Vision Pro supports 6DoF tracking, but is it possible to switch to 3DoF tracking? The reason for my question is that I would like to use the headset while riding in a car, but 6DoF tracking does not work well in that situation. I was wondering if switching to 3DoF tracking might solve the issue. Note: I am using Travel Mode.
0
0
322
Jun ’24
Spatial Video Capturing on iPhone 15 Pro
Hi all, I tried isSpatialVideoCaptureEnabled with AVCaptureMovieFileOutput, as mentioned in the WWDC24 session "Build compelling spatial photo and video experiences", and it works. But I have some issues and questions. In the code below, change.newValue is always nil, so the observation doesn't seem to work:

let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons) { (device, change) in
    guard let newValue = change.newValue else { return }
    if newValue.contains(.subjectTooClose) {
        // Guide user to move back
    }
    if newValue.contains(.notEnoughLight) {
        // Guide user to find a brighter environment
    }
}

AVCaptureMovieFileOutput supports spatial video capture. May I ask whether AVCaptureVideoDataOutput will also support spatial video capture?
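A likely cause, sketched below: this is standard Swift KVO behavior rather than anything specific to the spatial capture API. observe(_:changeHandler:) without options does not populate change.newValue; passing options: [.new] (and optionally .initial) delivers the new value. A minimal sketch, reusing the videoDevice from the post (keep a strong reference to the returned observation):

let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons,
                                      options: [.initial, .new]) { device, change in
    // With .new requested, change.newValue carries the updated set of reasons.
    guard let reasons = change.newValue else { return }
    if reasons.contains(.subjectTooClose) {
        // Guide the user to move back.
    }
    if reasons.contains(.notEnoughLight) {
        // Guide the user to find a brighter environment.
    }
}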
1
2
360
Jul ’24
3D Object Capture not working on iPhone 12 Pro
The 3D Object Capture feature doesn't seem to work on my iPhone 12 Pro. The circle that is supposed to appear when you begin to move around the object never shows up, so object capture doesn't even begin. It just says "more light" or "move closer", but this doesn't happen on my iPhone 14 Pro; it works perfectly fine there, even with the same lighting. How can this be fixed?
1
0
421
Jul ’24
How to set the world alignment to gravity and heading for RoomPlan?
So in the WWDC23 video on the RoomPlan enhancements, it says that it is now possible to pass a custom ARSession to the RoomCaptureSession. But how do you actually set the configuration for that custom ARSession?

init() {
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.worldAlignment = .gravityAndHeading

    arSession = ARSession()
    roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42), arSession: arSession)
    sessionConfig = RoomCaptureSession.Configuration()

    roomCaptureView.captureSession.delegate = self
    roomCaptureView.delegate = self
}

However, I keep getting an error that self is being used in a property access before being initialised. What can I do to fix it?
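The compile error comes from touching self (the delegate assignments) before every stored property is initialized. A hedged sketch of the ordering fix is below; the class name and start() method are illustrative, the delegate assignments are left commented because the real conforming types live in your project, and whether RoomPlan fully preserves a custom world alignment set this way is an assumption to verify:

import ARKit
import RoomPlan
import UIKit

final class RoomScanController: NSObject {
    let arSession: ARSession
    let roomCaptureView: RoomCaptureView
    let sessionConfig: RoomCaptureSession.Configuration

    override init() {
        // 1. Give every stored property a value first.
        arSession = ARSession()
        roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
        sessionConfig = RoomCaptureSession.Configuration()
        super.init()

        // 2. Only now is it safe to reference self, e.g. for delegate assignments:
        // roomCaptureView.captureSession.delegate = self
        // roomCaptureView.delegate = self
    }

    func start() {
        // One possibility (untested assumption): run the shared ARSession yourself with
        // the desired alignment, then start the room capture on top of it.
        let arConfig = ARWorldTrackingConfiguration()
        arConfig.worldAlignment = .gravityAndHeading
        arSession.run(arConfig)
        roomCaptureView.captureSession.run(configuration: sessionConfig)
    }
}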
1
0
430
Jul ’24
EnvironmentResource.generate(fromEquirectangular:) does not compile with Xcode 16.0 beta 2
Hi! It seems that Xcode 16 beta 2 thinks that EnvironmentResource.generate(fromEquirectangular:) is unavailable even when the minimum deployment target remains visionOS 1.0 (it is deprecated, but Xcode reports a build error rather than a warning). The only way I was able to keep this in place for visionOS 1.x while compiling with Xcode 16 was the following:

var environment: EnvironmentResource?
if #available(visionOS 2.0, *) {
    environment = try? await EnvironmentResource(equirectangular: skyBoxWithSun())
} else {
    fatalError("EnvironmentResource.generate(fromEquirectangular:) does not compile with Xcode 16.0 beta 2.")
}
#else
let environment = try? await EnvironmentResource.generate(fromEquirectangular: skyBoxWithSun())
#endif

This builds with both Xcode 15.4 and 16 beta 2, but obviously crashes when built with Xcode 16 and run on visionOS 1.x. Do I have any better options? I would like to adopt some visionOS 2.0 features (e.g. replace my custom skybox with the new dynamic lighting) but prefer to maintain backward compatibility for now.
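One way to keep both toolchains building without the fatalError is to split at compile time on the compiler version and at run time on the OS version. The sketch below assumes Xcode 16 means a Swift 6 compiler (an assumption worth checking against your toolchain), keeps the poster's skyBoxWithSun() helper, and simply returns nil on the Xcode 16 / visionOS 1.x path, where the deprecated API is rejected at compile time:

import RealityKit

func loadSkyboxEnvironment() async -> EnvironmentResource? {
#if compiler(>=6.0)
    if #available(visionOS 2.0, *) {
        return try? await EnvironmentResource(equirectangular: skyBoxWithSun())
    } else {
        // Built with Xcode 16 but running on visionOS 1.x: no image-based
        // environment on this path, instead of crashing.
        return nil
    }
#else
    // Built with Xcode 15.x: the original API still compiles.
    return try? await EnvironmentResource.generate(fromEquirectangular: skyBoxWithSun())
#endif
}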
1
0
368
Jul ’24
Can't Get OrbitAnimation() to work on my project
DESCRIPTION OF PROBLEM
I have an Apple Vision Pro App Store app called Starship SE Corps. I'm trying to add an animation so that the starship entity orbits the Earth entity, using OrbitAnimation as discussed in the WWDC23 session "Build Spatial Experiences with RealityKit" (https://developer.apple.com/wwdc23/10080). However, I can't get the animation to work.

STEPS TO REPRODUCE
I created a sample test app called "SampleOrbitAnimationApp" to focus on the code I'm having trouble with. When I build and run it, the app runs on both the visionOS 1.2 simulator and on my real Apple Vision Pro device running visionOS 1.2. However, my starship entity is static and never animates or orbits around my Earth entity.

I tried putting my OrbitAnimation code in the RealityView update: closure. Doing that, however, causes scope errors, because the entity I refer to in the OrbitAnimation code is created inside the RealityView code block, so the update: closure can't see that entity property. Making the entity reference more global at the top of the ImmersiveView (so the update: closure can see it) causes other parameter issues in the .app file's call to the ImmersiveView and in the #Preview call to the ImmersiveView. Maybe that is expected and I need to work around it, but I couldn't find a sensible way to do so. If this is the right approach, I need help on how to resolve it across the project files.

I did find some example code online where a developer put the OrbitAnimation code directly in the RealityView code block without having an update: or attachments: closure at all. I tried that approach but couldn't get it to work either.

The sample test app targets the OrbitAnimation and ImmersiveView code I'm struggling with (i.e. I can't get the starship to move and orbit around the Earth). It uses my same production app package for the Starship and Earth entities, built in Reality Composer Pro. Those entities, included in my sample test app, work fine in my latest production App Store release, so I think they are fine; the issue is how to write the OrbitAnimation code for them. I realize new capabilities are coming in visionOS 2, but I would like to make OrbitAnimation work now in my visionOS 1.2 app.
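For reference, a minimal self-contained sketch of the WWDC23 pattern, written entirely inside the RealityView make closure so no update: closure or shared entity property is needed. The primitive sphere/box meshes, positions and durations here are placeholders standing in for the Reality Composer Pro entities; the key points are that the orbiting entity is a child of the entity it orbits and that the animation is played right after the entities are added:

import SwiftUI
import RealityKit

struct OrbitImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Placeholder entities; load your Earth and starship from your RCP package instead.
            let earth = ModelEntity(mesh: .generateSphere(radius: 0.2))
            let starship = ModelEntity(mesh: .generateBox(size: 0.05))
            starship.position = [0.5, 0, 0]   // orbit radius, expressed in Earth's space
            earth.addChild(starship)          // the orbit is relative to the parent entity
            content.add(earth)

            let orbit = OrbitAnimation(name: "Orbit",
                                       duration: 30,
                                       axis: [0, 1, 0],
                                       startTransform: starship.transform,
                                       bindTarget: .transform,
                                       repeatMode: .repeat)
            if let animation = try? AnimationResource.generate(with: orbit) {
                starship.playAnimation(animation)
            }
        }
    }
}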
9
0
667
Jul ’24
App Environment SkyDome's UV values
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318. Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (half way up the texture map). Is this the standard UV mapping for half a SkyDome? It has caused some issues when I've applied some HDRIs.
1
0
367
Jul ’24
Animation with TabView
In visionOS, the tab bar of a TabView sits outside the window by default. If I switch from a page without a TabView to a page that needs a TabView in my app, the tab bar suddenly appears on the left side of the screen without any animation. I would like it to animate in (for example easeIn, or a move transition). I tried adding animation-related modifiers such as .animation on the Tab and on the View, but the tab bar itself never animates; only the view inside the tab gets an animation effect, which is not what I want. What I want is for the tab bar outside the window to animate. What should I do?
1
0
480
Jul ’24
MTKView is now available on visionOS but isn't working on visionOS 1.x
Hello! I noticed that after WWDC24, support was added for MTKView on visionOS 1.0+. This is great! But when I use an MTKView on anything before visionOS 2.0, it doesn't work and the app ends up crashing. Console error when running on a device on visionOS 1.2:

Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC
Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI

It looks like MTKView may be using Compositor Services under the hood? Any help would be great. Thank you!
3
2
467
Jul ’24
RealityKit Subdivide
In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details on how this works in RealityKit?
3
1
430
Jul ’24
VisionOS 2 Beta crash - doesNotRecognizeSelector plane
In Xcode 16 beta 1 and 3, when running a visionOS 2 simulator with a SwiftUI app that ran successfully on visionOS 1, I get the following crash at startup: Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!" I've gone through my code looking for any references to a plane selector, but I have no such calls, leading me to suspect that this is somehow related to the visionOS 2 beta simulator. Has anyone else run into this bug and worked around it somehow?
2
1
489
Jul ’24
Create 3D models with Object Capture on iPhone vs. on Mac
1. After generating a model from the photos taken on the phone and comparing it with the highest-accuracy (.raw) model generated on the Mac from the same set of data, the results differ: sometimes the on-device model is more accurate, and sometimes the Mac model is. What is the difference between the two? According to WWDC23, the Mac can generate a higher-precision model, but in actual testing the completeness of the Mac result is sometimes worse than the phone's. Why is that?
2. Is it possible to set the accuracy of the model generated on the phone?
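For what it's worth, a minimal sketch of a Mac-side PhotogrammetrySession run with an explicit detail level is below. The input and output paths are placeholders, and the comment about which detail levels are Mac-only reflects my understanding rather than a confirmed limit:

import RealityKit

func reconstructModel() throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)  // placeholder
    let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")                   // placeholder

    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)

    Task {
        do {
            for try await output in session.outputs {
                switch output {
                case .processingComplete:
                    print("Reconstruction finished")
                case .requestError(_, let error):
                    print("Request failed: \(error)")
                default:
                    break
                }
            }
        } catch {
            print("Output stream failed: \(error)")
        }
    }

    // .full and .raw are, as far as I know, only available on the Mac;
    // on-device reconstruction supports the lower detail levels.
    try session.process(requests: [.modelFile(url: outputURL, detail: .full)])
}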
0
0
451
Jul ’24
Drag Gestures
Dear all, I am experiencing some problems with the drag gesture in visionOS. Typically, this gesture involves the user pinching an entity, or more commonly a window, and moving/dragging it around. However, this is not always the case for entities (3D models) placed in the environment: the user can both pinch-and-drag and/or move the entity directly with their bare hands. In the latter case, the onChanged cycle doesn't always end if the user keeps their hands near the object, causing it to keep moving even when that is not what the user intends. This also occurs when the user is no longer hovering over the entity. Larger entities close to the user (more so than those in the "TransformingRealityKitEntitiesUsingGestures" demo) seem to become attached to their hands, causing the gesture to continue indefinitely, and entities often end up in unintended positions. I believe these two different behaviors within the same gesture container are intrinsically different: one involves pinching and dragging, while the other involves hand physics, and it should be easy to distinguish between the two. How can we correctly address this situation? Thank you for your assistance.
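For reference, a minimal sketch of the gesture container being discussed (the indirect pinch-and-drag path). The entity, its size, and the view name are placeholders; the entity needs an InputTargetComponent and collision shapes to receive the gesture at all:

import SwiftUI
import RealityKit

struct DraggableEntityView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2))   // placeholder entity
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the gesture location into the entity's parent space.
                    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
                }
                .onEnded { _ in
                    // The pinch-and-drag interaction ends here; persist state if needed.
                }
        )
    }
}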
3
0
369
Jul ’24
Misaligned depth and RGB images from TrueDepth VGA streaming
I'm currently streaming synchronised video and depth data from my iPhone 13 using AVFoundation, with the video set to AVCaptureSession.Preset.vga640x480. When looking at the corresponding images (with depth values mapped to a grey colour map; both the map and the image are 640x480), it appears the two feeds have different fields of view, with the depth feed zoomed in and angled upwards, and the colour feed more zoomed out. I've looked at the intrinsics from both the depth map and my colour sample buffer, and they are identical. Does anyone know why this might be? My setup code is below (shortened):

import AVFoundation
import CoreVideo

class VideoCaptureManager {
    private enum SessionSetupResult {
        case success
        case notAuthorized
        case configurationFailed
    }

    private enum ConfigurationError: Error {
        case cannotAddInput
        case cannotAddOutput
        case defaultDeviceNotExist
    }

    private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTrueDepthCamera],
                                                                               mediaType: .video,
                                                                               position: .front)

    private let session = AVCaptureSession()
    public let videoOutput = AVCaptureVideoDataOutput()
    public let depthDataOutput = AVCaptureDepthDataOutput()
    private var outputSynchronizer: AVCaptureDataOutputSynchronizer?
    private var videoDeviceInput: AVCaptureDeviceInput!

    private let sessionQueue = DispatchQueue(label: "session.queue")
    private let videoOutputQueue = DispatchQueue(label: "video.output.queue")

    private var setupResult: SessionSetupResult = .success

    init() {
        sessionQueue.async {
            self.requestCameraAuthorizationIfNeeded()
        }
        sessionQueue.async {
            self.configureSession()
        }
        sessionQueue.async {
            self.startSessionIfPossible()
        }
    }

    private func requestCameraAuthorizationIfNeeded() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            break
        case .notDetermined:
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in
                if !granted {
                    self.setupResult = .notAuthorized
                }
                self.sessionQueue.resume()
            })
        default:
            setupResult = .notAuthorized
        }
    }

    private func configureSession() {
        if setupResult != .success {
            return
        }

        let defaultVideoDevice: AVCaptureDevice? = videoDeviceDiscoverySession.devices.first

        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            setupResult = .configurationFailed
            return
        }

        do {
            videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            setupResult = .configurationFailed
            return
        }

        session.beginConfiguration()
        session.sessionPreset = AVCaptureSession.Preset.vga640x480

        guard session.canAddInput(videoDeviceInput) else {
            print("Could not add video device input to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        session.addInput(videoDeviceInput)

        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
            if let connection = videoOutput.connection(with: .video) {
                connection.isCameraIntrinsicMatrixDeliveryEnabled = true
            } else {
                print("Cannot setup camera intrinsics")
            }
            videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        } else {
            print("Could not add video data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        if session.canAddOutput(depthDataOutput) {
            session.addOutput(depthDataOutput)
            depthDataOutput.isFilteringEnabled = false
            if let connection = depthDataOutput.connection(with: .depthData) {
                connection.isEnabled = true
            } else {
                print("No AVCaptureConnection")
            }
        } else {
            print("Could not add depth data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        let depthFormats = videoDevice.activeFormat.supportedDepthDataFormats
        let filtered = depthFormats.filter({
            CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16
        })
        let selectedFormat = filtered.max(by: { first, second in
            CMVideoFormatDescriptionGetDimensions(first.formatDescription).width < CMVideoFormatDescriptionGetDimensions(second.formatDescription).width
        })

        do {
            try videoDevice.lockForConfiguration()
            videoDevice.activeDepthDataFormat = selectedFormat
            videoDevice.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        session.commitConfiguration()
    }

    private func addVideoDeviceInputToSession() throws {
        do {
            var defaultVideoDevice: AVCaptureDevice?
            defaultVideoDevice = AVCaptureDevice.default(
                .builtInTrueDepthCamera,
                for: .depthData,
                position: .front
            )

            guard let videoDevice = defaultVideoDevice else {
                print("Default video device is unavailable.")
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.defaultDeviceNotExist
            }

            let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)

            if session.canAddInput(videoDeviceInput) {
                session.addInput(videoDeviceInput)
            } else {
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.cannotAddInput
            }
        }
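One thing worth checking, sketched below as a diagnostic rather than a fix: instead of comparing the intrinsics attached to the video sample buffer with those of the depth map, inspect the AVCameraCalibrationData that arrives with each AVDepthData. Its intrinsics are expressed at intrinsicMatrixReferenceDimensions (typically the full sensor size), so identical matrices do not necessarily mean identical fields of view at 640x480, and cameraCalibrationData may be nil depending on configuration:

import AVFoundation

func logCalibration(for depthData: AVDepthData) {
    guard let calibration = depthData.cameraCalibrationData else {
        print("No camera calibration data attached to this AVDepthData")
        return
    }
    let K = calibration.intrinsicMatrix                       // 3x3, in reference-dimension pixels
    let reference = calibration.intrinsicMatrixReferenceDimensions
    print("fx: \(K.columns.0.x), fy: \(K.columns.1.y), cx: \(K.columns.2.x), cy: \(K.columns.2.y)")
    print("Reference dimensions: \(reference)")
    print("Extrinsics: \(calibration.extrinsicMatrix)")
}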
0
0
378
Jul ’24
PortalComponent Clipping Behavior
Hello, I'm experimenting with the PortalComponent and clipping behaviors. My belief was that, with some arbitrary plane mesh, I could have the entire contents of a single world entity that has a PortalCrossingComponent clipped to the boundaries of the plane mesh. Instead, what I seem to be experiencing is that the mesh in the target world of the portal actually displays outside the plane boundaries. I've attached a video that shows the boundaries of my world escaping the portal clipping/transition plane. It also shows how, when I navigate below a certain threshold in the scene, I can see what appears to be the "clipped" world (here it is obvious what the dimensions of the clipping plane are), but when I move above a certain level, the world contents "escape" the clipping behavior. https://scale-assembly-dropbox.s3.amazonaws.com/clipping.mov (I would have made the above a link, but it is not a permitted domain; you can follow it to see the behavior, though.) It almost seems as if anything with a PortalCrossingComponent is allowed to appear in the PortalComponent's parent scene, rather than being clipped by the PortalComponent's boundary. For reference, the code I'm using is almost identical to the sample code in this document: https://developer.apple.com/documentation/realitykit/portalcomponent with the caveat that I'm using a plane that has .positiveY clipping and portal crossing behaviors, and the clipping plane mesh is as seen in the video. Do I misunderstand how PortalComponent is meant to be used? Or is there a bug in how it currently behaves?
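For context, a hedged sketch of the setup being described, mirroring the PortalComponent documentation sample the post links to but with the post's .positiveY plane. The entity names, meshes and sizes are placeholders, and the exact clipping/crossing initializer labels are assumed from that doc sample:

import RealityKit

func makePortalScene() -> Entity {
    let root = Entity()

    // Content rendered "inside" the portal lives under an entity with WorldComponent.
    let world = Entity()
    world.components.set(WorldComponent())

    // Only content that opts in via PortalCrossingComponent should be able to poke out.
    let crossingContent = ModelEntity(mesh: .generateSphere(radius: 0.1))   // placeholder
    crossingContent.components.set(PortalCrossingComponent())
    world.addChild(crossingContent)

    // The portal itself: a plane mesh with PortalMaterial, clipping and crossing on +Y.
    let portal = ModelEntity(mesh: .generatePlane(width: 1, depth: 1),
                             materials: [PortalMaterial()])
    portal.components.set(PortalComponent(target: world,
                                          clippingMode: .plane(.positiveY),
                                          crossingMode: .plane(.positiveY)))

    root.addChild(world)
    root.addChild(portal)
    return root
}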
2
0
329
Jul ’24
Problem with WWDC24 barcode/QR scanning example code
I have an Enterprise Developer Account and the managed entitlement com.apple.developer.arkit.barcode-detection.allow, and I'm using the code from WWDC24's spatial barcode & QR code scanning example. When I run my project, the BarcodeDetectionProvider is created fine, but the loop at (for await anchorUpdate in barcodeDetection.anchorUpdates) never delivers anything. I have tried calling it several times, but it doesn't help. I call this func startBarcodeScanning from my ContentView:

var barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
var arkitSession = ARKitSession()

public func startBarcodeScanning() {
    Task {
        do {
            barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
            await arkitSession.queryAuthorization(for: [.worldSensing])

            do {
                try await arkitSession.run([barcodeDetection])
                print("arkitSession.run([barcodeDetection])")
            } catch {
                return
            }

            for await anchorUpdate in barcodeDetection.anchorUpdates {
                switch anchorUpdate.event {
                case .added:
                    print("addEntity(myAnchor: anchorUpdate.anchor)")
                    addEntity(myAnchor: anchorUpdate.anchor)
                case .updated:
                    print("UpdateEntity")
                    updateEntity(myAnchor: anchorUpdate.anchor)
                case .removed:
                    print("RemoveEntity")
                    removeEntity()
                }
            }
            //await loadInfo()
        }
    }
}
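A hedged debugging sketch, assuming the provider itself is the right one: check isSupported, request (not just query) world-sensing authorization and verify the result, and watch the session's event stream, which often explains why a provider stops delivering anchor updates. The function name is illustrative and the entity-handling from the post is replaced with prints:

import ARKit

func runBarcodeDetection() async {
    guard BarcodeDetectionProvider.isSupported else {
        print("Barcode detection is not supported here (entitlement or device issue)")
        return
    }

    let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
    let arkitSession = ARKitSession()

    let authorization = await arkitSession.requestAuthorization(for: [.worldSensing])
    guard authorization[.worldSensing] == .allowed else {
        print("World sensing not authorized: \(String(describing: authorization[.worldSensing]))")
        return
    }

    Task {
        for await event in arkitSession.events {
            print("ARKitSession event: \(event)")   // authorization or provider state changes
        }
    }

    do {
        try await arkitSession.run([barcodeDetection])
        for await update in barcodeDetection.anchorUpdates {
            print("Barcode anchor update: \(update.event)")
        }
    } catch {
        print("ARKitSession.run failed: \(error)")
    }
}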
0
0
296
Jul ’24
Convert .reality into USDZ
Hey everyone, this is my first post here in the Apple forums. I need your help to better understand RealityKit and file exports, so let me explain. I'm trying to create a little 3D object editor, and it seems to work pretty well using RealityViews and managing materials on the Entity. I'm currently working with the beta APIs and I would like to export my entity into a .usdz or an .obj file. I've found a method that allows me to create a .reality file:

let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("model.reality")
try await self.appState.parentEntity.write(to: path)

but now I don't know how to convert it into a .usdz or an .obj file, or any other standard 3D format. Do you have any idea how I could do that? Thank you so much! Have a nice day ^^
1
0
389
Jul ’24