Discuss Spatial Computing on Apple Platforms.

Post

Replies

Boosts

Views

Activity

Can’t Figure Out How to Get My Earth Entity to Rotate on its Axis
I can‘t Figure Out How to Get My Earth Entity to Rotate on its Axis. This is a follow up post from a previous Apple Developer forum post. How would I have the earth (parent) entity rotate CCW underneath the orbiting starship child? I tried adding the following code block to the RealityView but it is not working: if let rotatingEarth = starshipEntity.findEntity(named: "Earth") { rotatingEarth.transform.rotation = simd_quatf.init(angle: 360, axis: SIMD3(x: 0, y: 1, z: 0)) if let animation = try? AnimationResource.generate(with: rotatingEarth as! AnimationDefinition) { rotatingEarth.playAnimation(animation) } } Any advice on getting the earth to rotate? I tried reviewing the Hello World WWDC23 project code, but I was unable to understand the complexity and how that sample project got the earth to rotate. i want to do this for visionOS 1.2. I realize there are some new animation and possible other capabilities coming up in vision 2.0 but I want to try to address this issue in the current released visionOS version.
5
0
502
Jul ’24
Building for 'iOS', but linking in object file built for 'visionOS'
I have an application made from Flutter, which is possible to run on VisionOS by running as design to Ipad, and I would like that inside this application would be possible to go to mixed reality somehow. I am trying to do so far was to embedded the vision project that I have inside the swift application that flutter generates, but in this attempt I got an error from Xcode telling me that this way is not possible. I wonder if is there an another way that I could achieve my goal?
2
0
412
Jul ’24
Unable to display contextMenu
This is a visionOS App. I added contextMenu under a combination view, but when I pressed the view for a long time, there was no response. I tried to use this contextMenu in other views, which can be used normally, so I think there is something wrong with this combination view, but I don't know what the problem is. I hope you can remind me. Thank you! Views with problems: struct NAMEView: View { @StateObject private var placeStore = PlaceStore() var body: some View { ZStack { Group { HStack(spacing: 2) { Image(systemName: "mappin.circle.fill") .font(.system(size: 50)) .symbolRenderingMode(.multicolor) .accessibilityLabel("your location") .accessibilityAddTraits([.isHeader]) .padding(.leading, 5.5) VStack { Text("\(placeStore.locationName)") .font(.title3) .accessibilityLabel(placeStore.locationName) Text("You are here in App") .font(.system(size: 13)) .foregroundColor(.secondary) .accessibilityLabel("You are here in App") } .hoverEffect { effect, isActive, _ in effect.opacity(isActive ? 1 : 0) } .padding() } } .onAppear { placeStore.updateLocationName() } .glassBackgroundEffect() .hoverEffect { effect, isActive, proxy in effect.clipShape(.capsule.size( width: isActive ? proxy.size.width : proxy.size.height, height: proxy.size.height, anchor: .leading )) .scaleEffect(isActive ? 1.05 : 1.0) } } } }
1
0
353
Jul ’24
visionOS 2—Immersive Space/GroupActivity Issue
Platform and Version Development Environment: Xcode 16 Beta 3 visionOS 2 Beta 3 Description of Problem I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight. When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, the immersive space can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This behavior is particularly concerning because the immersive space is not progressive and should not work with the Digital Crown being turned. It is important to note that this problem is only present when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected, and the issue does not occur. Steps to Reproduce Build a project that includes a fully immersive space and incorporates GroupActivity support. Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method. Upload the build to TestFlight. Connect to a FaceTime call. Open the app and enter a immersive space then press the button to activate the Group Activity.
0
0
438
Jul ’24
Remote spatial images
Hello 👋 following questions: I am using a Simulator with VisionOS 2.0 installed on. I am trying to display a remote spatial image. But I cannot display it. I am trying to use the new updates form Webkit (https://webkit.org/blog/15443/news-from-wwdc24-webkit-in-safari-18-beta/#spatial-media) and show the image in a webview. But I cannot make it work. The image is not shown. In the native version I thought about the new quicklook features that would help to display the spatial media. But I think this is also just for local files. Right? I downloaded the file before but no success. Any Ideas how I can display remote spatial images?
0
0
338
Jul ’24
Failed loading .usda/.usdz from RealityKitContent package
I was trying to load an Entity by Entity(named: sceneName, in: realityKitContentBundle), which works for many of my .usda file. But this time I got an error: Error loading asset from scene PinballTable.usda: failed to load '7058602595919186152 Scene (RealityFileAsset)Bundle/RealityKitContent-RealityKitContent-resources/RealityKitContent.reality/Scene_14.compiledscene' (Asset provider load failed: type 'RealityFileAsset' -- Failed to load compiled data for asset path 'Scene_14.compiledscene', due to error: Failed to deserialize asset data.) Any ideas on why this won't work? I have checked the size of my .usda file it's around 42kb so I won't think it's sake of file size. Due to many .usda reference inside of this scene, I suspect that it might be the case the bundle cannot locate other usda reference. So I export the whole scene into .usdz file and it turns to 118kb. Wonder if this could be the only issue here that affect the loading result but this is what I have tried so far. visionOS System: visionOS beta 2/simulator 1.1 (neither works) XCode: 15.4/16.0 beta (neither works)
1
0
471
Jul ’24
Convert .reality into USDZ
Hey Everyone, this is like my first post here in the apple forum. I need your help to understand better Reality Kit and file exports, but let me explain. I'm trying to create a little 3D Object editor, and it looks like to work pretty well using RealityViews and managing materials on the Entity. I'm currently working with all the Beta Apis and I would like to export my entity into an .usdz or a .obj file. I've found a method that allows me to create a .Reality File let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("model.reality") try await self.appState.parentEntity.write(to: path) but I now I don't know how to convert it into a .usdz or a .obj file, or otherwise any standard 3d format. Do you have any idea on how could I do? Thankyou so much! Have a nice day ^^
1
0
390
Jul ’24
WWDC24 QR Scan Issue Code have a problem
I had got the Enterprise Developer Account , manage entitlements(com.apple.developer.arkit.barcode-detection.allow) Use WWDC24‘s Spatial barcode & QR code scanning example‘s code. When I run my project, my BarcodeDetectionProvider is ok, but at(for await anchorUpdate in barcodeDetection.anchorUpdates) is break ,I try more times call them ,but is useless. @Example I call this func startBarcodeScanning at ContentView var barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce]) var arkitSession = ARKitSession() public func startBarcodeScanning() { Task { do { barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce]) await arkitSession.queryAuthorization(for: [.worldSensing]) do { try await arkitSession.run([barcodeDetection]) print("arkitSession.run([barcodeDetection])") } catch { return } for await anchorUpdate in barcodeDetection.anchorUpdates { switch anchorUpdate.event { case .added: print("addEntity(myAnchor: anchorUpdate.anchor)") addEntity(myAnchor: anchorUpdate.anchor) case .updated: print("UpdateEntity") updateEntity(myAnchor: anchorUpdate.anchor) case .removed: print("RemoveEntity") removeEntity() } } //await loadInfo() } } }
0
0
297
Jul ’24
PortalComponent Clipping Behavior
Hello, I'm experimenting with the PortalComponent and clipping behaviors. My belief was that, with some arbitrary plane mesh, I could have the entire contents of a single world entity that has a PortalCrossingComponent clipped to the boundaries of the plane mesh. Instead, what I seem to be experiencing is that the mesh in the target world of the portal will actually display outside the plane boundaries. I've attached a video that shows the boundaries of my world escaping the portal clipping / transition plane, and also show how, when I navigate below a certain threshold in the scene, I can see what appears to be the "clipped" world ( here, it is obvious to see the dimensions of the clipping plane ), but when I move above a certain level, it appears that the world contents "escape" the clipping behavior. https://scale-assembly-dropbox.s3.amazonaws.com/clipping.mov ( I would have made the above a link but it is not a permitted domain - you can follow that link to see the behavior tho ) It almost seems as if "anything" with PortalCrossingComponent is allowed to appear in the PortalComponent 's parent scene, rather than being clipped by the PortalComponent 's boundary. For reference, the code I'm using is almost identical to the sample code in this document: https://developer.apple.com/documentation/realitykit/portalcomponent with the caveat that I'm using a plane that has .positiveY clipping and portal crossing behaviors, and the clipping plane mesh is as seen in the video. Do I misunderstand how PortalComponent is meant to be used? Or is there a bug in how it currently behaves?
2
0
331
Jul ’24
Misaligned depth and rgb image truedepth from vga streaming
I'm currently streaming synchronised video and depth data from my iPhone 13, using AVFoundation, video set to AVCaptureSession.Preset.vga640x480. When looking at the corresponding images (with depth values mapped to a grey colour map), (both map and image are of size 640x480) it appears the two feeds have different fields of view, with the depth feed zoomed in and angled upwards, and the colour feed more zoomed out. I've looked at the intrinsics from both the depth map, and my colour sample buffer, they are identical. Does anyone know why this might be? My setup code is below (shortened): import AVFoundation import CoreVideo class VideoCaptureManager { private enum SessionSetupResult { case success case notAuthorized case configurationFailed } private enum ConfigurationError: Error { case cannotAddInput case cannotAddOutput case defaultDeviceNotExist } private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTrueDepthCamera], mediaType: .video, position: .front) private let session = AVCaptureSession() public let videoOutput = AVCaptureVideoDataOutput() public let depthDataOutput = AVCaptureDepthDataOutput() private var outputSynchronizer: AVCaptureDataOutputSynchronizer? private var videoDeviceInput: AVCaptureDeviceInput! private let sessionQueue = DispatchQueue(label: "session.queue") private let videoOutputQueue = DispatchQueue(label: "video.output.queue") private var setupResult: SessionSetupResult = .success init() { sessionQueue.async { self.requestCameraAuthorizationIfNeeded() } sessionQueue.async { self.configureSession() } sessionQueue.async { self.startSessionIfPossible() } } private func requestCameraAuthorizationIfNeeded() { switch AVCaptureDevice.authorizationStatus(for: .video) { case .authorized: break case .notDetermined: AVCaptureSession sessionQueue.suspend() AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in if !granted { self.setupResult = .notAuthorized } self.sessionQueue.resume() }) default: setupResult = .notAuthorized } } private func configureSession() { if setupResult != .success { return } let defaultVideoDevice: AVCaptureDevice? = videoDeviceDiscoverySession.devices.first guard let videoDevice = defaultVideoDevice else { print("Could not find any video device") setupResult = .configurationFailed return } do { videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice) } catch { setupResult = .configurationFailed return } session.beginConfiguration() session.sessionPreset = AVCaptureSession.Preset.vga640x480 guard session.canAddInput(videoDeviceInput) else { print("Could not add video device input to the session") setupResult = .configurationFailed session.commitConfiguration() return } session.addInput(videoDeviceInput) if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) if let connection = videoOutput.connection(with: .video) { connection.isCameraIntrinsicMatrixDeliveryEnabled = true } else { print("Cannot setup camera intrinsics") } videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)] } else { print("Could not add video data output to the session") setupResult = .configurationFailed session.commitConfiguration() return } if session.canAddOutput(depthDataOutput) { session.addOutput(depthDataOutput) depthDataOutput.isFilteringEnabled = false if let connection = depthDataOutput.connection(with: .depthData) { connection.isEnabled = true } else { print("No AVCaptureConnection") } } else { print("Could not add depth data output to the session") setupResult = .configurationFailed session.commitConfiguration() return } let depthFormats = videoDevice.activeFormat.supportedDepthDataFormats let filtered = depthFormats.filter({ CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16 }) let selectedFormat = filtered.max(by: { first, second in CMVideoFormatDescriptionGetDimensions(first.formatDescription).width < CMVideoFormatDescriptionGetDimensions(second.formatDescription).width }) do { try videoDevice.lockForConfiguration() videoDevice.activeDepthDataFormat = selectedFormat videoDevice.unlockForConfiguration() } catch { print("Could not lock device for configuration: \(error)") setupResult = .configurationFailed session.commitConfiguration() return } session.commitConfiguration() } private func addVideoDeviceInputToSession() throws { do { var defaultVideoDevice: AVCaptureDevice? defaultVideoDevice = AVCaptureDevice.default( .builtInTrueDepthCamera, for: .depthData, position: .front ) guard let videoDevice = defaultVideoDevice else { print("Default video device is unavailable.") setupResult = .configurationFailed session.commitConfiguration() throw ConfigurationError.defaultDeviceNotExist } let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice) if session.canAddInput(videoDeviceInput) { session.addInput(videoDeviceInput) } else { setupResult = .configurationFailed session.commitConfiguration() throw ConfigurationError.cannotAddInput } }
0
0
379
Jul ’24
Drag Gestures
Dear all, I am experiencing some problems with the Drag Gesture in VisionOS. Typically, this gesture involves the user pinching an entity or, more commonly, a window, and moving/dragging it around. However, this is not always the case for entities (3D models) placed in the environment. It appears that the user can both pinch and drag and/or move the entity with their bare hands. In the latter case, the onChange cycle doesn't always end if the user keeps their hands near the object, causing it to keep moving even if that is not what the user intends. This also occurs when the user is no longer hovering over that entity. Larger entities, more so than those in the demo "TransformingRealityKitEntitiesUsingGestures," close to the user seem to become attached to their hands, causing the gesture to continue indefinitely. Entities often move to unintended positions. I believe that these two different behaviors within the same gesture container are intrinsically different: one involves pinching and dragging, while the other involves enabling hands physics, and it should be easy to distinguish between the two. How can we correctly address this situation? Thank you for your assistance
3
0
370
Jul ’24
Create 3D models with Object Capture VS Create 3D models with MAC
Create 3D models with Object Capture VS Create 3D models with MAC 1.After testing the model generated by the pictures taken on the mobile phone and comparing the .raw progress generated by the same set of data on the MAC side, the highest accuracy model has different effects. Sometimes the mobile phone model has higher accuracy, and sometimes the MAC model has higher accuracy. What are the two ends? The difference is that according to WWDC2023 MAC, a higher-precision model can be generated. However, in actual testing, it is possible that the integrity of MAC generation is not as good as that of the mobile phone. This is why. 2.Is it possible to set the accuracy of the generated model on the mobile phone?
0
0
453
Jul ’24
VisionOS 2 Beta crash - doesNotRecognizeSelector plane
In Xcode 16 beta 1 and 3, when running a VisionOS 2 simulator on an SwiftUI app that ran successfully in VisionOS 1, I received the following crash at startup: Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!" I've gone through my code attempting to find any references to a plane method, but I have no such calls in my code, leading me to suspect that this is somehow related to VisionOS beta simulator code. Has anyone else run into this bug and worked around it somehow?
2
1
490
Jul ’24
RealityKit Subdivide
In the Discover RealityKit APIs for iOS, macOS, and visionOS presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details or how this works in RealityKit?
3
1
432
Jul ’24
MTKView is now available on visionOS but isn't working on visionOS 1.x
Hello! I noticed that after WWDC 24 there was support added for MTKView in visionOS 1.0+. This is great! But when I use an MTKView in anything before visionOS 2.0 it doesn't work and the app ends up crashing. Console error when running on a device that is on visionOS 1.2: Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI Looks like MTKView may be using compositor services under the hood? Any help would be great. Thank you!
3
2
468
Jul ’24
Anime with TabView
In vision OS, the tab bar of TabView is outside the window by default. If I switch a page without TabView to a page that needs TabView in my program, the tab bar will suddenly appear on the left side of the screen without any animation. I hope it has an animation when it appears (such as easeIn, move). I tried it in Tab. Other animation-related modifiers such as animation are added under View, but there is no animation in the tab bar. Only the view in the tab has an animation effect, but this is not what I want. What I want is that the tab bar outside the window can have animation. What should I do?
1
0
482
Jul ’24