Does one have to request the permission for every app?
How do I know when the permission has been granted? Is it visible in App Store Connect or Xcode?
Hi!
It seems that Xcode 16 beta 2 considers EnvironmentResource.generate(fromEquirectangular:) unavailable even when the minimum deployment target remains visionOS 1.0 (it is deprecated, but Xcode reports a build error). The only way I was able to keep this in place for visionOS 1.x while compiling with Xcode 16 was the following:
#if compiler(>=6.0)  // assumed guard: Xcode 16 beta 2 ships the Swift 6 compiler, Xcode 15.4 does not
var environment: EnvironmentResource?
if #available(visionOS 2.0, *) {
    environment = try? await EnvironmentResource(equirectangular: skyBoxWithSun())
} else {
    fatalError("EnvironmentResource.generate(fromEquirectangular:) does not compile with Xcode 16.0 beta 2.")
}
#else
let environment = try? await EnvironmentResource.generate(fromEquirectangular: skyBoxWithSun())
#endif
This builds with both Xcode 15.4 and 16 beta 2, but it obviously crashes when built with Xcode 16 and run on visionOS 1.x. Do I have any better options? I would like to adopt some visionOS 2.0 features (e.g. replace my custom skybox with the new dynamic lighting) but prefer to maintain backward compatibility for now.
DESCRIPTION OF PROBLEM
I have an Apple Vision Pro App Store app called Starship SE Corps. I'm trying to add an animation for my app so that the starship entity orbits the Earth entity. I'm trying to use OrbitAnimation as discussed in the WWDC23 session "Build Spatial Experiences with RealityKit" (https://developer.apple.com/wwdc23/10080).
However, I can't get the animation to work.
STEPS TO REPRODUCE
I created a sample test app called "SampleOrbitAnimationApp" to focus in on the code I'm having trouble with.
When I build and run my sample test app, the app runs on both the visionOS 1.2 simulator and on my real Apple Vision Pro device running visionOS 1.2. However, my starship entity is static and is not animating/orbiting around my Earth entity.
I tried putting my OrbitAnimation code in the RealityView update: closure. Doing that, however, causes property scope errors, because the entity I refer to in the OrbitAnimation code is created inside the RealityView make closure, so the update: closure can't see that entity property.
Trying to make the entity reference more global at the top of the ImmersiveView (so the update: closure can see the entity property) causes other parameter issues in the .app file's call to the ImmersiveView and in the #Preview call to the ImmersiveView. Maybe that is to be expected and I would need to work around it (but I couldn't find a sensible way to do so). If this is the right approach, I need help on how to resolve it across the project files.
I did find some example code online where a developer put the OrbitAnimation code directly in the RealityView code block without having an update: or attachments: closure at all. I tried that approach but also couldn't get that to work.
The test sample app targets the OrbitAnimation and ImmersiveView code I'm struggling with (i.e. I can't get the starship to move and orbit around the Earth). It uses the same production app package for the Starship and Earth entities, built in Reality Composer Pro. Those entities, included in my sample test app, work fine in my latest production App Store release, so I think they are fine. The issue is how to write the OrbitAnimation code for those entities. I realize new capabilities are coming in visionOS 2, but I would like to make OrbitAnimation work now in my visionOS 1.2 app.
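For reference, here is a minimal sketch of the pattern I'm trying to get working, adapted from the WWDC23 sample. The entity names, bundle, and OrbitAnimation parameter values are my assumptions, not the exact production code:
import SwiftUI
import RealityKit
import RealityKitContent

struct SampleOrbitView: View {
    var body: some View {
        RealityView { content in
            guard let earth = try? await Entity(named: "Earth", in: realityKitContentBundle),
                  let starship = try? await Entity(named: "Starship", in: realityKitContentBundle) else { return }
            content.add(earth)

            // Parent the starship to the Earth so the orbit is centered on the Earth's origin.
            earth.addChild(starship)
            starship.position = [0.5, 0, 0] // orbit radius in meters (assumed value)

            let orbit = OrbitAnimation(name: "Orbit",
                                       duration: 30,
                                       axis: [0, 1, 0],
                                       startTransform: starship.transform,
                                       spinClockwise: false,
                                       orientToPath: true,
                                       rotationCount: 1,
                                       bindTarget: .transform,
                                       repeatMode: .repeat)

            if let animation = try? AnimationResource.generate(with: orbit) {
                starship.playAnimation(animation)
            }
        }
    }
}
Because the animation is started in the make closure right after the entities are created, there is no need to reference the entity again from the update: closure.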
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318.
Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map).
Is this the standard UV mapping for half a SkyDome?
It has caused some issues when I've applied some HDRIs.
Is there any way to get a right-click working on a Magic Trackpad or Magic Mouse (or any mouse, for that matter)?
It seems that visionOS only registers a single (primary) button and nothing else.
In visionOS, the tab bar of a TabView sits outside the window by default.
If I switch from a page without a TabView to a page that needs one, the tab bar suddenly appears on the left side of the window without any animation. I would like it to animate when it appears (for example easeIn or move). I tried adding animation-related modifiers such as .animation to the views inside the tabs, but the tab bar itself never animates; only the view inside the tab animates, which is not what I want. What I want is for the tab bar outside the window to animate. What should I do? (A sketch of what I tried is below.)
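This is roughly the structure I tried (a simplified reconstruction; view names and the animation value are placeholders):
import SwiftUI

struct RootView: View {
    @State private var showTabs = false
    @State private var selection = 0

    var body: some View {
        if showTabs {
            // The tab bar appears as an ornament outside the window; the
            // .animation modifier only affects the tab content, not the bar.
            TabView(selection: $selection) {
                Text("First page")
                    .tabItem { Label("First", systemImage: "1.circle") }
                    .tag(0)
                Text("Second page")
                    .tabItem { Label("Second", systemImage: "2.circle") }
                    .tag(1)
            }
            .animation(.easeIn(duration: 0.3), value: selection)
        } else {
            Button("Show tabs") { showTabs = true }
        }
    }
}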
Hello!
I noticed that after WWDC 24, MTKView is listed as supported on visionOS 1.0+. This is great! But when I use an MTKView on anything before visionOS 2.0, it doesn't work and the app ends up crashing.
Console error when running on a device that is on visionOS 1.2:
Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC
Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI
It looks like MTKView may be using Compositor Services under the hood?
Any help would be great.
Thank you!
Hello,
Does the Apple Vision Pro have an API for creating custom triggers for selecting things on the screen instead of the hand pinch gesture? For instance, using an external button/signal/controller instead of pinching fingers?
In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details on how this works in RealityKit?
In Xcode 16 beta 1 and 3, when running a SwiftUI app that ran successfully on visionOS 1 in the visionOS 2 simulator, I received the following crash at startup:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I've gone through my code attempting to find any references to a plane method, but I have no such calls in my code, leading me to suspect that this is somehow related to the visionOS beta simulator code. Has anyone else run into this bug and worked around it somehow?
Create 3D models with Object Capture on iPhone vs. create 3D models on a Mac
1. After comparing the model generated from photos taken on the phone with the model generated on the Mac (at .raw quality) from the same set of data, the highest-accuracy results differ: sometimes the phone model is more accurate, and sometimes the Mac model is. What is the difference between the two? According to WWDC23, the Mac can generate a higher-precision model, but in actual testing the completeness of the Mac-generated model is sometimes not as good as the phone's. Why is this?
2. Is it possible to set the accuracy of the model generated on the phone? (See the sketch below for how I understand the Mac-side detail levels.)
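For reference, this is how I understand the Mac-side detail levels are requested (a minimal sketch; the paths are placeholders and the choice of .raw is my assumption):
import RealityKit

// Minimal macOS Object Capture sketch: request a specific detail level.
let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: PhotogrammetrySession.Configuration())

Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .processingComplete:
            print("Done")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

// .raw is the highest detail; .full, .medium, .reduced, and .preview are progressively lower.
// (In a real command-line tool you would keep the process alive until .processingComplete.)
try session.process(requests: [.modelFile(url: outputFile, detail: .raw)])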
Dear all,
I am experiencing some problems with the drag gesture in visionOS. Typically, this gesture involves the user pinching an entity (or, more commonly, a window) and dragging it around. However, this is not always the case for entities (3D models) placed in the environment: it appears the user can both pinch-and-drag and move the entity with their bare hands.
In the latter case, the onChanged cycle doesn't always end if the user keeps their hands near the object, causing it to keep moving even when that is not what the user intends. This also occurs when the user is no longer hovering over the entity. Larger entities close to the user (more so than those in the "TransformingRealityKitEntitiesUsingGestures" demo) seem to become attached to their hands, causing the gesture to continue indefinitely, and entities often end up in unintended positions.
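For reference, the gesture setup I'm using is essentially the standard targeted drag pattern (a simplified reconstruction; the entity name and the exact coordinate conversion are assumptions):
import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "MyModel") {  // placeholder entity name
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the entity to follow the pinch/drag location.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
                .onEnded { _ in
                    // This is where the end event sometimes never seems to arrive
                    // while hands stay near the entity.
                }
        )
    }
}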
I believe that these two different behaviors within the same gesture container are intrinsically different: one involves pinching and dragging, while the other involves enabling hands physics, and it should be easy to distinguish between the two.
How can we correctly address this situation?
Thank you for your assistance
I'm currently streaming synchronised video and depth data from my iPhone 13 using AVFoundation, with the video preset set to AVCaptureSession.Preset.vga640x480. When looking at the corresponding images (with depth values mapped to a grey colour map; both the map and the image are 640x480), it appears the two feeds have different fields of view: the depth feed is zoomed in and angled upwards, while the colour feed is more zoomed out. I've looked at the intrinsics from both the depth map and my colour sample buffer, and they are identical.
Does anyone know why this might be? (A small field-of-view comparison sketch follows my setup code below.)
My setup code is below (shortened):
import AVFoundation
import CoreVideo
class VideoCaptureManager {
    private enum SessionSetupResult {
        case success
        case notAuthorized
        case configurationFailed
    }
    private enum ConfigurationError: Error {
        case cannotAddInput
        case cannotAddOutput
        case defaultDeviceNotExist
    }
    private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTrueDepthCamera],
                                                                               mediaType: .video,
                                                                               position: .front)
    private let session = AVCaptureSession()
    public let videoOutput = AVCaptureVideoDataOutput()
    public let depthDataOutput = AVCaptureDepthDataOutput()
    private var outputSynchronizer: AVCaptureDataOutputSynchronizer?
    private var videoDeviceInput: AVCaptureDeviceInput!
    private let sessionQueue = DispatchQueue(label: "session.queue")
    private let videoOutputQueue = DispatchQueue(label: "video.output.queue")
    private var setupResult: SessionSetupResult = .success
    init() {
        sessionQueue.async {
            self.requestCameraAuthorizationIfNeeded()
        }
        sessionQueue.async {
            self.configureSession()
        }
        sessionQueue.async {
            self.startSessionIfPossible()
        }
    }
    private func requestCameraAuthorizationIfNeeded() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            break
        case .notDetermined:
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in
                if !granted {
                    self.setupResult = .notAuthorized
                }
                self.sessionQueue.resume()
            })
        default:
            setupResult = .notAuthorized
        }
    }
    private func configureSession() {
        if setupResult != .success {
            return
        }
        let defaultVideoDevice: AVCaptureDevice? = videoDeviceDiscoverySession.devices.first
        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            setupResult = .configurationFailed
            return
        }
        do {
            videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            setupResult = .configurationFailed
            return
        }
        session.beginConfiguration()
        session.sessionPreset = AVCaptureSession.Preset.vga640x480
        guard session.canAddInput(videoDeviceInput) else {
            print("Could not add video device input to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        session.addInput(videoDeviceInput)
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
            if let connection = videoOutput.connection(with: .video) {
                connection.isCameraIntrinsicMatrixDeliveryEnabled = true
            } else {
                print("Cannot setup camera intrinsics")
            }
            videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        } else {
            print("Could not add video data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        if session.canAddOutput(depthDataOutput) {
            session.addOutput(depthDataOutput)
            depthDataOutput.isFilteringEnabled = false
            if let connection = depthDataOutput.connection(with: .depthData) {
                connection.isEnabled = true
            } else {
                print("No AVCaptureConnection")
            }
        } else {
            print("Could not add depth data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        let depthFormats = videoDevice.activeFormat.supportedDepthDataFormats
        let filtered = depthFormats.filter({
            CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16
        })
        let selectedFormat = filtered.max(by: {
            first, second in CMVideoFormatDescriptionGetDimensions(first.formatDescription).width < CMVideoFormatDescriptionGetDimensions(second.formatDescription).width
        })
        do {
            try videoDevice.lockForConfiguration()
            videoDevice.activeDepthDataFormat = selectedFormat
            videoDevice.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        session.commitConfiguration()
    }
    private func addVideoDeviceInputToSession() throws {
        do {
            var defaultVideoDevice: AVCaptureDevice?
            defaultVideoDevice = AVCaptureDevice.default(
                .builtInTrueDepthCamera,
                for: .depthData,
                position: .front
            )
            guard let videoDevice = defaultVideoDevice else {
                print("Default video device is unavailable.")
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.defaultDeviceNotExist
            }
            let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
            if session.canAddInput(videoDeviceInput) {
                session.addInput(videoDeviceInput)
            } else {
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.cannotAddInput
            }
        }
    }
}
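If it helps in diagnosing this, a comparison like the following (a hedged sketch; logFieldsOfView is a hypothetical helper, not part of my real code) would show whether the active video format and the selected depth format report different horizontal fields of view, which could explain the mismatch even when the intrinsics match:
// Hypothetical diagnostic helper (assumed to be called after configureSession() runs).
extension VideoCaptureManager {
    func logFieldsOfView(for device: AVCaptureDevice) {
        print("Video format FOV: \(device.activeFormat.videoFieldOfView) degrees")
        if let depthFormat = device.activeDepthDataFormat {
            print("Depth format FOV: \(depthFormat.videoFieldOfView) degrees")
        } else {
            print("No active depth data format selected")
        }
    }
}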
Hello,
I'm experimenting with the PortalComponent and clipping behaviors. My belief was that, with some arbitrary plane mesh, I could have the entire contents of a single world entity that has a PortalCrossingComponent clipped to the boundaries of the plane mesh.
Instead, what I seem to be experiencing is that the mesh in the target world of the portal will actually display outside the plane boundaries.
I've attached a video that shows the boundaries of my world escaping the portal clipping/transition plane. It also shows that, when I navigate below a certain threshold in the scene, I can see what appears to be the "clipped" world (here it is obvious to see the dimensions of the clipping plane), but when I move above a certain level, the world contents appear to "escape" the clipping behavior.
https://scale-assembly-dropbox.s3.amazonaws.com/clipping.mov
(I would have made the above a link, but it is not a permitted domain; you can follow that link to see the behavior, though.)
It almost seems as if anything with a PortalCrossingComponent is allowed to appear in the PortalComponent's parent scene, rather than being clipped by the PortalComponent's boundary.
For reference, the code I'm using is almost identical to the sample code in this document:
https://developer.apple.com/documentation/realitykit/portalcomponent
with the caveat that I'm using a plane that has .positiveY clipping and portal crossing behaviors, and the clipping plane mesh is as seen in the video.
Do I misunderstand how PortalComponent is meant to be used? Or is there a bug in how it currently behaves?
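For completeness, here is roughly the shape of my setup (a simplified reconstruction of the documentation sample; the clippingMode/crossingMode spellings are how I remember them from the beta docs, and the content is a placeholder):
import SwiftUI
import RealityKit

struct PortalTestView: View {
    var body: some View {
        RealityView { content in
            // The "other world" whose contents should only be visible through the portal.
            let world = Entity()
            world.components.set(WorldComponent())

            let worldContent = ModelEntity(mesh: .generateBox(size: 0.5),
                                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            worldContent.components.set(PortalCrossingComponent())
            world.addChild(worldContent)

            // The portal itself: a plane with .positiveY clipping and crossing.
            let portal = Entity()
            portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, depth: 1),
                                                 materials: [PortalMaterial()]))
            portal.components.set(PortalComponent(target: world,
                                                  clippingMode: .plane(.positiveY),
                                                  crossingMode: .plane(.positiveY)))

            content.add(world)
            content.add(portal)
        }
    }
}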
I have an Enterprise Developer Account and have the managed entitlement (com.apple.developer.arkit.barcode-detection.allow).
I am using the code from WWDC24's spatial barcode & QR code scanning example.
When I run my project, my BarcodeDetectionProvider is created OK, but the loop (for await anchorUpdate in barcodeDetection.anchorUpdates) never delivers anything. I have tried calling it multiple times, with no success.
For example, I call this startBarcodeScanning() function from my ContentView:
var barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
var arkitSession = ARKitSession()

public func startBarcodeScanning() {
    Task {
        do {
            barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
            await arkitSession.queryAuthorization(for: [.worldSensing])
            do {
                try await arkitSession.run([barcodeDetection])
                print("arkitSession.run([barcodeDetection])")
            } catch {
                return
            }
            for await anchorUpdate in barcodeDetection.anchorUpdates {
                switch anchorUpdate.event {
                case .added:
                    print("addEntity(myAnchor: anchorUpdate.anchor)")
                    addEntity(myAnchor: anchorUpdate.anchor)
                case .updated:
                    print("UpdateEntity")
                    updateEntity(myAnchor: anchorUpdate.anchor)
                case .removed:
                    print("RemoveEntity")
                    removeEntity()
                }
            }
            //await loadInfo()
        }
    }
}
Hey everyone, this is my first post here in the Apple forums.
I need your help to better understand RealityKit and file exports, but let me explain.
I'm trying to create a little 3D object editor, and it seems to work pretty well using RealityViews and managing materials on the Entity.
I'm currently working with all the beta APIs, and I would like to export my entity into a .usdz or an .obj file.
I've found a method that allows me to create a .reality file:
let path = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0].appendingPathComponent("model.reality")
try await self.appState.parentEntity.write(to: path)
but now I don't know how to convert it into a .usdz or an .obj file, or any other standard 3D format.
Do you have any idea how I could do this?
Thank you so much!
Have a nice day ^^
I was trying to load an Entity with Entity(named: sceneName, in: realityKitContentBundle), which works for many of my .usda files. But this time I got an error:
Error loading asset from scene PinballTable.usda: failed to load '7058602595919186152 Scene (RealityFileAsset)Bundle/RealityKitContent-RealityKitContent-resources/RealityKitContent.reality/Scene_14.compiledscene' (Asset provider load failed: type 'RealityFileAsset' -- Failed to load compiled data for asset path 'Scene_14.compiledscene', due to error: Failed to deserialize asset data.)
Any ideas on why this won't work? I have checked the size of my .usda file; it's around 42 KB, so I don't think file size is the problem. Because this scene references many other .usda files, I suspect the bundle cannot locate those referenced files. So I exported the whole scene into a .usdz file, which came to 118 KB. I wonder if that could be the issue affecting the loading result, but this is what I have tried so far.
visionOS: visionOS beta 2 / simulator 1.1 (neither works)
Xcode: 15.4 / 16.0 beta (neither works)
Hello 👋
I have the following question:
I am using a simulator with visionOS 2.0 installed, and I am trying to display a remote spatial image, but I cannot get it to display.
I tried to use the new updates from WebKit (https://webkit.org/blog/15443/news-from-wwdc24-webkit-in-safari-18-beta/#spatial-media) and show the image in a web view, but I cannot make it work; the image is not shown.
For a native version, I thought about the new Quick Look features that could help display spatial media, but I think those only work with local files, right?
I tried downloading the file first, but with no success; a sketch of that approach is below.
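Roughly, the download-then-Quick-Look approach I tried looks like this (a simplified reconstruction; the remote URL and the .heic extension are placeholders/assumptions):
import SwiftUI
import QuickLook

struct RemoteSpatialImageView: View {
    @State private var localURL: URL?

    var body: some View {
        Button("Show spatial image") {
            Task {
                // Placeholder URL for a remote spatial photo.
                let remote = URL(string: "https://example.com/spatial-image.heic")!
                if let (tempURL, _) = try? await URLSession.shared.download(from: remote) {
                    // Give the file a recognizable extension before previewing it.
                    let destination = FileManager.default.temporaryDirectory
                        .appendingPathComponent("spatial-image.heic")
                    try? FileManager.default.removeItem(at: destination)
                    try? FileManager.default.moveItem(at: tempURL, to: destination)
                    localURL = destination
                }
            }
        }
        .quickLookPreview($localURL)
    }
}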
Any Ideas how I can display remote spatial images?
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video
While using this sample project to convert side-by-side (SBS) video into spatial MV-HEVC video, the output cannot be recognized as spatial video on visionOS 2.0 beta 3.
Platform and Version
Development Environment: Xcode 16 Beta 3
visionOS 2 Beta 3
Description of Problem
I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight.
When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, the immersive space can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This behavior is particularly concerning because the immersive space is not progressive and should not respond to the Digital Crown being turned at all.
It is important to note that this problem is only present when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected, and the issue does not occur.
Steps to Reproduce
Build a project that includes a fully immersive space and incorporates GroupActivity support.
Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method.
Upload the build to TestFlight.
Connect to a FaceTime call.
Open the app and enter the immersive space, then press the button to activate the Group Activity (a sketch of such a button is below).
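For reference, the activation code is essentially just this (a minimal sketch; the activity type and names are placeholders, not my real implementation):
import SwiftUI
import GroupActivities

// Placeholder GroupActivity type standing in for my app's real activity.
struct MyImmersiveActivity: GroupActivity {
    static let activityIdentifier = "com.example.myapp.immersive"
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Immersive Session"
        meta.type = .generic
        return meta
    }
}

struct StartSharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Activating from inside the fully immersive space is where the
                // TestFlight-only dismissal issue appears.
                _ = try? await MyImmersiveActivity().activate()
            }
        }
    }
}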