When running a modified version of the RoomPlan demo I get frequent "Session Interrupted" conditions. Looking at the traces, I find a status of SensorDidPause on the interruption side of the error, but I am mystified as to how to determine which sensor paused and how to diagnose it. There appears to be a bitmap of available and active sensor devices in the sensor info passed with the session data on the error, and from the error status I can see that one or two of the motion sensors have had a problem. How do I run further diagnostic checks on the cause of the error? I am also curious why the error occurred as soon as the AR session for my test started via the session.run call. The documentation in this area seems difficult to find. Attached are traces from running the test and stack dumps for the calls. Please send me guidance on how to proceed. The device in question is an iPad, "iPhone(3)", that is attached to the Mac mini named "Hawkeye". There is no known direct involvement of the Hawkeye system.
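To make the question concrete, the kind of diagnostic hook I am thinking of is just the standard ARSessionObserver callbacks attached to the capture session's underlying ARSession. This is only a sketch (the class name is made up), and I am not sure whether replacing the ARSession delegate like this is even safe alongside RoomPlan's own handling:

import ARKit
import RoomPlan

final class SessionDiagnostics: NSObject, ARSessionDelegate {

    func attach(to captureSession: RoomCaptureSession) {
        // Note: RoomPlan may rely on its own ARSession delegate; replacing it
        // here is only for diagnostics and might not be safe in production.
        captureSession.arSession.delegate = self
    }

    func sessionWasInterrupted(_ session: ARSession) {
        print("ARSession interrupted, tracking state: \(session.currentFrame?.camera.trackingState as Any)")
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        print("ARSession interruption ended")
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        // The underlying userInfo often carries more detail than localizedDescription.
        let nsError = error as NSError
        print("ARSession failed: \(nsError.domain) \(nsError.code) \(nsError.userInfo)")
    }
}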
I have the following issue regarding running 2 AR services. I am trying to develop an app for my master's thesis.
Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen.
Case 2: When I had the issue with case 1, I tried a separate test app where I had two separate screens for the RoomPlan API and RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, with the same black screen as above.
There might be some state changed by the RoomPlan API that keeps RealityKit from accessing the camera. Let me know if you have any idea about it, or any sample.
I am using the following stack:
Xcode (latest), SwiftUI, latest OS on the Mac mini and iPhone.
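For reference, the Case 1 hand-off is roughly the sketch below; the class and property names are placeholders rather than my actual code:

import UIKit
import RoomPlan
import RealityKit
import ARKit

final class ScanCoordinator {
    // Placeholder views; the real app creates these via SwiftUI wrappers.
    let roomCaptureView = RoomCaptureView(frame: .zero)
    let arView = ARView(frame: .zero)

    func startRoomScan() {
        var config = RoomCaptureSession.Configuration()
        config.isCoachingEnabled = true
        roomCaptureView.captureSession.run(configuration: config)
    }

    func switchToRealityKit() {
        // Stop the RoomPlan capture session first...
        roomCaptureView.captureSession.stop()

        // ...then start a fresh world-tracking session for the RealityKit view.
        // This is the point where the camera feed stays black in my app.
        let worldConfig = ARWorldTrackingConfiguration()
        worldConfig.planeDetection = [.horizontal]
        arView.session.run(worldConfig, options: [.resetTracking, .removeExistingAnchors])
    }
}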
I'm in Europe, and Vision Pro isn't available here yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try to sell the idea of investing in a bunch of Vision Pro devices, as well as in app development for them, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them related to security and safety.
So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base.
Now, the hardware, the OS features, and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will require an Apple ID and an internet connection (I guess?). That, for example, is a strict no-go in my case, for security reasons.
I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo.
Potentially, this device ticks several boxes with regard to incredibly useful features in general:
Very good indoor tracking
Passthrough with good fidelity
Hands-free operation
The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.)
For the sake of argument, let's say the app I'm building is Cave Mapper.
It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights.
One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want to have synchronised access to the same spatial data from the day before. For that:
How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid; others have said it can drift.
How well do the persistent WorldAnchor objects work?
How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers?
In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional/rotational reference)?
Other practical stuff:
Can you wear Vision Pro with a safety helmet?
Does it work with gloves?
Is there any callback in the code for the position reset from the crown button (when you long-press it)?
Is there a method for finding APIs that are compatible with both iOS and visionOS (e.g. hoverStyle)?
I'm encountering difficulties in developing for visionOS, although I can successfully build for "Apple Vision (Designed for iPad)". Are there any methods for discovering APIs that support both platforms? I'm looking to enhance my application and would appreciate any guidance on where to find such APIs.
Additionally, I'm interested in changing the background to a glass style. However, it seems that this feature may not be supported by the available APIs, particularly those designed for visionOS. Any suggestions or insights would be greatly appreciated.
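To illustrate the kind of thing I mean, here is a minimal sketch of how one might guard platform-specific modifiers today (the view content is just a placeholder); the glassBackgroundEffect branch is the visionOS-only part I was asking about:

import SwiftUI

struct CardView: View {
    var body: some View {
        Text("Hello")
            .padding()
            .modifier(PlatformBackground())
            // hoverEffect is available on both iOS (pointer) and visionOS (gaze).
            .hoverEffect()
    }
}

/// Applies a glass background on visionOS and a plain material elsewhere.
struct PlatformBackground: ViewModifier {
    func body(content: Content) -> some View {
        #if os(visionOS)
        content.glassBackgroundEffect()
        #else
        content.background(.regularMaterial)
        #endif
    }
}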
I am attempting to integrate visionOS support into my existing iOS app, which uses SwiftUI and CocoaPods. However, upon adding visionOS as a supported platform and attempting to run the app, I encounter two errors:
"'jot/jot.h' file not found" at "/Users/xxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxxxx/xxxxxxxxx-Bridging-Header.h:17:9".
"Failed to emit precompiled header" at "/Users/xxxxxx/Library/Developer/Xcode/DerivedData/xxxxxxxxx-bnhvaxypgfhmvqgklzjdnxxbrdhu/Build/Intermediates.noindex/PrecompiledHeaders/xxxxxxxxx-Bridging-Header-swift_6TTOG1OAZB5F-clang_21TRHDW14EDOZ.pch" for bridging header "/Users/xxxxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxx/xxxxxxxx-Bridging-Header.h".
I'm seeking assistance with resolving these errors. Below is my Podfile configuration:
source 'https://github.com/CocoaPods/Specs.git'

platform :ios, '15.0'

target 'xxxxxxxxxx' do
  use_frameworks!

  pod 'RealmSwift'
  pod 'JGProgressHUD'
  pod 'BadgeLabel'
  pod 'jot'
  pod 'MaterialComponents/Chips'
  pod 'GoogleMaps'
  pod 'Firebase/Crashlytics'
  pod 'Firebase/Analytics' # Firebase pod for Google Analytics
  # Add pods for any other desired Firebase products
  # https://firebase.google.com/docs/ios/setup#available-pods
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '15.0'
    end
  end
end
Any assistance in resolving these errors would be greatly appreciated.
Through testing, I have been able to get 5.1 and 7.1 Dolby Atmos files created in Logic Pro to work in Reality Composer Pro and then in Vision Pro.
However, 5.1.4 and 7.1.4 files crash when added. Can someone confirm that these are not supported?
Flow:
User enters the app and starts an ARKit session with world tracking and scene reconstruction.
User closes the app, so we stop the session.
User re-enters the app and we try to run the session again, but the app crashes with the error: "It is not possible to re-run a stopped data provider."
If we remove the code that stops the session, the scene reconstruction doesn't work properly when the user re-enters the app and shows inaccurate meshing data.
Is this a bug, or am I doing something wrong here? Any ideas or insight are appreciated.
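For context, the workaround I am considering (I am not sure it is the intended pattern) is to create a fresh session and fresh data providers each time the immersive space reappears, rather than re-running the stopped ones; roughly:

import ARKit  // visionOS ARKit data providers

@MainActor
final class TrackingModel {
    private var session = ARKitSession()
    private var worldTracking = WorldTrackingProvider()
    private var sceneReconstruction = SceneReconstructionProvider()

    /// Called each time the immersive space (re)appears.
    func start() async {
        // Recreate the session and providers; a stopped data provider cannot be re-run.
        session = ARKitSession()
        worldTracking = WorldTrackingProvider()
        sceneReconstruction = SceneReconstructionProvider()
        do {
            try await session.run([worldTracking, sceneReconstruction])
        } catch {
            print("Failed to run ARKitSession: \(error)")
        }
    }

    /// Called when the immersive space disappears.
    func stop() {
        session.stop()
    }
}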
I want to lay out a collection view along a curve with fixed paging on Vision Pro. How can I do that?
While WorldTrackingProvider.removeAnchor() completes without error, the WorldAnchor might be back the next time the app is run. This can easily be replicated with the ObjectPlacement sample: just add 10 objects, tap Remove All, then run the app again. On the first run the anchors might be gone, but run the app a couple more times and the anchors come back.
This becomes a big problem when paired with the issue that anchors are not always found when the app enters Immersive mode. When an anchor is not found, our app adds one. That usually works fine for that run. On the next run, however, the other anchors show up again. Anchors accumulate and it becomes difficult to keep track of them.
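For reference, the remove path in question is essentially the sketch below; the provider setup is omitted and the type name is a placeholder:

import ARKit  // visionOS ARKit

@MainActor
final class AnchorStore {
    private(set) var anchors: [UUID: WorldAnchor] = [:]

    /// Keep a local map of world anchors as the provider reports them.
    /// Run this from a long-lived Task; anchorUpdates never finishes on its own.
    func observe(_ worldTracking: WorldTrackingProvider) async {
        for await update in worldTracking.anchorUpdates {
            if update.event == .removed {
                anchors[update.anchor.id] = nil
            } else {
                anchors[update.anchor.id] = update.anchor
            }
        }
    }

    /// "Remove All": removeAnchor completes without error, yet anchors can
    /// reappear on a later launch, which is the behavior described above.
    func removeAll(using worldTracking: WorldTrackingProvider) async {
        for anchor in anchors.values {
            try? await worldTracking.removeAnchor(anchor)
        }
    }
}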
Spatial video is OK, but can we play spatial photos as well? Thank you. :)
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors.
I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location).
The tap locations are consistently about 0.35m off along the horizontal plane (it is never off vertically) from where I was looking.
Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking?
And if I move to different locations, the offset is the same in real space, so the offset doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where the headset was looking).
Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location.
I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location.
Here is how I captured the tap gesture and extracted the 3D location:
var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}
Here is how I set the position of the purple cube:
func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
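One thing I have not been able to verify yet is whether the conversion should start from the gesture value's local coordinate space rather than .global; the variant I mean is:

// Alternative conversion (assumption, not confirmed): treat location3D as being
// in the tapped entity's local space and convert from .local to .scene.
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { event in
        let location3D = event.convert(event.location3D, from: .local, to: .scene)
        model.handleTap(location: location3D, entity: event.entity)
    }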
Hello, I tried to build something with scene reconstruction, but I want to add occlusion on the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I receive a ShapeResource, whereas I would need a MeshResource to create a mesh for the entity and then apply the material. Any suggestions?
Rendering the scene onto a render target with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted.
Modifications were made to the Xcode visionOS template.
Foveation should be enabled by default:
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
        configuration.depthFormat = .depth32Float
        configuration.colorFormat = .bgra8Unorm_srgb

        let foveationEnabled = capabilities.supportsFoveation
        configuration.isFoveationEnabled = foveationEnabled

        let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
        let supportedLayouts = capabilities.supportedLayouts(options: options)
        configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
    }
}
To avoid errors, rasterizationRateMap is not set.
var renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 0.0
//renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
if layerRenderer.configuration.layout == .layered {
    renderPassDescriptor.renderTargetArrayLength = drawable.views.count
}
The rendering process is as follows:
I am planning to build a visionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona?
Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I
Any help is very welcome, thanks.
Hello. I am trying to load my own image-based lighting file in a visionOS RealityView. I used the code you get when creating a new project from scratch and setting the immersive space to Full when creating the project. With the sample file Apple provides, it works. But when I put my image in PNG, HEIC, or EXR format in the same location the example file was in, it doesn't load, and the error states:
Failed to find resource with name "SkyboxUpscaled2" in bundle
In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in .exr format.
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
    do {
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}
Does anyone have an idea why the file is not found? Thanks in advance!
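In case it is relevant, the only extra check I can think of is whether the file even ends up in the app bundle; this assumes that EnvironmentResource(named:) without a bundle argument resolves against Bundle.main:

// Diagnostic sketch: verify the image is actually bundled with the app target.
// Assumption: EnvironmentResource(named:) with no bundle parameter looks in Bundle.main.
if let url = Bundle.main.url(forResource: "SkyboxUpscaled2", withExtension: "exr") {
    print("Found EXR in app bundle at \(url)")
} else {
    print("SkyboxUpscaled2.exr is not in the app bundle; check Target Membership / Copy Bundle Resources.")
}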
I am developing an iOS app intended to be used only at a specific location (a campus). In this case, I'd like to use ARGeoAnchor to anchor content across a relatively large space, though in the pedestrian-only areas this is not supported and tracking begins to fail.
Is it possible to additionally use ARReferenceObject to re-localize to a specific location when I am walking in an unsupported area?
(FB13719373)
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure whether it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether this cm setting is affected by the entity's size at all. In Unity, it's just (local) "units".
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity, they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: every pixel that is not pitch black is rendered fully white. Is there a way to change this behavior?