Scene understanding missing from visionOS simulator?

SceneReconstructionProvider.isSupported and PlaneDetectionProvider.isSupported both return false when running in the simulator (Xcode 15b2).
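For reference, the check is nothing more than printing the static flags (a trivial sketch); both lines print false in the simulator:

```swift
import ARKit

// Both of these print "false" when run in the visionOS simulator (Xcode 15b2).
print("SceneReconstructionProvider supported: \(SceneReconstructionProvider.isSupported)")
print("PlaneDetectionProvider supported: \(PlaneDetectionProvider.isSupported)")
```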

There is no mention of this in the release notes. It seems this makes any AR app that depends on scene understanding impossible to run in the simulator.

For example, the code described in this article cannot be run in the simulator: https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience
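Roughly, the pattern from that article boils down to something like this (a sketch from memory, not the article's exact code); in the simulator the isSupported guard trips immediately, so nothing after it ever runs:

```swift
import ARKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func runSceneReconstruction() async {
    // In the simulator this guard fails, so the provider can never be started.
    guard SceneReconstructionProvider.isSupported else {
        print("Scene reconstruction is not supported here (e.g. the simulator)")
        return
    }
    do {
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            // This is where the article builds/updates collision geometry from mesh anchors.
            print("Mesh anchor \(update.event): \(update.anchor.id)")
        }
    } catch {
        print("Failed to run ARKitSession: \(error)")
    }
}
```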

Am I missing something or is this really the current state of the sim?

Does this mean if we want to build mixed-immersion apps we need to wait to get access to Vision Pro hardware?

Filed bug report as FB12457682

If that is not supported in the simulator, it will be very inconvenient to develop immersive AR apps before Vision Pro is available. Hope Apple fixes this ASAP...

Still missing in Xcode 15b3 (15A5195k).

Still missing in Beta 5. This has completely blocked development for our team.

Also filed this. I've just about run out of things I can develop and feel I need to begin coding interactions with planes and the persistence of world anchors. https://feedbackassistant.apple.com/feedback/12639395

Hope the simulator gets this and I don't have to wait several years to continue.

Still missing in Beta 6.

Also ran into this, and reported it as an FB.

As of Xcode 15 Beta 8, all existing ARSession DataProviders report as unsupported in the visionOS simulator. Filed this as FB13125675.
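For the record, this is how I'm checking support (treat the provider list as a sketch; the exact set may differ between SDK betas). Every one of them prints false in the simulator:

```swift
import ARKit

// Rough sketch: dump isSupported for the data providers I'm aware of.
let providerSupport: [(String, Bool)] = [
    ("WorldTrackingProvider", WorldTrackingProvider.isSupported),
    ("SceneReconstructionProvider", SceneReconstructionProvider.isSupported),
    ("PlaneDetectionProvider", PlaneDetectionProvider.isSupported),
    ("HandTrackingProvider", HandTrackingProvider.isSupported),
    ("ImageTrackingProvider", ImageTrackingProvider.isSupported),
]
for (name, supported) in providerSupport {
    print("\(name).isSupported = \(supported)")
}
```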

It makes sense that scene understanding is missing from the visionOS simulator: scene understanding requires sensor data about the physical environment, and the simulator has no physical sensors to provide it.

Additionally, it should be noted that the Apple Vision Pro includes cameras and LiDAR, but the raw sensor data is not shared with developers for privacy reasons. What is made available to developers is horizontal and vertical plane information and mesh information (ARMeshAnchor) that ARKit generates by processing the sensor data internally.
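(On visionOS the corresponding anchor types are PlaneAnchor and MeshAnchor.) On device, consuming that plane information looks roughly like this sketch:

```swift
import ARKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

func observePlanes() async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        let plane = update.anchor
        // Each PlaneAnchor carries its alignment, classification, and geometry.
        print("\(update.event): \(plane.alignment) plane, classified as \(plane.classification)")
    }
}
```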

Although what @JoonAhn says is correct, what the visionOS simulator needs to do is provide "simulated" plane information that matches the environments Apple provides. This should not be too difficult, and it would remove the requirement for devs to invest in expensive hardware in order to create apps for it.
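In the meantime, the closest workaround I can think of is hard-coding stand-in planes on the RealityKit side when plane detection isn't supported. Something like this sketch, where the 4 m x 4 m floor at y = 0 is just a guess at the simulated rooms, not real data:

```swift
import RealityKit
import ARKit

// Workaround sketch: when plane detection isn't supported (e.g. in the simulator),
// drop in a hard-coded stand-in floor so plane-based interactions can still be tested.
func makeStandInFloor() -> ModelEntity {
    let floor = ModelEntity(
        mesh: .generatePlane(width: 4, depth: 4),   // horizontal plane in XZ
        materials: [OcclusionMaterial()]
    )
    floor.position = [0, 0, 0]                      // assumed floor height
    floor.generateCollisionShapes(recursive: false)
    floor.components.set(PhysicsBodyComponent(mode: .static))
    return floor
}

func floorEntityIfNeeded() -> ModelEntity? {
    PlaneDetectionProvider.isSupported ? nil : makeStandInFloor()
}
```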

It's Xcode Version 15.1 Beta (15C5028h), visionOS Simulator Version 15.1 (1018), and it's still the same. No DataProvider other than WorldTrackingProvider works in the simulator. It's kinda funny that the mesh data and camera position are already known to the simulator and could populate ARKit's data providers very easily (with even more accuracy than the real world), but who knows, it might be too hard /s.
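WorldTrackingProvider at least does run, so device pose queries like this do work in the simulator (a minimal sketch):

```swift
import ARKit
import QuartzCore
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// WorldTrackingProvider is the one provider that does run in the simulator.
func startWorldTracking() async throws {
    try await session.run([worldTracking])
}

// Query the (simulated) device pose at the current time.
func currentDevicePose() -> simd_float4x4? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return device.originFromAnchorTransform
}
```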
