Inquiry About the Precision of Apple Vision Pro LiDAR

Hello everyone,

I am a developer building an application for the Apple Vision Pro platform that relies heavily on its LiDAR sensor. To ensure the accuracy and performance of my application, I would like to gather more detailed information about the sensor's technical specifications, particularly in the following areas:

1.	Distance Accuracy: How accurate is the LiDAR sensor at different distances?
2.	Spatial Resolution: What is the smallest object size that the sensor can detect?
3.	Environmental Impact: How does the performance of the LiDAR sensor vary under different lighting conditions or environmental factors (e.g., reflective surfaces, fog)?

I would greatly appreciate any detailed information or technical documentation regarding these questions. If there are any developers or Apple staff members who have insights on this, your input would be highly valued.

Thank you in advance for your assistance!

Apple Vision Pro (visionOS ARKit) does not give you access to the raw LiDAR sensor data; only MeshAnchor is exposed (see the sketch after the list below). The LiDAR data processing chain is:

  1. 576 laser points (Apple internal data, undisclosed)
  2. 256x192 depthMap (iPadOS/iOS ARKit), generated from the 576 laser points and the RGB image
  3. MeshAnchor (iPadOS/iOS/visionOS ARKit), generated from the 256x192 depthMap.
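For context, here is a minimal Swift sketch of how MeshAnchor updates are typically consumed on visionOS. It assumes the code runs inside an immersive space with world-sensing authorization granted; treat it as an illustration of the only LiDAR-derived data ARKit exposes on Vision Pro, not a definitive implementation:

```swift
import ARKit

// Minimal sketch (visionOS): MeshAnchor is the only LiDAR-derived data exposed.
// Assumes the app is running in an immersive space and .worldSensing
// authorization has been granted.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func observeMeshAnchors() async {
    guard SceneReconstructionProvider.isSupported else {
        print("Scene reconstruction is not supported on this device.")
        return
    }
    do {
        try await session.run([sceneReconstruction])
        // Each update carries a MeshAnchor derived from the internal
        // 576-point LiDAR -> depthMap -> mesh pipeline described above.
        for await update in sceneReconstruction.anchorUpdates {
            let anchor = update.anchor
            print("MeshAnchor \(anchor.id), event: \(update.event), vertices: \(anchor.geometry.vertices.count)")
        }
    } catch {
        print("Failed to run ARKitSession: \(error)")
    }
}
```

Because the raw 576 laser points and the intermediate 256x192 depthMap never reach your code on visionOS, any accuracy or resolution figures you measure will reflect the reconstructed mesh, not the sensor itself.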