RealityKit PhotogrammetrySession not recognizing depth scale

I'm creating a custom scanning solution for iOS and using the RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding that the depth data I'm sending is being ignored, so the model isn't built to scale. The documentation is a little light on how to format the depth data, so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly?

https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing

Thank you!

Hello,

  1. In the sample files provided, each set has *_depth.tiff, *_gravity.txt & *.png files. This follows the data format generated by the CaptureSample app, except that your data contains *.png images instead of *.HEIC images, so it's likely that you did not use the CaptureSample app. With the CaptureSample app, the depth data is embedded in the *.HEIC files, so you do not need to read the depth data explicitly, and the model would still be produced at the correct scale. If you generated the *_depth.tiff files yourself, see point 2 below.

  2. If you generated the *_depth.tiff files yourself, make sure the depth data is in either kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32 format. You may follow the instructions in Creating Auxiliary Depth Data Manually; a sketch of building such a buffer follows this list. If you did follow these instructions and correctly created the depth data manually, see point 3 below.

  3. Make sure you are actually reading these files in your code when you create each PhotogrammetrySample. If you simply keep the *_depth.tiff files in the input folder but never assign the depthDataMap while creating the PhotogrammetrySample, these depth files will not be used (see the second sketch below).
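
For point 2, here is a minimal sketch of wrapping depth values in a buffer with the expected pixel format. It assumes you already have the per-pixel depth as a row-major [Float] in meters (decoded from your TIFF however you wrote it); the helper name makeDepthBuffer is hypothetical, not part of any API:

```swift
import CoreVideo
import Foundation

// Hypothetical helper: wraps row-major depth values (in meters) in a
// CVPixelBuffer with the kCVPixelFormatType_DepthFloat32 format.
func makeDepthBuffer(from depthMeters: [Float], width: Int, height: Int) -> CVPixelBuffer? {
    precondition(depthMeters.count == width * height)

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_DepthFloat32, // or kCVPixelFormatType_DisparityFloat32
                                     nil,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    let base = CVPixelBufferGetBaseAddress(buffer)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let rowBytes = width * MemoryLayout<Float>.stride

    // Copy row by row; the buffer's stride may be wider than width * 4 bytes.
    depthMeters.withUnsafeBytes { src in
        for row in 0..<height {
            memcpy(base + row * bytesPerRow, src.baseAddress! + row * rowBytes, rowBytes)
        }
    }
    return buffer
}
```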
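
For point 3, a sketch of attaching that depth map when building a sample. The makeSample helper is hypothetical; the crucial part is the depthDataMap assignment:

```swift
import CoreVideo
import RealityKit

// Hypothetical helper: builds one sample from already-decoded pixel buffers.
// `colorBuffer` comes from your *.png, `depthBuffer` from your *_depth.tiff
// (e.g. via makeDepthBuffer above).
func makeSample(id: Int, colorBuffer: CVPixelBuffer, depthBuffer: CVPixelBuffer) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: colorBuffer)
    // This assignment is the step that actually feeds the depth to the
    // session; without it, the *_depth.tiff files are never consulted.
    sample.depthDataMap = depthBuffer
    return sample
}
```

You can then pass a sequence of such samples to PhotogrammetrySession(input:configuration:), which accepts a sequence of PhotogrammetrySample instead of a folder URL, so you control exactly what data each sample carries.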
