Hello!
I am having trouble calculating accurate real-world distances using the camera's intrinsic matrix together with pixel coordinates and depths captured from the iPhone's LiDAR. For example, in the image below, I set a mug 0.5 m from the phone. The mug is 8.5 cm wide. The intrinsic matrix returned by the phone's AVCameraCalibrationData class has focalx = 1464.9269, focaly = 1464.9269, cx = 960.94916, and cy = 686.3547. Selecting the two pixel locations denoted in the image below, I calculated each one's xyz coordinates using the formula:
x = d * (u - cx) / focalx
y = d * (v - cy) / focaly
z = d
where d is the depth read from the corresponding pixel in the depth map (I've verified that both depths were 0.5 m). I then calculate the Euclidean distance between the two points to get the mug width. This gives me a calculated width of 0.0357 m, i.e. about 3.6 cm, rather than the 8.5 cm I was expecting. What could account for this discrepancy?
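To make the computation concrete, here's a minimal Swift sketch of what I'm doing. The intrinsic values are the ones quoted above; the pixel coordinates are placeholders, not the exact ones I clicked in the image:

```swift
import simd

// Intrinsics reported by AVCameraCalibrationData for this capture.
let fx: Float = 1464.9269
let fy: Float = 1464.9269
let cx: Float = 960.94916
let cy: Float = 686.3547

// Back-project a pixel (u, v) with depth d (in metres) into camera space
// using the pinhole formulas above.
func unproject(u: Float, v: Float, depth d: Float) -> SIMD3<Float> {
    let x = d * (u - cx) / fx
    let y = d * (v - cy) / fy
    return SIMD3<Float>(x, y, d)
}

// Placeholder pixel locations standing in for the two points I selected
// on the mug; both read 0.5 m in the depth map.
let left  = unproject(u: 850,  v: 700, depth: 0.5)
let right = unproject(u: 1100, v: 700, depth: 0.5)

// Straight-line distance between the two 3D points = estimated mug width.
let width = simd_distance(left, right)
print("estimated width: \(width) m")   // I expected roughly 0.085 m
```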
Thank you so much for your help!