Hi, I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11). I used CoreML models like DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for those devices. Any advice or pointers to resources would be much appreciated.
The FindSurfaceFramework for iOS requires a point set as input, generated by a scanner (e.g., LiDAR) or even collected manually.
How to collect rawFeaturePoints from ARKit:
- https://github.com/CurvSurf/ARKitPointCloudRecorder
- https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints
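A minimal sketch of accumulating rawFeaturePoints across frames through an ARSessionDelegate (class and property names here are illustrative, not taken from the linked recorder project):

```swift
import ARKit

final class PointCloudCollector: NSObject, ARSessionDelegate {
    // Feature points accumulated over time, in world (session) coordinates.
    private(set) var collectedPoints: [simd_float3] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // rawFeaturePoints is nil until ARKit has tracked enough features,
        // and the set of points changes from frame to frame.
        guard let pointCloud = frame.rawFeaturePoints else { return }
        collectedPoints.append(contentsOf: pointCloud.points)
    }
}
```

Assign an instance as the ARSession's delegate (e.g. sceneView.session.delegate = collector) to start receiving frames.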
Once a point set is prepared, it can be fed to FindSurfaceFramework.
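The collected points typically need to be flattened into a contiguous XYZ buffer before being handed to a point-cloud API; a small sketch of that step is below (the actual FindSurfaceFramework entry points are not shown here, so consult its documentation for the exact calls):

```swift
import simd

// Flattens collected feature points into a contiguous
// [x, y, z, x, y, z, ...] Float array, the usual layout
// expected by C-style point-cloud interfaces.
func flattenPoints(_ points: [simd_float3]) -> [Float] {
    var buffer = [Float]()
    buffer.reserveCapacity(points.count * 3)
    for p in points {
        buffer.append(contentsOf: [p.x, p.y, p.z])
    }
    return buffer
}
```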
The following videos demonstrate object occlusion by detecting and measuring object geometry from the provided point set:
- 3-D Augmented Reality - Apple ARKit (2018), https://youtu.be/FzdrxtPQzfA
- Lee JaeHyo Gallery Ball Park - ARKit (2019), https://youtu.be/QhBtGHmfBOg
- Apple ARKit: Occlusion Tree Trunk (2019), https://youtu.be/rGW-FtA6P1Q
- Apple ARKit: Augmented Reality Based on Curved Object Surfaces (2019), https://youtu.be/4U4FlavRKa4