Using MLHandActionClassifier with visionOS

How do I use either of these data sources with MLHandActionClassifier on visionOS?

  1. MLHandActionClassifier.DataSource.labeledKeypointsDataFrame
  2. MLHandActionClassifier.DataSource.labeledKeypointsData

visionOS ARKit hand tracking provides 27 joints with 3D coordinates, which differs from the 21 joints with 2D coordinates that these two data sources describe in their documentation.

Answered by DTS Engineer in 796963022

Hello @user654321,

You can reference these two WWDC videos to determine a mapping between the MLHandActionClassifier joints and the HandAnchor's hand skeleton joints:

HandAnchor: https://developer.apple.com/videos/play/wwdc2023/10082/?time=971

Vision joints: https://developer.apple.com/videos/play/wwdc2020/10653/?time=302

And this video shows the order that the 21 joints are expected in: https://developer.apple.com/videos/play/wwdc2021/10039/?time=1248
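As a starting point, here is a minimal sketch of one plausible mapping, assuming the Vision-style joint order from the WWDC21 video above (wrist first, then each finger from base to tip). The exact order and correspondences should be verified against the videos before training or predicting:

```swift
import ARKit
import simd

// A sketch of one plausible mapping from the 27 visionOS HandSkeleton joints
// down to 21 Vision-style joints. The metacarpal and forearm joints have no
// Vision counterpart and are dropped. Verify the order against the videos above.
let visionJointOrder: [HandSkeleton.JointName] = [
    .wrist,
    // Thumb: CMC, MP, IP, tip
    .thumbKnuckle, .thumbIntermediateBase, .thumbIntermediateTip, .thumbTip,
    // Index: MCP, PIP, DIP, tip
    .indexFingerKnuckle, .indexFingerIntermediateBase, .indexFingerIntermediateTip, .indexFingerTip,
    // Middle
    .middleFingerKnuckle, .middleFingerIntermediateBase, .middleFingerIntermediateTip, .middleFingerTip,
    // Ring
    .ringFingerKnuckle, .ringFingerIntermediateBase, .ringFingerIntermediateTip, .ringFingerTip,
    // Little
    .littleFingerKnuckle, .littleFingerIntermediateBase, .littleFingerIntermediateTip, .littleFingerTip,
]

/// Returns the 21 joint positions (in world space) for a hand anchor,
/// ordered to match the Vision-style joint order above.
func orderedJointPositions(for anchor: HandAnchor) -> [SIMD3<Float>]? {
    guard let skeleton = anchor.handSkeleton else { return nil }
    return visionJointOrder.map { name in
        // Joint transforms are relative to the anchor; compose with the
        // anchor's transform to get a world-space position.
        let world = anchor.originFromAnchorTransform
            * skeleton.joint(name).anchorFromJointTransform
        return SIMD3<Float>(world.columns.3.x, world.columns.3.y, world.columns.3.z)
    }
}
```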

As for projecting the 3D HandAnchor skeleton joints to a 2D plane, you will need to make use of a projection matrix. For more information about this projection, take a look at https://developer.apple.com/videos/play/wwdc2024/10092/?time=637.
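For illustration, here is a minimal sketch of such a projection, assuming you already have view and projection matrices for the camera you are projecting onto (how to obtain those on visionOS is covered in the session above):

```swift
import simd

// A sketch of projecting a 3D world-space joint position onto a 2D image
// plane with a pinhole-style projection matrix. The view and projection
// matrices here are placeholders supplied by the caller.
func projectToNormalizedPoint(worldPosition: SIMD3<Float>,
                              viewMatrix: simd_float4x4,
                              projectionMatrix: simd_float4x4) -> SIMD2<Float>? {
    // Transform into clip space: clip = P * V * world.
    let clip = projectionMatrix * viewMatrix * SIMD4<Float>(worldPosition, 1)
    // Points at or behind the camera cannot be projected.
    guard clip.w > 0 else { return nil }
    // Perspective divide gives normalized device coordinates in [-1, 1].
    let ndc = SIMD2<Float>(clip.x, clip.y) / clip.w
    // Remap to [0, 1] with the origin at the lower left, the convention
    // Vision uses for hand pose keypoints (verify against your pipeline).
    return SIMD2<Float>(ndc.x * 0.5 + 0.5, ndc.y * 0.5 + 0.5)
}
```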

Best regards,

Greg
