Synchronizing Multi-User AR Experiences on Apple Vision Pro

Hello Developers,

I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.

I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the positions of all users relative to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.

Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.

Any advice or shared experiences would be greatly appreciated!

Best regards, Revin

The Iterative Closest Point (ICP) point-cloud registration algorithm could be a solution. By aligning the vertices of the MeshAnchors each device reconstructs from the shared environment, ICP can estimate the relative 6DoF pose (rotation and translation) between users.
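To make that concrete, here is a minimal point-to-point ICP step in Swift, assuming you have already extracted two overlapping point clouds (e.g., MeshAnchor vertices transformed into each device's world space) and exchanged one of them over the network. This is an illustrative sketch, not the poster's implementation: the brute-force matching and the icpStep name are mine, and a production pipeline would add a spatial index and outlier rejection.

import simd

// One point-to-point ICP iteration: match every source point to its nearest
// target point, then solve for the rigid transform that best aligns the
// pairs using Horn's closed-form quaternion method (the dominant eigenvector
// of a 4x4 matrix, found here by power iteration).
func icpStep(source: [SIMD3<Float>], target: [SIMD3<Float>]) -> simd_float4x4 {
    precondition(!source.isEmpty && !target.isEmpty)

    // 1. Brute-force nearest-neighbor correspondences (O(n*m); demo only).
    let matched = source.map { p in
        target.min { simd_distance_squared($0, p) < simd_distance_squared($1, p) }!
    }

    // 2. Centroids of both point sets.
    let cs = source.reduce(SIMD3<Float>.zero, +) / Float(source.count)
    let ct = matched.reduce(SIMD3<Float>.zero, +) / Float(matched.count)

    // 3. Cross-covariance S = sum of (p - cs)(q - ct)^T over the matched pairs.
    var S = simd_float3x3()
    for (p, q) in zip(source, matched) {
        let a = p - cs, b = q - ct
        S = S + simd_float3x3(columns: (a * b.x, a * b.y, a * b.z)) // a b^T
    }

    // Entries of S, noting simd matrices are indexed [column][row].
    let (sxx, sxy, sxz) = (S[0][0], S[1][0], S[2][0])
    let (syx, syy, syz) = (S[0][1], S[1][1], S[2][1])
    let (szx, szy, szz) = (S[0][2], S[1][2], S[2][2])

    // 4. Horn's symmetric 4x4 matrix; its dominant eigenvector is the
    //    optimal unit quaternion (w, x, y, z).
    var N = simd_float4x4(rows: [
        SIMD4(sxx + syy + szz, syz - szy,       szx - sxz,       sxy - syx),
        SIMD4(syz - szy,       sxx - syy - szz, sxy + syx,       szx + sxz),
        SIMD4(szx - sxz,       sxy + syx,       syy - sxx - szz, syz + szy),
        SIMD4(sxy - syx,       szx + sxz,       syz + szy,       szz - sxx - syy)
    ])

    // Shift the spectrum (crude Gershgorin-style bound) so power iteration
    // converges to the most positive eigenvalue.
    let shift = 2 * (abs(sxx) + abs(syy) + abs(szz) + abs(sxy) + abs(syx)
                   + abs(sxz) + abs(szx) + abs(syz) + abs(szy))
    for i in 0..<4 { N[i][i] += shift }

    var q = SIMD4<Float>(1, 0, 0, 0)
    for _ in 0..<64 { q = simd_normalize(N * q) } // power iteration
    let R = simd_quatf(ix: q.y, iy: q.z, iz: q.w, r: q.x)

    // 5. Translation that maps the rotated source centroid onto the target's.
    var T = simd_float4x4(R)
    T.columns.3 = SIMD4(ct - R.act(cs), 1)
    return T
}

In practice you would call icpStep repeatedly, composing the returned transforms and re-transforming the source cloud until the mean correspondence distance stops improving; the final composite is the transform from one user's world coordinates into the other's.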

Hi @Revin93

Assuming your experience is view-only and there's no shared interaction, this can be done with an AnchorEntity, which allows you to position an entity relative to real-world content. For your use case, consider the image or object targets. Here's an example that attaches someEntity to an image in the real world:

import RealityKit

// Load the marker image bundled with the app.
guard let url = Bundle.main.url(forResource: "marker", withExtension: "png") else { return }

// physicalSize is the printed marker's real-world dimensions in meters
// (0.2032 m is about 8 inches on each side).
let anchorEntity = AnchorEntity(.referenceImage(from: .init(url, physicalSize: [0.2032, 0.2032])))
anchorEntity.addChild(someEntity)
content.add(anchorEntity)
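Because each headset resolves the same physical image on its own, every user sees the content anchored to the same real-world spot, so for view-only content no transform data has to be exchanged between devices.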

Hi @Revin93

I forgot to mention that Explore object tracking for visionOS is relevant to your goal. You can use image tracking to achieve it, but if you want a markerless solution, object tracking is the way to go.
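For reference, here is a minimal sketch of the object-tracking flow that session describes, assuming you have trained a reference object from a 3D model of the machine (for example with Create ML) and bundled it in the app. The file name machinePart.referenceobject and the entity handling are placeholders:

import ARKit
import RealityKit

func trackMachine(placing someEntity: Entity) async throws {
    // Hypothetical file name; a .referenceobject is produced by training
    // on a 3D model of the object you want to track.
    guard let url = Bundle.main.url(forResource: "machinePart",
                                    withExtension: "referenceobject") else { return }

    let referenceObject = try await ReferenceObject(from: url)
    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])

    let session = ARKitSession()
    try await session.run([objectTracking])

    // Each update carries the tracked object's pose in the device's world
    // space, which can serve as the shared frame of reference for all users.
    for await update in objectTracking.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        someEntity.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                      relativeTo: nil)
    }
}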

Thank you for your helpful insights. I did some research as well and discovered that ARKit includes a built-in feature for creating multi-user AR experiences, as outlined here: https://developer.apple.com/documentation/arkit/arkit_in_ios/creating_a_multiuser_ar_experience. I am not yet certain whether this will be fully compatible with the Apple Vision Pro, but I assume it will be, since the device also builds on ARKit. I will have more clarity once development begins.
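The linked sample exchanges ARKit collaboration data between iOS devices over MultipeerConnectivity. Whether or not that exact mechanism carries over to visionOS, the underlying pattern is worth sketching: once every device shares a common anchor (such as the image or object anchor suggested above), only poses relative to that anchor need to cross the network. The PosePayload type and function names below are hypothetical:

import Foundation
import MultipeerConnectivity
import simd

// Hypothetical wire format: an entity's pose expressed relative to the
// anchor all devices share (e.g., the tracked image or object).
struct PosePayload: Codable {
    var position: SIMD3<Float>   // translation relative to the shared anchor
    var rotation: SIMD4<Float>   // unit quaternion (x, y, z, w)
}

// Send a pose to every connected peer; each receiver re-applies it
// relative to its own copy of the shared anchor.
func broadcast(_ pose: PosePayload, over session: MCSession) throws {
    guard !session.connectedPeers.isEmpty else { return }
    let data = try JSONEncoder().encode(pose)
    try session.send(data, toPeers: session.connectedPeers, with: .reliable)
}

func decodePose(_ data: Data) throws -> PosePayload {
    try JSONDecoder().decode(PosePayload.self, from: data)
}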

Best regards, Revin
