Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Each post below is followed by its reply, boost, and view counts and the date of its most recent activity.

RealityView not displaying content
I'm playing with visionOS and trying to get a .usdz file to load in a RealityView. It works fine if I use a Model3D, but if I use a RealityView nothing shows up. I'm just using the fender_stratocaster asset right off the Apple web site, so it seems like it should work. This is the code:

```swift
RealityView { content in
    if let sphereEntity = try? await Entity(named: "fender_stratocaster") {
        content.add(sphereEntity)
        sphereEntity.position = [0, 0, 0]
        sphereEntity.transform.scale = [scale, scale, scale]
        let _ = print(sphereEntity)
    }
} update: { content in
    if let sphereEntity = content.entities.first {
        sphereEntity.transform.scale = [scale, scale, scale]
    }
}
```

Any clues as to why this is not showing would be appreciated.
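One way to narrow this down (an editorial sketch, not part of the original post): replace `try?` with a `do`/`catch` so a failed load reports its error instead of silently leaving the view empty:

```swift
RealityView { content in
    do {
        // Loads from the app's main bundle; if the asset actually lives in a
        // Reality Composer Pro package, pass `in: realityKitContentBundle`.
        let entity = try await Entity(named: "fender_stratocaster")
        content.add(entity)
    } catch {
        // Surfaces the reason nothing appears (name typo, asset not
        // added to the target, wrong bundle, ...).
        print("Failed to load entity:", error)
    }
}
```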
3 replies · 0 boosts · 396 views · Jul ’24
visionOS: access ARKit when in the Shared Space
I was planning to experiment with ARKit for visionOS to create a widget app that places small, room-persistent objects in the user's room, which the user can anchor anywhere they like. The trouble is that this has to run in a Full Space, which doesn't make for a great experience; it's limiting. Those types of widgets make sense only when one wants to glance at them quickly, not as part of the main task a user is performing. Is there any way the room's positional anchors can be stored and re-established any time somebody opens the app in the Shared Space, rather than in a Full Space?
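For context, a minimal sketch of how world anchors are created and persisted with ARKit on visionOS today; as the poster notes, this only works inside an ImmersiveSpace (Full Space), which is exactly the limitation being questioned:

```swift
import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func placeWidget(at transform: simd_float4x4) async throws {
    try await session.run([worldTracking])
    // WorldAnchors are persisted by the system and re-delivered in
    // later sessions via worldTracking.anchorUpdates.
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
}
```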
1 reply · 0 boosts · 330 views · Jul ’24
ARKit body skeleton is terrible, right?
Hello, I am trying to create new outfits based on ARKit's body tracking skeleton example (the controlled robot). Is it just me, or is this skeleton super annoying to work with? The bones all stand out like thorns and don't follow the actual limbs, which makes it impossible to automatically weight-paint new meshes to the skeleton. Changing the bones is also not possible, since that distorts the body tracking. I am an experienced modeller, but I have never seen such a crazy skeleton. Even simple meshes are a pain to pair with these bones; you basically have to weight-paint everything manually. Or am I missing something?
0 replies · 0 boosts · 292 views · Jul ’24
Object Tracking Sample Code
Are you planning on publishing a complete sample code project for the "Explore object tracking for visionOS" session (wwdc2024/10101)? The animation at 12:50, where the globe opens up, was especially impressive. Seeing how that was done while tracked to the globe would be very interesting. (I realize that we would have to create our own globe object for the code to work.)
2 replies · 0 boosts · 394 views · Jun ’24
RoomPlan + Object Capture
We have an issue with Apple RoomPlan: on a regular basis, the captured objects are not positioned correctly in the model. This happens in 50% of our cases, which makes the feature almost useless. Is there any idea how to solve this problem?
0 replies · 0 boosts · 298 views · Jul ’24
Attach an Attachment to the hand in visionOS
I am trying to attach a button to the user's left hand. The position is tracked, and the button stays above the user's left hand, but it doesn't face the user; it doesn't even face where the wrist is pointing. This is the main code snippet:

```swift
if model.editWindowAdded {
    let originalMatrix = model.originFromWristLeft
    let theattachment = attachments.entity(for: "sample")!
    entityDummy.addChild(theattachment)

    let testrotvalue = simd_quatf(real: 0.9906431,
                                  imag: SIMD3<Float>(-0.028681312, entityDummy.orientation.imag.y, 0.025926698))
    entityDummy.orientation = testrotvalue
    theattachment.position = [0, 0.1, 0]

    var timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let originalMatrix = model.originFromWristLeft
        print(originalMatrix.columns.0.y)
        let testrotvalue = simd_quatf(real: 0.9906431,
                                      imag: SIMD3<Float>(-0.028681312, 0.1, 0.025926698))
        entityDummy.orientation = testrotvalue
    }
}
```
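A hedged alternative (editorial, not from the post): derive the orientation from the tracked wrist matrix itself instead of hard-coding quaternion components. This assumes `model.originFromWristLeft` is the 4×4 wrist transform the poster describes:

```swift
import RealityKit
import simd

// Sketch: drive the entity's pose from the wrist transform so the
// attachment rotates with the wrist instead of staying fixed.
func updateHandAttachment(_ entityDummy: Entity, originFromWristLeft: simd_float4x4) {
    let t = originFromWristLeft.columns.3
    entityDummy.position = SIMD3<Float>(t.x, t.y, t.z)
    // Extract the rotation part of the rigid 4x4 matrix as a quaternion.
    entityDummy.orientation = simd_quatf(originFromWristLeft)
    // To face the user rather than the wrist, a look-at toward the
    // device anchor's position would replace this orientation.
}
```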
2 replies · 0 boosts · 448 views · Jul ’24
ObjectCapture and ARObjectAnchor
Is it possible to capture both the images required for Object Capture and the scan data required to create an ARObjectAnchor (and to align the two with each other)? Perhaps an extension of this WWDC 2020 example that also integrates USDZ object capture (instead of just importing an external one)? https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/scanning_and_detecting_3d_objects?changes=_2
2 replies · 0 boosts · 457 views · Jul ’24
visionOS 2.0 main camera image fusion
I want to align and fuse the video streams from the main camera and my external camera in visionOS 2.0, ensuring that the fused image remains directly in front of the field of view as the head moves, similar to a normal passthrough mode video image. I have already achieved and verified the static image alignment and fusion on the Vision Pro using screenshots from the main camera and the external video stream. However, I don't know how to perform real-time fusion with the main camera images. Could you please advise on how I can achieve this?
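For reference, a hedged sketch of how real-time main-camera frames are typically obtained on visionOS 2.0; this assumes the Enterprise API entitlement for main camera access, and the fusion step itself is left out:

```swift
import ARKit

let session = ARKitSession()
let cameraFrames = CameraFrameProvider()

func streamMainCamera() async throws {
    // Pick a supported format/resolution for the left main camera.
    guard let format = CameraVideoFormat
        .supportedVideoFormats(for: .main, cameraPositions: [.left])
        .first else { return }

    try await session.run([cameraFrames])

    guard let updates = cameraFrames.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        // sample.pixelBuffer is the live main-camera image; fusing the
        // external stream against it would happen here, per frame.
        _ = sample.pixelBuffer
    }
}
```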
2 replies · 0 boosts · 282 views · Jul ’24
Where is the device anchor located?
Hi, my goal is to obtain the device location (6 DoF) of the Apple Vision Pro, and I found a function that might satisfy my need: `final func queryDeviceAnchor(atTimestamp timestamp: TimeInterval) -> DeviceAnchor?`, which returns a device anchor (containing the position and orientation of the headset). However, I couldn't find any documentation specifying where exactly the device anchor is located on the headset. Is it at the midpoint between the user's eyes? At the centroid of the six world-facing tracking cameras? It would be really helpful if someone could provide a local transformation matrix (similar to a camera extrinsic) from a visible rigid component (say the Digital Crown, the top button, or the laser scanner) to the device anchor. Thanks.
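For context, a minimal sketch of how that query is typically called on visionOS (the open question is the anchor's physical origin on the headset, not the call itself):

```swift
import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Query the headset pose (6 DoF) predicted for "now"; returns the
// device anchor's transform in the world origin's coordinate frame.
func currentDevicePose() -> simd_float4x4? {
    worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?
        .originFromAnchorTransform
}
```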
1 reply · 1 boost · 344 views · Jul ’24
How to save point cloud data and view it
Hello, I've recently been studying point cloud development. As a beginner in this field, I'm seeking some guidance on how to approach it. I want to obtain point cloud data and be able to display and view it as shown in the picture below. I used the "Displaying a Point Cloud Using Scene Depth" sample code as my starting point and then attempted to add a button to save the data held in the renderer's private `particlesBuffer: MetalBuffer`. The buffer contains data structured as follows:

```c
struct ParticleUniforms {
    simd_float3 position;
    simd_float3 color;
    float confidence;
};
```

My understanding is that this data represents the point cloud; if I am wrong, please let me know. Next, I wrote my own code to display this data using SceneKit, creating small spheres from the position and color values. In practice, however, this approach only handles about 30,000 spheres before it becomes very laggy. I believe my implementation might be incorrect, because 3D scanner apps display point clouds with much better performance. Could you please advise me on how to achieve an effect like the one shown in the image below? Thank you.
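A hedged sketch of the usual fix (editorial; the function and parameter names are illustrative): render all points as a single point-primitive geometry instead of one sphere node per point, which keeps the whole cloud in one draw call:

```swift
import SceneKit

// Build one SCNNode that renders every point as a screen-space dot.
// `positions` and `colors` are assumed to be extracted from particlesBuffer.
func makePointCloudNode(positions: [SCNVector3], colors: [SIMD3<Float>]) -> SCNNode {
    let vertexSource = SCNGeometrySource(vertices: positions)

    let colorData = colors.withUnsafeBufferPointer { Data(buffer: $0) }
    let colorSource = SCNGeometrySource(data: colorData,
                                        semantic: .color,
                                        vectorCount: colors.count,
                                        usesFloatComponents: true,
                                        componentsPerVector: 3,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: 0,
                                        dataStride: MemoryLayout<SIMD3<Float>>.stride)

    // data: nil with bytesPerIndex: 0 means "use sequential indices".
    let element = SCNGeometryElement(data: nil,
                                     primitiveType: .point,
                                     primitiveCount: positions.count,
                                     bytesPerIndex: 0)
    element.pointSize = 2
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 4

    let geometry = SCNGeometry(sources: [vertexSource, colorSource], elements: [element])
    let material = SCNMaterial()
    material.lightingModel = .constant // show vertex colors unlit
    geometry.materials = [material]
    return SCNNode(geometry: geometry)
}
```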
0 replies · 0 boosts · 254 views · Jul ’24
Object Tracking with RealityView
When I wanted to load a Reality Composer Pro scene containing object tracking, I tried the following code:

```swift
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
```

Obviously, this is wrong; we need to add some configuration that enables object tracking in the RealityView. What do we need to add?
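For reference, a hedged sketch of the ARKit route on visionOS 2 (the asset name is hypothetical; a `.referenceobject` file trained with Create ML's object-tracking template is assumed to be bundled):

```swift
import ARKit

let session = ARKitSession()

func startObjectTracking() async throws {
    // Load the trained reference object from the app bundle.
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([objectTracking])

    for await update in objectTracking.anchorUpdates {
        // Position RealityKit entities from the tracked pose here.
        print(update.event, update.anchor.originFromAnchorTransform)
    }
}
```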
2 replies · 0 boosts · 505 views · Jun ’24
Can you limit the area for AR Body Tracking to keep the mesh from jumping across multiple bodies?
Hello there, I am currently experimenting with the body-tracking sample from the AR Foundation example project. It works fine, but when there are multiple people in front of the camera, the mesh jumps randomly from one tracked skeleton to another. So I am looking for a way to define a specific area in front of the camera, or to implement some other trigger (maybe a body pose) to start and stop tracking. I am guessing pose tracking could work: whenever a body stands in a T-pose, the mesh gets applied to that body, and the script then stops looking for new skeletons until the original skeleton is lost. Does somebody know which code to look at to achieve this?
1 reply · 0 boosts · 271 views · Jul ’24
How to test an ARKit app in the simulator (without using ARKit features)
We are building an app that uses ARKit occasionally, but not always. We would like to test the non-ARKit parts in the simulator, since it offers more debugging features (e.g. SwiftUI previews or the Thread Sanitizer). However, we can't even build the app for the simulator, since the simulator SDK does not know about certain classes (e.g. "AnchorEntity"). This also means that none of the SwiftUI previews work, even if the views are not using ARKit. What is the best approach to test such an app in the simulator, without using any ARKit features?
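A common workaround (an editorial sketch, not from the thread; `ARContainerView` is a hypothetical wrapper): fence the ARKit-dependent code behind conditional compilation so the rest of the app, including SwiftUI previews, builds for the simulator:

```swift
import SwiftUI
#if !targetEnvironment(simulator)
import ARKit
import RealityKit
#endif

struct ContentView: View {
    var body: some View {
        #if targetEnvironment(simulator)
        // Simulator build: a stub keeps previews and the Thread
        // Sanitizer usable without the AR-only symbols.
        Text("AR is unavailable in the simulator")
        #else
        ARContainerView() // hypothetical wrapper around the AR session
        #endif
    }
}
```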
2 replies · 1 boost · 362 views · Jul ’24
Why is visionOS Barcode Scanning an Enterprise API?
I'm seeking insight on why the new visionOS Barcode Scanning API is categorized as an Enterprise API and restricted to proprietary, in-house apps. I understand Apple's focus on privacy, and I can see how this restriction makes sense for other Enterprise APIs like main camera access and passthrough screen capture. But why is barcode scanning restricted from open apps? What makes barcode scanning more of a privacy risk than the unrestricted APIs for object tracking, image tracking, or hand tracking?
4 replies · 2 boosts · 627 views · Jul ’24
Trouble with Core ML Object Tracking for Spherical Objects Using WWDC Sample Code and Object Capture
Hi everyone, I'm working with Core ML for object tracking and have successfully trained a couple of models. However, when I try to use the reference object file in the object-tracking sample code from the WWDC video, it doesn't work. I'm training the model on a single-color plastic spherical object; could that be the reason for the issue? I also attempted to use USDZ 3D assets that resemble the real object. Do these need to be trained with the Object Capture app to work properly? Speaking of the Object Capture app, my experience hasn't been great: when I scanned my spherical object, the result was far from a sphere; it looked more like mushy dough. Any insights or suggestions would be greatly appreciated!
2 replies · 0 boosts · 653 views · Jun ’24
Object Tracking Training: Objects to avoid
I'm working on training an object-tracking model in Create ML for visionOS. The object has fan blades on it, and I'm trying to train while ignoring a section of the geometry. Currently, the model can detect the object when the fan blades are in the same orientation as when scanned, but it struggles once they move. I see there is an "objects to avoid" data source that can be added, but after reading the description, I don't think it does what I need (though I could be wrong). Is there any way to have the training ignore a part of the geometry that has a significant effect on the silhouette of the object?
0 replies · 0 boosts · 457 views · Jul ’24