Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Post | Replies | Boosts | Views | Activity

ARConfidenceLevel values lower on iPhone 15 Pro and M4 iPad Pro
[tl;dr: all the point cloud capture apps rushed out an update when the iPhone 15 Pro was released because they were capturing far fewer points on that device. The same is observed with the new M4 iPad Pro. What was the fix for compatibility with these new devices?]

I am running an ARKit replay file through "Displaying a Point Cloud Using Scene Depth" from WWDC20 and recording the ARConfidenceLevel values of the incoming ARDepthData. I am doing this side by side on an iPhone 12 Pro and an iPhone 15 Pro. The ARKit replay file was originally recorded on the 12 Pro.

We get a certain percentage of points where the ARConfidenceLevel is not ".high" when running on the iPhone 12 Pro. It varies a lot by frame but averages about 5%, and is the same on all devices prior to the iPhone 15 Pro. The same test using the same iPhone 12 Pro replay file on an iPhone 15 Pro gives about twice as many points where the ARConfidenceLevel is not ".high" (about 10% on average on this particular replay file). This corresponds with real-world usage of our app on the iPhone 15 Pro, where far fewer points are captured on that device compared with all previous models. (Our app filters out points where the ARConfidenceLevel is not ".high".)

Apple's interpretation of the same LiDAR data is clearly different on the iPhone 15 Pro and M4 iPad Pro when compared to earlier devices. Can you please advise how to maintain equivalent behaviour on the new devices?

Steps to reproduce:
1. Run "Displaying a Point Cloud Using Scene Depth" from WWDC20 session 10611: Explore ARKit 4, following the instructions at https://developer.apple.com/documentation/arkit/arkit_in_ios/environmental_analysis/displaying_a_point_cloud_using_scene_depth
2. Use Xcode's setting to replay data to ARKit while running on an iPhone 12 Pro (using any replay file recorded on that device). An iPhone 13 or 14 Pro will work just as well.
3. Record what % of points have an ARConfidenceLevel that is not .high.
4. Now do it again, running the same replay file on an iPhone 15 Pro. Note that the % of points with an ARConfidenceLevel that is not .high is much higher.
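A minimal sketch of how the per-frame ratio described in this report can be counted directly from the ARDepthData confidence map (assuming the usual one-component 8-bit layout of confidenceMap; the function name is illustrative, not from the post):

import ARKit
import CoreVideo

// Returns the fraction of depth samples whose confidence is below .high for one frame.
func fractionBelowHighConfidence(in frame: ARFrame) -> Float? {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }
    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)

    var belowHigh = 0
    for y in 0..<height {
        // Each confidence sample is a single byte holding an ARConfidenceLevel raw value.
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] < UInt8(ARConfidenceLevel.high.rawValue) {
            belowHigh += 1
        }
    }
    return Float(belowHigh) / Float(width * height)
}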
0
1
342
May ’24
SceneReconstructionProvider stops providing updates
I have found that my Vision Pro device can get into a state where my app is no longer receiving fresh SceneReconstructionProvider updates. It reports that the SceneReconstructionProvider goes into the DataProviderState.running state, and .anchorUpdates will report a set of stale mesh anchors when first fired up, but does not produce any further updates. Once the device gets into this state, I can force quit the app, and even uninstall and re-install it, and I get the same few mesh updates but no fresh updates until I restart the device.

Sample async function below. I can confirm that print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates") never gets executed, so it stays inside the sceneReconstruction.anchorUpdates loop.

let session = ARKitSession()
var handTracking = HandTrackingProvider()
let sceneReconstruction = SceneReconstructionProvider()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
let worldTracking = WorldTrackingProvider()

...

func start() async {
    do {
        await requestAuth()
        if dataProvidersAreSupported && isReadyToRun && !isRunning {
            // print("ARKitSession starting.")
            try await session.run([sceneReconstruction, handTracking, planeDetection, worldTracking])
            startCount += 1
            // TODO: Fail gracefully if we have to attempt start too many (# TBD) times
        } else {
            print("dataProvidersAreSupported: \(dataProvidersAreSupported). isReadyToRun: \(isRunning)")
            print("handTracking.state: \(handTracking.state), sceneReconstruction.state: \(sceneReconstruction.state) worldTracking.state: \(worldTracking.state), planeDetection.state; \(planeDetection.state)")
        }
    } catch {
        print("ARKitSession error:", error)
    }
}

...

func processReconstructionUpdates() async {
    while true {
        for await update in sceneReconstruction.anchorUpdates {
            let meshAnchor = update.anchor
            guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
            switch update.event {
            case .added:
                let entity = try! await generateModelEntity(geometry: meshAnchor.geometry)
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                entity.components.set(InputTargetComponent())
                entity.name = "mesh"
                entity.physicsBody = PhysicsBodyComponent(mode: .static)
                let sortComponent = ModelSortGroupComponent(group: modelSortGroup, order: 1)
                entity.components.set(sortComponent)
                entity.components.set(OpacityComponent(opacity: 0.5))
                meshEntities[meshAnchor.id] = entity
                meshesParent.addChild(entity, preservingWorldTransform: true)
            case .updated:
                guard let entity = meshEntities[meshAnchor.id],
                      let updatedEntity = try? await generateModelEntity(geometry: meshAnchor.geometry) else { continue }
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision?.shapes = [shape]
                if let newMesh = updatedEntity.model?.mesh {
                    entity.model?.mesh = newMesh
                }
            case .removed:
                meshEntities[meshAnchor.id]?.removeFromParent()
                meshEntities.removeValue(forKey: meshAnchor.id)
            }
            print("We now have '\(meshEntities.count)' mesh entities")
        }
        print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates")
        try? await Task.sleep(nanoseconds: 1_000_000)
    }
}
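One diagnostic worth running alongside the loop above (a hedged sketch, not a fix for the stale-data state itself): watch the session's event stream so the moment a provider leaves .running, and any accompanying error, gets logged. It assumes the same `session` instance shown above.

func monitorSessionEvents() async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(let providers, let newState, let error):
            // Logs which providers changed state, the new state, and the error if any.
            print("Providers \(providers) changed to \(newState), error: \(String(describing: error))")
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            print("Unhandled session event: \(event)")
        }
    }
}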
5
0
569
May ’24
USD/Z schemas and Declarations
How is it possible to add a schema for AR to a USD file using the Python tools (or any other way)? I am following the instructions at: https://developer.apple.com/documentation/arkit/arkit_in_ios/usdz_schemas_for_ar/actions_and_triggers/preliminary_behavior

The steps are to have the following declaration:

class Preliminary_Behavior "Preliminary_Behavior" (
    inherits = </Typed>
)

and a usd file:

#usda 1.0

def Preliminary_Behavior "TapAndFlip"
{
    rel triggers = [ <Tap> ]
    rel actions = [ <Entry> ]

    def Preliminary_Trigger "Tap" ( inherits = </TapGestureTrigger> )
    {
        rel affectedObjects = [ </Cube> ]
    }

    def Preliminary_Action "Entry" ( inherits = </GroupAction> )
    {
        uniform token type = "parallel"
        rel actions = [ <Flip> ]
    }

    def Preliminary_Action "Flip" ( inherits = </EmphasizeAction> )
    {
        rel affectedObjects = [ </Cube> ]
        uniform token motionType = "flip"
    }
}

def Cube "Cube" { }

How do these parts fit together? I saved the usda file, but it didn't have any interactions. Obviously, I have to add that declaration, but how do I do this? Is this all in an AR Xcode project? Or can I do this with Python tools (I would prefer something very lightweight)?
0
0
537
May ’24
Possible to use DataProviders in GroupActivities on VisionOS?
I built two parts of my app a bit disjointedly:
- my physics component, which controls all SceneReconstruction, HandTracking, and WorldTracking.
- my spatial GroupActivities component that allows you to see the personas of those who join the activity.

My problem: when trying to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers.

My question: has anyone successfully been able to find a workaround for this? I think it would be amazing to have one user be able to be the "host" for the activity while the scene reconstruction provider still continues to run for them.
1
0
658
May ’24
Actions with USDZ files
Hi all, I am generating some USDZ files that will be used in Quick Look and be accessible with the Vision Pro. I was wondering if there are any examples of USDZ files with actions, like the ability to change the state of assets by tapping on them? I know this works with .reality files, but I would like to use Python to create automatically generated USDZ files that allow some interaction. I'm currently stuck! So an example of the capabilities would be great, or a pointer to some code that has done this in Python. Thanks!
0
1
522
May ’24
The behavior of hands interacting with objects in Vision Pro
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered color of my hand looks like the hand color blended with the object's color, both semi-transparent. I wonder if it is possible to make my hand always "opaque", or in other words keep the alpha value of the rendered hand (since it's video see-through) at 1, while the object's alpha value could vary depending on whether it is interacting with the hand. (I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
0
0
481
May ’24
Quick Look Augmented Reality Button Greyed Out
Hi, we are having problems with iOS Quick Look not working. Specifically, the AR button is greyed out after having opened the scene / AR model previously. This is all running off our web app. What we have figured out is that clearing the device's cache solves the issue and the greyed-out button turns blue and clickable again. We are receiving this issue very inconsistently though, on iPad as well as iPhone, and on both newer and older iOS versions. Very happy for any responses and advice to solve this issue, as its behaviour makes the Quick Look function, great as it is when it works, unviable for production because it doesn't work consistently. Best Regards
0
0
521
May ’24
'init(make:update:attachments:)' is unavailable in visionOS
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...

RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
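For comparison, a hedged sketch of the attachment pattern that the shipping visionOS API expects, assuming the error comes from the attachments builder receiving a bare Text: each attachment view is wrapped in Attachment(id:) and the generated entity is fetched from the attachments argument in the make closure. The view name and id below are made up for illustration.

import SwiftUI
import RealityKit

struct AttachmentExampleView: View {
    var body: some View {
        RealityView { content, attachments in
            let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
            content.add(plane)

            // Look up the attachment entity by the id declared below and place it.
            if let label = attachments.entity(for: "greeting") {
                label.position = [0, 0.3, 0]
                content.add(label)
            }
        } attachments: {
            Attachment(id: "greeting") {
                Text("Hello!")
            }
        }
    }
}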
1
0
453
May ’24
visionOS image tracking help
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here's my code:

View model code:

@Observable
class ImageTrackingModel {
    var session = ARKitSession() // ARSession used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

App:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content view:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
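One thing to check (an assumption about the crash, not a confirmed diagnosis): the anchorUpdates loop runs off the main actor, while RealityKit entity and mesh creation generally want to happen on it. Below is a hedged, reworked sketch of updateImage for the same model class, with the scene work hopped onto the main actor; the call site would become `await updateImage(update.anchor)`.

func updateImage(_ anchor: ImageAnchor) async {
    await MainActor.run {
        if self.imageAnchors[anchor.id] == nil {
            // Create the entity on the main actor and remember it per anchor.
            let entity = ModelEntity(mesh: .generateSphere(radius: 0.05))
            self.entityMap[anchor.id] = entity
            self.rootEntity.addChild(entity)
            self.imageAnchors[anchor.id] = true
        }
        if anchor.isTracked {
            // Keep the existing entity aligned with the tracked image.
            self.entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
        }
    }
}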
1
0
629
May ’24
Applying post-processing to SceneKit's Scene and saving it to a USDZ file
I am fairly new to 3D model rendering and do not know where to start. I am trying to, ideally with ARKit & RealityKit or SceneKit, do a scan of an environment. This includes:

- Applying realistic textures to the model.
- Being able to save it as a .usdz file (to be able to open it within the app itself).
- Once it is saved, doing post-processing measurements within the model.

I would prefer to accomplish this feature by using a mesh, instead of the point cloud that is used in Apple's sample project. Would this be doable using Apple's APIs and on a mobile device, or would it be necessary to use a third-party program? I have managed to create a USDZ file using SceneKit's scene.write(to:delegate:) method. However, the saved file is a "single object" and it is not possible to use raycasting to do post-processing measurements in the model.
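A hedged sketch of one way to make measurement ray casts work against the exported file, assuming the scan is loaded back with RealityKit rather than SceneKit: ray casts only hit entities that carry collision shapes, so shapes are generated over the whole hierarchy first. Note that generateCollisionShapes produces coarse box shapes per part; mesh-accurate shapes would need ShapeResource.generateStaticMesh.

import Foundation
import RealityKit

@MainActor
func loadScanForMeasurement(from url: URL, into arView: ARView) throws {
    // Load the exported USDZ back in and give every part a collision shape.
    let model = try Entity.load(contentsOf: url)
    model.generateCollisionShapes(recursive: true)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)

    // Later, a measurement tap can ray cast against the loaded model, e.g.:
    // let hits = arView.hitTest(tapPoint)   // returns CollisionCastHit results
}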
0
0
588
May ’24
VisionOS programmatically toggle .showSceneUnderstanding
I was executing some code from Incorporating real-world surroundings in an immersive experience:

func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}

I would like to toggle the occlusion mesh visualization that is available in the developer tools, but programmatically: I would like a button that activates and deactivates it. I was checking .showSceneUnderstanding, but it does not seem to work in visionOS. I get the error 'ARView' is unavailable in visionOS when I try what is shown in Visualizing and Interacting with a Reconstructed Scene.
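A hedged sketch of a programmatic stand-in, since ARView debug options aren't available on visionOS: give each mesh-anchor entity a ModelComponent built from the anchor geometry (that step is assumed, it is not in the snippet above) and have the button swap its materials between a visible material and an OcclusionMaterial.

import Foundation
import RealityKit
import UIKit

@MainActor
func setMeshVisualizationVisible(_ visible: Bool, for meshEntities: [UUID: ModelEntity]) {
    for entity in meshEntities.values {
        // Assumes each entity was given a ModelComponent from its mesh anchor's
        // geometry when it was added; entities without one are skipped.
        guard var model = entity.model else { continue }
        model.materials = visible
            ? [SimpleMaterial(color: .green.withAlphaComponent(0.6), isMetallic: false)]
            : [OcclusionMaterial()]
        entity.model = model
    }
}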
1
0
598
May ’24
Map ARSkeleton3D of ARBodyAnchor to normalized camera image
I am trying to map the 3D skeleton joint positions of an ARBodyAnchor to the real body on the camera image. I know I could simply use the "detectedBody" of the ARFrame, which would already deliver the normalized 2D position of each joint, but what I am mostly interested in is the z-axis (the distance of each joint to the camera).

I am starting an ARBodyTrackingConfiguration, setting the world alignment to ARWorldAlignmentCamera (in which case the camera transform is an identity matrix) and multiplying each joint transform in model space (via modelTransformForJointName:) with the transform of the ARBodyAnchor. I then tried many different ways to get the joints to line up with the image, for example by multiplying the transforms with the projectionMatrix of the ARCamera. But whatever I do, it never lines up correctly. For example, there doesn't really seem to be a scale factor in the projectionMatrix or the ARBodyAnchor transform: no matter the distance of the camera to the detected body, the scale of the body is always the same. Which means I am missing something important, and I haven't figured out what.

So does anyone have an example of how I can get the body aligned to the camera image? (Or get the distance to each joint in any other way?) Thanks!
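A hedged sketch of a projection route that avoids the manual matrix math: compute each joint's world position from the body anchor, then let ARCamera.projectPoint(_:orientation:viewportSize:) map it into image coordinates; the distance to the camera falls out of the same world position. The image orientation and the default (gravity) world alignment are assumptions to adapt.

import ARKit
import UIKit

func projectJoints(of bodyAnchor: ARBodyAnchor,
                   using frame: ARFrame) -> [String: (point: CGPoint, distanceToCamera: Float)] {
    let camera = frame.camera
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let cameraPosition = simd_make_float3(camera.transform.columns.3)

    var result: [String: (point: CGPoint, distanceToCamera: Float)] = [:]
    for jointName in [ARSkeleton.JointName.root, .head, .leftHand, .rightHand] {
        guard let modelTransform = bodyAnchor.skeleton.modelTransform(for: jointName) else { continue }
        // Joint position in world space = body anchor transform * joint model transform.
        let worldTransform = bodyAnchor.transform * modelTransform
        let worldPosition = simd_make_float3(worldTransform.columns.3)

        // Project into capturedImage pixel coordinates.
        let imagePoint = camera.projectPoint(worldPosition,
                                             orientation: .landscapeRight,
                                             viewportSize: imageSize)
        result[jointName.rawValue] = (imagePoint, simd_distance(worldPosition, cameraPosition))
    }
    return result
}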
1
0
567
May ’24
RealityKit PhotogrammetrySession not recognizing depth scale
I'm creating a custom scanning solution for iOS and using RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding the data I'm sending to it is ignoring the depth and not building the model to scale. The documentation is a little light on how to format the depth so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly? https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing Thank you!
1
0
1.8k
Jan ’22
Trying to use ARView session's currentFrame camera transforms to identify wall corners in capturedImage
I am trying to determine the corners of a RoomPlan-detected wall using the information available in the ARView session's frame, but can't quite figure out what I'm doing wrong. The corners appear to be correct relative to each other, but the wall appears too large when I render it. (I'm also not sure I'm handling the image rotation correctly either, which may be compounding my problem.) Here is the code I currently have, along with a sample image and the resulting image when I pass it through the perspective filter. It is close, but it isn't cropping the walls and floors correctly.

func captureSession(_ session: RoomCaptureSession, didChange room: CapturedRoom) {
    for surface in room.walls {
        if let frame = self.arView.session.currentFrame {
            var image: CGImage? = nil
            VTCreateCGImageFromCVPixelBuffer(frame.capturedImage, options: nil, imageOut: &image)

            let wallTransform = surface.transform
            let cameraTransform = frame.camera.transform
            let intrinsics = frame.camera.intrinsics
            let projectionMatrix = frame.camera.projectionMatrix
            let width = surface.dimensions.y
            let height = surface.dimensions.x

            let inverseCameraTransform = simd_inverse(cameraTransform)
            let wallTopRight = simd_float4(width/2, height/2, 0, 1)
            let wallTopLeft = simd_float4(-width/2, height/2, 0, 1)
            let wallBottomRight = simd_float4(width/2, -height/2, 0, 1)
            let wallBottomLeft = simd_float4(-width/2, -height/2, 0, 1)

            let worldTopRight = wallTransform * wallTopRight
            let worldTopLeft = wallTransform * wallTopLeft
            let worldBottomRight = wallTransform * wallBottomRight
            let worldBottomLeft = wallTransform * wallBottomLeft

            let cameraTopRight = projectionMatrix * inverseCameraTransform * worldTopRight
            let cameraTopLeft = projectionMatrix * inverseCameraTransform * worldTopLeft
            let cameraBottomRight = projectionMatrix * inverseCameraTransform * worldBottomRight
            let cameraBottomLeft = projectionMatrix * inverseCameraTransform * worldBottomLeft

            let imageTopRight = intrinsics * simd_float3(cameraTopRight.x / cameraTopRight.w, cameraTopRight.y / cameraTopRight.w, cameraTopRight.z / cameraTopRight.w)
            let imageTopLeft = intrinsics * simd_float3(cameraTopLeft.x / cameraTopLeft.w, cameraTopLeft.y / cameraTopLeft.w, cameraTopLeft.z / cameraTopLeft.w)
            let imageBottomRight = intrinsics * simd_float3(cameraBottomRight.x / cameraBottomRight.w, cameraBottomRight.y / cameraBottomRight.w, cameraBottomRight.z / cameraBottomRight.w)
            let imageBottomLeft = intrinsics * simd_float3(cameraBottomLeft.x / cameraBottomLeft.w, cameraBottomLeft.y / cameraBottomLeft.w, cameraBottomLeft.z / cameraBottomLeft.w)

            let topRight = CGPoint(x: CGFloat(imageTopRight.x), y: CGFloat(imageTopRight.y))
            let topLeft = CGPoint(x: CGFloat(imageTopLeft.x), y: CGFloat(imageTopLeft.y))
            let bottomRight = CGPoint(x: CGFloat(imageBottomRight.x), y: CGFloat(imageBottomRight.y))
            let bottomLeft = CGPoint(x: CGFloat(imageBottomLeft.x), y: CGFloat(imageBottomLeft.y))

            if let image {
                let filter = CIFilter.perspectiveCorrection()
                filter.inputImage = CIImage(image: UIImage(cgImage: image))
                filter.topRight = topRight
                filter.topLeft = topLeft
                filter.bottomRight = bottomRight
                filter.bottomLeft = bottomLeft
                let transformedImage = filter.outputImage
                if let transformedImage {
                    let context = CIContext()
                    if let outputImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
                        let wall = Wall(id: surface.identifier, image: outputImage, surface: surface)
                        self.walls.append(wall)
                    }
                }
            }
        }
    }
}
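A hedged alternative sketch (not the poster's code): skip the hand-rolled projection-matrix-plus-intrinsics pipeline and let ARCamera.projectPoint map the four wall corners into capturedImage coordinates. The width/height axis assignment and the image orientation below are assumptions to verify against RoomPlan's dimensions convention.

import ARKit
import RoomPlan
import UIKit

func imageCorners(of surface: CapturedRoom.Surface, in frame: ARFrame) -> [CGPoint] {
    let camera = frame.camera
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let halfW = surface.dimensions.x / 2   // assumption: x = wall width, y = wall height
    let halfH = surface.dimensions.y / 2

    // Corners in the wall's local space, ordered top-left, top-right, bottom-right, bottom-left.
    let localCorners: [simd_float4] = [
        simd_float4(-halfW,  halfH, 0, 1),
        simd_float4( halfW,  halfH, 0, 1),
        simd_float4( halfW, -halfH, 0, 1),
        simd_float4(-halfW, -halfH, 0, 1),
    ]

    return localCorners.map { corner in
        let world = surface.transform * corner
        return camera.projectPoint(simd_make_float3(world),
                                   orientation: .landscapeRight,   // assumption: match capturedImage orientation
                                   viewportSize: imageSize)
    }
}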
0
0
470
Apr ’24
Photogrammetry failed with crash(Assert: in line 417)
I'm developing a 3D scanner that works on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but other attempts are not. I've tried Photogrammetry with URL inputs; these are pictures from AVCapturePhoto. It is strange... if the metadata is not replaced, photogrammetry finishes, but it seems that no depthData or gravity info was used (depth and gravity are separate files). But if the metadata is injected, this attempt fails. This time I also tried Photogrammetry with a PhotogrammetrySamples sequence and it failed as well.

The settings are:
- camera: back LiDAR camera
- image format: kCVPixelFormatType_32BGRA (failed with crash) or HEVC (just failed)
- image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
- photoSetting: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true

I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture

What should I do to make Photogrammetry successful?
2
1
639
Nov ’23
When using ARImageTrackingConfiguration, the entity in the scene keeps shaking
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking. When isAutoFocusEnabled is set to false, there is no focus. How should I set things up to solve this problem?

override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
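One workaround worth trying (an assumption, not a confirmed fix): detect the same image through ARWorldTrackingConfiguration.detectionImages, so autofocus can stay on while world tracking stabilizes the anchor pose. The didAdd handler above can stay the same.

import ARKit
import RealityKit

func runWorldTrackingWithImageDetection(on arView: ARView, images: Set<ARReferenceImage>) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = images
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = true
    // Reset so the previously shaking anchors are discarded.
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}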
0
0
430
Apr ’24