Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

How to Detect Gaze & Gesture on Entity
It's a common system interaction to look at an item in SwiftUI and tap to select it. I'm confused about how to do the same with ModelEntities. How do I use gaze to select a ModelEntity for context-based actions? For example, look at the green sphere and tap to pull up a menu, or look in a direction and clap to **** away virtual objects, and so on. If this is not possible, is there a workaround?
Replies: 1 · Boosts: 0 · Views: 497 · Jun ’24
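One workaround, sketched below under the assumption that the content lives in a RealityView on visionOS: give the entity an InputTargetComponent and a CollisionComponent, then attach a SpatialTapGesture targeted to entities, and the system routes the tap to whichever entity the user is looking at. All names here are illustrative, not taken from the post above.

import SwiftUI
import RealityKit

struct TapToSelectView: View {
    var body: some View {
        RealityView { content in
            // A green sphere the user can look at and tap (illustrative content).
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .green, isMetallic: false)])
            // Both components are needed for the entity to receive gaze-plus-tap input.
            sphere.components.set(InputTargetComponent())
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity the user was looking at when they tapped.
                    print("Selected entity: \(value.entity.name)")
                }
        )
    }
}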
Can I use ultra-wide AVKit/AVPlayer on visionOS 2?
Hello, I watched the WWDC24 session on the ultra-wide Mac display. I'd like my player to present an mp4 movie file in a similar ultra-wide (curved) mode. Can I do that with AVKit and AVPlayer on visionOS 2? When I checked the Apple documentation, I found AVExperienceController.Experience.expanded. Is this the API for the ultra-wide mode I have in mind? (https://developer.apple.com/documentation/avkit/avexperiencecontroller/experience-swift.enum/expanded#discussion)
Replies: 0 · Boosts: 0 · Views: 304 · Jun ’24
Get distance from uvd and intrinsic matrix?
Hello! I am having trouble calculating accurate distances in the real world using the camera's returned intrinsic matrix and pixel coordinates/depths captured from the iPhone's LiDAR. For example, in the image below, I set a mug 0.5m from the phone. The mug is 8.5cm wide. The intrinsic matrix returned from the phone's AVCameraCalibrationData class has focalx = 1464.9269, focaly = 1464.9269, cx = 960.94916, and cy = 686.3547. Selecting the two pixel locations denoted in the image below, I calculated each one's xyz coordinates using the formula:

x = d * (u - cx) / focalx
y = d * (v - cy) / focaly
z = d

where I get depth from the appropriate pixel in the depth map - I've verified that both depths were 0.5m. I then calculate the distance between the two points to get the mug width. This gives me a calculated width of 0.0357, or 3.5 cm, instead of the 8.5cm I was expecting. What could be accounting for this discrepancy? Thank you so much for your help!
Replies: 5 · Boosts: 0 · Views: 2.3k · May ’22
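As a point of reference, here is the same unprojection written out in Swift, using the intrinsics quoted above. The pixel coordinates are hypothetical, and the sketch assumes (u, v) are expressed at the resolution the intrinsics were calibrated for (a mismatch between the depth-map resolution and the calibration resolution is a common source of this kind of scale error).

import simd

// Intrinsics quoted in the post above.
let fx: Float = 1464.9269, fy: Float = 1464.9269
let cx: Float = 960.94916, cy: Float = 686.3547

// Unproject a pixel (u, v) with depth d (metres) into camera space.
func unproject(u: Float, v: Float, d: Float) -> SIMD3<Float> {
    SIMD3(d * (u - cx) / fx, d * (v - cy) / fy, d)
}

// Hypothetical pixel pair on the left and right rim of the mug, both at 0.5 m depth.
let p1 = unproject(u: 830, v: 700, d: 0.5)
let p2 = unproject(u: 1079, v: 700, d: 0.5)
print("width: \(simd_distance(p1, p2)) m")   // ~0.085 m when (u, v) match the calibration resolution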
ARGeoAnchor: Supplement imagery in private locations
I am developing an iOS app intended to be used only at a specific location (a campus). In this case, I'd like to use ARGeoAnchor to anchor content across a relatively large space, though in the pedestrian-only areas this is not supported and tracking begins to fail. Is it possible to additionally use ARReferenceObject to re-localize to a specific location when I am walking in an unsupported area? (FB13719373)
Replies: 1 · Boosts: 0 · Views: 419 · Apr ’24
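For anyone exploring a similar setup, geotracking coverage can be checked per coordinate before relying on ARGeoAnchor, which helps decide where a fallback such as ARReferenceObject would have to take over. A rough sketch; the coordinate is hypothetical.

import ARKit
import CoreLocation

// Hypothetical campus coordinate.
let campus = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)

ARGeoTrackingConfiguration.checkAvailability(at: campus) { available, error in
    if available {
        // Geotracking imagery exists here; ARGeoAnchor should localize.
        print("Geo tracking available at this coordinate.")
    } else {
        // Fall back to another relocalization strategy in pedestrian-only areas.
        print("Geo tracking unavailable: \(error?.localizedDescription ?? "unknown reason")")
    }
}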
Object Tracking with Rotation of Objects
Hey, in the "Explore object tracking for visionOS" session we explore how a globe can be tracked, and objects can be anchored to various positions. My question is: if the physical globe is rotated, will the anchored objects also respond to this in real time? I would like to overlay a virtual map on top of a physical globe, so when the user rotates the physical globe, the virtual map also seamlessly responds. Is this possible using Object Tracking? Thanks
Replies: 3 · Boosts: 1 · Views: 604 · Jun ’24
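For context, on visionOS anchored content follows the ObjectAnchor updates that ObjectTrackingProvider delivers, so how closely a virtual overlay follows the physical object's rotation comes down to applying each update's transform. A rough sketch, with illustrative names for the reference object and the map entity:

import ARKit
import RealityKit

// Illustrative: a previously loaded reference object and an entity to keep aligned with it.
func trackGlobe(referenceObject: ReferenceObject, mapEntity: Entity) async throws {
    let session = ARKitSession()
    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([objectTracking])

    for await update in objectTracking.anchorUpdates {
        // Each update carries the anchor's current pose, including its rotation.
        mapEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        mapEntity.isEnabled = update.anchor.isTracked
    }
}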
Object Tracking Failed to load from object reference
I was following Explore object tracking for visionOS to load an object reference, but got this error:

Failed to load reference object from URL: ObjectTrackingProvider.Error(code: referenceObjectLoadingFailed, errorDescription: "The operation couldn’t be completed. (com.apple.arkit error 1101.)", failureReason: "", recoverySuggestion: ""

Here is what I have; I'm not sure if it is a code error or something with the system:

private func loadReferenceObject() {
    Task {
        // Load the reference object
        let refObjURL = Bundle.main.url(forResource: "objectTrackerBox", withExtension: ".referenceobject")
        if let refObjURL = refObjURL {
            do {
                let refObj = try await ReferenceObject(from: refObjURL)
                logMessage = "Reference object loaded successfully: \(refObj)"
                print(logMessage)
            } catch {
                logMessage = "Failed to load reference object from URL: \(error)"
                print(logMessage)
            }
        } else {
            logMessage = "Failed to find the reference object file."
            print(logMessage)
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 509 · Jun ’24
RealityKit on iOS: New anchor entity takes ages to show up
I'm implementing an AR app with Image Tracking capabilities. I noticed that it takes a very long time for the entities I want to overlay on a detected image to show up in the video feed. When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly; the anchor origins show up almost immediately. I can also see that my code reacts by adding new anchors for my ModelEntities there. However, it takes ages for these ModelEntities to actually show up; they only appear after a while if I move the camera a lot. What might be the reason for this behaviour? I also noticed that for the first image target, a huge number of anchors are created. They start from the image and go all the way up towards the user. This does not happen with subsequent (other) image targets.
Replies: 1 · Boosts: 0 · Views: 398 · Jun ’24
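For comparison, a minimal version of an image-anchored overlay using RealityKit's AnchorEntity. The function, resource group, and image names below are placeholders, not the poster's code.

import ARKit
import RealityKit

// A minimal overlay setup for a detected image target; names are placeholders.
func addImageOverlay(to arView: ARView) {
    let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "Poster"))

    // Content that should appear on top of the detected image.
    let marker = ModelEntity(mesh: .generatePlane(width: 0.1, depth: 0.1),
                             materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
    imageAnchor.addChild(marker)

    arView.scene.addAnchor(imageAnchor)
}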
Questions about WorldTrackedAnchor Resiliency
Background: The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene. From the documentation and discussions I have had, World Tracked Anchors are local to the device. My questions are: What happens to these anchors when the user upgrades their device to the next generation? What happens to these anchors if the user gets an AppleCare replacement? Are they backed up and restored via iCloud? If not, I filed a feedback about it a few months back :D FB13613066
Replies: 1 · Boosts: 0 · Views: 640 · Jun ’24
Generating accurate CollisionComponent
Currently, in an app I am working on, we add collision shapes/components to objects by using the ShapeResource.generateConvex method to generate the shape from the mesh of our ModelEntity. Unfortunately, this does not result in a totally accurate collision shape. The following example shows how the collision component currently looks. Is there any way to generate a collision shape that fits the exact bounds of the ModelEntity?
Replies: 2 · Boosts: 1 · Views: 654 · May ’24
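One hedged alternative, assuming the mesh's triangle data is accessible: build a static (non-convex) shape with ShapeResource.generateStaticMesh, which follows the triangles exactly but only suits static, non-simulated bodies. The positions and faceIndices arrays below are stand-ins for whatever the app extracts from its ModelEntity.

import RealityKit
import simd

// Stand-in geometry; in practice these would come from the ModelEntity's mesh.
let positions: [SIMD3<Float>] = [
    SIMD3(0, 0, 0), SIMD3(1, 0, 0), SIMD3(1, 1, 0), SIMD3(0, 1, 0)
]
let faceIndices: [UInt16] = [0, 1, 2, 0, 2, 3]

func addExactCollision(to entity: ModelEntity) async {
    do {
        // A static mesh shape matches the triangles exactly, unlike generateConvex,
        // but it can only be used for static collision/physics bodies.
        let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                               faceIndices: faceIndices)
        entity.components.set(CollisionComponent(shapes: [shape], isStatic: true, filter: .default))
    } catch {
        print("Failed to generate static mesh shape: \(error)")
    }
}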
Progressive immersive space and Digital Crown (and ARKit)
I am new to visionOS development, just slowly figuring out the difference in immersion styles to figure out how I want my app to behave. It seems that when you use a progressive immersive space the minimum immersion level (set via the digital crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there some settings that allow me to change this minimum immersion level?

Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the shared space, with other apps)? OR, is ARKit only available in an ImmersiveSpace with the .full immersion style? Just feels like maybe 'full' is being used in two different ways here...

Thanks in advance, -pj
Replies: 2 · Boosts: 0 · Views: 1.1k · Feb ’24
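On the second question above: an ImmersiveSpace counts as a Full Space regardless of its immersion style, so ARKit data providers are available in .mixed and .progressive spaces too; "full" in "Full Space" refers to the app having the space to itself, not to the .full style. A minimal sketch of declaring the styles (app, space, and view names are placeholders):

import SwiftUI

@main
struct ImmersionDemoApp: App {   // placeholder app name
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            // Placeholder content; a RealityView would go here in a real app.
            // An ARKitSession can run here because an ImmersiveSpace (a Full Space)
            // is open, whatever its immersion style.
            Text("Immersive content")
        }
        // The Digital Crown then adjusts immersion within the progressive style's range.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}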
How do I use RoomAnchor?
What I want to do: I want to turn only the walls of a room into RealityKit entities that I can collide with, or turn into occlusion surfaces. This requires adding and maintaining RealityKit entities with mesh information from the RoomAnchor. It also requires creating a "collision shape" from the mesh information.

What I've explored: A RoomAnchor can provide me MeshAnchor.Geometry's that match only the "wall" portions of a room. I can use this mesh information to create RealityKit entities and add them to my immersive view. But those meshes don't come with UUIDs, so I'm not sure how I could know which entities' meshes need to be updated as the RoomAnchor is updated. As such, I just keep adding duplicate wall entities. A RoomAnchor also provides me with the UUIDs of its plane anchors, but no way to connect those to the provided meshes that I've discovered so far.

Here is how I add the green walls from the RoomAnchor wall meshes. Note: I don't like that I need to wrap this in a task to satisfy the async nature of making a shape from a mesh; I could be stuck with it, though. Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D.

func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial.init(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices, faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true, filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on the plane.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }
            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 631 · Jun ’24
Trouble loading a reference object from Asset Catalog
Hi, I'm trying to test object recognition using ARKit. I scanned a couple of objects using the Apple demo app and copied the .arobject files to my laptop. I added them to my new project in Assets as shown in the image. However, as I follow the tutorial to load these objects to use as reference objects, I run into an error.

let configuration = ARWorldTrackingConfiguration()
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "Test", bundle: Bundle.main) else {
    let ro = ARReferenceObject.referenceObjects(inGroupNamed: "Test", bundle: nil);
    let ro1 = ARReferenceObject.referenceObjects(inGroupNamed: "Gallery", bundle: .main);
    fatalError("Resource not found")
}

Here, we fail the guard statement, and both ro and ro1 are set to nil. I created a new project with just this one statement, and that fails too. I'm using SwiftUI instead of UIKit, if that makes a difference, and am calling this in the makeUIView() function. Any pointers to what I might be doing wrong here are appreciated.
Replies: 1 · Boosts: 1 · Views: 413 · Jun ’24
ARKit : Couldn't load AR reference object from URL or AR Resource group
I have an .arobject file that's already been tested and works perfectly in Reality Composer as an anchor. But for whatever reason, whether I add it to an AR Resource group in my asset catalog or load it directly from the URL, it always fails (returns nil). I've double-checked all the file/group names and they seem fine; I couldn't find the error, it's just always nil. This is my code:

var referenceObject: ARReferenceObject?

if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "TestAR", bundle: Bundle.main) {
    referenceObject = referenceObjects[referenceObjects.startIndex]
}

if let referenceObject = referenceObject {
    delegate.didFinishScan(referenceObject, false)
} else {
    do {
        if let url = Bundle.main.url(forResource: "Dragonball1", withExtension: "arobject") {
            referenceObject = try ARReferenceObject(archiveURL: url)
        }
    } catch let myError {
        let error = myError as NSError
        print("try \(error.code)")
    }
}

Any idea? Thanks
Replies: 1 · Boosts: 1 · Views: 705 · Aug ’22