Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Index out of range in CoreRE framework
Hi, I encountered an issue when running the Underwater RealityKit app example (https://developer.apple.com/documentation/realitykit/building_an_immersive_experience_with_realitykit). The target device was an iPhone 14 Pro running iOS 17.1. The same issue occurs in my own project, so the bug is not in the Underwater app itself. The crash:

Crashed thread: Render
Function: re::DataArray<re::MeshInstance>::get(re::DataArrayHandle<re::MeshInstance>) const + 80
Message: Index out of range in operator[]. index = 18 446 744 073 709 551 615, maximum = 112
Replies: 0 · Boosts: 1 · Views: 331 · Oct ’23
Exporting scripts to a USDZ file in Reality Composer Pro
Hello. I've started exploring the new features in Reality Composer Pro and noticed that it now supports adding custom scripts as components to any object in the scene. I'm curious about the following: will these scripts work if I export such a scene to a USDZ file and open it with Apple Quick Look? For instance, I want to add a 3D button and a cube model; when I press (touch) the button, I want to change the material or material color to another one using a script component. Is such functionality possible?
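To make the question concrete, this is roughly what I'd write in native RealityKit app code for the tap-to-change-material behaviour (only a sketch; the class name and color are placeholders, and it assumes the tapped entity has collision shapes so arView.entity(at:) can hit-test it). What I'm asking is whether anything equivalent can ship inside a USDZ for Quick Look.

import RealityKit
import UIKit

// Hypothetical sketch: swap a tapped model's material in app code.
// Assumes the entity has collision shapes (generateCollisionShapes(recursive:))
// so that arView.entity(at:) can find it; the green color is a placeholder.
final class TapMaterialSwitcher: NSObject {
    private weak var arView: ARView?

    init(arView: ARView) {
        self.arView = arView
        super.init()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ sender: UITapGestureRecognizer) {
        guard let arView else { return }
        let location = sender.location(in: arView)
        // entity(at:) returns the topmost entity with collision under the tap point.
        guard let tapped = arView.entity(at: location) as? ModelEntity else { return }
        tapped.model?.materials = [SimpleMaterial(color: .green, roughness: 0.3, isMetallic: false)]
    }
}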
Replies: 0 · Boosts: 0 · Views: 636 · Oct ’23
ARKitCoachingOverlay translation / localization
Hi, is it possible to localize the text shown by the ARKitCoachingOverlay? https://miro.medium.com/v2/resize:fit:640/format:webp/1*FDzypCQtuU10Ky203NQr-A.png I'm developing with Unity and use this sample script: https://github.com/Unity-Technologies/arfoundation-samples/blob/main/Assets/Scenes/ARKit/ARKitCoachingOverlay/ARKitCoachingOverlay.cs Ideally, could you describe how I can get the text translated into German? Thanks
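For context, in a native (non-Unity) setup the overlay is attached roughly like the sketch below; its text comes from the system rather than from my code, so I assume the displayed language depends on the localizations the app itself declares (e.g. adding German to the Xcode project). The goal and layout here are placeholders.

import ARKit
import RealityKit
import UIKit

// Sketch: attach the system coaching overlay to an existing ARView.
// The overlay's text is supplied by iOS, not by the app.
func addCoachingOverlay(to arView: ARView) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = arView.session
    coachingOverlay.goal = .horizontalPlane        // placeholder goal
    coachingOverlay.activatesAutomatically = true
    coachingOverlay.translatesAutoresizingMaskIntoConstraints = false
    arView.addSubview(coachingOverlay)
    NSLayoutConstraint.activate([
        coachingOverlay.centerXAnchor.constraint(equalTo: arView.centerXAnchor),
        coachingOverlay.centerYAnchor.constraint(equalTo: arView.centerYAnchor),
        coachingOverlay.widthAnchor.constraint(equalTo: arView.widthAnchor),
        coachingOverlay.heightAnchor.constraint(equalTo: arView.heightAnchor)
    ])
}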
Replies: 2 · Boosts: 0 · Views: 307 · Nov ’23
How do we author a "reality file" like the ones on Apple's Gallery?
How do we author a Reality File like the ones under Examples with animations at https://developer.apple.com/augmented-reality/quick-look/? For example, "The Hab": https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality Tapping the various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer, and I don't see any way to export or compile a "reality file" from within Xcode. How can I use multiple animations within a single glTF file? How can I set up multiple "tap targets" on a single object, where each one triggers a different action? How do we author something similar, and what tools do we use? Thanks
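To make the runtime half of my question concrete, here is a rough RealityKit sketch of playing one of several animations bundled in a USDZ when its entity is tapped. This is only an assumption about one way to get similar behaviour in code, not how Apple's gallery files were built; "Model" is a placeholder asset name, and the entity needs collision shapes for hit testing.

import RealityKit
import UIKit

// Sketch: load a USDZ with baked animations and prepare it for tap hit tests.
// "Model" is a placeholder for a .usdz file bundled with the app.
func loadTappableModel(into arView: ARView) throws {
    let entity = try ModelEntity.loadModel(named: "Model")
    entity.generateCollisionShapes(recursive: true)   // needed for arView.entity(at:)

    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
}

// Inside a tap handler: play one of the animations stored in the file.
func playFirstAnimation(of entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    entity.playAnimation(animation, transitionDuration: 0.2)
}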
Replies: 5 · Boosts: 1 · Views: 1.1k · Nov ’23
Object recognition and tracking on visionOS
Hello! I would like to develop a visionOS application that tracks a single object in the user's environment. Skimming through the documentation, I found that this feature is currently unsupported in ARKit (we can only recognize images), but it seems it should be doable by combining the CoreML and Vision frameworks. So I have a few questions: Is this the best approach, or is there a simpler solution? What is the best way to train a CoreML model without access to the device? Will videos recorded with an iPhone 15 be enough? Thank you in advance for all the answers.
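For what it's worth, the CoreML + Vision route I'm imagining looks roughly like the sketch below. MyObjectDetector is a placeholder for whatever model I'd train in Create ML, and I'm not sure where the pixel buffers would come from on visionOS, which is part of my question.

import Vision
import CoreML
import CoreVideo

// Sketch: run a Core ML object detector on a pixel buffer with Vision.
// `MyObjectDetector` is a hypothetical, auto-generated model class.
func detectObjects(in pixelBuffer: CVPixelBuffer) throws -> [VNRecognizedObjectObservation] {
    let coreMLModel = try MyObjectDetector(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel)
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])

    // Object detectors return recognized-object observations with bounding boxes.
    return request.results as? [VNRecognizedObjectObservation] ?? []
}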
Replies: 1 · Boosts: 0 · Views: 665 · Nov ’23
ARKit: Anchor and Entity appear disabled outside the Xcode simulator
Issue condition: The code below runs successfully, but a connected device did not render the newly added anchor and scene, while the Xcode simulator did. Is there any device-specific context that would halt rendering of the scene? isEnabled == true, but the anchor instance below was unexpectedly invisible.

let sphere = MeshResource.generateSphere(radius: 0.03)
let material = SimpleMaterial(color: sphereColor, roughness: 0, isMetallic: true)
let entityChild = ModelEntity(mesh: sphere, materials: [material])
let anchor = AnchorEntity(world: position)
anchor.addChild(entityChild)
scene.addAnchor(anchor)

"AR Tacks.keynote", page 5, contains this issue condition as an image.

Technologies involved: ARKit, RealityKit
Replies: 0 · Boosts: 0 · Views: 242 · Nov ’23
ARKit, visionOS: Creating my own data provider
As the scene data providers in ARKit are not supported in the visionOS simulator, I tried to create my own with dummy data. As soon as I try to run an ARKit session with an instance of that provider, I get a crash (EXC_BREAKPOINT). So what am I doing wrong?

Definition of the data provider:

@available(visionOS 1.0, *)
public final class TestSceneDataProvider: DataProvider, Equatable, @unchecked Sendable {
    public static func == (lhs: TestSceneDataProvider, rhs: TestSceneDataProvider) -> Bool {
        lhs.id == rhs.id
    }

    public typealias ID = UUID
    public var id: UUID = UUID()
    public static var isSupported: Bool = true
    public static var requiredAuthorizations: [ARKitSession.AuthorizationType] = []
    public var state: DataProviderState = .initialized
    public var description: String = "TestSceneDataProvider"
}

Running the session:

do {
    if TestSceneDataProvider.isSupported {
        print("ARKitSession starting.")
        let sceneReconstruction = TestSceneDataProvider()
        try await session.run([sceneReconstruction])
    }
} catch {
    print("ARKitSession error:", error)
}
Replies: 2 · Boosts: 0 · Views: 519 · Nov ’23
Mapping 3D coordinates to the screen using Entity's convert(position:from:) and ARView's project(_:) methods.
I'm still trying to understand how to correctly convert 3D coordinates to 2D screen coordinates using convert(position:from:) and project(_:). Below is the example ContentView.swift from the default Augmented Reality App project, with a few important modifications. Two buttons have been added: one toggles the visibility of red circular markers on the screen, and a second adds blue spheres to the scene. Additionally, a timer has been added to trigger regular screen updates. When run, the markers should line up with the spheres and follow them on screen as the camera is moved around. However, the red circles are all very far from their corresponding spheres. What am I doing wrong in my conversion that causes the circles to not line up with the spheres?

// ContentView.swift
import SwiftUI
import RealityKit

class Coordinator {
    var arView: ARView?
    var anchor: AnchorEntity?
    var objects: [Entity] = []
}

struct ContentView : View {
    let timer = Timer.publish(every: 1.0/30.0, on: .main, in: .common).autoconnect()
    var coord = Coordinator()
    @State var showMarkers = false
    @State var circleColor: Color = .red

    var body: some View {
        ZStack {
            ARViewContainer(coordinator: coord).edgesIgnoringSafeArea(.all)
            if showMarkers {
                // Add circles to the screen
                ForEach(coord.objects) { obj in
                    Circle()
                        .offset(projectedPosition(of: obj))
                        .frame(width: 10.0, height: 10.0)
                        .foregroundColor(circleColor)
                }
            }
            VStack {
                Button(action: {
                    showMarkers = !showMarkers
                }, label: {
                    Text(showMarkers ? "Hide Markers" : "Show Markers")
                })
                Spacer()
                Button(action: {
                    addSphere()
                }, label: {
                    Text("Add Sphere")
                })
            }
        }.onReceive(timer, perform: { _ in
            // silly hack to force circles to redraw
            if circleColor == .red {
                circleColor = Color(#colorLiteral(red: 1, green: 0, blue: 0, alpha: 1))
            } else {
                circleColor = .red
            }
        })
    }

    func addSphere() {
        guard let anchor = coord.anchor else { return }

        // pick random point for new sphere
        let pos = SIMD3<Float>.random(in: 0...0.5)
        print("Adding sphere at \(pos)")

        // Create a sphere
        let mesh = MeshResource.generateSphere(radius: 0.01)
        let material = SimpleMaterial(color: .blue, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.setPosition(pos, relativeTo: anchor)
        anchor.addChild(model)

        // record sphere for later use
        coord.objects.append(model)
    }

    func projectedPosition(of object: Entity) -> CGPoint {
        // convert position of object into "world space"
        // (i.e., "the 3D world coordinate system of the scene")
        // https://developer.apple.com/documentation/realitykit/entity/convert(position:to:)
        let worldCoordinate = object.convert(position: object.position, to: nil)

        // project worldCoordinate into "the 2D pixel coordinate system of the view"
        // https://developer.apple.com/documentation/realitykit/arview/project(_:)
        guard let arView = coord.arView else { return CGPoint(x: -1, y: -1) }
        guard let screenPos = arView.project(worldCoordinate) else { return CGPoint(x: -1, y: -1) }

        // At this point, screenPos should be the screen coordinate of the object's position on the screen.
        print("3D position \(object.position) mapped to \(screenPos) on screen.")
        return screenPos
    }
}

struct ARViewContainer: UIViewRepresentable {
    var coordinator: Coordinator

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Create a sphere model
        let mesh = MeshResource.generateSphere(radius: 0.01)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])

        // Create horizontal plane anchor for the content
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)

        // Record values needed elsewhere
        coordinator.arView = arView
        coordinator.anchor = anchor
        coordinator.objects.append(model)

        // Add the horizontal plane anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#Preview {
    ContentView()
}
Replies: 2 · Boosts: 0 · Views: 1k · Nov ’23
Photogrammetry failed with a crash (Assert: in line 417)
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but my other attempts have not been. I've tried Photogrammetry with URL inputs, using pictures from AVCapturePhoto. It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depthData or gravity info was used (depth and gravity are separate files); if the metadata is injected, the attempt fails. I also tried Photogrammetry with a PhotogrammetrySamples sequence, and it failed as well.

The settings are:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (failed with crash) or HEVC (just failed)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true

I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture

What should I do to make Photogrammetry succeed?
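For clarity, here is a reduced sketch of the folder-URL request flow I'm describing; the input and output paths and the detail level are placeholder values, not my exact code.

import RealityKit

// Sketch of a folder-URL PhotogrammetrySession request.
// `imagesFolder` and `outputURL` are placeholder paths.
func runPhotogrammetry() async throws {
    let imagesFolder = URL(fileURLWithPath: "/path/to/captured/images", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/path/to/output/model.usdz")

    let session = try PhotogrammetrySession(
        input: imagesFolder,
        configuration: PhotogrammetrySession.Configuration()
    )

    // Ask for a reduced-detail model file (placeholder detail level).
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])

    // Watch the output stream for completion or per-request errors.
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished.")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}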
Replies: 2 · Boosts: 1 · Views: 584 · Nov ’23
FPS drop while updating ModelEntity position
Hi there. I have a performance issue when updating a ModelEntity's position. There are two models with the same parent:

arView.scene.addAnchor(anchorEntity)
anchorEntity.addChild(firstModel)
anchorEntity.addChild(secondModel)

firstModel is a very large model. I take the position of the second model and apply it to the first:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // ...
    // Here the FPS drops
    firstModel.position = secondModel.position
    // ...
}

In other 3D engines, changing the transform matrix does not affect performance. You can change it a hundred times in a single frame; only the last value is rendered on the next frame. That means changing the position itself should not cause an FPS drop: if the frame rate is low, it will always be low, because there is always a value in the transform matrix and the renderer always renders what is stored there. If you change the value, the next frame is simply rendered with the new value, and nothing heavy should happen. But in my case the FPS drops only when the model's position is changed; if it isn't, the FPS is 60. So changing the transform matrix is what causes the FPS drop. Can anyone explain why RealityKit's renderer works this way?
Replies: 0 · Boosts: 0 · Views: 393 · Nov ’23
When "zooming" in on the camera, the "joint information" comes out wrong.
Hello. I am a student who has just started studying Swift. I have succeeded in obtaining body joint information through skeleton.jointLandmarks on a normal (unzoomed) screen, but whenever I zoom in, the joint information no longer lands on the human body and shifts sideways and downward. My guess is that the center of the AR image may no longer be located at the center of the phone screen. I've been searching for information for three days because of this problem, but I couldn't find a similar case and haven't been able to solve it. If there is a known solution to a similar problem, I would appreciate it if you could let me know. The link below is how I zoomed in on the ARView screen: https://stackoverflow.com/questions/64896064/can-i-use-zoom-with-arview Thank you. Below is the issue I'm currently having.
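For reference, here is a sketch of how I'd map a normalized joint landmark into view coordinates using displayTransform(for:viewportSize:). The function and variable names are placeholders, the orientation is hard-coded, and I'm not sure this accounts for the extra zoom applied to the capture device, which may be related to the offset I'm seeing.

import ARKit
import RealityKit
import UIKit

// Hypothetical helper: maps a normalized 2D joint landmark (capture-image space)
// into view coordinates. `frame` is the current ARFrame, `arView` the rendering view.
func viewPoint(for landmark: simd_float2, frame: ARFrame, arView: ARView) -> CGPoint {
    let viewportSize = arView.bounds.size
    // displayTransform converts normalized image coordinates into normalized
    // view coordinates, accounting for the view's aspect-fill crop and rotation.
    let transform = frame.displayTransform(for: .portrait, viewportSize: viewportSize)
    let normalized = CGPoint(x: CGFloat(landmark.x), y: CGFloat(landmark.y)).applying(transform)
    return CGPoint(x: normalized.x * viewportSize.width,
                   y: normalized.y * viewportSize.height)
}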
Replies: 0 · Boosts: 0 · Views: 442 · Nov ’23
Reality Converter smoothing all geometry
This wasn't happening with older versions of Reality Converter... When I open a glTF file created in Cinema 4D, all corners are smoothed, completely transforming the object's geometry. Reality Converter was a very useful tool for quick object testing: save to USDZ and take a look on an iPhone. Unfortunately, with this smoothing it is no longer usable. Does anyone know why this is happening and how to turn it off? The first screenshot is from C4D, and the second is the same object imported into Reality Converter.
Replies: 0 · Boosts: 0 · Views: 259 · Nov ’23
USD particles not supported in RealityKit on iOS (*not visionOS*)
RealityKit on iOS doesn't appear to support particles. After exporting particles from Blender 4.0.1 in the standard .usdz format, the particle system renders correctly in Finder and Reality Converter, but when it is loaded and anchored in RealityKit, nothing happens. This appears to be a bug in RealityKit. I tried one or more particle instances, and nothing renders.
Replies: 0 · Boosts: 0 · Views: 656 · Nov ’23
Plane detection does not work in simulators
I have configured ARKit and PlaneDetectionProvider, but after running the code in the simulator, the plane entities are not displayed correctly.

import Foundation
import ARKit
import RealityKit

class PlaneViewModel: ObservableObject {
    var session = ARKitSession()
    let planeData = PlaneDetectionProvider(alignments: [.horizontal])
    var entityMap: [UUID: Entity] = [:]
    var rootEntity = Entity()

    func start() async {
        do {
            if PlaneDetectionProvider.isSupported {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    if update.anchor.classification == .window { continue }
                    switch update.event {
                    case .added, .updated:
                        updatePlane(update.anchor)
                    case .removed:
                        removePlane(update.anchor)
                    }
                }
            }
        } catch {
            print("ARKit session error \(error)")
        }
    }

    func updatePlane(_ anchor: PlaneAnchor) {
        if entityMap[anchor.id] == nil {
            // Add a new entity to represent this plane.
            let entity = ModelEntity(
                mesh: .generateText(anchor.classification.description)
            )
            entityMap[anchor.id] = entity
            rootEntity.addChild(entity)
        }
        entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
    }

    func removePlane(_ anchor: PlaneAnchor) {
        entityMap[anchor.id]?.removeFromParent()
        entityMap.removeValue(forKey: anchor.id)
    }
}

And the SwiftUI view:

var body: some View {
    @StateObject var planeViewModel = PlaneViewModel()

    RealityView { content in
        content.add(planeViewModel.rootEntity)
    }
    .task {
        await planeViewModel.start()
    }
}
Replies: 1 · Boosts: 0 · Views: 584 · Dec ’23