Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag
200 Posts
Post | Replies | Boosts | Views | Activity

[App synchronization] I have a question about synchronizing Vision Pro app contents.
Hi! I'm creating an app that works like this: it uses image tracking to set a world anchor in the real world first, and the timeline in the Reality Composer Pro scene then needs to play at the same time for everyone in the same place using the app, so that all of them see the same content in the same position at the same moment. I already have the image-tracking feature working, but the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project. How do I solve this problem technically? If you have ideas, please let me know; I really need help with this. (A sketch of one possible approach follows below.)
0
0
75
13h
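A minimal sketch of the GroupActivities approach mentioned above, assuming the participants are already in a SharePlay/FaceTime session; the activity, message type, and playTimeline(at:) call are hypothetical placeholders, not an established API for this app:

import Foundation
import GroupActivities

// Hypothetical activity describing the shared AR experience.
struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared AR Scene"
        meta.type = .generic
        return meta
    }
}

// Hypothetical message telling every participant when to start the timeline.
struct StartTimelineMessage: Codable, Sendable {
    let startDate: Date
}

// Placeholder for scene-specific code that waits until `date`,
// then plays the Reality Composer Pro timeline on the anchored scene.
func playTimeline(at date: Date) async {}

func joinSharedSession() async {
    for await session in SharedSceneActivity.sessions() {
        let messenger = GroupSessionMessenger(session: session)

        // Every participant listens for the start signal.
        Task {
            for await (message, _) in messenger.messages(of: StartTimelineMessage.self) {
                await playTimeline(at: message.startDate)
            }
        }

        session.join()
    }
}

func broadcastStart(using messenger: GroupSessionMessenger) async {
    // Called by whichever participant drives playback; a start date slightly in
    // the future gives everyone time to receive the message before it fires.
    try? await messenger.send(StartTimelineMessage(startDate: Date().addingTimeInterval(2)))
}

One participant still needs to activate the activity (SharedSceneActivity().activate()), and the image-tracked anchor remains per-device; GroupActivities only synchronizes timing and state, while TabletopKit is aimed at shared seated experiences, so which fits better depends on the interaction model.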
Horizontal window/map on visionOS
Hi, would anyone be so kind as to guide me on which technologies, kits, APIs, and approaches are useful for creating a horizontal window with a map (preferably MapKit) on visionOS using SwiftUI? The scenario I'm hoping to achieve: the user can walk around and interact with a horizontal map window and also interact with (3D) pins on the map. Something similar was done by SAP in their "SAP Analytics Cloud" app (second image from top). Since I am a complete beginner in this area, I'm looking for a clean, simple solution. Do I need AR/RealityKit, or is this achievable using native SwiftUI alone? I tried using just Map() with .rotation3DEffect(), which does make the map horizontal, but gestures on the map are out of sync, and I really don't know whether this approach is valid or complete rubbish. Any feedback appreciated; one possible direction is sketched below.
0
0
54
16h
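One possible direction, sketched under the assumption that a RealityView attachment is acceptable: render the Map as an attachment entity and rotate the entity rather than the SwiftUI view, so hit testing happens before the rotation is applied. The attachment id, sizes, and placement are arbitrary:

import SwiftUI
import RealityKit
import MapKit

struct HorizontalMapView: View {
    var body: some View {
        RealityView { content, attachments in
            if let mapEntity = attachments.entity(for: "map") {
                // Lay the attachment flat by rotating the entity, not the view.
                mapEntity.orientation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
                mapEntity.position = [0, -0.3, -0.5]
                content.add(mapEntity)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "map") {
                Map()
                    .frame(width: 600, height: 600)
            }
        }
    }
}

Whether MapKit's pan/zoom gestures behave correctly inside a rotated attachment would need testing; 3D pins could then be separate entities positioned over the attachment rather than SwiftUI annotations.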
AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, and I make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces? About my app: there are two main contexts/scenes that the user progresses through. The first takes place in AR, is non-interactive, and is driven by a timeline animation. The second is in VR and allows the user to change the materials of selected models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we fall back to plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes. (One way to carry the pose across spaces is sketched below.)
0
0
56
18h
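A rough sketch of one way to carry the pose across spaces, assuming visionOS ARKit data providers: capture the image anchor's transform, optionally pin it as a WorldAnchor, and place content in the second space from the stored transform. Whether the providers keep running across an immersive-space transition is exactly the open question, so treat this as a starting point; the AnchorStore type and its methods are placeholders:

import ARKit
import RealityKit

@MainActor
final class AnchorStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    // Persist only the transform; the WorldAnchor itself is owned by the system.
    var savedTransform: simd_float4x4?

    func run(imageTracking: ImageTrackingProvider) async throws {
        try await session.run([imageTracking, worldTracking])

        for await update in imageTracking.anchorUpdates where update.event == .added {
            let transform = update.anchor.originFromAnchorTransform
            savedTransform = transform
            // Optionally pin it as a world anchor so it also survives relaunches.
            try await worldTracking.addAnchor(WorldAnchor(originFromAnchorTransform: transform))
        }
    }

    // In the second (VR) immersive space, place content relative to the saved pose.
    func placeContent(_ entity: Entity, in content: RealityViewContent) {
        guard let transform = savedTransform else { return }
        entity.transform = Transform(matrix: transform)
        content.add(entity)
    }
}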
RoomCaptureSession persistence, ARSession pause broken?
Hi all, our app lets a user scan a room, save that scan on a separate view, and then do additional scans. We're looking into combining rooms via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit the first scenario Apple described. The only approach I've found that meets our requirements is to save the RoomCaptureView and reuse it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to keep a view alive in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we reuse the same RoomCaptureView and an error occurs during scanning, we can't get the instructions overlay to appear again (specifically, the instructions in the middle of the view that say "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs. These instructions also seem to be separate from the ones we can get from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. I also attempted a number of other things to get these subviews to show, such as layoutIfNeeded(). Saving the ARSession and creating a new view with it via let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession) seems much more ideal, as it solves the issues above, but we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described (the shared-session setup is sketched below for reference). Is there any way to avoid reusing the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without re-localizing via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
0
2
55
20h
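For reference, a condensed sketch of the shared-ARSession pattern being described, using the iOS 17 RoomCaptureView(frame:arSession:) initializer and stop(pauseARSession: false); per the post, world tracking still appears to be lost between scans with this configuration, so this only documents the setup under discussion. ScanCoordinator is a placeholder name:

import ARKit
import RoomPlan
import UIKit

final class ScanCoordinator {
    // One ARSession shared across scans so they can share a world map.
    let arSession = ARSession()
    private var captureView: RoomCaptureView?

    func makeCaptureView(bounds: CGRect) -> RoomCaptureView {
        // iOS 17+: hand the existing ARSession to each new RoomCaptureView.
        let view = RoomCaptureView(frame: bounds, arSession: arSession)
        captureView = view
        return view
    }

    func startScan() {
        captureView?.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func finishScan() {
        // Keep the underlying ARSession alive for the next scan.
        captureView?.captureSession.stop(pauseARSession: false)
    }
}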
Maintain rotation sense while dragging
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel with cards, and when I use the drag gesture and drag to the left I want the carousel to rotate clockwise, and when I drag to the right I want it to rotate counterclockwise. My problem is that when I rotate my body more than 90 degrees to the left or right, the drag gesture's values change and the carousel rotates in the opposite direction. Do you know how I can maintain the correct rotation direction, taking into account that the user can rotate their body? I've tried getting the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there's a small range, roughly between 70 and 110 degrees, where it still rotates the wrong way. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy. (An orientation-independent approach is sketched below.)
0
0
85
1d
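One orientation-independent alternative, sketched as pure math with no particular gesture API assumed: instead of mapping the drag's 2D translation to an angle, convert the drag positions into the carousel's space and take the angular difference around its Y axis, so the sign no longer depends on which way the user is facing. All names here are placeholders:

import Foundation
import simd

// Angle of a point around the carousel's Y axis, measured in its local XZ plane.
func yaw(of point: SIMD3<Float>, around center: SIMD3<Float>) -> Float {
    let offset = point - center
    return atan2(offset.x, offset.z)
}

// Signed rotation delta between two drag samples, wrapped into (-π, π]
// so crossing the ±π boundary doesn't flip the rotation direction.
func rotationDelta(center: SIMD3<Float>,
                   previous: SIMD3<Float>,
                   current: SIMD3<Float>) -> Float {
    var delta = yaw(of: current, around: center) - yaw(of: previous, around: center)
    if delta > .pi { delta -= 2 * .pi }
    if delta <= -.pi { delta += 2 * .pi }
    return delta
}

The drag positions would come from the gesture's 3D location converted into the carousel's parent space; because the math only uses positions relative to the carousel, the user's body orientation drops out and no device-tracking threshold is needed.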
Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or higher I have some problems with my app. Sometimes when I look at buttons or at the tab view (the standard SwiftUI component that many apps use), the hover effect no longer triggers; I can tap the icon in the tab view, but it doesn't expand when I hover over it to show the button's description. It's strange because it doesn't happen all the time, and on visionOS 1.0 it never happened. My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier that we created. Its buttons are not tappable until I drag the navbar a little; this also never happened on visionOS 1.0 but always happens on visionOS 2.0+ (I've tested different versions in the simulator). Could these problems be related to the hand-tracking service we use from ARKit? Sometimes when we shut down the trackers, the app works as intended.
0
0
108
2d
Xcode 16 cannot load AR scenes from .rcproject files
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode no longer recognizes the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS. The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file How do I submit a ticket to Apple? (A possible migration path is sketched below.)
0
0
84
2d
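If the project is migrated away from .rcproject (for example by exporting the scenes from Reality Composer as .reality or .usdz files), a hedged sketch of loading such a file at runtime with RealityKit; the file name "Experience" and the bundle location are assumptions:

import RealityKit
import UIKit

func loadExportedScene(into arView: ARView) {
    // Assumes the Reality Composer scene was re-exported as "Experience.reality"
    // (a .usdz export works the same way) and added to the app bundle.
    guard let url = Bundle.main.url(forResource: "Experience", withExtension: "reality") else {
        print("Exported scene not found in bundle")
        return
    }
    do {
        let sceneEntity = try Entity.load(contentsOf: url)
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
        anchor.addChild(sceneEntity)
        arView.scene.addAnchor(anchor)
    } catch {
        print("Failed to load scene: \(error)")
    }
}

Note that behaviors authored in Reality Composer do not necessarily survive the export, so this is a workaround for loading geometry rather than a full replacement for .rcproject support.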
Info from CMSampleBuffer
Which information can I get from a CMSampleBuffer? Is there an option to lock the camera's close-up focus (accommodation)? Is there a way for the Object Capture module to record a video instead of a series of pictures? It would be fantastic to have answers to all of these questions so we can move forward on new implementations. (A sketch of locking these camera parameters follows below.)
0
0
100
3d
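On the focus/exposure side, and assuming you have access to the underlying AVCaptureDevice (Object Capture's built-in session may not expose one), a sketch of locking focus, exposure, and white balance with standard AVFoundation calls; the specific lens position, duration, and ISO values are arbitrary examples:

import AVFoundation

func lockCamera(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Freeze focus (accommodation) at a fixed lens position.
    if device.isLockingFocusWithCustomLensPositionSupported {
        device.setFocusModeLocked(lensPosition: 0.8, completionHandler: nil)
    }

    // Fix exposure duration and ISO.
    // Values must lie within the ranges reported by device.activeFormat.
    if device.isExposureModeSupported(.custom) {
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 120),
                                     iso: 200,
                                     completionHandler: nil)
    }

    // Lock white balance at its current gains.
    if device.isWhiteBalanceModeSupported(.locked) {
        device.whiteBalanceMode = .locked
    }
}

Per-frame metadata such as EXIF and camera intrinsics is generally read from the sample buffer's attachments rather than from the pixel buffer itself.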
Why does the planeNode in SceneKit flicker when using a class property instead of a local variable?
I am working on a SceneKit project where I use a CAShapeLayer as the content for an SCNMaterial's diffuse.contents to display a progress bar. Here's my initial code:

func setupProgressWithCAShapeLayer() {
    let progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async {
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            progressLayer.strokeEnd = progress // Update progress
        }
    }
}

// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    progressBarPlane = SCNPlane(width: 0.2, height: 0.2)
    setupProgressWithCAShapeLayer()
    let planeNode = SCNNode(geometry: progressBarPlane)
    planeNode.position = SCNVector3(x: 0, y: 0.2, z: 0)
    node.addChildNode(planeNode)
}

This works fine, and the progress bar updates smoothly. However, when I change the code to use a class property (self.progressLayer) instead of a local variable, the rendering starts flickering on the screen:

func setupProgressWithCAShapeLayer() {
    self.progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async { [weak self] in
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            self?.progressLayer?.strokeEnd = progress // Update progress
        }
    }
}

After this change, the progressBarPlane in SceneKit starts flickering while being rendered on the screen. My question: why does switching from a local variable (progressLayer) to a class property (self.progressLayer) cause this flickering in SceneKit rendering? (One thing to rule out is sketched below.)
0
0
61
3d
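One thing worth ruling out, sketched below: Core Animation adds an implicit animation to strokeEnd, and a layer used as material contents can end up rendering mid-animation frames that look like flicker. Disabling implicit actions around the update makes each change atomic. This is a guess at the cause, not a confirmed diagnosis:

import QuartzCore

func updateProgress(_ layer: CAShapeLayer, to value: CGFloat) {
    CATransaction.begin()
    // Suppress the implicit 0.25 s animation Core Animation attaches to strokeEnd.
    CATransaction.setDisableActions(true)
    layer.strokeEnd = value
    CATransaction.commit()
}

It is also worth confirming that the stored property is assigned before the material contents are set and that all updates stay on the main thread.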
ARSCNView ignores output of SCNTechnique (sometimes)
I am using SCNTechnique in combination with ARSCNView. The technique does some minor post-processing. I have written several filter variants for this post-processing, but with one of the filters/fragment shaders, SCNTechnique discards my output and just presents the plain camera feed on screen instead. This is clearly visible in the Metal pipeline using the GPU frame debugger. Let me stress that my setup works for 90% of my filters, but not this one, and I want to know why. iOS 18.1, iPhone 13 mini, Xcode 16.1. Encoders 0 and 1 are injected by the system. Render encoders 2 and 3 correspond to my SCNTechnique's render passes: one to manipulate pixel data (darken it in this case) and another to blit it back to the main texture. I know the separate buffer is not strictly needed for this particular operation, but it shouldn't matter. Note that the issue occurs in encoder 4 (not mine but ARKit's). In render encoder 4, scn_postprocess_AR_fragment handles my texture (#0, ending in f980) and another from the camera feed (texture 2). I know this pass is typically used for grain, because that's what it used to do before I disabled grain on ARSCNView (and the buffer still contains grain parameters). I have other post-processing filters that work just fine. By what magic is ARKit deciding to use texture 2 instead of my texture 0? Sure, I could keep digging into the minute differences between my shaders to find out which line of code affects how some ARKit shader down the line operates, but it's awfully opaque so far.
0
0
122
4d
Limited Window Manipulation in Mixed Immersive Style in VisionOS
In visionOS 2.1 and 2.2, I'm encountering a significant limitation when using .immersionStyle(selection: .constant(.mixed), in: .mixed), specifically in the mixed immersion style. Here's a breakdown of the behavior: in full immersion (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches. However, in mixed immersion, using the exact same layout "inside" a 3D model (which doesn't visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion. From a usability perspective, this restriction seems unnecessary, since the software should ideally allow similar interaction capabilities across both immersion styles. The expected behavior is to allow window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a house in which the user is placed during the immersion, which is why I refer to the user being "inside" the scene. Has anyone else experienced this or found a workaround?
1
0
158
1w
How to use `unproject(_:from:to:ontoPlane:)` of Content of RealityView
When using ARView in RealityKit, I can write let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any) to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) may provide the same functionality, but I don't know how to set the ontoPlane parameter. Can someone help me with some code snippets? (A sketch follows below.)
1
0
162
1w
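A hedged sketch, assuming that ontoPlane takes a float4x4 transform whose local XZ plane defines the surface being projected onto (so a horizontal plane at a given height can be expressed as a translation-only transform), and that .local/.scene are valid source and destination spaces here; the function name and the optional return are assumptions as well:

import SwiftUI
import RealityKit

// Assumed usage inside a RealityView where `content` (RealityViewCameraContent on iOS)
// is available, e.g. stored from the make closure and used from a gesture handler.
func worldPoint(for screenPoint: CGPoint,
                in content: RealityViewCameraContent,
                planeHeight: Float) -> SIMD3<Float>? {
    // A horizontal plane at `planeHeight`: identity rotation, translated in Y,
    // so the transform's XZ plane is the surface we project onto.
    let plane = Transform(translation: [0, planeHeight, 0]).matrix
    return content.unproject(screenPoint, from: .local, to: .scene, ontoPlane: plane)
}

For planes detected at runtime, the anchor's world transform could be passed as ontoPlane instead of the hand-built one above.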
ObjectCapture from ARKit
We are currently using Object Capture with ARKit, and we would like to fix the exposure time, white balance, and ISO. How can we do this? Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
0
0
130
1w
Error: captureSession(_:didEndWith:error:) nearly matches defaulted requirement in RoomCaptureSessionDelegate
Hello, I'm working with the RoomPlan API to capture and export 3D room models in an iOS app. My goal is to implement the RoomCaptureSessionDelegate protocol to handle the end of a room capture session. However, I'm encountering a warning in Xcode: "captureSession(_:didEndWith:error:) nearly matches defaulted requirement captureSession(_:didEndWith:error:) of protocol RoomCaptureSessionDelegate." I've verified that my delegate method signature matches the protocol, but the warning persists. I suspect this might be due to minor discrepancies in the parameter types or naming conventions required by the protocol (see the sketch just after this post). Below is my full RoomScanner.swift file:

import RoomPlan
import SwiftUI
import UIKit

func exportModelToFiles() {
    let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
    let documentPicker = UIDocumentPickerViewController(forExporting: [exportURL])
    documentPicker.modalPresentationStyle = .formSheet
    if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
       let rootViewController = windowScene.windows.first?.rootViewController {
        rootViewController.present(documentPicker, animated: true, completion: nil)
    }
}

class RoomScanner: ObservableObject {
    var roomCaptureSession: RoomCaptureSession?
    private let sessionConfig = RoomCaptureSession.Configuration()
    private var capturedRoom: CapturedRoom?

    func startSession() {
        roomCaptureSession = RoomCaptureSession()
        roomCaptureSession?.delegate = self
        roomCaptureSession?.run(configuration: sessionConfig)
    }

    func stopSession() {
        roomCaptureSession?.stop()
        guard let room = capturedRoom else {
            print("No room data available for export.")
            return
        }
        let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
        do {
            try room.export(to: exportURL)
            print("3D floor plan exported to \(exportURL)")
        } catch {
            print("Error exporting room model: \(error)")
        }
    }
}

// MARK: - RoomCaptureSessionDelegate
extension RoomScanner: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Handle real-time updates if necessary
    }

    func captureSession(_ session: RoomCaptureSession, didEndWith capturedRoom: CapturedRoom, error: Error?) {
        if let error = error {
            print("Capture session ended with error: \(error.localizedDescription)")
        } else {
            self.capturedRoom = capturedRoom
        }
    }
}
1
0
190
1w
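For what it's worth, a "nearly matches" warning usually means a parameter type differs from the protocol requirement; RoomCaptureSessionDelegate's end-of-session callback takes CapturedRoomData rather than CapturedRoom, and the final CapturedRoom is then built with RoomBuilder. A sketch of that shape, hedged in case the mismatch lies elsewhere in this particular project:

import RoomPlan

extension RoomScanner: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData,
                        error: Error?) {
        if let error {
            print("Capture session ended with error: \(error.localizedDescription)")
            return
        }
        Task {
            do {
                // Build the final CapturedRoom from the raw capture data.
                let roomBuilder = RoomBuilder(options: [.beautifyObjects])
                let room = try await roomBuilder.capturedRoom(from: data)
                await MainActor.run { self.capturedRoom = room }
            } catch {
                print("Room building failed: \(error)")
            }
        }
    }
}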
Distance calculation: how do I implement this Unity distance-calculation code using SwiftUI on visionOS?
I use this piece of code in Unity to measure how far my model has entered another model. I have set collision markers at the tip and end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS, so I plan to use SwiftUI for native development. In Reality Composer Pro I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body, but I cannot manually adjust or visualize its shape and size. I want to achieve similar functionality in SwiftUI: to calculate and display the distance that my model A (like a needle or ruler) penetrates into another model or into a physical object's interior. Is similar functionality available, or are there other coding approaches to achieve this? (A RealityKit sketch follows below.)

void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);
    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
1
0
254
1w
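A rough RealityKit equivalent of the Unity snippet above, assuming the organ entity has a CollisionComponent and that the probe tip/base positions are expressed in the same reference space as the scene; entity and parameter names are placeholders:

import RealityKit
import simd

// Mirrors the Unity logic: cast from the probe base toward the tip and treat
// the part of the probe beyond the first surface hit as "inside".
func lengthInsideOrgan(scene: RealityKit.Scene,
                       probeBase: SIMD3<Float>,
                       probeTip: SIMD3<Float>,
                       organEntity: Entity) -> Float {
    let probeLength = simd_distance(probeBase, probeTip)
    let hits = scene.raycast(from: probeBase,
                             to: probeTip,
                             query: .all,
                             mask: .all,
                             relativeTo: nil)
    // Only count hits on the organ (or filter with a dedicated CollisionGroup).
    guard let firstHit = hits.first(where: { $0.entity == organEntity }) else {
        return 0
    }
    return probeLength - firstHit.distance
}

Feeding real probe positions would be a separate problem (for example ARKit's object tracking on visionOS 2); collision shapes themselves can be generated at runtime, e.g. via ShapeResource, rather than edited in Reality Composer Pro.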
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .added, and subsequent detections of the same image are marked as .updated. However, after toggling the immersiveSpace, the same image was detected with the status .added again. After updating to visionOS 2.1, the first detection status remains .added and subsequent detections of the same image remain .updated, even after toggling the immersiveSpace. Could this be due to a change in the processing flow?
2
0
176
1w
How To Create Dual Screen of AR using RealityView and SwiftUI iOS 18
I have this code to make an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new RealityView in iOS 18, but it does not act as AR; instead it behaves like static VR on a camera background: as I move the iPhone, the cube moves with it, which shouldn't happen if it were anchored to a plane or to world coordinates.

import SwiftUI
import RealityKit

struct ContentView : View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis: [0,1,1])
        }
    }
}

The thing is, if I remove .camera like this

let anchor1 = AnchorEntity()
let anchor2 = AnchorEntity()

it does work as AR anchored to world coordinates, but then the content appears only in the left view, not in both. (A plane-anchored variant is sketched below.) Meanwhile, this was easy before RealityView and SwiftUI by sharing the scene and session between views, as in this ARSCNView example:

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    // Create any two ARSCNViews in the storyboard and connect the outlets
    // (don't mind the dimensions).
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        sceneView.delegate = self
        sceneView.session.delegate = self

        // Create a SceneKit box
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
        let item = SCNNode(geometry: box)
        item.geometry?.materials.first?.diffuse.contents = UIColor.green
        item.position = SCNVector3(0.0, 0.0, -1.0)
        item.orientation = SCNVector4(0, 1, 1, .pi/4.0)
        sceneView.scene.rootNode.addChildNode(item)
    }

    override func viewDidLayoutSubviews() { // To do: add the 4 buttons
        // Stop the screen from dimming or locking while the app is running.
        UIApplication.shared.isIdleTimerDisabled = true

        let screen: CGRect = UIScreen.main.bounds
        let topPadding: CGFloat = self.view.safeAreaInsets.top
        let bottomPadding: CGFloat = self.view.safeAreaInsets.bottom
        let leftPadding: CGFloat = self.view.safeAreaInsets.left
        let rightPadding: CGFloat = self.view.safeAreaInsets.right
        let safeArea: CGRect = CGRect(x: leftPadding,
                                      y: topPadding,
                                      width: screen.size.width - leftPadding - rightPadding,
                                      height: screen.size.height - topPadding - bottomPadding)

        DispatchQueue.main.async {
            if self.sceneView != nil {
                self.sceneView.frame = CGRect(x: safeArea.size.width * 0 + safeArea.origin.x,
                                              y: safeArea.size.height * 0 + safeArea.origin.y,
                                              width: safeArea.size.width * 0.5,
                                              height: safeArea.size.height * 1)
            }
            if self.sceneView2 != nil {
                self.sceneView2.frame = CGRect(x: safeArea.size.width * 0.5 + safeArea.origin.x,
                                               y: safeArea.size.height * 0 + safeArea.origin.y,
                                               width: safeArea.size.width * 0.5,
                                               height: safeArea.size.height * 1)
            }
        }
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
        sceneView2.scene = sceneView.scene
        sceneView2.session = sceneView.session
    }
}

And here is the video for it
1
0
150
1w
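As a point of comparison, a sketch of the same scene with the box anchored to a detected horizontal plane instead of the camera, which is what keeps it fixed in the world as the phone moves; each view builds its own entity tree here, since whether a single AnchorEntity shared between two RealityViews renders in both is exactly the open question in this thread:

import SwiftUI
import RealityKit

struct StereoPlaneView: View {
    var body: some View {
        HStack(spacing: 0) {
            // Each eye gets its own RealityView and its own entities.
            EyeView()
            EyeView()
        }
        .background(.black)
    }
}

struct EyeView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            // Anchor to a horizontal plane so the box stays put as the phone moves.
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .any,
                                             minimumBounds: [0.2, 0.2]))
            let box = ModelEntity(mesh: .generateBox(size: 0.25),
                                  materials: [SimpleMaterial()])
            box.orientation = simd_quatf(angle: .pi / 4,
                                         axis: simd_normalize(SIMD3<Float>(0, 1, 1)))
            anchor.addChild(box)
            content.add(anchor)
        }
    }
}

Each view may resolve its own plane and pose, so a true stereo rig would eventually need one tracked pose driving both views; this only demonstrates that world/plane anchoring, rather than .camera, is what stops the box from following the phone.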
ARVR RealityView shows the entity nearer to the camera in the left view than in the right view
I have this code to make an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new RealityView in iOS 18, but for some reason the left side shows the entity (box) nearer to the camera than the right side, which makes the two views not identical. I wonder whether this is a bug that needs to be fixed, or something else? Thanks. Here is the code

import SwiftUI
import RealityKit

struct ContentView : View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis: [0,1,1])
        }
    }
}

And here is the view
0
0
152
2w
RealityView to show two screens of AR in iOS 18/macOS 15 using SwiftUI
I have an issue using RealityView to show two side-by-side AR views: I did manage to get this working as non-AR, but now my code is not working. It also works using a storyboard and SceneKit, so why doesn't it work with RealityView?

import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        HStack(spacing: 0) {
            MainView()
            MainView()
        }
        .background(.black)
    }
}

struct MainView : View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            let item = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.camera = .spatialTracking
            anchor.addChild(item)
            anchor.position = [0.0, 0.0, -1.0]
            anchor.orientation = .init(angle: .pi/4, axis: [0,1,1])
            // Add the anchor to the scene
            content.add(anchor)
        }
    }
}
2
0
141
2w
How to insert modified video frames into the system camera to achieve AI erase effect?
With the Enterprise APIs we can obtain the video frames captured by the Vision Pro cameras, and we then process those frames to remove a certain object from them. However, we have not found a way to insert the processed video frames back into the feed that the system camera presents. So I would like to ask: is there an API that can insert processed video frames into the original data and present them to the user? The desired effect is similar to turning the dial (Digital Crown) on the right side of Vision Pro, where the physical world and digital space blend seamlessly as you rotate it. Is there a related API that can solve this problem?

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera feed.

System: visionOS 2.0
APIs used: Enterprise APIs, main camera access permission
1
0
216
2w