Can AR projects run on a visionOS simulator?
ARKit
Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.
I have configured ARKit and a PlaneDetectionProvider, but after running the code in the simulator, the PlaneEntity is not displayed correctly.
import Foundation
import ARKit
import RealityKit
class PlaneViewModel: ObservableObject {
    var session = ARKitSession()
    let planeData = PlaneDetectionProvider(alignments: [.horizontal])
    var entityMap: [UUID: Entity] = [:]
    var rootEntity = Entity()

    func start() async {
        do {
            if PlaneDetectionProvider.isSupported {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    if update.anchor.classification == .window { continue }
                    switch update.event {
                    case .added, .updated:
                        updatePlane(update.anchor)
                    case .removed:
                        removePlane(update.anchor)
                    }
                }
            }
        } catch {
            print("ARKit session error \(error)")
        }
    }

    func updatePlane(_ anchor: PlaneAnchor) {
        if entityMap[anchor.id] == nil {
            // Add a new entity to represent this plane.
            let entity = ModelEntity(
                mesh: .generateText(anchor.classification.description)
            )
            entityMap[anchor.id] = entity
            rootEntity.addChild(entity)
        }
        entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
    }

    func removePlane(_ anchor: PlaneAnchor) {
        entityMap[anchor.id]?.removeFromParent()
        entityMap.removeValue(forKey: anchor.id)
    }
}
// @StateObject must be declared as a property of the containing View type, not inside body.
@StateObject private var planeViewModel = PlaneViewModel()

var body: some View {
    RealityView { content in
        content.add(planeViewModel.rootEntity)
    }
    .task {
        await planeViewModel.start()
    }
}
Greetings,
I've been using RPScreenRecorder to record the screen, getting its buffer and copying it into a different one so I can use it. Recently, since the last big iOS update, it crashes every time I run it. It's probably copying the buffer while the first one is being overwritten, which is why I'm using a semaphore, but even with that it still crashes, and I don't know why it doesn't work.
I tried making a CIImage from the buffer and then copying it into a new buffer, but it keeps crashing. I added some nil checks, but nothing works. I even tried creating a new empty buffer and using that, and it works properly, so the problem is combining the RPScreenRecorder buffer with the copy. The worst part is that before that update everything worked properly. Do you know any way I could make it work?
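For comparison, a minimal sketch (the handler body and names are mine, and it assumes the usual biplanar screen pixel format) of deep-copying each video pixel buffer inside the capture handler before anything else uses it, so the copy is unaffected when ReplayKit reuses the original:
import ReplayKit
import CoreMedia
import CoreVideo
import Foundation

func startCaptureWithCopiedBuffers() {
    RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, bufferType, error in
        guard error == nil, bufferType == .video,
              let source = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        CVPixelBufferLockBaseAddress(source, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(source, .readOnly) }

        // Create a destination buffer with the same size and pixel format.
        var destination: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(source),
                            CVPixelBufferGetHeight(source),
                            CVPixelBufferGetPixelFormatType(source),
                            nil,
                            &destination)
        guard let copy = destination else { return }

        CVPixelBufferLockBaseAddress(copy, [])
        // Screen buffers are typically biplanar YCbCr, so copy plane by plane,
        // row by row (bytes-per-row can differ between the two buffers).
        for plane in 0..<CVPixelBufferGetPlaneCount(source) {
            guard let src = CVPixelBufferGetBaseAddressOfPlane(source, plane),
                  let dst = CVPixelBufferGetBaseAddressOfPlane(copy, plane) else { continue }
            let srcStride = CVPixelBufferGetBytesPerRowOfPlane(source, plane)
            let dstStride = CVPixelBufferGetBytesPerRowOfPlane(copy, plane)
            let rowBytes = min(srcStride, dstStride)
            for row in 0..<CVPixelBufferGetHeightOfPlane(source, plane) {
                memcpy(dst + row * dstStride, src + row * srcStride, rowBytes)
            }
        }
        CVPixelBufferUnlockBaseAddress(copy, [])

        // Hand `copy` (not `source`) to the rest of the pipeline here.
    }, completionHandler: { error in
        if let error { print("startCapture failed: \(error)") }
    })
}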
I have an app that shows a screen using ObjectCaptureView, and each time the view appears and disappears the memory increases by around 400-500 MB.
After checking the memory graph, I found that it was related to the SwiftUI view, which creates the ARKit view under the hood.
To be sure that the ObjectCaptureView was what had the memory leak, I commented out only the line in the view that creates the ObjectCaptureView, keeping the rest of the logic to handle the session state, feedback, etc.
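For reference, a hedged sketch of the kind of view being described (names are illustrative); the ObjectCaptureView line is the one that was commented out to isolate the growth:
import SwiftUI
import RealityKit

struct CaptureScreen: View {
    let session: ObjectCaptureSession

    var body: some View {
        ZStack {
            ObjectCaptureView(session: session) // commenting this out removed the memory growth
            // Overlay for session state / feedback would go here.
        }
    }
}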
I want to convert this UIKit code to SwiftUI, but I have some problems and it doesn't work. Please help me.
/*
See LICENSE folder for this sample’s licensing information.

Abstract:
The sample app's main view controller.
*/
import UIKit
import RealityKit
import ARKit
import Combine
class ViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var arView: ARView!

    var character: BodyTrackedEntity?
    let characterOffset: SIMD3<Float> = [-1.0, 0, 0]
    let characterAnchor = AnchorEntity()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        arView.session.delegate = self

        guard ARBodyTrackingConfiguration.isSupported else {
            fatalError("This feature is only supported on devices with an A12 chip")
        }

        // Run a body-tracking configuration.
        let configuration = ARBodyTrackingConfiguration()
        arView.session.run(configuration)

        arView.scene.addAnchor(characterAnchor)

        var cancellable: AnyCancellable? = nil
        cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
            receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Error: Unable to load model: \(error.localizedDescription)")
                }
                cancellable?.cancel()
            }, receiveValue: { (character: Entity) in
                if let character = character as? BodyTrackedEntity {
                    character.scale = [1.0, 1.0, 1.0]
                    self.character = character
                    cancellable?.cancel()
                } else {
                    print("Error: Unable to load model as BodyTrackedEntity")
                }
            })
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }

            // Update the position of the character anchor.
            let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.position = bodyPosition + characterOffset
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation

            // Attach the character to its anchor as soon as
            // 1. the body anchor was detected and
            // 2. the character was loaded.
            if let character = character, character.parent == nil {
                characterAnchor.addChild(character)
            }
        }
    }
}
Here's the code I wrote in SwiftUI
import SwiftUI
import RealityKit
import ARKit
import Combine
struct ContentView : View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    var character: BodyTrackedEntity?
    let characterOffset: SIMD3<Float> = [-1.0, 0, 0]
    let characterAnchor = AnchorEntity()

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        guard ARBodyTrackingConfiguration.isSupported else {
            fatalError("This feature is only supported on devices with an A12 chip")
        }

        let configuration = ARBodyTrackingConfiguration()
        arView.session.run(configuration)

        arView.scene.addAnchor(characterAnchor)

        var cancellable: AnyCancellable? = nil
        cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
            receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Error: Unable to load model: \(error.localizedDescription)")
                }
                cancellable?.cancel()
            }, receiveValue: { (character: Entity) in
                if let character = character as? BodyTrackedEntity {
                    character.scale = [1.0, 1.0, 1.0]
                    self.character = character
                    cancellable?.cancel()
                } else {
                    print("Error: Unable to load model as BodyTrackedEntity")
                }
            })

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }

            let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.position = bodyPosition + characterOffset
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation

            if let character = character, character.parent == nil {
                // 1. the body anchor was detected and
                // 2. the character was loaded.
                characterAnchor.addChild(character)
            }
        }
    }
}
#Preview {
ContentView()
}
Hi -
I've searched all over the docs and might simply be missing something very big. Is raycasting available on the front-facing TrueDepth camera, like on the iPad Pro?
I'm currently working on an application that uses ARFaceTrackingConfiguration, and a simple raycast from the screen center is not yielding results.
The same code in a world-tracking configuration using the rear camera produces results.
My understanding, given the examples around bitmojis and face tracking, was that the front camera would have essentially the same depth data as the rear, just with less total distance available.
Thanks for setting me straight! This is a very big deal for this particular project and I'm fearful I missed something in my pre-planning and investigation.
Kane
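For reference, a sketch of the kind of center-screen raycast being described (assuming a RealityKit ARView; names are mine):
import ARKit
import RealityKit

func raycastFromCenter(of arView: ARView) -> ARRaycastResult? {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    // Raycast from the screen center against estimated planes of any alignment.
    guard let query = arView.makeRaycastQuery(from: center,
                                              allowing: .estimatedPlane,
                                              alignment: .any) else { return nil }
    return arView.session.raycast(query).first
}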
Hi. I want to make an iOS app that, when using the camera in AR, can show annotations for places around me (something like ARGeoAnchor), but I don't have any idea how. Can anyone give me some keywords? I can use MapKit to search, but I don't know how to map the results into AR.
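As a starting point, a hedged sketch (assuming the session runs ARGeoTrackingConfiguration and the coordinate comes from a MapKit search; names are mine):
import ARKit
import MapKit

func addAnchor(for mapItem: MKMapItem, to session: ARSession) {
    // Geo tracking is only available in supported regions and on supported devices.
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available, error == nil else {
            print("Geo tracking unavailable here:", error?.localizedDescription ?? "")
            return
        }
        // Anchor the MapKit result's coordinate in the AR session; the app then
        // renders its own annotation entity/node at that anchor.
        let anchor = ARGeoAnchor(coordinate: mapItem.placemark.coordinate)
        session.add(anchor: anchor)
    }
}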
Greetings!
I have used the Apple ARKit documentation to create a simple ARKit application that uses SceneKit (I tried Metal too).
I am currently unsure how to use smoothedSceneDepth (or sceneDepth) to acquire the depth data from the depth map delivered with each frame.
Is there any particular method or way that I can access this data for displaying the depth?
I would be grateful for any inputs or suggestions.
Thanks in advance
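A minimal sketch (names are mine; it assumes the device supports scene depth and that this object is set as the session delegate) of turning on smoothedSceneDepth and reading the depth map each frame:
import ARKit
import CoreVideo

class DepthReader: NSObject, ARSessionDelegate {
    func run(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
            configuration.frameSemantics.insert(.smoothedSceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // depthMap is a CVPixelBuffer of Float32 depths in meters.
        guard let sceneDepth = frame.smoothedSceneDepth ?? frame.sceneDepth else { return }
        let depthMap = sceneDepth.depthMap
        print("depth map:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))
    }
}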
Hello everyone
I'm using the plane detection feature in ARKit.
I get ARPlaneAnchors back from ARSCNViewDelegate (func renderer(SCNSceneRenderer, didAdd: SCNNode, for: ARAnchor), func renderer(SCNSceneRenderer, didUpdate: SCNNode, for: ARAnchor), and func renderer(SCNSceneRenderer, didRemove: SCNNode, for: ARAnchor)).
Occasionally, ARPlaneAnchors are cleared by ARKit calling func renderer(SCNSceneRenderer, didRemove: SCNNode, for: ARAnchor).
I think that after deleting an ARPlaneAnchor, ARKit will recreate an ARPlaneAnchor in that location.
So is there any relationship between the deleted ARPlaneAnchor and the newly created ARPlaneAnchor?
(Does the identifier, name, or other information reflect that relationship?)
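One way to check empirically is to log the anchor identifiers in the delegate callbacks and see whether a removed identifier ever reappears; a minimal sketch (the class name is mine):
import ARKit
import SceneKit

class PlaneAnchorLogger: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("added plane", plane.identifier)
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("removed plane", plane.identifier)
    }
}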
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene
It says that a fourth-generation iPad Pro running iPadOS 13.4 or later works because of the LiDAR scanner. If the iPhone 13 also has LiDAR, would it work too?
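A minimal sketch of the runtime check, rather than going by device model: the sample needs scene reconstruction, which in turn requires the LiDAR scanner:
import ARKit
import RealityKit

func runSceneReconstruction(on arView: ARView) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        print("Scene reconstruction (LiDAR) is not available on this device.")
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh
    arView.session.run(configuration)
}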
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showStatistics // Error: triggers the Metal assertion below
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5775: failed assertion `Draw Errors Validation
Vertex Function(vsSdfFont): the offset into the buffer viewConstants that is bound at buffer index 4 must be a multiple of 256 but was set to 61840.
'
Hi all.
I can get a disparity/depth data map from AVDepthData.depthDataMap and use it directly to generate a depth image. I found that in some situations, objects in the depth image cannot be clearly distinguished.
When using disparity data, objects closer than 1 meter can't be clearly distinguished.
When using depth data, objects farther than 1 meter can't be clearly distinguished.
Does anyone know why this happens and how to fix it?
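For what it's worth, a hedged sketch of one mitigation (names and the 0.3-1.0 m band are illustrative): convert to metric depth with AVDepthData.converting(toDepthDataType:) and normalize only over the distance band of interest, rather than over the full value range, so objects in that band use the full gray range:
import AVFoundation
import CoreVideo

func grayValues(from depthData: AVDepthData, near: Float = 0.3, far: Float = 1.0) -> [Float] {
    // Work in metric depth regardless of whether the source delivered disparity.
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = depth.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    var result = [Float]()
    result.reserveCapacity(width * height)
    for y in 0..<height {
        let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            // Map [near, far] meters to [0, 1]; values outside the band clamp.
            let normalized = (row[x] - near) / (far - near)
            result.append(min(max(normalized, 0), 1))
        }
    }
    return result
}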
We scan the room using the RoomPlan API, and after the scan, we obtain objects with a white color along with shadows and shading. However, upon updating the color of these objects, we experience a loss of shadows and shading.
[Image: RoomPlan scan]
[Image: After update]
After adding gestures to an EntityModel, when the EntityModel needs to be removed, the instance in memory cannot be cleared unless the method uiView.gestureRecognizers?.removeAll() is executed. However, executing this method affects the gestures of other EntityModels in the ARView as well. Does anyone have a better method to achieve this?
Example Code:
struct ContentView : View {
    @State private var isRemoveEntityModel = false

    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewContainer(isRemoveEntityModel: $isRemoveEntityModel).edgesIgnoringSafeArea(.all)
            Button {
                isRemoveEntityModel = true
            } label: {
                Image(systemName: "trash")
                    .font(.system(size: 35))
                    .foregroundStyle(.orange)
            }
        }
    }
}
ARViewContainer:
struct ARViewContainer: UIViewRepresentable {
    @Binding var isRemoveEntityModel: Bool
    let arView = ARView(frame: .zero)

    func makeUIView(context: Context) -> ARView {
        let model = CustomEntityModel()
        model.transform.translation.y = 0.05
        model.generateCollisionShapes(recursive: true)
        arView.installGestures(.all, for: model) // here --> After executing this line of code, it allows the deletion of a custom EntityModel in ARView.scene, but the deinit {} method of the custom EntityModel is not executed.

        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        if isRemoveEntityModel {
            let customEntityModel = uiView.scene.findEntity(named: "Box_EntityModel")
            // ---> After executing this line of code, ARView.scene can correctly delete the CustomEntityModel, and the deinit {} method of CustomEntityModel can also be executed properly. However, other CustomEntityModels in ARView.scene lose their Gestures as well.
            uiView.gestureRecognizers?.removeAll()
            customEntityModel?.removeFromParent()
        }
    }
}
CustomEntityModel:
class CustomEntityModel: Entity, HasModel, HasAnchoring, HasCollision {
    required init() {
        super.init()
        let mesh = MeshResource.generateBox(size: 0.1)
        let material = SimpleMaterial(color: .gray, isMetallic: true)
        self.model = ModelComponent(mesh: mesh, materials: [material])
        self.name = "Box_EntityModel"
    }

    deinit {
        print("CustomEntityModel_remove")
    }
}
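A hedged sketch of one alternative (the Coordinator and names are mine): keep the recognizers that installGestures(_:for:) returns, keyed per entity, and remove only those when that entity is deleted, so other models keep their gestures:
import SwiftUI
import RealityKit
import UIKit

struct ARViewContainer: UIViewRepresentable {
    @Binding var isRemoveEntityModel: Bool

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator {
        // Entity name -> the gesture recognizers installed for that entity.
        var recognizersByEntity: [String: [UIGestureRecognizer]] = [:]
    }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let model = CustomEntityModel()
        model.transform.translation.y = 0.05
        model.generateCollisionShapes(recursive: true)

        // Remember exactly which recognizers belong to this model.
        let installed = arView.installGestures(.all, for: model)
        context.coordinator.recognizersByEntity[model.name] = installed.compactMap { $0 as? UIGestureRecognizer }

        let anchor = AnchorEntity(.plane(.horizontal, classification: .any,
                                         minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        guard isRemoveEntityModel,
              let model = uiView.scene.findEntity(named: "Box_EntityModel") else { return }

        // Remove only this entity's recognizers; other models keep theirs.
        context.coordinator.recognizersByEntity[model.name]?.forEach {
            uiView.removeGestureRecognizer($0)
        }
        context.coordinator.recognizersByEntity[model.name] = nil
        model.removeFromParent()
    }
}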
Hello Guys,
I am currently stuck on understanding how I can place a 3D entity from a USDZ file or a Reality Composer Pro project in the middle of a table in a mixed ImmersiveSpace. When I use
AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: SIMD2<Float>(0.2, 0.2)))
it just places it somewhere on the table, not in the middle and not in the orientation of the table, so the edges are not aligned.
Has anybody got a clue on how to do this? I would be very thankful for a response.
Thanks
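A hedged sketch of one approach, assuming the PlaneAnchor geometry/extent API from visionOS plane detection (names are mine): read the table's PlaneAnchor directly and use the extent transform, which gives the center and edge alignment of the detected plane:
import ARKit
import RealityKit

func placeOnTableCenter(_ entity: Entity, in rootEntity: Entity) async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        let anchor = update.anchor
        guard case .added = update.event, anchor.classification == .table else { continue }

        // originFromAnchorTransform places the anchor in world space;
        // anchorFromExtentTransform moves to the center of the detected extent
        // and aligns with its edges.
        let worldFromExtent = anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
        entity.setTransformMatrix(worldFromExtent, relativeTo: nil)
        rootEntity.addChild(entity)
        break
    }
}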
Is there a way to capture video from the front-facing camera (i.e. the selfie camera) on iPhone while using face anchors and left/right eye transforms for AR?
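A minimal sketch of the building block this would need, assuming an ARSession running ARFaceTrackingConfiguration with this object as its delegate (writer setup omitted; names are mine): each ARFrame exposes the captured front-camera image alongside the face anchors, and those frames could be fed to an AVAssetWriter:
import ARKit
import CoreVideo

class FaceCaptureRecorder: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The front-camera image ARKit is already capturing for face tracking.
        let pixelBuffer: CVPixelBuffer = frame.capturedImage

        // Face anchors (including eye transforms) arrive on the same frame.
        if let face = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first {
            _ = face.leftEyeTransform
            _ = face.rightEyeTransform
        }

        // Append pixelBuffer (with frame.timestamp) to an AVAssetWriterInput here
        // to produce a video file; the writer setup is omitted in this sketch.
        _ = pixelBuffer
    }
}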
Hoping to get support for high-resolution video in ARKit.
I'm using this API right now:
NSArray<ARVideoFormat *> *supportedVideoFormats = [ARWorldTrackingConfiguration supportedVideoFormats];
Question:
If a high-resolution ARVideoFormat is not included in supportedVideoFormats, is it still supported?
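For reference, a hedged sketch in Swift (assuming iOS 16+ for the 4K property): my understanding is that only formats listed in supportedVideoFormats can be assigned to the configuration, and that the recommended 4K format is exposed separately where available:
import ARKit

func configureHighResolution() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if #available(iOS 16.0, *),
       let fourK = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        // Recommended 4K format, where the device and OS support it.
        configuration.videoFormat = fourK
    } else if let format = ARWorldTrackingConfiguration.supportedVideoFormats.first {
        // Otherwise pick from the supported list.
        configuration.videoFormat = format
    }
    return configuration
}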
Hello everyone,
I'm working on an AR app. In it I load a 3D model of a human arm and place it on a QR code (ARImageAnchor). The user can then move the model and change its texture.
Is it possible to draw on this 3D model with my finger?
I have seen videos where models react to a touch. But I don't just want to touch the model; I want to create a small sphere exactly at the point where I touch the model, for example.
I would like to be able to draw a line on the arm. My model has a CollisionShape.
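A hedged sketch of the touch-to-sphere part (assuming a RealityKit ARView and that the arm model's collision shape is set up; names are mine):
import RealityKit
import UIKit

func addSphere(at point: CGPoint, in arView: ARView) {
    // Nearest collision hit under the touch, in world coordinates.
    guard let hit = arView.hitTest(point, query: .nearest, mask: .all).first else { return }

    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.003),
                             materials: [SimpleMaterial(color: .red, isMetallic: false)])
    // Parent the sphere to the touched entity so it follows the arm model,
    // then place it at the hit location (hit.position is in world space).
    hit.entity.addChild(sphere)
    sphere.setPosition(hit.position, relativeTo: nil)
}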
Error:
RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple Documentation Explanation:
An error that indicates when the scene size grows past the framework’s limitations.
Issue:
This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done. It shows up even if the room size is small. It occurs immediately after I start the RoomCaptureSession following relocalization of the previous AR session (in a world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug/solve it.
Does anyone have any idea on how to approach to this issue?