Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation


Is it possible to implement a Billboard System in a volumetric window?
When I call queryDeviceAnchor in my Billboard system (similar to the Diorama sample app), I get transform updates, but I'm unsure how to process them. Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a full space, so I would expect this not to work at all. But if that is the case, why am I getting deviceAnchor values in this situation?
3 replies · 0 boosts · 768 views · Feb ’24
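For reference, a minimal sketch of a billboard system along these lines, assuming it runs in an immersive space where ARKit data is provided (per the documentation quoted above, a plain volumetric window may not be supported); the component and system names are hypothetical:

import ARKit
import RealityKit
import QuartzCore

// Register at app start with BillboardComponent.registerComponent() and BillboardSystem.registerSystem().
struct BillboardComponent: Component {}

struct BillboardSystem: System {
    static let query = EntityQuery(where: .has(BillboardComponent.self))
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    init(scene: RealityKit.Scene) {
        Task { try? await session.run([worldTracking]) }
    }

    func update(context: SceneUpdateContext) {
        // Query the headset pose for this frame and turn tagged entities toward it.
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
        let devicePosition = Transform(matrix: device.originFromAnchorTransform).translation
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            // look(at:) points the entity's -Z axis at the target; flip the axis if your content faces +Z.
            entity.look(at: devicePosition, from: entity.position(relativeTo: nil), relativeTo: nil)
        }
    }
}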
Limitations of visionOS
Hi, what are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.

Let's say you have some USDZ files stored in a cloud service; there are so many of them that the app would be huge if you bundled them in assets, so you want to fetch the one you are interested in and show it while the app is running. Is it possible to load USDZ files at runtime from the network?

Is there a limit to how many objects can be visible at once? Let's say I am in an open space with no walls and I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, or 1000?

Is there a way to save the anchor point of an object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.

How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would they behave like real objects?

Is it possible to color a plane? Let's say there is a wall and it's black, and I want this wall to be orange. Is it possible?
2 replies · 0 boosts · 783 views · Feb ’24
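On the first question (runtime loading), a minimal sketch assuming the USDZ is reachable at some URL (hypothetical) and is downloaded to a local file before loading:

import Foundation
import RealityKit

func loadRemoteModel(from remoteURL: URL) async throws -> Entity {
    // Entity loading needs a file URL, so download to a temporary location first.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)
    let localURL = FileManager.default.temporaryDirectory.appendingPathComponent("model.usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)
    return try await Entity(contentsOf: localURL)
}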
ARKit for BIM
Hi, please forgive me if I am asking a basic question, but after my R&D I couldn't see how to build a solution where a user scans a QR code hanging on a specific wall at a specific, fixed position, so that when workers scan the QR code from their iOS device they can see all the wiring, pipelines, etc. It would be really helpful if someone could let me know whether this is possible with ARKit and how.
1 reply · 0 boosts · 480 views · Feb ’24
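One possible approach (a rough sketch, not a confirmed solution): bundle the exact QR poster artwork as an ARReferenceImage and use ARKit image detection, so the recognized image anchor gives a known, fixed transform to hang the wiring/pipework model on. The asset group name and overlay helper below are hypothetical:

import ARKit
import RealityKit
import UIKit

final class MarkerViewController: UIViewController, ARSessionDelegate {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
        arView.session.delegate = self

        let config = ARWorldTrackingConfiguration()
        config.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "BIMMarkers", bundle: nil) ?? []
        arView.session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // The marker hangs at a known spot on the wall, so content placed relative
            // to this anchor lines up with the building.
            let anchorEntity = AnchorEntity(anchor: imageAnchor)
            // anchorEntity.addChild(makeBIMOverlay())  // hypothetical helper returning your BIM model
            arView.scene.addAnchor(anchorEntity)
        }
    }
}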
How to use SceneReconstruction with persisted WorldAnchors and AnchorEntities
Hi, I'm prototyping a visionOS app for which I'm trying to create the following behavior in a mixed immersive space:

Users pinch and drag to position a model entity in the real world, starting from the raycast of the pinch, meaning that the initial position should be on a MeshAnchor from scene reconstruction (I got that working, even though it's less precise than I expected).

Once the model entity is positioned, I want to anchor it to the world so that it always stays there no matter what; from what I understand I need to create a WorldAnchor and add it to a WorldTrackingProvider for that.

After positioning the model entity, users should be able to pinch and drag the entity to change its position and have that persisted from then onwards.

It's not clear to me what the relationship between AnchorEntity(world:) and WorldAnchor is (it looks like AnchorEntity(anchor:) isn't available in visionOS). What is the recommended way to keep these together? Afterwards, what is the recommended way to convert coordinate spaces between the repositioned scene coordinate space and the anchor entity hierarchy's coordinate space? I tried a DragGesture on the model entity and converting the translation to the scene; that works only when the scene origin hasn't changed. After it has changed, the translation uses the wrong coordinate space. Thanks for the help! Geert
2 replies · 0 boosts · 494 views · Feb ’24
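On the WorldAnchor part, a minimal sketch assuming an already-running WorldTrackingProvider: after the drag ends, capture the entity's world transform as a WorldAnchor, add it to the provider, and store the anchor's id with your own data so you can match it in anchorUpdates on the next launch:

import ARKit
import RealityKit

func persist(_ entity: Entity, in worldTracking: WorldTrackingProvider) async throws -> WorldAnchor {
    // World-space transform of the entity at the moment of placement.
    let transform = entity.transformMatrix(relativeTo: nil)
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor // persist anchor.id to re-associate the entity after relaunch
}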
Importing USDZ into Reality Composer Pro doesn't include textures
I'm trying to import the USDZ file of a model with multiple textures attached to each part of the model. When I preview the file by double-clicking on the USDZ, it views fine. However, when I import it into Reality Composer Pro, it only shows the pink striped model. I also get the message - "Multiple root level objects exist for HU_EVO_SPY-8.usdc". There are so many components of the model that binding each texture to each component will be very difficult to do manually. How can I fix the file such that when I import to Reality Composer Pro, textures are attached to the model?
1 reply · 1 boost · 1.2k views · Feb ’24
SceneReconstruction alongside WorldTracking silently fails?
Hello, I've noticed that when I have my ARSession run the sceneReconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; the anchor-updates async sequence just never yields anything. If I run just the scene reconstruction provider by itself, then I do get mesh updates. Is this a bug? Is it expected that this isn't possible? Thank you
1 reply · 0 boosts · 476 views · Feb ’24
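For reference, a minimal sketch of the pattern I'd expect to work: run both providers in one ARKitSession and consume mesh updates from the scene reconstruction provider's async sequence (whether that actually yields updates with both providers running is exactly the question above):

import ARKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
let worldTracking = WorldTrackingProvider()

func runProviders() async {
    do {
        try await session.run([sceneReconstruction, worldTracking])
        for await update in sceneReconstruction.anchorUpdates {
            // Each update carries a MeshAnchor and whether it was added, updated, or removed.
            print("Mesh anchor \(update.anchor.id): \(update.event)")
        }
    } catch {
        print("ARKitSession error: \(error)")
    }
}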
room plan with ARKit image anchor
I want to have real-time image anchor tracking together with RoomPlan, but it's frustrating not to see anything that supports this, because it would be useful to have interactive things in the scanned room. Ideally both should run at the same time; if that's not possible, how do you align the two tracking spaces when running RoomPlan first and then ARKit image tracking? It sounds like a headache.
1 reply · 1 boost · 558 views · Feb ’24
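One avenue worth checking (a rough sketch, assuming RoomPlan's custom-ARSession support introduced alongside iOS 17 is available): if RoomCaptureSession can be created with your own ARSession, then RoomPlan output and any ARImageAnchors detected on that same session share one tracking space, so no manual alignment is needed:

import ARKit
import RoomPlan

// Shared session: RoomPlan drives it, and image anchors detected on it
// are expressed in the same coordinate space as the scanned room.
let arSession = ARSession()
let roomCaptureSession = RoomCaptureSession(arSession: arSession)

func startScan() {
    roomCaptureSession.run(configuration: RoomCaptureSession.Configuration())
}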
SceneKit C3DTransactionGetStack Crash
My application uses ARKit to capture faces in real time. There are two occasional crashes during use that I cannot reproduce. Below are the crash stacks; these are all system API calls and I have no clue, so any suggestions on how to fix them would be much appreciated. Thank you!

Additional information: BUG IN CLIENT OF LIBPLATFORM: Trying to recursively lock an os_unfair_lock

The first kind: EXC_BREAKPOINT 0x00000001f6d2d20c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 Foundation -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
9 Foundation -[NSRunLoop(NSRunLoop) run] + 64
10 UIKitCore __66-[UIViewInProcessAnimationManager startAdvancingAnimationManager:]_block_invoke_7 + 108
11 Foundation NSThread__start + 732
12 libsystem_pthread.dylib _pthread_start + 136
13 libsystem_pthread.dylib thread_start + 8

The second kind: Crashed: com.apple.arkit.ardisplaylink.0x28083bd80 EXC_BREAKPOINT 0x00000001fe43920c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 CoreFoundation CFRunLoopRun + 64
9 ARKitCore -[ARRunLoop _startThread] + 616
10 Foundation NSThread__start + 732
11 libsystem_pthread.dylib _pthread_start + 136
12 libsystem_pthread.dylib thread_start + 8
0 replies · 0 boosts · 270 views · Feb ’24
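Not a confirmed fix, but the recursive os_unfair_lock abort inside SceneKit's transaction machinery is often associated with mutating the scene graph from more than one thread at once. A minimal defensive sketch (the helper name is hypothetical) funnels every scene mutation through a single queue:

import SceneKit

func applyGeometryUpdate(_ geometry: SCNGeometry, to node: SCNNode) {
    // Perform all SceneKit scene-graph mutations on the main queue, wrapped in a transaction.
    DispatchQueue.main.async {
        SCNTransaction.begin()
        node.geometry = geometry
        SCNTransaction.commit()
    }
}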
photogrammetry
I do not really know how this works, but hi, I am Philemon. For a school assignment I need to program an app; I have two years for this and it is for people who are interested in coding. I want to make an iOS app that can make 3D models from pictures (photogrammetry). I know that there are already apps for this, but I want to code it myself. I have a little bit of experience coding C# in Unity, but I really don't know where to start. Can someone help me? I know that Apple has RealityKit, but I want people without a LiDAR scanner to be able to use this too. So where do I start, and which language do I need to learn? Every comment is welcome!!! Kind regards, Philemon
0 replies · 1 boost · 529 views · Jan ’24
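A minimal sketch of the API usually used for this, RealityKit's PhotogrammetrySession (Object Capture), which reconstructs a textured model from a folder of photos; the URLs are hypothetical, and you would need to check the current platform and device requirements for on-device reconstruction:

import Foundation
import RealityKit

func reconstructModel(imagesFolder: URL, outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)

    // Start listening for outputs before kicking off processing.
    let waiter = Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Model written to \(outputURL.path)")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])
    try await waiter.value
}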
Transform Entities in RealityView?
So I have a RealityView with an Entity (from my bundle) being rendered in it like so:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let entity = try? await Entity(named: "MyContent", in: realityKitContentBundle) {
                content.add(entity)
            }
        }
    }
}

Is it possible to programmatically transform the entity? Specifically I want to (1) translate it horizontally in space, e.g. 1 m to the right, and (2) rotate it 90°. I've been looking through the docs and haven't found a way to do this, but I fear I'm not too comfortable with Apple docs quite yet. Thanks in advance!
1 reply · 0 boosts · 636 views · Jan ’24
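For reference, a minimal sketch of what I'd expect to work once you hold a reference to the entity captured in the RealityView closure: position and orientation are settable directly, or the change can be animated with move(to:):

import RealityKit

func repositionEntity(_ entity: Entity) {
    // Move 1 m to the right (+X) and rotate 90° around the vertical (Y) axis.
    entity.position += SIMD3<Float>(1, 0, 0)
    entity.orientation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0])

    // Or animate the same change over one second.
    var transform = entity.transform
    transform.translation += SIMD3<Float>(1, 0, 0)
    transform.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0])
    entity.move(to: transform, relativeTo: entity.parent, duration: 1.0)
}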
Xyz coordinates to less than 0.1m accuracy?
I have an app idea that would map an old photo onto the front of the same existing building. The underlying work has already been done (see https://lowestoftoldandnow.org/full/strolleast#45), but you would obviously have to accurately record four points in 3D space, and the user of the app would then have to take these points (given to them by the app) and map them back onto the real world with the same accuracy. If the photo extended partly onto the next-door building it would not work. I am beginning to think that the technology is not there yet :-(
0 replies · 0 boosts · 302 views · Jan ’24
Record coordinates in 3d space
Scenario: a building with an old shopfront to be renewed; we have a visual of the new concept. Is there an app that can give us the coordinates in line with the plane of the front of the building, so we can map the visual onto it and have it alter perspective as you walk around, as if it 'sticks' to the front of the real building? The GIF attached is the visual concept, but showing a historic picture.
2 replies · 0 boosts · 347 views · Jan ’24
WorldTrackingProvider's queryDeviceAnchor is not giving correct deviceAnchor
I'm constructing a RealityView where I'd like to display content in front of the user's face. When testing, I found that the deviceAnchor I initially get was wrong, so I implemented the following code to wait until the deviceAnchor I get from the worldTrackingProvider has a correct value:

private let arkitSession = ARKitSession()
private let worldTrackingProvider = WorldTrackingProvider()

var body: some View {
    RealityView { content, attachments in
        Task {
            do {
                // init worldTrackingProvider
                try await arkitSession.run([worldTrackingProvider])

                // wait until deviceAnchor returns correct info
                var deviceAnchor: DeviceAnchor?
                // continuously get deviceAnchor and check until it's valid
                while deviceAnchor == nil || !checkDeviceAnchorValid(Transform(matrix: deviceAnchor!.originFromAnchorTransform).translation) {
                    deviceAnchor = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
                }
                let cameraTransform = Transform(matrix: deviceAnchor!.originFromAnchorTransform)
                // ...code that updates my entity's translation
            } catch {
                print("Error: \(error)")
            }
        }
    }
}

private func checkDeviceAnchorValid(_ translation: SIMD3<Float>) -> Bool {
    // code that checks if the `deviceAnchor` has a valid translation
}

However, I found that sometimes I can't get out of the while loop defined above, not because the rules inside checkDeviceAnchorValid are too strict, but because the translation I get from deviceAnchor is always invalid (it is [0, 0, 0] and never changes). Why is this happening? Is this a known issue? I wonder if there is a way to be called back when the worldTrackingProvider returns a correct deviceAnchor.
2 replies · 0 boosts · 733 views · Jan ’24
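A small reworking of the wait loop (a sketch, not a guaranteed fix): check that the provider has actually reached the running state and yield between polls instead of spinning, which at least gives the session time to deliver a pose:

import ARKit
import QuartzCore

func waitForDeviceAnchor(from provider: WorldTrackingProvider) async -> DeviceAnchor? {
    while !Task.isCancelled {
        // Only query once the provider is running, and only accept a tracked pose.
        if provider.state == .running,
           let anchor = provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()),
           anchor.isTracked {
            return anchor
        }
        try? await Task.sleep(for: .milliseconds(100))
    }
    return nil
}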
Use CoreImage filters on Vision Pro (visionOS) view
I have an iOS app that uses the (camera) video feed and applies CoreImage filters to simulate a specific real-world effect (for educational purposes). Now I want to make a similar app for visionOS and apply the same CoreImage filters to the content (live view) users see while wearing the Apple Vision Pro headset. Is there a way to do it with the current APIs, and what would you recommend? I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters somehow using that? I know visionOS is a young/fresh platform, but any help would be great! Thank you!
1 reply · 0 boosts · 1k views · Jan ’24
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since the geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking is not available as the device moves away, to still retain good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects. Question: does geo-tracking always provide accuracy greater than or equal to that of world tracking, for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
2 replies · 0 boosts · 599 views · Jan ’24
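For the fallback half of the question, a minimal sketch of picking the configuration at runtime: geo-tracking availability is location dependent, so it can be queried before (and re-checked while) running the session:

import ARKit

func chooseConfiguration(_ completion: @escaping (ARConfiguration) -> Void) {
    // Checks whether geo-tracking imagery is available at the current location.
    ARGeoTrackingConfiguration.checkAvailability { isAvailable, _ in
        completion(isAvailable ? ARGeoTrackingConfiguration() : ARWorldTrackingConfiguration())
    }
}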
Roomplan exceeded scene size limit error. (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit. Apple documentation explanation: an error that indicates when the scene size grows past the framework's limitations. Issue: this error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done. The error shows up even if the room size is small, and it occurs immediately after I start the RoomCaptureSession following relocalization of the previous AR session (in a world tracking configuration). I am having trouble understanding exactly why this error appears and how to debug/solve it. Does anyone have any idea how to approach this issue?
0 replies · 1 boost · 490 views · Jan ’24