Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Post

Replies

Boosts

Views

Activity

RoomPlan Framework v2 - Stairs missing
Hello Community,
I'm encountering an issue after the iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and adopting RoomPlan version 2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option in the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app's functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide.
Best regards
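A quick way to narrow down whether stairs are still being detected (but just not rendered) is to inspect the captured room data directly. The following is a minimal sketch assuming a standard RoomCaptureSessionDelegate setup; the delegate class name is a placeholder, while the didUpdate callback and the .stairs category are, as far as I recall, part of the public RoomPlan API.

import RoomPlan

// Sketch: verify whether RoomPlan still detects stairs at all,
// independent of whatever rendering layer draws the results.
final class StairsDebugDelegate: NSObject, RoomCaptureSessionDelegate {
    // Called as RoomPlan refines its model of the room during a scan.
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        let stairs = room.objects.filter { $0.category == .stairs }
        print("Detected \(stairs.count) stair object(s)")
        for stair in stairs {
            // If these print with sensible values, detection still works
            // and the problem is on the display side.
            print("stairs dimensions:", stair.dimensions, "transform:", stair.transform)
        }
    }
}

If the count stays at zero on iOS 17 while the same scan showed stairs on iOS 16, that would point at a detection change in the framework rather than at your rendering code.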
1
0
467
Mar ’24
Meet Object Capture for iOS
Hi,
We are using the "Scanning objects using Object Capture" sample app provided by Apple. It was working fine, but it suddenly started crashing while scanning an object with bounding box settings.
The device log shows a different reason for each crash.
Device: iPad Pro / iPhone 15 Pro
iOS: 17.4
Attaching the device log for the crash; awaiting your response.
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Incident Identifier: 0797ADAF-D653-4C92-8AA0-300AA167002B
CrashReporter Key: 48724a3b30ef15e069f513afff7d1aa2e935a520
Hardware Model: iPhone16,1
Process: GuidedCapture [918]
Path: /private/var/containers/Bundle/Application/ACCE6C58-98F0-4DD7-AA6E-732190E0FD30/GuidedCapture.app/GuidedCapture
Identifier: com.example.apple-samplecode.GuidedCaptureH28X75MLUY
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: com.example.apple-samplecode.GuidedCaptureH28X75MLUY [743]
Date/Time: 2024-03-26 18:23:07.8269 +0530
Launch Time: 2024-03-26 18:20:02.5964 +0530
OS Version: iPhone OS 17.4.1 (21E236)
Release Type: User
Baseband Version: 1.55.04
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000022e6577a4
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [918]
Triggered by Thread: 33
Thread 33 name: Dispatch queue: com.apple.coreoc.queues.serial.session
Thread 33 Crashed:
0 CoreOC 0x22e6577a4 0x22e657400 + 932
1 CoreOC 0x22e657588 0x22e657401 + 391
2 CoreOC 0x22e697628 0x22e697221 + 1031
3 CoreOC 0x22e684864 0x22e684431 + 1075
4 CoreOC 0x22e6831ec 0x22e6828d5 + 2327
5 CoreOC 0x22e6c1fb4 0x22e6c1f5d + 87
6 CoreOC 0x22e5ef128 0x22e5ef105 + 35
7 libdispatch.dylib 0x19291113c _dispatch_call_block_and_release + 31
8 libdispatch.dylib 0x192912dd4 _dispatch_client_callout + 19
9 libdispatch.dylib 0x19291a400 _dispatch_lane_serial_drain + 747
10 libdispatch.dylib 0x19291af30 _dispatch_lane_invoke + 379
11 libdispatch.dylib 0x192925cb4 _dispatch_root_queue_drain_deferred_wlh + 287
12 libdispatch.dylib 0x192925528 _dispatch_workloop_worker_thread + 403
13 libsystem_pthread.dylib 0x1e69f8f20 _pthread_wqthread + 287
14 libsystem_pthread.dylib 0x1e69f8fc0 start_wqthread + 7
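While waiting for a fix, it may help to observe the ObjectCaptureSession state stream so a failure surfaces as an error you can log or recover from before the process traps. This is only a sketch of the general pattern, assuming the structure of Apple's guided-capture sample; the function name and how you obtain the session are placeholders.

import RealityKit
import SwiftUI

// Minimal sketch: watch the capture session's state and log failures.
@MainActor
func monitorCaptureState(of session: ObjectCaptureSession) {
    Task {
        for await state in session.stateUpdates {
            switch state {
            case .failed(let error):
                // Surface the underlying error instead of only seeing a trap in the log.
                print("Object Capture session failed:", error)
            case .completed:
                print("Capture completed")
            default:
                print("Capture state changed:", state)
            }
        }
    }
}

Logging the failed state alongside the crash reports may make it easier to tell whether the crashes share a single underlying cause.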
0
0
544
Mar ’24
Multi-User AR
I'm developing a motion tracking app that requires a real-time view from an iPhone camera to capture a person's body. The motion is mapped to a virtual body, which currently appears overlaid on the person that the iPhone sees. However, I want to transmit this real-time 3D virtual body to a different Apple device, as an AR app, so the other user can place it in their environment. Any suggestions on how I can make this 3D model viewable by another user (and keep it live-updating based on the motion tracking)?
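One possible approach (a sketch under stated assumptions, not the only option) is to keep body tracking on the capturing iPhone and stream just the skeleton's joint transforms over the local network, for example with MultipeerConnectivity; the receiving device then applies those transforms to its own copy of the virtual body placed in its AR scene. Everything below other than the ARKit and MultipeerConnectivity calls is a placeholder.

import ARKit
import MultipeerConnectivity

// Sketch: on the capturing device, forward each ARBodyAnchor update
// as a flat array of joint transforms to connected peers.
final class BodyStreamer: NSObject, ARSessionDelegate {
    let mcSession: MCSession   // assumed already set up and connected

    init(mcSession: MCSession) {
        self.mcSession = mcSession
        super.init()
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let body = anchors.compactMap({ $0 as? ARBodyAnchor }).first else { return }

        // Flatten each 4x4 joint transform into 16 floats for simple encoding.
        var floats: [Float] = []
        for t in body.skeleton.jointModelTransforms {
            floats.append(contentsOf: [
                t.columns.0.x, t.columns.0.y, t.columns.0.z, t.columns.0.w,
                t.columns.1.x, t.columns.1.y, t.columns.1.z, t.columns.1.w,
                t.columns.2.x, t.columns.2.y, t.columns.2.z, t.columns.2.w,
                t.columns.3.x, t.columns.3.y, t.columns.3.z, t.columns.3.w,
            ])
        }
        let data = floats.withUnsafeBufferPointer { Data(buffer: $0) }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .unreliable)
    }
}

On the receiving device you would decode the floats back into simd_float4x4 values and drive the corresponding joints of the virtual body; ARKit's collaborative session support is an alternative if both devices also need to share the same world map.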
1
0
420
Mar ’24
RealityKit framework crash: cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect
I have a RealityKit-based app in TestFlight and I have seen the following crash twice. It appears to be coming from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?

Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
0
0
484
Mar ’24
Frequent session interruption events when running a modified RoomPlan.
When running a modified version of the RoomPlan demo, I get frequent Session Interrupted conditions. Looking at the traces, I find a status of SensorDidPause on the interruption side of the error, but I am mystified as to how to determine which sensor paused and how to diagnose it. It appears there is a bitmap of available and active sensor devices in the sensor info passed with the session data on the error. Looking at the error status, I can see that one or two of the motion sensors have had a problem. How do I do further diagnostic checks on the cause of the error? I am also curious why the error occurred as soon as the AR session for my test started via the “session.run” call. The documentation in this area seems difficult to find. Attached are traces from running the test and stack dumps for the calls. Please send me guidance on how to proceed. The device in question is an iPad (“iPhone(3)”) that is attached to the Mac mini named “Hawkeye”. There is no known direct involvement of the Hawkeye system.
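To narrow down which sensor paused, one option is to attach your own observer to the underlying ARSession and log every interruption, tracking-state change, and failure as it happens. The sketch below assumes you can reach the ARSession that RoomPlan is using (RoomCaptureSession exposes an arSession property on recent SDKs; if your setup differs, the same observer works on any ARSession you own).

import ARKit

// Sketch: log interruptions and tracking-state changes to correlate
// them with the SensorDidPause status seen in the traces.
final class InterruptionLogger: NSObject, ARSessionDelegate {
    func sessionWasInterrupted(_ session: ARSession) {
        print("ARSession interrupted at", Date())
    }
    func sessionInterruptionEnded(_ session: ARSession) {
        print("ARSession interruption ended at", Date())
    }
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        // e.g. .limited(.excessiveMotion) or .limited(.insufficientFeatures)
        print("Tracking state:", camera.trackingState)
    }
    func session(_ session: ARSession, didFailWithError error: Error) {
        print("ARSession failed:", error)
    }
}

Note that replacing the delegate on a session that RoomPlan owns is an assumption on my part and could interfere with its own handling, so this is best treated as a temporary diagnostic step.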
4
0
539
Mar ’24
Having an issue with two AR sessions together
I have the following issue when running two AR services. I am trying to develop an app for my master's thesis.
Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen.
Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above.
There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about it or any sample. I am using the following stack: Xcode (latest), SwiftUI, latest OS on the Mac mini and iPhone.
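For what it's worth, one pattern to try (a sketch only, assuming an iOS ARView-based RealityKit screen; the function and parameter names are placeholders) is to fully stop the RoomPlan capture session and only then run the ARView's own session with a fresh configuration, resetting tracking and removing existing anchors so the camera is handed back cleanly.

import RoomPlan
import RealityKit
import ARKit

// Sketch: tear down the RoomPlan session, then (re)start RealityKit's ARSession.
func switchToRealityKit(from roomSession: RoomCaptureSession, to arView: ARView) {
    // 1. Stop RoomPlan so it releases the camera.
    roomSession.stop()

    // 2. Run the ARView's session with a fresh configuration.
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical]
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}

If the black screen persists even after an explicit stop and a reset run, that would suggest the camera is still held elsewhere rather than a configuration problem in the RealityKit screen.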
2
1
742
Apr ’24
Vision Pro - viable for industry applications?
I'm in Europe; Vision Pro isn't available here yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try to sell the idea of investing in a set of Vision Pro devices, plus app development for them, to the people overseeing the budget for a project I'm part of.

The project is broadly in an "industry" setting where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base. Now, the hardware, the OS features, and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and an internet connection (I guess?). This, for example, is a strict nope in my case, for security reasons.

I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo. Potentially, this device has several marks ticked with regards to incredibly useful features in general:
- very good indoor tracking
- pass through with good fidelity
- hands free operation

The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.)

For the sake of argument, let's say the app I'm building is Cave Mapper. It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want to have synchronised access to the same spatial data from the day before. For that:
- How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid; others have said it can drift.
- How well do the persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
- Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
- Other showstoppers?
- In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional / rotational reference)?

Other, practical stuff:
- Can you wear Vision Pro with a safety helmet?
- Does it work with gloves?
1
0
571
Apr ’24
Searching for APIs compatible with iOS and visionOS
Is there a method for finding APIs that are compatible with both iOS and visionOS (e.g. hoverStyle)? I'm encountering difficulties in developing for visionOS, although I can successfully build for "Apple Vision (Designed for iPad)". Are there any methods for discovering APIs that support both platforms? I'm looking to enhance my application and would appreciate any guidance on where to find such APIs. Additionally, I'm interested in changing the background to a glass style. However, it seems that this feature may not be supported by the available APIs, particularly those designed for visionOS. Any suggestions or insights would be greatly appreciated.
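Apple's documentation pages list platform availability per symbol, and in code you can guard platform-specific modifiers with conditional compilation. As a small illustration (a sketch only; the view and its content are placeholders), a visionOS-only modifier such as glassBackgroundEffect() can be wrapped in #if os(visionOS), while shared modifiers like hoverEffect() compile for both platforms:

import SwiftUI

struct InfoCard: View {
    var body: some View {
        content
        #if os(visionOS)
            // Glass background is a visionOS-specific SwiftUI modifier.
            .glassBackgroundEffect()
        #else
            // Approximate fallback styling on iOS.
            .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 16))
        #endif
    }

    private var content: some View {
        Text("Hello, spatial world")
            .padding()
            .hoverEffect() // available on iOS (pointer) and visionOS (look-based)
    }
}

Checking the "Availability" section of each symbol's documentation page, plus building the visionOS destination early, is the most reliable way I know of to confirm which APIs exist on both platforms.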
1
0
403
Apr ’24
Integrating visionOS support into an existing SwiftUI iOS app that uses CocoaPods
I am attempting to integrate visionOS support into my existing iOS app, which uses SwiftUI and CocoaPods. However, upon adding visionOS as a supported platform and attempting to run the app, I encounter two errors:
"'jot/jot.h' file not found" at "/Users/xxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxxxx/xxxxxxxxx-Bridging-Header.h:17:9".
"Failed to emit precompiled header" at "/Users/xxxxxx/Library/Developer/Xcode/DerivedData/xxxxxxxxx-bnhvaxypgfhmvqgklzjdnxxbrdhu/Build/Intermediates.noindex/PrecompiledHeaders/xxxxxxxxx-Bridging-Header-swift_6TTOG1OAZB5F-clang_21TRHDW14EDOZ.pch" for bridging header "/Users/xxxxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxx/xxxxxxxx-Bridging-Header.h".
I'm seeking assistance with resolving these errors. Below is my Podfile configuration:

source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '15.0'

target 'xxxxxxxxxx' do
  use_frameworks!
  pod 'RealmSwift'
  pod 'JGProgressHUD'
  pod 'BadgeLabel'
  pod 'jot'
  pod 'MaterialComponents/Chips'
  pod 'GoogleMaps'
  pod 'Firebase/Crashlytics'
  pod 'Firebase/Analytics' # Firebase pod for Google Analytics
  # Add pods for any other desired Firebase products
  # https://firebase.google.com/docs/ios/setup#available-pods
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '15.0'
    end
  end
end

Any assistance in resolving these errors would be greatly appreciated.
0
0
446
Apr ’24
Restarting a stopped ARKitSession in visionOS causes the app to crash
Flow: the user enters the app and we start an ARKitSession with world tracking and scene reconstruction. The user closes the app, so we stop the session. The user re-enters the app and we try to run the session again, but the app crashes with the error: "It is not possible to re-run a stopped data provider." If we remove the code that stops the session, then when the user re-enters the app the scene reconstruction doesn't work properly and shows inaccurate meshing data. Is this a bug, or am I doing something wrong here? Any ideas or insight are appreciated.
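The error message suggests the data providers are one-shot: once a provider has been run and stopped, it can't be handed to run() again. A pattern that may help (a sketch under that assumption; the class and method names are placeholders, the ARKitSession/provider APIs are standard visionOS ARKit) is to create fresh provider instances every time the immersive experience starts, instead of reusing the stopped ones:

import ARKit

@MainActor
final class TrackingModel {
    private var session = ARKitSession()
    // Kept so the rest of the app can query anchors and meshes.
    private var worldTracking: WorldTrackingProvider?
    private var sceneReconstruction: SceneReconstructionProvider?

    // Call each time the immersive space opens.
    func start() async throws {
        // New provider instances every run; stopped providers cannot be re-run.
        let world = WorldTrackingProvider()
        let scene = SceneReconstructionProvider()
        worldTracking = world
        sceneReconstruction = scene
        try await session.run([world, scene])
    }

    // Call when the immersive space closes.
    func stop() {
        session.stop()
        // Also recreating the session itself before the next run is a cautious
        // assumption here, in case a stopped session cannot be reused either.
        session = ARKitSession()
    }
}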
3
0
679
Apr ’24
Removed WorldAnchors reappear
While WorldTrackingProvider.removeAnchor() completes without error, the WorldAnchor might be back the next time the app is run. This can easily be replicated with the ObjectPlacement sample: just add 10 objects, Remove All, then run the app again. On the first run the anchors might be gone, but run the app a couple more times and the anchors come back. This becomes a big problem when paired with the issue that anchors are not always found when the app enters Immersive mode. When an anchor is not found, our app adds an anchor. That usually works fine for that run. On the next run, however, the other anchors show up again. Anchors accumulate and it becomes difficult to keep track of them.
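As a possible workaround while the underlying behavior is unclear, you could reconcile anchors on launch: keep your own record of the anchor IDs you intend to exist, and when the WorldTrackingProvider reports anchors that are not in that record, remove them again. This is only a sketch under that assumption; persistedIDs and how it is stored are placeholders.

import ARKit

// Sketch: drop any world anchor that is not in our own persisted set.
func reconcileAnchors(worldTracking: WorldTrackingProvider,
                      persistedIDs: Set<UUID>) async {
    for await update in worldTracking.anchorUpdates {
        guard update.event == .added else { continue }
        let anchor = update.anchor
        if !persistedIDs.contains(anchor.id) {
            // This anchor was previously removed (or is stale); remove it again.
            try? await worldTracking.removeAnchor(anchor)
        }
    }
}

This doesn't explain why removed anchors reappear, but it at least keeps the accumulation from compounding across runs while the issue is investigated.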
2
0
497
Apr ’24
visionOS 3D tap location offset by ~0.35m?
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location). The tap locations are consistently about 0.35 m off along the horizontal plane (never vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where the headset was looking). Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location.

Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
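One thing that may be worth checking (an assumption on my part, not a confirmed cause of the offset): Apple's sample code for entity-targeted spatial gestures converts the tap location from the gesture's .local coordinate space rather than .global before mapping it into the scene, along these lines:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            // Convert from the gesture's local coordinate space into scene space.
            let location3D = event.convert(event.location3D, from: .local, to: .scene)
            model.handleTap(location: location3D, entity: event.entity)
        }
}

If the offset persists with .local, comparing setPosition(_:relativeTo: nil) against parenting the cube to the tapped entity (or to the scene root) might help rule out a parent-transform mismatch.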
5
0
918
Apr ’24