Hi Team,
Is there a way to extract a colorized scan as well when using the RoomPlan SDK? If yes, can you point me to the right reference link?
Does the RoomPlan SDK provide the dimensions of the room?
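For context on the dimensions part, this is the kind of thing I have in mind, based on my reading of the RoomPlan docs: each captured surface and object carries a dimensions vector, so room extents could be derived from the walls. The delegate and property names below are from memory, so please correct me if they're off.

import RoomPlan
import simd

final class RoomScanner: NSObject, RoomCaptureSessionDelegate {
    // Called as RoomPlan refines its model of the room.
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Each wall surface exposes its size (in meters) via `dimensions`.
        for wall in room.walls {
            let size: simd_float3 = wall.dimensions
            print("Wall: \(size.x) m wide, \(size.y) m high")
        }
        // Detected objects (tables, beds, ...) expose dimensions the same way.
        for object in room.objects {
            print("\(object.category): \(object.dimensions)")
        }
    }
}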
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see.
Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback.
I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes.
(It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create them via server-side JavaScript code.)
I built two parts of my app somewhat independently:
a physics component, which runs all of my SceneReconstruction, HandTracking, and WorldTracking providers;
a spatial GroupActivities component that lets you see the Personas of those who join the activity.
My problem: when I try to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers.
My question: has anyone successfully found a workaround for this? I think it would be amazing for one user to act as the "host" of the activity while the scene reconstruction provider keeps running for them.
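For anyone who wants to reproduce or instrument this, here is roughly how I watch the session's event stream to see which providers get paused and why. The associated values on dataProviderStateChanged are written from memory of the visionOS ARKit docs, so double-check them against the current SDK.

import ARKit

func monitorSessionEvents(_ session: ARKitSession) async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(let providers, let newState, let error):
            // Logs which providers changed state and the error (if any) that caused it.
            print("Providers \(providers) moved to \(newState), error: \(String(describing: error))")
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            print("Unhandled session event: \(event)")
        }
    }
}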
How is it possible to add a schema for AR to a USD file using the Python tools (or any other way)?
Following the instructions in https://developer.apple.com/documentation/arkit/arkit_in_ios/usdz_schemas_for_ar/actions_and_triggers/preliminary_behavior,
the steps are to have the following declaration:
class Preliminary_Behavior "Preliminary_Behavior" (
    inherits = </Typed>
)
and a USD file:
#usda 1.0

def Preliminary_Behavior "TapAndFlip"
{
    rel triggers = [ <Tap> ]
    rel actions = [ <Entry> ]

    def Preliminary_Trigger "Tap" ( inherits = </TapGestureTrigger> )
    {
        rel affectedObjects = [ </Cube> ]
    }

    def Preliminary_Action "Entry" ( inherits = </GroupAction> )
    {
        uniform token type = "parallel"
        rel actions = [ <Flip> ]
    }

    def Preliminary_Action "Flip" ( inherits = </EmphasizeAction> )
    {
        rel affectedObjects = [ </Cube> ]
        uniform token motionType = "flip"
    }
}

def Cube "Cube" { }
How do these parts fit together? I saved the usda file, but it didn't have any interactions. Obviously I have to add that declaration somewhere, but how do I do that? Is this all done in an AR Xcode project, or can I do it with the Python tools (I would prefer something very lightweight)?
As the title says: while I can find the video captures on the desktop, I cannot find where the screenshots are stored, even when it reports that the screenshot succeeded.
I am referencing this: https://developer.apple.com/documentation/visionos/capturing-screenshots-and-video-from-your-apple-vision-pro-for-2d-viewing
We want to build a multi-person networked application for Vision Pro. The first step is to get multiple Vision Pro devices to share the same spatial information, so that their world coordinates are consistent.
Can we simply copy the spatial information created by one Vision Pro's scan to the other devices?
I'm taking my iOS/iPadOS app and converting it so it runs on visionOS. I'm trying to compile and build my app for both visionOS and iOS. When I try to build for an iPhone or iPad simulator, I get the following error:
 Building for 'iphonesimulator', but realitytool only supports [xros, xrsimulator]
I'm thinking I might need an #if conditional-compilation statement for visionOS so the iOS build doesn't try to compile those lines of code, but for this particular error I can't work out which file or code needs the conditional compilation. Does anyone know how to get rid of this error?
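To illustrate, this is the kind of #if guard I had in mind, assuming the offending code is something visionOS-only; the RealityKitContent import below is just a placeholder for whatever realitytool is actually processing in my project:

import RealityKit
#if os(visionOS)
import RealityKitContent   // placeholder: the Reality Composer Pro package realitytool builds
#endif

func makeImmersiveContent() async throws -> Entity {
    #if os(visionOS)
    // visionOS-only: load from the Reality Composer Pro bundle.
    return try await Entity(named: "Scene", in: realityKitContentBundle)
    #else
    // iOS / iPadOS build: skip the visionOS-only content entirely.
    return Entity()
    #endif
}

But if the error comes from the Reality Composer Pro package itself rather than from a source file, I'm guessing the fix is in target membership rather than code, which is why I'm asking.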
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and it's a mess.
I have found that my Vision Pro device can get into a state where my app is no longer receiving fresh SceneReconstructionProvider updates. It reports that the SceneReconstructionProvider goes into the DataProviderState.running state, and .anchorUpdates will report a set of stale mesh anchors when first fired up, but does not produce any further updates. Once the device gets into this state, I can force quit the app, and even uninstall and re-install it, and I get the same few mesh updates, but no fresh updates until I restart the device.
Sample async function below. I can confirm that print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates") never gets executed, so it stays inside the sceneReconstruction.anchorUpdates loop.
let session = ARKitSession()
var handTracking = HandTrackingProvider()
let sceneReconstruction = SceneReconstructionProvider()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
let worldTracking = WorldTrackingProvider()
...
func start() async {
    do {
        await requestAuth()
        if dataProvidersAreSupported && isReadyToRun && !isRunning {
            // print("ARKitSession starting.")
            try await session.run([sceneReconstruction, handTracking, planeDetection, worldTracking])
            startCount += 1
            // TODO: Fail gracefully if we have to attempt start too many (# TBD) times
        } else {
            print("dataProvidersAreSupported: \(dataProvidersAreSupported). isReadyToRun: \(isReadyToRun)")
            print("handTracking.state: \(handTracking.state), sceneReconstruction.state: \(sceneReconstruction.state), worldTracking.state: \(worldTracking.state), planeDetection.state: \(planeDetection.state)")
        }
    } catch {
        print("ARKitSession error:", error)
    }
}
...
func processReconstructionUpdates() async {
    while true {
        for await update in sceneReconstruction.anchorUpdates {
            let meshAnchor = update.anchor
            guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
            switch update.event {
            case .added:
                let entity = try! await generateModelEntity(geometry: meshAnchor.geometry)
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                entity.components.set(InputTargetComponent())
                entity.name = "mesh"
                entity.physicsBody = PhysicsBodyComponent(mode: .static)
                let sortComponent = ModelSortGroupComponent(group: modelSortGroup, order: 1)
                entity.components.set(sortComponent)
                entity.components.set(OpacityComponent(opacity: 0.5))
                meshEntities[meshAnchor.id] = entity
                meshesParent.addChild(entity, preservingWorldTransform: true)
            case .updated:
                guard let entity = meshEntities[meshAnchor.id],
                      let updatedEntity = try? await generateModelEntity(geometry: meshAnchor.geometry) else { continue }
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision?.shapes = [shape]
                if let newMesh = updatedEntity.model?.mesh {
                    entity.model?.mesh = newMesh
                }
            case .removed:
                meshEntities[meshAnchor.id]?.removeFromParent()
                meshEntities.removeValue(forKey: meshAnchor.id)
            }
            print("We now have '\(meshEntities.count)' mesh entities")
        }
        print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates")
        try? await Task.sleep(nanoseconds: 1_000_000)
    }
}
I'm wondering whether it's possible to implement object tracking on Vision Pro using Apple's Vision framework. I see that the Vision documentation offers a variety of computer-vision classes tagged "visionOS", but all the example code in the documentation is only for iOS, iPadOS, or macOS. So can those classes also be used for developing Vision Pro apps? If so, how do they get a data feed from the camera of the Vision Pro?
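For context, here's the kind of call I have in mind; it works against image data I supply myself, which is exactly why I'm asking about the camera feed (rectangle detection is just an arbitrary example):

import Vision
import CoreGraphics

// Runs a simple Vision request on an image obtained from elsewhere
// (a file, a network stream, a rendered frame) -- not the device camera.
func detectRectangles(in image: CGImage) throws -> [VNRectangleObservation] {
    let request = VNDetectRectanglesRequest()
    request.maximumObservations = 10
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}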
I am developing an iPhone app, but I've been targeting the AVP as well. In fact, since I got the AVP, I've mainly been building and running my app on it. This morning I upgraded to Xcode 15.4 (15F31d), and ever since I have not been able to see my AVP as a run destination.
It does show up in the device list, although there are no provisioning files on it for some reason. But I can't target it for building. I've tried unpairing and turning developer mode off and on.
Has anyone else seen this problem after upgrading Xcode? Any help is appreciated.
I'm writing a Vision Pro app that's fully immersive and rendered using Metal. Occasionally, some users of this app would benefit from being able to use a physical keyboard (or another accessory, like a game controller). It seems very straightforward to capture and handle spatial gesture events, but I cannot find an interface that allows the detection, capture, or handling of keyboard events in any of the objects associated with fully immersive Metal rendering: CompositorServices, LayerRenderer, and its associated .frame, .drawable, and .drawable.view don't seem to have any accessory awareness. Can you help me handle a keyboard event?
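One avenue I've been looking at, purely as an experiment, is the GameController framework's keyboard support. I don't know whether these events are delivered while a CompositorServices immersive layer has focus, which is really my question:

import Foundation
import GameController

// Registers for keyboard connect notifications and logs key presses.
// Whether these land while a fully immersive Metal layer has focus is
// exactly what I'm unsure about -- treat this as an experiment.
final class KeyboardMonitor {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { notification in
            guard let keyboard = notification.object as? GCKeyboard else { return }
            keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
                print("Key \(keyCode) \(pressed ? "down" : "up")")
            }
        }
        // A keyboard may already be connected before the notification fires.
        GCKeyboard.coalesced?.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
            print("Key \(keyCode) \(pressed ? "down" : "up")")
        }
    }
}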
We are hoping to port our app, which requires printing, to visionOS. Can I check whether visionOS supports AirPrint/printing? If anyone has already done it, is there anything we should watch out for?
Hi, I'm working on a virtual furniture placement app. I have used the Object Placement example, and I wonder if it is possible to back up an anchor to the cloud or share an anchor with another device, so users can view the same model in the same place? Thanks.
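To make the question concrete, here is a sketch of the on-device part as I understand the ARKit API (addAnchor and the id property are from my reading of the WorldTrackingProvider docs); the cloud/other-device part is what I'm asking about:

import ARKit

// Sketch: persist a placement as a WorldAnchor so it survives app restarts
// on the same device. Whether the anchor (or its ID) can be backed up to
// the cloud or shared with another Vision Pro is the open question.
func saveFurniturePlacement(_ transform: simd_float4x4,
                            using worldTracking: WorldTrackingProvider) async throws -> UUID {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    // Store anchor.id (e.g., in UserDefaults or a backend) and look the anchor
    // back up from worldTracking.anchorUpdates on the next launch.
    return anchor.id
}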
We followed this documentation https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal to display a fully immersive map using our Metal rendering engine, which worked great. But this part of the article: https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal#4193614 explains how to use the onSpatialEvent callback to receive gesture events. We are receiving the gesture events, but the location property of the event (https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/location) always comes back as (x: 0, y: 0), which is not helpful. We are unable to get a single valid location for any gesture, and therefore we are unable to hook these gestures up. We tried this both in the simulator and on a Vision Pro device.
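In case it helps anyone reproduce this, here is roughly how we log the incoming events; location3D and selectionRay are properties I believe exist on SpatialEventCollection.Event, so please correct me if the names are off:

import SwiftUI
import Spatial

// Logs everything we can read off each spatial event, to see which fields
// carry usable data when `location` comes back as (0, 0).
func logSpatialEvents(_ events: SpatialEventCollection) {
    for event in events {
        print("phase: \(event.phase)")
        print("location (2D): \(event.location)")
        print("location3D: \(event.location3D)")
        print("selectionRay: \(String(describing: event.selectionRay))")
    }
}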
[tldr version: all the point cloud capture apps rushed out an update when the iPhone 15 Pro was released because they were capturing far fewer points on that device. The same is observed with the new M4 iPad Pro. What was the fix for compatibility with these new devices?]
I am running an ARKit replay file through "Displaying a Point Cloud Using Scene Depth" from WWDC20 and recording the ARConfidenceLevel values of the incoming ARDepthData.
I am doing this side-by-side on an iPhone 12 Pro and an iPhone 15 Pro. The ARKit replay file was originally recorded on the 12 Pro.
We get a certain percentage of points where the ARConfidenceLevel is not ".high" when running on the iPhone 12 Pro. It varies a lot by frame but averages about 5%, and it is the same on all devices prior to the iPhone 15 Pro.
The same test using the same iPhone 12 Pro replay file on an iPhone 15 Pro gives about twice as many points where the ARConfidenceLevel is not ".high" (about 10% on average on this particular replay file).
This corresponds with real-world usage of our app on the iPhone 15 Pro where far fewer points are captured on that device compared with all previous models. (Our app filters out points where the ARConfidenceLevel is not ".high".)
Apple's interpretation of the same LiDAR data is clearly different on the iPhone 15 Pro and M4 iPad Pro when compared to earlier devices. Can you please advise how to maintain equivalent behaviour on the new devices?
Steps to reproduce:
Run "Displaying a Point Cloud Using Scene Depth" from WWDC20 session 10611: Explore ARKit 4 following the instructions at https://developer.apple.com/documentation/arkit/arkit_in_ios/environmental_analysis/displaying_a_point_cloud_using_scene_depth
Use Xcode's setting to replay data to ARKit while running on an iPhone 12 Pro (using any replay file recorded on that device). An iPhone 13 or 14 Pro will work just as well. Record what % of points have an ARConfidenceLevel that is not .high.
Now do it again, running the same replay file on an iPhone 15 Pro. Note that the % of points with an ARConfidenceLevel that is not .high is much higher.
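For reference, the counting we do looks roughly like this (a sketch, assuming the confidence map is the documented one-component 8-bit buffer whose pixel values match ARConfidenceLevel raw values):

import ARKit

// Returns the fraction of depth samples whose confidence is below .high.
func lowConfidenceFraction(of depthData: ARDepthData) -> Float {
    guard let confidenceMap = depthData.confidenceMap else { return 0 }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return 0 }

    var notHigh = 0
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] < UInt8(ARConfidenceLevel.high.rawValue) {
            notHigh += 1
        }
    }
    return Float(notHigh) / Float(width * height)
}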
Hello,
I'm trying to download a native spatial video for a software program I'm putting together, where people can upload spatial videos from the web and deploy them inside a native visionOS app that shows a breadth of different file formats.
I have a bug I'm trying to resolve for an app review on the store.
The basic flow is this:
The user presses a button and enters a fully immersive space.
While in the fully immersive space, the user presses the Digital Crown button to exit fully immersive mode and return to the Shared Space. (Note: this is pressing the button, not rotating the Digital Crown to control the immersion level.)
At this point I need an event or onChange (or similar) to know whether the user is in immersive mode, so I can reset a flag I've been manually setting to track whether or not the user is currently viewing an immersive space.
I have an onChange watching scenePhase and printing the old/new values to the console; however, it is never triggered.
It seems like this might be an edge case, but I'm curious whether there's another way to detect whether or not a user is in an immersive scene.
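One workaround I'm considering (a sketch only; the app and scene names here are made up, and I haven't confirmed that onDisappear fires on the Digital Crown press path) is to drive the flag from the immersive space's own view lifecycle instead of scenePhase:

import SwiftUI
import RealityKit

@main
struct ImmersionFlagApp: App {        // hypothetical names throughout
    @State private var isImmersed = false

    var body: some Scene {
        WindowGroup {
            Text(isImmersed ? "In immersive space" : "In shared space")
        }

        ImmersiveSpace(id: "Immersive") {
            RealityView { _ in }
                // The immersive space's content appears when the space opens and
                // should disappear when the system dismisses it, so these callbacks
                // track the flag without relying on scenePhase.
                .onAppear { isImmersed = true }
                .onDisappear { isImmersed = false }
        }
    }
}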
I have followed every step of all the instructions.
Nothing happens.
I did a factory reset of both my MacBook Pro and Vision Pro,
using the same Apple ID.
The Vision Pro still doesn't appear.
Hi,
We are currently trying to implement a very simple test application using Vision Pro.
We display a virtual copy of an object (based on CAD data) and then try to align the real object with the virtual one.
It seems to be impossible!
You can align them to a certain degree, but if you walk around the object to check the alignment, reality seems to warp and wobble by almost 2 cm.
Is there any way to fix this?