When I call queryDeviceAnchor in my Billboard system (similar to the Diorama sample app), I get transform updates, but I'm unsure how to process them.
Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a Full Space, so I would expect this not to work at all.
But if this is the case, why am I getting deviceAnchor values in this situation?
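For reference, here is a minimal sketch of how I would expect a billboard system to consume these transforms, assuming a WorldTrackingProvider that is already running; worldTracking and billboardEntity are placeholder names, and the forward axis may need flipping depending on the content:
import ARKit
import RealityKit
import QuartzCore

// Sketch only: orient an entity toward the device using the latest device pose.
// `worldTracking` is assumed to be a running WorldTrackingProvider.
func billboard(_ billboardEntity: Entity, using worldTracking: WorldTrackingProvider) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return // No pose available yet.
    }
    let device = deviceAnchor.originFromAnchorTransform.columns.3
    // Rotate the entity so it faces the device position while keeping its world position.
    billboardEntity.look(at: [device.x, device.y, device.z],
                         from: billboardEntity.position(relativeTo: nil),
                         relativeTo: nil)
}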
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Hello World" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process, I encountered an unexpected error that halted my progress. The error message I received was [insert error message here].
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy Games
Hi, I tried to change the default size for a volumetric window, but it looks like this window has a maximum width value. Is that true?
WindowGroup(id: "id") {
    ItemToShow()
}
.windowStyle(.volumetric)
.defaultSize(width: 100, height: 0.8, depth: 0.3, in: .meters)
Here I set the width to 100 meters, but it still looks like about 2 meters wide.
I have a main app window and present an immersive space in mixed reality. I am trying to determine the anchor/position of this glass window in 3D space and place a Sphere entity right next to it. The goal is to ensure that if the user moves the window, the Sphere entity remains attached to it. Does anyone have insights on how to achieve this?
The code snippet below provides the position of the device, and I have offset the sphere from it along the z-axis. However, my objective is to obtain the position of the glass window and anchor the sphere to it. Any guidance on achieving this would be appreciated.
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit
import QuartzCore

struct ImmersiveView: View {
    let visionProPose = VisionProPose()

    var body: some View {
        RealityView { content in
            Task { await visionProPose.runArSession() }
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        } update: { content in
            if let scene = content.entities.first {
                if let sphere = scene.findEntity(named: "Sphere") as? ModelEntity {
                    Task {
                        let transform = await visionProPose.getTransform()
                        sphere.position = [Float((transform?.columns.3.x)!),
                                           Float((transform?.columns.3.y)!),
                                           Float((transform?.columns.3.z)!) - 1]
                    }
                }
            }
        }
    }
}
@Observable class VisionProPose {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func runArSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }

    func getTransform() async -> simd_float4x4? {
        // Query the device pose at the current time rather than a fixed timestamp.
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return nil }
        let transform = deviceAnchor.originFromAnchorTransform
        return transform
    }
}
Aloha,
I'm wondering where the documentation for the Vision Pro "Developer Strap" is located. I have the Vision Pro devices and the developer straps, but I'm not sure how to go about using the developer straps for visionOS development in Xcode & Unity.
Hello,
I want to be able to tap on a previously placed ModelEntity box and add a dot or some text at that location on the box (kind of like adding an annotation to the box).
I have something like this, but not sure how I should do it correctly:
class MyARView: ARView {
    // ...
    private func didTap(_ gestureRecognizer: UITapGestureRecognizer) {
        let pos = gestureRecognizer.location(in: self)
        if !didPlaceCube {
            placeCube(pos)
            return
        }
        let hitTestResult = self.hitTest(pos)
        guard let firstResult = hitTestResult.first else { return }
        let entity = firstResult.entity
        let textEntity = ModelEntity(mesh: .generateText("Hello there",
                                                         extrusionDepth: 0.4,
                                                         font: .boldSystemFont(ofSize: 0.05),
                                                         containerFrame: .zero,
                                                         alignment: .center,
                                                         lineBreakMode: .byWordWrapping))
        textEntity.setPosition(entity.position + firstResult.position, relativeTo: entity)
        entity.addChild(textEntity)
    }
    // ...
}
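For comparison, here is a minimal sketch of how I currently picture the placement step, assuming the hit position from the CollisionCastHit is in world space; addAnnotation is a hypothetical helper, and the font/extrusion values are arbitrary:
import RealityKit
import UIKit

// Sketch only: place a text annotation at a world-space hit point on a tapped entity.
func addAnnotation(_ text: String, at hitPosition: SIMD3<Float>, on entity: Entity) {
    // Convert the world-space point into the tapped entity's local space (nil = world space).
    let localHit = entity.convert(position: hitPosition, from: nil)
    let textEntity = ModelEntity(mesh: .generateText(text,
                                                     extrusionDepth: 0.01,
                                                     font: .boldSystemFont(ofSize: 0.05)))
    textEntity.position = localHit
    entity.addChild(textEntity)
}
This would be called from the tap handler above with firstResult.position and firstResult.entity.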
Hi,
I'm working on a simple visionOS app and I'm testing on device.
For one part of the app, I load an object and place it on the user's hand. If I use a primitive shape, like a sphere or cylinder, this works fine. However, now I'm trying to load an object from my RealityKitContent package, and every time I try this, I get an error message, resourceNotFound("Stone"), where "Stone" is one of my usda scenes.
This is what the guts of my function that should return a ModelEntity look like:
do {
    let entity = try await ModelEntity(named: "Stone", in: realityKitContentBundle)
    entity.generateCollisionShapes(recursive: true)
    return entity
} catch {
    print("Error \(error)")
}
I can see the "Stone" in my Xcode sidebar as part of the RealityKitContent package and inside that scene, there is a simple sphere, but alas I always get this in the Xcode console, "Error resourceNotFound("Stone")"
I'm probably doing something pretty silly, hopefully it's obvious to someone else.
Thanks for the help.
Ian
I'd like to map a SwiftUI view (in my case: a map) onto a 3D curved plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map, by panning it around and selecting markers.
Is this possible?
I'm developing a Vision Pro application. However, when the user takes off the Apple Vision Pro device, the application goes into the background. How can I prevent this behavior programmatically?
Following this thread, I'm able to render a simple picture on a plane material; however, I'm unable to scale it to show bigger than the window itself, or to move it behind the window.
Here's my relevant code so far:
var body: some View {
    ZStack {
        RealityView { content in
            var material = UnlitMaterial()
            material.color = try! .init(tint: .white,
                                        texture: .init(.load(named: "image", in: nil)))
            let entity = Entity()
            let component = ModelComponent(
                mesh: .generatePlane(width: 1, height: 1),
                materials: [material]
            )
            entity.components.set(component)
            let currentTransform = entity.transform
            var newTransform = Transform(scale: currentTransform.scale,
                                         rotation: currentTransform.rotation,
                                         translation: SIMD3(0, 0, -0.2))
            entity.move(to: newTransform, relativeTo: nil)
            /*
            let scalingPivot = Entity()
            scalingPivot.position.y = entity.visualBounds(relativeTo: nil).center.y
            scalingPivot.addChild(entity)
            content.add(scalingPivot)
            scalingPivot.scale *= .init(x: 1, y: 1, z: 1)
            */
        }
    }
}
It belongs to an ImmersiveSpace I'm opening directly from my main window, but I have several issues:
The texture always shows in front of the window
I'm unable to scale it (scaling seems to affect the texture coordinates inside the material instead of scaling the mesh itself)
I can only see the texture in the canvas preview (not in simulator)
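For reference, these are the two sizing approaches I have been comparing; a sketch only, continuing from the snippet above (material and entity are the same names), and just my assumption that scaling the entity's transform scales the mesh rather than the texture coordinates:
// Sketch only, inside the RealityView make closure above.
// 1. Generate the plane at the desired size up front.
entity.components.set(ModelComponent(mesh: .generatePlane(width: 3, height: 3),
                                     materials: [material]))

// 2. Or scale the entity's transform and push it further back.
entity.scale = SIMD3<Float>(repeating: 3)
entity.position = SIMD3<Float>(0, 0, -2)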
I am using the RoomPlan API to implement merging of multiple spaces, but I found that after merging several spaces, the generated JSON is missing some of the newly added areas, while the USD file and plist file are complete. Does anyone have this problem? I look forward to official support.
This is my code:
public func mergeScan(_ data: String, _ scanName: String, _ directoryName: String) {
    var capturedRoomArray: [CapturedRoom] = []
    // Parse the main structure
    let jsonURL = getRootURL().appending(path: "/\(directoryName)/\(scanName)/scan.json")
    guard let mainStructureRoom = try? loadCapturedRoom(from: jsonURL) else { return }
    capturedRoomArray.append(mainStructureRoom)
    // Add the sub-structure
    if let subStructureRoom = try? loadCapturedRoom(from: data) {
        os_log("loadCapturedRoom string data success: %@", type: .error, String(describing: data))
        capturedRoomArray.append(subStructureRoom)
    }
    os_log("merge scan capturedRoomArray: %@", type: .error, String(describing: capturedRoomArray.count))
    // Merge
    Task {
        do {
            finalStructureResults = try await structureBuilder.capturedStructure(from: capturedRoomArray)
        } catch {
            print("Merging Error: \(error.localizedDescription)")
            return
        }
        do {
            // Save
            // Export JSON
            guard let finalStructureResults else { return }
            try exportJson(from: finalStructureResults, to: jsonURL)
            // Export USD
            let meshDestinationURL = jsonURL.deletingPathExtension().appendingPathExtension("usdz")
            // Export plist
            let metadataDestinationURL = jsonURL.deletingPathExtension().appendingPathExtension("plist")
            try finalStructureResults.export(to: meshDestinationURL,
                                             metadataURL: metadataDestinationURL,
                                             exportOptions: [.mesh])
        } catch {
            print("Merge Error: \(error.localizedDescription)")
            return
        }
    }
}

func exportJson(from capturedStructure: CapturedStructure, to url: URL) throws {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    let data = try encoder.encode(capturedStructure)
    try data.write(to: url)
}
Note: only the JSON is missing the content of this or the next scan; the USDZ and plist are complete.
I am new to visionOS development, and I'm slowly working through the differences between the immersion styles to figure out how I want my app to behave.
It seems that when you use a progressive immersive space, the minimum immersion level (set via the Digital Crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there settings that allow me to change this minimum immersion level?
Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the shared space, with other apps)? OR, is ARKit only available in an ImmersiveSpace with the .full immersion style? Just feels like maybe 'full' is being used in two different ways here...
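For reference, this is the kind of declaration I am asking about; a minimal sketch with placeholder ids, following the documented immersionStyle(selection:in:) pattern:
import SwiftUI
import RealityKit

// Sketch only: an immersive space that supports the mixed, progressive, and full styles.
struct MyImmersiveSpace: Scene {
    @State private var currentStyle: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "immersive") {
            RealityView { _ in
                // RealityKit content goes here.
            }
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}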
Thanks in advance,
-pj
I am trying to implement a game where the character walks on the scene mesh. I am controlling the character with a game controller. I noticed there is a character controller component in Reality Composer Pro, and I am aware that when this component is added, the player cannot have a collision or a physics component.
I need an example that covers adding an entity with the character controller component to the scene and then moving the character using the moveCharacter function.
I was also looking at the documentation https://developer.apple.com/documentation/realitykit/entity/movecharacter(by:deltatime:relativeto:collisionhandler:)
It also takes a deltaTime. Where do we get the deltaTime from? Does it come from a System's update function, and does that also mean that the character controller needs to be moved in the update method?
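This is roughly the pattern I am imagining; a sketch only, assuming the deltaTime comes from the System's update context and using a placeholder velocity instead of real game-controller input:
import RealityKit

// Sketch only: a System that moves character-controlled entities each frame.
struct CharacterMovementSystem: System {
    static let query = EntityQuery(where: .has(CharacterControllerComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let dt = Float(context.deltaTime)           // per-frame delta time from the context
        let velocity: SIMD3<Float> = [0, 0, -0.5]   // placeholder for controller input (m/s)

        for entity in context.scene.performQuery(Self.query) {
            entity.moveCharacter(by: velocity * dt,
                                 deltaTime: dt,
                                 relativeTo: nil) { _ in
                // Handle collisions with the scene mesh here if needed.
            }
        }
    }
}

// Registered once at startup, e.g. CharacterMovementSystem.registerSystem()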
Thanks,
Sarang
I am working on a sports training app for visionOS that requires recognition of fast-moving objects. Currently, I am using ImageTrackingProvider to tag the objects I need. I have noticed that while recognition works well for stationary objects, it does not perform well in tracking moving objects. I assume there is a mix of factors at play:
I am not sure if ARKit is actually built for tracking moving objects, so there could be a refresh rate limit enforced to save battery.
My reference image could be suboptimal/too complex to recognize quickly.
While I can't do anything about #1, I am curious about recommendations for #2. Are there recommendations for the best size of a reference image, its color (would black and white work better?), and its complexity? Also, since the ARKit Resource Group seems to support JPEG & PNG, is there any specific preference for one over the other? Should I prepare the images in any special way to achieve the best possible performance?
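For reference, my setup is essentially the standard image-tracking pattern; a sketch, with the AR resource group name "TrackingImages" as a placeholder:
import ARKit

// Sketch only: image tracking setup; the resource group name is a placeholder.
let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TrackingImages")
)
let session = ARKitSession()

func runImageTracking() async {
    try? await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // Moving targets update here only as often as tracking allows.
        print(update.anchor.originFromAnchorTransform)
    }
}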
Thanks.
let apple = try Entity.load(named: "apple", in: realityKitContentBundle)
works, but
let apple = try Entity.loadModel(named: "apple", in: realityKitContentBundle)
does not work
i.e. (error.localizedDescription = Failed to find resource with name "apple" in bundle)
I am unsure what is causing the problem; apple.usda was created in Reality Composer Pro from primitives and has a single apple object (no root). When I load with Entity.load and print apple, I get:
▿ 'apple' : Entity, children: 1
⟐ Transform
⟐ SynchronizationComponent
▿ 'apple' : ModelEntity
⟐ ModelComponent
⟐ Transform
⟐ CollisionComponent
⟐ PhysicsBodyComponent
⟐ SynchronizationComponent
This nested hierarchy seems redundant to me; is it preferred in ARKit to have such a structure? Why am I unable to load the usda directly as a ModelEntity?
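For now, this is the workaround I am using; a sketch, assuming the ModelEntity is a direct child of the loaded root as in the printed hierarchy above:
import RealityKit
import RealityKitContent

// Sketch only: load with Entity.load, then pull the nested ModelEntity out.
func loadAppleModel() throws -> ModelEntity? {
    let apple = try Entity.load(named: "apple", in: realityKitContentBundle)
    return apple.children.compactMap { $0 as? ModelEntity }.first
}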
According to https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos#Isolate-ARKit-features-not-available-in-visionOS, Body Tracking and several other features are not available on visionOS.
So is there any ETA for these ARKit features to be supported in visionOS? Thanks.
Hi,
I have code that uses ImageTrackingProvider. I am experimenting with glyphs of various complexity and structure to understand which ones are better suited for recognition. Due to the absence of a color printer, I am mostly experimenting with monochrome glyphs, as well as some colored paper squares. I am getting mixed results and would like to validate whether these are the expected results for the current capabilities of ARKit & Vision Pro, or whether there is still an opportunity for improvement by selecting different glyphs.
So far, I have used a colored square of size 5x5 cm, as well as two glyphs provided below.
ARKit Glyph
Abstract Glyph
The ARKit Glyph is not recognizable by ARKit or VisionPro at all, no matter the lighting conditions or the angles from which I view it.
The Abstract Glyph is recognizable consistently at a 90-degree angle, and sometimes at other angles too. The maximum distance at which I was able to detect it was around 15cm, maybe less.
I am really curious if there is any specification that I can check against to understand whether my glyphs are good or not, and at what maximum distance such glyphs can be recognized if they were 5x5cm in size.
I am also curious whether ARKit is capable of recognizing images of 5x5cm size at a distance between 2 and 3 meters, and if so, how I should prepare the glyph for such requirements.
Thanks in advance,
Nikita
P.S. I am skipping the question of the image's yaw angle, as well as the angle between the image's normal and the camera view, but I guess these also have an impact on the ability to recognize the original image.
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used.
AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras: one LiDAR and one YUV.
I found that the LiDAR data is limited to 30 fps, which also caps the YUV data at 30 fps. But I really need the 240 fps YUV data.
Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera?
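To show what I have been checking so far; a sketch, and only my assumption that inspecting the device formats is the right way to look for a high-frame-rate format that still offers depth:
import AVFoundation

// Sketch only: list the LiDAR device's formats, noting max frame rate and depth support.
if let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                        for: .video,
                                        position: .back) {
    for format in device.formats {
        let maxFPS = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        let hasDepth = !format.supportedDepthDataFormats.isEmpty
        print("max \(maxFPS) fps, depth: \(hasDepth), \(format.formatDescription)")
    }
}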
Any reply would be appreciated.
Hi folks!
I have been working with a team on a Vision Pro app using Reality Composer Pro. One thing we have found is that having multiple developers edit the RC Pro scene is a continuous problem, similar to when multiple developers edit a storyboard.
RC Pro maintains a SceneMetadataList.json file that indexes the file contents of the project and is updated even as the scene hierarchy is opened and closed, not to mention by other changes to scene content. We are getting frequent version control conflicts with this file as we each make changes and edits to the scene, or even just browse the scene without making any substantive changes.
It seems like it would be safe to add the SceneMetadataList.json file in an RC Pro project to .gitignore. Is that recommended? Are there any downsides to that?