Hi all,
I am having trouble debugging an issue where the wireframe object-entity representation from the object-tracking demo ("Explore object tracking for visionOS") appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing its world coordinates, but this moves the object in both the left and the right eye. Could this be due to the visionOS beta update (2.0 --> 2.2)? I am currently using visionOS 2.2.
Thanks!
By applying for the Enterprise APIs, we can obtain the video frames captured by Apple Vision Pro's main camera, and we then process those frames to remove a certain object from the scene. However, we could not find a way to insert the processed frames back into the camera stream that the system presents to the user.
So I would like to ask: is there an API that can insert processed video frames into the original data and present them to the user?
The desired effect is similar to turning the Digital Crown on the right side of Apple Vision Pro, which blends the physical world and digital content seamlessly. Is there a related API that can achieve this?
STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera stream.
System: visionOS 2.0
API used: Enterprise APIs (main camera access)
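For reference, obtaining the frames already works for us; below is a minimal sketch of that part (assuming the Enterprise license file and the main-camera-access entitlement are in place). What we cannot find is the step that would write the processed buffers back into the passthrough stream.

    import ARKit

    // Minimal sketch: read main-camera frames with the Enterprise API.
    // Assumes the Enterprise license file and the main-camera-access
    // entitlement are present in the app.
    func readMainCameraFrames() async throws {
        let session = ARKitSession()
        let cameraFrameProvider = CameraFrameProvider()

        // Pick a supported format for the left main camera.
        guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first else { return }

        try await session.run([cameraFrameProvider])

        guard let updates = cameraFrameProvider.cameraFrameUpdates(for: format) else { return }
        for await frame in updates {
            guard let sample = frame.sample(for: .left) else { continue }
            let pixelBuffer = sample.pixelBuffer
            // ...process pixelBuffer here; the open question is how to feed the
            // processed result back into what the user sees as passthrough.
            _ = pixelBuffer
        }
    }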
How to find main (left) camera transform from world anchor? (Enterprise API)
From CameraFrameProvider() I can get a frame sample which has an "extrinsics" parameter. How is it defined? Relative to what point/anchor?
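To make the question concrete, this is roughly how I am trying to combine it with the device anchor; whether extrinsics is camera-from-device or device-from-camera (i.e., whether it needs to be inverted) is exactly what I'm unsure about:

    import ARKit
    import QuartzCore
    import simd

    // Sketch: derive a camera-to-world transform for the left main camera.
    // ASSUMPTION: extrinsics is the camera pose relative to the device origin;
    // if it is defined the other way around, use simd_inverse(extrinsics) instead.
    func cameraToWorld(worldTracking: WorldTrackingProvider,
                       sample: CameraFrame.Sample) -> simd_float4x4? {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(
            atTimestamp: CACurrentMediaTime()) else { return nil }

        let worldFromDevice = deviceAnchor.originFromAnchorTransform
        let deviceFromCamera = sample.parameters.extrinsics
        return worldFromDevice * deviceFromCamera
    }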
Hi!
I'd like to know whether it's possible to load an immersive scene after scanning (recognizing) preregistered images or objects.
I tried to load the immersive scene after scanning images and objects, but it didn't work well.
Please let me know the solution if it's possible. Here is the ImmersiveView.swift code I tried.
// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }

            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
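An alternative I'm considering is loading the entity directly inside RealityView's async make closure instead of waiting for a @State change in onAppear; a minimal sketch, reusing the "Immersive", "MoneyGuns", and "PlayTimeline" names from the code above:

    import SwiftUI
    import RealityKit
    import RealityKitContent

    // Sketch: load the scene inside RealityView's async make closure instead of
    // relying on a @State value that is only set later in onAppear.
    struct ImmersiveSceneView: View {
        var body: some View {
            RealityView { content in
                if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(scene)
                    if scene.findEntity(named: "MoneyGuns") != nil {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                    }
                } else {
                    print("Failed to load immersive scene.")
                }
            }
        }
    }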
On iOS, a screenshot can be taken with view.layer.render(in: UIGraphicsGetCurrentContext()!). What should replace view.layer on visionOS so that render(in: UIGraphicsGetCurrentContext()!) can be called?
In visionOS, ARKit's purpose is to integrate the virtual with the real. However, most functionality can be implemented easily with RealityKit alone (except for scene reconstruction, room tracking, and the Enterprise APIs), so do I still need to use ARKit? What is the difference between them?
Hi!
I'm currently building content for Vision Pro that uses Room Tracking.
I searched for information about it; these are the links I visited, but I could not find what I wanted to know:
Apple ARKit
Create enhanced spatial computing experiences with ARKit
RoomTrackingProvider
I'd like to know whether it's possible to remember a previously recognized room structure and to add content at certain world anchors in that room when the user enters the room again.
For example, a developer saves the room structure, room info (with a room ID), and the room's world anchors using the Room Tracking feature.
The developer then places entities, built in Xcode and Reality Composer Pro, at certain positions in the room, so users can see that content whenever they visit the room.
Is this possible?
If there is example code or a sample project about this, please let me know.
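The direction I'm exploring is sketched below; it relies on world anchors added through WorldTrackingProvider being persisted by the system and re-delivered in later sessions, and the entitiesByAnchorID dictionary is my own bookkeeping, not API:

    import ARKit
    import RealityKit

    // Sketch: persist a world anchor and re-attach content when the system
    // re-delivers it in a later session. Assumes an ARKitSession that is
    // already running this WorldTrackingProvider.
    final class RoomContentPlacer {
        let worldTracking = WorldTrackingProvider()
        var entitiesByAnchorID: [UUID: Entity] = [:]   // my own bookkeeping, not API

        // Save a new world anchor at a position inside the room.
        func saveAnchor(at transform: simd_float4x4) async throws -> UUID {
            let anchor = WorldAnchor(originFromAnchorTransform: transform)
            try await worldTracking.addAnchor(anchor)
            return anchor.id
        }

        // When anchors come back (including in later sessions), re-place content.
        func observeAnchors(root: Entity) async {
            for await update in worldTracking.anchorUpdates {
                guard update.anchor.isTracked,
                      let entity = entitiesByAnchorID[update.anchor.id] else { continue }
                entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                if entity.parent == nil { root.addChild(entity) }
            }
        }
    }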
I have a VideoMaterial inside a RealityView and want to attach this to a DockingRegion inside an immersive environment.
It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video.
So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController?
The latter is not an option as I need custom controls.
Hi,
I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11).
I used Core ML models like DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for those devices.
Any advice or pointers to resources would be much appreciated.
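For context, the per-frame depth inference I'm running looks roughly like the sketch below (DepthAnythingV2 stands for the compiled Core ML model class in my project; downscaling the input via Vision's crop-and-scale option is how I'm trying to reduce the load):

    import Vision
    import CoreML
    import CoreVideo

    // Sketch: run a Core ML depth model on a camera pixel buffer via Vision.
    // `DepthAnythingV2` stands for whatever compiled .mlmodel class is in my project.
    final class DepthEstimator {
        private let request: VNCoreMLRequest

        init() throws {
            let model = try VNCoreMLModel(for: DepthAnythingV2(configuration: MLModelConfiguration()).model)
            request = VNCoreMLRequest(model: model)
            request.imageCropAndScaleOption = .scaleFill   // the model sees a downscaled frame
        }

        // Returns the model's depth output as a pixel buffer, if available.
        func depthMap(for pixelBuffer: CVPixelBuffer) throws -> CVPixelBuffer? {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try handler.perform([request])
            return (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
        }
    }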
I am trying to create reflections and lighting in a fully virtual scene, and from the way the documentation reads, VirtualEnvironmentProbeComponent seems to be the key piece. I cannot seem to get it to work, however. Has anyone figured this out?
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit.
Specifically, I need to understand:
The exact rotation matrix applied to the front-facing camera's output in ARKit.
Whether this matrix is consistent across all iPhone models or if there are variations.
If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
How this rotation matrix relates to the transform property of ARFrame.camera.
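To make the question concrete, this is roughly what I read per frame; whether viewMatrix(for: .portrait) is the intended way to account for portrait orientation is part of what I'm asking:

    import ARKit
    import UIKit
    import simd

    // Sketch: what I read per frame from a face-tracking (front camera) session.
    func logFrontCameraTransforms(from frame: ARFrame) {
        // Camera pose in world space (column-major simd_float4x4).
        let cameraTransform = frame.camera.transform

        // Rotation part only: the upper-left 3x3 of the transform.
        let rotation = simd_float3x3(
            simd_make_float3(cameraTransform.columns.0),
            simd_make_float3(cameraTransform.columns.1),
            simd_make_float3(cameraTransform.columns.2)
        )

        // View matrix as ARKit derives it for a portrait interface orientation.
        let viewMatrix = frame.camera.viewMatrix(for: .portrait)

        print("rotation:", rotation, "viewMatrix:", viewMatrix)
    }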
Hi, I’m working on a portfolio project for Vision Pro these days.
I have two projects: one made with Unity and one made with Xcode (using ARKit and RealityKit tracking features). Is it possible to combine these two projects into one app?
For example, using SwiftUI buttons in a Reality Composer Pro scene to jump to a Unity scene, and then going back from the Unity scene to the Reality Composer Pro scene within the same app.
If you check the code here,
https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough
var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }

            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }

            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }

                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()

                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }

            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
}
the only way it deals with the error is fatalError, and I don't think I can throw anything or return anything else from that closure.
Is there a way I can handle this gracefully and show a message box in the UI?
I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure result, but I couldn't find a nice way to do so.
Let me know if you have ideas.
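For the record, the workaround I'm currently sketching (rendererError on the app model and the helper view are my own additions, not part of Apple's sample) is to record the error, return early from the CompositorLayer closure, and let a normal SwiftUI view dismiss the space and present an alert:

    import SwiftUI

    // Sketch only. `rendererError` on the app model and this helper view are my
    // own additions, not part of Apple's sample.
    //
    // 1) In the CompositorLayer closure, record the error and bail out instead of
    //    calling fatalError:
    //
    //     do {
    //         pathCollection = try PathCollection(layerRenderer: layerRenderer)
    //     } catch {
    //         Task { @MainActor in appModel.rendererError = error }
    //         return
    //     }
    //
    // 2) In the main window, observe the error, dismiss the space, and show an alert:
    struct RendererErrorHandler: View {
        @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
        var appModel: AppModel   // assumes an @Observable model with `rendererError: Error?`

        var body: some View {
            Color.clear
                .onChange(of: appModel.rendererError != nil) { _, hasError in
                    if hasError { Task { await dismissImmersiveSpace() } }
                }
                .alert("Rendering setup failed",
                       isPresented: .constant(appModel.rendererError != nil)) {
                    Button("OK") { appModel.rendererError = nil }
                }
        }
    }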
Hi everyone,
I'm looking for some guidance on how to create an .arobject file. Is there a way to generate it from a 3D model (like a .objcap or .usdz or .fbx/.obj file)? Or can it only be generated by scanning a real-world object using the ARKitScanner project? Any advice or resources on this would be greatly appreciated!
I've only found this project for scanning real-world objects.
Thanks in advance!
Hello Apple Team,
Is it possible to change the zoom factor, exposure, white balance, and other settings of an iOS ARKit session?
I know how to do it using an AVCaptureSession;
however, I can't figure out how to access the AVCaptureDeviceInput of the current AR session.
Thanks
PS: I'm using ARKit and RealityKit on iOS 17.
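For reference, the direction that looks most promising to me is ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera (iOS 16+); a minimal sketch, though I'm not sure every property can be changed while ARKit owns the camera:

    import ARKit
    import AVFoundation

    // Sketch, iOS 16+: adjust the capture device that ARKit itself is using,
    // instead of creating a separate AVCaptureSession.
    func configureCameraForCurrentARSession() {
        guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
            print("No configurable capture device on this device/OS.")
            return
        }
        do {
            try device.lockForConfiguration()
            device.videoZoomFactor = 2.0                        // example values only
            device.exposureMode = .continuousAutoExposure
            device.whiteBalanceMode = .continuousAutoWhiteBalance
            device.unlockForConfiguration()
        } catch {
            print("Failed to lock capture device: \(error)")
        }
    }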
I have created two scenes, one immersive and one volumetric, using Reality Composer Pro. In my test app I can view both and they render correctly.
However, I would like to add entities programmatically. I am trying this:
var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
            viewModel.rootEntity = scene
            content.add(scene)

            let anchorEntity = AnchorEntity(world: [0, 0, -0.5])
            let sphere = MeshResource.generateSphere(radius: 2.0)
            let material = SimpleMaterial(color: .red, roughness: 0.5, isMetallic: true)
            let modelEntity = ModelEntity(mesh: sphere, materials: [material])
            anchorEntity.addChild(modelEntity)
            content.add(anchorEntity)
        }
    }
}
However, the sphere does not appear in the volume. I also tried it in the immersive space and it does not appear there either.
What am I missing?
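One thing I still need to rule out: with a radius of 2.0 meters and the anchor only 0.5 m in front of the origin, I suspect the viewer ends up inside the sphere (and the sphere far exceeds the volume's bounds), so nothing is drawn. A variant I plan to test, inside the same RealityView closure as above, with a much smaller sphere:

    // Same placement code as above, but with a sphere small enough (and far
    // enough away) that the viewer is not sitting inside it.
    let anchorEntity = AnchorEntity(world: [0, 0, -1.0])
    let sphere = MeshResource.generateSphere(radius: 0.1)   // 10 cm instead of 2 m
    let material = SimpleMaterial(color: .red, roughness: 0.5, isMetallic: true)
    let modelEntity = ModelEntity(mesh: sphere, materials: [material])
    anchorEntity.addChild(modelEntity)
    content.add(anchorEntity)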
Hi everyone,
I’m working on an app for VisionOS that needs to recognize individual rooms in a hallway based on the person the room belongs to (using the name displayed on each office door). Is there any sample code or resource that can guide me in implementing this feature?
Thanks in advance for your help!
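To make the question more concrete, the text-recognition half of what I have in mind is sketched below with Vision; getting camera frames on visionOS would additionally require the Enterprise main-camera-access API, which is an assumption on my side:

    import Vision
    import CoreVideo

    // Sketch: read door-sign text from a camera pixel buffer with Vision.
    func recognizeDoorText(in pixelBuffer: CVPixelBuffer,
                           completion: @escaping ([String]) -> Void) {
        let request = VNRecognizeTextRequest { request, error in
            guard error == nil,
                  let observations = request.results as? [VNRecognizedTextObservation] else {
                completion([])
                return
            }
            // Best candidate string from each detected text region.
            completion(observations.compactMap { $0.topCandidates(1).first?.string })
        }
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }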
I have created a portal and attached it to a wall using the AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. Initially, I attempted to locate relevant information within the demo code, but I encountered difficulties in comprehending certain sections. I would appreciate it if someone could provide a step-by-step explanation or a reference to the appropriate code. Thank you for your assistance.
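For context, this is the direction I've been experimenting with: reading the wall's extent from ARKit plane detection and resizing the portal's plane to match; whether the detected plane corresponds exactly to the wall the AnchorEntity snapped to is something I haven't confirmed:

    import ARKit
    import RealityKit

    // Sketch: detect vertical planes classified as walls and resize a portal
    // plane (a unit-sized ModelEntity created elsewhere) to the wall's extent.
    func trackWallSize(portalPlane: ModelEntity) async throws {
        let session = ARKitSession()
        let planeDetection = PlaneDetectionProvider(alignments: [.vertical])
        try await session.run([planeDetection])

        for await update in planeDetection.anchorUpdates {
            let anchor = update.anchor
            guard anchor.classification == .wall else { continue }

            let width = anchor.geometry.extent.width
            let height = anchor.geometry.extent.height

            portalPlane.model?.mesh = .generatePlane(width: width, height: height)
            portalPlane.transform = Transform(matrix: anchor.originFromAnchorTransform)
        }
    }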
In my visionOS app, I want to detect whether the user is inside a room. My idea is as follows:
Check whether there are walls around the user.
How can I do this? Thanks!
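For reference, the sketch below is what I plan to try with RoomTrackingProvider; I'm assuming RoomAnchor's isCurrentRoom flag is the right signal for "the user is inside this room":

    import ARKit

    // Sketch: watch room anchors and report whether the user is currently inside one.
    func observeCurrentRoom() async throws {
        let session = ARKitSession()
        let roomTracking = RoomTrackingProvider()
        try await session.run([roomTracking])

        for await update in roomTracking.anchorUpdates {
            // isCurrentRoom is my reading of the API: true while the user is
            // physically inside the room that this anchor describes.
            if update.anchor.isCurrentRoom {
                print("User appears to be inside room \(update.anchor.id)")
            }
        }
    }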
With Xcode 16 and the visionOS 2.0 SDK, the results of consecutive ar_anchor_get_timestamp calls can differ by many seconds.
Is there any way to detect these timestamp jumps?
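For now I detect the jumps manually by comparing consecutive timestamps against a threshold; a minimal sketch (assuming the timestamps are in seconds; the one-second threshold is arbitrary):

    import Foundation

    // Sketch: flag suspicious jumps between consecutive anchor timestamps.
    struct TimestampJumpDetector {
        private var previousTimestamp: TimeInterval?
        let threshold: TimeInterval = 1.0   // arbitrary; tune for your use case

        // Feed each value returned by ar_anchor_get_timestamp (assumed to be seconds).
        mutating func check(_ timestamp: TimeInterval) -> Bool {
            defer { previousTimestamp = timestamp }
            guard let previous = previousTimestamp else { return false }
            let jumped = abs(timestamp - previous) > threshold
            if jumped {
                print("Timestamp jump detected: \(previous) -> \(timestamp)")
            }
            return jumped
        }
    }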