In full immersive (VR) mode on visionOS, if I'm using Compositor Services and a custom Metal renderer, can I still have the user's hands shown as passthrough, so my hands appear as they do in reality? If so, how?
If not, is this a valid feature request in the short term? It's purely for aesthetic reasons: I'd like to see my own hands, even in immersive mode.
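For context on the relevant API surface (my understanding, worth verifying, not a confirmed answer): visionOS exposes an `upperLimbVisibility(_:)` scene modifier that controls whether the system composites the wearer's real hands over fully immersive content, including a Compositor Services scene. A minimal sketch, where `MyLayerConfiguration` and `MyRenderer` are placeholder names:

```swift
import SwiftUI
import CompositorServices

// Sketch only: MyLayerConfiguration and MyRenderer are placeholders for a
// CompositorLayerConfiguration and a custom Metal render loop.
struct FullyImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            CompositorLayer(configuration: MyLayerConfiguration()) { layerRenderer in
                MyRenderer.startRenderLoop(layerRenderer)
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
        // Ask the system to draw the user's real hands over the rendered
        // scene; .hidden would remove them entirely.
        .upperLimbVisibility(.visible)
    }
}
```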
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS.
I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit.
This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
And in UIKit, how do I make the call to open the ImmersiveSpace?
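For what it's worth, ImmersiveSpace is a SwiftUI scene type with no direct UIKit equivalent, so one workaround (a sketch under that assumption, not a confirmed recipe) is to keep the space declared in a SwiftUI scene and trigger it from UIKit through a hosted SwiftUI view, since the `openImmersiveSpace` action lives in the SwiftUI environment:

```swift
import SwiftUI
import UIKit

// A hosted SwiftUI view that can reach the openImmersiveSpace action.
struct ImmersiveLauncher: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                // "ImmersiveSpace" must match an ImmersiveSpace(id:)
                // declared in the app's SwiftUI scene body.
                await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}

// From a UIKit view controller, embed the launcher:
final class MyViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let host = UIHostingController(rootView: ImmersiveLauncher())
        addChild(host)
        view.addSubview(host.view)
        host.view.frame = view.bounds
        host.didMove(toParent: self)
    }
}
```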
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs relating to stereo 3D images.
I guess my question can be put more generally: is there any method to render different content for each eye? This could also help someone who only has sight in one eye.
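One possible direction (an assumption on my part, not verified): with Compositor Services and a custom Metal renderer, each frame's drawable exposes one view per eye, so a different source texture can be drawn for each. A rough sketch, with `leftTexture`, `rightTexture`, and the encoder setup left as placeholders:

```swift
import CompositorServices
import Metal

// Sketch: draw a different stereo image into each eye's view.
// The actual render-pass and viewport setup is elided.
func render(drawable: LayerRenderer.Drawable, leftTexture: MTLTexture, rightTexture: MTLTexture) {
    for (index, view) in drawable.views.enumerated() {
        // When rendering in a stereo layout, view 0 and view 1
        // correspond to the two eyes.
        let source = (index == 0) ? leftTexture : rightTexture
        // ... configure the render pass / viewport for `view`,
        //     then draw a full-screen quad sampling `source` ...
        _ = (view, source)
    }
}
```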
Hi,
I have an existing iPhone/iPad app that I'm considering bringing to Vision Pro. It's an informational app that shares info with the watch.
What would be better:
To add a Vision Pro target to the existing app, or
To make a separate app so I can use all of the Vision Pro's features?
What are YOU doing to support the Vision Pro?
Thanks,
Dan
I am trying to make a world anchor where a user taps a detected plane.
How am I trying this?
First, I add an entity to a RealityView like so:
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
This:
Declares an anchor that requires a wall of at least 2 meters by 2 meters to appear in the scene, with continuous tracking
Makes an empty entity and gives it a 2m by 2m by 2cm collision box
Attaches the collision entity to the anchor
Finally then adds the anchor to the scene
It appears in the scene like this:
Great! Appears to sit right on the wall.
I then add a tap gesture recognizer like this:
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }

        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))

        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
I ASSUME this:
Makes a world position from where the tap hits the collision entity.
Combines that position with the collision plane's rotation to create a Pose3D.
Makes a world anchor from that pose (so it can be persisted in a world tracking provider).
Then I make a basic cube entity and give it that transform.
Weird stuff: it doesn't appear on the plane; it appears behind it.
Why? What have I done wrong?
The X and Y of the tap location appear spot on, but something is "off" about the Z position.
Also, is there a recommended way to debug this with the available tools?
I'm guessing I'll have to file a DTS about this because feedback on the forum has been pretty low since labs started.
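Not an answer, but a debugging sketch that might help isolate the Z offset: drop an unanchored marker at the raw converted tap point and compare it with the cube placed via the WorldAnchor transform. If the marker sits on the wall but the cube does not, the offset is introduced in the Pose3D/transform step rather than in the hit test. It may also be worth remembering that the collision box is 2 cm deep and centered on the entity, so hits register on the box's surface rather than exactly on the wall plane.

```swift
// Inside the .onEnded closure, after computing worldPosition:
let debugSphere = ModelEntity(
    mesh: .generateSphere(radius: 0.02),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)
// Place the marker directly at the converted tap point, bypassing
// the Pose3D -> WorldAnchor -> Transform round trip.
debugSphere.position = worldPosition
realityViewContent?.add(debugSphere)
```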
Seeing a flurry of these in Xcode 15 Beta 8 while debugging an ARKit app:
<<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
Hi, I'm doing research on AR using a real-world object as the anchor. For this I'm using ARKit's ability to scan and detect 3D objects.
Here's what I've found so far after scanning and testing object detection on many objects:
It works best on 8-10" objects (basically objects you can place on your desk).
It works best on objects that have many visual features/details (makes sense, just like plane detection).
Although things seem to work well, and this is exactly the behavior I need, I noticed detection issues with:
Different lighting setups, meaning just the direction of the light. I always try to maintain bright room lighting, but I noticed that testing in the morning versus the evening will sometimes, if not most of the time, make detection harder or make it fail.
Different environments: simply moving the object from one place to another will make detection fail or take significantly longer. (This isn't about the scanning process; this is purely anchor detection from the same .arobject file on the same real-world object.)
These two difficulties make me wonder whether scanning and detecting 3D objects will ever be reliable enough for real-world use. For example, say you want to ship an AR app containing your product's manual, where the app detects the product and points out the location and explanation of its features.
Has anyone tried this before? Does your research show the same behavior as mine? Would using LiDAR help with scanning and detection accuracy?
So far there doesn't seem to be much information on what ARKit actually does when scanning and detecting; if anyone has more information, I'd like to learn how to make better scans.
Any help or information on this that you're willing to share would be really appreciated.
Thanks
Where can I find sample videos, sample encoding apps, viewing apps, etc.?
I see specs and high-level explanations, but I'm not finding any samples, command lines, or app documentation explaining how to make and view these files.
Thank you, looking forward to promoting a spatial video rich future.
Hello,
I am developing a visionOS application that uses various data providers like Image Tracking, Plane Detection, and Scene Reconstruction, but these are not supported in the visionOS Simulator. What is the workaround for this issue?
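A common mitigation (sketched under the assumption that the app can run with canned data): gate each provider on its `isSupported` flag at runtime and fall back in the Simulator, which reports these providers as unsupported.

```swift
import ARKit

// Sketch: run real providers on device, fall back in the Simulator.
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

if PlaneDetectionProvider.isSupported {
    Task {
        try await session.run([planeDetection])
        for await update in planeDetection.anchorUpdates {
            // Handle real plane anchors on device.
            _ = update
        }
    }
} else {
    // Simulator (or unsupported hardware): drive the experience with
    // hard-coded test anchors instead.
    print("PlaneDetectionProvider is not supported here")
}
```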
Greetings,
Been playing around with some of the first documented examples and apps. After a while, I wanted to move to a custom space in the Simulator. I scanned through the options in the Simulator and in Xcode but couldn't find how to add custom Simulated Scenes (e.g. my own room) to the Simulator.
Could someone point out how to get this done? Even pointers for doing it programmatically would be welcome.
Thanks,
We are attempting to update the texture on a node. The code below works correctly when we use a color, but it encounters issues when we attempt to use an image. The image is available in the bundle, and it displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please assist us with this issue.
for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform
    let imageMaterial = SCNMaterial()
    node.geometry?.materials = [imageMaterial]
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown
    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}
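For comparison, a sketch of the image path ("floorTexture" is a placeholder asset name). One common reason a solid color works while an image does not is that the geometry carries no texture coordinates, which can happen with generated or RoomPlan-derived meshes; the wrap and transform settings below only matter once valid UVs exist:

```swift
let imageMaterial = SCNMaterial()
if let image = UIImage(named: "floorTexture") {
    // diffuse.contents accepts a UIImage directly.
    imageMaterial.diffuse.contents = image
    imageMaterial.diffuse.wrapS = .repeat
    imageMaterial.diffuse.wrapT = .repeat
    // Tile the image 4x4 across the surface instead of stretching it once.
    imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(4, 4, 1)
}
node.geometry?.materials = [imageMaterial]
```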
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote:
let checkpointDirectoryPath = "/path/to/the/Snapshots/"
let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = checkpointDirectoryURL
} else {
    // Fallback on earlier versions
}
But I didn't notice any speed or performance improvement. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is used during reconstruction.
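For reference, a sketch of how the checkpoint directory plugs into the full reconstruction flow (paths are placeholders; my understanding is that the checkpoints must come from a compatible capture session, or the folder is silently ignored):

```swift
import RealityKit

var configuration = PhotogrammetrySession.Configuration()
if #available(macOS 14.0, *) {
    // Must point at checkpoints produced by a compatible iOS capture;
    // mismatched data is ignored rather than reported as an error.
    configuration.checkpointDirectory = URL(fileURLWithPath: "/path/to/the/Snapshots/", isDirectory: true)
}

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/the/Photos/", isDirectory: true),
    configuration: configuration
)
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/tmp/model.usdz"), detail: .medium)
])
```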
I am using DeepAR and PixelSDK in my project, and both SDKs have a file with the same name, "cameraVieewController". When I try to run the application I get this error. Is there any solution to resolve this naming conflict?
I am writing to seek assistance with a challenge I'm facing in a 3D model rendering project. I believe your expertise in this area could be immensely helpful in resolving the issue.
The problem involves difficulties displaying textures on both parent and child nodes within the 3D model. Here are the key details:
The model contains wall_grp objects (doors, windows, and walls). We are using RoomPlan data in an SCNView.
This code depends on the SceneKit and RoomPlan APIs.
When we comment out the child-node code it works, but in that case we don't have windows and doors on the wall.
func updateWallObjects() {
    if !arch_grp.isEmpty {
        for obj in arch_grp[0].childNodes {
            let color = UIColor(red: 255/255, green: 229/255, blue: 204/255, alpha: 1.0)
            let parentNode = obj.flattenedClone()
            for childObj in obj.childNodes {
                let childNode = childObj.flattenedClone()
                let childMaterial = SCNMaterial()
                childNode.geometry?.materials = [childMaterial]
                if let name = childObj.name {
                    if removeNumbers(from: name) != "Wall" {
                        childNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
                    } else {
                        childNode.geometry?.firstMaterial?.diffuse.contents = color
                    }
                }
                // Note: childNode (the clone that received the new material) is
                // never added to the scene; the original childObj is re-parented
                // instead, so the material changes above are discarded.
                childObj.removeFromParentNode()
                parentNode.addChildNode(childObj)
            }
            let material = SCNMaterial()
            parentNode.geometry?.materials = [material]
            parentNode.geometry?.firstMaterial?.diffuse.contents = color
            obj.removeFromParentNode()
            arch_grp[0].addChildNode(parentNode)
        }
    }
}
Please advise.
I want to scan a person's face via the TrueDepth camera, generate a 3D face model, and display the scene, but I haven't found any solutions.
Has anybody integrated this type of functionality? Please help me out or provide sample code to scan and preview the scanned face in an SCNView or ARView.
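Not a full scan-to-model pipeline, but a sketch of a live face-mesh preview with ARKit face tracking in an ARSCNView, which may be a starting point (the class and wiring here are illustrative, not a drop-in solution):

```swift
import ARKit
import SceneKit
import UIKit

final class FaceViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        // Face tracking requires a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = sceneView.device,
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
        let node = SCNNode(geometry: faceGeometry)
        node.geometry?.firstMaterial?.fillMode = .lines  // wireframe preview
        return node
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Keep the mesh in sync with the tracked face.
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
```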
Hi all,
I don't have a Vision Pro (yet), and I'm wondering if it's possible to preview my Reality Composer Pro project in AR using an iPad Pro or the latest iPhones.
I'm also interested in teaching others: I'm a college professor, and I don't believe my students have Vision Pros either.
I could always use the iOS versions, as I have done in the past, but the Pro version is much more capable, and it would be great to be able to use it.
Thanks for any comments on this!
The documentation says that MeshAnchor should have a property of type MeshAnchor.MeshClassification for getting the classification of the mesh (floor, furniture, table, etc.).
Is there a way to access this property?
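My understanding (worth verifying against the current docs) is that per-face classifications are exposed through `MeshAnchor.Geometry.classifications` when scene reconstruction is run in `.classification` mode. A sketch:

```swift
import ARKit

let session = ARKitSession()
// Request classification data along with the reconstructed mesh.
let sceneReconstruction = SceneReconstructionProvider(modes: [.classification])

Task {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // classifications holds one MeshAnchor.MeshClassification raw value
        // per face; it is nil when classification wasn't requested.
        if let classifications = meshAnchor.geometry.classifications {
            print("classified faces:", classifications.count)
        }
    }
}
```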
Using the face anchor feature in Reality Composer, I'm exploring the potential for generating content movement based on facial expressions and head movement.
In my current project, I've positioned a horizontal wood plane on the user's face, and I've added some dynamic physics-enabled balls on the wood surface. While I've successfully anchored the wood plane to the user's head movements, I'm facing a challenge with the balls. I'm aiming to have these balls respond to the user's head tilts, effectively rolling in the direction of the head movement. For instance, a tilt to the right should trigger the balls to roll right, and likewise for leftward tilts.
However, my attempts thus far have not yielded the expected results, as the balls seem to be unresponsive to the user's head movements. The wood plane, on the other hand, follows the head's motion seamlessly.
I'd greatly appreciate any insights, guidance, or possible solutions you may have regarding this matter. Are there specific settings or techniques I should be implementing to enable the balls to respond to the user's head movement as desired?
Thank you in advance for your assistance.
I'm using ARKit with Metal in my iOS app, without ARSCNView. Most of the time it works fine. However, sometimes the whole scene drifts away (especially in the first few seconds); then it sort of comes back, but far from the original place. I don't see this effect in other model viewers, Unity-based or WebAR-based, and surely they use ARKit under the hood (not all, but most of them).
Here is a snippet of my code:
private let session = ARSession()

func initialize() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.maximumNumberOfTrackedImages = 10
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.automaticImageScaleEstimationEnabled = true
    configuration.isLightEstimationEnabled = true
    session.delegate = self
    session.run(configuration)
}

func getCameraViewMat(...) -> simd_float4x4 {
    // ...
    return session.currentFrame.camera.viewMatrix(for: .portrait)
}

func createAnchor(...) -> Int {
    // ...
    anchors[id] = ARAnchor(transform: mat)
    session.add(anchor: anchors[id]!)
    return id
}

func getAnchorTransform(...) -> simd_float4x4 {
    // ...
    return anchors[id]!.transform
}

func onUpdate(...) {
    // draw session.currentFrame.rawFeaturePoints!.points
    // draw all ARPlaneAnchor
}

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    print("TRACKING STATE CHANGED: \(camera.trackingState)")
}
I can see it's not just an anchors problem; everything moves, including the point cloud. The tracking state is normal, and it changes to limited only after the drift has occurred, which is too late.
What can I do to prevent the drift?
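Not a guaranteed fix, but one thing worth instrumenting (a sketch): the frame's world-mapping status from the session delegate. Early relocalization and map refinement can look like the whole scene jumping, and delaying content placement until mapping settles sometimes hides it:

```swift
import ARKit

// Add to the same ARSessionDelegate that already handles tracking state.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    switch frame.worldMappingStatus {
    case .notAvailable, .limited:
        // Early in the session: consider deferring anchor/content
        // placement until mapping settles, so it doesn't appear to drift.
        break
    case .extending, .mapped:
        break
    @unknown default:
        break
    }
}
```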