I'm exploring face tracking and experimenting with ARKit's ARSCNFaceGeometry face mesh. I'm running a minimal demo application on the latest iPad Pro M4 11-inch, and I've provided the code below.
I've heard that Apple still offers some of the best face tracking technology on consumer devices, largely because they are one of the few that combine depth and image data. Both a colleague and I tested the demo, and while it works as well as or better than some other solutions we tried, we weren't particularly impressed compared to Google's MediaPipe or Nvidia's Maxine, both of which rely solely on image data, without depth. In our case, the ARKit face mesh doesn't always align perfectly with the chin, and as the face rotates, vertices in some areas shift by up to a centimeter from their original position.
This led us to question whether our demo app was using the TrueDepth sensor at all. To test this, we used a piece of cardboard with a small hole punched in it and taped it over the sensor array, leaving only the camera exposed. On the iOS lock screen, this prevents FaceID from working, but we still get a clear image from the camera. With the TrueDepth sensor blocked, the face mesh tracking in our app still functioned, but honestly, we couldn’t detect a significant difference in tracking performance with or without the TrueDepth sensor obscured.
Could we be setting up the face tracking configuration incorrectly? Or has face tracking in newer versions of iOS become less dependent on the TrueDepth sensor?
The controller:
import SwiftUI
import ARKit

struct FaceTrackingView1: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> FaceTrackingViewController1 {
        return FaceTrackingViewController1()
    }

    func updateUIViewController(_ uiViewController: FaceTrackingViewController1, context: Context) {
    }
}

class FaceTrackingViewController1: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView = ARSCNView(frame: view.bounds)
        sceneView.delegate = self
        sceneView.automaticallyUpdatesLighting = true
        view.addSubview(sceneView)
        let config = ARFaceTrackingConfiguration()
        sceneView.session.run(config)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor else { return nil }
        let faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
        let faceNode = SCNNode(geometry: faceGeometry)
        faceNode.geometry?.firstMaterial?.fillMode = .lines // Makes it a wireframe mesh
        return faceNode
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
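For what it's worth, here is a check we're considering adding to verify whether depth frames are actually being delivered during face tracking (a minimal sketch; it assumes we also set sceneView.session.delegate = self in viewDidLoad, which the demo above doesn't currently do):

// Sketch: log whether ARKit delivers TrueDepth frames while face tracking runs.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let depth = frame.capturedDepthData {
        // Depth frames arrive at a lower rate than color frames, so some frames will report nil.
        let map = depth.depthDataMap
        print("Depth frame: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
    } else {
        print("No depth data on this frame")
    }
}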
The view:
import SwiftUI

struct ContentView: View {
    @State private var isFaceTrackingActive = false

    var body: some View {
        VStack {
            Text("Face mesh tracking demo")
                .font(.title)
                .padding()
            Button(action: {
                isFaceTrackingActive.toggle()
            }) {
                Text("Start Face Tracking")
                    .font(.title2)
                    .padding()
                    .background(Color.blue)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }
            .fullScreenCover(isPresented: $isFaceTrackingActive) {
                FaceTrackingView1()
            }
        }
        .padding()
    }
}

#Preview {
    ContentView()
}
I'm setting:
.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))
In UnityVisionOSSettings.swift before building out in Xcode.
I'm having an issue where this only works on occasion; it seems random. I'll either get no immersion level available (the crown dial is greyed out and no changes can be made), or it will only allow 0.5 - 1.0 immersion (the dial will go below 0.5 but springs back to 0.5 when released).
With no changes to my setup or to how I'm setting immersionStyle, I've occasionally been able to get this to work as I would expect, so I'm wondering if there is some bug causing it to fail. I've tested a simple native-SDK progressive immersion style with the same custom settings and it works every time, so it's something related to Unity; see the sketch of that native test below.
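For comparison, the native-only test that works every time looks roughly like this (a minimal sketch; the app and space names are placeholders, and the immersion range matches what I pass to Unity's generated settings):

import SwiftUI
import RealityKit

@main
struct ProgressiveImmersionTestApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "TestSpace") {
            // Placeholder content; the real test renders a RealityKit scene here.
            RealityView { _ in }
        }
        // Same custom range and initial amount injected into UnityVisionOSSettings.swift.
        .immersionStyle(
            selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)),
            in: .progressive(0.1...1.0, initialAmount: 0.1))
    }
}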
Here is the entire UnityVisionOSSettings file that, as far as I can tell, is controlling this:
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters { return .init(
        framebufferWidth: 1830,
        framebufferHeight: 1600,
        leftEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                           rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        rightEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                            rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
        rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .upperLimbVisibility(.automatic)
            .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize) .upperLimbVisibility(.automatic) .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene {
        mainScenePart0
    }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}
I am attempting to use the Barcode Detection enterprise API. I have the necessary entitlements and license file. I'm following the sample code online, and whenever I attempt to run barcode detection using arKitSession.run, I get the following error message:
ar_barcode_detection_provider_t <0x300d82130>: Failed to run provider with transient error code: 1
It obviously isn't running the barcode detection, even though it's running in an immersive space in mixed mode. Any idea what might be going on?
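For reference, the setup is roughly the following (a sketch paraphrased from the sample; the provider initializer and symbology list here are my own approximation and may not match the current API exactly):

import ARKit

// Sketch of the barcode detection setup (Enterprise API); treat exact names as approximate.
let arkitSession = ARKitSession()
let barcodeDetection = BarcodeDetectionProvider(symbologies: [.qr]) // symbology list is an assumption

func runBarcodeDetection() async {
    // Check support first, in case the transient error is an entitlement/licensing issue.
    guard BarcodeDetectionProvider.isSupported else {
        print("Barcode detection is not supported with this device/entitlement")
        return
    }
    do {
        try await arkitSession.run([barcodeDetection])
        for await update in barcodeDetection.anchorUpdates {
            print("Barcode anchor update: \(update)")
        }
    } catch {
        print("Failed to run ARKit session: \(error)")
    }
}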
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs.
For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0 I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, even though the same photos are viewed correctly in the system Photos app:
Sometimes the extracted images have different sizes: the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
Even when the image pair has the same size, viewing it in my app shows some artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application can now display spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract the stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
// the data is from a photos picker item
guard let data = try await photo.loadTransferable(type: Data.self),
      let source = CGImageSourceCreateWithData(data as CFData, nil) else { return }
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    // Extracts the left/right images and merges them into a single side-by-side UIImage.
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)
        guard leftImage.size == rightImage.size else {
            return nil
        }
        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double]
        {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to the standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
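In case it helps frame the question, one thing I'm considering is reading the stereo-pair image group from the container instead of assuming indices 0 and 1, roughly like this (a sketch; it assumes the ImageIO group properties are populated for generated spatial photos, which I haven't verified on the problem images):

import ImageIO

// Sketch: find the left/right image indices from the file's stereo-pair group, if present.
func stereoPairIndices(in source: CGImageSource) -> (left: Int, right: Int)? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [String: Any],
          let groups = properties[kCGImagePropertyGroups as String] as? [[String: Any]] else {
        return nil
    }
    for group in groups {
        // Look for the stereo-pair group and read the indices it declares.
        if let type = group[kCGImagePropertyGroupType as String] as? String,
           type == (kCGImagePropertyGroupTypeStereoPair as String),
           let left = group[kCGImagePropertyGroupImageIndexLeft as String] as? Int,
           let right = group[kCGImagePropertyGroupImageIndexRight as String] as? Int {
            return (left, right)
        }
    }
    return nil
}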
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro, so they can place the object in their physical environment. The project is deployed on Vercel. I'm testing the page with my Apple Vision Pro: when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it's trying to load, but it never loads. Does anybody know how to troubleshoot this issue?
Hello,
I'm currently working on a project that requires real-world object recognition and labeling. I understand that, due to security and privacy restrictions, we are unable to access the Vision Pro camera feed. Is there any other (perhaps external) way to solve this problem?
Thank you!
I have been implementing the LongPressGesture to have a menu come up upon a long press. I love the functionality and it is very close to being where I want it to be.
I don't know if this is a visionOS-specific thing, but I am hoping to control the corner radius of the pulled-out element behind my "button." I've wrangled hover effects in the past with overlays, but I'm not sure what to target in this case.
Worst case, I'll have to change the corner radius on all of my tiles to match this LongPressGesture-controlled behavior, or I could possibly change the radius during the long press to match.
Is there a simpler solution? Thanks!
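For concreteness, the kind of thing I was hoping would work is reshaping the hover/highlight shape on the tile, roughly like this (a sketch; the view, state, and radius value are placeholders, and I'm not certain the element that appears behind a long-pressed tile respects this shape):

import SwiftUI

// Sketch: a tile whose hover/highlight backdrop is asked to use a matching corner radius.
struct TileWithRoundedHighlight: View {
    @State private var showMenu = false // placeholder state for the pulled-out menu

    var body: some View {
        Text("Tile")
            .padding(40)
            .glassBackgroundEffect(in: RoundedRectangle(cornerRadius: 24))
            // Ask the system to use the same rounded rect for hover/highlight treatment.
            .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 24))
            .hoverEffect(.highlight)
            .onLongPressGesture(minimumDuration: 0.5) {
                showMenu = true
            }
    }
}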
How can I detect whether two entities collide in a RealityView and execute some code after the collision? Assume both entities have collision components, and one of them also has an anchor component (for example, it is bound to a hand).
How can I make a specified entity in a RealityView graspable by the user?
The entity has physics and collision components, and it should not move while the user is not making a grab gesture. When the user makes a grab gesture very close to the entity (a small deviation is acceptable), an anchor component should be enabled to bind the entity to the hand; when the user lets go, the entity should fall along the y-axis from its current position (affected by the physics component).
I hope you can help me. Thank you.
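For the first question, the pattern I've been pointed to is subscribing to collision events from the RealityView content (a minimal sketch; the entities are placeholders, and it assumes both already have collision shapes generated):

import SwiftUI
import RealityKit

// Sketch: react to collisions between entities inside a RealityView.
struct CollisionDemoView: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            let boxA = ModelEntity(mesh: .generateBox(size: 0.1))
            boxA.generateCollisionShapes(recursive: true)
            let boxB = ModelEntity(mesh: .generateBox(size: 0.1))
            boxB.generateCollisionShapes(recursive: true)
            boxB.position = [0.3, 0, 0] // in the real app, one entity would follow the hand anchor
            content.add(boxA)
            content.add(boxB)

            // Run this closure whenever any collision begins in the scene.
            subscription = content.subscribe(to: CollisionEvents.Began.self) { event in
                print("Collision between \(event.entityA.name) and \(event.entityB.name)")
            }
        }
    }
}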
I saw the Notification trigger in the Behaviors configuration of Reality Composer Pro, which asks me to enter a notification name; in other words, it expects Swift code in Xcode to post a notification carrying that name. Could you show me, step by step, how to post such a notification from Swift in Xcode? (There is an answer in https://developer.apple.com/forums/thread/756978, but it doesn't work.)
Also, in the timeline in Reality Composer Pro there is a Notification action, which is used to send messages to Swift. How can I have Swift detect that the Notification action has sent a message? (There is an answer in https://developer.apple.com/videos/play/wwdc2024/10102/, but it doesn't work.)
I asked this question before (https://developer.apple.com/forums/thread/756978). Those answers used to work, but they are no longer valid on the latest system. I hope you can help me. Thank you.
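For reference, this is roughly the pattern from the linked thread that I have been trying for the first question (a sketch; the notification name, userInfo keys, and the use of entity.scene are taken from that answer, and they may be exactly what has stopped working):

import Foundation
import RealityKit

// Sketch: post a notification that a Reality Composer Pro "Notification" trigger listens for.
// `name` is the notification name entered in the RCP Behaviors inspector;
// `sceneEntity` is the loaded RCP scene entity.
func fireRCPTrigger(named name: String, on sceneEntity: Entity) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": sceneEntity.scene as Any,
            "RealityKit.NotificationTrigger.Identifier": name
        ])
}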
Hi,
Following the WWDC24 video "What's new in Quick Look for visionOS", I've managed to open a 3D model using PreviewApplication by calling:
let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])
However, the "Save to Downloads" option is always there (see attached screenshot). As the models are user-generated content, I don't want the download option to be available to all users. How can I disable it?
I'm developing a 3D motion capture app using ARKit.
I tested the sample code, but in iOS 18 the hand and leg orientations seem to be wrong.
The following images are the sample app's screen captures in iOS 17 and iOS 18.
To set the stage:
I made a prototype of an app for a company; the app is to be used internally for now. The prototype runs perfectly on iOS, so now I got a Vision Pro to port the app to its final destination. The first thing I found out is that image tracking on the Vision Pro is useless for moving images (and that's the core of my app). Also, the distance at which an image is lost seems to be much shorter on the Vision Pro. Now I'm trying to figure out whether it's possible to fix or work around this in any way, and I'm wondering if the Enterprise APIs would change anything.
So:
Is it possible to request Enterprise API access as a single person with a basic Apple Developer subscription? I looked around the forum and only got more confused.
Does QR code detection and tracking work any better than image detection, or are the anchor updates just as infrequent?
Does the increased "object detection" frequency affect image/QR tracking in any way, or is it (as the name implies) only for object tracking?
Would increasing the CPU/GPU headroom make any difference to image/QR detection frequency?
Is there something to disable to make anchor updates more frequent? I don't need complex models, shadows, physics, etc.
Greetings
Michal
I am having trouble initializing SharePlay. It works, but we have to leave the game (tap the close button) and rejoin it, sometimes several times, for it to establish the connection.
I am also having trouble sharing images over SharePlay with GroupSessionJournal. I am not able to get it to transfer any amount of data, or even get any indication on the other participants' side that an image is being sent. We have looked at all the information we could find online and have not been able to establish a connection. I am not sure if I am missing a step, or if I am incorrectly sending the data through the GroupSessionJournal; a sketch of what we're attempting is at the end of this post.
Here are the steps to replicate the issue:
FaceTime another person with the app.
Open the app and click the SharePlay button to SharePlay it with the other person.
Establish the SharePlay session by making sure the board states are synchronized across participants. If they are not, tap the close button and tap Open App again to rejoin the SharePlay session. (This is one of the bugs I would like to fix; it is just a workaround we developed to establish the session. We would like it to work as soon as you tap SharePlay and the other person joins.)
Once the SharePlay has been established, change the image by clicking change 1 image.
Select a jpg image.
The image that represents 1 should now be set. If you don't see the image, tap any of the X's in the squares and it will change to the image.
The image should appear for the other participant in the SharePlay session. (This does not happen, and it is what we have not been able to figure out how to get working.)
Here are the classes for the example project I created:
Content View
Game Model Class
Activity Manager
Main Starter Class
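For reference, the journal usage we are attempting looks roughly like this (a sketch from memory of the GroupActivities documentation; the attachment load call in particular is an assumption and may need adjusting):

import GroupActivities
import UIKit

// Sketch: send an image to other participants via GroupSessionJournal and observe incoming attachments.
// `journal` is created from the established GroupSession for our activity.
func shareImage(_ image: UIImage, using journal: GroupSessionJournal) async {
    guard let data = image.jpegData(compressionQuality: 0.8) else { return }
    do {
        // Add the raw image data as an attachment; GroupActivities should sync it to all participants.
        _ = try await journal.add(data)
    } catch {
        print("Failed to add attachment: \(error)")
    }
}

func observeAttachments(of journal: GroupSessionJournal) async {
    // Each element is the current set of attachments; load each one back into Data.
    for await attachments in journal.attachments {
        for attachment in attachments {
            do {
                let data = try await attachment.load(Data.self) // assumed signature; verify against the docs
                print("Received attachment with \(data.count) bytes")
            } catch {
                print("Failed to load attachment: \(error)")
            }
        }
    }
}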
I am trying to position the share sheet presented by the ShareLink API on visionOS, but the share sheet is always anchored at the label position.
I checked that the Photos app already achieves this: its share sheet appears at the very center of the window, while the share button sits in a corner inside a menu.
How can I achieve this?
Hello
I was wondering whether the keyboard awareness feature that came with visionOS 2 would also work for a MacBook keyboard when someone is in an immersive .progressive custom environment, such as the "Garden" environment from "Construct an immersive environment for visionOS", in an app I'm currently developing, so that one can see one's keyboard. I haven't managed to achieve this so far.
Thank you very much in advance!
Hi, we have an immersive space in our app, and we thought the palm menu button was not available in immersive spaces, but when I look at my hand and tap, the menu button appears. Is it possible to keep it hidden? We have a hand-tracking feature with a button overlapping the palm, and when we try to press that button it triggers the menu button; then, when the user presses again by mistake, it sends the application to the background.
This is very important for us because we would like to release this hand-tracking feature as soon as possible.
Here is a link to a video of the problem:
https://drive.google.com/file/d/1cfOcdzF19h_mbmpvkVNCJjXEBJecVeJL/view?usp=sharing
Today I updated my MacBook Pro to macOS Sequoia. With this, I also downloaded the latest Xcode and visionOS 2 packages.
I had a working project that ran on my Vision Pro, which is updated to the latest visionOS 2. But now, whenever I click Preview in Xcode while editing a Swift file, I receive the following error:
(lot of lines here)
Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda
error: Tool terminated by signal 'Segmentation fault: 11'
I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this?
Thank you!
How can I create a 3D model of clothing that behaves like real fabric, with realistic physics? Is it possible to create such a model with photogrammetry? I want to use the model on Apple Vision Pro and interact with it using hand gestures.
https://developer.apple.com/documentation/realitykit/model3d/ontapgesture(count:coordinatespace:perform:)
The linked onTapGesture(count:coordinateSpace:perform:) is marked deprecated in visionOS 2.0 and appears to be intended only for watchOS now, right?
So how can I implement a double-tap gesture in visionOS?
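A pattern I've seen suggested is attaching a SwiftUI TapGesture with a count of 2 and targeting it at RealityKit entities, roughly like this (a sketch; the box entity is just a placeholder for whatever model you display):

import SwiftUI
import RealityKit

// Sketch: double-tap handling in visionOS via a targeted TapGesture on RealityView content.
struct DoubleTapDemoView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            // Entities must be input targets with collision shapes to receive gestures.
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            TapGesture(count: 2)
                .targetedToAnyEntity()
                .onEnded { value in
                    print("Double-tapped \(value.entity.name)")
                }
        )
    }
}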