I am developing a visionOS app and need to use the main camera access provided by the Enterprise API. I have applied for an enterprise license and added the main camera access capability and the license file in Xcode. In my code, I used
await arKitSession.queryAuthorization(for: [.cameraAccess])
to request user permission for camera access. After obtaining the permission, I used arKitSession to run the cameraFrameProvider.
However, when I run the following loop:
for await cameraFrame in cameraFrameUpdates {
    print("hello")
    guard let mainCameraSample = cameraFrame.sample(for: .left) else {
        continue
    }
    pixelBuffer = mainCameraSample.pixelBuffer
}
I am unable to receive any frames from the camera; even the print("hello") inside the loop never executes. The app does not crash or throw any errors.
Here is my full code:
import SwiftUI
import ARKit

struct cameraTestView: View {
    @State var pixelBuffer: CVPixelBuffer?

    var body: some View {
        VStack {
            Button(action: {
                Task {
                    await loadCameraFeed()
                }
            }) {
                Text("test")
            }
            if let pixelBuffer = pixelBuffer {
                let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
                let context = CIContext(options: nil)
                if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
                    Image(uiImage: UIImage(cgImage: cgImage))
                }
            } else {
                Image("exampleCase")
                    .resizable()
                    .scaledToFill()
                    .frame(width: 400, height: 400)
            }
        }
    }

    func loadCameraFeed() async {
        // Main Camera Feed Access Example
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        let cameraFrameProvider = CameraFrameProvider()
        let arKitSession = ARKitSession()

        var cameraAuthorization = await arKitSession.queryAuthorization(for: [.cameraAccess])
        guard cameraAuthorization == [ARKitSession.AuthorizationType.cameraAccess: ARKitSession.AuthorizationStatus.allowed] else {
            return
        }

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            return
        }

        let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0])
        if cameraFrameUpdates != nil {
            print("identify cameraFrameUpdates")
        } else {
            print("fail to get cameraFrameUpdates")
            return
        }

        for await cameraFrame in cameraFrameUpdates! {
            print("hello")
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            pixelBuffer = mainCameraSample.pixelBuffer
        }
    }
}

#Preview(windowStyle: .automatic) {
    cameraTestView()
}
When I click the button, the console prints:
identify cameraFrameUpdates
It seems to be stuck waiting for a cameraFrame from cameraFrameUpdates.
This occurs on visionOS 2.0 beta (just updated) and Xcode 16 beta 6 (just updated).
Does anyone have a workaround for this? I would be grateful if anyone can help.
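One thing that might be worth checking (an assumption on my part, not something confirmed above): main camera access reportedly only delivers frames while the app has an immersive space open, so starting the feed from a plain window can leave the for-await loop waiting forever. A minimal sketch of that setup, using a hypothetical space id "cameraSpace":

import SwiftUI

@main
struct CameraTestApp: App {   // hypothetical app entry point
    var body: some Scene {
        WindowGroup {
            cameraTestView()
        }
        // Hypothetical immersive space; camera frames may only arrive
        // while a space like this is open.
        ImmersiveSpace(id: "cameraSpace") {
        }
    }
}

The "test" button could then call await openImmersiveSpace(id: "cameraSpace") (via the \.openImmersiveSpace environment action) before loadCameraFeed(), so the frame loop starts with the space already open.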
Hello, I want to ask for help from the visionOS devs inside Apple: is it possible to extend or disable (toggle) the Play Space boundary, which is currently 1.5 meters or 10 feet? It is really a shame that with such a great display and so much computing power we can't run any room-scale VR. I'm currently working on an undergrad thesis that uses the AVP, but I didn't know about this boundary until I had built my room in Unity and put it onto my device. Is it possible to cut us some slack regarding the boundary? Many thanks.
Like the title says, I want to ask how to use this API: CameraFrameProvider.
I get the warning: Cannot find 'CameraFrameProvider' in scope.
Xcode 16.0 beta 4
imported ARKit
imported Vision
I have noticed that the main camera access Enterprise API can only be used for development. I need to test via TestFlight and deliver through ABM. When will I be able to do this?
Hello All,
I'm desperate to find a solution and I need your help, please.
I've created a simple cube in visionOS. I can grab it with my hand (close my hand on it) and move it pretty much wherever I want. But I would like to throw it (for example, like a basketball): not push it, but hold it in my hand and throw it away from me with a velocity and direction matching my hand movement (releasing it when my fingers open).
Please point me in the right direction.
Cheers and thanks,
Mathis
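Not an official answer, but one common approach (a sketch under my own assumptions, with hypothetical names like heldCube and recentPositions) is to track the hand position over the last few frames, estimate a velocity at the moment the fingers open, and hand the entity over to RealityKit physics with that velocity:

import Foundation
import RealityKit

// Called when the grab ends. heldCube is the entity being held and
// recentPositions are (timestamp, world position) samples from the last few frames.
// Assumes the entity already has a CollisionComponent so it can land on things.
func throwCube(_ heldCube: Entity, recentPositions: [(time: TimeInterval, position: SIMD3<Float>)]) {
    guard let first = recentPositions.first, let last = recentPositions.last,
          last.time > first.time else { return }

    // Estimate the release velocity from how the hand moved recently.
    let releaseVelocity = (last.position - first.position) / Float(last.time - first.time)

    // Let physics take over: a dynamic body plus an initial linear velocity.
    var body = PhysicsBodyComponent(massProperties: .default, material: .default, mode: .dynamic)
    body.isAffectedByGravity = true
    heldCube.components.set(body)
    heldCube.components.set(PhysicsMotionComponent(linearVelocity: releaseVelocity))
}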
When I use RoomPlan, I notice performance issues in larger rooms or rooms with a lot of furniture. Is there a way to configure RoomPlan to focus only on detecting the properties of surfaces (windows, door openings, and walls) during scanning, possibly through an argument or setting? Filtering afterward is an option, but it doesn't address the slowdown during the scan.
Hi,
While developing for visionOS, I was wondering why, when I use queryDeviceAnchor() with a WorldTrackingProvider() in the update(context: SceneUpdateContext) function after opening the immersive space, it initially provides DeviceAnchor data every frame but stops at some point (about 5-10 seconds after pressing the button that opens the immersive space). After that it only updates sporadically, if I move my head abruptly to the left, right, etc. The tracking doesn't seem to work as it should directly on the AVP device.
Any help would be greatly appreciated!
See my code down below:
ContentView.swift
import SwiftUI

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        VStack {
            Text("Head Tracking Prototype")
                .font(.largeTitle)
            Button("Start Head Tracking") {
                Task {
                    await openImmersiveSpace(id: "appSpace")
                }
            }
        }
        .onChange(of: scenePhase) { _, newScenePhase in
            switch newScenePhase {
            case .active:
                print("...")
            case .inactive:
                print("...")
            case .background:
                break
            @unknown default:
                print("...")
            }
        }
    }
}
HeadTrackingApp.swift
import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "appSpace") {
        }
    }
}
HeadTrackingSystem.swift
import SwiftUI
import ARKit
import RealityKit

class HeadTrackingSystem: System {
    let arKitSession = ARKitSession()
    let worldTrackingProvider = WorldTrackingProvider()

    required public init(scene: RealityKit.Scene) {
        setUpSession()
    }

    func setUpSession() {
        Task {
            do {
                try await arKitSession.run([worldTrackingProvider])
            } catch {
                print("Error: \(error)")
            }
        }
    }

    public func update(context: SceneUpdateContext) {
        guard worldTrackingProvider.state == .running else { return }
        let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        print(avp!)
    }
}
Hey guys,
I was wondering if anyone could help me. I'm currently trying to run an ARKitSession() with a WorldTrackingProvider() that makes use of DeviceAnchor. In the simulator everything seems to work fine and the WorldTrackingProvider runs, but when I try to run the app on my AVP, the WorldTrackingProvider pauses after initialization. I'm new to Apple development and would be thankful for any helpful input!
Below my current code:
HeadTrackingApp.swift
import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
ContentView.swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Head Tracking Prototype")
                .font(.largeTitle)
        }
    }
}
HeadTrackingSystem.swift
import SwiftUI
import ARKit
import RealityKit

class HeadTrackingSystem: System {
    let arKitSession = ARKitSession()
    let worldTrackingProvider = WorldTrackingProvider()
    var avp: DeviceAnchor?

    required public init(scene: RealityKit.Scene) {
        setUpSession()
    }

    func setUpSession() {
        Task {
            do {
                print("Starting ARKit session...")
                try await arKitSession.run([worldTrackingProvider])
                print("Initial World Tracking Provider State: \(worldTrackingProvider.state)")
                self.avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
                if let avp = getAVPPositionOrientation() {
                    print("AVP data: \(avp)")
                } else {
                    print("No AVP position and orientation available.")
                }
            } catch {
                print("Error: \(error)")
            }
        }
    }

    func getAVPPositionOrientation() -> DeviceAnchor? {
        return avp
    }
}
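For what it's worth (this is my assumption, not something stated in the post): on a physical Vision Pro, ARKit data providers such as WorldTrackingProvider generally only deliver data while the app has an immersive space open, whereas the simulator is more forgiving. A minimal sketch of adding one to the app above, with a hypothetical id "headTrackingSpace" (the view would then open it via the \.openImmersiveSpace environment action, as in the earlier head-tracking post):

import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        // Hypothetical immersive space; on device, world tracking may pause
        // unless a space like this is open.
        ImmersiveSpace(id: "headTrackingSpace") {
        }
    }
}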
It's my understanding that to use the CameraFrameProvider, which provides access to the Apple Vision Pro's front-facing camera feed, the enterprise main camera access entitlement "com.apple.developer.arkit.main-camera-access.allow" is required.
Is there a way to prototype apps that use the CameraFrameProvider on an Apple Vision Pro with developer mode enabled, without having the "com.apple.developer.arkit.main-camera-access.allow" entitlement?
When using image tracking in visionOS 2 beta, I add an AVPlayer to play an MP4 file when a particular picture is tracked. I never get a .removed event in my "for await update in imageInfo.anchorUpdates {" loop, so I can't stop or remove the player when the image disappears.
I then switched to the .updated event and checked "if anchor.isTracked" to remove or re-add the player, and that worked.
Now, if I don't move my head, showing or hiding the picture works as expected. But if the picture doesn't move and I move my head away, I don't get an .updated event, and the player keeps playing even though I can't see it. No .updated event, and no .removed event.
Is this a bug?
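For reference, a minimal sketch of the isTracked workaround described above (player and imageInfo are assumed names matching the post; this is not a confirmed fix for the missing updates when only the head moves):

import ARKit
import AVFoundation

// imageInfo is an ImageTrackingProvider that is already running in an ARKitSession.
func watchImageAnchors(imageInfo: ImageTrackingProvider, player: AVPlayer) async {
    for await update in imageInfo.anchorUpdates {
        switch update.event {
        case .added:
            player.play()
        case .updated:
            // Workaround: rely on isTracked instead of waiting for a .removed event.
            if update.anchor.isTracked {
                player.play()
            } else {
                player.pause()
            }
        case .removed:
            player.pause()
        }
    }
}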
Hello,
I am trying to use the new Enterprise API to capture main camera frames using the CameraFrameProvider. So far, I could not make it work. I followed the sample code provided in this thread (literally copy-pasted it): https://forums.developer.apple.com/forums/thread/758364.
When I run the application on the Vision Pro, no frame is captured. I get a message in Xcode's console that no entitlement is found. However, the entitlement is created and the license file is also in the project. Besides, all authorization keys are added to the plist file.
What am I missing? How can I tell if the license file is wrong?
Thank you.
Can anyone provide a demo or some code snippets?
struct GameSystem: System {
    static let rootQuery = EntityQuery(where: .has(GameMoveComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let root = context.scene.performQuery(Self.rootQuery)
        for entity in root {
            let game = entity.components[GameMoveComponent.self]!
            if let xMove = game.game.gc?.extendedGamepad?.dpad.xAxis.value,
               let yMove = game.game.gc?.extendedGamepad?.dpad.yAxis.value {
                print("x:\(xMove),y:\(yMove)")
                let x = entity.transform.translation.x + xMove * 0.01
                let y = entity.transform.translation.z - yMove * 0.01
                entity.transform.translation = [x, entity.transform.translation.y, y]
            }
        }
    }
}
I want to use the game controller's direction keys to control continuous movement of an Entity in visionOS. When I added a query to handle button presses in the ECS System, I found that the update method was not called at a frequency of 30 frames per second. Instead, it executes once when I press or release a key.
What is the reason for this?
I want to keep moving while holding down the controller button. Is there a better solution? I would like the movement to be smooth, without stuttering. See the sketch below.
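Not a confirmed fix, but assuming the System's update does run every frame once the scene is rendering, one option is to poll the gamepad state inside update and scale each step by context.deltaTime rather than a fixed 0.01, so the speed is frame-rate independent. A sketch along those lines (GameMoveComponent and its stored controller come from the code above; the speed value is hypothetical):

// Drop-in variant of GameSystem.update(context:) from the snippet above.
func update(context: SceneUpdateContext) {
    let speed: Float = 0.5  // meters per second (hypothetical value)

    for entity in context.scene.performQuery(Self.rootQuery) {
        guard let game = entity.components[GameMoveComponent.self],
              let pad = game.game.gc?.extendedGamepad else { continue }

        // Poll the d-pad state every frame instead of reacting to press/release events.
        let xMove = pad.dpad.xAxis.value
        let yMove = pad.dpad.yAxis.value

        // Scale by the frame's elapsed time so speed is independent of frame rate.
        let step = speed * Float(context.deltaTime)
        entity.transform.translation.x += xMove * step
        entity.transform.translation.z -= yMove * step
    }
}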
ARKit to capture data
What we want to do: use ARKit to capture data (pictures) around an object. Is there a way to:
Increase the number of pictures captured by default (120) to a higher number without increasing the time required to capture the data? We managed to increase the number of pictures to 1000, but the data capture now lasts 20 minutes, which is too long. Is there a way to capture a video instead of pictures?
Capture IMU data: how can we use ARKit to capture IMU data around an object?
I'm playing with visionOS and trying to get a usdz file to load in a RealityView. It works fine if I use a Model3D, but if I use a RealityView nothing shows up. I'm just using the fender_stratocaster asset right off the Apple web site, so it seems like it should work. This is the code:
RealityView { content in
    if let sphereEntity = try? await Entity(named: "fender_stratocaster") {
        content.add(sphereEntity)
        sphereEntity.position = [0, 0, 0]
        sphereEntity.transform.scale = [scale, scale, scale]
        let _ = print(sphereEntity)
    }
} update: { content in
    if let sphereEntity = content.entities.first {
        sphereEntity.transform.scale = [scale, scale, scale]
    }
}
Any clues as to why this is not showing would be appreciated.
For RoomAnchors there are different mesh classifications for mesh anchors, but only walls and floors are supported by the geometries() function.
Given this, how can I get information about the other mesh classifications?
I'm developing an augmented images app using ARKit. The images themselves are sourced online. The app is mostly done and working fine. However, I currently download the images the app will be tracking every time the app starts up. I'd like to avoid this by downloading the images once and storing them on the device.
My concern is that as the number of images grows, the app would store too many images on the device. I'd like some thoughts on how best to approach this. For example, should I download and store some of the images in Core Data, or perhaps not store them at all?
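One common middle ground (a sketch under my own assumptions, not a recommendation from the thread) is to skip Core Data and cache the raw image files in the app's Caches directory keyed by URL, so the system can evict them under storage pressure while repeat launches avoid re-downloading:

import Foundation
import CryptoKit

// Returns image data for a remote URL, using a file in Caches as a simple
// persistent cache so each image is only downloaded once.
func cachedImageData(for url: URL) async throws -> Data {
    let cachesDir = try FileManager.default.url(
        for: .cachesDirectory, in: .userDomainMask,
        appropriateFor: nil, create: true
    )
    // Hash the URL to get a stable, filesystem-safe file name.
    let name = SHA256.hash(data: Data(url.absoluteString.utf8))
        .map { String(format: "%02x", $0) }
        .joined()
    let fileURL = cachesDir.appendingPathComponent(name)

    if let cached = try? Data(contentsOf: fileURL) {
        return cached
    }

    let (data, _) = try await URLSession.shared.data(from: url)
    try data.write(to: fileURL, options: .atomic)
    return data
}

The cached data can then be turned into the reference images for tracking at launch, without hitting the network on every start.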
This is my code:
import Foundation
import ARKit
import SwiftUI

class CameraViewModel: ObservableObject {
    private var arKitSession = ARKitSession()
    @Published var capturedImage: UIImage?
    private var pixelBuffer: CVPixelBuffer?
    private var cameraAccessAuthorizationStatus = ARKitSession.AuthorizationStatus.notDetermined

    func startSession() {
        guard CameraFrameProvider.isSupported else {
            print("Device does not support main camera")
            return
        }
        Task {
            await requestCameraAccess()
            guard cameraAccessAuthorizationStatus == .allowed else {
                print("User did not authorize camera access")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            print("Requesting camera authorization...")
            let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
            cameraAccessAuthorizationStatus = authorizationResult[.cameraAccess] ?? .notDetermined
            guard cameraAccessAuthorizationStatus == .allowed else {
                print("Camera data access authorization failed")
                return
            }
            print("Camera authorization successful, starting ARKit session...")
            do {
                try await arKitSession.run([cameraFrameProvider])
                print("ARKit session is running")
                guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                    print("Unable to get camera frame updates")
                    return
                }
                print("Successfully got camera frame updates")
                for await cameraFrame in cameraFrameUpdates {
                    guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                        print("Unable to get main camera sample")
                        continue
                    }
                    print("Successfully got main camera sample")
                    self.pixelBuffer = mainCameraSample.pixelBuffer
                }
                DispatchQueue.main.async {
                    self.capturedImage = self.convertToUIImage(pixelBuffer: self.pixelBuffer)
                    if self.capturedImage != nil {
                        print("Successfully captured and converted image")
                    } else {
                        print("Image conversion failed")
                    }
                }
            } catch {
                print("ARKit session failed to run: \(error)")
            }
        }
    }

    private func requestCameraAccess() async {
        let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
        cameraAccessAuthorizationStatus = authorizationResult[.cameraAccess] ?? .notDetermined
        if cameraAccessAuthorizationStatus == .allowed {
            print("User granted camera access")
        } else {
            print("User denied camera access")
        }
    }

    private func convertToUIImage(pixelBuffer: CVPixelBuffer?) -> UIImage? {
        guard let pixelBuffer = pixelBuffer else {
            print("Pixel buffer is nil")
            return nil
        }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            return UIImage(cgImage: cgImage)
        }
        print("Unable to create CGImage")
        return nil
    }
}
This is my log:
User granted camera access
Requesting camera authorization...
Camera authorization successful, starting ARKit session...
ARKit session is running
Successfully got camera frame updates
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
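As a side note (my reading of the code above, not a confirmed cause of the NSMapGet warning): the DispatchQueue.main.async block sits after the for-await loop, so capturedImage is only ever updated when the frame stream ends. A sketch of publishing each frame as it arrives instead, on the main actor, reusing the existing convertToUIImage helper inside startSession's loop:

// Sketch: publish each frame as it arrives rather than after the loop ends.
for await cameraFrame in cameraFrameUpdates {
    guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
    let buffer = mainCameraSample.pixelBuffer
    await MainActor.run {
        self.pixelBuffer = buffer
        self.capturedImage = self.convertToUIImage(pixelBuffer: buffer)
    }
}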
I want to use 3ds Max to generate two panoramic renderings, one for the left eye and the other for the right eye, so that I can get a realistic sense of space.
At the technical implementation level, are there relevant APIs that can control what the left and right eyes see, so each eye gets different content?
Hello,
I am trying to create new outfits based on ARKit's body tracking skeleton example (the controlled robot).
Is it just me, or is this skeleton super annoying to work with? The bones all stand out like thorns and don't follow along the actual limb, which makes it impossible to automatically weight-paint new meshes to the skeleton.
Changing the bones is also not possible, since that results in distorted body tracking.
I am an experienced modeller, but I have never seen such a crazy skeleton. Even simple meshes are a pain to pair with these bones; you basically have to weight-paint everything manually.
Or am I missing something?