We are building an app that uses ARKit occasionally, but not always.
We would like to test the non-ARKit parts in the simulator, since it offers more debugging features (e.g. SwiftUI previews or the Thread Sanitizer).
However, we can't even build the app for the simulator, since the simulator SDK does not know about certain classes (e.g. "AnchorEntity"). This also means that none of the SwiftUI previews work, even if the views are not using ARKit.
What is the best approach to test such an app in the simulator, without using any ARKit features?
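One common workaround, sketched below under the assumption that the ARKit/RealityKit code lives behind its own view type, is to exclude that code from simulator builds with conditional compilation so the rest of the app and its SwiftUI previews still compile. The ARContainerView and ARViewContainer names are hypothetical placeholders, not part of any Apple sample.

import SwiftUI
#if !targetEnvironment(simulator)
import ARKit
import RealityKit
#endif

// Hypothetical wrapper: shows the AR experience on device and a plain
// placeholder when built for the simulator.
struct ARContainerView: View {
    var body: some View {
        #if targetEnvironment(simulator)
        Text("AR is unavailable in the simulator")
        #else
        ARViewContainer()
        #endif
    }
}

#if !targetEnvironment(simulator)
// Only compiled for device builds, so simulator builds never reference
// ARKit/RealityKit symbols such as AnchorEntity.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.scene.addAnchor(AnchorEntity(plane: .horizontal))
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) {}
}
#endif

With this split, SwiftUI previews of the non-AR views keep working, and only ARContainerView degrades to a placeholder in the simulator.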
ARKit
Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.
I am using ARKit to create an augmented reality application in Unity. After adding a reference object, the app tracks the object in front of the camera slowly and inaccurately, so the screen does not update quickly.
How can I track objects more quickly?
I'm seeking insight into why the new visionOS Barcode Scanning API is categorized as an Enterprise API and restricted to proprietary, in-house apps.
I understand Apple's focus on privacy and I can see how this restriction could make sense for other Enterprise APIs like main camera access and passthrough screen capture.
Why is barcode scanning restricted from publicly distributed apps? What makes barcode scanning more of a privacy risk than the unrestricted APIs for object tracking, image tracking, or hand tracking?
Trouble with Core ML Object Tracking for Spherical Objects Using WWDC Sample Code and Object Capture
Hi everyone,
I'm working with Core ML for object tracking and have successfully trained a couple of models. However, when I try to use the reference object file in the object tracking sample code from the WWDC video, it doesn't work.
I'm training the model on a single-color plastic spherical object. Could this be the reason for the issue? I also attempted to use USDZ 3D assets that resemble the real object. Do these need to be trained with the Object Capture app to work properly?
Speaking of the Object Capture app, my experience hasn't been great. When I scanned my spherical object, the result was far from a sphere; it looked more like mushy dough.
Any insights or suggestions would be greatly appreciated!
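For reference, here is a minimal sketch of how a trained reference object is typically loaded and run with the visionOS object-tracking providers shown at WWDC24; the file name "sphere.referenceobject" is a hypothetical placeholder, and exact signatures may differ slightly between SDK versions.

import ARKit

// Minimal sketch: load a Create ML-trained reference object and start tracking it.
func runObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "sphere", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    let session = ARKitSession()
    try await session.run([objectTracking])

    for await update in objectTracking.anchorUpdates {
        // The anchor carries the tracked object's transform. A symmetric,
        // single-color sphere gives the tracker very few features to lock onto,
        // which may explain unreliable results.
        print(update.anchor.originFromAnchorTransform)
    }
}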
I'm working on training an object tracking model in Create ML for visionOS on an object that has fan blades, and I'm looking for a way to train while ignoring a section of the geometry. Currently, tracking works if the fan blades are in the same orientation as when the object was scanned, but it struggles once they move.
I see there is an "objects to avoid" data source that can be added, but after reading the description I don't think it does what I need, though I could be wrong. Is there any way to have the training ignore a part of the geometry that has a significant effect on the silhouette of the object?
I have an indoor map and I want to draw a path between two route locations, e.g. from A to B, as an ARKit-based arrow path in an iOS application. Currently I am using ARAnchor to achieve this, but the challenge is: if A to B is 10 meters and I add a node every meter, then instead of 10 different arrow nodes I get a single arrow node showing all 10 in it. I am using the code below.
// Code from where I am calling the addARPathtoLocation function
if let lat1 = mCurrentPosition?.latitude, let long1 = mCurrentPosition?.longitude {
    let latEnd = steplocation.latitude
    let longEnd = steplocation.longitude
    // if let lastLat = arrpath.last?.latitude, let lastLong = arrpath.last?.longitude, let lastAltitude = arrpath.last?.altitude {
    let userLocation = CLLocation(latitude: lat1, longitude: long1)
    let endLocation = CLLocation(coordinate: CLLocationCoordinate2DMake(CLLocationDegrees(latEnd), CLLocationDegrees(longEnd)),
                                 altitude: CLLocationDistance(steplocation.altitude),
                                 horizontalAccuracy: CLLocationAccuracy(5),
                                 verticalAccuracy: CLLocationAccuracy(0),
                                 course: CLLocationDirection(-1),
                                 speed: CLLocationSpeed(5),
                                 timestamp: Date())
    let heading = getHeadingForDirectionFromCoordinate(from: userLocation, to: endLocation)
    let lon1 = degreesToRadians(long1)
    let lon2 = degreesToRadians(longEnd)
    let lat2 = degreesToRadians(latEnd)
    let dLon = lon2 - lon1
    let y = sin(dLon) * cos(lat2)
    yVal = yVal + y
    // let distanceToendpoint = calculateDistance(lat: endLocation.coordinate.latitude, long: endLocation.coordinate.longitude)
    let distvalue = Int(distance) + Int(pathlength)
    distance += CGFloat(distvalue)
    for i in stride(from: 0, to: distance, by: 1) {
        print("current loop iteration is:", i)
        let headingValue = heading - self.heading
        zValue = zValue + headingValue
        distanceVal = CGFloat(i) + distanceVal
        zGlobal = zValue
        // Calling addARPathtoLocation for each one-meter step
        addARPathtoLocation(stepLocation: endLocation, zvalue: zValue, yvalue: yVal, distance: Float(i), direction: manuoverType)
    }
    // }
}
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard !(anchor is ARPlaneAnchor) else { return }
    let arrowNode = generateArrowNodes(anchor: anchor)
    DispatchQueue.main.async {
        node.addChildNode(arrowNode)
    }
}
// Create an arrow node for the given ARAnchor
func generateArrowNodes(anchor: ARAnchor) -> SCNNode {
    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = true
    imageMaterial.diffuse.contents = UIImage(named: "blueArrow")
    let plane = SCNPlane(width: 0.5, height: 0.5)
    plane.materials = [imageMaterial]
    plane.firstMaterial?.isDoubleSided = true
    let blueNode = SCNNode(geometry: plane)
    blueNode.name = "blueNode"
    blueNode.position = SCNVector3(x: Float(zGlobal), y: 0, z: Float(distanceVal))
    blueNode.eulerAngles.x = -.pi / 2
    blueNode.eulerAngles.y -= Float.pi / 4 * 6
    return blueNode
}
func addARPathtoLocation(stepLocation: CLLocation, zvalue: CGFloat, yvalue: CGFloat, distance: Float, direction: VMEManeuverType) {
    // Raycast from the center of the view to get the depth of a surface ARKit has detected
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else {
        return
    }
    let results = sceneView.session.raycast(query)
    guard let hitResult = results.first else {
        print("No surface found")
        return
    }
    // Add an ARAnchor to the scene at the hit location
    let anchor = ARAnchor(transform: hitResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
func radiansToDegrees(_ radians: Double) -> Double {
    return radians * (180.0 / Double.pi)
}
func degreesToRadians(_ degrees: Double) -> Double {
    return degrees * (Double.pi / 180.0)
}
func getHeadingForDirectionFromCoordinate(from: CLLocation, to: CLLocation) -> Double {
    let fLat = degreesToRadians(from.coordinate.latitude)
    let fLng = degreesToRadians(from.coordinate.longitude)
    let tLat = degreesToRadians(to.coordinate.latitude)
    let tLng = degreesToRadians(to.coordinate.longitude)
    let deltaL = tLng - fLng
    let x = sin(deltaL) * cos(tLat)
    let y = cos(fLat) * sin(tLat) - sin(fLat) * cos(tLat) * cos(deltaL)
    let bearing = atan2(x, y)
    let bearingInDegrees = radiansToDegrees(bearing)
    print("Bearing Degrees :", bearingInDegrees) // sanity check
    // let degree = radiansToDegrees(atan2(sin(tLng-fLng)*cos(tLat), cos(fLat)*sin(tLat)-sin(fLat)*cos(tLat)*cos(tLng-fLng)))
    if bearingInDegrees >= 0 {
        return bearingInDegrees
    } else {
        return bearingInDegrees + 360
    }
}
In our visionOS project we want to apply a wall panel to the walls in a room. To occlude furniture placed in front of the walls, we create a mesh with occlusion material on everything except the walls, but this approach does not occlude objects perfectly: the edges are not smooth, and the panel shows over the TV, table, etc.
Is there any other way to achieve this?
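For context, here is a minimal sketch of the kind of occluder described above, using RealityKit's OcclusionMaterial; the box size and position are hypothetical placeholders, and in practice the occlusion mesh would come from scene reconstruction rather than a primitive.

import RealityKit

// Minimal sketch: an invisible entity that hides virtual content rendered
// behind it (e.g. a wall panel behind a TV). Size and position are hypothetical.
func makeFurnitureOccluder() -> ModelEntity {
    let occluder = ModelEntity(
        mesh: .generateBox(width: 1.2, height: 0.8, depth: 0.4),
        materials: [OcclusionMaterial()]
    )
    occluder.position = [0, 0.4, -1.5]
    return occluder
}

How clean the edges look ultimately depends on how closely the occluder geometry matches the real furniture, which is why a coarse reconstruction mesh tends to bleed over TVs and tables.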
At the end of the WWDC 2023 session https://developer.apple.com/wwdc23/10111, the session talks about implementing virtual hands. However, the code shown in the video is no longer correct with the latest version of visionOS, and the "SpaceGloves" entity referenced in the video is not documented; there is no example project resource to reference either.
I have been trying for the last two weeks to implement the same basic example shown in the video, including skinning and rigging my own hand meshes, but I have had significant issues doing so and have found no working code/USDZ examples to reference across different internet resources.
Is it possible for a working project/code example including USDZ files for the Hand Meshes to be provided in order to test a working example of this as a starting point for building my own virtual hands?
Thanks for your help.
Hello,
I want to capture video from the Vision Pro in a visionOS app. I am referring to this Apple video (https://developer.apple.com/videos/play/wwdc2024/10139/) and its code. My steps are as below:
import ARKit
Add the com.apple.developer.arkit.main-camera-access.allow = true key to the app's entitlements
Run the code below:
func loadCameraFeed() async {
    // Main Camera Feed Access Example
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    let arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?

    await arKitSession.queryAuthorization(for: [.cameraAccess])

    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        return
    }

    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }
    print(cameraFrameUpdates)

    for await cameraFrame in cameraFrameUpdates {
        print(cameraFrame)
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
    }
}
I want to convert the "pixelBuffer" into a video stream and show it in a frame, like on iOS.
Please guide me on how to achieve the next step; I am stuck after this code.
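As a starting point (a sketch only, not the only approach), each CVPixelBuffer can be converted to a CGImage with Core Image and published to SwiftUI for display; the CameraFeedModel and CameraFeedView names below are hypothetical.

import CoreImage
import CoreVideo
import SwiftUI

// Hypothetical view model: converts incoming camera pixel buffers to CGImages
// so a SwiftUI view can display them as a live feed.
@MainActor
final class CameraFeedModel: ObservableObject {
    @Published var latestFrame: CGImage?
    private let ciContext = CIContext()

    func update(with pixelBuffer: CVPixelBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        if let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) {
            latestFrame = cgImage
        }
    }
}

// Hypothetical SwiftUI view that shows the most recent frame.
struct CameraFeedView: View {
    @ObservedObject var model: CameraFeedModel

    var body: some View {
        if let frame = model.latestFrame {
            Image(decorative: frame, scale: 1.0)
                .resizable()
                .scaledToFit()
        } else {
            Text("Waiting for camera frames…")
        }
    }
}

Inside the for-await loop above, instead of only storing the buffer, one would call something like await model.update(with: mainCameraSample.pixelBuffer) so each new frame reaches the view.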
Hello,
I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro.
I have applied for and obtained the Enterprise API and actually can stream via the "Main camera access" API, as reported on https://developer.apple.com/videos/play/wwdc2024/10139/.
My problem is that I have not found any reference to how to integrate the "Passthrough in screen capture" API into my project.
Have any of you been able to do this?
Thank you
Hello, I am trying to make an app that involves room scanning and then placing imaginary objects in the room. I have two questions about the specifics behind this.
Is it possible for RoomPlan to include the ceiling when scanning the room?
Is it possible to place objects in AR while RoomPlan is running, or is it necessary to wait until after the scan is done?
I noticed that tracking moving images is super slow on visionOS.
Although the anchor update closure is called multiple times per second, the anchor's transform seems to be updated only once in a while. Another possibility is that SwiftUI isn't re-rendering more often.
On iOS, image tracking is pretty smooth.
Is there a way to speed this up somehow on visionOS, too?
Hi,
With this code
on iPad Pro (11-inch) (4th generation), when capturing in area mode and pressing the "Next" button, an overlay with a white dot in the center appears. Everything else is blurred out and the app is stuck.
It does seem to work well on iPhones, though.
Thanks!
Hi, I am trying to train an object tracking model on a MacBook Air with an M2 chip. The .usdz file used for training was captured with an iPhone 14 Pro. However, the training process takes too much time: it has stayed at 0.0% for about an hour and a half. I wonder if there are any other methods to generate reference objects.
Hello,
I have rendered a USDZ file using SceneKit's .write() method on the displayed scene. Once I load it into another RealityKit ARView that uses the camera's .nonAR mode, I try to use the view's raycast(from:allowing:alignment:) method to get the coordinates on the model. I applied collision components when loading the model, using the .generateCollisionShapes() function, to be able to interact with the ModelEntity.
However, the raycast result returns nothing.
Is there something I am missing for it to work?
Thanks!
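One thing worth checking, sketched below under the assumption that the goal is to hit the virtual model itself: ARView's raycast(from:allowing:alignment:) targets real-world surfaces that ARKit has detected, which don't exist in .nonAR mode, whereas hitTest(_:query:mask:) casts against the entities' collision shapes and works without an AR session. The function and parameter names are placeholders.

import RealityKit
import UIKit

// Minimal sketch: hit-test the model's collision shapes at a screen point
// (e.g. from a tap gesture). `arView` and `tapPoint` come from the caller.
func modelHitPosition(in arView: ARView, at tapPoint: CGPoint) -> SIMD3<Float>? {
    // hitTest(_:query:mask:) casts a ray against CollisionComponents,
    // so it also works when cameraMode == .nonAR.
    guard let hit = arView.hitTest(tapPoint, query: .nearest, mask: .all).first else {
        return nil
    }
    // World-space coordinates of the point on the model that was hit.
    return hit.position
}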
Has anybody tried the hand tracking provider in 2.0? I'm getting updates at the advertised 11 ms interval, but they are duplicates. Here's a print of the timestamps. This is problematic for me because I am tracking the last 5 positions for a calculation and expect them to be unique. I can't find docs on this anywhere.
I understand it's not truly 90 unique updates a second but rather predicted poses; however, I expected the updates to include predicted poses.
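As a workaround, here is a sketch that drops consecutive updates with identical timestamps before they enter the 5-sample history; it assumes the timestamps printed above come from the AnchorUpdate values, and that handTracking is an already-running HandTrackingProvider.

import ARKit

// Minimal sketch: keep only hand anchor updates whose timestamp differs from
// the previous one, and maintain a short history of hand positions.
func collectUniqueHandPositions(from handTracking: HandTrackingProvider) async {
    var lastTimestamp: TimeInterval = -1
    var recentPositions: [SIMD3<Float>] = []

    for await update in handTracking.anchorUpdates {
        // Skip updates that carry the same timestamp as the previous one.
        guard update.timestamp != lastTimestamp else { continue }
        lastTimestamp = update.timestamp

        let transform = update.anchor.originFromAnchorTransform
        let position = SIMD3<Float>(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
        recentPositions.append(position)
        if recentPositions.count > 5 { recentPositions.removeFirst() }
    }
}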
As the question suggests, I would like to use the environmental awareness and item placement functions in Unity. Are there any related example projects?
The project was developed using Unity, and the requirement is to place a virtual model in the real world so that, when the user leaves the environment or the device is turned off and on again, the virtual model is still in its original real-world position. I found that ARKit's world-tracking functionality is useful for this, but I don't know how to use it in Unity. Are there any related example projects?
Hi,
Object Capture's original sample code was released last year, and this year there was a talk about adding area mode to it. The talk links to the old Object Capture code. When can I expect the new version with area mode, and is there anything I can help with to get it published faster?
Thanks!
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features.
However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only:
https://developer.apple.com/videos/play/wwdc2024/10139/
I think that Apple is shooting itself in the foot with these restrictions. I understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space.
IMHO Apple should decide whether it wants to target consumers in the first place, or whether it wants to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.