My visionOS app can install custom fonts.
My visionOS app also lists these fonts as available within the application, and I can see them in a list using CTFontManagerCopyAvailableFontFamilyNames.
I manually track which fonts have been installed.
So far, so good. But here's my problem: when a user uninstalls a font via Settings, I have no way to tell. CTFontManagerCopyAvailableFontFamilyNames still lists that font, since it remains available within the application.
How can I track these changes in my app when a font is uninstalled via Settings?
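For reference, the check I'm doing today looks roughly like this (a simplified sketch of what I described above; the family name is just a placeholder):

import CoreText

// Fonts my app has registered, tracked manually at install time (placeholder name).
let installedByApp: Set<String> = ["SomeCustomFamily"]

// Family names Core Text currently reports as available.
let available = CTFontManagerCopyAvailableFontFamilyNames() as NSArray as? [String] ?? []

// The problem: a family uninstalled via Settings still shows up in `available`,
// so this diff never detects the removal.
let missing = installedByApp.filter { !available.contains($0) }
print("Families no longer reported: \(missing)") // stays empty even after uninstalling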
I am seeking a comprehensive pathway to learning Metal programming on visionOS. The official documentation's Pathway on Metal is insufficient in this regard. I kindly request that someone create a detailed pathway to assist me in this endeavor.
The pathway should encompass the following key areas:
Knowledge Base:
Understand the fundamental principles of Metal and other frameworks, as well as basic concepts, to prepare for future learning.
Metal 3 (very important):
Gain a deep understanding of Metal itself, the graphics API and shading language used to communicate with the device's GPU to render graphics. This knowledge forms the foundation for all Metal-related tasks.
Compositor Services and ARKit (important):
Learn how to display Metal scenes within the Vision Pro's space and enable augmented reality (AR) and hand interaction. This knowledge is essential for creating interactive and immersive experiences.
Metal Performance Shaders:
Acquire expertise in optimizing material rendering to enhance performance.
MetalKit:
Learn how MetalKit simplifies the tasks of displaying your Metal content onscreen.
MetalFX:
Develop proficiency in using MetalFX to improve rendering efficiency and achieve visually stunning effects.
I would appreciate it if you could provide me with a detailed and comprehensive pathway, including the URLs of relevant documents, to guide my learning journey. Thank you for your assistance.
Hi!
I'm creating an app like this:
Use image tracking to set a world anchor in the real world first.
The timeline in the Reality Composer Pro scene needs to play at the same time for all the people in the same place using the app.
Everyone using the app should see the same content, in the same position, at the same time, in the same place.
I already got the image tracking feature working. But the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know whether these are the right frameworks for this project.
How do I solve this problem technically?
If you have ideas, please let me know. I really need help for this.
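For context, the direction I'm currently considering with Group Activities looks roughly like this (the type names are placeholders of mine, and I haven't verified this is the right approach):

import Foundation
import GroupActivities

// Hypothetical activity and message types, just to illustrate the idea.
struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

struct PlayTimelineMessage: Codable {
    let startDate: Date
}

func configure(session: GroupSession<SharedSceneActivity>) {
    let messenger = GroupSessionMessenger(session: session)
    session.join()

    Task {
        // Every participant starts the Reality Composer Pro timeline at the agreed date.
        for await (message, _) in messenger.messages(of: PlayTimelineMessage.self) {
            print("Start timeline at \(message.startDate)")
        }
    }
}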
Hello! Currently watching the "Envision the Future: Build great apps for visionOS" webinar, and lots of questions are coming up. Thx for offering this online!
For those of us with "VR legs", how can we go about setting up custom hand/finger gestures that would let us add the functionality for teleporting and navigating within our fully immersive environments? Both smooth and snap turn/teleport options would be great, thx! This is adjacent to my previous question on how to set up a PS5 controller to do something similar. Think Half-Life: Alyx as the gold standard for VR navigation.
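For what it's worth, this is my current understanding of how a custom gesture could be detected with ARKit hand tracking (just a sketch of the documented API; the teleport/snap-turn logic itself is the part I'm asking about):

import ARKit
import simd

let session = ARKitSession()
let handTracking = HandTrackingProvider()

// A rough pinch detector: distance between the thumb tip and the index finger tip.
func watchForPinch() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard hand.isTracked, let skeleton = hand.handSkeleton else { continue }
        let thumb = skeleton.joint(.thumbTip)
        let index = skeleton.joint(.indexFingerTip)
        // Joint transforms are relative to the hand anchor, so convert them to world space.
        let thumbWorld = hand.originFromAnchorTransform * thumb.anchorFromJointTransform
        let indexWorld = hand.originFromAnchorTransform * index.anchorFromJointTransform
        let distance = simd_distance(simd_make_float3(thumbWorld.columns.3),
                                     simd_make_float3(indexWorld.columns.3))
        if distance < 0.02 {
            // This is where a custom gesture would trigger a snap turn or teleport.
        }
    }
}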
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested that these could be done to scale, and repurposed for eventual viewing in Vision Pro. To illustrate, I was able to quickly create a simple Immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile for quick sending to headset. I then would do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.
My next obvious thought is, how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get any projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller.
Ideally, I'd also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import them into RCP, set up a simple camera controller, and then use this to navigate my VR immersive spaces on Vision Pro. How can we go about doing this?
Part two of this question/suggestion is, how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
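For context, this is as far as I've gotten reading the PS5 controller natively with the GameController framework (a minimal sketch; wiring it up to a camera rig or an animated character is exactly the part I'm asking about):

import GameController

// Listen for a controller connecting and read its left thumbstick.
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect, object: nil, queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let gamepad = controller.extendedGamepad else { return }
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        // x and y are in -1...1; this is where a camera or character rig would be driven.
        print("left stick: \(x), \(y)")
    }
}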
Hello,
I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, and I make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?
About my app:
There are two main contexts/scenes in the app that the user progresses through. The first takes place in AR and is non-interactive, driven by a timeline animation. The second is in VR and allows the user to change the materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well to provide passive haptics to the user.
If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
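For reference, this is the kind of anchoring I mean, sketched with ARKit's world tracking (simplified; I'm assuming a WorldAnchor created from the detected transform is the right handle to keep across spaces):

import ARKit
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run once, e.g. when the first immersive space opens.
func startWorldTracking() async throws {
    try await session.run([worldTracking])
}

// Once the image (or plane) anchor is found, keep its pose as a world anchor so the
// same transform can be reused when the second immersive space opens.
func persistPose(_ originFromObject: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: originFromObject)
    try await worldTracking.addAnchor(anchor)
}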
Hi all,
Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.
The only way I have found that meets our requirements is to save the RoomCaptureView and re-use that RoomCaptureView whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to save a View in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again when we reuse this view (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs.
These instructions also seem to be separate from the instructions we can grab from RoomCaptureViewDelegate via didProvide(instruction: RoomCaptureSession.Instruction), so we can't use those either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. We also attempted a number of other things to try to get these subviews to appear, such as layoutIfNeeded().
Saving the ARSession and using it in let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession), where we're creating a new view with the same ARSession, seems much more ideal, as that solves the above issues. But we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.
Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
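For reference, the second approach (re-creating the view around the shared session) looks roughly like this; this is the setup where world tracking appears to be lost:

import ARKit
import RoomPlan
import UIKit

final class ScanController {
    // Shared ARSession we want to keep alive between scans.
    let arSession = ARSession()
    var roomCaptureView: RoomCaptureView?

    func startScan(in viewBounds: CGRect) {
        // New RoomCaptureView bound to the existing ARSession.
        let view = RoomCaptureView(frame: viewBounds, arSession: arSession)
        roomCaptureView = view
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func stopScan() {
        // Keep the ARSession running so the next scan can share the same world data.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
    }
}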
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
WWDC21 had a cool demo project with fish that had a watery, misty look ("Dive into RealityKit"). It used post-processing in RealityKit, but the ARView class isn't available on visionOS. Can CompositorLayer be used instead for post-processing in full immersion?
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel of cards, and when I use the drag gesture and drag to the left I want the carousel to rotate clockwise, and when I drag to the right I want it to rotate counterclockwise. My problem is that when I rotate my body more than 90 degrees to the left or to the right, the drag gesture's values change and the carousel rotates in the opposite direction. Do you know how I can maintain the correct direction, taking into account that the user can rotate their body?
I've tried taking the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is a small range, roughly between 70 and 110 degrees, where it still rotates in the opposite direction. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
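For completeness, this is roughly how I'm reading the user's orientation today (a sketch of the device-tracking check I described; the threshold logic is the part that feels wrong):

import ARKit
import Foundation
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Assumes `try await session.run([worldTracking])` has already been called elsewhere.
// Returns the headset's yaw (rotation around the world Y axis) in radians.
func currentDeviceYaw() -> Float? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let z = device.originFromAnchorTransform.columns.2
    return atan2(z.x, z.z)
}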
Hi, since I updated my device to visionOS 2.0 or later I have had some problems with my app. Sometimes when I look at buttons or at the TabView (a SwiftUI component that many apps use), the hover effect no longer triggers. I can tap the icon in the TabView, but it doesn't expand when I hover over it to show the button's description. It's weird because it doesn't happen all the time, but on visionOS 1.0 it never happened at all.
My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier that we created. Its buttons are not tappable until I drag the navbar a little. This also never happened on visionOS 1.0 but always happens on visionOS 2.0+; I've tested in the simulator with different versions.
Could these problems be related to the hand-tracking service we use from ARKit? Sometimes when we shut down the trackers, the app works as intended.
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode doesn't recognize the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS.
The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
How do I submit a ticket to Apple?
Can Apple Vision Pro SharePlay different apps at the same time? What about on iPhone or iPad?
Which information will I get from CMSampleBuffer?
Is there an option to block close-up accommodation of the camera?
Is there a way for the Object Capture module to take a video instead of a series of pictures?
It would be fantastic to have an answer to all of these questions, to be able to move forward on new implementations.
Hello everyone,
I'm currently working through the example project "Exploring object tracking with ARKit" to learn how to use object tracking with visionOS.
I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track one instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of one object?
For example, I have a USDZ model of a box and trained that box for object tracking; I'm able to overlay a model of a chair over the box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that when I wear the Vision Pro, I can see the chairs arranged however I want.
I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by just training 1 object and having copies of it.
If it helps, this is my current modification to overlay a virtual object on top of a detected object.
func loadReferenceObjects() async -> [ReferenceObject] {
    // Get a list of all reference object files in the app's main bundle and attempt to load each.
    var referenceObjectFiles: [String] = []
    if let resourcesPath = Bundle.main.resourcePath {
        print("resource path: \(resourcesPath)")
        referenceObjectFiles = (try? FileManager.default.contentsOfDirectory(atPath: resourcesPath)
            .filter { $0.hasSuffix(".referenceobject") }) ?? []
    }

    await withTaskGroup(of: Void.self) { group in
        for file in referenceObjectFiles {
            let objectURL = Bundle.main.bundleURL.appending(path: file)
            group.addTask {
                // Use the file name (without extension) to decide which entity to pair with it.
                let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                // Load each reference object as its own task.
                await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
            }
        }
    }
    return self.referenceObjects
}

// Private helper that loads a single reference object and assigns an entity to it.
private func loadSingleReferenceObject(url: URL, fileName: String) async {
    let referenceObject: ReferenceObject
    do {
        print("Loading reference object from \(url)")
        // Load the file as a `ReferenceObject` - this can take a while for larger objects.
        referenceObject = try await ReferenceObject(from: url)
    } catch {
        fatalError("Failed to load reference object with error \(error)")
    }

    // Add the reference object to the array.
    self.referenceObjects.append(referenceObject)

    // Entity to overlay for this reference object.
    var model = Entity()
    // Pick the entity according to the file name.
    switch fileName {
    case "Box1":
        // The Box1 reference object binds to chair1.
        do {
            model = try await Entity(named: "chair1", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair1")
        }
    case "Box2":
        // The Box2 reference object binds to chair2.
        do {
            model = try await Entity(named: "chair2", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair2")
        }
    default:
        print("No model associated with this file name: \(fileName)")
    }

    // Map the entity to the reference object's identifier.
    usdzPerReferenceObject[referenceObject.id] = model
}
Any help or suggestions would be greatly appreciated.
Thank you.
How can I access the player's camera vector in visionOS, specifically using RealityKit?
In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit.
Is there a related API for this? Any guidance would be greatly appreciated.
Thanks!
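In case it helps clarify what I'm after, this is the kind of thing I'm doing today outside RealityKit (my own sketch using the ARKit device anchor; I'm hoping there's a more direct RealityKit equivalent):

import ARKit
import QuartzCore
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Assumes `try await session.run([worldTracking])` has already been called elsewhere.
// The device looks down its local -Z axis, so "forward" is the negated third column.
func headForwardVector() -> SIMD3<Float>? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return -simd_make_float3(device.originFromAnchorTransform.columns.2)
}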
I created a custom component for Reality Composer Pro in which I have several variables I need an entity to have.
The idea is to add this component to some 3D models and save them as USDZs; then I load those USDZs in code and do specific things depending on these variables.
The component shows up in Reality Composer Pro fine, and I can set the variables there. The problem is that the values I set in Composer are different from what I see in code. Let's say in Composer I set canMove = true; then when I read it in code, it is false.
I don't know if I'm missing something.
public struct MyObjectComponent: Component, Codable {
    public var affectAll: Bool = false
    public var affectFloor: Bool = false
    public var canMove: Bool = false
    public var moveX: Bool = false
    public var moveY: Bool = false
    public var moveZ: Bool = false
    public var canRotate: Bool = false
    public var rotateX: Bool = false
    public var rotateY: Bool = false
    public var rotateZ: Bool = false

    public init() {}
}
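One thing I'm unsure about is registration order. My understanding (which may be wrong) is that the component type has to be registered before any scene that uses it is loaded, something like:

// Called once early in the app, before loading the USDZs (my assumption, not verified).
MyObjectComponent.registerComponent()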
Any help appreciated.
Guillermo
Feature introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/
When I use this feature, my video player has no back action in the player.
We also did not find any system-provided method "addChildViewControllerAndView(form)".
https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos
Referencing this document also did not work.
As soon as you add these lines of code:
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
there is no back button anymore, only full screen and zoom out.
In visionOS 2.1 and 2.2, I'm encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, specifically in the mixed immersive style. Here's a breakdown of the behavior:
In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches.
However, in mixed immersive mode, using the exact same layout “inside” a 3D model (which doesn’t visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion.
From a usability perspective, this restriction seems unnecessary, as the software should ideally allow for similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion.
The scene in question is a house in which the user is placed during the immersion; that's why I refer to the user being "inside" the scene.
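For reference, the space is declared roughly like this (simplified; the identifiers are placeholders):

import SwiftUI
import RealityKit

@main
struct ImmersiveHouseApp: App { // placeholder name
    var body: some Scene {
        WindowGroup {
            Text("Controls") // stand-in for the window that becomes hard to interact with
        }

        ImmersiveSpace(id: "House") {
            RealityView { _ in
                // The house scene that surrounds the user is added here (omitted).
            }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}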
Has anyone else experienced this or found a workaround?