I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
I've been trying to get a drag gesture up and running so I can move my 3D model around in my immersive space, but for some reason I am not able to move it. The model shows up in my visionOS 1.0 simulator, but I can't get it to move. I would love some help with this, and any helpful resources too. Here's a snippet of my RealityView code:
import SwiftUI
import RealityKit
import RealityKitContent
struct GearRealityView: View {
    static var modelEntity = Entity()

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                GearRealityView.modelEntity = model
                content.add(model)
            }
        }.gesture(
            DragGesture()
                .targetedToEntity(GearRealityView.modelEntity)
                .onChanged({ value in
                    GearRealityView.modelEntity.position = value.convert(value.location3D, from: .local, to: GearRealityView.modelEntity.parent!)
                })
        )
    }
}
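For reference, here is a minimal sketch of the pieces a RealityKit drag gesture usually needs on visionOS, assuming the same "LandingGear" asset from the snippet above. The entity gets an InputTargetComponent and a CollisionComponent so the gesture can hit-test it, and the gesture uses targetedToAnyEntity() so it does not keep targeting the empty placeholder entity. Treat it as a sketch, not a confirmed fix.

import SwiftUI
import RealityKit
import RealityKitContent

struct DraggableGearView: View {
    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                // Without these two components the drag gesture has nothing to hit-test.
                model.components.set(InputTargetComponent())
                model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.5, 0.5, 0.5])])) // placeholder collision shape
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location into the dragged entity's parent space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}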
In the example https://developer.apple.com/documentation/imageio/writing-spatial-photos, we see that for each image encoded with the photo we include the following information:
kCGImagePropertyGroups: [
kCGImagePropertyGroupIndex: 0,
kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
(isLeft ? kCGImagePropertyGroupImageIsLeftImage : kCGImagePropertyGroupImageIsRightImage): true,
kCGImagePropertyGroupImageDisparityAdjustment: encodedDisparityAdjustment
],
This identifies which image is left and which is right, along with the group type (stereo pair).
Now, how do you read those back?
I tried to implement reading them back simply with CGImageSourceCopyPropertiesAtIndex, but that did not work; I get back "No property groups found."
func tryToReadThose() {
    guard
        let imageData = try? Data(contentsOf: outputImageURL),
        let source = CGImageSourceCreateWithData(imageData as NSData, nil)
    else {
        print("cannot read")
        return
    }
    for i in 0..<CGImageSourceGetCount(source) {
        guard let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [String: Any] else {
            print("cannot read options")
            continue
        }
        if let propertyGroups = imageProperties[String(kCGImagePropertyGroups)] as? [Any] {
            // Process the property groups as needed
            print(propertyGroups)
        } else {
            print("No property groups found.")
        }
        //print(imageProperties)
    }
}
I assume CGImageSourceCopyPropertiesAtIndex may expect something in its third (options) parameter, but under "Specifying the Read Options" in https://developer.apple.com/documentation/imageio/cgimagesource I don't see anything related to that.
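Two hedged things worth trying, stated as assumptions rather than documented behavior: the stereo-pair groups may live in the container-level properties rather than the per-image ones, and the groups may need to be requested explicitly through the options dictionary. A sketch of both checks (outputImageURL stands in for the file written earlier):

import Foundation
import ImageIO

func readStereoGroups(from outputImageURL: URL) {
    guard let source = CGImageSourceCreateWithURL(outputImageURL as CFURL, nil) else {
        print("cannot read")
        return
    }
    // Assumption: including kCGImagePropertyGroups in the options asks ImageIO to return group info.
    let emptyGroupRequest: [CFString: Any] = [:]
    let options = [kCGImagePropertyGroups: emptyGroupRequest] as CFDictionary

    // Container-level properties (not tied to a single image index).
    if let fileProperties = CGImageSourceCopyProperties(source, options) as? [String: Any],
       let groups = fileProperties[String(kCGImagePropertyGroups)] {
        print("container-level groups:", groups)
    }
    // Per-image properties, this time passing the options dictionary.
    for index in 0..<CGImageSourceGetCount(source) {
        if let properties = CGImageSourceCopyPropertiesAtIndex(source, index, options) as? [String: Any],
           let groups = properties[String(kCGImagePropertyGroups)] {
            print("image \(index) groups:", groups)
        }
    }
}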
With quite some excitement I read about visionOS 2's new feature to automatically turn regular 2D photos into spatial photos, using machine learning. It's briefly mentioned in this WWDC video:
https://developer.apple.com/wwdc24/10166
My question is: Can developers use this feature via an API, so we can turn any image into a spatial image, even if it is not in the device photo library?
We would like to download an image from our server, convert it on the Vision Pro on the fly, and display it as a spatial photo.
How to display the user's own persona in a view
I followed the WWDC video to learn SharePlay. I understood the initial seat creation, but I couldn't follow some of the later content very well, so I hope you can give me some example code. The details are as follows:
I have already set up the seats.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
I hope you can give me a SharePlay button that, when pressed, assigns all users in the FaceTime call to the seats defined in TeamSelectionTemplate. Thank you very much.
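As a minimal sketch of the button side, assuming a hypothetical TeamSelectionActivity GroupActivity (not from the original post): activating the activity is what starts the SharePlay session for everyone on the FaceTime call, and seat assignment then happens through the session's system coordinator, as discussed further below.

import SwiftUI
import GroupActivities

// Hypothetical activity; a real app would define its own GroupActivity.
struct TeamSelectionActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Team Selection"
        meta.type = .generic
        return meta
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Starts (or joins) the group activity for everyone on the call.
                _ = try? await TeamSelectionActivity().activate()
            }
        }
    }
}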
When I use .mov video files to create video materials in RealityKit, they display correctly on my modelEntity. However, when I try a video file in the .mp4 format, I only get a solid black material. Does AVKit support playing .mp4 video files on visionOS 1.2?
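For context, a minimal sketch of the VideoMaterial setup being described, with a placeholder URL. AVPlayer accepts both .mov and .mp4 containers, so a black material is more likely a codec or playback-state issue than a container issue (an assumption, not a confirmed diagnosis).

import AVFoundation
import RealityKit

// "videoURL" is a placeholder for the .mp4 or .mov file being tested.
func makeVideoPlane(videoURL: URL) -> ModelEntity {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])
    player.play() // the material stays black until playback starts
    return plane
}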
Has there been an adjustment in the maximum number of DrawableQueues that can be swapped for textures in VisionOS 2? Or an adjustment in the total amount of RAM allowed in a scene?
I have been having a difficult time getting more than one DrawableQueue to appear when it worked fine in VisionOS 1.x.
I would like to code some RealityViews to run on my Mac first (and then incorporate them in a visionOS project) so that my code/test loop is faster, but I have not been able to find a simple example that supports Mac.
Is it possible to have volumes on a Mac? Is there support for using a game controller to move around the RealityView, like in the visionOS simulator?
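RealityView is available outside visionOS starting with macOS 15 and iOS 18, so a plain SwiftUI macOS target can host one; here is a minimal sketch under that assumption. Volumes themselves are a visionOS-only window style, and the built-in game-controller navigation is a feature of the visionOS simulator as far as I know, so on the Mac you would drive the camera yourself.

import SwiftUI
import RealityKit

// A minimal sketch for a macOS 15+ SwiftUI target; the sphere is placeholder content.
struct MacRealityView: View {
    var body: some View {
        RealityView { content in
            // Use the built-in virtual camera so the scene renders inside the window.
            content.camera = .virtual

            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            content.add(sphere)
        }
    }
}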
How can I remove or hide this part under a SwiftUI panel?
What Swift code should I write to control this?
Hi everyone,
I'm working on a Vision Pro application and encountering an issue while trying to load a USDZ file. Here are the details:
File Path:
/Users/siddharthpatel/Library/Developer/CoreSimulator/Devices/31F10013-50B6-4CEF-9388-9094087FAEBF/data/Containers/Data/Application/EB260F0A-A84F-4E95-876D-08199D2A4998/Documents/hive1.usdz
Code:
do {
    modelEntityForCollider = try await ModelEntity(contentsOf: fileURL!)
} catch {
    print("Error loading model: \(error)")
}
Error:
Thread 1: Fatal error: Failed to import entity from "/Users/siddharthpatel/Library/Developer/CoreSimulator/Devices/31F10013-50B6-4CEF-9388-9094087FAEBF/data/Containers/Data/ ... ve1.usdz"
I've verified that the file path is correct and the USDZ file exists at the specified location. What could be causing this error and how can I resolve it?
Thanks in advance for your help!
Siddharth
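A hedged diagnostic sketch, not a confirmed fix: avoid force-unwrapping the URL and confirm the file is actually reachable inside the app container before importing it, so a missing or moved file can be distinguished from a genuinely broken USDZ.

import Foundation
import RealityKit

func loadCollider(from fileURL: URL?) async -> ModelEntity? {
    guard let fileURL else {
        print("fileURL is nil")
        return nil
    }
    guard FileManager.default.fileExists(atPath: fileURL.path) else {
        print("No file at \(fileURL.path)")
        return nil
    }
    do {
        return try await ModelEntity(contentsOf: fileURL)
    } catch {
        print("Error loading model: \(error)")
        return nil
    }
}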
Hello,
I am new to SwiftUI and visionOS, but I developed an app with a window and an ImmersiveSpace. I want the immersive space to be dismissed when the window/app is closed.
I have the code below, which uses the scenePhase environment value; it was working fine in visionOS 1.1 but stopped working with visionOS 2.0.
Any idea what I am doing wrong? Is there another way to handle the dismissal of ImmersiveSpace when my main Window is closed?
@main
struct MyApp: App {
    @State private var viewModel = ViewModel()

    var body: some Scene {
        @Environment(\.scenePhase) var scenePhase
        @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

        WindowGroup {
            SideBarView()
                .environment(viewModel)
                .frame(width: 1150, height: 700)
                .onChange(of: scenePhase, { oldValue, newValue in
                    if newValue == .inactive || newValue == .background {
                        Task {
                            await dismissImmersiveSpace()
                            viewModel.immersiveSpaceIsShown = false
                        }
                    }
                })
        }.windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView(area: viewModel.currentModel)
                .environment(viewModel)
        }
    }
}
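One hedged variant to try, stated as an assumption rather than a confirmed fix: read scenePhase and dismissImmersiveSpace from a View inside the window instead of declaring them as locals in the App's body, so the environment values are installed where SwiftUI expects them.

import SwiftUI

struct SideBarContainerView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(ViewModel.self) private var viewModel // assumes the @Observable ViewModel from the post

    var body: some View {
        SideBarView()
            .onChange(of: scenePhase) { _, newValue in
                if newValue == .inactive || newValue == .background {
                    Task {
                        await dismissImmersiveSpace()
                        viewModel.immersiveSpaceIsShown = false
                    }
                }
            }
    }
}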
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?
Hello,
I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with:
Capturing Perspective Camera View: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?
Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering if SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView.
I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?
Any guidance, sample code, or references to documentation would be greatly appreciated.
Thank you in advance for your help!
For visionOS 2.0+, the object tracking feature has been announced. Is there any support for it in Unity's PolySpatial, or is it only available via Swift and Xcode?
I am trying to make the immersive version of AVPlayerViewController bigger, but I can't find any information on how to go about it. It seems that if I want to change the immersive video viewing experience, the only thing I can do is use a VideoMaterial and put it on a ModelEntity created with .generatePlane. Is there a way to change the video size in immersive mode for AVPlayerViewController?
I found that the app AirDraw can export users' draw scenes to a USDZ file. So how can I implement this function using RealityKit?
I learned about SharePlay from the WWDC video. I understand the creation of seats, but I can't follow some of the later content well, so I hope you can help me. The details are as follows: I have set up the seats.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
As mentioned in one of my previous posts ("I hope you can give me a SharePlay button. After pressing it, it will assign all users in FaceTime to a seat with the elements defined in TeamSelectionTemplate."), someone replied and asked me to try systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate()). However, Xcode reports the error "Cannot find 'systemCoordinator' in scope". How do I solve this? Thank you!
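For reference, a minimal sketch of where systemCoordinator comes from, assuming a hypothetical TeamSelectionActivity GroupActivity: it is a property of the GroupSession delivered when the activity starts, which is why referencing it outside a session handler produces the "not in scope" error.

import GroupActivities

func observeTeamSelectionSessions() async {
    for await session in TeamSelectionActivity.sessions() {
        // The system coordinator lives on the session, so it is only in scope here.
        if let systemCoordinator = await session.systemCoordinator {
            var configuration = SystemCoordinator.Configuration()
            configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
            systemCoordinator.configuration = configuration
        }
        session.join()
    }
}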
Hello,
We are wondering how, and if, we can save or render the view that a PerspectiveCamera sees in visionOS.
Ideally, the goal is to use a PerspectiveCamera in a Mixed Immersive space and render out what it sees.
We don't see any methods exposed for rendering the view.
We would truly appreciate any advice on this!
Thank you.
I have an immersive space that is rendered using Metal. Is there a way I can position SwiftUI views at coordinates relative to positions in my immersive space?
I know that I can display a volume with RealityKit content simultaneously with my Metal content. The volume's coordinate system, specifically its bounds, does not coincide with my entire Metal scene.
One approach I thought of would be to open two views in my immersive space. That way, I could simply add attachments to invisible RealityKit entities in one view at positions where I have content in my Metal scene.
Unfortunately, it seems that while I can declare an ImmersiveSpace composed of multiple RealityViews, like this:
ImmersiveSpace {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }

    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}
which results in two coinciding RealityKit views in the immersive space,
I cannot do something like this:
ImmersiveSpace {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my metal renderer and stuff
    }

    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
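For reference, a minimal sketch of the RealityView attachments pattern mentioned above (the attachment ID and position are placeholders). This only covers the attachment mechanics inside a single RealityView, not the open question of combining it with a CompositorLayer.

import SwiftUI
import RealityKit

struct AttachmentOverlayView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                // Placeholder position; in practice this would mirror a position in the Metal scene.
                label.position = SIMD3<Float>(0, 1.5, -2)
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Hello from SwiftUI")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}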