Hello,
I keep running into the warning below when pushing a volumetric window. The push itself succeeds, but the warning appears every time, regardless of whether the window is pushed via the Attachment button or via the buttons in the ToolbarItemGroup.
Below is all the relevant code: the app file, the first volume, and the second volume. As you can see in the app file, every volumetric window is indeed declared in a WindowGroup.
What is wrong? How can I get rid of that warning?
Warning:
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
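For reference, here is a minimal sketch of the setup described (the window ids, view names, and button placement are placeholders I made up, not the original code):

import SwiftUI

@main
struct VolumesApp: App {
    var body: some Scene {
        WindowGroup(id: "FirstVolume") {
            FirstVolumeView()
        }
        .windowStyle(.volumetric)

        WindowGroup(id: "SecondVolume") {
            Text("Second volume")
        }
        .windowStyle(.volumetric)
    }
}

struct FirstVolumeView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push second volume") {
            // Both scenes above are WindowGroups, yet this still logs
            // "PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup".
            pushWindow(id: "SecondVolume")
        }
    }
}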
I'm developing a visionOS app and I'm trying to load a ModelEntity from a USDZ file that lives inside my custom RealityKit package called R2UVisionOficial, but it keeps giving me a resourceNotFound error.
import RealityKit
import R2UVisionOficial
import ARKit
/* more code */
do {
    let newEntity: Entity
    //...
    // Loads entity from USDZ inside package
    newEntity = try await ModelEntity(named: "Salas", in: r2UVisionOficialBundle)
    //...
    return newEntity
} catch {
    print("wtManager >>> **** FAILED to load entity:", error.localizedDescription)
    throw error
}
I'm sure I have the Salas.usdz file in the root folder of my package and that I'm using the correct paths. However, I keep getting the error:
Failed to find resource with name "Salas" in bundle
The odd thing is that when I load a USDA (scene) from the same package, it works fine, so I suspect it has something to do with ModelEntity or USDZ files.
Can you please help me?
P.S. This issue is similar to https://developer.apple.com/forums/thread/746842?answerId=780415022#780415022
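Not a definitive answer, but a small diagnostic sketch I would try, assuming the package exposes the r2UVisionOficialBundle constant used above; the URL lookup only verifies that the file actually made it into the built resource bundle:

import RealityKit
import R2UVisionOficial

func loadSalas() async throws -> ModelEntity {
    // Check whether Salas.usdz was actually copied into the package's resource bundle.
    if let url = r2UVisionOficialBundle.url(forResource: "Salas", withExtension: "usdz") {
        print("Salas.usdz found at:", url)
    } else {
        print("Salas.usdz is NOT in the bundle - check the package's resource declaration")
    }
    // Same call as above; it searches the bundle passed in the `in:` parameter.
    return try await ModelEntity(named: "Salas", in: r2UVisionOficialBundle)
}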
I'm trying to downgrade my Vision Pro to visionOS 1.3. I downloaded the visionOS 1.3 IPSW file from the Apple Developer site (on September 25, 2024), but I'm unable to restore the device using this file.
After checking ipsw.me, I noticed that visionOS 1.3 is no longer signed. This makes me wonder if the 1.3 IPSW file, although available on the developer site, might not be usable anymore.
Has anyone else encountered this issue? Is there any official confirmation on whether visionOS 1.3 can still be restored?
I am trying to apply a drag gesture only to entities that have a specific component. My entities have that component, along with the input target and collision components. The gesture works when I use the .targetedToAnyEntity() modifier, but the .targetedToEntity(where:) modifier fails.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(ToyComponent.self))
                .onChanged({ value in
                    value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                })
        )
    }
}
What could be wrong here?
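One thing worth double-checking (purely an assumption on my part, since the component definition isn't shown) is that the custom component is registered before the query runs; a minimal sketch of what I mean:

import SwiftUI
import RealityKit

// Hypothetical minimal version of the component used by .has(ToyComponent.self).
struct ToyComponent: Component, Codable {}

@main
struct ToyApp: App {
    init() {
        // Custom components must be registered once before queries or scene content reference them.
        ToyComponent.registerComponent()
    }

    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}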
Hello,
It is not clear to me why my attachment, no matter how I position it, is always hidden/covered by my visionOS app window. I want to display the attachment one layer above/in front of the window. When my head isn't directed towards the window I can see the attachment, but otherwise it is covered by the window.
I appreciate any help!
ContentView.swift
import SwiftUI
import RealityKit
struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    public var body: some View {
        VStack {
            Text("Hello World")
                .font(.largeTitle)
            Button("Start") {
                Task {
                    await openImmersiveSpace(id: "AppSpace")
                }
            }
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    var loader: EnvironmentLoader

    public var body: some View {
        RealityView { content, attachments in
            content.add(try! await loader.getEntity())

            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let text = attachments.entity(for: "at01") {
                text.position = [0, 0, -0.25]
                headEntity.addChild(text)
            }
        } attachments: {
            Attachment(id: "at01") {
                Text("Hello World!")
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}
App.swift
import SwiftUI
@main
private struct App: App {
    @State var loader = EnvironmentLoader()

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView(loader: loader)
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
We would like to create an Immersive video and store the video file locally in Vision Pro for viewing.
By Immersive video, I mean the video that is played at the end of the Vision Pro experience at the Apple Store (LeBron's dunk, Curry's 3-point shot, tightrope walk, etc.). It is unclear if a way is currently provided to view Immersive video locally.
I can find some information about Spatial video on the Dev site, but I can't find any information about Immersive video. My understanding is:
Spatial video:
A video window appears in space and plays video with depth. Up to 4K side-by-side video can be converted to MV-HEVC format using Xcode and played back in the Photos app.
Immersive video:
180VR video, but I'm not sure how it was created. Similar to Spatial video, I converted a side-by-side 180VR video to MV-HEVC format using Xcode, but it could not be played back in the Photos app as expected.
Vision Pro's Photos app features an Immersive button during video playback, but this appears to be for zooming in on Spatial video to the full field of view, which seems different from Immersive video.
The demo video provided by Apple is streamed from Apple TV, and there are no local files available.
We are currently considering creating an app that displays different videos to each eye, but we prefer not to go this route due to licensing and distribution issues.
I am using RealityView for an iOS program.
Is it possible to turn off the camera passthrough, so only my virtual content is showing? I am looking to create a VR experience.
I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days I think I used something like this:
arView.environment.background = .color(.black)
Is there something similar in RealityView for iOS?
Here are some snippets of my current workaround inside RealityView.
First create the sphere to surround the user:
// Create sphere
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
// Invert the x scale so the inside faces of the sphere are rendered around the viewer
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)
Then turn off occlusion:
// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
    tracking: [],
    sceneUnderstanding: [],
    camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
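For what it's worth, a minimal sketch of the direction I would try first, assuming the content.camera property that RealityView exposes on iOS 18 switches the view to a fully virtual camera (i.e. no passthrough feed):

import SwiftUI
import RealityKit

struct VirtualOnlyView: View {
    var body: some View {
        RealityView { content in
            // Ask RealityView for a purely virtual camera instead of the AR passthrough feed.
            content.camera = .virtual

            // Regular virtual content.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [UnlitMaterial(color: .black)])
            content.add(sphere)
        }
    }
}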
Hi,
We are a team of students working on a project featuring the Vision Pro, and we'd simply like to know whether a third-party app can access the video stream of the front cameras.
From our tests, FaceTime, for example, is able to screen-share the entire stream that the user is seeing (real world + app windows), but apps such as Discord are only able to share app windows; the real world is fully black.
Is this a privacy/security restriction, or is it because third-party apps don't yet have access to the stream of the front cameras?
To give some more context, we'd like to screenshot an area of the view (real world) with a pinch-and-drag gesture, and then access the screenshot to work on it. How would we be able to access the video stream?
Thanks in advance for your help,
MrCubic
HoverEffectComponent on macOS 15 and iOS 18 works fine using RealityView, but seems to be ignored when ARView (even with a SwiftUI UIViewRepresentable) is used.
Feedback ID: FB15080805
Hi, from the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo / video with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30)
However, when I attempt to use AVFoundation and AVCaptureSession with RoomPlan, I get the simple error of "Cannot Record".
So I'm not sure if there is something wrong with my setup/code, or if these two frameworks are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach? Happy to share code if necessary. Thanks
Hi,
I'm currently working on messages that should appear in front of the user depending on the system state of my visionOS app. How can I change the distance of the appearing message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for that message and then apply setPosition(_:relativeTo:), e.g. relative to the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I intend to display that view in my immersive space, it would be great if I could place the message a little further away, as it is currently a bit too close in the user's view. If there is a solution without the use of entities, I would prefer that one.
Thank you for your help!
Below an example:
Feedback.swift
import SwiftUI
struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to user in UI?
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )

        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
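If entities turn out to be acceptable after all, one option is a head-anchored RealityView attachment, where the z component of the position controls the distance from the user. A sketch, assuming an attachment id of "feedback" and an arbitrary 1 m offset:

import SwiftUI
import RealityKit

struct FeedbackImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    var body: some View {
        RealityView { content, attachments in
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let feedback = attachments.entity(for: "feedback") {
                // Negative z moves the view away from the user (in metres).
                feedback.position = [0, 0, -1.0]
                headEntity.addChild(feedback)
            }
        } attachments: {
            Attachment(id: "feedback") {
                Feedback(message: feedbackMessage)
            }
        }
    }
}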
I'm trying the Vision framework on Vision Pro, but it fails only on visionOS 2.0.
When I perform requests, they do not work and the error below is thrown.
The same code works on visionOS 1.2 and iOS 18.0 beta.
I also tried the new beta API (e.g. GenerateForegroundInstanceMaskRequest), but it does not work either and produces the same error.
Do you have any idea what is going on? Is there any permission required to use the Vision framework on visionOS 2.0?
Here is my test list:
With visionOS 2.0 beta 4:
GenerateForegroundInstanceMaskRequest (does not work, error 1)
VNGenerateForegroundInstanceMaskRequest (does not work, error 1)
VNRecognizeTextRequest (does not work, error 2)
With visionOS 1.2:
VNRecognizeTextRequest (works)
With iOS 18 beta:
GenerateForegroundInstanceMaskRequest (works)
My development environment:
Env 1
Vision Pro: visionOS 2.0 beta 4
Xcode: 16.0 beta 4, 16.0 beta 2
macOS: 14.5 (23F79)
Env 2
Vision Pro: visionOS 1.2
Xcode: 15.4
macOS: 14.5 (23F79)
Error1
Error Domain=com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)" UserInfo={NSLocalizedDescription=Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)}
Error2
Error Domain=com.apple.Vision Code=11 "VNRecognizeTextRequest produced an internal error" UserInfo={NSLocalizedDescription=VNRecognizeTextRequest produced an internal error, NSUnderlyingError=0x3001f6850 {Error Domain=CRImageReaderErrorDomain Code=-5 "Unknown error" UserInfo={NSLocalizedDescription=Unknown error}}}
Hi all,
I'm quite new to XR development in general and need some guidance.
I want to create a function that simply tells me if my palm is facing me or not (returning a bool), but I honestly have no idea where to start.
I saw a Reddit post from about 6 months ago that essentially asked for the same thing I need, but the only response was this:
Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user's palm is showing or not.
While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I'm not asking for the entire implementation, but rather just enough to get me on the right track.
Here's basically all I have so far (no idea if this is correct or not):
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Get the wrist, thumb knuckle and little finger metacarpal positions as 3D vectors
    let wristPos = SIMD3<Float>(hand.joint(.wrist).anchorFromJointTransform.columns.3.x,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.y,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.z)
    let thumbKnucklePos = SIMD3<Float>(hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.z)
    let littleFingerPos = SIMD3<Float>(hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.z)
}
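Not a definitive implementation, but here is one way the triangle idea from the quoted answer could be finished. The math assumes all four positions are expressed in the same (e.g. world) coordinate space, so the joint positions above would first need to be combined with the hand anchor's originFromAnchorTransform:

import simd

// Sketch of the wrist / thumb-knuckle / little-finger-metacarpal triangle approach.
func isPalmFacing(wrist: SIMD3<Float>,
                  thumbKnuckle: SIMD3<Float>,
                  littleFingerMetacarpal: SIMD3<Float>,
                  devicePosition: SIMD3<Float>) -> Bool {
    // Two edges of the triangle.
    let edge1 = thumbKnuckle - wrist
    let edge2 = littleFingerMetacarpal - wrist

    // Normal of the triangle. Its direction depends on the winding order,
    // so the comparison below may need to be flipped for one hand (left vs right).
    let palmNormal = simd_normalize(simd_cross(edge1, edge2))

    // Direction from the palm towards the device (the wearer's head).
    let toDevice = simd_normalize(devicePosition - wrist)

    // The palm faces the device when the normal points roughly towards it.
    return simd_dot(palmNormal, toDevice) > 0
}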
In iOS, to display a RealityView, you can assign a value to the content.camera property:
content.camera = .virtual
However, how can this be implemented in macOS and tvOS?
The set up:
I am developing a visionOS app that uses an immersive space.
The user sees a board with entities placed on it. My app places the board in front of the default camera, and the entities at a certain position and orientation relative to the board. Placement and rotation should be animated.
The problem:
If I place the entities by assigning a Transform to the transform property of the entity directly, i.e. without animation, the result is correct.
However, I have to use the entity's move(to:) function to animate it, and move(to:) works in an unexpected way.
I thus wrote a little test app, based on Apple's visionOS immersive app template (below). There, the following 5 cases are treated:
Set transform directly (without animation). This gives the correct result, and works as expected (without animation).
Set transform using move relative to world (without animation). This gives the correct result, although not in the way I expected: I assumed "relative to world" means that translation and rotation are relative to the world, which seems wrong for the translation and right for the rotation.
Set transform using move relative to parentEntity (without animation). This gives a wrong result, although translation and rotation are defined relative to the parentEntity.
Set transform using move relative to world with animation. This also gives a wrong result, and no animation plays.
Set transform using move relative to parentEntity with animation. This also gives a wrong result, and no animation plays.
Here are the screen shots for the cases 1...5:
Cases 1 & 2
Case 3
Cases 4 & 5
The question:
So, obviously, I don't understand what move(to:) does. I would be happy to get any advice on what is wrong and how to do it right.
Here is the code:
import SwiftUI
import RealityKit
import RealityKitContent
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    let boardHeight: Float = 0.1
    let boxHeight: Float = 0.3

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)

            let boxEntity = makeBox(parentEntity: boardEntity)
            boardEntity.addChild(boxEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: boardHeight, depth: 1.0)
        var material = UnlitMaterial(); material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        boardEntity.transform.translation = [0, 0, -3]
        return boardEntity
    }

    func makeBox(parentEntity: Entity) -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 0.3, height: boxHeight, depth: 0.3)
        var material = UnlitMaterial(); material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])

        // Set position and orientation of the box
        // To put the box onto the board, move it up by half the height of the board and half the height of the box
        let y_up = boardHeight/2.0 + boxHeight/2.0
        let translation = SIMD3<Float>(0, y_up, 0)
        // Turn the box by 45 degrees around the y axis
        let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)

        // Do the actual move
        // 1) Set transform directly (without animation)
        boxEntity.transform = transform // Translation and rotation correct

        // 2) Set transform using move relative to world (without animation)
        // boxEntity.move(to: transform, relativeTo: nil) // Translation and rotation correct

        // 3) Set transform using move relative to parentEntity (without animation)
        // boxEntity.move(to: transform, relativeTo: parentEntity) // Translation incorrect, rotation correct

        // 4) Set transform using move relative to world with animation
        // boxEntity.move(to: transform,
        //                relativeTo: nil,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        // 5) Set transform using move relative to parentEntity with animation
        // boxEntity.move(to: transform,
        //                relativeTo: parentEntity,
        //                duration: 1.0,
        //                timingFunction: .linear) // 5) Translation incorrect, rotation incorrect, no animation

        return boxEntity
    }
}
In my visionOS app I am attempting to get the location of a finger press (not a tap, but when the user first presses their fingers together). As far as I can tell, the only way to get this event is to use a SpatialEventGesture.
I currently have a DragGesture and I am able to use the convert functions in the passed-in EntityTargetValue to convert the location3D from the drag event to my hit-tested entity. But as far as I can tell, SpatialEventGesture doesn't use an EntityTargetValue. I've tried using the convert functions on my targeted entity (i.e., myEntity.convert(position:from:)), but these do not return valid values.
My questions are:
Is SpatialEventGesture the correct way to get notified of finger presses?
How do I convert the location3D in the SpatialEventGesture to my entity space?
I tried to use the application icon from sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover on the icon in the Vision Pro simulator. Could you please advise how to fix the problem? I am using the latest Xcode Version 15.4 (15F31d). Thank you.
Hello,
Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space.
Thank you very much in advance!
When opening the Game Center dashboard via the Access Point, the dashboard appears BEHIND any content in the window that has z depth (default window type, not volumetric). The content obscures the dashboard, which makes it unusable.
Alerts have the same placement.
The new defaultWindowPlacement would probably suffice, but I don't think there's a way to apply that to the Game Center window.
What to do?
Thanks.
Hi. I recently added SwiftUI context menus and picker menus to my app, but when they are activated they flicker rapidly, and it is impossible to select anything (there is no hover effect either). When these menus are activated, the console prints lots of warning messages similar to this:
[NetworkComponent] Cannot find component's entity (guid=14395713952467043328, typeID=295756909031380028, type=CustomComponentRCPInputTargetComponent, entity=0x1047c6750).
This issue doesn't seem to happen on visionOS 1.2 simulator, but is reliably reproducible on visionOS 2.0 simulator and device.
Any idea what this might be related to? I am attempting to narrow down on the issue but it's challenging to do so without knowing what the error is about. Thanks!