Hi, I have two views and an immersive space. The first and second views are displayed in a TabView, and I open my ImmersiveSpace from a button in the first tab. When I switch to the second tab, I want to show an attachment in my immersive space. This attachment should be visible in the immersive space only as long as the user is on the second view. This is what I have done so far:
struct Second: View {
    @StateObject var sharedImageData = SharedImageData()

    var body: some View {
        VStack {
            // other code
        }
        .onAppear {
            Task {
                sharedImageData.shouldCameraButtonShouw = true
            }
        }
        .onDisappear {
            Task {
                sharedImageData.shouldCameraButtonShouw = false
            }
        }
    }
}
This is my Immersive space
struct ImmersiveView: View {
    @EnvironmentObject var sharedImageData: SharedImageData

    var body: some View {
        RealityView { content, attachments in
            // some code
        } update: { content, attachments in
            guard let controlCenterAttachmentEntity =
                attachments.entity(for: Attachments.controlCenter) else { return }
            controlCenterentity.addChild(controlCenterAttachmentEntity)
            content.add(controlCenterentity)
        } attachments: {
            if sharedImageData.shouldCameraButtonShouw {
                Attachment(id: Attachments.controlCenter) {
                    ControlCenter()
                }
            }
        }
    }
}
And this is my observable class:
class SharedImageData: ObservableObject {
    @Published var takenImage: UIImage? = nil
    @Published var shouldCameraButtonShouw: Bool = false
}
My problem is that when I am on the Second view, my attachment never appears. The attachment does appear if I remove the if condition. How can I achieve my goal?
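For reference, one detail that may explain this, offered as a guess from the snippets above: Second creates its own @StateObject, so it owns a different SharedImageData instance than the one ImmersiveView reads through @EnvironmentObject, and toggling shouldCameraButtonShouw on that copy never reaches the RealityView. A minimal sketch of sharing a single instance from the App struct (YourApp, MainTabs, and the scene IDs are placeholders, not taken from the post):

import SwiftUI

@main
struct YourApp: App {
    // One shared instance injected into both scenes, so the tab views and
    // the immersive space observe the same object.
    @StateObject private var sharedImageData = SharedImageData()

    var body: some Scene {
        WindowGroup {
            MainTabs() // hypothetical TabView hosting the first and second views
                .environmentObject(sharedImageData)
        }

        ImmersiveSpace(id: "immersiveSpace") {
            ImmersiveView()
                .environmentObject(sharedImageData)
        }
    }
}

With this setup, Second would declare @EnvironmentObject var sharedImageData: SharedImageData instead of its own @StateObject, so the onAppear/onDisappear toggles drive the same object the attachments closure observes.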
Hello,
I downloaded the most recent Xcode 16.0 beta 6 along with the example project located here
Currently I am experiencing the following build failures:
RealityAssetsCompile
...
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
I saw that there is a similar issue reported. As a test, I downloaded that project and it compiled as expected.
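For context, the diagnostics are asking for the platforms entry of the Reality Composer Pro content package to be raised. A hedged sketch of what that Package.swift might look like (the package name and tools version are assumptions; the components listed in the errors require visionOS 2.0):

// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // BlendShapeWeights, EnvironmentLightingConfiguration, and AudioLibrary
        // are not available on "xros 1.0", so the minimum must be raised.
        .visionOS(.v2)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)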
Hello,
I'm trying to load a video in an immersive space.
Using VideoMaterial applied to a plane surface, I'm able to load a video that I have locally.
If I want to load something from an external link, like YouTube or another service, how can I do that?
Keep in mind that, obviously, I'm loading it in an immersive space.
Thanks ;)
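For what it's worth, VideoMaterial is driven by an AVPlayer, and AVPlayer can stream any directly reachable HTTP(S) asset such as an HLS or MP4 URL; services like YouTube do not expose such direct URLs, so those would need their own embed or player options. A minimal sketch of streaming a remote file onto a plane in an immersive scene (the URL and sizes are placeholders):

import AVFoundation
import RealityKit

func makeStreamingVideoPlane() -> ModelEntity {
    // Placeholder URL: any directly reachable HLS or MP4 stream.
    let url = URL(string: "https://example.com/stream/master.m3u8")!
    let player = AVPlayer(url: url)

    // A 16:9 plane with the video mapped onto it.
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [material]
    )

    player.play()
    return plane
}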
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
The following RealityView ModelEntity animated-text code works in visionOS 1.0. In visionOS 2.0, when running the same piece of code, the model entity's move duration does not seem to work. Are there changes to the way it works that I am missing? Thank you in advance.
RealityView { content in
    let textEntity = generateMovingText()
    content.add(textEntity)
    _ = try? await arkitSession.run([worldTrackingProvider])
} update: { content in
    guard let entity = content.entities.first(where: { $0.name == .textEntityName }) else { return }

    if let pose = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        entity.position = .init(
            x: pose.originFromAnchorTransform.columns.3.x,
            y: pose.originFromAnchorTransform.columns.3.y,
            z: pose.originFromAnchorTransform.columns.3.z
        )
    }

    if let modelEntity = entity as? ModelEntity {
        let rotation = Transform(rotation: simd_quatf(angle: -.pi / 6, axis: [1, 0, 0])) // Adjust angle as needed
        modelEntity.transform = Transform(matrix: rotation.matrix * modelEntity.transform.matrix)

        let animationDuration: Float = 60.0 // Adjust the duration as needed
        let moveUp = Transform(scale: .one, translation: [0, 2, 0])
        modelEntity.move(to: moveUp, relativeTo: modelEntity, duration: TimeInterval(animationDuration), timingFunction: .linear)
    }
}
The source is available at the following:
https://github.com/Sebulec/crawling-text
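One thing that may be worth ruling out, offered as a guess rather than a confirmed visionOS 2.0 change: the update closure can run on every state change, and each pass re-applies the rotation and calls move(to:), which restarts the 60-second animation from the current pose. If the update closure fires more often in 2.0, the move would keep resetting. A sketch that starts the animation once in the make closure instead (identifiers reused from the snippet above):

RealityView { content in
    let textEntity = generateMovingText()
    content.add(textEntity)
    _ = try? await arkitSession.run([worldTrackingProvider])

    // Start the long-running move a single time, when the entity is created.
    let moveUp = Transform(scale: .one, translation: [0, 2, 0])
    textEntity.move(to: moveUp, relativeTo: textEntity, duration: 60, timingFunction: .linear)
} update: { content in
    // Keep only the per-update device-anchor positioning here, so the
    // animation is not restarted on every update pass.
}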
I'm building a visionOS 2.0 app where the AVP user can change the position of the end effector of a robot model, which was generated in Reality Composer Pro (RCP) from primitive shapes. I'd like to use an IKComponent to achieve this functionality, following the example code here. I am able to load my entity and access its MeshResource following the IKComponent example code, but on the line
let modelSkeleton = meshResource.contents.skeletons[0]
I get an error since my MeshResource does not include a skeleton.
Is there some way to directly generate the skeleton with my entities in RCP, or is there a way to add a skeleton generated in Xcode to an existing MeshResource that corresponds to my entities generated in RCP? I have tried using MeshSkeletonCollection.insert() with a skeleton I generated in Xcode, but I cannot figure out how to assign this skeletonCollection to the MeshResource of the entity.
Hi, I would like to add a top bar to the panel in visionOS by using ToolbarTitleMenu, referring to the documentation: https://developer.apple.com/documentation/swiftui/toolbartitlemenu,
but in the simulator I cannot see the top bar. What's wrong with my code?
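Since the code isn't included, here is only a hedged sketch of the usual setup: ToolbarTitleMenu attaches to the navigation title, so it generally only shows up when the view sits inside a NavigationStack and declares a navigationTitle. The view name and menu entries below are placeholders:

import SwiftUI

struct PanelView: View {
    var body: some View {
        NavigationStack {
            Text("Content")
                .navigationTitle("Library") // the title menu hangs off this title
                .toolbar {
                    ToolbarTitleMenu {
                        Button("Sort by Name") { /* placeholder action */ }
                        Button("Sort by Date") { /* placeholder action */ }
                    }
                }
        }
    }
}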
When I try to import CompositorServices, I get an error:
dyld[596]: Symbol not found: _$sSo13cp_drawable_tV18CompositorServicesE17computeProjection37normalizedDeviceCoordinatesConvention9viewIndexSo13simd_float4x4aSo0A26_axis_direction_conventionV_SitF
Referenced from: /private/var/containers/Bundle/Application/33008953-150D-4888-9860-28F41E916655/VolumeRenderingVision.app/VolumeRenderingVision.debug.dylib
Expected in: <968F7985-72C8-30D7-866C-AD8A1B8E7EE6> /System/Library/Frameworks/CompositorServices.framework/CompositorServices
The app wrongly refers to my Mac's local directory, even though I chose Vision Pro as the run destination. My Mac has been updated to macOS 15 beta 7, and I have not had this issue before.
Hello,
I am developing a visionOS-based application that uses various data providers like image tracking, plane detection, and scene reconstruction, but these are not supported in the visionOS simulator. What is the workaround for this issue?
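Each of those providers exposes a static isSupported flag, so one common workaround (a sketch, not taken from the post) is to gate the ARKit features at runtime and fall back to placeholder behavior whenever a provider is unavailable, for example in the simulator:

import ARKit

/// Runs only the data providers the current platform supports,
/// so the same code path works on device and in the simulator.
func startSupportedProviders(session: ARKitSession) async throws {
    var providers: [any DataProvider] = []

    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }

    if providers.isEmpty {
        // Simulator: skip tracking and drive the scene with canned or placeholder data instead.
        print("ARKit data providers unavailable; using simulator fallback.")
        return
    }

    try await session.run(providers)
}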
Hi everyone,
I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated.
I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either.
Thanks!
Here's a video clearly demonstrating the problem:
https://youtu.be/-IbyaaIzh0I
This is a major issue for my game because it's designed to be played only once, so it really ruins the experience if it runs poorly until someone force quits or the game crashes.
Does anyone have a solution to this, or has encountered this issue of poor initial launch performance?
I made this game in Unity and I'm not sure if this is an Apple issue or a Unity issue.
The sample code project TabletopKit Sample, found at the article Creating tabletop games here, fails to compile with the following errors in Xcode 16 beta 6:
error: [xrsimulator] Component Compatibility: Billboard not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
Is it possible to get the location where a user performs the tap gesture on the screen? Like an x/y coordinate that can be used within my app.
I know there is SpatialTapGesture, but from what I can tell it is only linked to content entities, like a cube or something. Does this mean I can only get the x/y coordinate data by opening an immersive space and using the tap gesture in relation to some entity?
TL;DR: Can I get the location of a tap gesture in visionOS in a regular app, without opening an immersive space?
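SpatialTapGesture also works on ordinary SwiftUI views in a regular window, where its value reports a CGPoint in the view's own coordinate space, so an immersive space isn't required for 2D coordinates. A minimal sketch (view name and sizes are placeholders):

import SwiftUI

struct TapLocationView: View {
    @State private var lastTap: CGPoint = .zero

    var body: some View {
        Rectangle()
            .fill(.blue.opacity(0.3))
            .frame(width: 400, height: 300)
            .gesture(
                SpatialTapGesture()
                    .onEnded { value in
                        // value.location is the tap point in this view's coordinates.
                        lastTap = value.location
                    }
            )
            .overlay(Text("Last tap: \(Int(lastTap.x)), \(Int(lastTap.y))"))
    }
}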
I want to learn how to use the RealityKit Debugger tool covered at https://developer.apple.com/wwdc24/10172. The video says to download the ClubView.Swift file, so where is it?
The sample code project PencilKitCustomToolPicker found at this article Configuring the PencilKit tool picker here fails to compile with the following errors in Xcode 16 beta 6
The sample code project RealityKit-Stereo-Rendering found at the article Rendering a windowed game in stereo here fails to compile with the following errors in Xcode 16 beta 6.
Is it possible to show a SafariWebView in an ImmersiveSpace with hand tracking enabled?
I have an app with an initialView that launches an immersive space and opens a SafariView. I noticed that hand tracking stops working when I open the SafariView, but not when I open the TestView (which is just an empty window).
Here's the Scene:
var body: some Scene {
    WindowGroup(id: "control") {
        InitialView()
    }.windowResizability(.contentSize)

    WindowGroup(id: "test") {
        TestView()
    }.windowResizability(.contentSize)

    WindowGroup(id: "safari") {
        SafariView(url: URL(string: "some URL")!)
    }

    ImmersiveSpace(id: "immersiveSpace") {
        ImmersiveView()
    }
}
I am developing an immersive visionOS app based on RealityKit and SwiftUI.
This app has ModelEntities that have a PerspectiveCamera entity as a child.
I want to display the camera view in a 2D window in visionOS.
I am creating the camera and adding it to the entity with:
let cameraEntity = PerspectiveCamera()
cameraEntity.camera.far = 10000
cameraEntity.camera.fieldOfViewInDegrees = 60
cameraEntity.camera.near = 0.01
entity.addChild(cameraEntity)
My app is not AR. The immersive view is programmatically generated.
In iOS, I could use an ARView with a non-AR camera mode. However, ARView is not available in visionOS.
How can I show the camera view in a SwiftUI 2D window in the immersive space?
Hi everyone, I am having trouble implementing spatial video recording to files by following the WWDC24 video Build compelling spatial photo and video experiences. Specifically, the flag isSpatialVideoCaptureSupported of AVCaptureMovieFileOutput shows FALSE when the code is tested on both my physical iPhone 15 Pro (iOS 18.1) and the simulator (iOS 18.0).
This is the code that I am running:
let movieFileOutput = AVCaptureMovieFileOutput()
print("movieCapture output isSpatialVideoCaptureSupported: \(movieFileOutput.isSpatialVideoCaptureSupported)")
However, one of the formats of the AVCaptureDevice shows TRUE for the flag isSpatialVideoCaptureSupported:
for format in currentDevice.formats {
    if format.isSpatialVideoCaptureSupported {
        print("isSpatialVideoCaptureSupported is true")
        break
    }
}
I am totally confused now: why DOES the camera device support spatial mode while the movie file output DOES NOT? Can someone please help? Really appreciate it!!
Here is my testing environment:
iPhone 15 Pro iOS 18.1 (US version)
Xcode 16.0 beta 16A5171c
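One possible explanation, offered as an assumption rather than documented behavior: a freshly created AVCaptureMovieFileOutput is not attached to any session or device, so it has nothing spatial-capable to report on; the output's flag is usually consulted after it has been added to a session whose video device uses a spatial-capable active format. A sketch of that ordering (the device choice and format selection are assumptions):

import AVFoundation

// A sketch: check the output's spatial support only after it is wired to a
// session whose video device uses a spatial-capable format.
func configureSpatialCapture() throws -> AVCaptureMovieFileOutput? {
    let session = AVCaptureSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Spatial video needs a multi-camera device; the dual wide camera is assumed here.
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
        return nil
    }

    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    let movieOutput = AVCaptureMovieFileOutput()
    guard session.canAddOutput(movieOutput) else { return nil }
    session.addOutput(movieOutput)

    // Pick a format that advertises spatial support and make it active.
    if let format = device.formats.first(where: { $0.isSpatialVideoCaptureSupported }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }

    if movieOutput.isSpatialVideoCaptureSupported {
        movieOutput.isSpatialVideoCaptureEnabled = true
    }
    return movieOutput
}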
When I use openWindow to show a volume window, I want to hide the old window but keep it where it is, and when I close the volume window, the old window should appear again. The logic is simple. I tried to use opacity to hide the old window, but the window bar is still there, which is very annoying. How could I solve this?