I've created an app for visionOS that uses a custom package that includes RealityKitContent as well (as a sub-package). I now want to turn this app into a multi-platform app that also supports iOS.
When I try to compile the app for this platform, I get this error message:
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
So I want to exclude RealityKitContent from my package on iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all.
I also posted this on the Swift forums, but no one could help me there either, so I am trying my luck here.
Here is my Package.swift file:
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]
        ),
    ],
    dependencies: [
        .package(path: "../BackendServices"),
        .package(path: "../MeteorDDP"),
        .package(path: "Packages/OverlaysRealityKitContent"),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]
        ),
    ]
)
Based on a recommendation in the Swift forum, I also tried this:
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(
                name: "OverlaysRealityKitContent",
                package: "OverlaysRealityKitContent",
                condition: .when(platforms: [.visionOS])
            )
        ]
    ),
    ...
]
but this doesn't work either.
The problem seems to be that the package is listed under dependencies at all, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on the content (on visionOS), and I would rather not split the package into two parts (one for iOS and one for visionOS) if possible.
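(For completeness: guarding the call sites in the Overlays sources, along the lines of the sketch below, is not the hard part; as described above, realitytool kicks in simply because the content package appears under dependencies at all. The bundle accessor and scene name in the sketch are hypothetical.)

import RealityKit

#if canImport(OverlaysRealityKitContent)
import OverlaysRealityKitContent
#endif

// Sketch only: conditional use of the RealityKit content from the Overlays target.
func makeOverlayRoot() async -> Entity {
    #if canImport(OverlaysRealityKitContent)
    // visionOS: load the scene from the content bundle (accessor name is hypothetical).
    if let scene = try? await Entity(named: "Scene", in: overlaysRealityKitContentBundle) {
        return scene
    }
    #endif
    // iOS (or load failure): fall back to an empty entity.
    return Entity()
}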
I'm trying to detect when two entities collide. The following code shows a very basic set-up. How do I get the upperSphere to move? If I set its physicsBody.mode to .dynamic it moves with gravity (and the collision is reported), but when in kinematic mode it doesn't respond to the impulse:
struct CollisionView: View {
    @State var subscriptions: [EventSubscription] = []

    var body: some View {
        RealityView { content in
            let upperSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))
            let lowerSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))

            upperSphere.position = [0, 2, -2]
            upperSphere.physicsBody = .init()
            upperSphere.physicsBody?.mode = .kinematic
            upperSphere.physicsMotion = PhysicsMotionComponent()
            upperSphere.generateCollisionShapes(recursive: false)

            lowerSphere.position = [0, 1, -2]
            lowerSphere.physicsBody = .init()
            lowerSphere.physicsBody?.mode = .static
            lowerSphere.generateCollisionShapes(recursive: false)

            let sub = content.subscribe(to: CollisionEvents.Began.self, on: nil) { _ in
                print("Collision!")
            }
            subscriptions.append(sub)

            content.add(upperSphere)
            content.add(lowerSphere)

            Task {
                try? await Task.sleep(for: .seconds(2))
                print("Impulse applied")
                upperSphere.applyLinearImpulse([0, -1, 0], relativeTo: nil)
            }
        }
    }
}
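If it helps frame the question: my working assumption is that impulses and forces only affect dynamic bodies, and that a kinematic body has to be driven explicitly, for example via its PhysicsMotionComponent as in the sketch below (a replacement for the Task at the end of the code above, illustration only). I'd like to confirm whether applyLinearImpulse is supposed to work in kinematic mode at all.

// Sketch only: instead of applyLinearImpulse, drive the kinematic sphere directly
// by setting a velocity on its PhysicsMotionComponent.
Task {
    try? await Task.sleep(for: .seconds(2))
    print("Velocity set")
    upperSphere.physicsMotion?.linearVelocity = [0, -1, 0]   // metres per second, straight down
}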
When developing a Vision Pro application, I need to first move and then rotate an Entity in a RealityView.
How can these two animations be executed sequentially? (I tested it, and executing them simultaneously results in incorrect animation positions.)
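What I tested looks roughly like the sketch below (illustrative only; the entity, its parent, and the target values are hypothetical):

// Sketch of the simultaneous version I tried.
var moved = entity.transform
moved.translation += [0, 0, -0.5]
entity.move(to: moved, relativeTo: entity.parent, duration: 1.0)

var rotated = moved
rotated.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) * rotated.rotation
// This second animation starts right away as well, which is presumably why the positions end up wrong.
entity.move(to: rotated, relativeTo: entity.parent, duration: 1.0)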
I get a crash every time I try to swap this texture for a drawable queue.
I have a DrawableQueue, leftQueue, created on the main thread, and I invoke this block on the main thread. Scene.usda contains a reference to a model called cinema. It crashes on the line with replace().
if let shaderMaterial = try? await ShaderGraphMaterial(named: "/Root/cinema/_materials/Screen", from: "Scene.usda", in: theaterBundle) {
    if let imageParam = shaderMaterial.getParameter(name: "image"), case let .textureResource(imageTexture) = imageParam {
        imageTexture.replace(withDrawables: leftQueue)
    }
}
One weird thing: imageParam has an invalid value when it crashes.
imageParam RealityFoundation.MaterialParameters.Value <invalid> (0x0)
Here is the stack trace:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x9)
frame #0: 0x0000000191569734 libobjc.A.dylib`objc_release + 8
frame #1: 0x00000001cb9e5134 CoreRE`re::SharedPtr<re::internal::AssetReference>::reset(re::internal::AssetReference*) + 64
frame #2: 0x00000001cba77cf0 CoreRE`re::AssetHandle::operator=(re::AssetHandle const&) + 36
frame #3: 0x00000001ccc84d14 CoreRE`RETextureAssetReplaceDrawableQueue + 228
frame #4: 0x00000001acf9bbcc RealityFoundation`RealityKit.TextureResource.replace(withDrawables: RealityKit.TextureResource.DrawableQueue) -> () + 164
* frame #5: 0x00000001006d361c Screenlit`TheaterView.setupMaterial(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:245:74
frame #6: 0x00000001006e0848 Screenlit`closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:487
frame #7: 0x00000001006f1658 Screenlit`partial apply for closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter at <compiler-generated>:0
frame #8: 0x00000001004fb7d8 Screenlit`thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
frame #9: 0x0000000100500bd4 Screenlit`partial apply for thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a maximum texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general).
Does Apple have any recommended max polygon counts for visionOS? Is it the same for models running in a Volumetric window in the shared space and in ImmersiveSpace?
What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now)
I want to transfer the camera's video stream to another device and then view it there. However, when checking the visionOS documentation I did not find any development information related to the camera, so I would like to ask if anyone knows how to do this.
Thank you.
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player using different options like hand gestures, a game controller, and teleporting.
I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how to achieve this in visionOS.
Has anyone done something similar who could give me some advice or sample code?
Any help appreciated.
Guillermo
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance.
It seems Apple quickly transitions from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off.
Are they tracking the head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes, and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.
I've made a few small changes to the Mixed Immersive app template that demonstrate this behavior, which I've included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online), the memory leak becomes much more obvious.
When the immersive space is initially opened, I'm seeing roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads will continue to increase the used memory (the amount of increase will depend on the models that you add to the scene).
Also note that I've seen similar memory increases when dynamically creating the entities. Inside ViewModel.loadModels I've included some commented out code that dynamically creates entities instead of loading a Reality Composer scene.
Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.
struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}
Add this above the body in ContentView:
@Environment(ViewModel.self) private var viewModel
The ContentView body should be:
VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()

    Button("Load Models") {
        viewModel.loadModels()
    }

    Button("Unload Models") {
        viewModel.unloadModels()
    }
}
ImmersiveView:
struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}
ViewModel:
import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {
    }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*
        if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }
        */
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
I have been trying to replicate the entity transform functionality present in the magnificent app The Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794). It allows you to simultaneously rotate, magnify, and translate an entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like this:
Gestures:
var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}
RealityView modifiers:
.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)
RealityView update block:
entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
I started a project for iPad/iPhone, chose SwiftUI with RealityKit, and I can't get the build to compile. I do nothing but create the project and hit Run.
So I am wondering if it’s even possible to run RealityKit on just an iPad anymore.
I then tried to use Reality Composer to import a basic cylinder shape into my project, and that wouldn't run either.
So I am wondering how to get a 3D model into my iPad app so that the user can interact with it.
Thanks for any help
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView.
Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, leaving all options at their defaults. They appear to create appropriately sized bounding boxes etc. for these objects.
In my RealityView I programmatically add the HoverEffectComponent to those entities, as I don't see it in RCP – roughly as in the sketch below.
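(Illustrative sketch only; the entity name is hypothetical, and scene is the entity loaded from the RCP bundle.)

// Sketch of how I add the hover effect at runtime.
if let target = scene.findEntity(named: "Button") {
    target.components.set(HoverEffectComponent())
}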
On device this appears to "work", in the sense that when I gaze at the entity it lights up – but so does every other entity in the scene, even those without Input Target and Collision components attached.
Because the documentation on these components is sparse, I am unsure whether this is behavior as designed (e.g. all entities in that node are activated), a bug, or something in between.
Has anyone encountered this and is there an appropriate way of setting these relationships up?
Thanks
Context
https://developer.apple.com/forums/thread/751036
I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView from Reality Composer Pro, with the robot model baked into the file (a local asset in the project, not remote)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView
Method 1 above is fine, but Methods 2 + 3 load the model with a pure black texture for some reason.
Ideally, Methods 2 and 3 would look like the Method 1 result (see screenshot).
Am I doing something wrong? For example, should I not use multiple RealityViews at once?
Screenshot
Code
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit.
As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 – it throws up a compiler warning): https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content in the RealityView.
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. I'm not sure if file size has an effect.
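Roughly what I have in mind is the sketch below (illustrative only; the URL in the usage comment is one of the sample models from the page above, and error handling is minimal):

import Foundation
import RealityKit

// Sketch: download a USDZ to temporary storage, then load it as an Entity.
func loadRemoteUSDZ(from url: URL) async throws -> Entity {
    // 1. Download the file into temporary storage.
    let (tempURL, _) = try await URLSession.shared.download(from: url)

    // 2. Give the file a .usdz extension so RealityKit recognizes the format.
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempURL, to: destination)

    // 3. Load the entity from the temporary file (async initializer rather than the deprecated Entity.load).
    return try await Entity(contentsOf: destination)
}

// Usage inside a RealityView's make closure (sketch):
// let robotURL = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!
// if let entity = try? await loadRemoteUSDZ(from: robotURL) { content.add(entity) }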
Is there any official guidance or a code sample for this use case?
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
The problem is on this line:
case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m).
Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing.
However, as far as I can tell, I'm quite limited in how I can achieve this when using RealityKit on visionOS.
Some approaches I've seen for similar effects render the model geometry in multiple passes into a stencil buffer and use that to decide whether a cap should be drawn. However, as far as I can tell, once I've opted into RealityView and RealityKit I don't have that level of control over the render pipeline: I can't render my ModelEntities in multiple passes into a stencil buffer and then provide that buffer to a separate set of "capping planes" (which is how I currently imagine achieving this effect).
Alternatively (due to the nature of the models I'm using), I considered using a height map to construct an approximation of a surface cap, but building a height map of the rendered entities with a shader seems similarly difficult in the visionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I could then pass to other functions as an input; ShaderGraphMaterial seems to assume that all image inputs and outputs are either literal files or the actual rendered buffer.
Has anyone out there already created an effect like this and has some advice? Or could someone correct any misunderstandings I have regarding access to the Metal pipeline from RealityView, or using ShaderGraphMaterial to construct a height map?
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know whether it will ever be considered, or when it will appear?
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory.
I'm wondering if there are any known memory leaks in realitytool, because the tool is taking up 20-30GB (!) of memory during builds.
I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini – it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel.
But, I'm kinda stuck here.
I have a scene that builds fine, but any time I add a USD – in this case a tree asset – with lots of instances, or a lot of geometry, I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I fail to see how adding a 2MB asset could cause realitytool's memory to balloon so much during builds.
If someone from Apple is willing to look, I can provide the scene – but it's proprietary so I can't just post it publicly here.
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video.
I am able to use the same AVPlayer both in a window and projected onto a VideoMaterial, but I can't figure out how to present just the controls while displaying the video only on the 3D entity, without a 2D projection in any view.
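What I have today is roughly the sketch below (illustrative only, not my actual code): the same AVPlayer drives a VideoMaterial on an entity and is also handed to an AVPlayerViewController, which is what brings the unwanted 2D video view along with the controls. The videoURL is hypothetical.

import AVKit
import RealityKit

let player = AVPlayer(url: videoURL)   // videoURL is a placeholder

// 3D side: project the video onto an entity.
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9))
screen.model?.materials = [VideoMaterial(avPlayer: player)]

// 2D side: provides the playback controls, but it also shows the video itself.
let controller = AVPlayerViewController()
controller.player = player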
Is this even possible?
Transparency in RealityKit is not rendered properly when viewed from specific ordinal axes. It seems to be a depth-sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axes are the problem. This is true across particle systems and meshes, and it is very easy to reproduce the issue using multiple transparent meshes or particle systems.
In the above GIF you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of opaque materials such as the rocks and wood.
In the above GIF, there are two back-to-back grid meshes (since double-sided rendering is not supported) with a custom surface shader that animates the mesh in a wave and also applies transparency. In the distance the transparency is rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis) the transparent portion of the surface renders black, when the green of the mesh behind it should be rendered.
This is a blocking problem for the development of this demo.