I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player with different options such as hand gestures, a game controller, and teleporting.
I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how to achieve this in visionOS.
Has anyone done something similar who could give me some advice or sample code?
Any help appreciated.
Guillermo
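For what it's worth, RealityKit on visionOS doesn't expose a movable camera in a fully immersive space (the headset pose is the camera), so one common workaround is to keep all environment content under a single root entity and translate that root opposite to the desired player motion. A minimal sketch of the game-controller and teleport paths, using the GameController framework; worldRoot, moveSpeed, and the per-frame update hook are illustrative names, not part of any Apple sample:

import GameController
import RealityKit

/// Sketch only: translate a world-root entity opposite to the stick input so
/// the player appears to walk through the environment.
final class Locomotion {
    let worldRoot: Entity                 // assumed parent of all environment content
    var moveSpeed: Float = 1.5            // meters per second
    private var stick = SIMD2<Float>.zero // latest left-thumbstick value

    init(worldRoot: Entity) {
        self.worldRoot = worldRoot

        NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                               object: nil, queue: .main) { [weak self] note in
            guard let pad = (note.object as? GCController)?.extendedGamepad else { return }
            pad.leftThumbstick.valueChangedHandler = { _, x, y in
                self?.stick = SIMD2(x, y)
            }
        }
    }

    /// Call once per frame (e.g. from a SceneEvents.Update subscription).
    func update(deltaTime: Float) {
        // Moving the world by -delta makes the player appear to move by +delta.
        let delta = SIMD3<Float>(stick.x, 0, -stick.y) * moveSpeed * deltaTime
        worldRoot.position -= delta
    }

    /// Teleport: shift the world so `destination` lands at the space origin.
    func teleport(to destination: SIMD3<Float>) {
        worldRoot.position -= destination
    }
}

A hand-gesture path would feed the same update(deltaTime:) from gesture deltas instead of the thumbstick.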
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you then start walking while dragging, the amplified gesture sends the entity flying off into the distance.
It seems they quickly transition from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed. When you stop moving, you can push and pull the windows around again like a superhero.
Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
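For reference, the baseline pattern the Hello World style drag uses is a targeted DragGesture converted into the entity's parent space; the walking behavior described above is exactly the part this does not handle. A minimal sketch, assuming worldEntity already carries InputTargetComponent and CollisionComponent (the name is illustrative):

RealityView { content in
    content.add(worldEntity)   // assumed to have InputTargetComponent + CollisionComponent
}
.gesture(
    DragGesture()
        .targetedToEntity(worldEntity)
        .onChanged { value in
            // Convert the gesture location from SwiftUI space into the entity's
            // parent space. Any amplification or head-offset compensation would
            // have to be layered on top of this.
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)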
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes, and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.
I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online), the memory leak becomes much more obvious.
When the immersive space is initially opened, I'm seeing roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads will continue to increase the used memory (the amount of increase will depend on the models that you add to the scene).
Also note that I've seen similar memory increases when dynamically creating the entities. Inside ViewModel.loadModels I've included some commented out code that dynamically creates entities instead of loading a Reality Composer scene.
Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.
struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}
Add this above the body in ContentView:
@Environment(ViewModel.self) private var viewModel
The ContentView body should be:
VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()

    Button("Load Models") {
        viewModel.loadModels()
    }

    Button("Unload Models") {
        viewModel.unloadModels()
    }
}
ImmersiveView:
struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}
ViewModel:
import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {
    }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }*/
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
I started a project for iPad/iPhone, chose SwiftUI - RealityKit, and I can't get the build to compile. I do nothing but create the project and hit Run.
So I am wondering if it's even possible to run RealityKit on just an iPad anymore.
I then tried to use Reality Composer to import a basic cylinder shape into my project, and that wouldn't run either.
So I am wondering how to get a 3D model into my iPad app so that the user can interact with it.
Thanks for any help
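As a point of reference for that last question, a minimal sketch of showing an interactive USDZ on iPad with RealityKit's ARView, wrapped for SwiftUI; "toy.usdz" is an assumed bundled asset and the view name is illustrative:

import SwiftUI
import RealityKit

// Sketch: load a bundled USDZ and let the user drag/rotate/scale it on a detected surface.
struct ModelARView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        if let model = try? Entity.loadModel(named: "toy") {
            model.generateCollisionShapes(recursive: true)                 // required for gestures
            arView.installGestures([.translation, .rotation, .scale], for: model)

            let anchor = AnchorEntity(plane: .horizontal)                  // place on a horizontal plane
            anchor.addChild(model)
            arView.scene.addAnchor(anchor)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

Note that plane anchoring needs a real device with camera access; the simulator won't detect surfaces.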
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView.
Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, and left the options at their defaults. They appear to create appropriately sized bounding boxes etc. for these objects.
In my RealityView I programmatically add the HoverEffectComponents to the entities, since I don't see them in RCP.
On device, this appears to "work" in the sense that when I gaze at the entity, it lights up - but so does every other entity in the scene - even those without Input Target and Collision components attached.
Because the documentation on the components is sparse I am unsure if this is behavior as designed (e.g. all entities in that node are activated) or a bug or something in between.
Has anyone encountered this and is there an appropriate way of setting these relationships up?
Thanks
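One way to keep the effect scoped, offered as a sketch: only attach HoverEffectComponent to entities that already carry both of the components set up in Reality Composer Pro, rather than to every descendant of the node.

// Sketch: walk the loaded scene and add HoverEffectComponent only where both
// InputTargetComponent and CollisionComponent are present.
func addHoverEffects(to root: Entity) {
    var stack: [Entity] = [root]
    while let entity = stack.popLast() {
        if entity.components.has(InputTargetComponent.self),
           entity.components.has(CollisionComponent.self) {
            entity.components.set(HoverEffectComponent())
        }
        stack.append(contentsOf: entity.children)
    }
}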
Context
https://developer.apple.com/forums/thread/751036
I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView, from Reality Composer Pro, with the robot model baked into the file (not remote - an asset in the project)
2. A Model3D view that pulls the robot model in from a web URL
3. A RemoteObjectView (RealityView) which downloads the model to a temp file, creates a ModelEntity, and adds it to the content of the RealityView
Method 1 above is fine, but Methods 2 and 3 load the model with a pure black texture for some reason.
The ideal state is that Methods 2 and 3 look like the Method 1 result (see screenshot).
Am I doing something wrong? e.g. should I not use multiple RealityViews at once?
Screenshot
Code
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
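Worth noting, purely as a hedge: in the code above, the ImageBasedLight components are attached only to the Reality Composer Pro scene entity, so an entity loaded in a separate RealityView or Model3D gets no image-based lighting of its own. A sketch of reusing the same "ImageBasedLight" EnvironmentResource for a runtime-loaded entity:

// Sketch: give a runtime-loaded entity the same image-based lighting the
// Reality Composer Pro scene receives above.
func applyImageBasedLight(to entity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
    entity.components.set(ImageBasedLightComponent(source: .single(resource),
                                                   intensityExponent: 0.25))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}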
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know if it will ever be considered, or when it will appear?
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            })
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
Problem: the failure branch fires:
case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
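For what it's worth, a sketch of the same panorama setup using the synchronous loader inside do/catch, mainly so the underlying loader error is printed rather than hit as an assertion; it assumes the image is bundled in a format MetalKit can decode (PNG/JPEG/HEIC):

// Sketch: load the panorama synchronously and surface any loader error.
func addPanorama(to entity: Entity) {
    do {
        let texture = try TextureResource.load(named: "image_20240425_201630")
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture))
        entity.components.set(ModelComponent(mesh: .generateSphere(radius: 1E3),
                                             materials: [material]))
        entity.scale *= .init(x: -1, y: 1, z: 1)   // flip so the texture faces inward
    } catch {
        print("Texture load failed: \(error)")
    }
}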
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit.
As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 - it throws a compiler warning) - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content in the RealityView.
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. I'm not sure if file size has an effect.
Is there any official guidance or a code sample for this use case?
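A minimal sketch of the three steps described above: URLSession download to a temporary file, giving that file a .usdz extension (an assumption on my part, since the loader appears to key off the extension and the downloaded temp file has none), then loading with Entity(contentsOf:).

// Sketch: download a USDZ at runtime and add it to a RealityView's content.
func loadRemoteModel(from remoteURL: URL, into content: RealityViewContent) async {
    do {
        // 1. Download to a temporary location.
        let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

        // 2. Give the file a .usdz extension so the loader recognizes it.
        let usdzURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("usdz")
        try FileManager.default.moveItem(at: tempURL, to: usdzURL)

        // 3. Load the entity from the file URL and add it to the scene.
        let entity = try await Entity(contentsOf: usdzURL)
        content.add(entity)
    } catch {
        print("Remote model load failed: \(error)")
    }
}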
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m).
Because of this, as models move through the clipping boundaries, the interiors of the models become visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing.
However, as far as I can tell, I'm quite limited in how I might achieve this when using RealityKit on visionOS.
Some approaches I've seen to accomplish similar effects use multiple passes over the model geometry, rendering into stencil buffers and using those to decide whether a cap should be drawn. However, as far as I can tell, if I have opted into using a RealityView and RealityKit, I don't have the level of control over my render pipeline needed to render ModelEntities in multiple passes into a stencil buffer that I could then provide to a separate set of "capping planes" (which is how I currently imagine I might accomplish this effect).
Alternatively (due to the nature of the models I'm using), I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to construct a height map of rendered entities seems similarly difficult with the visionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I might then pass to other functions as an input; ShaderGraphMaterial seems built around the assumption that all image inputs and outputs are either literal files or the actual rendered buffer.
Has anyone out there already created an effect like this who might have some advice? Or could someone correct any misunderstandings I have with regard to accessing the Metal pipeline for RealityView, or to using ShaderGraphMaterial to construct a height map?
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory.
I'm wondering if there are any known memory leaks in realitytool; basically, the tool is taking up 20-30 GB (!) of memory during builds.
I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini – it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel.
But I'm kind of stuck here.
I have a scene that builds fine, but any time I add a USD – in this case a tree asset – with lots of instances, or a lot of geometry, I run into the memory issue. I've tried greatly simplifying the model, but even a 2 MB USD results in the crash. I'm failing to see how adding a 2 MB asset would cause the memory use of realitytool to balloon so much during builds.
If someone from Apple is willing to look, I can provide the scene – but it's proprietary so I can't just post it publicly here.
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video.
I am able to use the same AVPlayer in a window and projected onto a VideoMaterial, but I can't figure out how to present just the controls while displaying the video only on the 3D entity, without having a 2D projection in any view.
Is this even possible?
In my project, I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a node called Camera Index Switch that can do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to a Float value, it changes back again. I don't know if this is a bug.
2. When I test this node with an If node, the output is weird.
Below, zero should be output, and it is black.
But when I change to the If node it is grey, which is neither 0 nor 1 (my If node is set up so that TRUE results in 1 and FALSE results in 0).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Hey, I'm wondering what would be the proper way to add RealityView content asynchronously, while doing the heavy lifting on a background thread. My use case is that I am generating procedural geometry, which takes a few seconds to complete. Meanwhile, I would like the UI to show other geometry / UI elements and the main thread to stay responsive.
Basically what I would like to do, in pseudocode, is:
runInBackgroundThread {
    let geometry = generateGeometry() // CPU intensive, takes 1-2 s
    let entity = createEntity(geometry) // CPU intensive, takes ~1 s
    let material = try! await ShaderGraphMaterial(..)
    entity.model!.materials = [material]
    runInMainThread {
        addToRealityViewContent(entity)
    }
}
With this I am running into many issues, especially with the material, which apparently cannot be constructed on a non-main thread and cannot be passed across thread boundaries.
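One pattern that fits those constraints, as a sketch: do the CPU-heavy work in a detached task but only produce plain (Sendable) arrays there, then build the MeshResource, ShaderGraphMaterial, and entity back on the main actor. generateGeometryBuffers() and the material path are placeholders, and realityKitContentBundle assumes a Reality Composer Pro package is imported:

// Sketch: heavy math off the main actor, RealityKit object creation on it.
func addProceduralEntity(to content: RealityViewContent) {
    Task { @MainActor in
        // Off-main: produce plain arrays, not RealityKit objects.
        let (positions, normals, indices) = await Task.detached(priority: .userInitiated) {
            generateGeometryBuffers()   // hypothetical: returns ([SIMD3<Float>], [SIMD3<Float>], [UInt32])
        }.value

        // On the main actor: descriptor, mesh, material, entity.
        var descriptor = MeshDescriptor(name: "procedural")
        descriptor.positions = MeshBuffers.Positions(positions)
        descriptor.normals = MeshBuffers.Normals(normals)
        descriptor.primitives = .triangles(indices)

        do {
            let mesh = try MeshResource.generate(from: [descriptor])
            let material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",   // placeholder path
                                                         from: "Materials.usda",      // placeholder file
                                                         in: realityKitContentBundle)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            content.add(entity)
        } catch {
            print("Procedural entity setup failed: \(error)")
        }
    }
}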
I am trying to make a shader for a disco ball lighting effect for my app. I want the light to reflect on the scene mesh.
I was curious whether anyone has pointers on how to do this in Shader Graph in Reality Composer Pro, or by writing a surface shader.
The effect rotates the dots as the ball spins. This is the effect in Apple Clips that applies the dots to the scene mesh.
Adding an AVPlayer as an attachment on the side using RealityKit. The video in it, though, is not aligned. Any thoughts on what could be going wrong?
RealityView { content, attachments in
    let url = self.video.resolvedURL
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)

    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    // entity.components[VideoPlayerComponent.self] = videoPlayerComponent
    entity.position = [0, 0, 0]
    entity.scale *= 0.50

    player.replaceCurrentItem(with: playerItem)
    player.play()

    content.add(entity)
} update: { content, attachments in
    // if content.entities.count < 2 {
    if showAnotherPlayer {
        if let attachment = attachments.entity(for: "Attachment") {
            playerModel.loadVideo(library.selectedVideo!, presentation: .fullWindow)

            // 4. Position the attachment and add it to the RealityViewContent
            attachment.position = [1.0, 0, 0]
            attachment.scale *= 1.0
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0, 1, 0))

            let entity = content.entities.first
            attachment.setParent(entity)
            content.add(attachment)
        }
    }
    if showLibrary {
        if let attachment = attachments.entity(for: "Featured") {
            // 4. Position the attachment and add it to the RealityViewContent
            attachment.position = [0.0, -0.3, 0]
            attachment.scale *= 0.7
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0, 1, 0))

            let entity = content.entities.first
            attachment.setParent(entity)
            viewModel.attachment = attachment
            content.add(attachment)
        }
    } else {
        if let scene = content.entities.first?.scene {
            let _ = print("found scene")
        }
        if let featuredEntity = content.entities.first?.scene?.findEntity(named: "Featured") {
            let _ = print("featured entity found")
        }
        if let attachment = viewModel.attachment {
            let _ = print("-- removing attachment")
            if let anchor = attachment.anchor {
                let _ = print("-- removing anchor")
                anchor.removeFromParent()
            }
            attachment.removeFromParent()
            content.remove(attachment)
        } else {
            let _ = print("the attachment is missing")
        }
    }
    // }
} attachments: {
    Attachment(id: "Attachment") {
        PlayerView()
            .frame(width: 2048, height: 1024)
            .environment(library)
            .environment(playerModel)
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    playerModel.play()
                }
            }
            .onDisappear {
            }
    }
    if showLibrary {
        Attachment(id: "Featured") {
            VideoListView(title: "Featured",
                          videos: library.videos,
                          cardStyle: .full,
                          cardSpacing: 20) { video in
                library.selectedVideo = video
                showAnotherPlayer = true
            }
            .frame(width: 2048, height: 1024)
        }
    }
}
PlayerView
Hi,
I am implementing a player using RealityKit's VideoPlayerComponent and AVPlayer. When the app enters the immersive space, playback begins, but it is audio-only; I can't see the video. Do I need to specify the entity's position and size?
struct MyApp: App {
    @State private var playerImmersionStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .defaultSize(width: 800, height: 200)

        ImmersiveSpace(id: "playerImmersionStyle") {
            ImmersiveSpaceView()
        }
        .immersionStyle(selection: $playerImmersionStyle, in: playerImmersionStyle)
    }

    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        return UISceneConfiguration(name: "My Scene Configuration", sessionRole: connectingSceneSession.role)
    }
}

struct PlayerViewEx: View {
    let entity = Entity()

    var body: some View {
        RealityView() { content in
            let entity = makeVideoEntity()
            content.add(entity)
        }
    }

    public func makeVideoEntity() -> Entity {
        let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()

        var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
        videoPlayerComponent.isPassthroughTintingEnabled = true
        entity.components[VideoPlayerComponent.self] = videoPlayerComponent
        entity.scale *= 0.4

        player.replaceCurrentItem(with: playerItem)
        player.play()

        return entity
    }
}

#Preview {
    PlayerViewEx()
}
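One thing worth double-checking in PlayerViewEx above, offered only as a guess: the video entity is left at the immersive space origin (roughly on the floor beneath the viewer), so placing it in front of the viewer at eye height may be all that's missing. A tiny variation inside makeVideoEntity():

// Sketch: position the video entity ~2 m in front of the viewer at eye height.
entity.components[VideoPlayerComponent.self] = videoPlayerComponent
entity.position = SIMD3<Float>(0, 1.5, -2)
entity.scale *= 0.4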
I'm developing an app for Apple Vision Pro and have a question about RealityKit. Recently, I attempted to use drag gestures to manipulate two entities, A and B, with my left and right hands respectively. The two entities belong to the same RealityView.
I anticipated that I could move Entity A with my left hand and Entity B with my right hand independently. However, I noticed that the movement of one hand affects both entities simultaneously.
Presumably, DragGesture().onChanged is triggered twice, once for each entity. In an attempt to properly pair each hand with its corresponding entity, I investigated platform.manipulatorGroup in the debugger. However, I encountered a compile error when trying to access the platform variable.
Is it feasible to pair each hand with a specific entity and move both objects separately?
Thank you in advance.
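For reference, the targeted-gesture pattern ties each drag to the entity it started on via value.entity, which is the usual way to keep A and B independent (assuming both entities have InputTargetComponent and CollisionComponent). Whether two truly simultaneous drags are delivered independently is exactly what would need verifying on device; this is only a sketch of the baseline:

RealityView { content in
    content.add(entityA)   // both assumed to have InputTargetComponent + CollisionComponent
    content.add(entityB)
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity is the entity this particular drag began on,
            // so each hand only moves the object it is pinching.
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)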
We are developing an AR app which uses spatial audio. If we want to use RealityKit to create the app, will we need a MacBook Pro running Apple Silicon?
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating that instancing is working:
To test, I made a base visionOS app with an immersive space and replaced the entity with my test usdz file. I've been using the RealityKit Trace profiling template in Xcode Instruments, in the immersive space with the volume closed. This gives consistent draw call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried:
Create a test scene in Blender and export it with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime (see the sketch after this list)
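A rough sketch of that last approach, reusing a single sphere model across many MeshResource.Instance entries. The collection initializers and the generate(from:) overload for Contents are stated from memory, so treat the exact calls as assumptions and check them against the MeshResource docs linked under References below:

// Sketch (APIs from memory): one model, many instances, each with its own transform.
let sphere = MeshResource.generateSphere(radius: 0.1)
var contents = MeshResource.Contents()
contents.models = sphere.contents.models
let modelID = sphere.contents.models.map { $0.id }.first ?? "MeshModel"   // reuse the sphere's model id

var instances: [MeshResource.Instance] = []
for i in 0..<100 {
    let transform = Transform(translation: [Float(i % 10) * 0.3,
                                            1.0 + Float(i / 10) * 0.3,
                                            -2.0]).matrix
    instances.append(MeshResource.Instance(id: "instance-\(i)", model: modelID, at: transform))
}
contents.instances = MeshInstanceCollection(instances)

let mesh = try MeshResource.generate(from: contents)
let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])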
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you