If you create a custom shader you get access to a collection of uniform values; one of them is the uniforms::time() parameter, which is defined as "the number of seconds that have elapsed since RealityKit began rendering the current scene" in this doc: https://developer.apple.com/metal/Metal-RealityKit-APIs.pdf
Is there some way to get this value from Swift code? I want to animate a value in my shader based on time, so I need the starting time value in order to interpolate the animation offset from that point. If I create a System, its update() function receives a SceneUpdateContext instance, which has a deltaTime property but no elapsedTime property that I would expect to map to the shader's time() value.
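The workaround I have been considering is a System that simply accumulates deltaTime; here is a minimal sketch. Note that this clock starts when the system first updates, which may not match the origin of the shader's time(), and the value would still need to be forwarded to the material (e.g. via a shader graph parameter), so treat it as an approximation:

import Foundation
import RealityKit

// Sketch: accumulate deltaTime on the CPU as an approximation of elapsed time.
// This clock starts when the system first updates, which may not match the
// origin of the shader's uniforms::time(), so it is only useful for relative offsets.
struct ElapsedTimeSystem: System {
    static var elapsed: TimeInterval = 0

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        Self.elapsed += context.deltaTime
        // Forward the value to a material parameter here if needed,
        // e.g. via ShaderGraphMaterial.setParameter(name:value:).
    }
}

// Register once, e.g. at app launch:
// ElapsedTimeSystem.registerSystem()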
Hi,
I am currently considering porting my AR game from SceneKit to RealityKit so it appears in 3D on visionOS, but one crucial question in deciding whether it can even be ported is:
I need a tap on a 3D model to be translated into a tap on the texture of that model.
On visionOS this would be the gaze of the person, so the question is whether I can get the point on the texture the user is looking at (optimally at all times, so I can have a hover effect, but if not, at least when the user taps their fingers together).
If that is not possible, is it possible to touch a RealityKit object and get the location of the touch on its texture?
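For reference, the closest I have come is converting a spatial tap into a point in the tapped entity's local space, as in the sketch below; getting from that point to an actual texture (UV) coordinate is the part I don't know how to do:

import SwiftUI
import RealityKit

struct TapToLocalPointView: View {
    var body: some View {
        RealityView { content in
            // Add a tappable model; collision + input target are required for hit testing.
            let model = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                    materials: [SimpleMaterial()])
            model.generateCollisionShapes(recursive: true)
            model.components.set(InputTargetComponent())
            content.add(model)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Position of the tap in the tapped entity's local space.
                    // Mapping this to a texture (UV) coordinate would still
                    // require access to the mesh's vertex/UV data.
                    let localPoint = value.convert(value.location3D,
                                                   from: .local,
                                                   to: value.entity)
                    print("Tap in entity space:", localPoint)
                }
        )
    }
}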
All the best
Christoph
I'm trying to render a large number of entities. It looks like each ModelEntity causes a draw call, even if you share the ModelComponent so that each entity shares the mesh and materials.
I tried to use the MeshInstanceCollection inside MeshResource to generate a large number of objects in the scene. The code works and draws many objects, but the draw count is still one call per instance. This seems strange; I would assume it should be only one draw call for the single entity, since I have specified instancing in the resource.
Has anybody else successfully used instancing in RealityKit to draw a large number of entities (maybe around 10,000), or drawn this many items at 60 fps any other way?
Here is some sample code that draws 100 cubes using instancing but still causes 100 draw calls.
func instanceTest(scene: RealityKit.Scene) {
    let resource = MeshResource.generateBox(size: 0.2)

    var contents = MeshResource.Contents()
    contents.models = resource.contents.models

    // Build 100 instances of the box model, offsetting each along X so the cubes don't overlap.
    var instances: [MeshResource.Instance] = []
    for i in 0..<100 {
        var matrix = matrix_identity_float4x4
        matrix.columns.3.x = Float(i) * 0.5
        instances.append(MeshResource.Instance(id: "\(i)", model: "MeshModel", at: matrix))
    }
    contents.instances = MeshInstanceCollection(instances)

    let updatedResource = try? MeshResource.generate(from: contents)
    let unlitMaterial = UnlitMaterial(color: .red)
    let modelEntity = ModelEntity(
        mesh: updatedResource!,
        materials: [unlitMaterial]
    )

    let anchor = AnchorEntity()
    anchor.addChild(modelEntity)
    scene.addAnchor(anchor)
}
I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI.
I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent here:
https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3
I think this would help, but I need AR world tracking as well as a regular perspective camera to work with the 3D elements.
Firstly, can I have a camera attached selectively to a few entities, just for those entities? This could be the orthographic camera.
Secondly, can I make it so those entities are always rendered in front, in screen space? (They'd need to follow the camera.)
If I can't have multiple cameras, what can be done in that case?
Is it actually better to use a completely different view/API for layering on top of RealityKit? I would much rather keep everything in RealityKit, however, for simplicity.
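For what it's worth, the only piece I understand so far is attaching the new component to an entity, roughly as sketched below; whether such a camera can be active alongside the AR world-tracking camera in the same view is exactly what I am unsure about, so please treat the sketch as an assumption:

import RealityKit

// Sketch only: attach an OrthographicCameraComponent to an entity.
// Whether this camera can coexist with an AR/perspective camera in the
// same view is the open question above.
func makeOrthographicCameraEntity() -> Entity {
    let cameraEntity = Entity()
    cameraEntity.components.set(OrthographicCameraComponent())
    return cameraEntity
}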
I am trying to establish a workflow for using Reality Composer Pro to make scenes; at the moment I am grey-boxing a scene using primitives.
I have set up a cube with a texture material and a simple spin animation.
I am confused as to what I should be loading. I have created what I think is a scene asset in the package for the Reality Composer Pro project.
Here is a code snippet:
struct ContentView: View {
    var body: some View {
        RealityView { content in
            do {
                let scene = try await ModelEntity(named: "HOF")
                content.add(scene)
            } catch {
                print("Error loading scene: \(error.localizedDescription)")
            }
        }
    }
}
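For comparison, I also tried loading through the package bundle, roughly as in the sketch below (assuming the generated module is RealityKitContent, exposing realityKitContentBundle as in the visionOS template, and that my scene asset is named "HOF"), but I am not sure this is the intended approach either:

import SwiftUI
import RealityKit
import RealityKitContent // assumption: the RCP package's generated module name

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Load the scene asset from the Reality Composer Pro package bundle.
            if let scene = try? await Entity(named: "HOF", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}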
Here is the project layout in Reality Composer Pro:
Hi everyone,
I'm choosing a framework for developing a game that doesn't involve augmented reality (AR), and I'm unsure whether to use SceneKit or RealityKit. I would like to hear from Apple engineers on this matter: which of these frameworks is better suited for creating non-AR games?
Additionally, I'd like to know if it's possible to disable AR in RealityKit using the updated RealityView? Thanks in advance for your insights and recommendations!
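For context, the closest thing I have found is ARView's non-AR camera mode on iOS with an explicit virtual camera (a sketch under that assumption is below); whether RealityView offers an equivalent switch is what I'd like to confirm:

import RealityKit
import UIKit

// Sketch: RealityKit rendering on iOS without an AR session,
// using a non-AR ARView and an explicit virtual camera.
func makeNonARView() -> ARView {
    let arView = ARView(frame: .zero,
                        cameraMode: .nonAR,
                        automaticallyConfigureSession: false)

    let camera = PerspectiveCamera()
    camera.position = [0, 0.5, 2]

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(camera)
    anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.3),
                                materials: [SimpleMaterial()]))
    arView.scene.addAnchor(anchor)
    return arView
}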
The WWDC24 video "Build a spatial drawing app with RealityKit" (https://developer.apple.com/wwdc24/10104), at 12:04, includes a slide showing a Reality Composer Pro shader graph that features wonderful inline documentation comment boxes:
Are shader graph inline comments a new feature that Reality Composer Pro supports? This would be extraordinarily useful, as complex shader graphs can be challenging to decipher.
If so, how are inline shader graph comments created in Reality Composer Pro?
Hello,
I'm trying to attach one entity to another via the new PhysicsFixedJoint. I have a USDZ that contains a skeletal pose and exposes its joints as pins, as desired. However, when I access a pin, it returns a GeometricPin instead of an EntityGeometricPin, and I can't use the returned GeometricPin to create the joint.
Am I missing something? Shouldn't accessing the entity's pins collection return EntityGeometricPins instead of GeometricPins?
Here is the code sample:
var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: untitledBundle) {
            content.add(scene)

            let attack = try! Entity.load(named: "Attack01_SingleSword")
            let anchor = scene.findEntity(named: "Root")
            anchor?.addChild(attack)

            let sword = try! Entity.load(named: "OHS08_Sword")
            anchor?.addChild(sword)

            if let swordEntity = findModelComponentEntity(entity: sword) {
                let swordPin = swordEntity.pins.set(
                    named: "test", position: SIMD3<Float>.zero
                )
                if let attackEntity = findModelComponentEntity(entity: attack) {
                    // This returns GeometricPin instead of the EntityGeometricPin that the "pins" collection contains.
                    let attackPin = attackEntity.pins["root/pelvis/spine_01/spine_02/spine_03/clavicle_r/upperarm_r/lowerarm_r/hand_r/weapon_r"]!
                    let joint = PhysicsFixedJoint(
                        pin0: swordPin,
                        pin1: attackPin // Compile error, since this is not an EntityGeometricPin.
                    )
                    try! joint.addToSimulation()
                }
            }
        }
    }
}
import SwiftUI
import RealityKit
import ARKit
import AVFoundation

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator

        let worldConfig = ARWorldTrackingConfiguration()
        worldConfig.planeDetection = .horizontal
        // worldConfig.providesAudioData = true // Uncommenting this line causes the error below.
        arView.session.run(worldConfig)

        addTestEntity(arView: arView)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator: NSObject, ARSessionDelegate, ARSessionObserver {
        func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        }
    }
}

func addTestEntity(arView: ARView) {
    let mesh = MeshResource.generatePlane(width: 0.5, depth: 0.35)
    guard let url = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }

    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)
    let model = ModelEntity(mesh: mesh, materials: [videoMaterial])
    model.transform.translation.y = 0.05

    let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
    anchor.children.append(model)
    player.play()
    arView.scene.anchors.append(anchor)
}
Error:
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
......
ARSession <0x125d88040>: did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedFailureReason=A sensor failed to deliver the required input., NSUnderlyingError=0x302922dc0 {Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}}, NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed.}
iOS 17.5.1
Xcode 15.4
What is the current recommendation for creating high-quality 3D content?
The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL).
Somewhere down the line there might be an AR view, but at the moment (certainly for macOS) it's purely generated 3D visualization, all rendered content.
So, starting with a rewrite of the 3D visualization in 2024 targeting macOS Sequoia/iPadOS 18, is RealityKit the suggested way forward?
Cheers,
Jay
I would think it would be common practice that when you add a new entity to your RealityView scene, it appears in front of the user, and then the user places the entity in the scene. Imagine a puzzle piece appearing in front of you that you then drag to your puzzle board.
If you move around your puzzle board, you'd expect that wherever you are, the new piece should appear in front of you.
That seems applicable to a lot of applications.
I can add a new entity using the head anchor, but as we all know, that transform is the identity, so reparenting the entity to something else (e.g. the puzzle board) won't work.
I've been trying to use world positioning and querying the device pose, which helps, but I'm stumped as to how to get the new entity to appear in front of me no matter which way I turn.
Looking for suggestions and guidance on this.
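For reference, this is the kind of sketch I have been experimenting with: query the device (head) pose from a WorldTrackingProvider whenever a piece is spawned and place the entity a fixed distance along the head's forward direction (assuming the ARKitSession is already running in the ImmersiveSpace):

import ARKit
import RealityKit
import QuartzCore

// Sketch: position a new entity ~0.75 m in front of the user's head,
// assuming `worldTracking` is a WorldTrackingProvider already running
// in an ARKitSession inside an ImmersiveSpace.
func placeInFrontOfUser(_ entity: Entity, worldTracking: WorldTrackingProvider) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return
    }
    let transform = device.originFromAnchorTransform
    let position = SIMD3<Float>(transform.columns.3.x,
                                transform.columns.3.y,
                                transform.columns.3.z)
    // -Z is forward in RealityKit/ARKit conventions.
    let forward = -SIMD3<Float>(transform.columns.2.x,
                                transform.columns.2.y,
                                transform.columns.2.z)
    entity.setPosition(position + forward * 0.75, relativeTo: nil)
}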
I am working on an app where we are attempting to place large entities quite far away from the user; when trying to recognise a tap gesture on them, though, the gesture isn't being picked up for part of the model.
It seems as though the larger and further away a model is placed, the more offset the collision shape seems to be. It responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device.
It's hard to explain in words; there's a video in the README for the repo here.
I've been able to replicate the issue in a simple sample app. Not sure if I might be using it wrong, or if it is expected behaviour for tap gestures to be a bit off when entities are placed a large distance from the user. Appreciate any help, thanks.
struct ImmersiveView: View {
    @State private var tapCount = 0

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 50), materials: [UnlitMaterial(color: .red)])
            sphere.setPosition([500, 0, 0], relativeTo: nil)
            sphere.components.set([
                InputTargetComponent(),
                CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
            ])
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    tapCount += 1
                    print(tapCount)
                }
        )
    }
}
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial.
I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float.
I don't believe it's displaying HDR properly or behaving like a raw AVPlayer.
Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough?
Is expecting the 64RGBAHalf capture to handle HDR properly a mistake? Should I capture YUV and do the conversion myself?
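For context, the capture side is configured roughly like this (a sketch, assuming the AVPlayerItem for the HLS stream already exists):

import AVFoundation
import CoreVideo

// Sketch: request 64RGBAHalf pixel buffers from the player item,
// assuming `playerItem` has already been created for the HDR HLS stream.
func makeHalfFloatOutput(for playerItem: AVPlayerItem) -> AVPlayerItemVideoOutput {
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_64RGBAHalf
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
    playerItem.add(output)
    return output
}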
Thank you
I am trying to create a demo of a spatial meeting using Personas. I have also referred to Apple's videos, but I'm not getting a clear idea of how it works.
Could anyone please guide me through the process step by step? Any code for learning would be appreciated.
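What I have pieced together so far is only the basic GroupActivities scaffolding below (a sketch; SpatialMeetingActivity is just a placeholder name), and I am unsure how the spatial Persona templates are configured on top of it:

import GroupActivities

// Sketch: the basic GroupActivities scaffolding a SharePlay session starts from.
// "SpatialMeetingActivity" is a hypothetical name.
struct SpatialMeetingActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Spatial Meeting"
        metadata.type = .generic
        return metadata
    }
}

// Start the activity (e.g. from a button), then join incoming sessions.
func startAndObserveSessions() async throws {
    _ = try await SpatialMeetingActivity().activate()

    for await session in SpatialMeetingActivity.sessions() {
        session.join()
        // Configure the spatial template / system coordinator from the session here.
    }
}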
I was reading through the cube-image node docs, and they talk about loading data from a cubemap file in KTX format. It wasn't clear whether that applies only to the original KTX format, or whether the node can also take advantage of the KTX2 format.
Is this shader node only relevant for files in the original (v1) KTX format?
Hello guys, I have a virtual environment in which I have a mesh. I want the mesh to be mirrored onto a glass surface that is very close by.
I can't just duplicate it, because what you see varies depending on the position you are looking at it from.
Is there a way to mirror a mesh via reflections? It shouldn't reflect real-world objects, just a virtual mesh.
Thank you guys
Does anyone have any guidance on or experience with using TriggerVolumes to detect collisions rather than the physics engine in RealityKit?
Aside from not participating in the physics engine, are there any other downsides or upsides to using them?
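For context, the pattern I mean looks roughly like this sketch: subscribe to collision events on a TriggerVolume instead of giving the entities physics bodies (as far as I understand, the other entity still needs a CollisionComponent for the event to fire):

import RealityKit
import Combine

// Sketch: detect overlaps with a TriggerVolume via collision events,
// without giving the participating entities physics bodies.
final class TriggerObserver {
    private var subscription: Cancellable?

    func addTrigger(to scene: RealityKit.Scene, parent: Entity) {
        let trigger = TriggerVolume(shape: .generateBox(size: [0.5, 0.5, 0.5]))
        parent.addChild(trigger)

        // Entities entering the volume need a CollisionComponent of their own.
        subscription = scene.subscribe(to: CollisionEvents.Began.self, on: trigger) { event in
            print("Entered trigger volume:", event.entityB.name)
        }
    }
}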
Hello,
In the documentation for ARView we see a diagram that shows that all entities are connected to the scene via AnchorEntities:
https://developer.apple.com/documentation/realitykit/arview
What happens when we are using a RealityView? Here the documentation suggests we add entities directly:
https://developer.apple.com/documentation/realitykit/realityview/
Three questions:
Do we need to add entities to an AnchorEntity first and then add that via content.add(...)?
Is an entity ignored by the physics engine if it is attached via an anchor?
If both the AnchorEntity and an attached entity are added via content.add(...), is the anchor's position ignored?
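For concreteness, the two patterns I am comparing look like the sketch below; which of them the physics engine treats differently is part of the question:

import SwiftUI
import RealityKit

// Sketch: the two ways of adding content that the questions above compare.
struct AnchoringComparisonView: View {
    var body: some View {
        RealityView { content in
            // 1. Entity added directly, positioned relative to the scene origin.
            let direct = ModelEntity(mesh: .generateBox(size: 0.1),
                                     materials: [SimpleMaterial()])
            direct.position = [0, 1, -1]
            content.add(direct)

            // 2. Entity parented to an AnchorEntity, which is what gets added.
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .table,
                                             minimumBounds: [0.2, 0.2]))
            let anchored = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                       materials: [SimpleMaterial()])
            anchor.addChild(anchored)
            content.add(anchor)
        }
    }
}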
I cannot figure out how SharePlay + Spatial Persona place the origin of RealityKit's coordinate system.
I have an app on visionOS with a mixed-immersion ImmersiveSpace scene. In the scene I am using ARKit to track my hand, creating a virtual object that follows the movement of my palm. Every frame I query positions from the HandAnchor to update the position of my object, using originFromAnchorTransform to correctly place the object in the scene.
However, when I try to adopt this in a SharePlay experience with Spatial Personas, the virtual object's position updates become a mess. With either template (.sideBySide or .conversational), the origin of my space appears with no discernible pattern. I can always see that the virtual object doesn't follow my hand but sits in a random place. It seems that there is a difference/transform between the HandAnchor's origin and the ImmersiveSpace origin under Spatial Persona + SharePlay mode. Isn't there?
Or is there something I can do with convert(displacement.inverse.rotation, from: .immersiveSpace, to: .scene), as mentioned here: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting and https://developer.apple.com/documentation/swiftui/environmentvalues/immersivespacedisplacement, to compute the translation and apply it to my virtual object? It's not working yet, though. Can someone tell me how to do this correctly?
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I have had no luck so far getting it to work and need some guidance to move on.
I have the image file stored in the assets as shown below:
And below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content as a child of the image anchor.
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}