Hey, I'm wondering what the proper way is to add RealityView content asynchronously while doing the heavy lifting on a background thread. My use case is that I am generating procedural geometry, which takes a few seconds to complete. Meanwhile I would like the UI to show other geometry and UI elements, and the main thread to stay responsive.
Basically what I would like to do, in pseudocode, is:
runInBackgroundThread {
    let geometry = generateGeometry()    // CPU intensive, takes 1-2 s
    let entity = createEntity(geometry)  // CPU intensive, takes ~1 s
    let material = try! await ShaderGraphMaterial(..)
    entity.model!.materials = [material]
    runInMainThread {
        addToRealityViewContent(entity)
    }
}
With this I am running into many issues, especially with the material, which apparently cannot be constructed on a non-main thread and cannot be passed across thread boundaries.
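For concreteness, here is a Swift sketch of the shape I think I'm after: only the mesh data (a plain MeshDescriptor value) is built on a detached task, while the entity and the ShaderGraphMaterial are created back on the main actor before being added. generateMeshDescriptor(), the material path, and the bundle/file names are placeholders for my own code, so treat this as a sketch rather than a working solution.

import SwiftUI
import RealityKit
import RealityKitContent  // assuming a Reality Composer Pro package for the material

struct ProceduralView: View {
    var body: some View {
        RealityView { content in
            // Add a root right away so other geometry / UI can show immediately.
            let root = Entity()
            content.add(root)

            Task { @MainActor in
                // The CPU-intensive geometry generation runs off the main actor.
                let descriptor = await Task.detached(priority: .userInitiated) {
                    generateMeshDescriptor()  // placeholder for my procedural geometry code
                }.value

                // Mesh resource, material, and entity creation stay on the main actor.
                guard let mesh = try? MeshResource.generate(from: [descriptor]),
                      let material = try? await ShaderGraphMaterial(
                          named: "/Root/MyMaterial",   // placeholder material path
                          from: "Materials.usda",      // placeholder file
                          in: realityKitContentBundle)
                else { return }

                root.addChild(ModelEntity(mesh: mesh, materials: [material]))
            }
        }
    }
}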
I am trying to make a shader for a disco ball lighting effect for my app. I want the light to reflect on the scene mesh.
I was curious if anyone has pointers on how to do this in Shader Graph in Reality Composer Pro, or by writing a surface shader.
The effect rotates the dots as the ball spins.
This is the effect in Apple Clips that applies it to the scene mesh.
I am adding an AVPlayer as an attachment on the side using RealityKit. The video in it, though, is not aligned. Any thoughts on what could be going wrong?
RealityView { content, attachments in
    let url = self.video.resolvedURL
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)
    // `player` and `entity` are properties of the enclosing view (not shown here).
    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    // entity.components[VideoPlayerComponent.self] = videoPlayerComponent
    entity.position = [0, 0, 0]
    entity.scale *= 0.50
    player.replaceCurrentItem(with: playerItem)
    player.play()
    content.add(entity)
} update: { content, attachments in
    // if content.entities.count < 2 {
    if showAnotherPlayer {
        if let attachment = attachments.entity(for: "Attachment") {
            playerModel.loadVideo(library.selectedVideo!, presentation: .fullWindow)
            // 4. Position the attachment and add it to the RealityViewContent.
            attachment.position = [1.0, 0, 0]
            attachment.scale *= 1.0
            // let radians = -45.0 * Float.pi / 180.0
            // attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0, 1, 0))
            let entity = content.entities.first
            attachment.setParent(entity)
            content.add(attachment)
        }
    }
    if showLibrary {
        if let attachment = attachments.entity(for: "Featured") {
            // 4. Position the attachment and add it to the RealityViewContent.
            attachment.position = [0.0, -0.3, 0]
            attachment.scale *= 0.7
            // let radians = -45.0 * Float.pi / 180.0
            // attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0, 1, 0))
            let entity = content.entities.first
            attachment.setParent(entity)
            viewModel.attachment = attachment
            content.add(attachment)
        }
    } else {
        if let scene = content.entities.first?.scene {
            let _ = print("found scene")
        }
        if let featuredEntity = content.entities.first?.scene?.findEntity(named: "Featured") {
            let _ = print("featured entity found")
        }
        if let attachment = viewModel.attachment {
            let _ = print("-- removing attachment")
            if let anchor = attachment.anchor {
                let _ = print("-- removing anchor")
                anchor.removeFromParent()
            }
            attachment.removeFromParent()
            content.remove(attachment)
        } else {
            let _ = print("the attachment is missing")
        }
    }
    // }
} attachments: {
    Attachment(id: "Attachment") {
        PlayerView()
            .frame(width: 2048, height: 1024)
            .environment(library)
            .environment(playerModel)
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    playerModel.play()
                }
            }
            .onDisappear {
            }
    }
    if showLibrary {
        Attachment(id: "Featured") {
            VideoListView(title: "Featured",
                          videos: library.videos,
                          cardStyle: .full,
                          cardSpacing: 20) { video in
                library.selectedVideo = video
                showAnotherPlayer = true
            }
            .frame(width: 2048, height: 1024)
        }
    }
}
PlayerView
Hi,
I am implementing a player using RealityKit's VideoPlayerComponent and AVPlayer. When the app enters the immersive space, playback begins, but I only get audio; I can't see the video. Do I need to specify the entity's position and size?
struct MyApp: App {
    @State private var playerImmersionStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .defaultSize(width: 800, height: 200)

        ImmersiveSpace(id: "playerImmersionStyle") {
            ImmersiveSpaceView()
        }
        .immersionStyle(selection: $playerImmersionStyle, in: playerImmersionStyle)
    }

    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        return UISceneConfiguration(name: "My Scene Configuration", sessionRole: connectingSceneSession.role)
    }
}

struct PlayerViewEx: View {
    let entity = Entity()

    var body: some View {
        RealityView() { content in
            let entity = makeVideoEntity()
            content.add(entity)
        }
    }

    public func makeVideoEntity() -> Entity {
        let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()
        var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
        videoPlayerComponent.isPassthroughTintingEnabled = true
        entity.components[VideoPlayerComponent.self] = videoPlayerComponent
        entity.scale *= 0.4
        player.replaceCurrentItem(with: playerItem)
        player.play()
        return entity
    }
}

#Preview {
    PlayerViewEx()
}
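One thing I suspect (but haven't confirmed): since the ImmersiveSpace origin is roughly at the viewer's feet and the entity above is never repositioned, the video plane might simply be out of view. An adjustment like this inside makeVideoEntity() is what I'd try first (the values are rough guesses):

// Move the video in front of the viewer instead of leaving it at the scene origin.
entity.position = SIMD3<Float>(0, 1.5, -2)   // roughly eye height, 2 m ahead
entity.scale = SIMD3<Float>(repeating: 0.4)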
I'm developing an app for Apple Vision Pro and have a question about RealityKit. Recently, I attempted to use drag gestures to manipulate two entities, A and B, with my left and right hands respectively. The two entities belong to the same RealityView.
I anticipated that I could move Entity A with my left hand and Entity B with my right hand independently. However, I noticed that the movement of one hand affects both entities simultaneously.
Presumably, DragGesture().onChanged is triggered twice for each entity. In an attempt to properly pair each hand with its corresponding entity, I investigated the platform.manipulatorGroup in the debugger. However, I encountered a compile error when trying to access the platform variable.
Is it feasible to pair each hand with a specific entity and move both objects separately?
Thank you in advance.
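For reference, the gesture I'm experimenting with looks roughly like this; the idea is that value.entity should resolve to whichever entity a given drag started on, so each hand would only move its own entity (assuming both entities have their own CollisionComponent and InputTargetComponent). I'm not sure this is the intended pattern:

RealityView { content in
    // entityA and entityB are added here, each with collision + input target components.
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity is the entity this particular drag began on.
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)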
We are developing an AR app which uses spatial audio. If we want to use RealityKit to create the app, will we need to use a MacBook Pro with Apple silicon?
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating instancing is working:
To test, I made a basic visionOS app with an immersive space and replaced the entity with my test usdz file. I've been profiling with the RealityKit Trace template in Xcode Instruments while in the immersive space with the volume closed. This gives consistent draw call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in Blender and export with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
Is there any way to specify a clip volume or clipping planes on either a RealityView or the underlying RealityKit entity on visionOS? This was easy on SceneKit with shader modifiers, or in OpenGL, or WebGL, or with RealityKit on iOS or macOS with CustomMaterial surface shader, but CustomMaterial is not supported on visionOS.
I'm trying to better understand how loading entities works. If I do this:
RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
It returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity, it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and still have individual control?
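For illustration, this is what I mean by individual control (a sketch; the sphere names come from my scene). Is looking the children up by name like this the intended approach, or should the gesture itself resolve to the child entity that was hit?

RealityView { content in
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)

        // Look the children up by name so each sphere can be moved on its own.
        if let sphere1 = scene.findEntity(named: "sphere_01"),
           let sphere2 = scene.findEntity(named: "sphere_02") {
            sphere1.position.x -= 0.2
            sphere2.position.x += 0.2
        }
    }
}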
Hi,
I am investigating how to achieve emissive (glowing) materials like the following in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now I'm trying various things with Shader Graph in Reality Composer Pro, but from the official documentation and WWDC session videos I can't tell what the individual Shader Graph nodes do or what effects their combinations produce, so I'm having a hard time getting anywhere.
I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know.
Thanks.
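For what it's worth, the closest thing I can find in plain RealityKit code is the emissive slots on PhysicallyBasedMaterial; a minimal sketch is below, though whether this produces an actual glow on visionOS is exactly what I'm unsure about.

// Experiment with PhysicallyBasedMaterial's emissive properties as a fallback.
var material = PhysicallyBasedMaterial()
material.emissiveColor = .init(color: .cyan)
material.emissiveIntensity = 2.0
let glowSphere = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [material])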
Hello, I'm currently building an app that implements the on-device object capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet:
Can on-device object capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models - if there is an option to disable it, where would one specify that in code?
Can models be exported to .obj instead of .usdz? From WWDC2021 at 3:00 it is mentioned that it is possible with the Apple Silicon API but what about with on-device scanning?
I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
Hi,
I'm trying to rotate an entity on Vision Pro.
Most of the code is the same as the Diorama code from WWDC23.
The problem I'm having is that the rotation occurs, but the axis of rotation is not the center of my object.
It seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to tell it to rotate around the entity, not the space?
Is it even possible?
This is the code, the rotation is at the end.
var body: some View {
    @Bindable var viewModel = viewModel

    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()

            // Offset the scene so it doesn't appear underneath the user or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)

            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: PointOfInterestComponent.self, { event in
                createLearnMoreView(for: event.entity)
            }))

            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
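One alternative that occurs to me is to drop rotation3DEffect and rotate the entity's own transform instead, roughly like the sketch below (reusing the same state value); but I'd still like to know whether rotation3DEffect itself can be anchored to the entity.

// Rotate the loaded entity around its own Y axis rather than rotating the whole view.
if let root = viewModel.rootEntity {
    root.transform.rotation = simd_quatf(angle: Float(currentrotateByX), axis: [0, 1, 0])
}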
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene.
Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
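For context, the CompositorServices route I mentioned looks roughly like this (a sketch based on the WWDC Metal immersive app material); what I'm hoping for is an equivalent switch for a RealityKit-rendered ImmersiveSpace.

import CompositorServices

// Sketch: disabling foveation when driving the layer yourself with CompositorServices.
struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.isFoveationEnabled = false
    }
}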
I have a volumetric window that I am using to display 3D content.
The issue I have is that the 3D models rotate when the user moves the window. I want the rotation around the Y-axis to remain fixed when the user repositions the window. Is that possible?
Also, is there a way to visually debug the walls of the 3D volume window?
We have been using attachment.bounds.extents to determine the size of a RealityView attachment at run time. It had been working fine until the visionOS 1.1 update. I wonder if we are doing something wrong, as the release notes suggest a visual bounds calculation issue was fixed in the latest release. The funny thing is we did not have an issue before.
Below is how we access to height value:
let height = attachmentEntity.attachment.bounds.extents.y
Previously it returned the correct value. Now it returns 0.
I wonder if anyone else is having the same issue.
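In case it helps with diagnosis, a possible workaround I'm considering is reading the visual bounds of the attachment entity instead (assuming those are still computed correctly):

// Workaround to try: compute the hierarchy's visual bounds instead of attachment.bounds.
let height = attachmentEntity.visualBounds(relativeTo: nil).extents.y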
I've been trying to animate the OpacityComponent to fade in/out entities in my scene. I've tried animating the component with an AnimationResource as well as tried animating with a custom System. Both worked fine in the simulator, but failed on device.
AnimationResource: When I animated the opacity of an entity using an animation with an opacity bind target, the entity would not change opacity until I physically looked away from the object. It's almost as if the device keeps an entity visible for as long as you keep looking at it, but once you look away it plays the animation.
System: I created a custom system that manually changes the opacity over time, however, on device the gradual fade of the entity doesn't work. Instead, the entity literally pops in/out of view instead of fading.
Can someone explain exactly how this component is supposed to be used? The simulator plays the animations exactly the way I would expect, but on device it's completely different.
Edit:
I'm trying to change the opacity of entities with a VideoMaterial added to a ModelComponent. The fade animations are performed at certain points in the video that are triggered by an AVPlayer time boundary observer.
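For reference, the AnimationResource version of the fade looks roughly like this (a 1-second fade-out; entity is the entity with the VideoMaterial mentioned above):

// Fade the entity's opacity from 1 to 0 over one second via an opacity bind target.
let fadeOut = FromToByAnimation<Float>(from: 1.0,
                                       to: 0.0,
                                       duration: 1.0,
                                       bindTarget: .opacity)
if let resource = try? AnimationResource.generate(with: fadeOut) {
    entity.playAnimation(resource)
}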
I'm using RealityKit for a scene with many static and dynamic ModelEntitys simulating physics. When all the entities have simple collision generated from .generateCollisionShapes I don't see any issues, but for some entities I need much more complex and accurate collision. For this I've been using ShapeResource.generateStaticMesh with the mesh's data (2769 positions, 16272 face indices in this case), which works exactly as desired with a low entity count. However once there are 600+ dynamic entities introducing even one static entity with complex collision will reliably trigger a crash when colliding with one of the dynamic entities (not necessarily on first contact, but inevitably after multiple collisions).
If I arbitrarily limit the number of entities to a max of around 500 it seems to prevent the issue from happening, though the likelihood seems to increase with the number of entities so there may be a low probability of it triggering even at 500 entities that I haven't hit while testing.
If physx imposes some kind of entity or collision face/shape limit or something like that I'd at least like to know exactly what it is, but ideally there's a way to work around this. Right now my "fix" is just arbitrarily restricting the entity count in a way that limits what my app can do.
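For reference, the static-collision setup in question looks roughly like this (positions and faceIndices come from my mesh data):

// Static entity with accurate mesh collision; the dynamic entities just use
// generateCollisionShapes(recursive:) instead.
let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                       faceIndices: faceIndices)
staticEntity.components.set(CollisionComponent(shapes: [shape]))
staticEntity.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))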
The crash triggers inside
0x00000001a6790dfc in physx::PxcDiscreteNarrowPhasePCM(physx::PxcNpThreadContext&, physx::PxcNpWorkUnit const&, physx::Gu::Cache&, physx::PxsContactManagerOutput&) ()
which looks like this (crash line has an -> arrow at the bottom)
CoreRE`physx::PxcDiscreteNarrowPhasePCM:
...
0x1a6790df0 <+668>: mov x1, x24
0x1a6790df4 <+672>: bl 0x1a67913d8 ; physx::PxcNpCacheStreamPair::reserve(unsigned int)
0x1a6790df8 <+676>: ldrb w8, [x23]
-> 0x1a6790dfc <+680>: str w8, [x0, #0x20]
Hi all,
I took a bunch of photos using Apple's 'Capture Sample' iOS app. Even though all the images are in the .HEIC/HEIF file format, the CLI tool logs the following errors, and I couldn't find any solution:
1-) HEIF file is expected.
2-) *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
I'm really new to visionOS coding. We are trying to create a customized experience just like what Apple did with the system Environments, or what Disney did in its app (it seems they put the user inside a carefully made 3D model). Do we need to make a 3D model for the environment, and how do we put it around the user, like a skybox sphere?
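The only approach I can think of is a big inverted sphere textured with an equirectangular image, roughly like the sketch below (the texture name is a placeholder). Is that how it's meant to be done, or is there a more official mechanism?

// Sketch: surround the user with a large sphere showing an equirectangular texture.
func makeSkyboxEntity() async throws -> Entity {
    let texture = try await TextureResource(named: "EnvironmentTexture")  // placeholder asset
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    let skybox = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
    skybox.scale = SIMD3<Float>(x: -1, y: 1, z: 1)   // flip the sphere so the texture faces inward
    return skybox
}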
Is there a way to give a "Primitive Shape" entity created through Reality Composer Pro a ModelComponent?
I have a custom ShaderGraphMaterial assigned to a primitive shape in my RC Pro scene hierarchy, and I'd like to tweak the inputs of this material programmatically. I found a great example of the behavior I'm looking for here: https://developer.apple.com/videos/play/wwdc2023/10273/?time=1862
@State private var sliderValue: Float = 0.0

Slider(value: $sliderValue, in: (0.0)...(1.0))
    .onChange(of: sliderValue) { _, _ in
        guard let terrain = rootEntity.findEntity(named: "DioramaTerrain"),
              var modelComponent = terrain.components[ModelComponent.self],
              var shaderGraphMaterial = modelComponent.materials.first as? ShaderGraphMaterial
        else { return }
        do {
            try shaderGraphMaterial.setParameter(name: "Progress", value: .float(sliderValue))
            modelComponent.materials = [shaderGraphMaterial]
            terrain.components.set(modelComponent)
        } catch { }
    }
}
However, when I try applying this example to my use-case, my project's equivalent to this line fails to execute:
var modelComponent = terrain.components[ModelComponent.self]
The only difference I can see between my case and this example is my entity is a primitive shape, whereas the example uses a model reference to a .usdz file. Is there some way to update a primitive shape entity to contain this ModelComponent in its set of components so I can reference + update its materials programmatically?
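For debugging, a small check like the one below (the name "Cube" is a placeholder for my primitive) might at least tell me whether a ModelComponent exists on a child of the primitive rather than on the entity I'm finding:

// Check whether the primitive itself, or one of its children, carries the ModelComponent.
if let shape = rootEntity.findEntity(named: "Cube") {
    print("shape has ModelComponent:", shape.components.has(ModelComponent.self))
    for child in shape.children {
        print(child.name, child.components.has(ModelComponent.self))
    }
}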