RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation


RealityView clip volume
Is there any way to specify a clip volume or clipping planes on either a RealityView or the underlying RealityKit entity on visionOS? This was easy in SceneKit with shader modifiers, in OpenGL or WebGL, or with RealityKit on iOS or macOS via a CustomMaterial surface shader, but CustomMaterial is not supported on visionOS.
0 replies · 0 boosts · 561 views · Mar ’24
Object Capture API Limitation Concerns
Hello, I'm currently building an app that implements the on-device Object Capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet: 1) Can on-device object capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models; if there is an option to disable it, where would one specify that in code? 2) Can models be exported to .obj instead of .usdz? The WWDC21 session (around 3:00) mentions that this is possible with the Apple Silicon API, but what about with on-device scanning? I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
2 replies · 0 boosts · 694 views · Mar ’24
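On the export question above: for the macOS PhotogrammetrySession API, the output is driven by the URL and detail level passed to the request. A hedged sketch follows, assuming hypothetical Images/ and Output/ folders; whether a directory URL plus the .raw detail level yields non-USDZ geometry on a given OS release should be verified against the current documentation.

```swift
import Foundation
import RealityKit

// Hypothetical local folders; replace with real paths.
let inputFolder = URL(fileURLWithPath: "Images/", isDirectory: true)
let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: .init())

Task {
    // Drain the session's output stream so we see completion and errors.
    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("Finished \(request): \(result)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

// Passing a directory URL (rather than a .usdz file URL) requests a model
// package; per the WWDC21 session, the .raw detail level is the route to
// non-USDZ geometry for custom pipelines.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "Output/", isDirectory: true),
               detail: .raw)
])
```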
Loading entities from RealityKit content?
I'm trying to better understand how loading entities works. If I do this:

```swift
RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
```

It returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity, it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and have individual control?
0 replies · 0 boosts · 444 views · Mar ’24
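A minimal sketch of one approach to the question above: keep loading the whole scene, but let the gesture resolve to whichever descendant was actually hit via targetedToAnyEntity(), since each sphere already carries its own collision and input-target components. Alternatively, findEntity(named: "sphere_01") on the loaded root returns a specific child for per-entity setup.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct SpheresView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        // Resolve the gesture to the hit descendant, not the scene root.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // value.entity is the specific sphere that was hit
                    // (sphere_01 or sphere_02), because each has its own
                    // CollisionComponent and InputTargetComponent.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}
```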
How to construct luminous (emissive) material effects
Hi, I am investigating how to achieve emissive/glowing effects like the following in my visionOS app: https://www.hiroakit.com/archives/1432 https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/ Right now, I'm trying various things with Shader Graph in Reality Composer Pro, but I can't tell from the official documentation and WWDC session videos what the individual Shader Graph nodes do or how their effects combine. I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know. Thanks.
0 replies · 0 boosts · 414 views · Mar ’24
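On the emissive question above, plain RealityKit materials do expose emission on visionOS. A minimal sketch, assuming a code-built material rather than a Shader Graph (the rough Shader Graph equivalent is wiring a color into the PBR surface node's Emissive Color input):

```swift
import RealityKit
import UIKit

// PhysicallyBasedMaterial exposes emissive color and intensity directly.
func makeGlowingSphere() -> ModelEntity {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .black)
    material.emissiveColor = .init(color: .cyan)
    material.emissiveIntensity = 2.0   // values above 1 brighten the glow

    return ModelEntity(mesh: .generateSphere(radius: 0.1),
                       materials: [material])
}
```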
Rotate an entity on Vision Pro
Hi, I'm trying to rotate an entity on Vision Pro. Most of the code is the same as the Diorama code from WWDC23. The problem I'm having is that the rotation occurs, but the axis of rotation is not the center of my object; it seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to rotate around the entity rather than the space? Is it even possible? This is the code; the rotation is at the end.

```swift
var body: some View {
    @Bindable var viewModel = viewModel

    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()

            // Offset the scene so it doesn't appear underneath the user
            // or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)

            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: PointOfInterestComponent.self, { event in
                createLearnMoreView(for: event.entity)
            }))

            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
}
```
7 replies · 0 boosts · 1.2k views · Mar ’24
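A sketch of the usual fix for the pivot problem above: rotate the entity's own transform (which is relative to its parent) instead of applying a SwiftUI rotation3DEffect, and add an explicit pivot parent if the model's origin isn't at its visual center. The function names here are illustrative, not from the Diorama sample.

```swift
import RealityKit

// Rotate around the entity's own origin rather than the space's origin.
func rotate(_ entity: Entity, byRadians angle: Float) {
    entity.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
}

// If the model's origin is not at its visual center, re-parent it to a
// pivot entity placed at the desired center and rotate the pivot instead.
func makePivot(for entity: Entity, center: SIMD3<Float>) -> Entity {
    let pivot = Entity()
    pivot.position = center
    entity.position -= center      // keep the model where it was
    pivot.addChild(entity)
    return pivot
}
```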
RealityKit - Change Material Color or other properties in RealityView
In a RealityView, I have a scene loaded from Reality Composer Pro. The entity I'm interacting with has a PhysicallyBasedMaterial with a diffuse color. I want to change that color on a long press. I can get the entity and even get a reference to the material, but I can't seem to change anything about it. What is the best way to change the color of a material at runtime?

```swift
var longPress: some Gesture {
    LongPressGesture(minimumDuration: 0.5)
        .targetedToAnyEntity()
        .onEnded { value in
            value.entity.position.y = value.entity.position.y + 0.01

            if var shadow = value.entity.components[GroundingShadowComponent.self] {
                shadow.castsShadow = true
                value.entity.components.set(shadow)
            }

            if let model = value.entity.components[ModelComponent.self] {
                print("material", model)
                if let mat = model.materials.first {
                    print("material", mat)
                    // I have a material here but I can't set any properties?
                    // mat.diffuseColor does not exist
                }
            }
        }
}
```

Here is the full code:

```swift
struct Lab5026: View {
    var body: some View {
        RealityView { content in
            if let root = try? await Entity(named: "GestureLab", in: realityKitContentBundle) {
                root.position = [0, -0.45, 0]
                if let subject = root.findEntity(named: "Cube") {
                    subject.components.set(HoverEffectComponent())
                    subject.components.set(GroundingShadowComponent(castsShadow: false))
                }
                content.add(root)
            }
        }
        .gesture(longPress.sequenced(before: dragGesture))
    }

    var longPress: some Gesture {
        LongPressGesture(minimumDuration: 0.5)
            .targetedToAnyEntity()
            .onEnded { value in
                value.entity.position.y = value.entity.position.y + 0.01

                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = true
                    value.entity.components.set(shadow)
                }

                if let model = value.entity.components[ModelComponent.self] {
                    print("material", model)
                    if let mat = model.materials.first {
                        print("material", mat)
                        // I have a material here but I can't set any properties?
                        // mat.diffuseColor does not exist
                        // PhysicallyBasedMaterial
                    }
                }
            }
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let newPosition = value.convert(value.location3D, from: .global, to: value.entity.parent!)
                let limit: Float = 0.175
                value.entity.position.x = min(max(newPosition.x, -limit), limit)
                value.entity.position.z = min(max(newPosition.z, -limit), limit)
            }
            .onEnded { value in
                value.entity.position.y = value.entity.position.y - 0.01

                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = false
                    value.entity.components.set(shadow)
                }
            }
    }
}
```
2 replies · 0 boosts · 1.2k views · Feb ’24
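For the color question above, a minimal sketch of the usual pattern: components and materials are value types, so downcast to PhysicallyBasedMaterial, mutate a copy, and write everything back. Note that materials authored in Reality Composer Pro may come through as ShaderGraphMaterial instead, in which case setParameter is the route. Callable from the long-press handler as setBaseColor(of: value.entity, to: .systemRed).

```swift
import RealityKit
import UIKit

// Downcast, mutate a copy, and write both the materials array and the
// ModelComponent back, because both are value types.
func setBaseColor(of entity: Entity, to color: UIColor) {
    guard var model = entity.components[ModelComponent.self],
          var pbr = model.materials.first as? PhysicallyBasedMaterial else { return }
    pbr.baseColor = .init(tint: color)   // PBR equivalent of diffuseColor
    model.materials = [pbr]
    entity.components.set(model)         // write the component back
}
```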
Entity disappears when changing position
I have some strange behavior in my app. When I set the position to .zero, the sphere is visible as expected. But when I change it to any other value, no matter how small, the sphere is no longer visible or in the view.

The RealityView:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
```

SphereEntity:

```swift
import Foundation
import RealityKit
import RealityKitContent

class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] don't work ...
    }
}
```
1 reply · 1 boost · 779 views · Aug ’23
Disable Foveation for ImmersiveSpace?
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene. Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
0 replies · 0 boosts · 375 views · Mar ’24
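On the foveation question above: there is no publicly documented switch for RealityKit-rendered immersive scenes that I'm aware of, but for the CompositorServices path the layer configuration does expose one. A minimal sketch:

```swift
import CompositorServices
import SwiftUI

struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Opt this layer out of foveated rendering entirely.
        configuration.isFoveationEnabled = false
    }
}

// Usage inside an App's scene body:
// ImmersiveSpace(id: "immersive") {
//     CompositorLayer(configuration: NoFoveationConfiguration()) { renderer in
//         /* Metal render loop */
//     }
// }
```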
OpacityComponent does not work on device
I've been trying to animate the OpacityComponent to fade in/out entities in my scene. I've tried animating the component with an AnimationResource as well as tried animating with a custom System. Both worked fine in the simulator, but failed on device. AnimationResource: When I animated the opacity of an entity using an animation with an opacity bind target, the entity would not change opacity until I physically looked away from the object. It's almost as if the device keeps an entity visible for as long as you keep looking at it, but once you look away it plays the animation. System: I created a custom system that manually changes the opacity over time, however, on device the gradual fade of the entity doesn't work. Instead, the entity literally pops in/out of view instead of fading. Can someone explain exactly how this component is supposed to be used? The simulator plays the animations exactly the way I would expect, but on device it's completely different. Edit: I'm trying to change the opacity of entities with a VideoMaterial added to a ModelComponent. The fade animations are performed at certain points in the video that are triggered by an AVPlayer time boundary observer.
1 reply · 0 boosts · 620 views · Mar ’24
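A minimal sketch of the bind-target approach described above: assuming the entity has an OpacityComponent, a FromToByAnimation with bindTarget: .opacity drives the fade. Whether this resolves the device-only popping the post reports is unverified.

```swift
import Foundation
import RealityKit

// Fade an entity out by animating the OpacityComponent's opacity value.
func fadeOut(_ entity: Entity, duration: TimeInterval = 1.0) {
    entity.components.set(OpacityComponent(opacity: 1.0))
    let fade = FromToByAnimation<Float>(
        from: 1.0,
        to: 0.0,
        duration: duration,
        timing: .easeInOut,
        bindTarget: .opacity
    )
    if let resource = try? AnimationResource.generate(with: fade) {
        entity.playAnimation(resource)
    }
}
```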
Photogrammetry CLI Tool Error
Hi all, I took bunch of photos using Apple's 'Capture Sample' iOS app. Even though the all images in .HEIC/HEIF file format that CLI tool logs the bunch of the following errors and couldn't find any solution. 1-) HEIF file is expected. 2-) *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
1
1
612
Mar ’24
Adding ModelComponent to Reality Composer Pro's "Primitive Shape" entity
Is there a way to give a "Primitive Shape" entity created through Reality Composer Pro a ModelComponent? I have a custom ShaderGraphMaterial assigned to a primitive shape in my RC Pro scene hierarchy, and I'd like to tweak the inputs of this material programmatically. I found a great example of the behavior I'm looking for here: https://developer.apple.com/videos/play/wwdc2023/10273/?time=1862

```swift
@State private var sliderValue: Float = 0.0

Slider(value: $sliderValue, in: (0.0)...(1.0))
    .onChange(of: sliderValue) { _, _ in
        guard let terrain = rootEntity.findEntity(named: "DioramaTerrain"),
              var modelComponent = terrain.components[ModelComponent.self],
              var shaderGraphMaterial = modelComponent.materials.first as? ShaderGraphMaterial else { return }
        do {
            try shaderGraphMaterial.setParameter(name: "Progress", value: .float(sliderValue))
            modelComponent.materials = [shaderGraphMaterial]
            terrain.components.set(modelComponent)
        } catch { }
    }
```

However, when I try applying this example to my use case, my project's equivalent to this line fails to execute:

```swift
var modelComponent = terrain.components[ModelComponent.self]
```

The only difference I can see between my case and this example is that my entity is a primitive shape, whereas the example uses a model reference to a .usdz file. Is there some way to update a primitive shape entity to contain this ModelComponent in its set of components so I can reference and update its materials programmatically?
1 reply · 0 boosts · 628 views · Feb ’24
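One hedged possibility for the failing lookup above: in some Reality Composer Pro hierarchies the ModelComponent lives on a descendant of the entity found by name rather than on that entity itself, so searching the subtree may locate it. A sketch (the helper name is illustrative):

```swift
import RealityKit

// Recursively find the first entity in a subtree carrying a ModelComponent.
func firstModelEntity(under entity: Entity) -> Entity? {
    if entity.components[ModelComponent.self] != nil { return entity }
    for child in entity.children {
        if let found = firstModelEntity(under: child) { return found }
    }
    return nil
}

// Usage: run the WWDC23 material-update snippet against the entity this
// returns instead of the named primitive itself.
// if let modelEntity = firstModelEntity(under: primitiveShapeEntity) { ... }
```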
Non-convex (torus) collision shapes for visionOS/RealityKit
I have tried entity.generateCollisionShapes (which generates a simple box-shaped collision) and ShapeResource.generateConvex(from: entity) (which generates a convex-shaped collision, as the name suggests). Unfortunately, neither suits my case: I have a torus entity, and no collision should happen inside its hole; that is, smaller entities should be able to "fall through" the torus, thus the title of this post. Was wondering if there's any solution that I overlooked. Thanks 🙏
2 replies · 0 boosts · 520 views · Feb ’24
WindowGroup
Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get the model to render in my 2D windows, but when I try making a separate window group I get errors.
2 replies · 0 boosts · 792 views · Aug ’23
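A minimal sketch of the separate-window setup for the post above, assuming a bundled model named "Scene" (the names are illustrative): give the second WindowGroup an id and open it with the openWindow environment action.

```swift
import SwiftUI
import RealityKit

@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // A second, volumetric window dedicated to the 3D model.
        WindowGroup(id: "model") {
            Model3D(named: "Scene")
        }
        .windowStyle(.volumetric)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show model") {
            openWindow(id: "model")   // opens the "model" WindowGroup
        }
    }
}
```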
visionOS Torus Collision Shape
Hi, I have a .usdz asset of a torus / hoop shape that I would like to pass another RealityKit entity (a cube-like object) through without touching the torus on visionOS, similar to how a basketball goes through a hoop. Whenever I pass the cube through, I get a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding versus when the cube passes cleanly through the opening in the torus. I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is that the collision shape of the torus is a rectangular box rather than the actual shape of the torus; I can see this in the visionOS simulator by enabling "Collision Shapes". Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit directly, so I don't even have to use a .usdz asset?
3 replies · 1 boost · 1k views · Dec ’23
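For both torus posts above, a hedged sketch of a common workaround: RealityKit collision shapes are convex, so approximate the torus with a ring of small convex boxes, leaving the hole open so objects can pass through cleanly.

```swift
import Foundation
import RealityKit

// Build N box segments arranged around a circle in the XZ plane.
func torusCollisionShapes(majorRadius: Float,
                          minorRadius: Float,
                          segments: Int = 16) -> [ShapeResource] {
    (0..<segments).map { i -> ShapeResource in
        let angle = Float(i) / Float(segments) * 2 * Float.pi
        let segmentLength = 2 * Float.pi * majorRadius / Float(segments)
        let box = ShapeResource.generateBox(
            size: [segmentLength, minorRadius * 2, minorRadius * 2])
        // Align the box's long (x) axis with the ring's tangent at this angle.
        let rotation = simd_quatf(angle: -(angle + .pi / 2), axis: [0, 1, 0])
        let translation = SIMD3<Float>(cos(angle), 0, sin(angle)) * majorRadius
        return box.offsetBy(rotation: rotation, translation: translation)
    }
}

// Usage: replace the generated box collider with the ring of segments.
// torusEntity.components.set(
//     CollisionComponent(shapes: torusCollisionShapes(majorRadius: 0.3,
//                                                     minorRadius: 0.05)))
```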
Adding custom material to sceneReconstruction mesh
I wanted to add a custom material over the mesh detected by the sceneReconstruction provider, but I can't find a way to convert the MeshAnchor to a usable MeshResource.

```swift
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity(
                mesh: **somehow get the mesh from mesh anchor here**,
                materials: [material]
            )
            contentEntity.addChild(entity)
        case .updated:
            ...
        case .removed:
            ...
        @unknown default:
            fatalError("Unsupported anchor event")
        }
    }
}
```
3 replies · 0 boosts · 696 views · Feb ’24
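A hedged sketch of filling in the missing piece above: copy the MeshAnchor's vertex and face buffers into a MeshDescriptor and generate a MeshResource from it. The buffer-layout details (tuple-bound position reads, 32-bit triangle indices) are assumptions to verify against the GeometrySource/GeometryElement documentation.

```swift
import ARKit
import Metal
import RealityKit

func meshResource(from meshAnchor: MeshAnchor) throws -> MeshResource {
    let geometry = meshAnchor.geometry

    // Copy vertex positions out of the Metal buffer, honoring offset/stride.
    let vertices = geometry.vertices
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(vertices.count)
    let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset)
    for i in 0..<vertices.count {
        let p = vertexPointer.advanced(by: i * vertices.stride)
            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        positions.append([p.0, p.1, p.2])
    }

    // Copy triangle indices (assumes triangles with 32-bit indices).
    let faces = geometry.faces
    let indexPointer = faces.buffer.contents()
        .assumingMemoryBound(to: UInt32.self)
    let indices = (0..<faces.count * 3).map { indexPointer[$0] }

    var descriptor = MeshDescriptor(name: "reconstructedMesh")
    descriptor.positions = .init(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```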
How can I simultaneously apply the drag gesture to multiple entities?
I want to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but it only recognizes the latest drag gesture:

```swift
RealityView { content, attachments in
    // ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            // ...
        }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            // ...
        }
)
```

I also tried using simultaneously, but that didn't work either; maybe I'm missing something:

```swift
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            // ...
        }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in
                    // ...
                }
        )
)
```
0 replies · 1 boost · 446 views · Feb ’24
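A sketch of one workaround for the post above: use a single drag gesture targeted at any entity and branch on which entity was hit. This sidesteps the latest-gesture-wins problem, though whether two truly simultaneous hand drags are delivered is something to verify on device.

```swift
import SwiftUI
import RealityKit

struct TwoEntityDragView: View {
    var body: some View {
        RealityView { content in
            // EntityA / EntityB are assumed to be created elsewhere, each
            // with a CollisionComponent and InputTargetComponent.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // The hit entity is resolved per drag, so EntityA and
                    // EntityB each move with their own gesture.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}
```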
How to use drag gestures on objects with inverted normals?
I want to build a panorama sphere around the user. The idea is that users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map. So I set up a sphere that works like a skybox and inverted its normals, which makes the material face inward, using this code I found online:

```swift
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                /// flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
```

This works fine and looks awesome. However, I can't get a gesture to work on it. If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:

```swift
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
```

But if the user drags it from the inside (i.e. with the negative x scale in place), I get no drag events. Is there a way to achieve this?
1 reply · 0 boosts · 424 views · Feb ’24
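A hedged sketch of one workaround for the post above, assuming the negative scale is what breaks hit-testing: keep the inverted sphere purely visual, and host the collision and input-target components on a separate, positively scaled entity of the same size and placement.

```swift
import RealityKit

// The skybox entity keeps its negative x scale for the inside-out look;
// a sibling "hit area" entity with a normal positive scale carries the
// collision and input components so drags register from the inside.
func makePanoramaRig(skyboxEntity: Entity) -> Entity {
    let root = Entity()

    // Visual sphere: inverted via negative x scale (as in the post).
    root.addChild(skyboxEntity)

    // Hit-test sphere: same radius and offset, but positive scale.
    let hitArea = Entity()
    hitArea.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
    hitArea.components.set(InputTargetComponent())
    hitArea.transform.translation = SIMD3(0.0, 1.0, 0.0)
    root.addChild(hitArea)

    return root
}
```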