RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Post · Replies · Boosts · Views · Activity

How to set a size for an entity that is composed of a 3D model?
Hello everyone, I'm facing a challenge related to resizing an entity built from a 3D model. Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable. Here's a snippet of my code:

let giftEntity = try await Entity(named: "gift")

I've come across a scale operation that allows for resizing the entity, but I'm uncertain about the appropriate value to use, especially since the RealityView is encapsulated within an HStack, which is further nested inside a ScrollView. Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable. Thank you in advance for your assistance!
Replies: 1 · Boosts: 0 · Views: 1.1k · Oct ’23
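For the resizing question above, a minimal sketch of one approach, assuming the goal is a fixed real-world size: read the model's visualBounds after loading and derive a uniform scale factor. The asset name "gift" comes from the question; the target size and function name are placeholders.

import RealityKit

// Sketch: scale a loaded entity so its largest dimension matches a target size (in metres).
func loadScaledGift(targetSize: Float = 0.3) async throws -> Entity {
    let giftEntity = try await Entity(named: "gift")

    // visualBounds(relativeTo: nil) returns the bounds of the entity and its
    // children in world space; use it to derive a uniform scale factor.
    let bounds = giftEntity.visualBounds(relativeTo: nil)
    let extents = bounds.extents
    let largestDimension = max(extents.x, max(extents.y, extents.z))
    if largestDimension > 0 {
        giftEntity.scale *= SIMD3<Float>(repeating: targetSize / largestDimension)
    }
    return giftEntity
}

Because the scale is expressed in RealityKit's own metre-based space, the surrounding SwiftUI layout (HStack inside a ScrollView) should not affect the result.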
Anchoring a view in visionOS
Hi community, I'm developing a visionOS app where I want to anchor a View so that it follows the user's movement. The View contains clickable Buttons. However, I've been looking through docs/wikis, and it seems that currently we can only anchor an Entity instance. Any ideas about how I can anchor a view?

References:
https://www.youtube.com/watch?v=NZ-TJ8Ln7NY
https://www.reddit.com/r/visionosdev/comments/152ycqr/using_anchors_in_the_vision_os_simulator/
https://developer.apple.com/documentation/realitykit/entity
Replies: 1 · Boosts: 0 · Views: 959 · Oct ’23
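One possible pattern for the question above, sketched under the assumption that a visionOS RealityView is acceptable: wrap the SwiftUI view in a RealityView attachment and parent the resulting entity to an AnchorEntity. The .head target, attachment id, and offset are assumptions, not a confirmed recipe.

import SwiftUI
import RealityKit

struct FollowingButtonView: View {
    var body: some View {
        RealityView { content, attachments in
            // Anchor that tracks the wearer's head, so the panel follows movement.
            let headAnchor = AnchorEntity(.head)
            content.add(headAnchor)

            if let panel = attachments.entity(for: "panel") {
                // Push the panel roughly 1 m in front of the head anchor.
                panel.position = [0, 0, -1]
                headAnchor.addChild(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Button("Tap me") { print("tapped") }
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}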
Check whether an Entity is currently undergoing any collisions
I'm working on a game where it would be helpful to know whether a given Entity is currently colliding with any other Entities. The collider shapes are not guaranteed to be simple; each is constructed from multiple ShapeResources for accuracy. The Entity in question does not have a physics body, can be dragged freely, and should be able to overlap with other Entities with or without physics bodies, but all with CollisionComponents.

The problem I'm running into is that using CollisionEvents.Began and CollisionEvents.Ended creates a situation where an Entity can be dragged over another, briefly switches to my "overlapping" state (the red semitransparent object), but then immediately switches back as soon as the object is dragged any further (the pink semitransparent object), indicating CollisionEvents.Ended is being called while the Entities are still colliding. Both should be in the "overlapping" state on the left.

tl;dr: Is there a simple way I'm unaware of to check whether there are any currently active collisions on an Entity? Or some other way of thinking about this that may be beneficial?
Replies: 1 · Boosts: 0 · Views: 331 · Oct ’23
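For the question above, one way to ask "is anything currently colliding?" is to keep a set of overlapping entities driven by the Began/Ended events and check whether it is empty, sketched below. The subscription point (RealityView content on visionOS; scene.subscribe elsewhere) is an assumption, and this pattern alone will not help if Ended genuinely fires while the shapes still overlap.

import RealityKit

// Sketch: track which entities are currently overlapping a watched entity.
// "Any active collision" is then just a non-empty set.
final class OverlapTracker {
    private(set) var overlapping = Set<Entity.ID>()
    private var subscriptions: [EventSubscription] = []

    var isColliding: Bool { !overlapping.isEmpty }

    func watch(_ entity: Entity, in content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: CollisionEvents.Began.self, on: entity) { [weak self] event in
            // The event involves the watched entity; record the *other* participant.
            let other = event.entityA === entity ? event.entityB : event.entityA
            self?.overlapping.insert(other.id)
        })
        subscriptions.append(content.subscribe(to: CollisionEvents.Ended.self, on: entity) { [weak self] event in
            let other = event.entityA === entity ? event.entityB : event.entityA
            self?.overlapping.remove(other.id)
        })
    }
}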
How to implement a drag/grab effect for an entity in a RealityView?
I want to implement a simple object-grabbing effect for an entity defined inside a RealityView. I'm not sure what the recommended way is. The following can move the object, but the coordinates seem to be messed up:

RealityView { content in
    self.myEntity = ModelEntity(...)
}
.gesture(
    DragGesture(minimumDistance: 0)
        .onChanged { value in
            let trans = value.translation3D
            self.myEntity.move(
                to: Transform(
                    scale: SIMD3(repeating: 1.0),
                    rotation: simd_quaternion(0, 0, 0, 1),
                    translation: SIMD3<Float>(Float(trans.x), Float(trans.y), -Float(trans.z))),
                relativeTo: cards[item])
        }
)

My wild guess is that value.translation3D is defined in view space and move(to:) should use 3D space? I saw RealityCoordinateSpaceConverting but have no idea how to use it. Any suggestions/links/examples are appreciated :)
Replies: 0 · Boosts: 0 · Views: 378 · Nov ’23
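A minimal sketch of one common pattern for the drag question above, assuming the entity has an InputTargetComponent and collision shapes so gestures can target it: use targetedToAnyEntity() and convert the gesture location into the entity's parent space rather than applying the raw view-space translation.

import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            let model = ModelEntity(mesh: .generateBox(size: 0.1),
                                    materials: [SimpleMaterial()])
            // Input targeting and collision shapes are required for gestures to hit the entity.
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)
            content.add(model)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the gesture location from SwiftUI view space into the
                    // entity's parent space, then position the entity there.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
        )
    }
}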
How to access custom Mesh buffers inside a custom ShaderGraphMaterial?
My app is built using SceneKit/Metal on iOS, iPadOS, and tvOS, and users have generated tons of content. To bring that content to visionOS with fidelity, I have to port a particle emitter system. I have been able to successfully re-use my Metal code via CompositorServices, but I'd like to get it working in RealityKit for all the UX/UI affordances it can provide. To that end, I have also been successful in getting the particle geometry data rendering via a Component/System that replaces mesh geometry content in real time.

The last major step is to find a way to color and texture each particle via a ShaderGraphMaterial. Like any good particle emitter system, particle colors can change and vary over time. In Metal, the shader looks like this:

fragment half4 CocosFragmentFunctionDefaultTextureColor(const CocosFragData in [[stage_in]],
                                                        texture2d<half> cc_MainTexture [[texture(0)]],
                                                        sampler cc_MainTextureSampler [[sampler(0)]]) {
    return in.color * cc_MainTexture.sample(cc_MainTextureSampler, in.texCoord);
}

Basically, I multiply a texture sample by a vertex color. Fairly simple stuff in GL shader-speak. So, how do I achieve this via ShaderGraphMaterial? In another post, I saw that I can pass in vertex colors via a custom mesh buffer like so:

let vertexColor: MeshBuffers.Semantic = MeshBuffers.custom("vertexColor", type: SIMD4<Float>.self)
let meshResource = MeshDescriptor()
meshResource[vertexColor] = ...

Unfortunately, that doesn't appear to work for me. I'm sure I missed a step, but what I really want/need is a way to access this custom buffer from inside a ShaderGraphMaterial and multiply it against a sample of the texture. How? Any pointers, sample code, or a sample Reality Composer Pro project would be most appreciated!
Replies: 1 · Boosts: 3 · Views: 653 · Nov ’23
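Per-vertex custom buffers aside, a coarser fallback (sketched below, not a confirmed answer to the vertex-buffer question) is to promote a color input on the material in Reality Composer Pro and set it per emitter from code. The material path, input name, and realityKitContentBundle are assumptions about the project layout.

import CoreGraphics
import RealityKit
import RealityKitContent  // assumption: the Reality Composer Pro package bundle

// Fallback sketch: drive particle tint via a material input exposed in the
// Reality Composer Pro shader graph, rather than a custom vertex buffer.
func makeTintedParticleMaterial() async throws -> ShaderGraphMaterial {
    // "ParticleMaterial" and "particleTint" are assumed names of a material and
    // a promoted input in the .rkassets scene.
    var material = try await ShaderGraphMaterial(named: "/Root/ParticleMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    try material.setParameter(name: "particleTint",
                              value: .color(CGColor(red: 1, green: 0.5, blue: 0, alpha: 1)))
    return material
}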
RealityKit: ECS System.update() method not being called every frame on hardware
Hi, I'm trying to use the ECS System class, and I'm noticing on hardware that update() is not being called every frame, as it is in the Simulator and as described in the documentation. To reproduce, simply create a System like so:

class MySystem: System {
    var count: Int = 0

    required init(scene: Scene) { }

    func update(context: SceneUpdateContext) {
        count = count + 1
        print("Update \(count)")
    }
}

Then, inside the RealityView, register the System with:

MySystem.registerSystem()

Notice that while it'll reliably be called every frame in the Simulator, on hardware it runs for a few seconds, freezes, and is then only called when indirectly doing something like moving a window or performing other visionOS actions analogous to those that call "invalidate" to refresh a window in other operating systems.

Thanks in advance,
-Rob.
Replies: 1 · Boosts: 2 · Views: 539 · Nov ’23
Raycasting to Surface Returns Inconsistent Results?
On Xcode 15.1.0b2, when raycasting to a collision surface, the collisions appear to be inconsistent. Here are my results: green cylinders are hits, and red cylinders are raycasts that returned no collision results. NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube... so it's weird to me that the tap would work, but the raycast not collide with anything. Is this something that just performs poorly in the simulator?

My raycasting command is:

guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
    print("FAILED TO GET POSITION")
    return
}
let transform = Transform(matrix: pose.originFromAnchorTransform)
let locationOfDevice = transform.translation
let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)

where destination is retrieved in a tap gesture handler via:

let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

Any findings would be appreciated.
Replies: 2 · Boosts: 0 · Views: 584 · Nov ’23
How to attach a point cloud (or depth data) to HEIC?
I'm developing a 3D scanner that works on iPad. I'm using AVCapturePhoto and PhotogrammetrySession. My photo capture delegate looks like this:

extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
            .auxiliaryDepth: true,
            .properties: photo.metadata
        ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
            .avDepthData: depthData
        ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}

But the PhotogrammetrySession spits out warning messages:

Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!

The session creates a USDZ 3D model, but the scale is not correct. I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach the point cloud.
Replies: 1 · Boosts: 2 · Views: 654 · Nov ’23
Object Capture: Exporting model as OBJ.
https://developer.apple.com/documentation/realitykit/photogrammetrysession/request/modelfile(url:detail:geometry:)

According to the documentation, if the given path is a file path with a .usdz extension, the model will be saved as .usdz; otherwise, if we provide a folder, it will be saved as OBJ. I tried it, but to no avail: right before saving, it shows the folder that will be written to, but after I click Done and check the folder, it's always empty.
Replies: 2 · Boosts: 2 · Views: 832 · Nov ’23
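A minimal sketch of the documented flow for the export question above, assuming the images live in a folder and the output URL is a directory with no .usdz extension (which the documentation says should produce an OBJ); all paths are placeholders.

import RealityKit

// Sketch: request an OBJ-style export by pointing modelFile at a directory URL.
func exportModel(from imagesFolder: URL, to outputFolder: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [
        .modelFile(url: outputFolder, detail: .medium)
    ])
    // Drain the output stream so the request actually runs to completion.
    for try await output in session.outputs {
        if case .requestComplete(let request, let result) = output {
            print("Finished \(request): \(result)")
        }
    }
}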
visionOS PortalComponent issue
Hi! I'm having an issue creating a PortalComponent on visionOS. I'm trying to anchor a portal to a wall or floor anchor, and the portal always appears perpendicular to the anchor: if I use a vertical anchor (wall), the portal appears horizontal in the scene; if I use a horizontal anchor (floor), the portal appears vertical in the scene. I tested on Xcode 15.1.0 beta 3, 15.1.0 beta 2, and 15.0 beta 8. Any ideas? Thank you so much!
Replies: 0 · Boosts: 0 · Views: 421 · Nov ’23
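A sketch of one workaround to try for the orientation issue above, assuming the mismatch comes from the portal plane's default orientation (generatePlane(width:height:) faces +Z while a plane anchor's normal is +Y): rotate the portal explicitly after parenting it to the anchor. The axis and angle here are assumptions to adjust, not a confirmed fix.

import RealityKit

// Sketch: build a portal plane and orient it against a wall anchor.
func makeWallPortal() -> Entity {
    let world = Entity()
    world.components.set(WorldComponent())
    // ...add the portal's interior content as children of `world` here...

    let portal = Entity()
    portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                         materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: world))

    let anchor = AnchorEntity(.plane(.vertical, classification: .wall,
                                     minimumBounds: [1, 1]))
    // Assumption: the anchor's +Y axis is the surface normal, so the portal plane
    // needs an extra rotation to lie flat against the wall.
    portal.orientation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
    anchor.addChild(portal)

    let root = Entity()
    root.addChild(world)
    root.addChild(anchor)
    return root
}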
How Can I Create an Emissive Video Material in RealityKit?
From this WWDC session video, Apple suggests that any objects that appear to emit light should shine color onto nearby objects. But when I was trying to construct a cinema immersive space with a VideoMaterial and some other entities, I found that VideoMaterial is not emissive, and nearby entities don't reflect any light from the screen entity my VideoMaterial is attached to. What is the right way to achieve an effect similar to the TV app shown in the video?
Replies: 0 · Boosts: 0 · Views: 360 · Nov ’23
Experience.rcproject
Hello, I am very new here in the forum (and in iOS dev as well). I am trying to build an app that uses 3D face filters, and I want to use Reality Composer. I knew Xcode 15 did not have it, so I downloaded the beta 8 version (as suggested in another post). This one actually has Reality Composer Pro (Xcode -> Developer Tools -> Reality Composer Pro), but the Experience.rcproject still does not appear. Is there a way to create one? When I use Reality Composer Pro it seems only able to create standalone projects, and it does not seem to be bundled into Xcode in any way. Thanks for your time, people!
Replies: 2 · Boosts: 0 · Views: 713 · Dec ’23
RealityView fit in volumetric window
Hey guys, how can I fit RealityView content inside a volumetric window? I have the simple example below:

WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)

I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model loads. Can we achieve the same result using a RealityView? Cheers
Replies: 1 · Boosts: 1 · Views: 810 · Dec ’23
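A sketch of one approach to the fitting question above, assuming visionOS: wrap the RealityView in GeometryReader3D, convert the window's frame into RealityKit space, and scale the loaded entity to fit. The asset name comes from the question; the rest is a sketch rather than a drop-in equivalent of .resizable()/.scaledToFit().

import SwiftUI
import RealityKit

struct FittedModelView: View {
    var body: some View {
        GeometryReader3D { geometry in
            RealityView { content in
                guard let entity = try? await Entity(named: "name") else { return }

                // Bounds of the window volume, expressed in RealityKit metres.
                let viewBounds = content.convert(geometry.frame(in: .local),
                                                 from: .local, to: content)
                let modelBounds = entity.visualBounds(relativeTo: nil)

                if modelBounds.extents.x > 0, modelBounds.extents.y > 0, modelBounds.extents.z > 0 {
                    // Uniform scale so the model fits the smallest of the three ratios.
                    let scale = min(viewBounds.extents.x / modelBounds.extents.x,
                                    viewBounds.extents.y / modelBounds.extents.y,
                                    viewBounds.extents.z / modelBounds.extents.z)
                    entity.scale *= SIMD3<Float>(repeating: scale)
                }
                content.add(entity)
            }
        }
    }
}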
visionOS Torus Collision Shape
Hi, I have a USDZ asset of a torus/hoop shape that I would like to pass another RealityKit entity (a cube-like object) through (without touching the torus) in visionOS, similar to how a basketball goes through a hoop. Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, versus when the cube passes cleanly through the opening in the torus.

I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is that the collision shape of the torus is a rectangular box, and not the actual shape of the torus. I know that the collision shape is a rectangular box because I can see it in the visionOS simulator by enabling "Collision Shapes".

Does anyone know how to programmatically create a torus collision shape in SwiftUI/RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit, so I don't even have to use a .usdz asset?
Replies: 3 · Boosts: 1 · Views: 965 · Dec ’23
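A sketch of one workaround for the hoop question above: approximate the torus with several small box shapes arranged around a ring and build a CollisionComponent from them, instead of relying on generateCollisionShapes(recursive:), which produces a single convex box. The radii and segment count are placeholders to match the usdz asset.

import Foundation
import RealityKit

// Sketch: approximate a hoop/torus collider with boxes placed around a ring,
// leaving the hole open so objects can pass through without colliding.
func makeHoopCollision(ringRadius: Float = 0.3,
                       tubeRadius: Float = 0.05,
                       segments: Int = 16) -> CollisionComponent {
    var shapes: [ShapeResource] = []
    let segmentLength = 2 * Float.pi * ringRadius / Float(segments)

    for i in 0..<segments {
        let angle = Float(i) / Float(segments) * 2 * .pi
        // Box with its long side along the local X axis.
        let box = ShapeResource.generateBox(size: [segmentLength, tubeRadius * 2, tubeRadius * 2])
        // Rotate the long side onto the ring's tangent, then push it out to the ring radius.
        let rotation = simd_quatf(angle: angle + .pi / 2, axis: [0, 0, 1])
        let translation = SIMD3<Float>(cos(angle), sin(angle), 0) * ringRadius
        shapes.append(box.offsetBy(rotation: rotation, translation: translation))
    }
    return CollisionComponent(shapes: shapes)
}

// Usage: hoopEntity.components.set(makeHoopCollision())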
Retrieve AnchorEntity Location relative to Scene?
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.) I've placed an AnchorEntity in my scene like so:

AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)

In my RealityView update closure, I print out this entity's position relative to nil like so:

wallAnchor.position(relativeTo: nil)

Unfortunately, this position doesn't make sense: it's very close to zero, even though the anchor appears several meters away. I believe this is because AnchorEntities have their own self-contained coordinate spaces that are independent from the scene's coordinate space, and it is reporting its position relative to its own coordinate space. How can I bridge the gap between these two? WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity. Thank you
Replies: 0 · Boosts: 0 · Views: 545 · Dec ’23
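A sketch of one workaround for the question above, assuming the app can request world-sensing authorization on visionOS: read wall planes from ARKit's PlaneDetectionProvider, whose anchors expose originFromAnchorTransform in the same world space the scene uses, and place a free-standing ModelEntity from that transform. Names and sizes are placeholders.

import ARKit
import RealityKit

// Sketch: obtain wall positions in world/scene space via ARKit plane detection,
// then place an ordinary ModelEntity there (raycasts and collisions work,
// since it is not parented to an AnchorEntity). The loop runs for as long as
// plane updates keep arriving.
@MainActor
func placeMarkersOnWalls(in root: Entity) async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    try await session.run([planes])

    for await update in planes.anchorUpdates where update.event == .added {
        guard update.anchor.classification == .wall else { continue }
        let marker = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                 materials: [SimpleMaterial()])
        marker.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        root.addChild(marker)
    }
}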