In SceneKit, using SCNShape we can create SCN geometry from SwiftUI 2D shapes/beziers: https://developer.apple.com/documentation/scenekit/scnshape
Is there an equivalent in RealityKit?
Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
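As far as I know there is no direct SCNShape equivalent, but here is a hedged sketch of what generate(from:) could look like for this: it builds a MeshDescriptor from an already-flattened 2D outline and triangulates it with a naive fan. The outline parameter and the fan triangulation are simplifying assumptions on my part; nothing in RealityKit tessellates a bezier path for you.

import RealityKit

// A hedged sketch: build a MeshResource from an already-flattened 2D outline.
// `outline` is a hypothetical polygon (points of the bezier path, sampled by you),
// and the triangle fan below is only valid for convex outlines.
func makeFlatShapeMesh(outline: [SIMD2<Float>]) throws -> MeshResource {
    precondition(outline.count >= 3, "Need at least three points")

    var descriptor = MeshDescriptor(name: "flatShape")
    // Lift the 2D points into 3D (z = 0).
    descriptor.positions = MeshBuffer(outline.map { SIMD3<Float>($0.x, $0.y, 0) })

    // Naive triangle fan around vertex 0; a real implementation would need
    // proper polygon triangulation (and extrusion, if you want depth).
    var indices: [UInt32] = []
    for i in 1..<(outline.count - 1) {
        indices += [0, UInt32(i), UInt32(i + 1)]
    }
    descriptor.primitives = .triangles(indices)

    return try MeshResource.generate(from: [descriptor])
}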
Hello Dev Community,
I've been thinking about Apple's preference for USDZ for AR and 3D content, especially given the widely used GLTF. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, GLTF too is a popular format with its own merits, like being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like GLTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case: I would like to add an entity to the scene at my user's eye level, which basically depends on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
Appreciate any hints, thanks!
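Not an authoritative answer, but two approaches seem worth trying, sketched below under the assumption that you are running in an ImmersiveSpace: an AnchorEntity with the .head target, which keeps a child entity fixed to the wearer, or ARKitSession with a WorldTrackingProvider, which lets you query the device (head) pose yourself.

import RealityKit
import ARKit
import QuartzCore

// Option 1: anchor the collider entity to the head target.
// Add headAnchor to your RealityView content in the make closure.
let headAnchor = AnchorEntity(.head)
let collider = Entity()
collider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.2)]))
headAnchor.addChild(collider)

// Option 2: query the device (head) pose yourself via ARKit.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func queryHeadPose() async throws {
    try await session.run([worldTracking])
    if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
        // originFromAnchorTransform is the head pose in world space, which also
        // gives you the wearer's eye level for placing entities at their height.
        let headTransform = Transform(matrix: device.originFromAnchorTransform)
        print(headTransform.translation)
    }
}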
I am working on a fully immersive RealityView for visionOS and I need to add light from the sun to my scene. I see that DirectionalLight, PointLight, and SpotLight are not available on visionOS. Does anyone know how to add light to a fully immersive scene on visionOS?
My scene is really dark right now without any additional light.
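I can't say this is the intended approach, but one option to try is image-based lighting. A rough sketch below, assuming an environment image named "Sunlight" exists in your bundle and that the async EnvironmentResource(named:) initializer is available in your SDK:

import RealityKit

// A hedged sketch: light an immersive scene with an image-based light.
// "Sunlight" is a placeholder environment image (e.g. an HDR/EXR) in the app bundle.
func addSunlight(to root: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Sunlight")

    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment),
                                                        intensityExponent: 1.0))
    root.addChild(lightEntity)

    // Each entity that should pick up the light needs a receiver component;
    // it is set on the root here only for brevity.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}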
I have a RealityView in my visionOS app. I can't figure out how to access RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't figure out how to access it for my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, Entities, and Components, I can't find it.
Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get it to render in my 2D windows, but when I try making a separate window group I get errors.
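For reference, a minimal sketch of the pattern I'd expect to work, with placeholder identifiers ("ModelWindow") and asset name ("Scene"): a second WindowGroup with a volumetric style, opened via the openWindow environment action.

import SwiftUI
import RealityKit

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // A separate, volumetric window group that hosts the 3D model.
        WindowGroup(id: "ModelWindow") {
            Model3D(named: "Scene") { model in
                model.resizable().scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show model") {
            openWindow(id: "ModelWindow")
        }
    }
}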
I have some strange behavior in my app. When I set the position to .zero, you can see the sphere normally. But when I change it to any other number, no matter which or how small, the sphere isn't visible or in the view.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent

struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent

class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor
    required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
    }
}
I am attempting to place images on wall anchors and be able to move their position using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, the logic stops working properly. The problem seems to be that the anchor and the drag.location3D orientations don't coincide once you are dealing with wall anchors that are not facing the original user position. (Using Xcode beta 8)
Question:
How do I apply dragging gestures to an image regardless of where the wall anchor is located in relation to the user's original facing direction?
Using the following code:
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: convertedPos.y * (-1))
        }
}
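One thing that might help (a sketch only, not a verified fix): convert the drag location into the dragged entity's parent space and use all three components, instead of remapping x/y to x/z by hand, since the hand remapping only holds for a wall that faces the original user position. The convert call is the same one used above; the rest is an assumption.

var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            guard let parent = entity.parent else { return }

            // Convert the 3D drag location into the parent's space and use it
            // directly, so no axis mapping depends on the wall's orientation.
            entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
}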
Hi,
I'm trying to use the ECS System class, and I'm noticing on hardware that update() is not being called every frame, as it is in the Simulator and as described in the documentation.
To reproduce, simply create a System like so:
class MySystem: System {
    var count: Int = 0

    required init(scene: Scene) {
    }

    func update(context: SceneUpdateContext) {
        count = count + 1
        print("Update \(count)")
    }
}
Then, inside the RealityView, register the System with:
MySystem.registerSystem()
Notice that while it'll reliably be called every frame in the Simulator, on hardware it runs for a few seconds, freezes, and is then only called when indirectly doing something like moving a window or performing other visionOS actions analogous to those that call "invalidate" to refresh a window in other operating systems.
Thanks in advance,
-Rob.
I'm developing a 3D scanner that works on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [ .auxiliaryDepth: true, .properties: photo.metadata ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [ .avDepthData: depthData ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But the PhotogrammetrySession emits warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct.
I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach a point cloud.
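For what it's worth, a heavily hedged sketch of an alternative: where the PhotogrammetrySample-based initializer of PhotogrammetrySession is available for your target OS, you can hand the session samples with depthDataMap set instead of writing HEIC files, which is the documented way to provide per-image depth for real-world scale. Whether this also silences the "missing LiDAR point cloud" warnings is an assumption on my part.

import RealityKit
import AVFoundation

// Hedged sketch: build PhotogrammetrySample values that carry depth alongside color.
func makeSamples(from photos: [(id: Int, photo: AVCapturePhoto)]) -> [PhotogrammetrySample] {
    photos.compactMap { item in
        guard let pixelBuffer = item.photo.pixelBuffer else { return nil }
        var sample = PhotogrammetrySample(id: item.id, image: pixelBuffer)
        // Attach the per-image depth map so the session can recover metric scale.
        sample.depthDataMap = item.photo.depthData?
            .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            .depthDataMap
        return sample
    }
}

// let session = try PhotogrammetrySession(input: makeSamples(from: capturedPhotos),
//                                         configuration: .init())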
In this WWDC session video, Apple suggests that any object that appears to emit light should shine color onto nearby objects.
But when I tried to construct a cinema immersive space with a VideoMaterial and some other entities, I found that the VideoMaterial is not emissive and the nearby entity doesn't receive any light from the screen entity my VideoMaterial is attached to.
What is the right way to achieve an effect similar to the TV app that is shown in the video?
Hello,
I was wondering if we could get the metersPerUnit metadata from a USDZ loaded as an Entity (or even an MDLAsset) using RealityKit?
Thanks
I know that CustomMaterial in RealityKit can update a texture by using DrawableQueue, but on the new visionOS, CustomMaterial doesn't work anymore. How can I do the same thing? Can ShaderGraphMaterial do it? I can't find an example of how to do that. Looking forward to your reply, thank you!
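Not sure this is the officially supported path, but one approach that seems plausible is sketched below: TextureResource can be backed by a DrawableQueue via replace(withDrawables:), and ShaderGraphMaterial exposes setParameter(name:value:). "DynamicTexture" and "Placeholder" are placeholder names; the texture input name must match whatever you exposed on the material in Reality Composer Pro.

import RealityKit
import Metal

// Hedged sketch: back a shader-graph texture input with a DrawableQueue.
func attachDrawableQueue(to material: inout ShaderGraphMaterial) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024,
        height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Any existing texture works as a starting point; it is immediately
    // backed by the drawable queue. "Placeholder" is a stand-in asset name.
    let texture = try TextureResource.load(named: "Placeholder")
    texture.replace(withDrawables: queue)

    // "DynamicTexture" must match the texture input exposed on the shader graph.
    try material.setParameter(name: "DynamicTexture", value: .textureResource(texture))
    return queue
}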
Hello, I am very new here in the forum (and in iOS dev as well). I am trying to build an app that uses 3D face filters and I want to use Reality Composer. I knew Xcode 15 did not have it, so I downloaded the beta 8 version (as suggested in another post). This one actually has Reality Composer Pro (Xcode -> Developer Tools -> Reality Composer Pro), but the Experience.rcproject still does not appear. Is there a way to create one? When I use Reality Composer Pro it seems only able to create standalone projects, and it does not seem to be bundled in any way with Xcode. Thanks for your time, people!
Why doesn't videoPlayerComponent.playerScreenSize return the correct size of the video? Printing out x and y gives [0, 0]. Is this a bug?
Hey guys,
How can I fit RealityView content inside a volumetric window?
I have the simple example below:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model has loaded.
Can we achieve the same result using a RealityView?
Cheers
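Not a definitive answer, but one pattern I've seen (sketched below from memory, so treat the convert call as an assumption): wrap the RealityView in a GeometryReader3D, convert the window's frame into RealityKit scene space, and scale the loaded entity to fit. "name" is the same placeholder asset as in the question.

import SwiftUI
import RealityKit

struct FittedModelView: View {
    var body: some View {
        GeometryReader3D { proxy in
            RealityView { content in
                guard let entity = try? await Entity(named: "name") else { return }

                // Window bounds converted into RealityKit coordinates (meters).
                let bounds = content.convert(proxy.frame(in: .local),
                                             from: .local, to: content)

                // Scale the model uniformly so its visual bounds fit the window.
                let modelExtents = entity.visualBounds(relativeTo: nil).extents
                let scale = min(bounds.extents.x / modelExtents.x,
                                bounds.extents.y / modelExtents.y,
                                bounds.extents.z / modelExtents.z)
                entity.scale = SIMD3<Float>(repeating: scale)

                content.add(entity)
            }
        }
    }
}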
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode in the new ShaderGraphMaterial?
Hi,
I have a USDZ asset of a torus / hoop shape that I would like to pass another RealityKit entity (a cube-like object) through (without touching the torus) on visionOS, similar to how a basketball goes through a hoop.
Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, vs when the cube passes cleanly through the opening in the torus.
I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is that the collision shape of the torus is a rectangular box, and not the actual shape of the torus. I know that the collision shape is a rectangular box because I can see this in the visionOS simulator by enabling "Collision Shapes".
Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit, so I don't even have to use a .usdz asset?
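Since there is no generateTorus shape, one workaround (a sketch only, assuming the hoop lies in the entity's local XY plane) is to approximate the torus collider with a ring of small sphere shapes offset around the hoop, which leaves the opening free so the cube can pass through cleanly.

import Foundation
import RealityKit

// Hedged sketch: approximate a torus collider with a ring of small spheres.
// majorRadius/minorRadius describe the hoop; raise segmentCount for accuracy.
func torusCollisionShapes(majorRadius: Float,
                          minorRadius: Float,
                          segmentCount: Int = 24) -> [ShapeResource] {
    (0..<segmentCount).map { i in
        let angle = Double(i) / Double(segmentCount) * 2 * .pi
        let offset = SIMD3<Float>(Float(cos(angle)) * majorRadius,
                                  Float(sin(angle)) * majorRadius,
                                  0)
        return ShapeResource.generateSphere(radius: minorRadius)
            .offsetBy(translation: offset)
    }
}

// Usage: replace the auto-generated box collider on the torus entity.
// torusEntity.components.set(CollisionComponent(shapes:
//     torusCollisionShapes(majorRadius: 0.15, minorRadius: 0.02)))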
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
I believe this is because AnchorEntities have their own self-contained coordinate spaces that are independent of the scene's coordinate space, and the anchor is reporting its position relative to its own coordinate space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
Thank you
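One hedged alternative, if AnchorEntity won't report a usable world-space transform: go through ARKit plane detection directly (this requires an ImmersiveSpace and world-sensing authorization). PlaneAnchor's originFromAnchorTransform is expressed in the scene's origin space, so you can place the unparented ModelEntity from that. A sketch:

import ARKit
import RealityKit

// Hedged sketch: place a model on detected walls using world-space transforms.
func placeModel(_ model: ModelEntity, onWallsIn root: Entity) async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.vertical])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        guard case .added = update.event,
              case .wall = update.anchor.classification else { continue }

        // originFromAnchorTransform is the wall's pose in the scene's origin space,
        // so the entity can be positioned without parenting it to an AnchorEntity.
        model.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        root.addChild(model)
    }
}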
In my project I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed that there is a node called Camera Index Switch that can do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to a Float value it changes back again. I don't know if this is a bug.
2. So I tested this node with an IF node, and I found that its output is weird.
Below, where zero should be output, it is black; but when I change to the IF node it is grey, neither 0 nor 1 (my IF node outputs 1 for TRUE and 0 for FALSE).
I want to ask if this is a bug, and whether this is the correct way to do stereoscopic rendering.