Hello,
RealityKit offers an awesome interface for installing gestures for the common interactions with virtual objects in 3D space.
One of them is EntityTranslationGestureRecognizer, which moves a 3D object through 3D space. While checking the documentation, I found the velocity(in:) method, which I'd like to modify in order to limit the speed at which an object can be moved through 3D space.
https://developer.apple.com/documentation/realitykit/entitytranslationgesturerecognizer/3255581-velocity
I haven't found a straightforward way to subclass and install this gesture recognizer yet. Am I missing something?
Best,
Lennart
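For reference, a rough sketch of installing the built-in translation gesture and reading its velocity(in:) without subclassing: installGestures(_:for:) returns the recognizer instances, so a speed check can be attached as an extra target. The maxSpeed value and the reaction to exceeding it are assumptions.

import RealityKit
import UIKit
import simd

final class TranslationSpeedLimiter: NSObject {
    let maxSpeed: Float = 0.5 // meters per second, arbitrary

    @objc func handle(_ recognizer: EntityTranslationGestureRecognizer) {
        // velocity(in:) reports the gesture velocity in the given entity's space;
        // nil means scene space here.
        let velocity = recognizer.velocity(in: nil)
        if length(velocity) > maxSpeed {
            // React however fits the app, e.g. cancel the in-flight gesture
            // by toggling the recognizer off and back on.
            recognizer.isEnabled = false
            recognizer.isEnabled = true
        }
    }
}

let speedLimiter = TranslationSpeedLimiter()

func installLimitedTranslation(on arView: ARView, for entity: Entity & HasCollision) {
    let recognizers = arView.installGestures(.translation, for: entity)
    if let translation = recognizers.first as? EntityTranslationGestureRecognizer {
        translation.addTarget(speedLimiter, action: #selector(TranslationSpeedLimiter.handle(_:)))
    }
}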
In SceneKit, using SCNShape we can create SCN geometry from SwiftUI 2D shapes/Béziers: https://developer.apple.com/documentation/scenekit/scnshape
Is there an equivalent in RealityKit?
Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
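In case it helps, generate(from:) builds a mesh from MeshDescriptor values rather than from 2D paths directly, so an outline would need to be triangulated by hand first. A minimal sketch, where the positions and indices are a made-up quad standing in for a triangulated outline:

import RealityKit

func makeQuadMesh() throws -> MeshResource {
    var descriptor = MeshDescriptor(name: "quad")
    // Four corners of a small quad in the XY plane.
    descriptor.positions = MeshBuffer([
        SIMD3<Float>(-0.1, -0.1, 0),
        SIMD3<Float>( 0.1, -0.1, 0),
        SIMD3<Float>( 0.1,  0.1, 0),
        SIMD3<Float>(-0.1,  0.1, 0)
    ])
    // Two triangles covering the quad.
    descriptor.primitives = .triangles([0, 1, 2, 0, 2, 3])
    return try MeshResource.generate(from: [descriptor])
}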
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() to HEIC, it only has EXIF and TIFF metadata.
But RealityKit's Object Capture HEIC images have not only EXIF and TIFF, but also HEIC metadata including camera calibration data.
What should I do so that the image exported from AVFoundation has the same metadata?
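I haven't confirmed this reproduces Object Capture's metadata, but one thing worth checking is whether depth delivery is enabled at all, since the depth map (and the calibration data that travels with it) is only embedded in the HEIC when it's requested on both the output and the photo settings. A sketch, assuming an AVCapturePhotoOutput named photoOutput:

import AVFoundation

func makeDepthEmbeddingSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // Depth delivery must be enabled on the output before it can be enabled
    // on the per-capture settings.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true // keep the depth map inside the HEIC file
    return settings
}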
I am working on a fully immersive RealityView for visionOS and I need to add light from the sun to my scene. I see that DirectionalLight, PointLight, and SpotLight are not available on visionOS. Does anyone know how to add light to a fully immersive scene on visionOS?
My scene is really dark right now without any additional light.
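The punctual light components aren't there, but image-based lighting is available in a RealityView. A minimal sketch, where "Sunlight" is a placeholder EnvironmentResource (e.g. an HDR image) in the app bundle and the intensity value is arbitrary:

import RealityKit

func addSunlight(to entity: Entity) async {
    // "Sunlight" is a hypothetical image-based-light asset name.
    guard let environment = try? await EnvironmentResource(named: "Sunlight") else { return }
    entity.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
    // The receiver component goes on each entity that should be lit;
    // here the same entity is used for brevity.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}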
I have a RealityView in my visionOS app. I can't figure out how to access RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't figure out how to access it for my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, Entities, and Components, I can't find it.
Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get the models to render in my 2D windows, but when I try making a separate window group I get errors.
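In case it helps, a minimal sketch of a separate window group for a model, assuming visionOS; "modelViewer" and "Robot" are placeholder identifiers, and the window is opened elsewhere via the openWindow environment action:

import SwiftUI
import RealityKit

@main
struct ModelWindowsApp: App { // hypothetical app
    var body: some Scene {
        // A separate, volumetric window group just for the 3D model;
        // open it from another view with openWindow(id: "modelViewer").
        WindowGroup(id: "modelViewer") {
            Model3D(named: "Robot") { model in
                model
                    .resizable()
                    .aspectRatio(contentMode: .fit)
            } placeholder: {
                ProgressView()
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}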
I can observe RealityKit components by using the new @Observable macro or by using ObservableObject, but both of these require my component to be a class instead of a struct.
I've read that making a Component a class is a bad idea, is this correct?
Is there any other way to observe the values of an entity's components?
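One hedged alternative: keep the component a plain struct, hold the observable state in a separate @Observable model, and copy it onto the entity in the RealityView update closure. The names below are hypothetical:

import SwiftUI
import RealityKit

struct HealthComponent: Component {
    var value: Float = 1.0
}

@Observable
final class HealthModel {
    var value: Float = 1.0
}

struct HealthView: View {
    @State private var model = HealthModel()
    let entity = Entity()

    var body: some View {
        RealityView { content in
            entity.components.set(HealthComponent())
            content.add(entity)
        } update: { content in
            // Runs when observed state (model.value) changes.
            entity.components[HealthComponent.self]?.value = model.value
        }
    }
}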
I have some strange behavior in my app. When I set the position to .zero, the sphere shows up normally. But when I change it to any other value, no matter which or how small, the sphere isn't visible in the view.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent

struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent

class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor
    required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
    }
}
I am attempting to place images on wall anchors and to be able to move their position using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, then the logic stops working properly. The problem seems to be that the anchor and the drag.location3D orientations don't coincide once you are dealing with wall anchors that are not facing the original user position. (Using Xcode beta 8)
Question:
How do I apply drag gestures to an image regardless of where the wall anchor is located in relation to the user's original facing direction?
Using the following code:
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: convertedPos.y * (-1))
        }
}
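For comparison, a sketch that converts the drag location into the entity's parent space and assigns it directly, without remapping axes, which should hold regardless of which wall the anchor is on (assuming the entity has a parent):

var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            guard let parent = entity.parent else { return }
            // convert(_:from:to:) already accounts for the anchor's orientation,
            // so no manual axis swap is needed here.
            entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
}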
I have generated a box in RealityKit with the splitFaces parameter set to true to allow a different material on each side of the cube.
Applying different SimpleMaterials (e.g. with different colors) works fine in the Vision Pro simulator, but combining VideoMaterial and SimpleMaterial does not work. BTW, a cube with six video faces renders successfully, so the problem seems to be mixing material types.
Here's my relevant code snippet:
let mesh = MeshResource.generateBox(width: 0.3, height: 0.3, depth: 0.3, splitFaces: true)
let mat1 = VideoMaterial(avPlayer: player)
let mat2 = SimpleMaterial(color: .blue, isMetallic: true)
let mat3 = SimpleMaterial(color: .red, isMetallic: true)
let cube = ModelEntity(mesh: mesh, materials: [mat1, mat2, mat3, mat1, mat2, mat3])
Specifically, the video textures are shown, whereas the SimpleMaterial faces are invisible.
Is this a problem with the Vision Pro simulator? Or is it not possible to combine different material types on a box? Any help is welcome!
Xcode complains that Model3D is not in scope even though RealityKit is imported.
I have tried with Xcode 15 and beta 8, to no avail.
How can I resolve this issue?
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        Model3D(named: "Robot-Drummer") { model in
            model
                .resizable()
                .aspectRatio(contentMode: .fit)
        } placeholder: {
            ProgressView()
        }
    }
}
The error I get with the visionOS simulator:
cannot migrate AudioUnit assets for current process
code:
guard let resource = try? AudioFileGroupResource.load(
    named: "/Root/AudioGroupDropStone",
    from: "Scene.usda",
    in: realityKitContentBundle
) else { return }
Any ideas how to debug this?
The audio files seem to work fine in Reality Composer Pro.
Hi,
I've been working on a spatial image design, guided by this Apple developer video:
https://developer.apple.com/videos/play/wwdc2023/10081?time=792.
I've hit a challenge: I'm trying to position a label to the left of the portal. Although I've used an attachment for the label within the content, pinpointing the exact starting position of the portal to align the label is proving challenging.
Any insights or suggestions would be appreciated.
Below is the URL of the image used:
https://cdn.polyhaven.com/asset_img/primary/rural_asphalt_road.png?height=780
struct PortalView: View {
    let radius = Float(0.3)
    var world = Entity()
    var portal = Entity()

    init() {
        world = makeWorld()
        portal = makePortal(world: world)
    }

    var body: some View {
        RealityView { content, attachments in
            content.add(world)
            content.add(portal)
            if let attachment = attachments.entity(for: 0) {
                portal.addChild(attachment)
                attachment.position.x = -radius/2.0
                attachment.position.y = radius/2.0
            }
        } attachments: {
            Attachment(id: 0) {
                Text("Title")
                    .background(Color.red)
            }
        }
    }

    func makeWorld() -> Entity {
        let world = Entity()
        world.components[WorldComponent.self] = .init()

        let imageEntity = Entity()
        var material = UnlitMaterial()
        let texture = try! TextureResource.load(named: "road")
        material.color = .init(texture: .init(texture))
        imageEntity.components.set(
            ModelComponent(mesh: .generateSphere(radius: radius), materials: [material])
        )
        imageEntity.position = .zero
        imageEntity.scale = .init(x: -1, y: 1, z: 1)
        world.addChild(imageEntity)
        return world
    }

    func makePortal(world: Entity) -> Entity {
        let portal = Entity()
        let portalMaterial = PortalMaterial()
        let planeMesh = MeshResource.generatePlane(width: radius, height: radius, cornerRadius: 0)
        portal.components[ModelComponent.self] = .init(mesh: planeMesh, materials: [portalMaterial])
        portal.components[PortalComponent.self] = .init(
            target: world
        )
        return portal
    }
}

#Preview {
    PortalView()
}
I have a blender project, for simplicity a black hole. The way that it is modeled is a sphere on top of a round plane, and then a bunch of effects on that.
I have tried multiple ways:
convert to USD from the file menu
convert to obj and then import
But all of them have resulted in just the body, not any effects.
Does anybody know how to do this properly? The only other idea I have is going through Reality Converter Pro (which I planned on doing already) and just modeling it there.
Surface screen position
Does it return the model's vertex XYZ positions, normalized?
The node graph needs more tutorials and explanations.
I've made zero progress.
My app NFC.cool is using the Object Capture API, and I fully developed the feature with an iPhone 13 Pro Max. On that phone everything works fine. Now I have a new iPhone 15 Pro Max, and I get crashes when the photogrammetry session is at around 1%. This happens when I have completed all three scan passes. When I prematurely end a scan with around 10 images, the reconstruction runs fine and I get a 3D model.
com.apple.corephotogrammetry.tracking:0 (40): EXC_BAD_ACCESS (code=1, address=0x0)
Anyone else seeing these crashes?
SharePlay & Group Activities
I was able to implement entity position synchronisation via SharePlay (Group Activities) in my visionOS app by following the tutorials on SharePlay in the "Draw Together" app from these WWDC sessions:
https://developer.apple.com/wwdc21/10187
https://developer.apple.com/wwdc22/10140
While referencing the sample code at: https://developer.apple.com/documentation/groupactivities/drawing_content_in_a_group_session
MultipeerConnectivityService
However, it seems that RealityKit has something called MultipeerConnectivityService for entity position synchronisation, and it seems to be a pretty robust solution that will sync not only positions but also other things like Codable components. 🤔
See docs at: https://developer.apple.com/documentation/realitykit/multipeerconnectivityservice
Call for help
Can anyone share example code that implements MultipeerConnectivityService?
I wonder if this is the approach recommended by Apple?
I must say, writing custom messages to sync the entity positions via Group Activities was very hard 😅 I was just wondering what I should do for all the entity components now...
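Not a full answer, but the basic wiring from the iOS docs is short. A sketch assuming an already-connected MCSession (I haven't verified whether this is available to a visionOS RealityView):

import MultipeerConnectivity
import RealityKit

func enableSynchronization(for arView: ARView, using mcSession: MCSession) {
    do {
        // Entities carrying a SynchronizationComponent (the default for
        // ModelEntity) are then kept in sync across the session's peers.
        arView.scene.synchronizationService = try MultipeerConnectivityService(session: mcSession)
    } catch {
        print("Failed to create MultipeerConnectivityService: \(error)")
    }
}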
Hi,
I'm new to RealityKit and still learning.
I'm trying to implement a feature on visionOS that triggers specific logic when the user's head comes into contact with another entity. When two entities are added directly to the RealityView, I am able to subscribe to their collision events correctly. But when I add one of the entities to an AnchorEntity that is anchored to the user's head, I no longer receive the collision subscription. I also found that if an entity conforms to the HasAnchor protocol, it cannot participate in collision detection normally either. Why does this happen? Is this a feature or a bug?
Here is how I subscribe to collision events:
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: nil, componentType: nil) { collisionEvent in
    print("💥 Collision between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
}
The following two entities collide fine:
@State private var anotherEntity : Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 1), materials: [SimpleMaterial(color: .white, isMetallic: false)], position: [-2,1,0])
@State private var headEntity : Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 0.5), materials: [SimpleMaterial(color: .yellow, isMetallic: false)], position: [0, -0.35, -3])
But with anchoring, I can't get collision notifications:
@State private var anotherEntity : Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 1), materials: [SimpleMaterial(color: .white, isMetallic: false)], position: [-2,1,0])
@State private var headEntity : Entity = {
    let headAnchor = AnchorEntity(.head)
    headAnchor.addChild(CollisionEntity(model: MeshResource.generateSphere(radius: 0.5), materials: [SimpleMaterial(color: .yellow, isMetallic: false)], position: [0, -0.35, -3]))
    return headAnchor
}()
Any information or suggestions are welcome, thanks!
I am trying to extract the 6DOF (six degrees of freedom) information from the PhotogrammetrySession.Pose using the ObjectCaptureSession in iOS. In the API documentation for PhotogrammetrySession.Pose, it is mentioned that it supports iOS 17 and later. However, in the GuidedCapture sample program, the following comment is written:
case .modelEntity(_, _), .bounds, .poses, .pointCloud:
    // Not supported yet
    break
Does this mean it's impossible to get 6DOF information from PhotogrammetrySession.Pose at this time? Or is there any other way to achieve this? Any guidance would be greatly appreciated.
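For context, this is roughly how a .poses request would be issued and read, assuming a configured PhotogrammetrySession named session; whether the request is actually honored on iOS 17 is exactly the open question above:

import RealityKit

func requestPoses(from session: PhotogrammetrySession) async throws {
    try session.process(requests: [.poses])
    for try await output in session.outputs {
        if case let .requestComplete(_, result) = output,
           case let .poses(poses) = result {
            // Pose values carry the per-image camera transforms.
            print("Received poses: \(poses)")
        }
    }
}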
The Goal
My goal is to place an item where the user taps on a plane, and have that item match the outward facing normal-vector where the user tapped.
In beta 3, a 3D SpatialTapGesture now returns an accurate location3D, so determining the position at which to place an item is working great. I simply do:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
The Problem
Now, I notice that my entities aren't oriented correctly:
The placed item always 'faces' the camera. So if the camera isn't looking straight on the target plane, then the orientation of the new entity is off.
If I retrieve the transform of my newly placed item, it says the rotation relative to nil is (0, 0, 0), which... doesn't look correct?
I know I'm dealing with the different coordinate systems of the plane being tapped, the world, and the item being placed, and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quaternions are still new to me.
So, I'm curious, what rotation information can I use to "correct" the placed entity's orientation?
What I tried:
I've tried investigating the tap-target-entity like so:
let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation
I believe this returns the rotation of the "plane entity" the user tapped, relative to the world.
While that gets me the following, I'm not sure if it's useful:
rotationRelativeToWorld:
▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)
If anyone has better intuition than me about the coordinated spaces involved, I would appreciate some help. Thanks!
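One hedged idea: if the tapped entity's own orientation represents the plane, its world-space orientation can be reused for the newly placed entity (newEntity below is hypothetical) instead of leaving the default, camera-facing one:

let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
let surfaceOrientation = value.entity.orientation(relativeTo: nil)

// Place and orient the new entity in world (scene) space.
newEntity.setPosition(worldPosition, relativeTo: nil)
newEntity.setOrientation(surfaceOrientation, relativeTo: nil)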