I have some older code running an ARView in .nonAR mode with a perspective camera that is moved around. It seems that if I run this on an iPhone with the iOS 18 beta or iOS 18.1 beta, the camera is ignored and the view looks incorrect.
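For context, here is a minimal sketch of this kind of setup (the entity names and camera placement are illustrative, not taken from the original code):
import RealityKit
import UIKit

// Non-AR view with an explicit perspective camera that the app moves itself.
let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)
let cameraAnchor = AnchorEntity(world: .zero)
let camera = PerspectiveCamera()
cameraAnchor.addChild(camera)
arView.scene.addAnchor(cameraAnchor)
// Position and aim the camera manually; per the report above, this placement
// appears to be ignored on the iOS 18 / 18.1 betas.
camera.look(at: .zero, from: [0, 1, 3], relativeTo: nil)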
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappeared when I removed these lines:
GKLocalPlayer.local.authenticateHandler = { viewController, error in
// ... some more code ...
}
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
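Put together, a minimal repro sketch of the configuration being described (the view name and placing the Game Center handler in onAppear are my assumptions; the camera-mode spelling is the one used in the post against the iOS 18 beta SDK):
import SwiftUI
import RealityKit
import GameKit

struct TrackedView: View {
    var body: some View {
        RealityView { content in
            // World tracking as in the post; with the handler below set,
            // the camera reportedly stays static and there is no pass-through.
            content.camera = .worldTracking
        }
        .onAppear {
            GKLocalPlayer.local.authenticateHandler = { viewController, error in
                // ... some more code ...
            }
        }
    }
}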
I am trying to make a shader that resembles a laser like this:
I've been experimenting with a basic Fresnel shader to start, but the Fresnel shader has a problem at high viewing angles where the top has a very different color than the rest of the capsule.
This might work for a laser shader once inverted and fine tuned:
However, when viewed from the top, it doesn't look so good anymore:
Ideally, the purple edge is always ONLY on the edge, and the rest of the surface is the light pink color, no matter the viewing angle. How can I accomplish this to create something that looks like a laser?
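For what it's worth, the rim term such a shader needs can be written out as plain math; here it is sketched in Swift with simd (the function name and default values are illustrative), using a smoothstep window so the edge color stays confined to the silhouette instead of washing over the whole cap at grazing angles:
import simd

// Hedged sketch of a view-angle-stable rim term for a laser-style material.
func laserRim(normal: SIMD3<Float>, viewDirection: SIMD3<Float>,
              edgeWidth: Float = 0.15, edgeSoftness: Float = 0.05) -> Float {
    let n = simd_normalize(normal)
    let v = simd_normalize(viewDirection)
    let fresnel = 1 - max(0, simd_dot(n, v))          // classic Fresnel-style falloff
    let start = 1 - edgeWidth - edgeSoftness
    let t = min(max((fresnel - start) / edgeSoftness, 0), 1)
    return t * t * (3 - 2 * t)                        // smoothstep: 0 = pink body, 1 = purple edge
}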
Is it possible with iOS 18 to use RealityView with world tracking but without the camera feed as background?
With content.camera = .worldTracking the background is always the camera feed, and with content.camera = .virtual the device's position and orientation don't affect the viewpoint. Is there a way to combine the two?
My use case is that my app "Encyclopedia GalacticAR" shows astronomical objects and a skybox (a huge sphere), like a VR view of planets, as you can see in the left image. Now that iOS 18 offers RealityView for iOS and iPadOS, I would like to make use of it, but I haven't found a way to display my skybox as the environment instead of the camera feed.
I filed the suggestion FB14734105 but hope that somebody knows a workaround...
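For reference, the kind of skybox sphere the post describes can be built roughly like this (the texture name and radius are placeholders, and this assumes the async TextureResource(named:) initializer; it does not by itself answer the camera-feed question):
// Hedged sketch of a "huge sphere" skybox with an unlit texture.
func makeSkybox(textureNamed name: String) async throws -> ModelEntity {
    let texture = try await TextureResource(named: name)
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
    sphere.scale.x = -1   // flip the sphere so the texture is visible from the inside
    return sphere
}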
I have various USDZ files in my visionOS app. Loading the USDZ files works quite well. I only have problems with the positioning of the 3D model. For example, I have a USDZ file that is displayed directly above me. I can't move the model or perform any other actions on it. If I sit on a chair or stand up again, the 3D model automatically moves with me. This is my source code for loading the USDZ files:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State var modelName: String
    @State private var loadedModel = Entity()

    var body: some View {
        RealityView { content in
            if let usdModel = try? await Entity(named: modelName) {
                print("====> \(modelName) : \(usdModel) <====")
                let bounds = usdModel.visualBounds(relativeTo: nil).extents
                usdModel.scale = SIMD3<Float>(1.0, 1.0, 1.0)
                usdModel.position = SIMD3<Float>(0.0, 0.0, 0.0)
                // Collision, hover, and input target so the model can receive gestures later.
                usdModel.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                usdModel.components.set(HoverEffectComponent())
                usdModel.components.set(InputTargetComponent())
                loadedModel = usdModel
                content.add(usdModel)
            }
        }
    }
}
For now I only want the 3D models from the USDZ files to be displayed correctly; moving them via gestures is step 2 and can come later. What have I forgotten or done wrong?
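One thing worth trying (an assumption on my part, not a confirmed fix for the behavior above) is to parent the loaded model to an explicit world anchor inside the RealityView closure, so it stays at a fixed spot in the room rather than near the viewer:
// Hedged sketch, inside the RealityView make closure above:
let worldAnchor = AnchorEntity(world: [0, 1.0, -1.5])   // roughly 1.5 m in front of the space origin
worldAnchor.addChild(usdModel)
content.add(worldAnchor)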
Hey everyone,
I'm working on an iOS app where I use AVPlayer to play videos, then process them through Metal to apply effects. The app has controls that let users tweak these effects in real time, and I want the final processed video to be streamed via AirPlay. I use a custom rendering layer that displays the processed video on screen from a Metal texture, and that works as intended.
The problem is, when I try to AirPlay the video after feeding it the processed Metal frames, it just streams the original video from AVPlayer, not the version with all the Metal effects.
The final processed output is a Metal texture that gets rendered in an MTKView. I even tried capturing that texture and sending it through a new AVPlayer setup, but AirPlay still grabs the original, unprocessed video instead of the final, fully rendered output. It's also clear that the AirPlayed video has the full length of the original built in, so it's not even 'live streaming' the wrong feed.
I need help figuring out how to make AirPlay stream the live, processed video with all the effects, not just the raw video. Any ideas? Happy to share my code if that helps, but I'm not sure I have the right underlying approach yet.
Thanks!
I'm building an iOS/iPadOS app for iOS 18+ using the new RealityView in SwiftUI. (I may add visionOS, but I'm not focusing on it right now.) The 3D scene I'm rendering is fairly simple (just a few dozen vertices and a couple of textures), and I'd like to render it at 120fps on ProMotion devices if possible. I tried setting CADisableMinimumFrameDurationOnPhone to true in the Info.plist, but it had no effect. The frame rate in the GPU report in Xcode stays capped at 60fps, and the gauge even tops out at 60.
My question is kind of the opposite of this post, which asks how to limit the frame rate of a RealityView.
I'm on Xcode 16 beta 5 on macOS Sonoma and iOS 18.0 beta 6 on my iPhone 15 Pro.
Hello,
I'm playing around with making a fully immersive multiplayer, air-to-air dogfighting game, but I'm having trouble figuring out how to attach a camera to an entity.
I have a plane that's controlled with a gamepad, and I want the camera's position to be pinned to that entity as it moves through space, while maintaining the user's ability to look around.
Is this possible?
--
From my understanding, the current state of SceneKit, ARKit, and RealityKit is a bit confusing regarding what can and cannot be done.
SceneKit
Full control of the camera
Not sure if it can use RealityKit's ECS system
2D window only - missing full immersion
ARKit
Full control of the camera, but only on non-Vision Pro devices, since visionOS doesn't have an ARView (see the camera sketch after this list)
Has RealityKit's ECS system
2D window only - missing full immersion
RealityKit
Camera is pinned to the device's position and orientation
Has RealityKit's ECS system
Allows full immersion
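To illustrate the "full control of the camera" rows above: on the platforms that have ARView's non-AR mode (iOS/macOS, not visionOS), a camera can be parented to an entity so it follows it. A rough sketch, with the plane entity name as a placeholder:
// Hedged sketch: chase camera pinned to a plane entity in an ARView (.nonAR) scene.
let chaseCamera = PerspectiveCamera()
planeEntity.addChild(chaseCamera)   // the camera now inherits the plane's motion
chaseCamera.look(at: .zero, from: [0, 2, -8], relativeTo: planeEntity)   // behind and above, aimed at the plane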
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface:
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:
If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
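For reference, a minimal sketch of the loading call being discussed (the material path, scene file, and bundle names below are placeholders):
// Hedged sketch; works on visionOS 2 beta but fails on visionOS 1.2 per the report above.
let material = try await ShaderGraphMaterial(named: "/Root/UnlitTextureMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
modelEntity.model?.materials = [material]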
Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
Hello Guys,
I'm looking for a way to look at a 3D object around me, pinch it (from afar, with eyes and two fingers, not by touching it), and display a little information on top of it (like a small text label).
If you have the solution you will make me happy (and a bit less stupid too :) )
Cheers
Mathis
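A rough sketch of one way to approach this on visionOS, assuming the model is loaded from the app bundle; the entity name, label text, and offsets are placeholders, not a definitive implementation:
import SwiftUI
import RealityKit

struct PinchInfoView: View {
    @State private var showInfo = false

    var body: some View {
        RealityView { content, attachments in
            if let model = try? await Entity(named: "Object") {
                // Collision + input target are required for the entity to receive gestures.
                model.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.2)]))
                model.components.set(InputTargetComponent())
                content.add(model)
            }
        } update: { content, attachments in
            // Re-attach the label whenever SwiftUI rebuilds the attachment.
            if let label = attachments.entity(for: "info"),
               let model = content.entities.first {
                label.position = [0, 0.25, 0]   // float the text just above the model
                model.addChild(label)
            }
        } attachments: {
            Attachment(id: "info") {
                if showInfo {
                    Text("A short description of the object")
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        // Look at the entity and pinch (indirect tap) to toggle the label.
        .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in showInfo.toggle() })
    }
}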
I'm trying to load up a virtual skybox, different from the built-in default, for a simple macOS rendering of RealityKit content.
I was following the details at https://developer.apple.com/documentation/realitykit/environmentresource, created a folder called "light.skybox" with a single file in it ("prairie.hdr"), and then tried to load that and set it as the environment on the ARView when it's created:
let ar = ARView(frame: .zero)
do {
    let resource = try EnvironmentResource.load(named: "prairie")
    ar.environment.lighting.resource = resource
} catch {
    print("Unable to load resource: \(error)")
}
The loading always fails when I launch the sample app, reporting "Unable to load resource ...". When I look in the app bundle, the resource included there as Contents/Resources/light.realityenv is an entirely different size, appearing to be the default lighting.
I've tried making the folder "light.skybox" explicitly target the app bundle for inclusion, but I don't see it get embedded whether I toggle it that way or leave the default.
Is there anything I need to do to get Xcode to process and include the lighting I'm providing?
(This is inspired from https://stackoverflow.com/questions/77332150/realitykit-how-to-disable-default-lighting-in-nonar-arview, which shows an example for UIKit)
I'm trying to clone an entity that's somewhere deeper in the hierarchy, and I want it together with a transform that takes its parents into account.
Initially I made something that would walk back through the parents, get their transforms, and reduce them to a single one. Then I realized that what I'm doing is the same as .transformMatrix(relativeTo: rootEntity), but to validate that my version gives the same results I started printing both, and I noticed that for some reason the last row, instead of a stable (0, 0, 0, 1), is sometimes (0, 0, 0, 0.9999...). I know there are rounding errors, but I'd assume 0 and 1 are "magical" values in the floating-point world.
The only way I can explain it is that .transformMatrix uses some fancy accelerated matrix multiplication that produces slightly bigger rounding errors. That would explain the slight differences in other fields between my version and the function call, but still, the 1 seems weird.
Here's the function I'm using to compare:
func cloneFlattened(entity: Entity, withChildren recursive: Bool) -> Entity {
    let clone = entity.clone(recursive: recursive)
    var transforms = [entity.transform.matrix]
    var parent: Entity? = entity.parent
    var rootEntity: Entity = entity
    while parent != nil {
        rootEntity = parent!
        transforms.append(parent!.transform.matrix)
        parent = parent!.parent
    }
    if transforms.count > 1 {
        clone.transform.matrix = transforms.reversed().reduce(simd_diagonal_matrix(simd_float4(repeating: 1)), *)
        print("QWE CLONE FLATTENED: \(clone.transform.matrix)")
        print("QWE CLONE RELATIVE : \(entity.transformMatrix(relativeTo: rootEntity))")
    } else {
        print("QWE CLONE SINGLE : \(clone.transform.matrix)")
    }
    return clone
}
Sometimes the last element is not 1
QWE CLONE FLATTENED: [
[0.00042261832, 0.0009063079, 0.0, 0.0],
[-0.0009063079, 0.00042261832, 0.0, 0.0],
[0.0, 0.0, 0.0010000002, 0.0],
[-0.0013045187, -0.009559666, -0.04027118, 1.0]
]
QWE CLONE RELATIVE : [
[0.00042261826, 0.0009063076, -4.681872e-12, 0.0],
[-0.0009063076, 0.00042261826, 3.580335e-12, 0.0],
[3.4256328e-12, 1.8047965e-13, 0.0009999998, 0.0],
[-0.0013045263, -0.009559661, -0.040271178, 0.9999997]
]
Sometimes it is
QWE CLONE FLATTENED: [
[0.0009980977, -6.1623556e-05, -1.7382005e-06, 0.0],
[-6.136851e-05, -0.0009958588, 6.707259e-05, 0.0],
[-5.8642554e-06, -6.683835e-05, -0.0009977464, 0.0],
[-1.761913e-06, -0.002, 0.0, 1.0]
]
QWE CLONE RELATIVE : [
[0.0009980979, -6.1623556e-05, -1.7382023e-06, 0.0],
[-6.136855e-05, -0.0009958589, 6.707254e-05, 0.0],
[-5.864262e-06, -6.6838256e-05, -0.0009977465, 0.0],
[-1.758337e-06, -0.0019999966, -3.7252903e-09, 1.0]
]
The 0s in the last row seem to be stable.
It happens both for entities that are a few levels deep and for those that have only an anchor as a parent.
So far I've never seen any value that would not be "technically a 1", but my hierarchies are not very deep and it makes me wonder if this rounding could get worse.
Or is it just me doing something stupid? :)
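If the stray w value matters downstream, one option is a tiny helper (my assumption, not a RealityKit API) that snaps the affine bottom row back after composing the matrices, since a TRS chain should always end in (0, 0, 0, 1):
import simd

// Hedged sketch: force the affine bottom row of a composed float4x4 transform.
func normalizedAffine(_ m: simd_float4x4) -> simd_float4x4 {
    var out = m
    out.columns.0.w = 0
    out.columns.1.w = 0
    out.columns.2.w = 0
    out.columns.3.w = 1   // translation column keeps its xyz, w is clamped back to 1
    return out
}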
Starting with Xcode beta 4, any ModelEntity I load from a USDZ that contains a skeletal pose has no pins. The pins used to be accessible from a ModelEntity, so you could use them for alignment with other pins.
Per the documentation, any ModelEntity with a skeletal pose should have pins that are automatically generated and contained on the entity.pins object itself.
https://developer.apple.com/documentation/RealityKit/Entity/pins
Is this a bug with the later Xcode betas or is the documentation wrong?
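For reference, a sketch of the access pattern in question (the model name is a placeholder, and this assumes the entity's pin collection can be iterated like an ordinary Collection, per the linked docs):
// Hedged sketch: list the automatically generated skeletal pins of a loaded model.
let entity = try await Entity(named: "Character")
for pin in entity.pins {
    print(pin.name)   // per the post, nothing is listed on the later Xcode betas
}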
Hi, I'm trying to capture some images from a WKWebView on visionOS. I know there's a function, takeSnapshot(), that can get an image of the web page, but I wonder whether drawHierarchy() fails to work properly on WKWebView because of GPU-backed content. Are there any other methods I can call to capture images correctly?
Furthermore, since I put my web view into an immersive space, is there any way I can get the texture of this UIView attachment? Thank you
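For reference, a minimal sketch of the snapshot API mentioned above (the web view instance is assumed to exist):
import WebKit

// Hedged sketch: WKWebView.takeSnapshot renders the current page contents to a UIImage.
let configuration = WKSnapshotConfiguration()
webView.takeSnapshot(with: configuration) { image, error in
    if let image {
        // Use the captured UIImage here.
    }
}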
UI:
Attachment(id: "tooptip") {
    if isRecording {
        TooltipView {
            HStack(spacing: 8) {
                Image(systemName: "waveform")
                    .font(.title)
                    .frame(minWidth: 100)
            }
        }
        .transition(.opacity.combined(with: .scale))
    }
}
Trigger:
Button("Toggle") {
    withAnimation {
        isRecording.toggle()
    }
}
The above code does not show the animation effect when running. When I use isRecording to drive an element in a regular SwiftUI view, there is an animation effect.
Hi everyone,
I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content.
Here's a simplified version of my setup:
func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1 / aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}
Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it?
Thanks in advance!
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However, on the Metal side there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d).
Any workarounds? Could I even do something icky like casting the texture2d to a texture3d in MSL (is that even possible)? Could I encode the 3D texture into an argument buffer and pass it in somehow?
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to correctly render with other primitives, the depth of the fragment needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties.
If not, can an SDF still be rendered in RealityKit using ray-marching?
Summary:
I'm working on a visionOS project where I need to dynamically load a .bundle file containing RealityKit content from the app's Application Support directory. The .bundle is saved to disk after being downloaded or retrieved as an On-Demand Resource (ODR).
Sample project with the issue:
GitHub repo. Run the target test-odr to use the local bundle and reproduce the crash.
Overall problem:
Setup: Add a .bundle named RealityKitContent_RealityKitContent.bundle to the app's resources. This bundle contains a .reality file with two USDA scenes: "Immersive" and "Scene".
Save to Disk: save the bundle to the Application Support directory, ensuring that the file is correctly copied and saved.
Load the Bundle: load the bundle from the saved URL using Bundle(url: bundleURL) to initialize the Bundle object.
Load Entity from Bundle: load a specific entity ("Scene") from the bundle. When trying to load the entity using let storedEntity = try await Entity(named: "Scene", in: bundle), the app crashes with an EXC_BREAKPOINT error. (These steps are condensed in the sketch after this list.)
ContentsOf Method Issue: If I use Entity.load(contentsOf: realityFileURL, withName: entityName) instead, it always loads the first root entity found (in this case, "Immersive") rather than "Scene", even when specifying the entity name. This is why I want to use the Bundle to load entities by name more precisely.
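The loading steps above, condensed into a sketch (the paths and names follow the description in this post):
// Hedged sketch of the disk-based bundle loading path described above.
let supportURL = try FileManager.default.url(for: .applicationSupportDirectory,
                                             in: .userDomainMask,
                                             appropriateFor: nil,
                                             create: true)
let bundleURL = supportURL.appendingPathComponent("RealityKitContent_RealityKitContent.bundle")

guard let bundle = Bundle(url: bundleURL) else { fatalError("Bundle not found on disk") }

// Crashes here with EXC_BREAKPOINT, per the report below.
let scene = try await Entity(named: "Scene", in: bundle)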
Issue:
The crash consistently occurs on the Entity(named: "Scene", in: bundle) line. I have verified that the bundle exists and is accessible at the specified path and that it contains the expected .reality file with multiple entities (“Immersive” and “Scene”). The error code I get is EXC_BREAKPOINT (code=1, subcode=0x1d135d4d0).
What I’ve Tried:
• Ensured the bundle is properly saved and accessible.
• Checked that the bundle is initialized correctly from the URL.
• Tested loading the entity using the contentsOf method, which works fine but always loads the “Immersive” entity, ignoring the specified name. Hence, I want to use the Bundle-based approach to load multiple USDA entities selectively.
Question:
Has anyone faced a similar issue or knows why loading entities using Entity(named:in:) from a disk-based bundle causes this crash? Any advice on how to debug or resolve this, especially for managing multiple root entities in a .reality file, would be greatly appreciated.
Hi, I'm trying to understand how I can get 3- or 4-channel per-vertex data into the Graph Editor.
From my tests, it seems that:
the "Geometric Property" node does not give access to 4-channel data,
"Geometric Property (vector3)" does not give me access to custom properties besides the ones defined in MaterialX core
the "Texture Coordinates" node has a vector4f mode (yay!), but according to the MaterialX spec, texcoords must have 2 or 3 channels, and I can't get 4-channel data to show up there either.
My assumption so far is that I must be missing some "magic": for example, do the primvars in a file have to be in a specific order, independent of their names? Or do their names matter? (E.g., the convention would be primvars:st, primvars:st1, and so on.)
Unfortunately the forum doesn't allow me to attach any USDZ or ZIP files or GDrive links; if there's a way to share a test file I'm happy to do so!