I would like to add text to a Reality Composer Pro scene and set the actual text via code. How can I achieve this? I haven't seen any "Text" element in the editor.
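The workaround I'm currently considering is to author a placeholder entity in the Reality Composer Pro scene and generate the text mesh at runtime with RealityKit. A rough, unverified sketch ("TextAnchor" and "scene", the loaded scene root, are placeholder names):

// Sketch (unverified): generate the text at runtime and attach it to a
// placeholder entity authored in the RC Pro scene.
let textMesh = MeshResource.generateText("Hello, world",
                                         extrusionDepth: 0.01,
                                         font: .systemFont(ofSize: 0.1))
let textEntity = ModelEntity(mesh: textMesh,
                             materials: [SimpleMaterial(color: .white, isMetallic: false)])
// "TextAnchor" is a hypothetical empty transform added in the RC Pro editor.
scene.findEntity(named: "TextAnchor")?.addChild(textEntity)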
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
I'm following the Meet Reality Composer Pro walkthrough and ran into something that didn't function as expected.
When I got to the step where I add five "Bird_With_Audio.usda" references to the scene, I found they did not play audio. After some trial and error, I found that Preview > Resource in each of their Spatial Audio items was set to "None." If I click the dropdown menu, I see several "Bird_Calls" groups to pick from.
I checked the original Bird_With_Audio.usda that I had created, and the "Bird_Calls" audio group was correctly assigned and worked. I tried dragging a sixth Bird_With_Audio into the scene and confirmed that the Spatial Audio item suddenly empties, rendering the bird silent.
I was able to go through each of the five birds and set their Spatial Audio Resource to Bird_Calls, and the group worked like the video demonstrates.
While this fixed the issue, as a beginner I'd like to know why it happened. It doesn't seem right that I would build an item and then have to re-attach its sounds when I place it in the main scene. So… where did I mess up?
I'm trying to make a simple demo of using ShaderGraphMaterial in a USDZ file that I can preview on Mac and visionOS, but I'm having trouble.
In Reality Composer, I make a sphere, then assign a ShaderGraphMaterial to the material, with a simple diffuse color (green) input. When I save the file as .usda, it displays as a gray sphere on Mac rather than the green sphere shown in Reality Composer. If I then convert to usdz using Reality Converter, I get a warning on import:
"Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source."
And the exported .usdz also shows as a gray sphere.
Is there a simple demo of a .usda file using ShaderGraphMaterial that displays on Mac, iOS, and visionOS that I can look at to see how it looks internally?
My actual problem is creating usdz / usda files on visionOS for viewing on iOS / Mac / visionOS, but the first step is showing it's possible to even use ShaderGraphMaterial across all platforms.
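For reference, on the RealityKit side I'd expect to load such a material roughly like this (the material path and scene file name are placeholders, and I haven't verified this behaves the same on every platform):

// Sketch: load a ShaderGraphMaterial authored in Reality Composer Pro.
// "/Root/GreenMaterial" and "Scene.usda" are placeholder names.
let material = try await ShaderGraphMaterial(named: "/Root/GreenMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [material])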
Thanks
Hi,
I create an entity and add a bunch of attachments (code is based on the Diorama demo).
I can rotate the entity with this:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let orientation = Rotation3D(entity.orientation(relativeTo: nil))
            let newOrientation: Rotation3D
            if value.location.x >= lastGestureValue {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
            } else {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
            }
            entity.setOrientation(.init(newOrientation), relativeTo: nil)
            lastGestureValue = value.location.x
        }
)
But the attachments stay still.
How can I rotate the entity AND the attachment at the same time?
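One thing I plan to try (not sure it's the intended approach) is parenting the attachment entities to the model entity so they inherit its transform. A rough sketch, where "rotatingEntity" and the "info" attachment id are placeholder names:

// Sketch: make the attachment a child of the entity so it rotates with it.
RealityView { content, attachments in
    if let panel = attachments.entity(for: "info") {
        panel.position = [0, 0.2, 0]      // position relative to the parent entity
        rotatingEntity.addChild(panel)    // the label now follows the entity's rotation
    }
    content.add(rotatingEntity)
} attachments: {
    Attachment(id: "info") {
        Text("Label")
    }
}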
We are porting an iOS Unity AR app to native visionOS.
Ideally, we want to reuse our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it comes to the shaders.
Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else?
I've seen there's an FBX converter, but this would not take care of shaders or particles.
I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
I have a custom material in Reality Composer.
When I attach it to a cube and try loading the scene in Xcode, the material cannot be cast to a ShaderGraphMaterial because it has been changed to a PhysicallyBasedMaterial.
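For reference, the load-and-cast I'm attempting looks roughly like this ("Scene" and "Cube" are placeholders for my actual names):

// Sketch of the failing cast.
let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
if let model = scene.findEntity(named: "Cube")?.components[ModelComponent.self] {
    let shaderGraph = model.materials.first as? ShaderGraphMaterial
    print(shaderGraph as Any)  // prints nil: the material comes back as PhysicallyBasedMaterial
}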
The material was always a Custom material, I did not change the type in Reality Composer.
Does anyone know how to fix this?
Hi guys,
if you started using Vision Pro, I'm sure you already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one person may not get any attention from Apple, but if we join and more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to create feedback:
1. App windows can only be moved forward to a distance of about 20 ft/6 m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to still be legible. This would be very interesting especially when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.
2. When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps open, so there should be no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, etc. Actual Vision Pro users might prefer loading their own Environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be watched on antique 2D computer screens, it may be useful to create 2D videos this way.
3. 3D video recording is not very good: kind of shaky, not when the Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple Camera app and wants to give developers a chance to create a better one, but it still would be nice to have something better out of the box.
4. I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected: when the user gets within 6 ft/2 m of one, show a warning first (there is already a "You are close to an object" message) and then make the surroundings visible. But if there are no obstacles (the user could be in a large space and place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.
5. We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.
Well, that's just the beginning and I could go on and on, but tell me what you guys think.
Regards and enjoy new virtual adventure!
Robert
Hi folks!
I have been working with a team on a Vision Pro app using Reality Composer Pro. One thing we have found is that multiple developers editing the RC Pro scene is a continuous problem, similar to when multiple developers edit a storyboard.
RC Pro maintains a SceneMetadataList.json file that indexes the file contents of the project and is updated even as the scene hierarchy is opened and closed, not to mention on other changes to scene content. We are getting frequent version control conflicts with this file as we each make changes and edits to the scene, or even just browse the scene without making any substantive changes.
It seems like it would be safe to add the SceneMetadataList.json file in an RC Pro project to .gitignore. Is that recommended? Any downsides to that?
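If it is safe, the rule we'd add is something like the following (the path is an assumption; adjust it to wherever the .realitycomposerpro project sits in your repo):

# .gitignore sketch: ignore RC Pro's scene index, assuming it is regenerated when the project is opened.
**/*.realitycomposerpro/SceneMetadataList.json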
Hello all -
I'm experiencing a shading error when I have two UnlitSurface shaders that use images for color and opacity. When the shaders are applied to two mesh planes, one placed in front of the other, the plane in front renders, but its mesh masks out what is behind it rather than letting it show through.
Basically - it looks like the opacity map on the shader in front is creating a 'mask'.
I've attached some images here to help explain.
Has anyone experienced this error? And how can I go about fixing this - thx!
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
I am very new to shaders, never used one of the large systems like Unity. However I have started exploring visionOS programming and that led me to create some effects for materials in Reality Composer Pro.
I have been overwhelmed with the possibilities, but also feel kind of lost. I understand that RCP's shaders are based on MaterialX, so maybe there are tutorials on the web that cover how to create procedural effects (fire, wind, water, etc.)? I've stumbled through, but it's slow going. Are there any good resources that talk about how to use the various nodes to create procedural effects?
For example, it took me a while to figure out that using the “time” node allows me to animate cool color changes, especially when combined with various math and remap nodes.
Just looking for some basic resources, I think. Would shader graph tutorials for Unity apply to using RCP? Are the node types similar enough?
Following this thread, I'm able to render a simple picture on a plane material; however, I'm unable to scale it to appear bigger than the window itself, or to move it behind the window.
Here's my relevant code so far:
var body: some View {
    ZStack {
        RealityView { content in
            var material = UnlitMaterial()
            material.color = try! .init(tint: .white,
                                        texture: .init(.load(named: "image",
                                                             in: nil)))

            let entity = Entity()
            let component = ModelComponent(
                mesh: .generatePlane(width: 1, height: 1),
                materials: [material]
            )
            entity.components.set(component)
            // The entity has to be added to the content to show up outside the canvas preview.
            content.add(entity)

            let currentTransform = entity.transform
            let newTransform = Transform(scale: currentTransform.scale,
                                         rotation: currentTransform.rotation,
                                         translation: SIMD3(0, 0, -0.2))
            entity.move(to: newTransform, relativeTo: nil)

            /*
            let scalingPivot = Entity()
            scalingPivot.position.y = entity.visualBounds(relativeTo: nil).center.y
            scalingPivot.addChild(entity)
            content.add(scalingPivot)
            scalingPivot.scale *= .init(x: 1, y: 1, z: 1)
            */
        }
    }
}
It belongs to an ImmersiveSpace I'm opening directly from my main window, but I have several issues:
The texture always shows in front of the window
I'm unable to scale it (scaling seems to affect the texture coordinates inside the material instead of scaling the mesh itself)
I can only see the texture in the canvas preview (not in simulator)
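For the scaling issue specifically, what I expected to work was scaling the entity itself (or generating a larger plane), not anything on the material. A rough sketch, reusing the entity from the code above:

// Sketch: scale the mesh itself instead of the material/texture.
entity.scale = SIMD3<Float>(repeating: 3)    // plane 3x larger
entity.position = SIMD3<Float>(0, 0, -1.5)   // push it further back, behind the window
// Alternatively, generate the plane at the desired size up front:
// mesh: .generatePlane(width: 3, height: 3)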
I'm developing a Vision Pro application. However, when the user takes off the Apple Vision Pro device, the application goes into the background. How can I prevent this behavior programmatically?
I'd like to map a SwiftUI view (in my case, a map) onto a 3D curved plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map by panning it around and selecting markers.
Is this possible?
Hi,
I'm working on a simple visionOS app and I'm testing on device.
For one part of the app, I load an object in and place it on the user's hand. If I use a primitive shape, like a sphere or cylinder, this works fine. However, now I'm trying to load an object from my RealityKitContent package. But every time I try this, I get an error message, resourceNotFound("Stone"), where "Stone" is one of my usda scenes.
This is what the guts of my function looks like that should return a ModelEntity:
do {
    let entity = try await ModelEntity(named: "Stone", in: realityKitContentBundle)
    entity.generateCollisionShapes(recursive: true)
    return entity
} catch {
    print("Error \(error)")
}
I can see "Stone" in my Xcode sidebar as part of the RealityKitContent package, and inside that scene there is a simple sphere, but alas I always get this in the Xcode console: "Error resourceNotFound("Stone")".
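One variant I haven't tried yet is loading the scene as a generic Entity rather than a ModelEntity, on the assumption that the scene root itself may not be a model:

// Sketch: load the whole "Stone" scene as an Entity, then find the model inside it.
let stoneScene = try await Entity(named: "Stone", in: realityKitContentBundle)
stoneScene.generateCollisionShapes(recursive: true)
// If a ModelEntity is needed, the sphere inside the scene could be looked up by name,
// e.g. stoneScene.findEntity(named: "Sphere") as? ModelEntity
// ("Sphere" stands in for whatever the node is actually called in the usda).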
I'm probably doing something pretty silly, hopefully it's obvious to someone else.
Thanks for the help.
Ian
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Hello World" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process, I encountered an unexpected error that halted my progress. The error message I received was [insert error message here].
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy Games
Hi, I tried to change the default size for a volumetric window, but it looks like this window has a maximum width value. Is that true?
WindowGroup(id: "id") {
    ItemToShow()
}
.windowStyle(.volumetric)
.defaultSize(width: 100, height: 0.8, depth: 0.3, in: .meters)
Here I set the width to 100 meters, but it still looks only about 2 meters wide.
How can I bind an MTLTexture to the Color input of a material?
I need something similar to VideoMaterial.
So I need to make a CustomMaterial.
But RealityKit's CustomMaterial is not available on visionOS; it has been replaced by ShaderGraphMaterial.
So how can I bind a Metal resource such as an MTLTexture to a ShaderGraphMaterial directly?
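My current (unverified) guess is that the texture has to go through a TextureResource, which can then be assigned to a named input on the ShaderGraphMaterial. Something like this, where "/Root/MyMaterial", "Scene.usda" and "ColorImage" are placeholder names from my own project:

// Sketch: bind a texture to a ShaderGraphMaterial input by name.
var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
let texture = try TextureResource.load(named: "placeholder")
try material.setParameter(name: "ColorImage", value: .textureResource(texture))
// Getting live MTLTexture content into a TextureResource is the part I'm unsure about;
// TextureResource.DrawableQueue looks like the intended route, but I haven't confirmed it.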
Hi all,
Up until a couple of days ago I was able to open and run Reality Composer Pro on my Intel-based Mac. I tried to open it again this morning and now receive the notification "Reality Composer is not supported on this Mac".
I understand that I will eventually need a new computer with Apple silicon, but it was nice to be able to start exploring Shader Graphs with my existing computer for now.
Any suggestions? Perhaps I should go back to an earlier Xcode beta; maybe the latest version disabled my ability to run RCP?
I'm running Version 15.1 beta (15C5042i) of Xcode on an Intel i7 MacBook Pro.
Thanks, in advance!