For years, the behaviours in the previous Reality Composer provided a way to trigger an action sequence (now called a timeline) when the user came close to an object.
I could not find an equivalent in the new Reality Composer Pro.
I am trying to follow the documentation for the new RealityKit LowLevelMesh construct (https://developer.apple.com/documentation/realitykit/lowlevelmesh) on the beta version of visionOS, using the example that draws a triangle. Although the code specifies a different color for each of the 3 vertices, the triangle renders in white.
I believe the missing link may be a shader graph material, but because I will be drawing millions of triangles, with colors defined at the nodes and interpolated over the area of each triangle, I want to make sure it is efficient, whether with shader graph materials or perhaps Metal.
In an earlier version of the app I'm working on, I successfully used a shader graph material with MeshDescriptor.primitives as polygons for tetrahedrons. However, that is inefficient for more than 1,000 tetrahedrons (and crashes), so I'm trying to use the new LowLevelMesh instead (with each tetrahedron split into 4 triangles). I can't get very far using the example code from the documentation (which produces the white triangle), and even trying to use the default shader graph material (GridMaterial) gives me quite a few error messages. I try to fix the errors with the suggested fixes and then get new ones (whack-a-mole) until it all seems to be broken...
So in addition to my general question of shader graph vs. Metal for a LowLevelMesh, a concrete example of using a shader graph material with LowLevelMesh would be most appreciated! Thanks.
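Here is the rough direction I'm considering, in case it makes the question more concrete. I don't know whether this is the right approach; the scene name and material path below are placeholders for whatever actually lives in the Reality Composer Pro package:

import RealityKit
import RealityKitContent

// Wrap the LowLevelMesh in a MeshResource and pair it with a material authored
// in Reality Composer Pro. "Scene.usda" and "/Root/VertexColorMaterial" are
// placeholder names, not part of the documentation example.
func makeTriangleEntity(from lowLevelMesh: LowLevelMesh) async throws -> ModelEntity {
    let meshResource = try await MeshResource(from: lowLevelMesh)
    let material = try await ShaderGraphMaterial(
        named: "/Root/VertexColorMaterial",
        from: "Scene.usda",
        in: realityKitContentBundle
    )
    return ModelEntity(mesh: meshResource, materials: [material])
}

What I can't figure out is how the shader graph material should be set up so it actually reads the per-vertex colors from the LowLevelMesh, and whether this stays efficient at millions of triangles.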
I can't see any tools or buttons for Timelines. In my interface there are just Project Browser, Shader Graph, Audio Mixer, and Statistics. Why? How do I enable or turn it on? Please help me, thanks.
The Object Capture feature in the Reality Composer app is only available on iOS and iPadOS at the moment. Will this feature be available for visionOS in the near future?
Reality Composer on the App Store:
https://apps.apple.com/us/app/reality-composer/id1462358802
I have read this thread about sending a notification to play animations in RCP.
If I now want to pause and come back later, or stop and reset the timeline, is there a way to do so?
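For context, this is the notification pattern I'm using to start the timeline; "StartTimeline" is just a placeholder for whatever the Notification trigger in my RCP scene is named. It's the pause/stop side I can't find:

import RealityKit

// Posts the notification that a Reality Composer Pro "Notification" trigger
// listens for. "StartTimeline" is a placeholder identifier, and rootEntity is
// the scene entity loaded from the RealityKitContent bundle.
func startTimeline(on rootEntity: Entity) {
    guard let scene = rootEntity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": "StartTimeline"
        ]
    )
}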
I have an Entity exported from Blender. After loading it in a RealityView, the "Body" and "Mesh" entities have no ModelComponent, but they do have Material Bindings references. How can I update their materials?
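In case it clarifies what I'm after, this is the kind of traversal I would expect to use once I find where the ModelComponent actually lives; I'm assuming it sits on some descendant entity rather than on "Body" or "Mesh" themselves:

import RealityKit

// Walks the hierarchy and swaps every material on every descendant that has a
// ModelComponent. `newMaterial` stands in for whatever material I want to apply.
func applyMaterial(_ newMaterial: any RealityKit.Material, to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = model.materials.map { _ in newMaterial }
        entity.components.set(model)
    }
    for child in entity.children {
        applyMaterial(newMaterial, to: child)
    }
}

But if the entities only carry Material Bindings and no ModelComponent at all, I don't know what the right way to update them is.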
Given that one can add custom components and expose them via RCP, how do I go about implementing my components/system so that when I change a parameter, the change is applied to the entity in the RCP viewport?
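For reference, this is the shape of the component I'm exposing. It's a public Codable Component living in the RealityKitContent package so RCP can show its parameters in the inspector; the name and property are just placeholders:

import RealityKit

// Placeholder component. Declaring it public, Codable, and with a public
// initializer in the RealityKitContent package lets Reality Composer Pro list
// it in the Add Component menu and edit `speed` in the inspector.
public struct SpinComponent: Component, Codable {
    public var speed: Float = 1.0
    public init() {}
}

What I don't understand is how (or whether) the System that consumes this component can run inside the RCP viewport so parameter edits are reflected there, rather than only at runtime in the app.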
I'm following the WWDC session on interactive 3D content in Reality Composer Pro and Apple's documentation:
https://developer.apple.com/wwdc24/10102
https://developer.apple.com/documentation/realitykit/implementing-systems-for-entities-in-a-scene#Retrieve-entities-with-an-entity-query
However, this simple code to declare a dummy Component and System has a compile error:
/Users/Workspaces/repository/Packages/RealityKitContent/Sources/RealityKitContent/RobotComponent.swift:18:24 Static property 'query' is not concurrency-safe because non-'Sendable' type 'EntityQuery' may have shared mutable state
import RealityKit

// A dummy component.
struct MyComponent: Component, Codable {}

// A dummy system that operates on entities with MyComponent.
class MySystem: System {
    // Define a query to return all entities with a MyComponent.
    private static let query = EntityQuery(where: .has(MyComponent.self))

    // Initializer is required. Use an empty implementation if there's no setup needed.
    required init(scene: Scene) { }

    // Iterate through all entities containing a MyComponent.
    func update(context: SceneUpdateContext) {
        for entity in context.entities(
            matching: Self.query,
            updatingSystemWhen: .rendering
        ) {
            // Make per-update changes to each entity here.
        }
    }
}
I'm using Xcode beta 3 and the project targets visionOS 2.
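The only workaround I've found so far (my own assumption, not an official fix) is to opt the stored query out of strict concurrency checking:

// Workaround I'm experimenting with: opt the stored query out of strict
// concurrency checking. This assumes EntityQuery is in practice safe to share
// here; it silences the diagnostic rather than proving safety.
nonisolated(unsafe) private static let query = EntityQuery(where: .has(MyComponent.self))

Alternatively, constructing the EntityQuery locally inside update(context:) avoids the shared static property altogether. I'd still like to know what the intended pattern is.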
On visionOS 2 beta 3, Reality Composer Pro will open a cached copy of a scene (for example, a .usdc file I just changed) on the first try. Closing it and re-opening it opens the correct version.
Am I doing something wrong?
Hey there,
I'm wondering if anyone knows how to make the shader graph shown in the WWDC24 video. I tried a couple of things but couldn't get the same result. I could make it in Unity and Blender, but not in RCP.
Thank you.
https://developer.apple.com/wwdc24/10106
I'm using Reality Composer Pro Version 2.0 (448.0.10.0.2), available in Xcode 16 beta 4.
When adding an animation from the Animation Library component on my armature to a timeline, the animation does not 'freeze' on the last frame.
Is there a way to 'freeze' the first or last frame when adding animations to the timeline? And how should I expect the first and last keys of my animations to behave relative to the default 'rest pose' in the imported USD file?
Hello everyone
I am looking to build a simple app for displaying a spatial video using the Quick Look preview API. I have been following this video, which is useful:
https://developer.apple.com/videos/play/wwdc2024/10166/
I am new to building apps in Xcode, and I could do with some advice on how to build the rest of the project mentioned in the above video. I was wondering if there is source code or a project example available anywhere for an app that uses the Quick Look preview API?
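This is as far as I've pieced together on my own, and I'm not sure it's the right direction. My understanding is that on visionOS the QuickLook framework's PreviewApplication can hand a video file to the system viewer; the resource name below is a placeholder:

import SwiftUI
import QuickLook

struct ContentView: View {
    var body: some View {
        Button("Preview spatial video") {
            // Placeholder resource name; replace with the app's own spatial video file.
            if let url = Bundle.main.url(forResource: "SpatialVideo", withExtension: "mov") {
                // Hands the file to the system Quick Look viewer.
                _ = PreviewApplication.open(urls: [url])
            }
        }
    }
}

A full sample project showing how the rest of the app from the session is put together would still be very helpful.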
When I run my visionOS app, RealityKitContent reports an error:
Tool terminated by signal 'Segmentation fault: 11'
It points to a USDZ model I imported, but the model displays normally in the scene and doesn't appear damaged. Why does this error occur, and how can I check and repair the model?
Hello! I’ve got a USDZ export from a Maya pipeline working with animation, and it loads up nicely on the Vision Pro.
I’ve been checking out the animated sample files on the Augmented Reality/Quick Look sample page, specifically the first three at the top of the page.
I would like to know how they are created. I’m a 3D modeler and animator, not a programmer, so I'm dipping my toe into RCP and Xcode/SwiftUI, but I could use some informative tutorials on proper workflow. For example, in the Lunar Rover sample, there are lines emanating from the model, and then text windows appear. Would I need to create all these extras inside Reality Composer Pro? I’d like to start creating immersive, narrative experiences (both in a volume and fully immersive), but for prototyping I want to learn the proper way to add this type of functionality. I think I remember seeing something about “schemas” being involved. I’m assuming there might be some coding to set up in RCP for when items are selected, so that an associated animation is triggered. Can anyone point me towards the relevant documentation to help me get started? Remember, I don’t code. ;)
Here are my recent Vision Pro experimentations.
https://youtube.com/playlist?list=PLCH753rZ9r6eqXxpIemaSlcyYxjFgR210&si=P_7AY2aL97Upm61i
I’m also proficient with Unreal Engine, but getting content packaged and over to AVP is still not ready for prime time, so I’m exploring the native approach.
Thanks for helping point me in the right direction!
Hello. I am a designer developing a Vision Pro app, and I have run into two problems in my development process.
I am trying to import free 3D national heritage content from Korea into Reality Composer Pro and place it in the app's internal space. However, there is an issue where the textures are not being imported correctly.
(Screenshots attached: one in Reality Composer Pro, one in the Simulator.)
In Reality Composer Pro, the textures are displayed correctly, but when I run the app on the Simulator in Xcode, the textures appear white and are not displayed properly. The content I imported is an .obj file, and I applied all the textures in jpg format using Reality Converter and exported it as a .usdz file, but the same issue persists.
I checked to see if the problem only occurs on the Simulator, but the same issue occurs on the Vision Pro device as well. How can I resolve this problem?
The following error code appears in Xcode, and the simulator does not run. I think it might be due to the size of the object added to the scene, so I tried compressing it with Reality Converter, but the issue still persists. Is there any other way to resolve this?
[MTLDebugDevice newBufferWithBytesNoCopy:length:options:deallocator:]:700: failed assertion Buffer Validation
newBufferWith*:length 0x280cc000 must not exceed 256 MB.
I'm baffled that the new ExtractBits shader graph node only supports String input. Is this a bug? I'm trying to extract an integer from a float value but have no idea how to pass it into Extract Bits; the Convert nodes don't support number-to-string.
How should I set up a WindowGroup window to resemble a curved screen style?
How do I solve this problem: in a Unity project, I use Model3D to load a local model file, and after clicking a NavigationLink multiple times to load the local model file, I receive the prompt "assertion failure: 'stagingBuffer.buffer.isValid()' (createMetalBuffer:line 2971) Failed to create staging buffer for texture upload"?
Hello all,
I'm developing an application for visionOS and I'm trying to implement two different animations:
First animation
Initially, I have a map that should not be visible. I would like to create an animation effect where it appears as if a drop of water falls in the center of the map and the expanding waves gradually reveal the entire map.
Is there a way to do it directly in SwiftUI, or do I need an animation on my USDZ?
Second animation
I want an animation effect similar to a cinema screen opening from the center, gradually revealing a video that was initially hidden.
Is there a way to do it directly in SwiftUI?
Can someone help me with this topic?
Thanks ;)
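To make the second question more concrete, this is the kind of SwiftUI-only approach I had in mind, a rough sketch rather than a working solution: animate a mask over a VideoPlayer so it grows outward from the center like a cinema screen (the video file name is a placeholder). A similar idea with a growing Circle mask over the map view might approximate the water-drop reveal in the first animation.

import SwiftUI
import AVKit

struct CurtainRevealView: View {
    @State private var revealed = false

    // Placeholder asset name; replace with the real video in the app bundle.
    private let player = AVPlayer(url: Bundle.main.url(forResource: "Film", withExtension: "mp4")!)

    var body: some View {
        VideoPlayer(player: player)
            .mask {
                GeometryReader { proxy in
                    Rectangle()
                        // Animates from zero width to the full width of the video.
                        .frame(width: revealed ? proxy.size.width : 0)
                        // Keeps the growing rectangle centered, so the reveal
                        // expands outward from the middle like a cinema screen.
                        .frame(maxWidth: .infinity, maxHeight: .infinity)
                }
            }
            .onAppear {
                player.play()
                withAnimation(.easeInOut(duration: 2)) {
                    revealed = true
                }
            }
    }
}

Is something along these lines reasonable, or is this the kind of effect that really belongs in the USDZ/RealityKit side?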