I would like to add text to a Reality Composer Pro scene and set the actual text via code. How can I achieve this? I haven't seen any "Text" element in the editor.
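The closest workaround I can think of is generating the text mesh in code and attaching it to a named placeholder entity from the scene; something like the sketch below, where "TextAnchor" is just a hypothetical placeholder name I'd author in Reality Composer Pro:

    import RealityKit

    // Rough sketch: generate the text mesh in code and attach it to a placeholder
    // entity authored in Reality Composer Pro. "TextAnchor" is a hypothetical name.
    func makeTextEntity(_ string: String) -> ModelEntity {
        let mesh = MeshResource.generateText(
            string,
            extrusionDepth: 0.005,
            font: .systemFont(ofSize: 0.1),
            containerFrame: .zero,
            alignment: .center,
            lineBreakMode: .byWordWrapping
        )
        let material = SimpleMaterial(color: .white, isMetallic: false)
        return ModelEntity(mesh: mesh, materials: [material])
    }

    // After loading the scene:
    // scene.findEntity(named: "TextAnchor")?.addChild(makeTextEntity("Hello, world"))

Is there a more direct way to do this from the editor itself?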
I see example code converting the results of a SpatialTap to a SIMD3 location. For example, from WWDC session Meet ARKit for spatial computing:
let location3D = value.convert(value.location3D, from: .global, to: .scene)
What I really want is a simd_float4x4 that includes the orientation of the surface that the tap gesture/ray cast collided with.
My goal is to place an object with its Y-axis along the normal of the surface that was tapped.
For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., the closest triangle approximating it).
Is this possible?
My planned fallback is to only use planes for collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information.
I am hoping Apple has a simple function call that is more general than my fallback approach.
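Here is a rough sketch of that fallback inside my SwiftUI view, in case it clarifies what I mean (`placedObject` is just a placeholder name for the entity I want to position):

    // Attach to the RealityView, e.g. RealityView { ... }.gesture(tapToPlace).
    var tapToPlace: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                // Tap position converted into RealityKit scene space.
                let location = value.convert(value.location3D, from: .global, to: .scene)
                // Orientation taken from the tapped plane entity's world transform,
                // so the placed object's Y-axis follows the plane's normal.
                let surfaceOrientation = value.entity.orientation(relativeTo: nil)

                placedObject.position = location
                placedObject.orientation = surfaceOrientation
            }
    }

This only works if the tapped entity really is a flat plane, which is exactly why I'd prefer something more general.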
I'm trying to make a simple demo of using ShaderGraphMaterial in a USDZ file that I can preview on Mac and visionOS, but I'm having trouble.
In Reality Composer, I make a sphere, then assign a ShaderGraphMaterial to the material, with a simple diffuse color (green) input. When I save the file as .usda, it displays as a gray sphere on Mac rather than the green sphere shown in Reality Composer. If I then convert to .usdz using Reality Converter, I get a warning on import:
"Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source."
And the exported .usdz also shows as a gray sphere.
Is there a simple demo of a .usda file using ShaderGraphMaterial that displays on Mac, iOS, and visionOS that I can look at to see how it is structured internally?
My actual problem is creating .usdz / .usda files on visionOS for viewing on iOS / Mac / visionOS, but the first step is showing it's possible to use ShaderGraphMaterial across all platforms at all.
Thanks
I'm following the Meet Reality Composer Pro walkthrough and ran into something that didn't function as expected.
When I got to the step where I add five "Bird_With_Audio.usda" references to the scene, I found they did not play audio. After some trial and error, I found that Preview > Resource in each of their Spatial Audio items was set to "None." If I click the dropdown menu, I see several "Bird_Calls" groups to pick from.
I checked the original Bird_With_Audio.usda that I had created, and the "Bird_Calls" audio group was correctly assigned and worked. I tried dragging a sixth Bird_With_Audio into the scene and confirmed that the Spatial Audio item suddenly empties, rendering the bird silent.
I was able to go through each of the five birds and set their Spatial Audio Resource to Bird_Calls, and the group worked like the video demonstrates.
While this fixed the issue, as a beginner I'd like to know why it happened. It doesn't seem right that I would build an item and then have to re-attach its sounds when I place it in the main scene. So… where did I mess up?
Hello. I'm developing an app using ARKit and RealityKit. The purpose of the app is to scan an apartment and place furniture next to the walls. It works well, but if the AR session runs for more than about 3 minutes, the app crashes. According to the crash report, it isn't related to my code. I'm attaching the crash report (company data is hidden). Any help is appreciated. Thanks in advance.
Hi,
I'm trying to have an entity (and some attachments to it) rotate.
If I add the entity to content, add the attachment as a child entity, and give the entity an InputTargetComponent, then when I add a gesture ONLY the entity rotates and NOT the attachments (added as child entities).
If I add a parent entity with let parentEntity = ModelEntity(), add my entity to the parentEntity, then add the attachments to my entity (which is now a child of the ModelEntity), and give the ModelEntity the InputTargetComponent, then the whole thing rotates (including attachments).
I'm sure there must be a bug; why would it only work with an added ModelEntity?
Anyway, bug or not, the problem I have now is that it rotates around the axes of the ModelEntity, not around my primary entity, which is what I want.
Is there a way to make the ModelEntity's axes match the axes of my primary child entity so it rotates the way I want?
What call should I use to move the axes, and where would I find the axes of the first child entity, which should be the focus of my app?
Here is my code:
var body: some View {
    RealityView { content, attachments in
        // Add the initial RealityKit content
        if let specimenentity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
            let parentEntity = ModelEntity()
            parentEntity.addChild(specimenentity)
            content.add(parentEntity)

            let entityBounds = specimenentity.visualBounds(relativeTo: parentEntity)
            parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
            parentEntity.generateCollisionShapes(recursive: true)
            parentEntity.components.set(InputTargetComponent())

            if let Left_Hemisphere = attachments.entity(for: "Left_Hemisphere") {
                // 4. Position the Attachment and add it to the RealityViewContent
                Left_Hemisphere.position = [-0.5, 1, 0]
                specimenentity.addChild(Left_Hemisphere)
            }
        }
    } attachments: {
        Attachment(id: "Left_Hemisphere") {
            // 2. Define the SwiftUI View
            Text("Left_Hemisphere")
                .font(.extraLargeTitle)
                .padding()
                .glassBackgroundEffect()
        }
    }
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let entity = value.entity

                var orientation = Rotation3D(entity.orientation(relativeTo: nil))
                var newOrientation: Rotation3D

                if value.location.x >= lastGestureValueX {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueX = value.location.x

                orientation = Rotation3D(entity.orientation(relativeTo: nil))
                if value.location.y >= lastGestureValueY {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .x))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .x))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueY = value.location.y
            }
    )
}
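The closest thing I've come up with conceptually is re-centering the child on the parent's origin right after adding it, roughly like this (assuming that rotating about the child's visual bounds center is what I want):

    // Right after parentEntity.addChild(specimenentity), shift the child so its
    // visual center sits on the parent's origin; the drag then spins the model
    // around its own center instead of the wrapper's origin.
    let bounds = specimenentity.visualBounds(relativeTo: parentEntity)
    specimenentity.position -= bounds.center

If I do that, the collision box would presumably also need to be generated around the re-centered bounds. Is that the intended approach, or is there a proper pivot API I'm missing?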
Is there any way to detect if an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
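For reference, the highlight itself only takes a few components, roughly like the sketch below (with entity being the model to highlight); what's missing is any callback when it fires:

    import RealityKit

    // The system draws the gaze highlight itself; there's no delegate or closure to hook.
    func makeGazeHighlightable(_ entity: Entity) {
        entity.components.set(InputTargetComponent())
        entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
        entity.components.set(HoverEffectComponent())
    }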
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
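If it helps, this is roughly how I imagine the per-face bytes would be read out of that GeometrySource; the mapping from the raw byte to MeshAnchor.MeshClassification is just my assumption, which is exactly the part I'm unsure about:

    import ARKit

    // Assumption: each face's classification is one byte in the classifications
    // GeometrySource's Metal buffer, indexed by face, mapping onto MeshClassification.
    func classification(ofFace faceIndex: Int,
                        in geometry: MeshAnchor.Geometry) -> MeshAnchor.MeshClassification {
        guard let classifications = geometry.classifications else { return .none }
        let address = classifications.buffer.contents()
            .advanced(by: classifications.offset + classifications.stride * faceIndex)
        let rawValue = Int(address.assumingMemoryBound(to: UInt8.self).pointee)
        return MeshAnchor.MeshClassification(rawValue: rawValue) ?? .none
    }

Is something along these lines the intended way to get at the classifications?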
Hi!
Currently showing the Apple Vision Pro to my clients. Sharing the screen is challenging with Guest Mode on the AVP. You can share the screen, but you need to tether the AVP and MBP to an iPhone and then you can AirPlay. A lot of big corporate Wi-Fi networks may not allow AirPlay, so tethering is a better option.
Does anyone know if Apple is making their AVP demo app available to developers?
This would be super helpful for showing off the AVP's capabilities.
Thanks!
JB
I currently have an iOS app that transmits an H.264 stream over Wi-Fi, decodes it with VideoToolbox, and displays it with MTKView, and I want to implement similar functionality in visionOS. What should I do? MTKView is not available on visionOS.
It slows down the device and screws with user interaction -- which makes the ridiculous one-minute capture time exponentially worse.
I was heavily reliant on ARGeoAnchor in my iOS application, and when I started porting the app to visionOS I found there is no equivalent there. That is a huge bummer and a showstopper for launching on Apple Vision Pro.
Is there any technical limitation that prevented porting this great piece of functionality? Can we expect it to be added in future visionOS releases?
I've recently been working on a visionOS app which uses Core ML to identify specific body parts and display a window with information about the identified body part. Since access to the Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset using Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could my iPhone return enough spatial information for me to fully take advantage of the Vision Pro's mixed reality capabilities?
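For context, the label transfer itself is the easy part; roughly like this, assuming session is an already-connected MCSession (the spatial-information question is the part I'm stuck on, since the label alone carries no 3D data):

    import MultipeerConnectivity

    // Send the classification label from the iPhone to the headset.
    // `session` is assumed to be an MCSession that is already connected.
    func send(label: String, over session: MCSession) {
        guard !session.connectedPeers.isEmpty,
              let data = label.data(using: .utf8) else { return }
        try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
    }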
Hello Community,
I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible.
Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part.
Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue.
Thank you in advance for any assistance you can provide.
Best regards
Hi everyone, is it possible to use a 3D USDZ file to train a model in Create ML? I see there is an image option, but it would be good to use these files from Reality Composer / Object Capture. Or is this in the works for forthcoming Xcode updates? Many thanks, Stuart
Hello there,
Do you know what happens if I read one of the following but the joint is not tracked?
var anchorFromJointTransform: simd_float4x4
The position and orientation of this joint relative to the base joint of the skeleton.
var parentFromJointTransform: simd_float4x4
The transform from the joint to its parent joint’s coordinate system.
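Right now I'm just guarding on isTracked and skipping untracked joints, roughly like the sketch below (latestHandAnchor is a placeholder for however I store the most recent HandAnchor), but I'd still like to know what the untracked transform actually contains:

    import ARKit
    import simd

    // Only read the transforms when both the hand anchor and the joint report
    // isTracked == true, instead of relying on whatever the untracked value holds.
    func indexTipWorldTransform(from latestHandAnchor: HandAnchor?) -> simd_float4x4? {
        guard let hand = latestHandAnchor, hand.isTracked,
              let tip = hand.handSkeleton?.joint(.indexFingerTip),
              tip.isTracked else { return nil }
        // World transform = anchor's world transform * joint's anchor-relative transform.
        return hand.originFromAnchorTransform * tip.anchorFromJointTransform
    }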
Hi,
We are using the "Scanning objects using Object Capture" sample app provided by Apple. It was working fine, but suddenly it started crashing while scanning an object with the bounding box settings.
We are getting the device log, but it shows different reasons for the crash.
Device: iPad Pro / iPhone 15 Pro
iOS: 17.4
Attaching the device log for the crash; waiting for your response.
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Incident Identifier: 0797ADAF-D653-4C92-8AA0-300AA167002B
CrashReporter Key: 48724a3b30ef15e069f513afff7d1aa2e935a520
Hardware Model: iPhone16,1
Process: GuidedCapture [918]
Path: /private/var/containers/Bundle/Application/ACCE6C58-98F0-4DD7-AA6E-732190E0FD30/GuidedCapture.app/GuidedCapture
Identifier: com.example.apple-samplecode.GuidedCaptureH28X75MLUY
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: com.example.apple-samplecode.GuidedCaptureH28X75MLUY [743]
Date/Time: 2024-03-26 18:23:07.8269 +0530
Launch Time: 2024-03-26 18:20:02.5964 +0530
OS Version: iPhone OS 17.4.1 (21E236)
Release Type: User
Baseband Version: 1.55.04
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000022e6577a4
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [918]
Triggered by Thread: 33
Thread 33 name: Dispatch queue: com.apple.coreoc.queues.serial.session
Thread 33 Crashed:
0 CoreOC 0x22e6577a4 0x22e657400 + 932
1 CoreOC 0x22e657588 0x22e657401 + 391
2 CoreOC 0x22e697628 0x22e697221 + 1031
3 CoreOC 0x22e684864 0x22e684431 + 1075
4 CoreOC 0x22e6831ec 0x22e6828d5 + 2327
5 CoreOC 0x22e6c1fb4 0x22e6c1f5d + 87
6 CoreOC 0x22e5ef128 0x22e5ef105 + 35
7 libdispatch.dylib 0x19291113c _dispatch_call_block_and_release + 31
8 libdispatch.dylib 0x192912dd4 _dispatch_client_callout + 19
9 libdispatch.dylib 0x19291a400 _dispatch_lane_serial_drain + 747
10 libdispatch.dylib 0x19291af30 _dispatch_lane_invoke + 379
11 libdispatch.dylib 0x192925cb4 _dispatch_root_queue_drain_deferred_wlh + 287
12 libdispatch.dylib 0x192925528 _dispatch_workloop_worker_thread + 403
13 libsystem_pthread.dylib 0x1e69f8f20 _pthread_wqthread + 287
14 libsystem_pthread.dylib 0x1e69f8fc0 start_wqthread + 7
I'm developing a motion-tracking app that requires a real-time view from an iPhone camera to capture a person's body. The motion is mapped to a virtual body, which currently appears overlaid on the person the iPhone sees.
However, I want to transmit this real-time 3D virtual body to a different Apple device, as an AR app, so the other user can place it in their environment.
Any suggestions on how I can get this 3D model to be viewable by another user (and keep it updating live based on the motion tracking)?
I have a RealityKit-based app in TestFlight, and I have seen the following crash happen twice.
It appears to be coming from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?
Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)