Session 10166 goes into wonderful detail about the construction of spatial photos, and the various parameters that define the relationship between left and right images. The session provides everything I need to know to combine a left and right frame, and create a spatial image output.
But I'd like to do a live preview on the Vision Pro: change the baseline and see what it looks like, adjust the horizontal disparity/convergence and watch the image move back and forth.
Cropping and vertical alignment would be easy to implement live. Horizontal disparity and baseline length? I'm baffled.
How would I create a Shader Graph to let me make these adjustments using sliders or similar affordances, and pipe the results to a Camera Index Switch? I already have a working stereography app, but the stereo parameters are not interactive at all.
I could regenerate the spatial image after each change and refresh the display, but that is awfully clunky. What's a better way?
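One direction that might avoid regenerating the image each time: if the adjustable values are promoted as material inputs in Reality Composer Pro (upstream of the Camera Index Switch), they can be driven from SwiftUI at runtime with ShaderGraphMaterial.setParameter. Below is a minimal sketch, where the scene name "StereoPreview", the entity name "PreviewPlane", and the input name "Disparity" are all placeholders:

import SwiftUI
import RealityKit
import RealityKitContent

struct StereoPreviewControls: View {
    // Value for a promoted shader-graph input; the range is an arbitrary placeholder.
    @State private var disparity: Float = 0
    @State private var previewPlane: ModelEntity?

    var body: some View {
        RealityView { content in
            // "StereoPreview" / "PreviewPlane" are placeholder names for the
            // Reality Composer Pro scene and the mesh carrying the material.
            if let scene = try? await Entity(named: "StereoPreview", in: realityKitContentBundle) {
                previewPlane = scene.findEntity(named: "PreviewPlane") as? ModelEntity
                content.add(scene)
            }
        }
        .overlay(alignment: .bottom) {
            Slider(value: $disparity, in: -0.05...0.05)
                .padding()
        }
        .onChange(of: disparity) { _, newValue in
            // Push the new value into the promoted "Disparity" input without
            // regenerating the spatial image.
            guard let plane = previewPlane,
                  var material = plane.model?.materials.first as? ShaderGraphMaterial else { return }
            try? material.setParameter(name: "Disparity", value: .float(newValue))
            plane.model?.materials = [material]
        }
    }
}

Because only a material parameter changes, the preview should update immediately without rebuilding the spatial image; the same pattern would apply to a baseline or convergence input.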
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
Hi everyone, is it possible to use a 3D USDZ file to train a model in Create ML? I see there is an image option, but it would be good to use the files produced by Reality Composer's Object Capture. Or is this in the works for forthcoming Xcode updates? Many thanks, Stuart
With WWDC 24, I was excited to see that Apple is bringing their APIs from visionOS to iOS.
I tried using the Object Anchoring component in Reality Composer Pro. While this works on a Vision Pro, it looks like the entity spawns at the origin if we run the same on iOS, and the object anchoring doesn't seem to work.
Is this intended? Below is how I'm doing this.
I added an Anchoring component with the .referenceObject file I trained using Create ML.
This is the code I'm using to load this scene in.
//  GrootView.swift
//  ARTest-New
//
//  Created by Sravan Karuturi on 6/10/24.
//

import SwiftUI
import RealityKit
import Box

struct GrootView: View {

    @StateObject private var grootVM = GrootViewModel()
    @State private var ent: Entity? = nil
    @State var anchor: Entity? = nil
    @State var wallAnchor: Entity? = nil
    @State var floorAnchor: Entity? = nil

    var body: some View {
        RealityView { content in
            #if os(iOS)
            await content.setupWorldTracking()
            content.camera = .worldTracking
            #endif

            // Load the Reality Composer Pro scene that contains the anchors.
            ent = try? await Entity(named: "Box", in: boxBundle)
            print(ent?.children)

            anchor = ent?.findEntity(named: "ObjectAnchor")
            wallAnchor = ent?.findEntity(named: "WallAnchor")
            floorAnchor = ent?.findEntity(named: "FloorAnchor")

            // Check the anchoring state on every frame.
            let updateSum = content.subscribe(to: SceneEvents.Update.self) { event in
                if let anc = anchor, anc.isAnchored {
                    print("Found Item")
                }
                if let anc = floorAnchor, anc.isAnchored {
                    print("Found Floor")
                }
                if let anc = wallAnchor, anc.isAnchored {
                    print("Wall Anchor")
                }
            }

            content.add(ent!)
        }
    }
}

#Preview {
    GrootView()
}
While something similar seems to work on visionOS, the same doesn't seem to work on iOS.
When I run this app, we see all the children, and "Found Item" is printed constantly even when we don't have the item in the scene.
I'm not really sure if this is just not supported yet on iOS (I really hope that's not the case) or if I messed up something somehow.
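For what it's worth, one way to narrow this down might be to listen for anchoring-state changes instead of checking isAnchored every frame. A small sketch, assuming SceneEvents.AnchoredStateChanged is delivered the same way on iOS as on visionOS; it would replace the Update subscription inside the RealityView closure above:

// Inside the RealityView make closure, instead of the per-frame Update check:
let anchorSub = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
    // Fires when an anchoring entity gains or loses its anchor.
    if event.anchor === anchor {
        print("Object anchor is now \(event.isAnchored ? "anchored" : "unanchored")")
    }
}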
Hey guys! I have a question about my project. I want my 3D character with a PBR shader to only receive IBL from my HDRI map and not receive any lighting from the surrounding environment when viewed on Apple Vision Pro. Any tips?
Thank you in advance!
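A minimal sketch of one way this is commonly set up in RealityKit, assuming the HDRI is available as an EnvironmentResource named "MyHDRI" (a placeholder); adding an ImageBasedLightReceiverComponent should make the character take its lighting from that IBL rather than from the default scene lighting:

import RealityKit

// Light the character with an explicit IBL resource.
// "MyHDRI" is a placeholder for your own environment asset.
func applyCustomIBL(to character: Entity) async throws {
    let environment = try await EnvironmentResource(named: "MyHDRI")

    // An entity that carries the image-based light.
    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    character.addChild(iblEntity)

    // The character (and its descendants) receive light from that IBL.
    character.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}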
I've got a couple of 2D PNG assets that I want to add to a scene made of a couple of other usdz files in RCP (picture adding a couple of 2D videogame characters to a simple 3D diorama).
When I try to drag the PNGs to the workspace or the file tree…nothing happens.
I found a walkthrough on Medium (called "Importing and Exporting Personalized Objects for Augmented Reality: Reality Composer and SwiftUI" for those curious as I can't link to Medium posts here) that makes it look like users could do this with simple drag-and-drop. The Medium post is from June 2023, and in the screenshots RCP visually looks a lot more like Reality Composer on iPad, so I'm assuming it's changed a lot since then?
Is there still a way to do this? I've tried adding the 2D elements to a scene with Blender's "import images as planes," but I'm getting weird halos around them and was hoping RCP could make the process a bit easier/cleaner.
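If drag-and-drop doesn't cooperate, here is a rough RealityKit sketch of doing the same thing in code: build a plane with an unlit, alpha-blended material from the PNG. The asset name and the plane size are placeholders:

import RealityKit

// Creates a billboard-style plane textured with a PNG that has an alpha channel.
// "character" is a placeholder asset name in the app bundle.
func makeSpritePlane() async throws -> ModelEntity {
    let texture = try await TextureResource(named: "character")

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    material.blending = .transparent(opacity: .init(scale: 1.0)) // respect the PNG's alpha

    let aspect = Float(texture.width) / Float(texture.height)
    let mesh = MeshResource.generatePlane(width: 0.2 * aspect, height: 0.2)
    return ModelEntity(mesh: mesh, materials: [material])
}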
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: all pixels that are not pitch black are rendered fully white. Is there a way to change this behavior?
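A speculative sketch, in case it helps: what's described sounds like an additive or opaque blend mode, and the emitter's blend mode and particle image can, as far as I know, also be set in code. The property names below (mainEmitter.image, blendMode) are my assumption based on the Reality Composer Pro particle inspector, so please verify them against the current RealityKit docs:

import RealityKit

// Assumed property names; "spark" is a placeholder texture in the app bundle.
func makeSoftParticles() async throws -> Entity {
    let entity = Entity()
    var particles = ParticleEmitterComponent()

    // Use the PNG (with alpha) as the particle image and alpha blending,
    // rather than an additive/opaque mode that can wash pixels out to white.
    particles.mainEmitter.image = try await TextureResource(named: "spark")
    particles.mainEmitter.blendMode = .alpha

    entity.components.set(particles)
    return entity
}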
Hi,
Is there a way to access the shader graph node values from Xcode and change them at runtime, for example based on user input? Or is this not possible yet?
Thanks!
Myoung
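It should be possible for inputs that are promoted/exposed on the material in Reality Composer Pro. A small sketch, where "IntensityInput" and the entity lookup are placeholders:

import RealityKit

// Update a promoted shader-graph input on the entity's first material.
func update(intensity: Float, on entity: Entity) {
    guard var model = entity.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial else { return }

    try? material.setParameter(name: "IntensityInput", value: .float(intensity))
    model.materials = [material]
    entity.components.set(model)
}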
I created an app for visionOS, using Reality Composer Pro. Now I want to turn this app into a multi-platform app for iOS as well.
RCP files are not supported on iOS, however. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 does not include it anymore, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma.
That's a bummer. What is the recommended way to include 3D content in apps that support visionOS AND iOS?!
(I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, using code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
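A minimal sketch of that USDZ-based workflow, under the assumption that the 3D content ships as plain .usdz files in the app bundle ("Rocket" is a placeholder) and the anchors are created in code. Note that RealityView on the iOS side requires iOS 18:

import SwiftUI
import RealityKit

struct PlacedModelView: View {
    var body: some View {
        RealityView { content in
            #if os(iOS)
            // Use the AR camera on iOS; visionOS has no camera property here.
            content.camera = .worldTracking
            #endif
            if let model = try? await Entity(named: "Rocket") {
                // Anchor defined in code rather than in a composing tool.
                let anchor = AnchorEntity(.plane(.horizontal,
                                                 classification: .any,
                                                 minimumBounds: [0.2, 0.2]))
                anchor.addChild(model)
                content.add(anchor)
            }
        }
    }
}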
I made an animation in Blender using geometry nodes and exported it to a USDC file (then I used Reality Converter to convert it to USDZ). I can see the animation when previewing in the Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode?
Thanks!
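From Xcode you can at least inspect and play whatever animation clips survived the export. A sketch, assuming the USDZ is in the app bundle as "robot" (a placeholder); if availableAnimations comes back empty, the geometry-node animation likely needs to be baked to transforms or a skeleton before export:

import RealityKit

// Load a USDZ and play its first animation clip, if any.
func loadAndAnimate() async throws -> Entity {
    let entity = try await Entity(named: "robot")

    // See whether any clips made it through the export.
    print("Animations found: \(entity.availableAnimations.count)")

    if let clip = entity.availableAnimations.first {
        entity.playAnimation(clip.repeat(), transitionDuration: 0.3, startsPaused: false)
    }
    return entity
}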
I'm taking my iOS/iPadOS app and converting it so it runs on visionOS. I'm trying to compile and build my app for both visionOS and iOS. When I try to build for an iPhone or iPad simulator, I get the following error:
 Building for 'iphonesimulator', but realitytool only supports [xros, xrsimulator]
I'm thinking I might need an #if conditional-compilation statement for visionOS so the iOS build doesn't try to compile those lines of code, but for this particular error I can't figure out which file or code needs the conditional compilation. Anyone know how to get rid of this error?
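For the Swift-source side mentioned above, here is a sketch of the kind of #if guard that keeps iOS from compiling the Reality Composer Pro references ("Scene" is a placeholder). As far as I can tell, though, the realitytool error itself comes from the RCP content package being processed for the iOS target, so that package's target membership or supported platforms also needs attention:

import SwiftUI
import RealityKit
#if os(visionOS)
import RealityKitContent
#endif

struct CrossPlatformView: View {
    var body: some View {
        RealityView { content in
            #if os(visionOS)
            // visionOS: load the Reality Composer Pro scene.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
            #else
            // iOS: fall back to a plain USDZ in the app bundle.
            if let model = try? await Entity(named: "Scene") {
                content.add(model)
            }
            #endif
        }
    }
}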
As the title says: I can find the video captures on the desktop, but I cannot find where the screenshots are stored, even when it says the screenshot succeeded.
I am referencing this: https://developer.apple.com/documentation/visionos/capturing-screenshots-and-video-from-your-apple-vision-pro-for-2d-viewing
I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Do we need to do something specific to use this node?
I tried to use the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align.
Any help appreciated
I'd like to convert a FileMaker 18 runtime to a Mac (Catalina) application package.
I found a YouTube video that describes how to convert a shell script into a macOS app.
This is the shell script:
I’ve had no luck adapting it to convert my Filemaker runtime into an app.
Thanks in advance for any advice.
Regards,
Lara
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure if it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether this cm setting is affected by the entity's size at all? In Unity, it's just (local) "units".
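One manual way to eyeball this at runtime, assuming a sphere collision shape whose radius you already know: add a translucent unlit sphere of the same size as a child of the entity. A small sketch (the function and names are hypothetical):

import RealityKit
import UIKit

// Attach a see-through sphere that mirrors the collision radius you configured.
func addCollisionDebugSphere(to entity: Entity, radius: Float) {
    var material = UnlitMaterial(color: .red)
    material.blending = .transparent(opacity: .init(scale: 0.3))

    let debugSphere = ModelEntity(mesh: .generateSphere(radius: radius),
                                  materials: [material])
    debugSphere.name = "CollisionDebug"
    entity.addChild(debugSphere)
}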
I want to lay out a collection view along a curve, with fixed paging, on Vision Pro. How can I do that?
Through testing, I have been able to get 5.1 and 7.1 Dolby Atmos files created in Logic Pro to work in Reality Composer Pro and then in Vision Pro.
However, 5.1.4 and 7.1.4 files crash when added. Can someone confirm that these are not supported?
Is it possible to use an image sequence, .mov or sprite sheet as a node source for a custom material in Reality Composer Pro?
I have noticed that in the particle emitter, the magic preset uses a 4x4 sprite sheet as a particle source. Can this be done within the shader graph for the diffuse or normal slot?
Hi,
I'm trying to get an entity (and some attachments to it) to rotate.
If I add the entity to content, add the attachment as a child entity, and give the entity an InputTargetComponent, then when I add a gesture ONLY the entity rotates and NOT the attachments (added as child entities).
If I add a parent entity with let parentEntity = ModelEntity(), add my entity to the parentEntity, then add the attachments to an entity (which is now a child of the ModelEntity) and give the ModelEntity the InputTargetComponent, then the whole thing rotates (including attachments).
I'm sure there must be a bug; why would it work only with an added ModelEntity?
Anyway, bug or not, the problem I have now is that it rotates around the axes of the ModelEntity, not around my primary entity, which is what I want.
Is there a way to make the ModelEntity's axes coincide with the axes of my primary child entity so it rotates the way I want?
What call should I use to move the axes, and where would I find the axes of the first child entity, which should be the focus of my app?
Here is my code:
// Last drag locations, used to infer the drag direction.
@State private var lastGestureValueX: CGFloat = 0
@State private var lastGestureValueY: CGFloat = 0

var body: some View {
    RealityView { content, attachments in
        // Add the initial RealityKit content
        if let specimenentity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
            let parentEntity = ModelEntity()
            parentEntity.addChild(specimenentity)
            content.add(parentEntity)

            let entityBounds = specimenentity.visualBounds(relativeTo: parentEntity)
            parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
            parentEntity.generateCollisionShapes(recursive: true)
            parentEntity.components.set(InputTargetComponent())

            if let Left_Hemisphere = attachments.entity(for: "Left_Hemisphere") {
                // 4. Position the Attachment and add it to the RealityViewContent
                Left_Hemisphere.position = [-0.5, 1, 0]
                specimenentity.addChild(Left_Hemisphere)
            }
        }
    } attachments: {
        Attachment(id: "Left_Hemisphere") {
            // 2. Define the SwiftUI View
            Text("Left_Hemisphere")
                .font(.extraLargeTitle)
                .padding()
                .glassBackgroundEffect()
        }
    }
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let entity = value.entity
                var orientation = Rotation3D(entity.orientation(relativeTo: nil))
                var newOrientation: Rotation3D

                // Rotate around Y based on the horizontal drag direction.
                if value.location.x >= lastGestureValueX {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueX = value.location.x

                // Rotate around X based on the vertical drag direction.
                orientation = Rotation3D(entity.orientation(relativeTo: nil))
                if value.location.y >= lastGestureValueY {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .x))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .x))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueY = value.location.y
            }
    )
}
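A sketch of one way to move the pivot, adapting the code above: keep the ModelEntity as the gesture target, but place its origin at the specimen's visual center so the rotation happens around the specimen. The helper below is a hypothetical refactor of the parent-entity setup:

import RealityKit

// Wrap the specimen in a parent whose origin sits at the specimen's visual center.
func makePivotedParent(for specimen: Entity) -> ModelEntity {
    let parent = ModelEntity()
    parent.addChild(specimen)

    // Put the parent's origin at the specimen's visual center, then offset the
    // specimen back by the same amount so nothing visibly moves.
    let bounds = specimen.visualBounds(relativeTo: parent)
    parent.position = bounds.center
    specimen.position -= bounds.center

    parent.generateCollisionShapes(recursive: true)
    parent.components.set(InputTargetComponent())
    return parent
}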
It slows down the device and interferes with user interaction, which makes the already ridiculous one-minute capture time even worse.
I currently have an iOS app that transmits an H.264 stream over Wi-Fi, decodes it with VideoToolbox, and displays it with MTKView. I want to implement similar functionality in visionOS. What should I do? MTKView is not available on visionOS.
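One commonly suggested direction, sketched under the assumption that AVSampleBufferDisplayLayer is available for your visionOS deployment target: host the layer in a UIViewRepresentable and enqueue the CMSampleBuffers there instead of drawing into an MTKView.

import SwiftUI
import UIKit
import AVFoundation

// A UIView whose backing layer is an AVSampleBufferDisplayLayer.
final class SampleBufferView: UIView {
    override class var layerClass: AnyClass { AVSampleBufferDisplayLayer.self }
    var displayLayer: AVSampleBufferDisplayLayer { layer as! AVSampleBufferDisplayLayer }
}

struct VideoStreamView: UIViewRepresentable {
    // Hypothetical hook that hands the layer back to the decoding pipeline.
    let onReady: (AVSampleBufferDisplayLayer) -> Void

    func makeUIView(context: Context) -> SampleBufferView {
        let view = SampleBufferView()
        view.displayLayer.videoGravity = .resizeAspect
        onReady(view.displayLayer)
        return view
    }

    func updateUIView(_ uiView: SampleBufferView, context: Context) {}
}

// In the decode (or receive) callback, enqueue each CMSampleBuffer:
// displayLayer.enqueue(sampleBuffer)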