I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session.
I have a few unanswered questions:
Where is the sample code available from?
Are the new Object Capture APIs on iOS limited to certain devices?
Can we capture images from the front-facing cameras?
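While I wait for the sample code, this is the bare-bones setup I've pieced together from the session (just a sketch from memory, so the exact names and flow may not match the real sample; "Images/" is a placeholder location):
import SwiftUI
import RealityKit
struct CaptureView: View {
    // Sketch only: ObjectCaptureSession / ObjectCaptureView as shown in the session.
    @State private var session = ObjectCaptureSession()
    var body: some View {
        ObjectCaptureView(session: session)
            .task {
                // Placeholder location for the captured shots.
                let imagesDirectory = URL.documentsDirectory.appending(path: "Images/")
                try? FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
                session.start(imagesDirectory: imagesDirectory, configuration: ObjectCaptureSession.Configuration())
            }
    }
}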
Hello Dev Community,
I've been thinking about Apple's preference for USDZ for AR and 3D content, especially when the widely used GLTF is available. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, GLTF too is a popular format with its own merits, like being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like GLTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Hello,
I've started testing the Metal Shader Converter to convert my HLSL shaders to metallib directly, and I was wondering if the option '-frecord-sources' was supported in any way?
Usually I’m compiling my shaders as follows (from Metal):
xcrun -sdk macosx metal -c -frecord-sources shaders/shaders.metal -o shaders/shaders.air
xcrun -sdk macosx metallib shaders/shaders.air -o shaders/shaders.metallib
The -frecord-sources option allows me to see the source when debugging and profiling a Metal frame.
Now, DXC has a similar option: I can compile a typical HLSL shader with embedded debug symbols with:
dxc -T vs_6_0 -E VSMain shaders/triangle.hlsl -Fo shaders/triangle.dxil -Zi -O0 -Qembed_debug
The important options here are '-Zi' and '-Qembed_debug', as they make sure debug symbols are embedded in the DXIL.
It seems that right now Metal Shader Converter doesn’t pass through the DXIL debug information, and I was wondering if it was possible. I’ve looked at all the options in the utility and haven’t seen anything that looked like it.
Right now debug symbols in my shaders are a must-have, so I'll explore other routes to convert my HLSL shaders to Metal (I've been testing SPIRV-Cross for the conversion; I haven't actually tested the debug symbols yet, I'll report back later).
Thank you for your time!
Is MTKView intentionally unavailable on visionOS or is this an issue with the current beta?
I have multiple images that, at various times, I need to swap in as the texture of a target SKSpriteNode.
Each of these multiple images has a different size.
The target SKSpriteNode has a fixed frame that I want to stay fixed.
This target is created via:
myTarget = SKSpriteNode(imageNamed: "target")
myTarget.size = CGSize(…)
myTarget.physicsBody = SKPhysicsBody(rectangleOf: myTarget.size)
How do I resize each of the multiple images so that each fills up the target frame (expand or contract)?
Pretend the target is a shoebox and each image is a balloon that expands or contracts to fill the shoebox.
I have tried the following, which fails: it changes the size of the target to fit the new image. In short, it does the exact opposite of what I want.
let newTexture = SKTexture(imageNamed: newImage)
let changeImgAction = SKAction.setTexture(newTexture, resize: true)
myTarget.run(changeImgAction)
Again, I want to keep the frame of myTarget fixed and change the size of newTexture to fit that frame.
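One idea I plan to test next, assuming that passing resize: false keeps the node's size and stretches the texture to fill it (I haven't verified this yet; fixedTargetSize below is just a placeholder for the original size):
let newTexture = SKTexture(imageNamed: newImage)
// resize: false should leave myTarget's frame untouched and stretch the new
// texture to fill it, which is the "balloon in a shoebox" behavior I'm after.
let changeImgAction = SKAction.setTexture(newTexture, resize: false)
myTarget.run(changeImgAction)
// Alternative I may also try: assign the texture directly and reassert the size.
// myTarget.texture = newTexture
// myTarget.size = fixedTargetSize // the original fixed CGSize (placeholder name)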
Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case would be that I would like to add an entity to the scene at my user's eye level, which basically depends on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
Appreciate any hints, thanks!
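For reference, the closest thing I've found so far is a head-target anchor. Here's a sketch of what I'm experimenting with, assuming AnchorEntity(.head) behaves like a POV anchor (the collider radius is arbitrary):
import SwiftUI
import RealityKit
struct POVColliderView: View {
    var body: some View {
        RealityView { content in
            // Sketch: anchor a collider entity to the head / POV.
            // Whether this tracks every frame the way ARView's cameraTransform did
            // is exactly what I'm unsure about.
            let headAnchor = AnchorEntity(.head)
            let collider = Entity()
            collider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            headAnchor.addChild(collider)
            content.add(headAnchor)
        }
    }
}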
I am working on a fully immersive RealityView for visionOS and I need to add light from the sun to my scene. I see that DirectionalLight, PointLight, and SpotLight are not available on visionOS. Does anyone know how to add light to a fully immersive scene on visionOS?
My scene is really dark right now without any additional light.
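The only approach I've come across so far is image-based lighting with an EnvironmentResource. Here's a sketch of what I'm about to try; the "Sunlight" asset name is a placeholder, and I'm not sure this is the intended replacement for DirectionalLight:
import RealityKit
// Sketch: light the immersive scene with an image-based light.
// "Sunlight" is a placeholder for an equirectangular environment image in the bundle.
func addSunlight(to root: Entity) async {
    guard let environment = try? await EnvironmentResource(named: "Sunlight") else { return }
    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    root.addChild(iblEntity)
    // Entities that should receive the light need a receiver component pointing at it.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}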
I have a RealityView in my visionOS app. I can't figure out how to access RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't figure out how to access it for my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, Entities, and Components, I can't find it.
Sample project from: https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3.
In beta 4, getting these errors:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix?
Thanks
What is the most efficient way to use a MTLTexture (created procedurally at run-time) as a RealityKit TextureResource? I update the MTLTexture per-frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy?
A specific example would be great.
Thank you!
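For context, the copy-based route I'd like to avoid looks roughly like this (a sketch only; myMetalTexture, textureResource and commandQueue are placeholders for my own objects, and the pixel formats and sizes must match for the blit):
// Per-frame copy via a DrawableQueue and a blit; this works but is the copy
// I'm hoping to eliminate.
let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: myMetalTexture.width,
    height: myMetalTexture.height,
    usage: [.shaderRead, .renderTarget],
    mipmapsMode: .none
)
let queue = try await TextureResource.DrawableQueue(descriptor)
await textureResource.replace(withDrawables: queue)
// Each frame, after my Metal pass has updated myMetalTexture:
if let drawable = try? queue.nextDrawable(),
   let commandBuffer = commandQueue.makeCommandBuffer(),
   let blit = commandBuffer.makeBlitCommandEncoder() {
    blit.copy(from: myMetalTexture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}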
Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get the models to render in my 2D windows, but when I try making a separate window group I get errors.
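For what it's worth, this is the minimal setup I'm trying to get working, assuming a volumetric window group plus openWindow is the intended pattern (ContentView and "MyModel" are placeholders):
import SwiftUI
import RealityKit
@main
struct ModelViewerApp: App {
    var body: some Scene {
        // Regular 2D window.
        WindowGroup {
            ContentView()
        }
        // Separate window group that hosts the 3D model.
        WindowGroup(id: "modelWindow") {
            Model3D(named: "MyModel") // placeholder model name
        }
        .windowStyle(.volumetric)
    }
}
// Opening it from the 2D window:
// @Environment(\.openWindow) private var openWindow
// Button("Show model") { openWindow(id: "modelWindow") }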
Hi,
I'm creating an SF Symbols image like this:
var img = UIImage(systemName: "x.circle" ,withConfiguration: symbolConfig)!.withTintColor(.red)
In the debugger the image is really red.
and I'm using this image to create an SKTexture:
let shuffleTexture = SKTexture(image: img)
The texture image is ALWAYS black and I have no idea how to change its color. Nothing I've tried so far works.
Any ideas how to solve this?
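The next thing on my list is forcing the original rendering mode when tinting, so SpriteKit doesn't treat the symbol as a template image (just a sketch; I haven't verified it yet):
// Ask UIKit for a non-template (already tinted) image before handing it to SpriteKit.
let img = UIImage(systemName: "x.circle", withConfiguration: symbolConfig)!
    .withTintColor(.red, renderingMode: .alwaysOriginal)
let shuffleTexture = SKTexture(image: img)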
Thank you!
Best Regards,
Frank
Hi, I have an app based on Metal and it runs on visionOS.
It has a huge sphere mesh and renders video outputs (from AVPlayer) on it.
What I want to do is render the left portion of my video output to the left eye, and the right portion to the right eye.
In my fragment shader, I think I need to know whether the current invocation is for the left eye or the right eye. (I'm not using MV-HEVC encoded video, just plain HEVC.)
So what I currently do is assume that 'amplification_id' is the value that determines which eye is being rendered, but I'm not sure this is correct.
I know OpenGL ES has been marked as deprecated since iOS 12, but I have an old project using it, and I want to update some of its features and then release the updated version.
So I'm wondering: can I still release an app using OpenGL ES to the App Store currently?
(I know it's better to shift to MetalKit, but for some reason I want to cut the cost if I can.)
I have some strange behavior in my app. When I set the position to .zero, the sphere is visible as expected. But when I change it to any other value, no matter which or how small, the sphere isn't visible in the view anymore.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent
struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent
class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor
    required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
    }
}
I am attempting to place images on wall anchors and move their position using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, the logic stops working properly. The problem seems to be that the anchor's orientation and the drag's location3D orientation don't coincide once you are dealing with wall anchors that are not facing the original user position (using Xcode beta 8).
Question:
How do I apply drag gestures to an image regardless of where the wall anchor is located relative to the user's original facing direction?
Using the following code:
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: convertedPos.y * (-1))
        }
}
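One variation I've been meaning to try is converting into the parent's space for all three axes instead of scaling and remapping them by hand, assuming convert(_:from:to:) already accounts for the anchor's rotation (sketch, not verified):
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            // Let RealityKit do the full conversion into the anchor's space,
            // rather than zeroing y and swapping axes manually.
            entity.position = value.convert(value.location3D, from: .local, to: entity.parent!)
        }
}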
Hi, I'm trying to use metal-cpp, but I get compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
    PRIVATE
        "-Wgnu-anonymous-struct"
        "-Wold-style-cast"
        "-Wdtor-name"
        "-Wpedantic"
        "-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
I'm trying to animate a shape (e.g. a circle) to follow a custom path, and struggling to find the best way of doing this.
I've had a look at the animation options from SwiftUI, UIKit and SpriteKit and all seem very limited in what paths you can provide. Given the complexity of my path, I was hoping there'd be a way of providing a set of coordinates in some input file and have the shape follow that, but maybe that's too ambitious.
I was wondering if this were even possible, and assuming not, if there were other options I could consider.
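In case it helps anyone answer: the kind of thing I'm imagining is building a CGPath from a list of coordinates and having the node follow it. With SpriteKit that might look roughly like this (sketch; the points array stands in for whatever I'd load from the input file):
import SpriteKit

// Sketch: build an arbitrary path from a list of points and let a node follow it.
let points: [CGPoint] = [CGPoint(x: 0, y: 0), CGPoint(x: 120, y: 80), CGPoint(x: 240, y: 0), CGPoint(x: 360, y: -80)]
let path = CGMutablePath()
path.move(to: points[0])
for point in points.dropFirst() {
    path.addLine(to: point) // could also use addCurve/addQuadCurve for smooth segments
}

let circle = SKShapeNode(circleOfRadius: 10)
// asOffset: false treats the path coordinates as absolute scene positions.
let followAction = SKAction.follow(path, asOffset: false, orientToPath: false, duration: 4.0)
circle.run(followAction)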
Hi all
So I'm quite new to game dev and am struggling a bit with the tile map.
All my elements have a size of 64x64. As you can see in my screenshot, there is a gap between the street and the water. It might be simple, but what's the best way to fix that gap? I could increase the width of the left and right edge PNGs, but then I will sooner or later run into other problems, as they would no longer fit with the rest.
Thanks for your help
Cheers from Switzerland
I'm using DrawableQueue to create textures that I apply to my ShaderGraphMaterial texture. My Metal renderer is using a range of alpha values as a test.
My objects displayed with the DrawableQueue texture are working as expected, but the alpha component is not working.
Is this an issue with my DrawableQueue descriptor? My ShaderGraphMaterial? A missing setting on my scene objects? or some limitation in visionOS?
DrawableQueue descriptor
let descriptor = await TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: textureResource!.width,
    height: textureResource!.height,
    usage: [.renderTarget, .shaderRead, .shaderWrite], // Usage should match the requirements for how the texture will be used
    //usage: [.renderTarget], // Usage should match the requirements for how the texture will be used
    mipmapsMode: .none // Assuming no mipmaps are needed for the text texture
)
let queue = try await TextureResource.DrawableQueue(descriptor)
queue.allowsNextDrawableTimeout = true
await textureResource!.replace(withDrawables: queue)
Draw frame:
guard
    let drawable = try? drawableQueue!.nextDrawable(),
    let commandBuffer = commandQueue?.makeCommandBuffer()//,
    //let renderPipelineState = renderPipelineState
else {
    return
}

let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = clearColor
/*renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(
    red: clearColor.red,
    green: clearColor.green,
    blue: clearColor.blue,
    alpha: 0.5 )*/
renderPassDescriptor.renderTargetHeight = drawable.texture.height
renderPassDescriptor.renderTargetWidth = drawable.texture.width

guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
    return
}
renderEncoder.pushDebugGroup("DrawNextFrameWithColor")
//renderEncoder.setRenderPipelineState(renderPipelineState)
// No need to create a render command encoder with shaders, as we are only clearing the drawable.
// Since we are just clearing the drawable to a solid color, no need to draw primitives.
renderEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
drawable.present()
}