I've been upgrading Xcode consistently for years and have never seen Metal shaders behave differently from one version to another until now.
On macOS 14.5 with the Xcode 16 beta, several color outputs suddenly come out completely black where there should be color. All validation is on and reports nothing wrong (and hasn't since maybe Xcode 11).
I've attached two screenshots. The first shows the normal color scheme; the second is from Xcode 16. The settings are exactly the same.
Normal:
Buggy, with black + transparent colors (so it seems the colors are either overflowing or all 0s):
Before I file a bug report or code-level support request, may I have some thoughts on how to debug this? The only clue I have is that I'm using bindless to multiply color texture samples with color values from my vertex struct. But it still fails even if I use hard-coded values for the texture samples, which suggests the color values are somehow not reaching the shader correctly. This is the most stable part of my rendering pipeline, so I'm surprised if the issue is there.
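As a debugging aid, a minimal sketch of triggering a programmatic GPU capture around the suspect frame, so the bindless/argument-buffer contents can be inspected in Xcode's Metal debugger (the helper name is a placeholder):

    import Metal

    // Capture one frame programmatically so the bound argument buffers /
    // bindless resources can be inspected in Xcode's Metal debugger.
    func captureOneFrame(device: MTLDevice, commandQueue: MTLCommandQueue) {
        let captureManager = MTLCaptureManager.shared()
        guard captureManager.supportsDestination(.developerTools) else { return }

        let descriptor = MTLCaptureDescriptor()
        descriptor.captureObject = commandQueue      // scope the capture to this queue
        descriptor.destination = .developerTools     // open the capture in Xcode

        do {
            try captureManager.startCapture(with: descriptor)
            // ... encode and commit the frame you want to inspect here ...
            captureManager.stopCapture()
        } catch {
            print("GPU capture failed: \(error)")
        }
    }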
Thank you.
I have an issue with hand occlusion in immersive mode. I have an entry view for the app and a Metal CompositorLayer (which is the immersive volume) where I have set .upperLimbVisibility(Visibility.hidden). The problem is that when I dismiss the entry view, sometimes it hides the hands and sometimes it doesn't (randomly).
@main
struct AVPainterApp: App
{
    @State var hand: Int32 = 0

    var body: some Scene
    {
        WindowGroup()
        {
            ContentView(hand: $hand)
        }
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace")
        {
            CompositorLayer(configuration: MetalLayerConfiguration())
            {
                layerRenderer in SpatialSceneRun(layerRenderer, hand)
            }
        }
        .upperLimbVisibility(Visibility.hidden)
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS.
https://developer.apple.com/wwdc24/10092
This is a lot of complicated setup, however. It's also unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample, preferably with the C API and not just Swift? This would be much appreciated and helpful to everyone who wants this setup. I'd like to see the whole process.
Thank you for introducing this feature!
I am seeking clarification regarding the new device-coherent memory (buffers and textures) in Metal 3.2. Do I understand the documentation correctly that this feature allows threads from different threadgroups to update data in device memory cooperatively? The documentation mentions, "[results of operations] are visible to other threads across thread groups if you synchronize them properly." How does one do proper synchronization? From what I understand, Metal has no device-scoped barriers.
We're experiencing an issue with incorrect SceneKit hit-testing results in iOS 17.2 compared with iOS 16.1, when using either the Metal or the OpenGLES2 rendering engine.
Tapping on a 3D model to place an SCNNode:
// pointInScene: tapped point
let hitResults = sceneView.hitTest(pointInScene, options: nil)
return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
I've got a full-screen animation of a bunch of circles filled with gradients, with plenty of (careless) overdraw, plus real-time audio processing driving the animation, plus the overhead of SwiftUI's dependency analysis, and that app uses less energy (on iPhone 13) than the Xcode "Metal Game" template, which is just a rotating textured cube (a trivial GPU workload). Why is that? How can I investigate further?
Does CoreAnimation have access to a compositor fast-path that a Metal app cannot access?
Maybe another data point: when I do the same circles animation using SwiftUI's Canvas, the energy use is "Very High" and GPU utilization is also quite high. Eventually the phone's thermal state goes "Serious" and I get a message on the device that "Charging will resume when iPhone returns to normal temperature".
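One way to dig further is to compare the actual GPU time per frame between the two apps; a minimal sketch using the command buffer's GPU timestamps (the helper name is a placeholder):

    import Metal

    // Log how long each frame spends on the GPU; useful for comparing the
    // CoreAnimation/SwiftUI version against the Metal template.
    func commitAndMeasure(_ commandBuffer: MTLCommandBuffer) {
        commandBuffer.addCompletedHandler { buffer in
            let gpuTime = buffer.gpuEndTime - buffer.gpuStartTime   // seconds
            print(String(format: "GPU time: %.3f ms", gpuTime * 1000))
        }
        commandBuffer.commit()
    }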
Why do I get this error almost immediately on starting my rendering pass?
2024-05-29 20:02:22.744035-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:02:22.744455-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:05:54.079981-0500 RoomPlanExampleApp[491:10025] [CAMetalLayer nextDrawable] returning nil because allocation failed.
2024-05-29 20:05:54.080144-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
I am a VOIP app developer.
I am planning to develop a VOIP app on iOS using WebRTC that operates in PiP (Picture-in-Picture) mode.
Since MTKView (CAMetalLayer) cannot be used in PiP mode, I am considering using AVSampleBufferDisplayLayer.
Regarding this, I am curious about the performance differences between CAMetalLayer and AVSampleBufferDisplayLayer.
As far as I know, CAMetalLayer utilizes the GPU.
Does AVSampleBufferDisplayLayer also render using the GPU?
If AVSampleBufferDisplayLayer renders using the GPU, will the rendering performance be similar?
=> Based on tests, there seems to be no difference in CPU usage between the two, which leads me to speculate that AVSampleBufferDisplayLayer also uses the GPU.
If both use the GPU and there are no performance differences, is there a significant advantage to using CAMetalLayer?
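For reference, a minimal sketch of the AVSampleBufferDisplayLayer path under discussion, assuming the decoded frames arrive as CVPixelBuffers (the helper name is a placeholder):

    import AVFoundation
    import CoreMedia

    // Wrap an incoming CVPixelBuffer in a CMSampleBuffer and hand it to
    // AVSampleBufferDisplayLayer, which composites it without a custom Metal pass.
    func enqueue(_ pixelBuffer: CVPixelBuffer,
                 at time: CMTime,
                 on layer: AVSampleBufferDisplayLayer) {
        var formatDescription: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                     imageBuffer: pixelBuffer,
                                                     formatDescriptionOut: &formatDescription)
        guard let format = formatDescription else { return }

        var timing = CMSampleTimingInfo(duration: .invalid,
                                        presentationTimeStamp: time,
                                        decodeTimeStamp: .invalid)
        var sampleBuffer: CMSampleBuffer?
        CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescription: format,
                                                 sampleTiming: &timing,
                                                 sampleBufferOut: &sampleBuffer)
        if let sampleBuffer, layer.isReadyForMoreMediaData {
            layer.enqueue(sampleBuffer)
        }
    }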
Thank you in advance.
Hello,
This exact question was already asked in this forum (8 years ago) but I can't find a definitive answer:
Does Metal allow using the same color texture as both an input and output (color attachment) of a fragment shader? Is the behavior defined somewhere?
I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident.
Would love to get a clarification!
Thanks ahead!
Hello all
We would like to use AMD's FidelityFX Downsampler in our custom game engine, and we are having difficulty implementing it correctly for Metal because of its use of the globallycoherent keyword. We have done an extensive search online but have not found an answer. What we did find is the largely undocumented 'volatile' keyword, so we were hypothesising that marking a texture with 'volatile' (which implies 'device volatile', since it's a texture) could have the same effect, but we are far from convinced it would work. Does anyone have insights into this?
I have provided a test UIKit app which displays three different images, side by side, each inside a separate MTKView. Each image is tagged with a different color profile:
Display P3
uRGB
Test RGB (from an image supplied in Apple's ImageApp sample).
I set up default values for all color spaces and formats. I then check if the image is tagged and, if so, I override those values with state from the tagged color space.
The variables I am setting:
“workingColorSpace” in the Metal CIContext, default = sRGB
“workingFormat” in the Metal CIContext, default = RGBAf
“outputColorSpace” in the Metal CIContext, default = displayP3
“colorPixelFormat” in the MTKView, default = bgra8Unorm
“colorSpace” in a CIRenderDestination that I use in the MTKView delegate draw method
The “colorSpace” default value = CGColorSpaceCreateDeviceRGB()
I also set “pixelFormat” in CIRenderDestination with the MTKView.colorPixelFormat.
If the image is tagged, I override the following values with the tagged colorSpace:
CIContext.workingColorSpace
CIContext.outputColorSpace
CIRenderDestination.colorSpace
If the tagged colorSpace.isWideGamutRGB == true, then I set CIRenderDestination.colorSpace to extendedSRGB (ignoring the color space of the tagged wide-gamut color space) and also set colorPixelFormat = bgr10_xr.
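A minimal sketch of how that wiring could look in code, following the defaults and overrides listed above (simplified; helper names are illustrative):

    import CoreImage
    import MetalKit

    // Defaults described above: sRGB working space, RGBAf working format,
    // Display P3 output, bgra8Unorm drawable.
    func makeContext(device: MTLDevice) -> CIContext {
        CIContext(mtlDevice: device, options: [
            .workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
            .workingFormat: CIFormat.RGBAf,
            .outputColorSpace: CGColorSpace(name: CGColorSpace.displayP3)!
        ])
    }

    func makeDestination(for view: MTKView,
                         drawableTexture: MTLTexture,
                         commandBuffer: MTLCommandBuffer,
                         taggedSpace: CGColorSpace?) -> CIRenderDestination {
        let destination = CIRenderDestination(mtlTexture: drawableTexture,
                                              commandBuffer: commandBuffer)
        if let taggedSpace, taggedSpace.isWideGamutRGB {
            // Wide-gamut source: render into extended sRGB and use a 10-bit drawable.
            destination.colorSpace = CGColorSpace(name: CGColorSpace.extendedSRGB)
            view.colorPixelFormat = .bgr10_xr
        } else {
            destination.colorSpace = taggedSpace ?? CGColorSpaceCreateDeviceRGB()
        }
        return destination
    }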
Results:
The above scenario will properly render the DisplayP3 image, and the uRGB image. The “Test RGB” image fails:
If I do not override the CIRenderDestination.colorSpace with a value from the tagged image, then the “Test RGB” image succeeds, but the “uRGB” image fails to render properly:
Question: Do I have everything hooked up correctly and, if so, why does one image fail, and the other succeed?
Link to sample project:
https://www.dropbox.com/scl/fi/57u2fcrgdvys7jtzykzxt/ColorSpaceTest.zip?rlkey=unjeeiu7mi0wx9wfpylt78nwd&dl=0
After the 4.2.9 build, I have a weird bug. It keeps crashing, and when I read the message, it displays:
validateRenderPassDescriptor:782: failed assertion `RenderPass Descriptor Validation
Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)
This happens when I run in debug mode and try to hook up the Motion template. I found out that the output texture is created with only the MTLTextureUsageShaderRead usage and not MTLTextureUsageRenderTarget.
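For reference, a texture meant to be used as a color attachment has to declare that usage when it is created; a minimal sketch of what that looks like (a hypothetical helper, since FxPlug/Motion creates this output texture internally):

    import Metal

    // A texture intended as both a shader input and a render target must
    // declare both usages up front in its descriptor.
    func makeRenderTargetTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                                  width: width,
                                                                  height: height,
                                                                  mipmapped: false)
        descriptor.usage = [.shaderRead, .renderTarget]   // not just .shaderRead
        return device.makeTexture(descriptor: descriptor)
    }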
Does anyone have the same problem?
I am using FxPlug 4.2.9, Motion 5.7, and Final Cut Pro 10.7.1, running on Sonoma 14.2.1.
When I try to import MetalFX in visionOS, Xcode shows the error "No such module 'MetalFX'", but the documentation says visionOS is supported.
Hi,
I am using Xcode frame capture to profile my app's shaders, and I have some questions about the per-line profile statistics. Please see the two screenshots first; they show my compute shader.
Begin:
End:
The first image is the head of the shader. The profile shows that the shader entry function takes 72.44% of the time.
And at the end of the shader, the profile shows that the right brace '}' takes 60.45%.
Here is my question:
How should I interpret the profile data? What is the real performance breakdown of this shader?
Why doesn't the shader entry function take 100% of the time?
Can someone help me to answer the question?
Thanks!
Boson
I have been using MTKView to display CVPixelBuffers from the camera. I use many options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone-map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise on functionality?
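For comparison, a minimal sketch of the kind of manual CAMetalLayer configuration being described (the exact pixel format, transfer function, and EDR metadata depend on the content, so treat these values as placeholders):

    import QuartzCore
    import Metal

    // Manual HDR/color setup on a CAMetalLayer; AVSampleBufferDisplayLayer
    // derives the equivalent information from the pixel buffer attachments.
    func configureForHDR(_ layer: CAMetalLayer) {
        layer.pixelFormat = .rgba16Float
        layer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
        layer.wantsExtendedDynamicRangeContent = true
        if #available(iOS 16.0, *) {
            layer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.005,
                                                    maxLuminance: 1000,
                                                    opticalOutputScale: 100)
        }
    }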
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all WWDC videos on this topic but have some doubts expressed below.
Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case? I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as-is, normalised to [0-1]. But then it's possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader. In short, how do you safely process HDR pixel buffers received from the camera (which are YCbCr 4:2:0 10-bit, and which I believe have gamma correction applied, so the Y in YCbCr is actually Y'; can the AVFoundation team clarify this)?
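For illustration only, the two context setups contrasted above, as a minimal sketch (assuming extended-linear BT.2020 is an appropriate linear working space; that choice is an assumption, not a recommendation):

    import CoreImage

    // 1. No working-space conversion: samples arrive as-is, normalised to [0, 1].
    let passthroughContext = CIContext(options: [.workingColorSpace: NSNull()])

    // 2. A linear working space, so the kernel sees linear values
    //    (assuming extended-linear BT.2020 is the appropriate space here).
    let linearSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
    let linearContext = CIContext(options: [.workingColorSpace: linearSpace])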
In Instruments' CPU Profiling tool I've noticed that a significant portion (22.5%) of the CPU-side overhead of MPS matrix multiplication (GEMM) is in a call to getenv(). Please see the attached screenshot.
It seems unnecessary to perform this same check over and over; whatever hack needs this should be able to perform the getenv() only once and cache the result for future use.
Hi,
Is transparency supported in MetalFX?
I have a project that sets a texture to a particular alpha value. It works fine. However, as soon as I enable MetalFX, the transparency stops working. The alpha value is set to 1.0.
If transparency is supported in MetalFX, how do I enable it?
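For context, a minimal sketch of a typical MetalFX spatial-scaler setup using an alpha-bearing pixel format; whether the alpha channel actually survives the upscale is exactly the open question here (sizes and formats are placeholders):

    import Metal
    import MetalFX

    // Set up a spatial scaler with RGBA (alpha-bearing) input and output formats.
    func makeScaler(device: MTLDevice, inputSize: (Int, Int), outputSize: (Int, Int)) -> MTLFXSpatialScaler? {
        let descriptor = MTLFXSpatialScalerDescriptor()
        descriptor.inputWidth = inputSize.0
        descriptor.inputHeight = inputSize.1
        descriptor.outputWidth = outputSize.0
        descriptor.outputHeight = outputSize.1
        descriptor.colorTextureFormat = .rgba16Float
        descriptor.outputTextureFormat = .rgba16Float
        return descriptor.makeSpatialScaler(device: device)
    }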
Thank you
I'm experimenting with Vision OS and Apple Vision Pro using the Xcode Beta. I'm using Xcode 15.1 Beta and visionOS 1.0 beta 4.
I'm currently doing a project where I draw a polygon using a mesh generated from MeshDescriptor/MeshResource and present it in an ImmersiveView.
I want to change the color of parts, i.e. not all of, my 3D rendered polygon and I want to do it dynamically. For example when the user presses a button.
I have gotten into Shaders and the CustomMaterial from RealityKit, only to find out that CustomMaterial is not supported on Vision OS!
Does anyone know how I can color portions/parts of a mesh that is generated from MeshDescriptor and MeshResource?
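One possible workaround sketch (not an official answer): build each colorable region as its own mesh part and entity, then swap its material at runtime, for example from a button action. The helper names below are placeholders:

    import RealityKit

    // Build each selectable region as its own ModelEntity so its material
    // can be swapped independently at runtime.
    func makeRegionEntity(positions: [SIMD3<Float>],
                          indices: [UInt32],
                          color: SimpleMaterial.Color) throws -> ModelEntity {
        var descriptor = MeshDescriptor(name: "region")
        descriptor.positions = MeshBuffer(positions)
        descriptor.primitives = .triangles(indices)
        let mesh = try MeshResource.generate(from: [descriptor])
        return ModelEntity(mesh: mesh,
                           materials: [SimpleMaterial(color: color, isMetallic: false)])
    }

    // Later, e.g. in a button action, recolor just one region:
    func highlight(_ region: ModelEntity) {
        region.model?.materials = [SimpleMaterial(color: .red, isMetallic: false)]
    }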
I know OpenGL ES has been marked as deprecated since iOS 12, but I have an old project using it, and I want to update some of its features and then release the updated version.
So I'm wondering: can I currently still release an app that uses OpenGL ES to the App Store?
(I know it would be better to shift to MetalKit, but for some reason I want to cut costs if I can.)