I have multiple CAMetalLayers that I render content to, and I noticed that the graphics overview HUD does not function properly when I have more than one CAMetalLayer. The values reported are very strange. For example, FPS may report 999 or some large negative value. Is the HUD simply not designed to work with multiple CAMetalLayers or MTKViews? When I disable all but one of my CAMetalLayers, the HUD works as expected.
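In case it helps narrow things down, I've also been experimenting with scoping the HUD to a single layer programmatically rather than enabling it process-wide. This is only a sketch, and it assumes the developerHUDProperties dictionary on CAMetalLayer accepts the "mode" key shown below:

import QuartzCore

// Sketch: turn the Metal performance HUD on for one specific CAMetalLayer only,
// instead of enabling it for every layer in the process.
func setHUD(_ enabled: Bool, on metalLayer: CAMetalLayer) {
    // "mode": "default" enables the HUD for this layer (key name assumed from
    // the developer HUD documentation); nil turns it back off.
    metalLayer.developerHUDProperties = enabled ? ["mode": "default"] : nil
}

With only one layer opted in, the counters at least come from a known source, which makes it easier to tell whether the strange values are a reporting artifact of having several layers on screen.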
MetalKit
Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.
Posts under MetalKit tag
Hi!
How do I define and call an inline function in Metal? Or a simple function that returns some value?
Case:
inline uint index4D(constant _4D& shape,
                    constant uint& n,
                    constant uint& c,
                    constant uint& h,
                    constant uint& w) {
    return n * shape.C * shape.H * shape.W + c * shape.H * shape.W + h * shape.W + w;
}
When I call it in my kernel function, I get a "No matching function for call" error.
Thx in advance.
Hi,
A user sent us a crash report that indicates an error occurring just after loading the default Metal library of our app.
Application Specific Information:
Crashing on exception: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array
The report pointed me to these (simplified) lines of codes in the library setup:
_vertexFunctions = [[NSMutableArray alloc] init];
_fragmentFunctions = [[NSMutableArray alloc] init];
id<MTLLibrary> library = [device newDefaultLibrary];
Two vertex shaders and five fragment shaders are then loaded and stored in these two arrays using this method:
-(BOOL) addShaderNamed:(NSString *)name library:(id<MTLLibrary>)library isFragment:(BOOL)isFragment {
    id shader = [library newFunctionWithName:name];
    if (!shader) {
        ALOG(@"Error : Unable to find the shader named : “%@”", name);
        return NO;
    }
    [(isFragment ? _fragmentFunctions : _vertexFunctions) addObject:shader];
    return YES;
}
As you can see, the arrays are not filled if the method fails... however, a few lines later, they are used without checking if they are really filled, and that causes the crash...
But this coding error doesn't explain why no shader of a certain type (or of both types) was added to the arrays. In other words: why did -newFunctionWithName: return nil for all the given names (since the array in question appears completely empty)?
Clue
This error has only been detected once, by a user running the app on macOS 10.13 with an NVIDIA Web Driver instead of the default macOS graphics driver. Moreover, it wasn't possible to reproduce the problem on the same OS using the native macOS driver.
So my question is: are there any known conflicts between NVIDIA drivers and the use of Metal libraries? Or does this case require some specific options in the Metal implementation?
Any help appreciated, thanks!
If you check the code here,
https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough
var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }
            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }
            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }
                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()
                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }
            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
the only way it deals with the error is fatalError.
And I don't think I can throw anything or return anything else?
Is there a way I can handle this gracefully and show a message box in the UI?
I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure,
but I couldn't find a nice way to do so.
Let me know if you have ideas.
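One approach I'm considering, sketched below, is to record the failure on the app model instead of calling fatalError, skip starting the render loop, and let SwiftUI present an alert and dismiss the space. The setupError property is an assumption added for the sketch; it is not part of Apple's sample:

// Sketch only (not the Apple sample): bail out of the CompositorLayer closure
// instead of crashing, and surface the error to the UI via the app model.
CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
    let pathCollection: PathCollection
    let tintRenderer: TintRenderer
    do {
        pathCollection = try PathCollection(layerRenderer: layerRenderer)
        tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
    } catch {
        Task { @MainActor in
            appModel.setupError = error   // assumed property, observed by SwiftUI
        }
        return                            // don't start the render loop
    }
    // ... continue with the Task / Renderer setup from the sample ...
}

A view observing appModel.setupError could then show an .alert and call the dismissImmersiveSpace environment action. openImmersiveSpace itself would still report success, but at least the app degrades to a message box instead of a crash.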
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate insights into the respective advantages of RealityKit and CompositorServices.
Hey guys,
is it possible to implement mirror-like reflections like in this project:
https://developer.apple.com/documentation/metal/metal_sample_code_library/rendering_reflections_in_real_time_using_ray_tracing
for visionOS? Or is the hardware not prepared for Metal Raytracing?
Thanks in advance
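For what it's worth, you can at least probe ray-tracing support at runtime before committing to that approach. A minimal check, just querying the MTLDevice capability flags, would be:

import Metal

// Prints whether the current device reports support for Metal ray tracing.
if let device = MTLCreateSystemDefaultDevice() {
    print("supportsRaytracing:           \(device.supportsRaytracing)")
    print("supportsRaytracingFromRender: \(device.supportsRaytracingFromRender)")
    print("supportsFunctionPointers:     \(device.supportsFunctionPointers)")
}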
Xcode 16 seems to have an issue with stitchable kernels in Core Image, which gives build errors as stated in this question. As a workaround, I selected Metal 3.2 as the Metal Language Revision in the Xcode project. It works on newer devices like the iPhone 13 Pro and above, but Metal texture creation fails on older devices like the iPhone 11 Pro. Is this a known issue, and is there a workaround? I tried setting the Metal Language Revision to 2.4, but the same build errors occur as reported in this question. Here is the code where the assertion failure happens on the iPhone 11 Pro.
let vertexShader = library.makeFunction(name: "vertexShaderPassthru")
let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV")
let pipelineDescriptorYUV = MTLRenderPipelineDescriptor()
pipelineDescriptorYUV.rasterSampleCount = 1
pipelineDescriptorYUV.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptorYUV.depthAttachmentPixelFormat = .invalid
pipelineDescriptorYUV.vertexFunction = vertexShader
pipelineDescriptorYUV.fragmentFunction = fragmentShaderYUV
do {
    try pipelineStateYUV = metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
}
catch {
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}
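In case it helps with debugging, a small variation of the catch above that logs the underlying error (same names as in the snippet) can show whether the failure comes from a missing shader function or from the descriptor being rejected:

do {
    pipelineStateYUV = try metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
}
catch {
    // Print the underlying Metal error before asserting; on the failing devices
    // this message should say what exactly was rejected.
    print("Pipeline creation failed: \(error)")
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}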
My app is suddenly broken when I build it with Xcode 16. It seems Core Image kernel compilation is broken in Xcode 16. Answers on StackOverflow seem to suggest we need to use a downgraded version of the Core Image framework as a workaround, but I am not sure if there is a better solution out there.
FYI, I am using [[ stitchable ]] kernels, and the projects using stitchable are the ones showing the issue.
air-lld: error: symbol(s) not found for target 'air64_v26-apple-ios17.0.0'
metal: error: air-lld command failed with exit code 1 (use -v to see invocation)
/Users/Username/Camera4S-Swift/air-lld:1:1: ignoring file '/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/CoreImage.framework/CoreImage.metallib', file AIR version (2.7) is bigger than the one of the target being linked (2.6)
I use Xcode 16 and SwiftUI on a macOS 15 system. There is a problem: when I render a picture through an MTKView, it displays normally in a regular view. However, when the view is presented via the .sheet modifier, the image is not displayed. There is no error message from Xcode.
import Foundation
import MetalKit
import SwiftUI
struct CIImageDisplayView: NSViewRepresentable {
    typealias NSViewType = MTKView
    var ciImage: CIImage
    init(ciImage: CIImage) {
        self.ciImage = ciImage
    }
    func makeNSView(context: Context) -> MTKView {
        let view = MTKView()
        view.delegate = context.coordinator
        view.preferredFramesPerSecond = 60
        view.enableSetNeedsDisplay = true
        view.isPaused = true
        view.framebufferOnly = false
        if let defaultDevice = MTLCreateSystemDefaultDevice() {
            view.device = defaultDevice
        }
        view.delegate = context.coordinator
        return view
    }
    func updateNSView(_ nsView: MTKView, context: Context) {
    }
    func makeCoordinator() -> RawDisplayRender {
        RawDisplayRender(ciImage: self.ciImage)
    }
    class RawDisplayRender: NSObject, MTKViewDelegate {
        // MARK: Metal resources
        var device: MTLDevice!
        var commandQueue: MTLCommandQueue!
        // MARK: Core Image resources
        var context: CIContext!
        var ciImage: CIImage
        init(ciImage: CIImage) {
            self.ciImage = ciImage
            self.device = MTLCreateSystemDefaultDevice()
            self.commandQueue = self.device.makeCommandQueue()
            self.context = CIContext(mtlDevice: self.device)
        }
        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}
        func draw(in view: MTKView) {
            guard let currentDrawable = view.currentDrawable,
                  let commandBuffer = commandQueue.makeCommandBuffer()
            else {
                return
            }
            let dSize = view.drawableSize
            let drawImage = self.ciImage
            let destination = CIRenderDestination(width: Int(dSize.width),
                                                  height: Int(dSize.height),
                                                  pixelFormat: view.colorPixelFormat,
                                                  commandBuffer: commandBuffer,
                                                  mtlTextureProvider: { () -> MTLTexture in
                                                      return currentDrawable.texture
                                                  })
            _ = try? self.context.startTask(toClear: destination)
            _ = try? self.context.startTask(toRender: drawImage, from: drawImage.extent,
                                            to: destination, at: CGPoint(x: (dSize.width - drawImage.extent.width) / 2, y: 0))
            commandBuffer.present(currentDrawable)
            commandBuffer.commit()
        }
    }
}
struct ShowCIImageView: View {
    let cii = CIImage.init(contentsOf: Bundle.main.url(forResource: "9-10", withExtension: "jpg")!)!
    var body: some View {
        CIImageDisplayView.init(ciImage: cii).frame(width: 500, height: 500).background(.red)
    }
}
struct ContentView: View {
    @State var showImage = false
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
            ShowCIImageView()
            Button {
                showImage = true
            } label: {
                Text("showImage")
            }
        }
        .frame(width: 800, height: 800)
        .padding()
        .sheet(isPresented: $showImage) {
            ShowCIImageView()
        }
    }
}
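One thing worth trying (just a guess on my part): since the view is created with isPaused = true and enableSetNeedsDisplay = true, it only draws when something marks it dirty, and the brand-new MTKView created inside the sheet may never get that nudge. Forcing a redraw from updateNSView would test that theory:

func updateNSView(_ nsView: MTKView, context: Context) {
    // With isPaused = true and enableSetNeedsDisplay = true the view only draws
    // when explicitly marked dirty, so request a draw whenever SwiftUI updates it.
    // (Sketch, untested in the sheet scenario.)
    nsView.setNeedsDisplay(nsView.bounds)
}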
I'm using the Apple RoomPlan SDK to generate a .usdz file, which works fine and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    try finalResults?.export(to: destinationURL, exportOptions: .model)
    let newUsdz = destinationURL;
    let asset = MDLAsset(url: newUsdz);
    let obj = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: obj)
}
Not sure what's wrong here. According to the MDLAsset documentation, .obj is a supported format, and exporting from .usdz to other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
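A quick diagnostic worth adding after loading the asset (using the same names as above) is to count the meshes Model I/O actually imported, which would show whether the geometry is lost on import from USDZ or on export to OBJ:

// Sketch: inspect what MDLAsset actually imported before exporting.
let asset = MDLAsset(url: newUsdz)
asset.loadTextures()
let meshes = asset.childObjects(of: MDLMesh.self).compactMap { $0 as? MDLMesh }
print("Imported \(meshes.count) meshes")
for mesh in meshes {
    print("  vertices: \(mesh.vertexCount), submeshes: \(mesh.submeshes?.count ?? 0)")
}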
Specifically, I'm using the newer colorEffect, layerEffect, etc. routes; the code below does not seem to work. Do I have to use the old MetalKit approach?
import SwiftUI
import MetalKit
struct HoloPreview: View {
    let startDate = Date()
    @State private var noiseTexture: MTLTexture?
    var body: some View {
        TimelineView(.animation) { context in
            RoundedRectangle(cornerRadius: 20)
                .fill(Color.red)
                .layerEffect(ShaderLibrary.iridescentEffect(
                    .float(startDate.timeIntervalSinceNow),
                    .texture(noiseTexture)
                ), maxSampleOffset: .zero)
        }
        .onAppear {
            noiseTexture = loadTexture(named: "perlinNoiseMap")
        }
    }
    func loadTexture(named imageName: String) -> MTLTexture? {
        guard let device = MTLCreateSystemDefaultDevice(),
              let url = Bundle.main.url(forResource: imageName, withExtension: "png") else {
            return nil
        }
        let textureLoader = MTKTextureLoader(device: device)
        let texture = try? textureLoader.newTexture(URL: url, options: nil)
        return texture
    }
}
#Preview {
    HoloPreview()
}
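One variation I'd try, assuming the image case of Shader.Argument is the intended way to hand a bitmap to a layerEffect (and that perlinNoiseMap exists as an image asset), is to pass the noise map as an Image instead of an optional MTLTexture:

// Sketch: pass the noise map as an .image argument rather than an MTLTexture.
// In the .metal file the matching parameter would be a texture2d<half>
// (assumption about how the argument maps on the shader side).
RoundedRectangle(cornerRadius: 20)
    .fill(Color.red)
    .layerEffect(ShaderLibrary.iridescentEffect(
        .float(startDate.timeIntervalSinceNow),
        .image(Image("perlinNoiseMap"))
    ), maxSampleOffset: .zero)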
I am trying to convert a ThreeJS project to Metal for the Vision Pro. The issue is that ThreeJS doesn't do any color space conversion (when I output a color in a fragment shader and then read it using the Digital Color Meter in sRGB mode, I get the same value I input in the fragment shader). This is not the case when using Metal. When setting up my LayerRenderer, I set the colorFormat to rgba16Unorm, since it is the only non-sRGB color format supported for Vision Pro apps. However, switching between bgra8Unorm_srgb and rgba16Unorm seems to have no effect.
When I set up the renderPassDescriptor, I use the drawable's color texture:
renderPassDescriptor.colorAttachments[0].texture = drawable.colorTextures[0]
and when printing its pixel format, it seems to be passed through from the configuration.
If there is any way to disable this behavior, or to apply an inverse function so that I get the original value back out of the shader, that would be appreciated.
Is there any way to place 3D objects, maybe using ARKit or MetalKit, in a video?
I have tried extracting frames from the video, drawing a cube using an SCNNode, rendering it into a UIImage, and then gathering all the images and creating a video.
But this is not a feasible solution, as it creates a huge memory spike and ultimately gives a memory warning.
So is there any other way to draw 3D objects onto a video file?
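One alternative that avoids the UIImage round trip, sketched under the assumption that the content to overlay is an SCNScene and that each video frame is available as an MTLTexture, is to composite the SceneKit content offscreen with SCNRenderer directly into the frame's texture:

import SceneKit
import Metal

// Sketch: draw SceneKit content on top of an existing Metal texture for one frame.
// `scene`, `commandQueue` and `frameTexture` are assumed to already exist.
func renderOverlay(scene: SCNScene,
                   device: MTLDevice,
                   commandQueue: MTLCommandQueue,
                   onto frameTexture: MTLTexture,
                   at time: TimeInterval) {
    // In a real pipeline, create the renderer once and reuse it for every frame.
    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = scene

    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = frameTexture
    passDescriptor.colorAttachments[0].loadAction = .load    // keep the video frame
    passDescriptor.colorAttachments[0].storeAction = .store

    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    let viewport = CGRect(x: 0, y: 0,
                          width: frameTexture.width, height: frameTexture.height)
    renderer.render(atTime: time,
                    viewport: viewport,
                    commandBuffer: commandBuffer,
                    passDescriptor: passDescriptor)
    commandBuffer.commit()
}

Getting each video frame into and back out of an MTLTexture without copies is its own topic (CVMetalTextureCache is the usual route), but keeping the whole compositing step on the GPU should avoid the per-frame UIImage allocations that cause the memory spike.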
I'm trying to figure out how to engrave some text into this ellipsoid mesh. So far the only thing I've found that comes close to what I'm looking for is SCNText, but it floats above the ellipsoid and doesn't conform to the curved shape.
let allocator = MTKMeshBufferAllocator(device: MTLCreateSystemDefaultDevice()!)
let disc = MDLMesh.newEllipsoid(
    withRadii: vector_float3(Float(discDiameter/2), Float(discDiameter/2), Float(discThickness/2)),
    radialSegments: 64,
    verticalSegments: 64,
    geometryType: .triangles,
    inwardNormals: false,
    hemisphere: false,
    allocator: allocator
)
let discGeometry = SCNGeometry(mdlMesh: disc)
let material = createIridescentMaterial()
discGeometry.materials = [material]
I had a problem where my app did not work outside Xcode (I had to include MetalKit.framework).
I found a simple trick to get my NSLog output and locate the issue:
NSURL *downloadsDirectory = [[[NSFileManager defaultManager] URLsForDirectory:NSDownloadsDirectory inDomains:NSUserDomainMask] firstObject];
NSURL *fileURL = [downloadsDirectory URLByAppendingPathComponent:@"Log.txt"];
NSString *filePath = [fileURL path];
// Redirect stderr to the file
freopen([filePath fileSystemRepresentation], "a+", stderr);
Now you get debug info in your Downloads folder!
I've got an iOS app that is using MetalKit to display raw video frames coming in from a network source. I read the pixel data from the packets into a single MTLTexture, several rows at a time, which is drawn into an MTKView each time a frame has been completely sent over the network. The app works, but only for several seconds (a seemingly random duration) before the MTKView freezes (while packets are still being received).
Watching the debugger while my app was running revealed that the freezing of the display happened when there was a large spike in memory. Seeing the memory profile in Instruments revealed that the spike was related to a rapid creation of many IOSurfaces and IOAccelerators. Profiling CPU Usage shows that CAMetalLayerPrivateNextDrawableLocked is what happens during this rapid creation of surfaces. What does this function do?
Being a complete newbie to iOS programming as a whole, I wonder if this issue comes from a misuse of the MetalKit library. Below is the code that I'm using to render the video frames themselves:
class MTKViewController: UIViewController, MTKViewDelegate {
    /// Metal texture to be drawn whenever the view controller is asked to render its view.
    private var metalView: MTKView!
    private var device = MTLCreateSystemDefaultDevice()
    private var commandQueue: MTLCommandQueue?
    private var renderPipelineState: MTLRenderPipelineState?
    private var texture: MTLTexture?
    private var networkListener: NetworkListener!
    private var textureGenerator: TextureGenerator!
    override public func loadView() {
        super.loadView()
        assert(device != nil, "Failed creating a default system Metal device. Please, make sure Metal is available on your hardware.")
        initializeMetalView()
        initializeRenderPipelineState()
        networkListener = NetworkListener()
        textureGenerator = TextureGenerator(width: streamWidth, height: streamHeight, bytesPerPixel: 4, rowsPerPacket: 8, device: device!)
        networkListener.start(port: NWEndpoint.Port(8080))
        networkListener.dataRecievedCallback = { data in
            self.textureGenerator.process(data: data)
        }
        textureGenerator.onTextureBuiltCallback = { texture in
            self.texture = texture
            self.draw(in: self.metalView)
        }
        commandQueue = device?.makeCommandQueue()
    }
    public func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        /// need implement?
    }
    public func draw(in view: MTKView) {
        guard
            let texture = texture,
            let _ = device
        else { return }
        let commandBuffer = commandQueue!.makeCommandBuffer()!
        guard
            let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
            let currentDrawable = metalView.currentDrawable,
            let renderPipelineState = renderPipelineState
        else { return }
        currentRenderPassDescriptor.renderTargetWidth = streamWidth
        currentRenderPassDescriptor.renderTargetHeight = streamHeight
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)!
        encoder.pushDebugGroup("RenderFrame")
        encoder.setRenderPipelineState(renderPipelineState)
        encoder.setFragmentTexture(texture, index: 0)
        encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
        encoder.popDebugGroup()
        encoder.endEncoding()
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
    private func initializeMetalView() {
        metalView = MTKView(frame: CGRect(x: 0, y: 0, width: streamWidth, height: streamWidth), device: device)
        metalView.delegate = self
        metalView.framebufferOnly = true
        metalView.colorPixelFormat = .bgra8Unorm
        metalView.contentScaleFactor = UIScreen.main.scale
        metalView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(metalView, at: 0)
    }
    /// initializes render pipeline state with a default vertex function mapping texture to the view's frame and a simple fragment function returning texture pixel's value.
    private func initializeRenderPipelineState() {
        guard let device = device, let library = device.makeDefaultLibrary() else {
            return
        }
        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        pipelineDescriptor.rasterSampleCount = 1
        pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
        pipelineDescriptor.depthAttachmentPixelFormat = .invalid
        /// Vertex function to map the texture to the view controller's view
        pipelineDescriptor.vertexFunction = library.makeFunction(name: "mapTexture")
        /// Fragment function to display texture's pixels in the area bounded by vertices of `mapTexture` shader
        pipelineDescriptor.fragmentFunction = library.makeFunction(name: "displayTexture")
        do {
            renderPipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
        }
        catch {
            assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
            return
        }
    }
}
My question is simply: what gives?
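One thing that might be related (a guess, not a confirmed diagnosis): draw(in:) is being called directly from the network callback, outside the view's own drawing cadence and off the main thread, so every completed frame asks the layer for a new drawable immediately. A sketch of letting the view pace itself instead, using the same names as above:

// In initializeMetalView(): only draw when explicitly asked, on the main thread.
metalView.isPaused = true
metalView.enableSetNeedsDisplay = true

// In loadView(): publish the texture and request a redraw instead of calling
// draw(in:) straight from the network thread.
textureGenerator.onTextureBuiltCallback = { texture in
    DispatchQueue.main.async {
        self.texture = texture
        self.metalView.setNeedsDisplay()
    }
}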
Hi,
Since iOS 17, when setting a weight on an SCNMorpher, the normals become completely wrong. As you can see below, it only happens when there are vertices along an edge.
Has anyone encountered that problem and found a solution?
Thanks
Reported: FB13798652
Hello,
I am using CAMetalDisplayLink to render my Metal layer, and I am trying to calculate the timestamp for the next render to compare with the current timestamp.
I noticed that the timestamp from the display link Update is not like a Date timestamp. It looks like it is relative to some kind of phone boot or other kernel process start time. I tried
func bootTime() -> TimeInterval? {
    var tv = timeval()
    var tvSize = MemoryLayout<timeval>.size
    let err = sysctlbyname("kern.boottime", &tv, &tvSize, nil, 0);
    guard err == 0, tvSize == MemoryLayout<timeval>.size else {
        return nil
    }
    return Double(tv.tv_sec) + Double(tv.tv_usec) / 1_000_000.0
}
And I got:
calc: 1715680889.6883893
real: 1715680878.01257
It's close, but still about 10 seconds different, so it's probably not kern.boottime but something similar.
Does anybody know what I should use to get the correct timestamp?
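In case it's useful: display link timestamps are generally expressed in the mach host time base (the same base CACurrentMediaTime() uses) rather than wall-clock time, which would explain the offset from kern.boottime. A minimal check, assuming CAMetalDisplayLink behaves like CADisplayLink in this respect:

import QuartzCore

// Compare the update's target timestamp against CACurrentMediaTime(), which
// shares the mach_absolute_time base (assumption: CAMetalDisplayLink reports
// its timestamps in that base, as CADisplayLink does).
func logTiming(targetTimestamp: CFTimeInterval) {
    let now = CACurrentMediaTime()
    print("next frame expected in \((targetTimestamp - now) * 1000) ms")
}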
I am wanting to create a 3D video game in Xcode for macOS, iOS, iPadOS, tvOS, and visionOS. I have heard that there are a few different ways to go about this, such as MetalKit or SceneKit. These libraries seem to have few examples and little documentation, so I am wondering:
Are they still being developed/supported?
Which platform should I make a game in?
Where are some resources to learn how to use these platforms?
Are there other better platforms that I am just not aware of?
Thanks!
Looking for info on the best way to convert an MTLTexture to an SKTexture.
I am migrating a project where a SpriteKit scene is rendered in an SKView and relies heavily on the SKView.texture(from: SKNode) API.
I am trying to migrate to an SKRenderer-based SpriteKit scene, and I'm wondering about the best way to grab an SKTexture from an SKNode when rendering in an MTKView instead of an SKView.
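One possible route (a sketch, probably fine for occasional snapshots but slow if done every frame) is to go through Core Image: wrap the MTLTexture in a CIImage, render it to a CGImage, and build the SKTexture from that:

import CoreImage
import ImageIO
import Metal
import SpriteKit

// Sketch: MTLTexture -> CIImage -> CGImage -> SKTexture.
// Assumes the texture uses a format Core Image can read (e.g. bgra8Unorm).
func makeSKTexture(from texture: MTLTexture,
                   context: CIContext = CIContext()) -> SKTexture? {
    guard var image = CIImage(mtlTexture: texture, options: nil) else { return nil }
    // Metal textures are top-left origin; flip to match Core Image's bottom-left origin.
    image = image.oriented(.downMirrored)
    guard let cgImage = context.createCGImage(image, from: image.extent) else { return nil }
    return SKTexture(cgImage: cgImage)
}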