Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

What does CAMetalLayerWantsCompositingDependencies in Info.plist do?
I've noticed that a major third-party app sets the following flag to 1/true in its Info.plist: CAMetalLayerWantsCompositingDependencies. Does anyone know whether it's recognized by Core Animation / Metal, and what it's supposed to do? It may, of course, have no relationship to the OS at all, being defined by that app and for that app... but since it looks very much like an unofficial/undocumented setting, it would be great to know what problem it solves. I happen to have issues compositing other CALayers over a CAMetalLayer in my app, so this definitely stood out as interesting. Thank you!
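For reference, this is how such a key could be inspected at runtime; a minimal sketch, assuming only that the key holds a Boolean in the app's Info.plist (whether Core Animation or Metal actually reads it is exactly the open question):

    import Foundation

    // Hypothetical inspection of the undocumented key discussed above.
    // The key name comes from the third-party app's Info.plist; its
    // meaning, if any, is unconfirmed.
    let key = "CAMetalLayerWantsCompositingDependencies"
    if let flag = Bundle.main.object(forInfoDictionaryKey: key) as? Bool {
        print("\(key) = \(flag)")
    } else {
        print("\(key) is not set")
    }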
Replies: 0 · Boosts: 0 · Views: 25 · Last activity: 4h
Cannot use Metal graphics overview HUD with multiple CAMetalLayers
I have multiple CAMetalLayers that I render content to, and I've noticed that the graphics overview HUD does not function properly when I have more than one CAMetalLayer. The values reported are very strange; for example, FPS may read 999 or some large negative value. Is the HUD simply not designed to work with multiple CAMetalLayers or MTKViews? When I disable all but one of my CAMetalLayers, the HUD works as expected.
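For anyone trying to reproduce this, here is a minimal sketch of attaching the HUD to a single CAMetalLayer via its developerHUDProperties dictionary (macOS 13 / iOS 16 and later). The "mode" and "logging" values follow Apple's documentation; the multi-layer behavior described above is the part in question:

    import QuartzCore

    // Opt one CAMetalLayer into the Metal performance HUD. Alternatively,
    // launching the process with MTL_HUD_ENABLED=1 enables the HUD for
    // every Metal layer at once.
    func enableHUD(on layer: CAMetalLayer) {
        layer.developerHUDProperties = [
            "mode": "default",   // show the standard HUD overlay
            "logging": "default" // also log HUD data
        ]
    }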
Replies: 0 · Boosts: 0 · Views: 107 · Last activity: 2d
Can the simulator be used to test a GameKit app?
I'm sure this question has been asked many times before, but I cannot find a good answer. In my case it just doesn't work. Here's what I did:

1. Created a couple of test user accounts in App Store Connect under the sandbox section.
2. Launched two different simulators via Xcode.
3. On each simulator, logged into the test iCloud account and Game Center accordingly.
4. Deployed and launched my code via Xcode.

What happens is that the GameKit code fails to find any peers no matter what I do. Sending an invite doesn't work either, because the simulator displays an error message saying "you need to log-in into icloud account first" although it's already logged in. I tried the same code on two different physical devices with two real iCloud accounts and it works as expected, but that's not a viable path for developing and debugging an app. I'm using the latest Xcode 16.1 running on macOS 15.1. Does anybody have a clue how to solve this?
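As a first sanity check (a sketch, not a fix): confirm that GameKit itself considers the local player authenticated in each simulator before any matchmaking runs. If the handler below reports an error even though iCloud is signed in, the problem is in the sandbox sign-in rather than in the peer discovery code:

    import GameKit
    import UIKit

    // Minimal authentication probe for the sandbox accounts described above.
    func checkGameCenterLogin(presenter: UIViewController) {
        GKLocalPlayer.local.authenticateHandler = { viewController, error in
            if let error {
                print("Game Center auth failed: \(error.localizedDescription)")
            } else if let viewController {
                // GameKit wants to show its sign-in UI; present it.
                presenter.present(viewController, animated: true)
            } else if GKLocalPlayer.local.isAuthenticated {
                print("Authenticated as \(GKLocalPlayer.local.displayName)")
            }
        }
    }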
Replies: 1 · Boosts: 0 · Views: 96 · Last activity: 2d
RealityView Attachments on iOS 18 & Visually Appealing AR Labeling Alternatives
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene. The labels can be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView is also supported on iOS 18. I tried running the RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:

    import SwiftUI
    import RealityKit

    struct PassportRealityView: View {
        let qrCodeCenter: SIMD3<Float>
        let assetID: String

        var body: some View {
            RealityView { content, attachments in
                // Setup your AR content, such as markers or 3D models
                if let qrAnchor = try? await Entity(named: "QRAnchor") {
                    qrAnchor.position = qrCodeCenter
                    content.add(qrAnchor)
                }
            } attachments: {
                Attachment(id: "passportTextAttachment") {
                    Text(assetID)
                        .font(.title3)
                        .foregroundColor(.white)
                        .background(Color.black.opacity(0.7))
                        .padding(5)
                        .cornerRadius(5)
                }
            }
            .frame(width: 300, height: 400)
        }
    }

When I remove the "attachments" keyword and its block, the errors mostly go away. That doesn't help me, though, as I want to attach SwiftUI views to anchor entities in RealityKit. As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
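One possible direction, sketched under assumptions: if attachments are unavailable on iOS 18, a SwiftUI view can be rasterized with ImageRenderer and applied to a plane entity as an unlit texture. The label becomes a static bitmap, so interactivity and live animation are lost; the helper name makeLabelEntity and the plane dimensions are illustrative, not an official API:

    import SwiftUI
    import RealityKit

    // Workaround sketch: render the SwiftUI label to a CGImage, wrap it in
    // a TextureResource, and show it on an unlit plane entity.
    @MainActor
    func makeLabelEntity(assetID: String) throws -> ModelEntity {
        let renderer = ImageRenderer(content:
            Text(assetID)
                .font(.title3)
                .foregroundColor(.white)
                .padding(5)
                .background(Color.black.opacity(0.7))
                .cornerRadius(5)
        )
        renderer.scale = 3 // oversample for legibility at AR viewing distances
        guard let cgImage = renderer.cgImage else {
            throw NSError(domain: "LabelRendering", code: 1)
        }
        let texture = try TextureResource.generate(from: cgImage,
                                                   options: .init(semantic: .color))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture))
        return ModelEntity(mesh: .generatePlane(width: 0.2, height: 0.1),
                           materials: [material])
    }

The resulting entity could then be parented to the same qrAnchor that the attachment version targets.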
Replies: 0 · Boosts: 0 · Views: 95 · Last activity: 2d
Metal Compute Overhead
Hello, We are experimenting with Metal to accelerate some peculiar numerical computation. Our workloads are relatively small, so the ability to avoid moving data to and from the GPU's memory is very appealing. However, we are observing higher overhead compared to CUDA, which negates the benefit of avoiding data transfer. In our tests using an empty kernel, CUDA completes in 0.001 ms (Intel i7 10700K, RTX 3080), while Metal's waitUntilCompleted takes 0.12 ms (M2 Max). As we do not have prior experience with Metal, we are wondering whether we are using the APIs correctly and this timing is expected, or whether there is a way to reduce it. Thank you in advance for any comment! test-metal.cpp
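For context, here is the shape of the measurement being described, sketched in Swift rather than the attached C++, and assuming an empty kernel named "noop" in the default library (the kernel name is illustrative). Note that commit followed by waitUntilCompleted times the full round trip, including CPU-side scheduling and completion signaling, not just GPU execution:

    import Metal
    import QuartzCore

    // Round-trip latency probe; "noop" is an assumed empty compute kernel.
    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!
    let noop = device.makeDefaultLibrary()!.makeFunction(name: "noop")!
    let pipeline = try! device.makeComputePipelineState(function: noop)

    let start = CACurrentMediaTime()
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    print("Round trip: \((CACurrentMediaTime() - start) * 1000) ms")

Batching several dispatches into one command buffer, or using addCompletedHandler instead of blocking on waitUntilCompleted, usually amortizes part of this per-submission overhead.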
Replies: 0 · Boosts: 0 · Views: 117 · Last activity: 3d
Metal Inline Functions
Hi! How do I define and call an inline function in Metal, or a simple function that returns a value? My case:

    inline uint index4D(constant _4D& shape,
                        constant uint& n,
                        constant uint& c,
                        constant uint& h,
                        constant uint& w) {
        return n * shape.C * shape.H * shape.W
             + c * shape.H * shape.W
             + h * shape.W
             + w;
    }

When I call it in my kernel function I get a "No matching function for call" error. Thanks in advance.
Replies: 1 · Boosts: 0 · Views: 103 · Last activity: 3d
PhotogrammetrySession on non-Pro iPhone
Hello, I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs). I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro, and 16 Pro) and on no non-Pro iPhone. My app does not use LiDAR, so that's not the problem. I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and iPhone 14, which both have the A15 Bionic, whereas only the first one is compatible). Did I miss something that could explain these restrictions? Is there any plan to make this class usable by every iPhone powerful enough to run it? Thanks in advance.
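Whatever the reason for the restriction, the supported-device list does not need to be hard-coded: RealityKit exposes a runtime capability check. A minimal sketch:

    import RealityKit

    // Gate the photogrammetry feature on the runtime capability check
    // instead of a device-model list.
    if PhotogrammetrySession.isSupported {
        // Create the session and submit the photographs.
    } else {
        // Hide the feature or fall back to a server-side pipeline.
    }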
Replies: 0 · Boosts: 0 · Views: 49 · Last activity: 3d
How to receive keyboard/mouse on VisionOS?
I tried using the GameController APIs for this, but they didn't seem to work. Is that the recommended API for handling keyboard/mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS. visionOS 2.0 touts keyboard and mouse support, and the simulator can even forward keyboard/mouse events to the app. But there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
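For reference, here is the GameController-framework pattern being attempted; the API itself is the documented one on other platforms, but whether these notifications actually fire on visionOS is exactly the open question:

    import GameController

    // Observe keyboard connection and key events via the GameController
    // framework; the same pattern exists for GCMouse (.GCMouseDidConnect).
    NotificationCenter.default.addObserver(
        forName: .GCKeyboardDidConnect, object: nil, queue: .main) { note in
        guard let keyboard = note.object as? GCKeyboard else { return }
        keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
            print("key \(keyCode) \(pressed ? "down" : "up")")
        }
    }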
Replies: 1 · Boosts: 0 · Views: 83 · Last activity: 3d
Does the SpriteView of an SKScene have layers? Unable to get magnifying glass view to work with scene.
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen. I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that the line magnifiedView?.layer.render(in: context) in:

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        magnifiedView?.addSubview(self)
    }

can be removed without altering the situation, suggesting that line is not doing what it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass. Does anyone know why this is? I don't think it's an issue with the code, so I'm suspecting it's something specific to SpriteKit or SKScene, likely related to how CALayers work. Any pointers would be greatly appreciated.

Full code below:

    import UIKit

    public class MagnifyingGlassView: UIView {
        public weak var magnifiedView: UIView? = nil {
            didSet {
                removeFromSuperview()
                magnifiedView?.addSubview(self)
            }
        }

        public var magnifiedPoint: CGPoint = .zero {
            didSet {
                center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
            }
        }

        public var offset: CGPoint = .zero

        public var radius: CGFloat = 50 {
            didSet {
                frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
                layer.cornerRadius = radius
                crosshair.path = crosshairPath(for: radius)
            }
        }

        public var scale: CGFloat = 2

        public var borderColor: UIColor = .lightGray {
            didSet { layer.borderColor = borderColor.cgColor }
        }

        public var borderWidth: CGFloat = 3 {
            didSet { layer.borderWidth = borderWidth }
        }

        public var showsCrosshair = true {
            didSet { crosshair.isHidden = !showsCrosshair }
        }

        public var crosshairColor: UIColor = .lightGray {
            didSet { crosshair.strokeColor = crosshairColor.cgColor }
        }

        public var crosshairWidth: CGFloat = 5 {
            didSet { crosshair.lineWidth = crosshairWidth }
        }

        private let crosshair: CAShapeLayer = CAShapeLayer()

        public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
            self.init(frame: .zero)
            layer.masksToBounds = true
            layer.addSublayer(crosshair)
            defer {
                self.offset = offset
                self.radius = radius
                self.scale = scale
                self.borderColor = borderColor
                self.borderWidth = borderWidth
                self.showsCrosshair = showsCrosshair
                self.crosshairColor = crosshairColor
                self.crosshairWidth = crosshairWidth
            }
        }

        public func magnify(at point: CGPoint) {
            guard magnifiedView != nil else { return }
            magnifiedPoint = point
            layer.setNeedsDisplay()
        }

        private func crosshairPath(for radius: CGFloat) -> CGPath {
            let path = CGMutablePath()
            path.move(to: .init(x: radius, y: 0))
            path.addLine(to: .init(x: radius, y: bounds.height))
            path.move(to: .init(x: 0, y: radius))
            path.addLine(to: .init(x: bounds.width, y: radius))
            return path
        }

        public override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.translateBy(x: radius, y: radius)
            context.scaleBy(x: scale, y: scale)
            context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
            removeFromSuperview()
            magnifiedView?.layer.render(in: context)
            // If above disabled, no change
            // Possible that nothing's being rendered into context
            // Could it be that SKScene view has no layer?
            magnifiedView?.addSubview(self)
        }
    }
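One avenue worth testing (a sketch under assumptions, not a confirmed fix): CALayer.render(in:) generally cannot read back content drawn by a Metal-backed layer, and SKView/SpriteView render through a CAMetalLayer, which would explain the empty context. UIView.drawHierarchy(in:afterScreenUpdates:) snapshots the composited view instead, and often captures what layer.render(in:) misses:

    import UIKit

    extension UIView {
        // Snapshot the view as it appears on screen, including Metal-backed
        // content that CALayer.render(in:) may not capture.
        func snapshotImage() -> UIImage {
            UIGraphicsImageRenderer(bounds: bounds).image { _ in
                _ = drawHierarchy(in: bounds, afterScreenUpdates: false)
            }
        }
    }

The magnifying glass could then draw a cropped, scaled region of this snapshot instead of calling magnifiedView?.layer.render(in: context).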
Replies: 0 · Boosts: 0 · Views: 113 · Last activity: 5d
SKTexture used for SceneKit object is rendered too bright
I would like to preload and use some images for both SpriteKit and SceneKit models (my game uses SceneKit with a SpriteKit overlay), and as far as I can see the only efficient way is to create and preload SKTexture objects, which can be supplied to both SKSpriteNode(texture:) and SCNMaterial.diffuse.contents. The problem is that SKTextures are rendered too bright in SceneKit, for some unknown reason. Here is a comparison between rendering an image (from a URL) and an SKTexture, and the code that produces it:

    let url = Bundle.main.url(forResource: "art.scnassets/texture.png", withExtension: nil)!

    let plane1 = SCNPlane(width: 10, height: 10)
    plane1.firstMaterial!.diffuse.contents = url.path
    let node1 = SCNNode(geometry: plane1)
    node1.position.x = -5
    scene.rootNode.addChildNode(node1)

    let plane2 = SCNPlane(width: 10, height: 10)
    plane2.firstMaterial!.diffuse.contents = SKTexture(image: NSImage(byReferencing: url))
    let node2 = SCNNode(geometry: plane2)
    node2.position.x = 5
    scene.rootNode.addChildNode(node2)

This issue was already mentioned in another post, but since I wasn't notified of the reply from Quinn asking for the feedback number I filed at the time, it didn't make any progress.
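One thing worth trying, purely as an unverified sketch reusing the url and plane2 names above: hand SceneKit the texture's underlying CGImage instead of the SKTexture itself, so both render paths receive the same bitmap while SpriteKit keeps using the shared SKTexture:

    // Unverified workaround sketch: share one preloaded SKTexture, but give
    // SceneKit its CGImage so the material goes down the image path.
    let texture = SKTexture(image: NSImage(byReferencing: url))
    plane2.firstMaterial!.diffuse.contents = texture.cgImage()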
Replies: 5 · Boosts: 0 · Views: 167 · Last activity: 6d
Cannot disable "Show Graphics HUD"
For some reason I can't disable the Graphics HUD. This is not really a problem for development, but it's also showing in TestFlight apps, for example when swiping down on the keyboard, but also in some other places. Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer Mode does not work. Is this a known issue? I've already scrolled through possibly every Google search result, but I can't figure out how to solve this.
Replies: 0 · Boosts: 0 · Views: 111 · Last activity: 6d
Metal rendering shows incorrect color, gputrace shows fragment function returning correct color
I'm experiencing a strange issue where I'm seeing black in a Metal drawable where there should be a different color. When I capture the frame and inspect the returned value from the fragment function, it's correct, but the drawable isn't. This screenshot hopefully illustrates the issue. I haven't found any references to similar issues. I saw something about out-of-bounds or NaN values being dropped to 0 (which would be black), but the debugger doesn't indicate this is happening.
Replies: 1 · Boosts: 0 · Views: 103 · Last activity: 1w
Normally distributed MPSMatrixRandom number generation generates NaN
When generating large arrays of random numbers, NaNs show up. They also show up at the same indices when using the same seed, leading me to believe that this is a bug in MPSMatrixRandom's normally distributed Float32 random number generation. It happens with both Philox and MTGP32. Is this intentional, and how do I work around it? See the original post for an MWE in Swift and Julia: https://github.com/JuliaGPU/Metal.jl/issues/474
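A minimal NaN scan along the lines of the reproduction described above (a sketch; the matrix size and seed are arbitrary, not taken from the linked issue):

    import Metal
    import MetalPerformanceShaders

    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!

    // 1024 x 1024 matrix of normally distributed Float32 values.
    let rows = 1024, cols = 1024
    let rowBytes = cols * MemoryLayout<Float>.stride
    let buffer = device.makeBuffer(length: rows * rowBytes, options: .storageModeShared)!
    let descriptor = MPSMatrixDescriptor(rows: rows, columns: cols,
                                         rowBytes: rowBytes, dataType: .float32)
    let matrix = MPSMatrix(buffer: buffer, descriptor: descriptor)

    let distribution = MPSMatrixRandomDistributionDescriptor
        .normalDistributionDescriptor(withMean: 0, standardDeviation: 1)
    let rng = MPSMatrixRandomPhilox(device: device, destinationDataType: .float32,
                                    seed: 42, distributionDescriptor: distribution)

    let commandBuffer = queue.makeCommandBuffer()!
    rng.encode(commandBuffer: commandBuffer, destinationMatrix: matrix)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    // Count NaNs on the CPU.
    let values = buffer.contents().bindMemory(to: Float.self, capacity: rows * cols)
    var nanCount = 0
    for i in 0..<(rows * cols) where values[i].isNaN { nanCount += 1 }
    print("NaNs found: \(nanCount)")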
Replies: 0 · Boosts: 1 · Views: 123 · Last activity: 1w
Metal and NVIDIA graphic driver
Hi, A user sent us a crash report that indicates an error occurring just after loading the default Metal library of our app.

Application Specific Information:
Crashing on exception: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array

The report pointed me to these (simplified) lines of code in the library setup:

    _vertexFunctions = [[NSMutableArray alloc] init];
    _fragmentFunctions = [[NSMutableArray alloc] init];
    id<MTLLibrary> library = [device newDefaultLibrary];

2 vertex shaders and 5 fragment shaders are then loaded and stored in these two arrays using this method:

    - (BOOL)addShaderNamed:(NSString *)name library:(id<MTLLibrary>)library isFragment:(BOOL)isFragment {
        id shader = [library newFunctionWithName:name];
        if (!shader) {
            ALOG(@"Error : Unable to find the shader named : “%@”", name);
            return NO;
        }
        [(isFragment ? _fragmentFunctions : _vertexFunctions) addObject:shader];
        return YES;
    }

As you can see, the arrays are not filled if the method fails... however, a few lines later they are used without checking whether they were actually filled, and that causes the crash. But this coding error doesn't explain why no shader of a certain type (or of both types) was added to the arrays, meaning: why did -newFunctionWithName: return nil for all the given names (since the implied array appears completely empty)?

Clue: this error has only been detected once, by a user running the app on macOS 10.13 with an NVIDIA Web Driver instead of the default macOS graphics driver. Moreover, it wasn't possible to reproduce the problem on the same OS using the native macOS driver. So my question is: are there any known conflicts between NVIDIA drivers and the use of Metal libraries? Or would this case require some specific options in the Metal implementation? Any help appreciated, thanks!
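For what it's worth, a defensive version of this setup (sketched in Swift for brevity, with illustrative names, not the app's actual code) fails loudly when the default library or any named function is missing, instead of crashing later on an empty array:

    import Metal

    // Load all shaders up front and refuse to continue on any failure,
    // so a driver that returns an unusable default library is caught early.
    func loadShaders(device: MTLDevice,
                     vertexNames: [String],
                     fragmentNames: [String]) -> (vertex: [MTLFunction], fragment: [MTLFunction])? {
        guard let library = device.makeDefaultLibrary() else {
            print("Error: default Metal library could not be loaded")
            return nil
        }
        func load(_ names: [String]) -> [MTLFunction]? {
            var result: [MTLFunction] = []
            for name in names {
                guard let fn = library.makeFunction(name: name) else {
                    print("Error: missing shader \(name)")
                    return nil
                }
                result.append(fn)
            }
            return result
        }
        guard let vertex = load(vertexNames), let fragment = load(fragmentNames) else { return nil }
        return (vertex, fragment)
    }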
Replies: 0 · Boosts: 0 · Views: 138 · Last activity: 1w