Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

SceneKit custom physics fields using wrong position?
In the simplest case I can come up with, I create a scene (either fully or partially in code) with a single dynamic body, located slightly away from the origin. I give the body a charge and also add an electric field to the node. The body does nothing (as expected, since it's the source of the field). However, if I replace that field with a custom field (which does nothing except report back the passed-in position value), the position shown is the location of the body in the local space of its parent (in this case, the root node) rather than that of the node the field is attached to (i.e. itself). I've attached the code customising the SwiftUI app template. Hopefully someone can tell me what I'm doing wrong?

ContentView customisation…

```swift
struct ContentView: View {
    var body: some View {
        SceneView(scene: ElectricScene(), options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}
```

And the code to create the scene…

```swift
import Foundation
import SceneKit

class ElectricScene: SCNScene {
    override init() {
        super.init()
        physicsWorld.gravity = SCNVector3(0, 0, 0)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 10)
        rootNode.addChildNode(cameraNode)

        let ballNode = SCNNode(geometry: SCNSphere(radius: 0.5))
        ballNode.position = SCNVector3(2, 0, 0)
        ballNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
        ballNode.physicsBody?.charge = -1
        rootNode.addChildNode(ballNode)

//        ballNode.physicsField = SCNPhysicsField.electric()
        ballNode.physicsField = SCNPhysicsField.customField { position, _, _, _, _ in
            print(position)
            return SCNVector3Zero
        }
    }

    @available(*, unavailable)
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

This (repeatedly) prints out the following…

```
SCNVector3(x: 2.0, y: 0.0, z: 0.0)
```

…which is the position of the node relative to the root node, rather than relative to the source of the field (itself).
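A minimal workaround sketch, assuming (as the print output above suggests) that the evaluator receives the position in the root node's space; the weak captures and the `convertPosition(_:to:)` call are my additions, not a confirmed fix:

```swift
// Hypothetical workaround: convert the reported position into the
// field node's own coordinate space before using it.
ballNode.physicsField = SCNPhysicsField.customField { [weak ballNode, weak self] position, _, _, _, _ in
    guard let ballNode, let root = self?.rootNode else { return SCNVector3Zero }
    // Assumes `position` is expressed in the root node's space, as observed above.
    let local = root.convertPosition(position, to: ballNode)
    print(local) // expected: (0, 0, 0) for the field's own source body
    return SCNVector3Zero
}
```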
Replies: 1 · Boosts: 0 · Views: 250 · Activity: 4w

What does CAMetalLayerWantsCompositingDependencies in Info.plist do?
I've noticed a major third-party app has the following flag set to 1/true in its Info.plist:

`CAMetalLayerWantsCompositingDependencies`

Does anyone know if it's recognized by Core Animation / Metal, and what it's supposed to do? It might, of course, have no relationship to the OS at all, being defined by and for that app alone... but since it looks very much like an unofficial/undocumented setting, it would be great to know what problem it solves. I happen to have issues related to compositing other CALayers over a CAMetalLayer in my app... so this definitely stood out as interesting. Thank you!
Replies: 0 · Boosts: 0 · Views: 158 · Activity: 2w

Metal Inline Functions
Hi! How do I define and call an inline function in Metal? Or a simple function that returns some value. Case:

```metal
inline uint index4D(constant _4D& shape,
                    constant uint& n,
                    constant uint& c,
                    constant uint& h,
                    constant uint& w) {
    return n * shape.C * shape.H * shape.W + c * shape.H * shape.W + h * shape.W + w;
}
```

When I call it in my kernel function I get a No matching function for call error. Thx in advance.
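A minimal sketch of one common cause and fix, offered as an assumption about this case rather than a confirmed diagnosis: in the Metal Shading Language the address space of a reference parameter is part of its type, so `constant uint&` parameters will not bind to thread-local values such as loop counters or components of `thread_position_in_grid`. Passing the indices by value avoids the mismatch. The `_4D` field names and the `fillOnes` kernel are assumptions for illustration:

```metal
#include <metal_stdlib>
using namespace metal;

struct _4D { uint N, C, H, W; }; // assumed layout, matching the fields used above

// Taking the indices by value lets call sites pass thread-local values
// without an address-space mismatch.
inline uint index4D(constant _4D &shape, uint n, uint c, uint h, uint w) {
    return n * shape.C * shape.H * shape.W
         + c * shape.H * shape.W
         + h * shape.W
         + w;
}

kernel void fillOnes(device float *out   [[buffer(0)]],
                     constant _4D &shape [[buffer(1)]],
                     uint3 gid [[thread_position_in_grid]]) {
    // gid components are thread-local and would not bind to `constant uint&`.
    out[index4D(shape, 0, gid.z, gid.y, gid.x)] = 1.0f;
}
```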
Replies: 2 · Boosts: 0 · Views: 180 · Activity: 2w

RealityView Attachments on iOS 18 & Visually Appealing AR Labeling Alternatives
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene, and the labels could be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView is supported on iOS 18, so I tried running the RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:

```swift
import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Setup your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}
```

When I remove the `attachments` keyword and its block, the errors mostly go away, but that doesn't help me, since I want to attach SwiftUI views to anchor entities in RealityKit. As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
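One possible fallback while attachments remain visionOS-only, offered as a sketch rather than a confirmed substitute: rasterize the SwiftUI label with `ImageRenderer` and apply it as an unlit texture on a plane entity. This loses interactivity and live animation, and the `makeLabelEntity` helper name is hypothetical:

```swift
import SwiftUI
import RealityKit

// Hypothetical helper: render a SwiftUI view to a texture and show it on a plane.
@MainActor
func makeLabelEntity(text: String) throws -> ModelEntity {
    let renderer = ImageRenderer(content:
        Text(text)
            .font(.title3)
            .foregroundColor(.white)
            .padding(5)
            .background(Color.black.opacity(0.7))
    )
    renderer.scale = 3 // supersample for legibility at AR viewing distances
    guard let cgImage = renderer.cgImage else {
        throw NSError(domain: "LabelEntity", code: 1)
    }
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let aspect = Float(cgImage.width) / Float(cgImage.height)
    return ModelEntity(mesh: .generatePlane(width: 0.1 * aspect, height: 0.1),
                       materials: [material])
}
```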
Replies: 0 · Boosts: 0 · Views: 192 · Activity: 2w

How to receive keyboard/mouse on VisionOS?
I tried using the GameController APIs for this, but they didn't seem to work. Is that the recommended API for handling keyboard/mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS. visionOS 2.0 touts keyboard and mouse support, and the simulator can even forward keyboard/mouse to the app, but there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
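For reference, a minimal sketch of the GameController keyboard pattern as documented for iOS/macOS; whether these notifications fire on visionOS is exactly the open question here:

```swift
import GameController

// Observe keyboard connection, then install a key handler.
NotificationCenter.default.addObserver(
    forName: .GCKeyboardDidConnect, object: nil, queue: .main) { note in
    guard let keyboard = note.object as? GCKeyboard else { return }
    keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
        print("key \(keyCode) pressed: \(pressed)")
    }
}

// A keyboard may already be connected before the observer is registered.
if let keyboard = GCKeyboard.coalesced {
    print("keyboard already connected: \(keyboard)")
}
```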
Replies: 1 · Boosts: 0 · Views: 168 · Activity: 2w

Metal Compute Overhead
Hello, we are experimenting with Metal to accelerate some peculiar numerical computation. Our workloads are relatively small, so the ability to avoid moving data to and from the GPU's memory is very appealing. However, we are observing higher overhead compared to CUDA, which negates the benefits of avoiding data transfer. In our tests using an empty kernel, CUDA completes in 0.001 ms (Intel i7 10700K, RTX 3080), while Metal's waitUntilCompleted takes 0.12 ms (M2 Max). As we do not have prior experience with Metal, we are wondering whether we are using the APIs correctly and this timing is expected, or whether there is a way to reduce it. Thank you in advance for any comments!

test-metal.cpp
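For context, a minimal Swift sketch of the kind of measurement described (the `emptyKernel` function name is an assumption); the blocking `waitUntilCompleted` round trip is what tends to dominate for tiny workloads:

```swift
import Metal
import Dispatch

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = device.makeDefaultLibrary()!
// "emptyKernel" is a hypothetical no-op kernel in the app's .metal sources.
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "emptyKernel")!)

let start = DispatchTime.now()
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted() // the blocking round trip being measured

let elapsedMs = Double(DispatchTime.now().uptimeNanoseconds
                       - start.uptimeNanoseconds) / 1e6
print("CPU round trip: \(elapsedMs) ms")
// GPU-side execution time alone, excluding scheduling and the CPU wait:
print("GPU time: \((commandBuffer.gpuEndTime - commandBuffer.gpuStartTime) * 1000) ms")
```

Comparing `gpuEndTime - gpuStartTime` with the blocking round-trip time separates kernel cost from scheduling and CPU-wait overhead, which is often where the extra ~0.1 ms lives.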
Replies: 0 · Boosts: 0 · Views: 199 · Activity: 2w

GCControllerDidConnect notification not received in VisionOS 2.0
I am unable to get visionOS 2.0 (simulator) to receive the GCControllerDidConnect notification, and thus am unable to set up support for a gamepad. However, it works in visionOS 1.2. For visionOS 2.0 I've tried adding:

- the .handlesGameControllerEvents(matching: .gamepad) attribute to the view
- Supports Controller User Interaction to Info.plist
- Supported game controller types -> Extended Gamepad to Info.plist

…but the notification still doesn't fire. It does when the code is run from the visionOS 1.2 simulator; both simulators have the Send Game Controller To Device option enabled. Here is the example code. It's based on the Xcode project template; the only files updated were ImmersiveView.swift and Info.plist, as detailed above:

```swift
import SwiftUI
import GameController
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
            NotificationCenter.default.addObserver(
                forName: NSNotification.Name.GCControllerDidConnect,
                object: nil, queue: nil) { _ in
                print("Handling GCControllerDidConnect notification")
            }
        }
        .modify {
            if #available(visionOS 2.0, *) {
                $0.handlesGameControllerEvents(matching: .gamepad)
            } else {
                $0
            }
        }
    }
}

extension View {
    func modify<T: View>(@ViewBuilder _ modifier: (Self) -> T) -> some View {
        return modifier(self)
    }
}
```
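One thing worth ruling out, a sketch based on a common GameController gotcha rather than a confirmed fix for this regression: the connect notification is only posted for controllers that connect after the observer is registered, so a controller that was already attached when the view appeared can be picked up by querying `GCController.controllers()` directly:

```swift
import GameController

func attachToGamepads() {
    // Controllers connected before the observer was registered never
    // trigger GCControllerDidConnect; query them explicitly.
    for controller in GCController.controllers() {
        print("already connected: \(controller.vendorName ?? "unknown")")
    }
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main) { note in
        guard let controller = note.object as? GCController else { return }
        print("connected: \(controller.vendorName ?? "unknown")")
    }
}
```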
Replies: 1 · Boosts: 0 · Views: 345 · Activity: Sep ’24

PhotogrammetrySession on non-Pro iPhone
Hello, I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs). I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro and 16 Pro) and on no non-Pro iPhone. My app does not use LiDAR, so that's not the problem. I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and iPhone 14, which both have the A15 Bionic, whereas only the first one is compatible). Did I miss something that could explain these restrictions? Is there any plan to make this class usable on every iPhone powerful enough to run it? Thanks in advance for answering.
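Not an explanation for the restriction, but for gating the feature at runtime RealityKit exposes a support check, so the device list never needs to be hard-coded; a minimal sketch:

```swift
import RealityKit

// PhotogrammetrySession.isSupported reflects exactly the device restriction
// being asked about; use it instead of hard-coding model names.
if PhotogrammetrySession.isSupported {
    print("Object reconstruction is available on this device")
} else {
    print("Not supported; offer capture-only or a server-side pipeline instead")
}
```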
Replies: 0 · Boosts: 0 · Views: 116 · Activity: 2w

SKTexture used for SceneKit object is rendered too bright
I would like to preload and use some images for both SpriteKit and SceneKit models (my game uses SceneKit with a SpriteKit overlay), and as far as I can see the only efficient way is to create and preload SKTexture objects, which can be supplied to SKSpriteNode(texture:) and SCNMaterial.diffuse.contents. The problem is that SKTextures are rendered too bright in SceneKit, for some unknown reason. Here is a comparison between rendering an image (from a URL) and an SKTexture, and the code that produces it:

```swift
let url = Bundle.main.url(forResource: "art.scnassets/texture.png", withExtension: nil)!

let plane1 = SCNPlane(width: 10, height: 10)
plane1.firstMaterial!.diffuse.contents = url.path
let node1 = SCNNode(geometry: plane1)
node1.position.x = -5
scene.rootNode.addChildNode(node1)

let plane2 = SCNPlane(width: 10, height: 10)
plane2.firstMaterial!.diffuse.contents = SKTexture(image: NSImage(byReferencing: url))
let node2 = SCNNode(geometry: plane2)
node2.position.x = 5
scene.rootNode.addChildNode(node2)
```

This issue was already mentioned in this other post, but since I wasn't notified of the reply from Quinn asking about the feedback number I created at the time, it didn't make any progress.
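A possible workaround sketch, assuming (as the comparison suggests) that only the SKTexture path renders too bright: preload the CGImage once and share it, wrapping it in an SKTexture for SpriteKit while handing the CGImage itself to SceneKit, so SceneKit applies its own color management. The `url` here refers to the one in the snippet above:

```swift
import SceneKit
import SpriteKit

// Preload once; share the backing CGImage between both frameworks.
let image = NSImage(byReferencing: url)
guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
    fatalError("couldn't decode texture")
}

let spriteTexture = SKTexture(cgImage: cgImage)     // SpriteKit path
let sprite = SKSpriteNode(texture: spriteTexture)

let plane = SCNPlane(width: 10, height: 10)
plane.firstMaterial!.diffuse.contents = cgImage     // SceneKit path, bypassing SKTexture
```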
Replies: 5 · Boosts: 0 · Views: 253 · Activity: 2w

Nimbus Steel Series not working with AVP Simulator
I have this game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS. I have the simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
Replies: 3 · Boosts: 0 · Views: 234 · Activity: Oct ’24

Does the SpriteView of an SKScene have layers? Unable to get magnifying glass view to work with scene.
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen. I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that `magnifiedView?.layer.render(in: context)` in:

```swift
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
```

can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass. Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work. Any pointers would be greatly appreciated.

Full code below:

```swift
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet { layer.borderColor = borderColor.cgColor }
    }

    public var borderWidth: CGFloat = 3 {
        didSet { layer.borderWidth = borderWidth }
    }

    public var showsCrosshair = true {
        didSet { crosshair.isHidden = !showsCrosshair }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet { crosshair.strokeColor = crosshairColor.cgColor }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet { crosshair.lineWidth = crosshairWidth }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If above disabled, no change
        // Possible that nothing's being rendered into context
        // Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
```
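One possible explanation, offered as an assumption rather than a verified diagnosis: SKView renders with Metal, and `CALayer.render(in:)` generally cannot capture Metal-backed layer content into a Core Graphics context, which would leave the context empty exactly as described. A sketch of a snapshot-based alternative using `drawHierarchy(in:afterScreenUpdates:)`, which captures what is actually on screen:

```swift
// Sketch: replace layer.render(in:) with a view snapshot, which can capture
// Metal-backed views such as SKView; the magnification transform is unchanged.
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext(),
          let magnifiedView else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview() // keep the magnifier itself out of the snapshot
    magnifiedView.drawHierarchy(in: magnifiedView.bounds, afterScreenUpdates: false)
    magnifiedView.addSubview(self)
}
```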
Replies: 0 · Boosts: 0 · Views: 185 · Activity: 2w

Using the same texture for both input & output of a fragment shader
Hello, this exact question was already asked in this forum (8 years ago), but I can't find a definitive answer: does Metal allow using the same color texture as both an input and an output (color attachment) of a fragment shader? Is the behavior defined somewhere? I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident. Would love to get a clarification! Thanks ahead!
Replies: 8 · Boosts: 1 · Views: 741 · Activity: May ’24

SK3DNode hitTest not working in SpriteKit/SceneKit
I have this minimal repro code:

```swift
import SpriteKit
import GameplayKit

class MyGameScene3D: SCNScene {
    weak var node3D: MyNode3D!

    override init() {
        super.init()

        background.contents = UIColor.green

        let playground = SCNNode()
        playground.boundingBox = (
            min: SCNVector3(x: 0, y: 0, z: 0),
            max: SCNVector3(x: 10, y: 10, z: 10))

        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
        box.position = SCNVector3(x: 5, y: 5, z: 5)
        playground.addChildNode(box)

        playground.position = SCNVector3(x: 0, y: 0, z: 0)
        rootNode.addChildNode(playground)

        let light = SCNLight()
        light.type = .ambient
        let lightNode = SCNNode()
        lightNode.light = light
        rootNode.addChildNode(lightNode)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.eulerAngles = SCNVector3(x: -3.14/2, y: 0, z: 0)
        cameraNode.position = SCNVector3(x: 5, y: 11, z: 5)
        rootNode.addChildNode(cameraNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handleTouchBegan(_ location: CGPoint) {
        let res = node3D.hitTest(location)
        print(res)
    }
}

class MyNode3D: SK3DNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let scene = scnScene as! MyGameScene3D
        let location = touch.location(in: self)
        print(location)
        scene.handleTouchBegan(location)
    }
}

class GameScene: SKScene {
    init() {
        super.init(size: CGSize(width: 500, height: 1000))
        self.backgroundColor = .red

        let node3D = MyNode3D()
        let scene3D = MyGameScene3D()
        node3D.scnScene = scene3D
        scene3D.node3D = node3D
        node3D.isUserInteractionEnabled = true
        node3D.viewportSize = CGSize(width: 100, height: 200)
        node3D.position = CGPoint(x: 50, y: 100)
        addChild(node3D)

        let up = SKSpriteNode(color: .blue, size: CGSize(width: 500, height: 10))
        up.anchorPoint = CGPoint(x: 0, y: 0)
        up.position = CGPoint(x: 0, y: 200)
        addChild(up)

        let right = SKSpriteNode(color: .gray, size: CGSize(width: 10, height: 500))
        right.anchorPoint = CGPoint(x: 0, y: 0)
        right.position = CGPoint(x: 100, y: 0)
        addChild(right)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Basically, I have an SK3DNode of size 100x200, positioned at the lower left corner of the screen (see screenshot below). In this SK3DNode I have an SCNScene, where I put a 10x10x10 Playground node at position (0, 0, 0). Then I put a camera node right at the top of the Playground at position (5, 11, 5); the camera looks down along the -y axis, with Euler angles (-90, 0, 0). In this Playground I put a small box of size 1x1x1 at the center of the Playground, at (5, 5, 5). The two long bars (gray and blue) are just there to indicate the boundary of the SK3DNode.

The resulting rendering is correct (see screenshot below). However, I can't get the hit test to work. I tap on the center 1x1x1 box on screen and get the right coordinate printed out, but the hit-test result is empty. I want to get the center 1x1x1 box when hitting there. How can I do so?

Update: I tried to loop through all the pixels from -2000 to 2000, and still no hit:

```swift
func handleTouchBegan(_ location: CGPoint) {
    for x in -2000...2000 {
        print("handling x: \(x)")
        for y in -2000...2000 {
            let res = node3D.hitTest(location)
            if !res.isEmpty {
                print("\(x), \(y), \(res)")
            }
        }
    }
    print("Done")
}
```
Replies: 1 · Boosts: 0 · Views: 198 · Activity: 3w

ARKit body tracking (ARBodyAnchor) broken in iOS 18.0 + 18.1
I'm wondering if anyone can suggest a workaround for the broken ARKit body tracking in iOS / iPadOS 18.0 and 18.1? The orientation of the foot bones (and possibly other bones) is incorrect in the ARBodyAnchor returned via the ARView.session.delegate update method. It works correctly in iOS / iPadOS 17.x. The same failure occurs in a SceneKit app via an ARBodyAnchor in ARSCNViewDelegate. You can easily verify the problem using Apple's own sample app: https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d Any help would be greatly appreciated.
Replies: 2 · Boosts: 0 · Views: 269 · Activity: Oct ’24

Cannot disable "Show Graphics HUD"
For some reason I can't disable the Graphics HUD. It's not really a problem during development, but it's also showing in TestFlight apps, for example when swiping down on the keyboard, but also in some other places. Of course I tried disabling the toggle, but even when it's off the HUD still shows. Even completely disabling Developer Mode does not work. Is this a known issue? I've already scrolled through possibly every Google search result, but I can't figure out how to solve this.
Replies: 0 · Boosts: 0 · Views: 158 · Activity: 3w

Metal rendering shows incorrect color, gputrace shows fragment function returning correct color
I'm experiencing a strange issue where I'm seeing black in a Metal drawable where it should be a different color. When I capture the frame and inspect the returned value from the fragment function, it's correct, but the drawable isn't. This screenshot hopefully illustrates the issue. I've not found any references to similar issues; I saw something about out-of-bounds or NaN values being dropped to 0 (which would be black), but the debugger doesn't indicate this is happening.
Replies: 1 · Boosts: 0 · Views: 164 · Activity: 3w