On macOS, system symbols display in an SKTexture as expected, with the correct color and aspect ratio.
But on iOS they are always displayed in black, and sometimes with a slightly wrong aspect ratio.
Is there a solution to this problem?
import SpriteKit
#if os(macOS)
import AppKit
#else
import UIKit
#endif
class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let systemImage = "square.and.arrow.up"
        let width = 400.0
        #if os(macOS)
        let image = NSImage(systemSymbolName: systemImage, accessibilityDescription: nil)!
            .withSymbolConfiguration(.init(hierarchicalColor: .white))!
        let scale = NSScreen.main!.backingScaleFactor
        image.size = CGSize(width: width * scale,
                            height: width / image.size.width * image.size.height * scale)
        #else
        let image = UIImage(systemName: systemImage)!
            .applyingSymbolConfiguration(.init(pointSize: width))!
            .applyingSymbolConfiguration(.init(hierarchicalColor: .white))!
        #endif
        let texture = SKTexture(image: image)
        print(image.size, texture.size(), image.size.width / image.size.height)
        let size = CGSize(width: width, height: width / image.size.width * image.size.height)
        addChild(SKSpriteNode(texture: texture, size: size))
    }
}
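For what it's worth, here is a rough, untested sketch of the kind of workaround I have been wondering about for the iOS side: bake the tint into a plain bitmap before creating the texture, so SpriteKit no longer receives a template image. The .white tint and the reuse of systemImage/width from above are just assumptions for illustration.

#if os(iOS)
// Untested sketch: flatten the symbol into a plain bitmap before creating the SKTexture.
let symbol = UIImage(systemName: systemImage)!
    .applyingSymbolConfiguration(.init(pointSize: width))!
    .withTintColor(.white, renderingMode: .alwaysOriginal)     // bake the color in
let flattened = UIGraphicsImageRenderer(size: symbol.size).image { _ in
    symbol.draw(in: CGRect(origin: .zero, size: symbol.size))  // preserves the aspect ratio
}
let flattenedTexture = SKTexture(image: flattened)
#endif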
I am trying to get a little game prototype up and running with Metal via the metal-cpp libraries, where I run everything natively at 120Hz with a coupled renderer and vsync turned on, so that I have the absolute minimum physically possible input-to-photon latency.
// Create the metal view
SDL_MetalView metal_view = SDL_Metal_CreateView(window);
CA::MetalLayer *swap_chain = (CA::MetalLayer *)SDL_Metal_GetLayer(metal_view);
// Set up the Metal device
MTL::Device *device = MTL::CreateSystemDefaultDevice();
swap_chain->setDevice(device);
swap_chain->setPixelFormat(MTL::PixelFormat::PixelFormatBGRA8Unorm);
swap_chain->setDisplaySyncEnabled(true);
swap_chain->setMaximumDrawableCount(2);
I am using SDL3 just for creating the window. Now, when I go through my game/render loop, I stall for a long time waiting for the next drawable, which is understandable, since my app finishes its work in about 2-3ms.
m_CurrentContext->m_Drawable = m_SwapChain->nextDrawable();
m_CurrentContext->m_CommandBuffer = m_CommandQueue->commandBuffer()->retain();
char frame_label[32];
snprintf(frame_label, sizeof(frame_label), "Frame %d", m_FrameIndex);
m_CurrentContext->m_CommandBuffer->setLabel(NS::String::string(frame_label, NS::UTF8StringEncoding));
m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal] = MTL::RenderPassDescriptor::alloc()->init();
MTL::RenderPassColorAttachmentDescriptor* cd = m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal]->colorAttachments()->object(0);
cd->setTexture(m_CurrentContext->m_Drawable->texture());
cd->setLoadAction(MTL::LoadActionClear);
cd->setClearColor(MTL::ClearColor( 0.53f, 0.81f, 0.98f, 1.0f ));
cd->setStoreAction(MTL::StoreActionStore);
However, my ProMotion display does not reliably run at 120Hz when fullscreen and using the direct-to-display path; it seems to run faster when windowed and composited, which is the opposite of what I would expect. The Metal HUD says 120Hz, but the delay in getting the next drawable, and what Instruments shows, say otherwise.
When I profile it, the game loop has completed and is sitting there waiting for the next drawable, but the display does not complete the frame in 8.33ms, so the whole thing slows down for no discernible reason.
Also, as a game developer, it is very strange that the command buffer actually needs the drawable texture to be free before it is allowed to encode commands; usually command buffers and swapping the front and back buffers are not directly dependent on each other, and you only need the render target texture free when you actually want to draw into it. I could give myself another drawable, but because I am completing in less than 3ms, all that would do is add another frame of latency.
I also looked at the FramePacing example, and its behaviour is even worse at achieving a high framerate with low latency; its direct-to-display path is always rejected for some reason.
Is this just a flaw in the Metal API? Or am I missing something important? I hope someone can help - the behaviour of the display is baffling.
Summary:
I’m working on a VisionOS project where I need to dynamically load a .bundle file containing RealityKit content from the app’s Application Support directory. The .bundle is saved to disk after being downloaded or retrieved as an On-Demand Resource (ODR).
Sample project with the issue:
GitHub repo. Run the target test-odr to use the local bundle and reproduce the crash.
Overall problem:
Setup: Add a .bundle named RealityKitContent_RealityKitContent.bundle to the app’s resources. This bundle contains a Reality file with two USDA scenes: “Immersive” and “Scene”.
Save to Disk: save the bundle to the Application Support directory, ensuring that the file is correctly copied and saved.
Load the Bundle: load the bundle from the saved URL using Bundle(url: bundleURL) to initialize the Bundle object.
Load Entity from Bundle: load a specific entity (“Scene”) from the bundle. When trying to load the entity using let storedEntity = try await Entity(named: "Scene", in: bundle), the app crashes with an EXC_BREAKPOINT error.
ContentsOf Method Issue: If I use the Entity.load(contentsOf: realityFileURL, withName: entityName) method, it always loads the first root entity found (in this case, “Immersive”) rather than “Scene”, even when specifying the entity name. This is why I want to use the Bundle to load entities by name more precisely.
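Condensed, the save-and-load flow from the steps above looks roughly like this (the Application Support path and bundle name are assumed from the description; this is a sketch, not the full project code):

import Foundation
import RealityKit

// Sketch of the flow described above (bundle name and location assumed).
func loadStoredScene() async throws -> Entity {
    let support = try FileManager.default.url(for: .applicationSupportDirectory,
                                              in: .userDomainMask,
                                              appropriateFor: nil,
                                              create: true)
    let bundleURL = support.appendingPathComponent("RealityKitContent_RealityKitContent.bundle")
    guard let bundle = Bundle(url: bundleURL) else {
        throw CocoaError(.fileNoSuchFile)
    }
    // This is the line that crashes with EXC_BREAKPOINT:
    return try await Entity(named: "Scene", in: bundle)
}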
Issue:
The crash consistently occurs on the Entity(named: "Scene", in: bundle) line. I have verified that the bundle exists and is accessible at the specified path and that it contains the expected .reality file with multiple entities (“Immersive” and “Scene”). The error code I get is EXC_BREAKPOINT (code=1, subcode=0x1d135d4d0).
What I’ve Tried:
• Ensured the bundle is properly saved and accessible.
• Checked that the bundle is initialized correctly from the URL.
• Tested loading the entity using the contentsOf method, which works fine but always loads the “Immersive” entity, ignoring the specified name. Hence, I want to use the Bundle-based approach to load multiple USDA entities selectively.
Question:
Has anyone faced a similar issue or knows why loading entities using Entity(named:in:) from a disk-based bundle causes this crash? Any advice on how to debug or resolve this, especially for managing multiple root entities in a .reality file, would be greatly appreciated.
I have a legacy OpenGL fixed-pipeline app which has been ported from Windows (32-bit) to macOS (64-bit).
The problem is that if I have a scene with a non-positional light, everything works great. If I add a positional spotlight the two lights interact, and I get incorrect results.
This problem does not occur on x86_64 Macs. It does occur on Apple Silicon, whether the app runs as x86_64 under Rosetta or as native ARM64.
So it's either an Apple Silicon OpenGL driver behaviour my code is triggering, or something with the on-chip Apple Silicon graphics.
Here is the "normal" case: the spotlight is to the right:
Here, I have moved the spotlight down (Y = 1). Notice the black areas on the cube. That's incorrect.
Now, I turn off the spotlight by commenting out the "makeALight" call for the spotlight (light 6). Now, the cube is evenly lit.
Here is the test code I use to generate the lights. You will need to install glfw with brew to build it.
main.cpp
Hi,
I created a leaderboard in my application, and a method to record a new score:
GKLeaderboard.loadLeaderboards(IDs: [leaderboardID]) { (leaderboards, error) in
    if let error = error {
        print("Error loading leaderboards: \(error.localizedDescription)")
    }
    guard let leaderboard = leaderboards?.first else {
        print("Leaderboard not found")
        return
    }
    leaderboard.submitScore(score, context: 0, player: self.localPlayer) { error in
        if let error = error {
            print("Error reporting score: \(error.localizedDescription)")
        } else {
            print("Score reported successfully!")
        }
    }
}
}
When debugging, this method is correctly called and reports success, so I tried to test it with an internal TestFlight release.
The leaderboard is never updated.
Is there a way to perform a test of a leaderboard before publishing the app?
I have the same question for achievements:
let achievement = GKAchievement(identifier: identifier)
achievement.percentComplete = percentComplete
GKAchievement.report([achievement]) { error in
    if let error = error {
        print("Error reporting achievement: \(error.localizedDescription)")
    }
}
}
Thanks!
Every now and then my SceneKit game app crashes and I have no idea why. The SCNView has an overlaySKScene, so it might also be SpriteKit's fault.
The stack trace is
#0 0x0000000241c1470c in jet_context::set_fragment_texture(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, jet_texture*) ()
#27 0x000000010572fd40 in _pthread_wqthread ()
Does anyone have an idea where I could start debugging this, without being able to consistently reproduce it?
Hello,
I’m trying to run Age of Mythology Retold on my Mac using the Game Porting Toolkit. Unfortunately, the game crashes before it opens. Has anyone experienced something similar or have any suggestions on how to resolve it?
Thank you!
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to correctly render with other primitives, the depth of the fragment needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties.
If not, can an SDF still be rendered in RealityKit using ray-marching?
Hello,
I want to create a painting app for iOS and I saw many examples use a CAShapeLayer to draw a UIBezierPath.
As I understand it, Core Animation uses the GPU, so I was wondering how this is implemented on the GPU. Or, in other words, how would you do it with Metal or OpenGL?
I can only think of continuously updating a texture in response to the user's drawing, but that would be a very resource-intensive operation...
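To make the "continuously updating a texture" idea concrete, here is roughly what I have in mind (a Metal sketch based purely on my assumption, not something I have working): an offscreen texture used as a render target, with the load action set to .load so earlier strokes are preserved between passes.

import Metal

// Sketch only: a persistent canvas texture plus a render pass that keeps previous strokes.
// The actual stroke geometry and shaders are omitted.
func makeCanvasTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width,
                                                        height: height,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    return device.makeTexture(descriptor: desc)
}

func strokePassDescriptor(canvas: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = canvas
    pass.colorAttachments[0].loadAction = .load     // keep what was already drawn
    pass.colorAttachments[0].storeAction = .store
    return pass
}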
Thanks
Good day,
My project is simple: first I want to draw wireframe hexahedrons, tetrahedrons, and octahedrons.
I can draw a cube with Metal, but I haven't found how to do rotation, translation, and scaling.
I have searched for help, but the examples I found are too complicated for me.
Kind regards,
VanceRegnet
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However on the Metal side, there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d).
Any work-arounds? Could I even do something icky like cast the texture2d to a texture3d in MSL? (is that even possible?) Could I encode the 3d texture into an argument buffer and get that in somehow?
Hi everyone,
I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content.
Here's a simplified version of my setup:
func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1 / aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}
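For context, the call site looks roughly like this (the URL, sizes, and the anchorEntity name are placeholders, not my real values):

// Placeholder usage of the helpers above; values and names are illustrative only.
let item = AVPlayerItem(url: URL(string: "https://example.com/video.mp4")!)
let screen = createVideoScreen(video: item,
                               canvasWidth: 1.0,
                               canvasHeight: 0.6,
                               aspectRatio: 16.0 / 9.0)
anchorEntity.addChild(screen)   // anchorEntity stands in for whatever anchor the scene uses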
Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it?
Thanks in advance!
I am trying to convert a ThreeJS project to Metal for the Vision Pro. The issue is that ThreeJS doesn't do any color space conversion (when I output a color in a fragment shader and then read it using the Digital Color Meter in sRGB mode, I get the same value I input in the fragment shader). This is not the case when using Metal. When setting up my LayerRenderer I set the colorFormat to rgba16Unorm, since it is the only non-sRGB color format supported for Vision Pro apps. However, switching between bgra8Unorm_srgb and rgba16Unorm seems to have no effect.
When I set up the renderPassDescriptor, I use the drawable's color texture:
renderPassDescriptor.colorAttachments[0].texture = drawable.colorTextures[0]
and when printing its pixel format, it appears to be inherited from the configuration.
If there is any way to disable this behavior, or to apply an inverse function so that I get the original value out of the shader, that would be appreciated.
So, I've been messing around with SteamVR on Apple Silicon and it runs as expected under Rosetta translation; I've even got a game to run. But for some reason SteamVR cannot detect a headset, even when using one that SteamVR has drivers for, such as the 2017 Vive headset. Is there any explanation as to why this is? SteamVR otherwise works as expected, which leads me to believe it's something with macOS.
UI:
Attachment(id: "tooptip") {
    if isRecording {
        TooltipView {
            HStack(spacing: 8) {
                Image(systemName: "waveform")
                    .font(.title)
                    .frame(minWidth: 100)
            }
        }
        .transition(.opacity.combined(with: .scale))
    }
}
Trigger:
Button("Toggle") {
withAnimation{
isRecording.toggle()
}
}
The above code does not show the animation effect when running. When I use isRecording to drive an element in a regular SwiftUI view, there is an animation effect.
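For comparison, this is the kind of plain SwiftUI setup I mean, where the same pattern does animate (a minimal sketch, not my exact code):

import SwiftUI

struct RecordingIndicator: View {
    @State private var isRecording = false

    var body: some View {
        VStack {
            if isRecording {
                Image(systemName: "waveform")
                    .font(.title)
                    .transition(.opacity.combined(with: .scale))
            }
            Button("Toggle") {
                withAnimation {
                    isRecording.toggle()
                }
            }
        }
    }
}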
Hi, I'm trying to capture some images from a WKWebView on visionOS. I know there's a function, 'takeSnapshot()', that can get the image of the web page. But since 'drawHierarchy()' may not work properly on WKWebView because of GPU-backed content, are there any other methods I can call to capture images correctly?
Furthermore, as I put my webview into an immersive space, is there any way I can get the texture of this UIView attachment? Thank you
I'm trying to create a custom Metal-based visual effect as a UIView to be used inside an existing UIKit-based interface. (An example might be a view that applies a blur effect to what's behind it.) I need to capture the MTLTexture of what's behind the view so that I can feed it to MTLRenderCommandEncoder.setFragmentTexture(_:index:). Can someone show me how or point me to an example? Thanks!
I am currently working on a project where I aim to overlay the camera feed obtained via the Apple Vision Pro's camera access API to align perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
The attached image shows the difference between the camera feed and the actual field of view from the user's perspective.
I want to align the camera feed image to the user's perspective.
Greetings! I have been battling with a bit of a tough issue. My use case is running a pixelwise regression model on a 2D array of images using CIImageProcessorKernel and a custom Metal Shader.
It mostly works great, but the issue that arises is that if the regression calculation in Metal takes too long, an error occurs and the resulting output texture has strange artifacts, for example:
The specific error is:
Error excuting command buffer = Error Domain=MTLCommandBufferErrorDomain Code=1 "Internal Error (0000000e:Internal Error)" UserInfo={NSLocalizedDescription=Internal Error (0000000e:Internal Error), NSUnderlyingError=0x60000320ca20 {Error Domain=IOGPUCommandQueueErrorDomain Code=14 "(null)"}} (com.apple.CoreImage)
There are multiple levels of concurrency: Swift Concurrency calling the Core Image code (which shouldn't have an impact) and of course the Metal command buffer.
Is there any way to ensure the compute command encoder can complete its work?
Here is the full implementation of my CIImageProcessorKernel subclass:
class ParametricKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()!

    override class var outputFormat: CIFormat {
        return .BGRA8
    }

    override class func formatForInput(at input: Int32) -> CIFormat {
        return .BGRA8
    }

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String : Any]?, output: CIImageProcessorOutput) throws {
        guard
            let commandBuffer = output.metalCommandBuffer,
            let images = arguments?["images"] as? [CGImage],
            let mask = arguments?["mask"] as? CGImage,
            let fillTime = arguments?["fillTime"] as? CGFloat,
            let betaLimit = arguments?["betaLimit"] as? CGFloat,
            let alphaLimit = arguments?["alphaLimit"] as? CGFloat,
            let errorScaling = arguments?["errorScaling"] as? CGFloat,
            let timing = arguments?["timing"],
            let TTRThreshold = arguments?["ttrthreshold"] as? CGFloat,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture
        else {
            return
        }
        guard let kernelFunction = device.makeDefaultLibrary()?.makeFunction(name: "parametric") else {
            return
        }
        guard let commandEncoder = commandBuffer.makeComputeCommandEncoder() else {
            return
        }
        let imagesTexture = Texture.textureFromImages(images)
        let pipelineState = try device.makeComputePipelineState(function: kernelFunction)
        commandEncoder.setComputePipelineState(pipelineState)
        commandEncoder.setTexture(imagesTexture, index: 0)
        let maskTexture = Texture.textureFromImages([mask])
        commandEncoder.setTexture(maskTexture, index: 1)
        commandEncoder.setTexture(destinationTexture, index: 2)
        var errorScalingFloat = Float(errorScaling)
        let errorBuffer = device.makeBuffer(bytes: &errorScalingFloat, length: MemoryLayout<Float>.size, options: [])
        commandEncoder.setBuffer(errorBuffer, offset: 0, index: 1)
        // Other buffers omitted....
        let threadsPerThreadgroup = MTLSizeMake(16, 16, 1)
        let width = Int(ceil(Float(sourceTexture.width) / Float(threadsPerThreadgroup.width)))
        let height = Int(ceil(Float(sourceTexture.height) / Float(threadsPerThreadgroup.height)))
        let threadGroupCount = MTLSizeMake(width, height, 1)
        commandEncoder.dispatchThreadgroups(threadGroupCount, threadsPerThreadgroup: threadsPerThreadgroup)
        commandEncoder.endEncoding()
    }
}
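For reference, the kernel gets applied through CIImageProcessorKernel.apply; roughly, a call site looks like this (variable names and constant values here are placeholders, not my real ones; the argument keys match what process(with:arguments:output:) unpacks above):

// Placeholder call site; inputImage, cgImages, maskCGImage, and timingValue are assumed names.
let outputImage = try ParametricKernel.apply(
    withExtent: inputImage.extent,
    inputs: [inputImage],
    arguments: [
        "images": cgImages,               // [CGImage]
        "mask": maskCGImage,              // CGImage
        "fillTime": CGFloat(1.0),
        "betaLimit": CGFloat(1.0),
        "alphaLimit": CGFloat(1.0),
        "errorScaling": CGFloat(1.0),
        "timing": timingValue,
        "ttrthreshold": CGFloat(0.5)
    ]
)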
The Metal feature set tables specify that, beginning with the Apple4 family, the "Maximum threads per threadgroup" is 1024. Given that a single threadgroup is guaranteed to run on the same GPU shader core, this means that a shader core of any new Apple GPU must be capable of running at least 1024/32 = 32 warps in parallel.
From the WWDC session "Scale compute workloads across Apple GPUs (6:17)":
For relatively complex kernels, 1K to 2K concurrent threads per shader core is considered a very good occupancy.
The cited sentence suggests that a single shader core is capable of running at least 2K (I assume this is meant to be 2048) threads in parallel, so 2048/32 = 64 warps running in parallel.
However, I am curious what the theoretical maximum number of warps running in parallel on a single shader core is (it sounds like it is more than 64). The WWDC session mentions 2K as being only "very good" occupancy. How many threads would be "the best possible" occupancy?