I've been trying the new CAMetalDisplayLink to simplify the code needed to sync my rendering with the display across Apple platforms. One thing I've noticed since moving to CAMetalDisplayLink is that the Metal Performance HUD, which I had previously been using to analyze my app's total memory usage (among other things), no longer appears.
This issue can be reproduced with the Frame Pacing sample from WWDC23.
Anyone from Apple know if this is expected behavior or have an idea on how to get this to work properly?
I've filed FB13495684 for official review.
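For context, the setup I moved to is essentially the pattern from that sample. A minimal sketch (assuming an existing CAMetalLayer; error handling and the actual frame encoding are omitted):

```swift
import Metal
import QuartzCore

final class DisplayLinkRenderer: NSObject, CAMetalDisplayLinkDelegate {
    let displayLink: CAMetalDisplayLink

    init(layer: CAMetalLayer) {
        displayLink = CAMetalDisplayLink(metalLayer: layer)
        super.init()
        displayLink.delegate = self
        // The display link drives the layer; no CADisplayLink + nextDrawable() needed.
        displayLink.add(to: .main, forMode: .common)
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        // The drawable is handed to us here; encode the frame against
        // update.drawable.texture and present it as usual.
        let drawable = update.drawable
        _ = drawable // ... encode, commit, present ...
    }
}
```

With this in place (and nothing else changed), the Performance HUD no longer shows up.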
If anyone even uses this: GCVirtualController (from the GameController framework) no longer plays nicely with SwiftUI's NavigationView.
Working on changes to get by...
This was working in iOS 17.x but not in iOS 18 and iPadOS 18.
Just a heads up...
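For anyone trying to reproduce it, the controller itself is set up the standard way (a sketch; the element set is whatever your game needs):

```swift
import GameController

// Create an on-screen virtual controller with a thumbstick and two buttons.
let configuration = GCVirtualController.Configuration()
configuration.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]

let virtualController = GCVirtualController(configuration: configuration)
virtualController.connect()   // overlays the controller on the current scene

// Later, when leaving the game view:
// virtualController.disconnect()
```

It's once this overlay coexists with a NavigationView hierarchy that things break on iOS 18.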
Recently I started playing this game. When I restarted my computer it glitched out, and when I tried to open it again it said, "Last time you opened this app, it was force quit. Do you want to reopen it?" Then the app closes itself immediately. How can I fix this? How can I remove the caches?
Hello, I have a USDC file with vertex colors (WITHOUT textures), and it displays perfectly in Preview. If I package it in a zip (without compression) and rename the resulting file to USDZ, I can view it without any issues on the Apple Vision Pro and the Mac. However, if I send it to an iPhone, the vertex colors do not display.
Is there anything else I need to do besides packaging the USDC, uncompressed, into a ZIP?
Thank you very much.
Hey,
Just watched the session and started investigating the new TabletopKit framework, which looks fantastic.
I'm looking at how multiplayer can be tested.
Found info about testing on multiple real devices here - https://developer.apple.com/documentation/tabletopkit/tabletopkitsample#Start-a-multiplayer-game-on-devices
I wanted to ask about options for testing multiplayer on simulators, maybe with a simulator + a single real device.
Unfortunately, having multiple Vision Pros for indie development is unrealistic, so I hope there are ways to do it.
Does GPTK2 support all AVX instructions like F16C, or only AVX2?
When I open Settings on my phone, I can't see the Game Mode settings.
I work as a game tester at a game company, and I need more information about Game Mode.
visionOS 2.0 enables passthrough with Metal now:
https://developer.apple.com/wwdc24/10092
This suggests it will be possible for WebXR’s AR passthrough module to be implemented for Safari.
Is this already available, perhaps behind a flag?
I think it's kind of essential to have eye tracking data available to apps in VR mode (with the user's permission).
The biggest problem I've observed is that Unity isn't able to implement dynamic foveated rendering without eye tracking data.
Without eye tracking, only fixed foveated rendering is possible. That still gives a rendering performance boost, but the image gets blurry if the user looks to the side without turning their head.
I understand why it's a privacy issue to have apps tracking where the user is looking in the real world, but video passthrough is disabled in VR -- so it should be ok to enable eye tracking in VR (with the user's permission).
Unity already supports dynamic foveated rendering (with eye tracking) for other VR headsets, and Vision Pro has the best eye tracking -- so Vision Pro should definitely have the best dynamic foveated rendering in VR.
Hi, I've got a well-established game in GameMaker which seems to crash on latest-generation iPads. In particular, I've been able to reproduce it on the 11-inch M2 iPad in the Simulator.
The game launches, shows the splash screen, and seems to crash as it's leaving it, but before launching the game itself, so the log results are rather limited.
I'm attaching what I can see from the crash, and I hope someone has a suggestion as to what might be different about this iPad model to cause this sort of crash. Any leads would be welcome.
Thank you in advance.
Hi,
with the default values the rotation takes place, but changing the slider value does not.
I bound the slider to this action:
- (IBAction)rotateXAction:(id)sender
{
    NSLog(@"%@\n", sender);
    BOOL yn = YES;
    _rotationX = [_sliderX intValue];
    if (yn) printf("rotationX %d\n", _rotationX); // value is o.k.
    [self setNeedsDisplay:YES];
}
drawRect: is never called.
What is wrong with my code? Please tell me.
Uwe
My app stopped working in macOS Sonoma 14.0, and I quickly isolated the problem to CGContextDrawLayerAtPoint. Two issues: first, about half the time no data was copied (the updated CGLayer did not show up in the window). Then the app would crash in libswiftCore.dylib after about 5 updates with a very unusual message: "Fatal error: Duplicate keys of type 'DisplayList' were found in a Dictionary. This usually means either that the type violates Hashable's requirements, or that members of such a dictionary were mutated after insertion". This behavior showed up in builds made with Xcode 13 on macOS Monterey as well as Xcode 15 on macOS Ventura, whenever the app was run on Sonoma.
My app uses a very traditional method to create an off-screen graphics context in drawRect:
- (void)drawRect:(NSRect)dirtyRect {
    // Obtain context from the current NSGraphicsContext ...
    viewNSContext = [NSGraphicsContext currentContext];
    viewCGContext = (CGContextRef)[viewNSContext graphicsPort];
    drawingLayer = CGLayerCreateWithContext(viewCGContext, size, NULL);
So the exact details of the off-screen drawing area were based upon the characteristics of the window being drawn to.
Fortunately the work-around was very easy. By creating a custom CGBitmapContext everything was resolved. My drawing requirements are very basic, so a simple 32-bit RGB off-screen context was adequate.
colorSpaceRef = CGColorSpaceCreateDeviceRGB();
bitMapContextRef = CGBitmapContextCreate(NULL, (int)rintf(size.width), (int)rintf(size.height),
                                         8, 0, colorSpaceRef, kCGImageAlphaNoneSkipLast);
drawingLayer = CGLayerCreateWithContext(bitMapContextRef, size, NULL);
Once I changed to a bitmap offscreen context, the problem was resolved.
In my case I verified that the portion of the window that was updated with the CGContextDrawLayerAtPoint was indeed restricted to the dirty part of the view rectangle, at least in Sonoma 14.5.
Hope this helps someone else searching for the issue, as I found nothing in the Forums or online.
Let's say the type of project doesn't work well by using the LSApplicationCategoryType property in Info.plist.
How can I trigger it via code on macOS?
I am rewriting an old project, since OpenGL is deprecated and the old project was lost with the downgrade to High Sierra.
Here is the drawRect:
- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];
    if (backgroundColor == 1) glClearColor(0.95f, 1.0f, 1.0f, 1.0f);
    else glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -10.0);
    glPushMatrix();                        // added after I found my mistake
    glTranslatef(-0.01, -0.01, -10.0);     // Screen Shot 0
    // glTranslatef(-0.25, -0.25, -10.0);  // Screen Shot 1
    glLineWidth(1.0);
    glRotated(rotationX, 1, 0, 0);
    glRotated(rotationY, 0, 1, 0);
    glCallList(axes);
    glPopMatrix();                         // added after I found my mistake
    // Axes vertices of x, y, z are 1.0
    // With vertices +/-0.99 nothing is drawn
    [self glError];
    [self.openGLContext flushBuffer];
}
I hope you accept my text, and I hope you can help me!
Best regards,
Uwe
Screen Shot 0
Screen Shot 1
So I have a class that does the selection, but what I get back is a total mess. Can anyone help me with the code, or possibly point me at an example class that performs a magic-wand selection like in Photoshop? Below is my current code. I've also attempted to use Metal and got an identical result in the same amount of time.
func colorMatches(selectedColor: UIColor, pixelColor: UIColor, tolerance: Float) -> Bool {
    var red1: CGFloat = 0
    var green1: CGFloat = 0
    var blue1: CGFloat = 0
    var alpha1: CGFloat = 0
    selectedColor.getRed(&red1, green: &green1, blue: &blue1, alpha: &alpha1)
    var red2: CGFloat = 0
    var green2: CGFloat = 0
    var blue2: CGFloat = 0
    var alpha2: CGFloat = 0
    pixelColor.getRed(&red2, green: &green2, blue: &blue2, alpha: &alpha2)
    let tolerance = CGFloat(tolerance)
    return abs(red1 - red2) < tolerance && abs(green1 - green2) < tolerance &&
        abs(blue1 - blue2) < tolerance && abs(alpha1 - alpha2) < tolerance
}
func getPixelData(from image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    let bitsPerComponent = 8
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    var pixelData = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    guard let context = CGContext(data: &pixelData,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: bitsPerComponent,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelData
}
func performMagicWandSelection(inputImage: UIImage, selectedColor: UIColor, tolerance: Float) -> UIBezierPath? {
    guard let pixelData = getPixelData(from: inputImage),
          let cgImage = inputImage.cgImage else { return nil }
    // Use the pixel dimensions, not the point size, so the indices
    // match the buffer produced by getPixelData(from:).
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let path = UIBezierPath()
    var isDrawing = false
    for y in 0..<height {
        for x in 0..<width {
            let index = (y * width + x) * bytesPerPixel
            let pixelColor = UIColor(
                red: CGFloat(pixelData[index]) / 255.0,
                green: CGFloat(pixelData[index + 1]) / 255.0,
                blue: CGFloat(pixelData[index + 2]) / 255.0,
                alpha: CGFloat(pixelData[index + 3]) / 255.0)
            if colorMatches(selectedColor: selectedColor, pixelColor: pixelColor, tolerance: tolerance) {
                let point = CGPoint(x: x, y: y)
                if !isDrawing {
                    path.move(to: point)
                    isDrawing = true
                } else {
                    path.addLine(to: point)
                }
            } else if isDrawing {
                path.close()
                isDrawing = false
            }
        }
    }
    if isDrawing {
        path.close()
    }
    return path
}
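For reference, the behavior I'm after is essentially a flood fill outward from the tapped pixel, rather than the row-by-row scan above. A minimal sketch of that idea over a plain RGBA byte buffer (pure Swift, no UIKit; the per-channel tolerance test mirrors colorMatches):

```swift
// Flood fill over an RGBA8 pixel buffer: returns the set of pixel indices
// (y * width + x) whose color is within `tolerance` (0...255, per channel)
// of the seed pixel's color and which are 4-connected to the seed.
func magicWandSelection(pixels: [UInt8], width: Int, height: Int,
                        seedX: Int, seedY: Int, tolerance: Int) -> Set<Int> {
    func channel(_ x: Int, _ y: Int, _ c: Int) -> Int {
        Int(pixels[(y * width + x) * 4 + c])
    }
    func matchesSeed(_ x: Int, _ y: Int) -> Bool {
        (0..<4).allSatisfy { c in
            abs(channel(x, y, c) - channel(seedX, seedY, c)) <= tolerance
        }
    }

    var selected = Set<Int>()
    var stack = [(seedX, seedY)]
    while let (x, y) = stack.popLast() {
        guard x >= 0, x < width, y >= 0, y < height,
              !selected.contains(y * width + x), matchesSeed(x, y) else { continue }
        selected.insert(y * width + x)
        // Visit the four直 neighbors of the matching pixel.
        stack.append(contentsOf: [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    }
    return selected
}
```

The returned index set can then be rendered as a mask or traced into an outline path, instead of building the UIBezierPath while scanning.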
I am trying to store usdz files with SwiftData for now. I am converting the usdz to Data, then storing it with SwiftData.
My model:
import Foundation
import SwiftData
import SwiftUI
@Model
class Item {
    var name: String
    @Attribute(.externalStorage)
    var usdz: Data? = nil
    var id: String

    init(name: String, usdz: Data? = nil) {
        self.id = UUID().uuidString
        self.name = name
        self.usdz = usdz
    }
}
My function to convert the usdz to Data. I am currently using a local usdz just to test whether it is going to work:
func usdzData() -> Data? {
    do {
        guard let usdzURL = Bundle.main.url(forResource: "tv_retro", withExtension: "usdz") else {
            fatalError("Unable to find USDZ file in the bundle.")
        }
        let usdzData = try Data(contentsOf: usdzURL)
        return usdzData
    } catch {
        print("Error loading USDZ file: \(error)")
    }
    return nil
}
Loading the items
@Query private var items: [Item]
...
var body: some View {
    ...
    ForEach(items) { item in
        HStack {
            Model3D(?????) { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
    }
    ...
}
How can I load the Model3D?
I have tried:
Model3D(data: item.usdz)
Gives me the errors:
Cannot convert value of type '[Item]' to expected argument type 'Binding<C>'
Generic parameter 'C' could not be inferred
Both errors occur in the ForEach.
I am able to print the content inside item:
ForEach(items) { item in
    HStack {
        Text("\(item.name)")
        Text("\(item.usdz)")
    }
}
This above works fine for me.
The item.usdz prints something like Optional(10954341 bytes)
I would like to know 2 things:
Is this the correct way to save usdz files into SwiftData? Or should I use FileManager? If so, how should I do that?
Also, how can I get the usdz out of storage (SwiftData) into my code and use it with Model3D?
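In case it helps frame the question: as far as I can tell, Model3D has no Data-based initializer, only URL-based ones, so the workaround I'm experimenting with is writing the stored bytes to a temporary .usdz file and loading that. A sketch (temporaryUSDZURL is my own hypothetical helper):

```swift
import Foundation

// Write the stored usdz bytes to a temporary .usdz file, so that
// URL-based loaders such as Model3D(url:) can read it.
func temporaryUSDZURL(for data: Data, name: String) throws -> URL {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(name)
        .appendingPathExtension("usdz")
    try data.write(to: url, options: .atomic)
    return url
}

// Usage inside the ForEach (visionOS):
// if let data = item.usdz,
//    let url = try? temporaryUSDZURL(for: data, name: item.id) {
//     Model3D(url: url) { model in
//         model.resizable().scaledToFit()
//     } placeholder: { ProgressView() }
// }
```

I don't know if this is the intended approach, which is why I'm asking whether FileManager should be used for storage in the first place.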
Hi everyone,
Last month, my endless-runner game set in a crypto world was rejected for using the Bitcoin logo, labeled as a "Copycats" issue. We were asked to remove the logo and Bitcoin prices, even though the Bitcoin logo is public domain. I noticed other Bitcoin games on the App Store were updated recently without any problems.
I couldn't find any rules against using Bitcoin in the guidelines. When I appealed, they just told me to remove it again but didn't explain why. We've updated our game several times before, and only this small bug-fix update was rejected.
Any advice on what the next steps are? Has anyone experienced this?
Cheers
Hi everyone,
This happens with Xcode 15.3 (15E204a) and visionOS 1.1.2 (21O231).
To reproduce this issue, simply create a new visionOS app with Metal (see below).
Then simply change the following piece of code in Renderer.swift:
func renderFrame() {
    [...]
    // Set the clear color red channel to 1.0 instead of 0.0.
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.0)
    [...]
}
On the simulator it works as expected while on device it will show a black background with red jagged edges (see below).
Working on a visionOS app, I've noticed that even when castsShadow is false, performance goes down the drain once more than a few dozen entities have a GroundingShadowComponent. I managed to hard-crash the Vision Pro with about 200 entities, each of which had two ModelEntities with a GroundingShadowComponent attached but castsShadow set to false.
My solution is to add and remove the GroundingShadowComponent from entities as needed, but I thought someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd think turning the shadow off would effectively disable the component and not incur a performance penalty.
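Concretely, the add/remove workaround looks roughly like this (a sketch; entity is any RealityKit Entity):

```swift
import RealityKit

// Attach the shadow only while it is actually wanted; removing the
// component entirely avoids the cost of a present-but-disabled component.
func setGroundingShadow(_ enabled: Bool, on entity: Entity) {
    if enabled {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    } else {
        entity.components.remove(GroundingShadowComponent.self)
    }
}
```

Calling this as entities come into and out of the area of interest keeps the live component count low enough to avoid the slowdown.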
I use ScreenCaptureKit, CoreVideo, CoreImage, CoreMedia frameworks to capture screenshots on macOS 14.0 and higher.
Example of creating CGImageRef:
CVImageBufferRef cvImageBufferRef = ...;
CIImage* temporaryImage = [CIImage imageWithCVPixelBuffer:cvImageBufferRef];
CIContext* temporaryContext = [CIContext context];
CGImageRef imageRef =
    [temporaryContext createCGImage:temporaryImage
                           fromRect:CGRectMake(0, 0,
                                               CVPixelBufferGetWidth(cvImageBufferRef),
                                               CVPixelBufferGetHeight(cvImageBufferRef))];
I have the following results from profiling with Xcode Instruments (Leaks & Allocations):
memory usage is constantly increasing, but no memory leaks are detected, and there are many calls that create IOSurface objects which are never released.
Most of the memory is All Anonymous VM - VM: IOSurface.
The heaviest stack trace:
[RPIOSurfaceObject initWithCoder:]
[IOSurface initWithMachPort:]
IOSurfaceClientLookupFromMachPort
I don't create any IOSurface objects myself; they come from low-level calls. In the Allocation List I can see many allocations of IOSurface objects, but there is no info about them being released.
Given this, how can I release them and avoid permanently increasing memory consumption?
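For what it's worth, the two mitigations I'm experimenting with (a sketch, not a confirmed fix): reusing a single CIContext instead of creating one per frame, and wrapping the per-frame conversion in an autorelease pool so that autoreleased, IOSurface-backed intermediates are drained promptly. In Swift terms:

```swift
import CoreImage
import CoreVideo

// Reuse one CIContext for the whole capture session; creating a
// context per frame is expensive and can keep backing resources alive.
let sharedCIContext = CIContext()

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    // Drain autoreleased, IOSurface-backed intermediates every frame.
    autoreleasepool {
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        let rect = CGRect(x: 0, y: 0,
                          width: CVPixelBufferGetWidth(pixelBuffer),
                          height: CVPixelBufferGetHeight(pixelBuffer))
        return sharedCIContext.createCGImage(image, from: rect)
    }
}
```

The equivalent in Objective-C would be an @autoreleasepool block around the per-frame code, with the CIContext held in an ivar.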