Sample project from: https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3.
In beta 4, getting these errors:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix?
Thanks
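For context, here is the kind of declaration that now fails, plus the direction I'm guessing at (this assumes ObjectCaptureSession adopted the new Observation framework in this beta, which is just my assumption, not a confirmed fix):

import SwiftUI
import RealityKit

struct CaptureView: View {
    // Beta 3 style, now fails with: Generic struct 'ObservedObject' requires
    // that 'ObjectCaptureSession' conform to 'ObservableObject'
    // @ObservedObject var session: ObjectCaptureSession

    // Guess at a workaround: if the session is now @Observable, hold it in @State.
    @State private var session = ObjectCaptureSession()

    var body: some View {
        ObjectCaptureView(session: session)
    }
}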
I tried using the GameController APIs for this, but they didn't seem to work. Is that the recommended API for handling keyboard/mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS.
visionOS 2.0 touts keyboard and mouse support, and the Simulator can even forward keyboard/mouse input to the app. But there doesn't seem to be any sample code showing how to programmatically receive either of these. A game controller works fine (on device, not in the Simulator).
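For reference, this is roughly the GameController-based approach I tried (a sketch from memory; the connect notifications are documented for iOS/macOS, and whether they fire on visionOS is exactly what I can't confirm):

import GameController

func observeKeyboardAndMouse() {
    // Documented on iOS/macOS; on visionOS these never seem to fire for me.
    NotificationCenter.default.addObserver(
        forName: .GCKeyboardDidConnect, object: nil, queue: .main) { note in
        guard let keyboard = note.object as? GCKeyboard else { return }
        keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
            print("key \(keyCode) pressed: \(pressed)")
        }
    }
    NotificationCenter.default.addObserver(
        forName: .GCMouseDidConnect, object: nil, queue: .main) { note in
        guard let mouse = note.object as? GCMouse else { return }
        mouse.mouseInput?.mouseMovedHandler = { _, deltaX, deltaY in
            print("mouse moved \(deltaX), \(deltaY)")
        }
    }
}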
I have a game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS.
I have the Simulator set to "Send Game Controller to Device", which it does. If I disable that, then I can control the simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
I want to know why there is no video frame data when ReplayKit goes into the background and then returns to the foreground.
For some reason I can't disable the Graphics HUD.
Not really a problem for development, but it's also showing in Testflight apps.
For example when swiping down on the keyboard but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer mode does not work.
Is this a known issue?
I already scrolled through possibly every Google search result but I can't figure out how to solve this.
I'm an iOS developer, and I've been testing our app in iOS 18.0 Beta. I noticed that there's a problem with the font rendering, and after troubleshooting, I've found out that it's caused by the removal of the PingFang.ttc font in 18.0.
I would like to ask the reason for removing this font file and which font should be used to display Chinese in the future?
My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested Beta 1 and it has the same issue.
In previous versions of the system, the PingFang font was located at /System/Library/Fonts/LanguageSupport/PingFang.ttc. In iOS 18.0, the font file in that directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either.
I traversed the following system font directories and could not find the PingFang.ttc font file (see the quick runtime check sketched after the list):
/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch
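For anyone who wants to reproduce this, a minimal runtime check (my own sketch) to see whether any PingFang family is still registered with the system:

import UIKit

// Sketch: list registered font families and check for PingFang.
func checkForPingFang() {
    let families = UIFont.familyNames.sorted()
    let pingFang = families.filter { $0.localizedCaseInsensitiveContains("PingFang") }
    print("PingFang families found: \(pingFang)")
    for family in pingFang {
        print(family, UIFont.fontNames(forFamilyName: family))
    }
}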
Looking for answers, thanks for the help!
Guys,
In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window displays what the helper is requesting permission for.
However, when the helper requests permission for Screen Recording, the system modal window displays that the main application bundle is requesting permission, which includes the helper.
This issue seems to be specific to Ventura, as both requests are displayed on behalf of the helper in Monterey.
I'm wondering if this is a known issue or limitation or if there is a way to make the permission request specifically from the helper.
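For reference, this is roughly how the helper asks for the two permissions (a sketch; the point being that on Ventura the Screen Recording prompt is attributed to the containing app rather than to the helper):

import ApplicationServices
import CoreGraphics

// Sketch of the helper's permission requests.
func requestPermissionsFromHelper() {
    // Accessibility: the prompt correctly names the helper.
    let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
    let axTrusted = AXIsProcessTrustedWithOptions(options)
    print("Accessibility trusted: \(axTrusted)")

    // Screen Recording: on Ventura the prompt names the main app bundle instead.
    if !CGPreflightScreenCaptureAccess() {
        let granted = CGRequestScreenCaptureAccess()
        print("Screen Recording granted: \(granted)")
    }
}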
Hi,
When using a High Definition Display, is there a way to render at exactly the target resolution on the physical screen?
My understanding is that the default behavior is to render to a backing store whose resolution (in pixels) can be twice the logical resolution (in points), and then let the OS handle the down-scaling to the actual target resolution on the screen. This is all fine for non-graphics-intensive apps, but it means that my game will render at a higher resolution than needed, which seems like an obvious loss of performance.
My expectation is that, for graphics-intensive applications such as games, we should be able to query and render at the final resolution of the display. Can it / should it be done?
Thank you for your help :)
FYI, I did find a document which explains how to set up your CAMetalLayer to render at a custom resolution. I suspect that this may be what I have to do?
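In case it helps frame the question, this is the kind of setup I understood from that document (a sketch; the 1920x1080 target is just an illustrative value):

import AppKit
import QuartzCore

// Sketch: render at an explicit pixel resolution instead of the
// default backing-store size.
func configure(layer: CAMetalLayer, in window: NSWindow) {
    let scale = window.backingScaleFactor          // e.g. 2.0 on a Retina display
    let pointSize = layer.bounds.size

    // Default behaviour: drawableSize == pointSize * scale.
    // Override it with the resolution we actually want to render at
    // (illustrative value below).
    let targetPixelSize = CGSize(width: 1920, height: 1080)

    layer.contentsScale = scale
    layer.drawableSize = targetPixelSize
    print("Rendering \(targetPixelSize) px into a \(pointSize) pt layer")
}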
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;            // mirror horizontally
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
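To make the question concrete, here is the kind of direction-aware ROI I have in mind for the horizontal mirror, sketched in Swift (an untested sketch, not something we have validated): for a pure mirror, the source region needed to render destRect is destRect reflected about the image's vertical centerline, so the ROI stays tile-sized instead of covering the full extent.

import CoreImage

func mirroredImage(from src: CIImage, kernel: CIKernel) -> CIImage? {
    let fullExtent = src.extent
    return kernel.apply(extent: fullExtent,
                        roiCallback: { _, destRect in
                            // Reflect destRect about the horizontal center of the source.
                            let mirroredX = fullExtent.minX + fullExtent.maxX - destRect.maxX
                            let roi = CGRect(x: mirroredX,
                                             y: destRect.minY,
                                             width: destRect.width,
                                             height: destRect.height)
                            return roi.insetBy(dx: -1, dy: -1)
                        },
                        arguments: [src])
}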
Any thoughts/input appreciated, thanks!
We've recently updated a view which displays photos via a CoreImage chain from a NSOpenGLView subclass to a NSView with a backing CAMetalLayer.
Things are mostly working fine, but we occasionally hit a deadlock involving CALayer and CIMetalCommandQueue. I've made a spindump, it appears none of our code is involved in the locked threads. Despite this, I'm assuming the problem is ours 😅
I saw the mention in the CAMetalLayer documentation about releasing drawables with an @autoreleasepool in drawRect; we have done this, and I can't find any place where we're retaining a drawable outside drawRect.
https://developer.apple.com/documentation/quartzcore/cametallayer?language=objc
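For reference, our per-frame draw follows that pattern, roughly like this (a simplified Swift sketch; our actual code is structured differently):

import CoreImage
import Metal
import QuartzCore

// Sketch of the drawable pattern from the CAMetalLayer docs: the drawable
// is acquired and released inside an autoreleasepool each frame.
func drawFrame(layer: CAMetalLayer, queue: MTLCommandQueue, image: CIImage, context: CIContext) {
    autoreleasepool {
        guard let drawable = layer.nextDrawable(),
              let commandBuffer = queue.makeCommandBuffer() else { return }

        context.render(image,
                       to: drawable.texture,
                       commandBuffer: commandBuffer,
                       bounds: image.extent,
                       colorSpace: CGColorSpaceCreateDeviceRGB())

        commandBuffer.present(drawable)
        commandBuffer.commit()
        // The drawable goes out of scope here, inside the pool.
    }
}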
I am seeing this on macOS 15.0.1 on an M2 Max MacBook Pro. We haven't seen it on macOS 14.x, but that may be luck, as we have not tested much on that OS.
I don't know how to move forward debugging this, any help much appreciated!
The two locking threads in the spindump are MainThread and CI::RenderCompletionQueue.
Thread 0xb3b0f8 DispatchQueue "com.apple.main-thread"(1)
…
CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 364 (QuartzCore + 178484) [0x1a5dba934]
invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 176 (QuartzCore + 1782676) [0x1a5f42394]
-[CALayer(CALayerPrivate) _copyRenderLayer:layerFlags:commitFlags:] + 720 (QuartzCore + 179304) [0x1a5dbac68]
-[NSImage(CALayerSupport) CA_copyRenderValue] + 52 (AppKit + 1517960) [0x1a0fe0988]
-[NSImage CGImageForProposedRect:context:hints:] + 440 (AppKit + 1246368) [0x1a0f9e4a0]
-[NSImage _usingBestRepresentationForRect:context:hints:body:] + 148 (AppKit + 1247980) [0x1a0f9eaec]
__48-[NSImage CGImageForProposedRect:context:hints:]_block_invoke + 80 (AppKit + 1248792) [0x1a0f9ee18]
-[NSCIImageRep CGImageForProposedRect:context:hints:] + 112 (AppKit + 6200292) [0x1a1457be4]
+[CIContext contextWithOptions:] + 40 (CoreImage + 549532) [0x1a8df129c]
-[CIContext initWithOptions:] + 588 (CoreImage + 65744) [0x1a8d7b0d0]
+[CIContext(Internal) internalContextWithMTLDevice:options:] + 76 (CoreImage + 66568) [0x1a8d7b408]
CIMetalCommandQueueCreate + 52 (CoreImage + 66692) [0x1a8d7b484]
-[CaptureMTLDevice newCommandQueue] + 168 (GPUToolsCapture + 130200) [0x1029e7c98]
-[CaptureMTLCommandQueue initWithBaseObject:captureDevice:] + 204 (GPUToolsCapture + 799812) [0x102a8b444]
GTMTLGuestAppClientAddMTLCommandQueueInfo + 108 (GPUToolsCapture + 313572) [0x102a148e4]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb41b08 - part of a deadlock)
and
Thread 0xb41b08 DispatchQueue "CI::RenderCompletionQueue"(535) 1000 samples (1-1000) priority 46 (base 46)
start_wqthread + 8 (libsystem_pthread.dylib + 52464) [0x1035f4cf0]
_pthread_wqthread + 288 (libsystem_pthread.dylib + 20736) [0x1035ed100]
_dispatch_workloop_worker_thread + 580 (libdispatch.dylib + 129956) [0x1026afba4]
_dispatch_root_queue_drain_deferred_wlh + 652 (libdispatch.dylib + 133360) [0x1026b08f0]
_dispatch_lane_invoke + 468 (libdispatch.dylib + 68516) [0x1026a0ba4]
_dispatch_lane_serial_drain + 860 (libdispatch.dylib + 64160) [0x10269faa0]
_dispatch_client_callout + 20 (libdispatch.dylib + 26788) [0x1026968a4]
_dispatch_call_block_and_release + 32 (libdispatch.dylib + 19300) [0x102694b64]
CI::Object::unref() const + 120 (CoreImage + 35360) [0x1a8d73a20]
CI::MetalContext::~MetalContext() + 16 (CoreImage + 192260) [0x1a8d99f04]
CI::MetalContext::~MetalContext() + 236 (CoreImage + 192536) [0x1a8d9a018]
-[CaptureMTLCommandQueue dealloc] + 44 (GPUToolsCapture + 797916) [0x102a8acdc]
GTMTLGuestAppClientRemoveMTLCommandQueueInfo + 236 (GPUToolsCapture + 314240) [0x102a14b80]
GTMTLGuestAppClient_allCaptureObjectsUnsafe + 392 (GPUToolsCapture + 298776) [0x102a10f18]
AllMetalLayers + 64 (GPUToolsCapture + 518224) [0x102a46850]
MakeLayerInfos + 320 (GPUToolsCapture + 518608) [0x102a469d0]
-[CALayer frame] + 88 (QuartzCore + 74624) [0x1a5da1380]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb3b0f8 - part of a deadlock)
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. Then we overwrite all pixels of the buffer, get the IOSurface from the buffer, and set the IOSurface as a CALayer's contents property in another process to show it. Everything works fine.
Now we want to optimize by only overwriting the pixels that changed between frames. The idea is that after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to the frame's info. The next time we get the same IOSurface ID, we compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.
However, there is no documentation stating whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer will still contain its previous pixels (the content from before it was returned to the pool). Do we have this guarantee? If not, is there any way to know whether the created buffer contains the previous pixels or not?
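To make the question concrete, here is roughly the bookkeeping we have in mind (a sketch; FrameInfo and the cache are illustrative names of ours):

import CoreVideo
import IOSurface

// Illustrative description of what was last written to a given surface.
struct FrameInfo { /* whatever describes the previous frame */ }

// Illustrative cache: IOSurfaceID -> info about the last frame written there.
var lastFrameInfoBySurfaceID: [IOSurfaceID: FrameInfo] = [:]

func nextBuffer(from pool: CVPixelBufferPool, frame: FrameInfo) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer,
          let surface = CVPixelBufferGetIOSurface(buffer)?.takeUnretainedValue() else {
        return nil
    }

    let surfaceID = IOSurfaceGetID(surface)
    if let previous = lastFrameInfoBySurfaceID[surfaceID] {
        // Open question: are we guaranteed the buffer still contains the pixels
        // described by `previous`, so we can write only the changed region?
        _ = previous
    } else {
        // First time we see this surface: write every pixel.
    }
    lastFrameInfoBySurfaceID[surfaceID] = frame
    return buffer
}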
I want to use Swift to write code that draws multiple polygons, and I would like to find some examples as references. Can anyone provide example code or tell me where I can see such examples? Thank you!
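In case it is useful to anyone answering, this is the kind of minimal example I am after (my own sketch using SwiftUI's Canvas; the points are arbitrary):

import SwiftUI

// Draws several polygons, each given as an array of corner points.
struct PolygonsView: View {
    let polygons: [[CGPoint]] = [
        [CGPoint(x: 50, y: 50), CGPoint(x: 150, y: 60), CGPoint(x: 100, y: 140)],
        [CGPoint(x: 200, y: 40), CGPoint(x: 300, y: 80),
         CGPoint(x: 280, y: 180), CGPoint(x: 190, y: 150)]
    ]

    var body: some View {
        Canvas { context, _ in
            for points in polygons where points.count >= 3 {
                var path = Path()
                path.move(to: points[0])
                for point in points.dropFirst() { path.addLine(to: point) }
                path.closeSubpath()
                context.fill(path, with: .color(.blue.opacity(0.3)))
                context.stroke(path, with: .color(.blue), lineWidth: 2)
            }
        }
    }
}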
Technical Issue Report for Maple Tale App - Audio Format Compatibility
Dear Apple Technical Support Team,
I hope this message finds you well. My name is [Your Name], and I am part of the development team behind the Maple Tale app. We have encountered an issue with audio format compatibility within our app that we believe requires your assistance.
The issue pertains to the audio formats supported by our app. Currently, our app only supports WAV and OGG formats, which has led to a limitation in user experience. We are looking to expand our support to include additional formats such as MP3 and AAC, which are widely used by our user base.
To provide a clear understanding of the issue, I have outlined the steps to reproduce the problem:
Launch the Maple Tale app.
Proceed with the game normally.
Upon picking up equipment within the game, a warning box pops up indicating the audio format compatibility issue.
This warning box appears due to the app's inability to process audio files in formats other than WAV and OGG. We understand that this can be a significant hindrance to the user experience, and we are eager to resolve this as quickly as possible.
We have reviewed the documentation available on the official Apple Developer website but are still seeking clarification on the best practices for supporting a wider range of audio formats within our app. We would greatly appreciate any official recommendations or guidelines that could assist us in this endeavor.
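For reference, the direction we are evaluating is to let AVFoundation decode MP3 and AAC directly, along these lines (a sketch; the file names are illustrative):

import AVFoundation

// Sketch: AVAudioPlayer decodes MP3 and AAC (.m4a) natively,
// so no custom decoder is needed for these formats.
final class EffectPlayer {
    private var player: AVAudioPlayer?

    func play(named name: String, ext: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: ext) else { return }
        do {
            player = try AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay()
            player?.play()
        } catch {
            print("Could not play \(name).\(ext): \(error)")
        }
    }
}

// Usage (illustrative file name):
// EffectPlayer().play(named: "pickup_equipment", ext: "mp3")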
Additionally, we are considering updating our app to inform users about the current audio format requirements and provide guidance on how to optimize their audio files for the best performance within our app. If there are any official documents or resources that we should reference when crafting this update, please let us know.
We appreciate your time and assistance in this matter and look forward to your guidance on how to best implement audio format support on the iOS platform.
Thank you once again for your support.
Warm regards,
The update was the only change I can see, and I did it just the other day. I logged on to iCloud.com and also looked at Apple Developer, and I didn't see any additional terms that needed to be accepted. I make sure to log into iCloud on the simulator, and it seems to stay logged in until I call fetchSavedGames, which I have exit after 20 seconds due to timing out. When I then go back to check my account in Settings, it's asking me to "Sign in to iCloud" again. It does work properly on a device. So iCloud doesn't stay logged in on the simulator, and it seems like fetchSavedGames from GKLocalPlayer is what resets that. Any help or suggestions would be appreciated. Thanks.
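For reference, the call in question looks roughly like this (a sketch, including the 20-second timeout mentioned above):

import GameKit

// Sketch: fetch saved games with a 20-second bail-out, as described above.
func loadSavedGames(completion: @escaping ([GKSavedGame]) -> Void) {
    var finished = false

    GKLocalPlayer.local.fetchSavedGames { savedGames, error in
        finished = true
        if let error { print("fetchSavedGames failed: \(error)") }
        completion(savedGames ?? [])
    }

    // Bail out after 20 seconds; on the simulator this is the path we hit,
    // and afterwards Settings shows "Sign in to iCloud" again.
    DispatchQueue.main.asyncAfter(deadline: .now() + 20) {
        if !finished { completion([]) }
    }
}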
Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which includes filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I’ve noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more advanced manner.
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
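For context, this is roughly how I am applying the auto-adjustment filters today (a sketch), and it is this result that differs from the Photos Auto button:

import CoreImage

// Sketch: apply Core Image's suggested auto-adjustment filters to an image.
func autoEnhanced(_ image: CIImage) -> CIImage {
    var result = image
    let filters = image.autoAdjustmentFilters(options: [.redEye: false])
    for filter in filters {
        filter.setValue(result, forKey: kCIInputImageKey)
        result = filter.outputImage ?? result
    }
    return result
}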
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
Hi,
I face a problem where I cannot scan a specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label, and almost all of the Code 39 barcodes can be scanned, but not this specific one.
One more piece of information: the barcode that is not recognized by Vision can be read by a general barcode scanner.
Has anyone faced a similar situation?
Is there a particular condition (size, intensity, etc.) that makes a barcode hard to scan with Vision?
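For reference, a sketch of the kind of request we run (restricting the symbologies to Code 39 here is illustrative):

import Vision

// Sketch: detect Code 39 barcodes in an image with Vision.
func detectCode39(in cgImage: CGImage) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.code39]

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    return (request.results ?? []).compactMap { $0.payloadStringValue }
}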
Regards,
Hi, I've got a Swift framework with a bunch of Metal files. Currently, to use the Swift package, users have to manually include a separately provided Metal lib in their bundle.
First question: is there a way to make a Metal lib target in a Swift Package and just include the .metal files (without a binary asset)?
Second question: if not, given that Swift 5.3 has resource support, how would you recommend bundling a Metal lib in a Swift Package?
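For the second question, the direction I have been looking at is below (a sketch, under the assumption that SwiftPM compiles a target's .metal sources into a default library reachable via Bundle.module; I have not confirmed this across toolchain versions):

// Package.swift (excerpt): keep the .metal files inside the target's sources.
// .target(name: "MyShaders", path: "Sources/MyShaders")  // contains *.metal

import Metal

// Sketch: load the library built from the package's .metal files.
// Assumption: the toolchain compiles them into a default library in Bundle.module.
enum ShaderLibrary {
    static func makeLibrary(device: MTLDevice) throws -> MTLLibrary {
        try device.makeDefaultLibrary(bundle: Bundle.module)
    }
}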
Hello,
I've been experimenting with the iphone-performance-gaming-tier Device Capability using TestFlight.
I've decided I don't want the restriction, and I've just submitted a new build (with a new version number) without the value, but the build metadata still includes it.
How do I remove it? I don't want the restriction.
Cheers
Hi, every time I try to run the Minecraft Launcher, a bunch of code comes up. It ran fine about a month ago, but now I can't even open the app. It won't let me copy all of the code, but it does say this: "PC register does not match crashing frame (0x0 vs 0x10B338024)". My Mac is fully up to date, and I have deleted and reinstalled Minecraft numerous times, but it continues to display a large amount of code with nothing to fix it.
Could someone help me fix it or recommend anything I can do to fix it?
I am currently working on a project in which I aim to overlay the camera feed obtained via the Apple Vision Pro camera access API so that it aligns perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
The attached image shows the difference between the camera feed and the field of view of the user's actual perspective.
I want to align the camera feed image to the user's perspective.
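If it helps the discussion, one sanity check I can think of is to compare the feed's field of view, derived from a pinhole intrinsic matrix, with the field of view of the plane I project onto (a sketch; the intrinsics and resolution are placeholders for whatever the camera API reports):

import simd

// Sketch: given a pinhole intrinsic matrix K (fx, fy on the diagonal) and the
// image resolution in pixels, compute the horizontal and vertical field of view
// of the feed, in degrees. K and resolution are placeholders here.
func fieldOfView(intrinsics K: simd_float3x3, resolution: simd_float2) -> (h: Float, v: Float) {
    let fx = K[0][0]   // focal length in pixels (x)
    let fy = K[1][1]   // focal length in pixels (y)
    let hFOV = 2 * atan(resolution.x / (2 * fx))
    let vFOV = 2 * atan(resolution.y / (2 * fy))
    return (hFOV * 180 / .pi, vFOV * 180 / .pi)
}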