I have followed https://apple.github.io/coremltools/docs-guides/source/installing-coremltools.html, but the installation failed.
It looks like the doc is outdated.
In our app we use CoreML, but ever since macOS 15.x was released we have been getting a large number of crashes like this:
Incident Identifier: 424041c3-884b-4e50-bb5a-429a83c3e1c8
CrashReporter Key: B914246B-1291-4D44-984D-EDF84B52310E
Hardware Model: Mac14,12
Process: <REMOVED> [1509]
Path: /Applications/<REMOVED>
Identifier: com.<REMOVED>
Version: <REMOVED>
Code Type: arm64
Parent Process: launchd [1]
Date/Time: 2024-11-13T13:23:06.999Z
Launch Time: 2024-11-13T13:22:19Z
OS Version: Mac OS X 15.1.0 (24B83)
Report Version: 104
Exception Type: SIGABRT
Exception Codes: #0 at 0x189042600
Crashed Thread: 36
Thread 36 Crashed:
0 libsystem_kernel.dylib 0x0000000189042600 __pthread_kill + 8
1 libsystem_c.dylib 0x0000000188f87908 abort + 124
2 libsystem_c.dylib 0x0000000188f86c1c __assert_rtn + 280
3 Metal 0x0000000193fdd870 MTLReportFailure.cold.1 + 44
4 Metal 0x0000000193fb9198 MTLReportFailure + 444
5 MetalPerformanceShadersGraph 0x0000000222f78c80 -[MPSGraphExecutable initWithMPSGraphPackageAtURL:compilationDescriptor:] + 296
6 Espresso 0x00000001a290ae3c E5RT::SharedResourceFactory::GetMPSGraphExecutable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, NSDictionary*) + 932
...
43 CoreML 0x0000000192d263bc -[MLModelAsset modelWithConfiguration:error:] + 120
44 CoreML 0x0000000192da96d0 +[MLModel modelWithContentsOfURL:configuration:error:] + 176
45 <REMOVED> 0x000000010497b758 -[<REMOVED> <REMOVED>] (<REMOVED>)
No similar crashes on macOS 12-14!
MetalPerformanceShadersGraph.log
Any clue what is causing this?
Thanks! :)
I tried this:
struct CarShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LockCarIntent(),
            phrases: ["Lock my car with \(.applicationName)", "Lock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lock Car"),
            systemImageName: "lock.fill"
        )
        AppShortcut(
            intent: UnlockCarIntent(),
            phrases: ["Unlock my car with \(.applicationName)", "Unlock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Unlock Car"),
            systemImageName: "lock.open.fill"
        )
    }
}
but Siri only understands "unlock my car", not the phrase with the placeholder. Siri then asks me for the car and understands my answer, but it doesn't work in one sentence. Is there something wrong with my code?
Also, I first tried it without applicationName, and then it didn't work with Siri at all. Is this a general limitation of App Intents? I thought the goal was to reduce friction; if the user has to mention the app name every time, that adds friction.
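For context, the car parameter is backed by an AppEntity roughly like the sketch below (CarEntity, CarQuery, and CarStore are simplified placeholders, not my exact code). My understanding is that Siri can only match the \(\.$car) placeholder against values returned by the entity query, and that updateAppShortcutParameters() has to be called whenever those values change, so maybe the problem is there:

import AppIntents

struct CarEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Car"
    static var defaultQuery = CarQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct CarQuery: EntityQuery {
    // CarStore is a placeholder for however the app stores its cars.
    func entities(for identifiers: [CarEntity.ID]) async throws -> [CarEntity] {
        CarStore.shared.cars.filter { identifiers.contains($0.id) }
    }

    func suggestedEntities() async throws -> [CarEntity] {
        // Siri learns the spoken values for \(\.$car) from these suggestions.
        CarStore.shared.cars
    }
}

// Called elsewhere in the app whenever the list of cars changes,
// so the parameterized phrases are re-donated to Siri:
// CarShortcutsProvider.updateAppShortcutParameters()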
It's been 4-5 days and my Image Playground has been showing "Downloading Support for Image Playground".
Hey, has anyone figured out how the “Persons” list in Genmoji/Playground actually works?
I’ve had a strange experience so far. When I first got access during Beta 2, the list randomly included about 10–15 people, even though my photo library contains many more recognizable faces. To try fixing this, I started naming faces in the Photos app, hoping they’d be added to the Genmoji/Playground list, but nothing changed.
Then, after updating to Beta 3, it added just 2–3 of the people I had named. Encouraged, I spent about an hour naming all the faces in my library. But a few hours later, the list unexpectedly removed around 10 people, leaving me with fewer than I had initially.
I’ve also read that leaving the phone locked and plugged into power should help sort people in the library, but that hasn’t worked for me yet.
Anyone else experienced this or found a way to make it work? Thanks!
Hi, the most recent version of tensorflow-metal is only available for macOS 12.0 and Python up to version 3.11. Is there any chance it could be updated with wheels for macOS 15 and Python 3.12 (which is the default version supported for TensorFlow 2.17+)? I'd note that even downgrading to Python 3.11 would not be sufficient, as the wheels only work for macOS 12.
Thanks.
Hi everyone,
I'm working on integrating object recognition from live video feeds into my existing app by following Apple's sample code. My original project captures video and records it successfully. However, after integrating the Vision-based object detection components (VNCoreMLRequest), no detections occur, and the callback for the request is never triggered.
To debug this issue, I’ve added the following functionality:
Set up AVCaptureVideoDataOutput for processing video frames.
Created a VNCoreMLRequest using my Core ML model.
The video recording functionality works as expected, but no object detection happens. I’d like to know:
How can I debug this further? Which key debug points or logs could help identify where the issue lies?
Have I missed any key configurations? Below is a diff of the modifications I’ve made to my project for the new feature.
Diff of Changes:
(Attach the diff provided above)
Specific Observations:
The captureOutput method is invoked correctly, but there is no output or error from the Vision request callback.
Print statements in my setup function setForVideoClassify() show that the setup executes without errors.
Questions:
Could this be due to issues with my Core ML model compatibility or configuration?
Is the VNCoreMLRequest setup incorrect, or do I need to ensure specific image formats for processing?
Platform:
Xcode 16.1, iOS 18.1, Swift 5, SwiftUI, iPhone 11,
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
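For reference, this is the rough shape of the Vision setup I'm aiming for (a simplified sketch, not my exact code; "MyDetector" and the class name are placeholders):

import AVFoundation
import CoreML
import Vision

final class VideoClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var request: VNCoreMLRequest?

    func setForVideoClassify() {
        // "MyDetector" is a placeholder for the compiled Core ML model class.
        guard let coreMLModel = try? MyDetector(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
            print("Failed to load Core ML model")
            return
        }
        let request = VNCoreMLRequest(model: visionModel) { request, error in
            if let error { print("Vision error: \(error)"); return }
            let observations = request.results as? [VNRecognizedObjectObservation] ?? []
            print("Detections: \(observations.count)")
        }
        request.imageCropAndScaleOption = .scaleFill
        self.request = request
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // The completion handler above only fires if perform(_:) is called for each frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
        do {
            try handler.perform([request])
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}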
Any guidance or advice is appreciated! Thanks in advance.
Hi everyone,
Could someone confirm if it's currently possible, or if there are any plans, to restrict users from enabling Apple Intelligence altogether?
I understand that we can block individual features using MDM, but I'm interested in knowing if we can prevent users from toggling Apple Intelligence on and off in System Settings entirely.
Thanks!
Kind Regards,
Filipe Nogueira
I noticed that the ChatGPT is listed as an "Extension" in the Apple Intelligence settings on iOS 18.2 beta. Does this mean developers will be able to create their own extensions? Or will this be limited to larger companies to incorporate their own models into Apple Intelligence?
Hello, we're investigating an option to disable writing tools for some customers in our app. I'm aware of the writingToolsBehavior property for UITextView etc, but we would like to have a way to set this globally without having to update all UITextView instances (or future instances). Is there any API to do this?
We tried using UITextView.appearance().writingToolsBehavior = .none, and it seemed promising on the 18.2 beta; however, it introduced crashes on devices running 18.1.
The crashes look like:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UITextView: 0x14462c000; frame = (0 0; 0 0); text = ''; userInteractionEnabled = NO; gestureRecognizers = <NSArray: 0x30067cb40>; backgroundColor = UIExtendedGrayColorSpace 0 0; layer = <CALayer: 0x3009b1ba0>; contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = {
"setWritingToolsBehavior:" = (
);
}'
Similarly, even on the 18.2 beta, if we use UITextField.appearance().writingToolsBehavior = .none we see crashes for any search fields, like:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UISearchBarTextField: 0x141c04a00; frame = (0 0; 0 0); text = ''; opaque = NO; gestureRecognizers = <NSArray: 0x301fe15c0>; placeholder = Search Leads; borderStyle = RoundedRect; background = <_UITextFieldSystemBackgroundProvider: 0x3015de960: backgroundView=<_UISearchBarSearchFieldBackgroundView: 0x141c60200; frame = (0 0; 0 0); opaque = NO; autoresize = W+H; userInteractionEnabled = NO; layer = <CALayer: 0x3015de8e0>>, fillColor=(null), textfield=<UISearchBarTextField: 0x141c04a00>>; layer = <CALayer: 0x3015de240>> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = {
"setWritingToolsBehavior:" = (
);
}'
Is it possible to set this globally?
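For now, the workaround we're leaning towards is gating the appearance proxy call by OS version so it only runs where it didn't crash in our testing (just a sketch, not a verified fix, and it leaves UITextField alone because of the search-field crash above):

import UIKit

// Only touch the appearance proxy on OS versions where it didn't crash for us.
if #available(iOS 18.2, *) {
    UITextView.appearance().writingToolsBehavior = .none
    // UITextField.appearance().writingToolsBehavior = .none  // still crashed for search fields, so omitted
}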
Hello,
I am exploring real-time object detection on live video streams, and replacing/overlaying the detected object with another shape, for an iOS app using the Core ML and Vision frameworks. My target is to achieve high-speed, real-time detection without noticeable latency, similar to what's possible with page-fault handling and associative caching in an OS, but applied to video processing.
Given that this requires consistent, real-time model inference, I’m curious about how well the Neural Engine or GPU can handle such tasks on A-series chips in iPhones versus M-series chips (specifically M1 Pro and possibly M4) in MacBooks. Here are a few specific points I’d like insight on:
Hardware Suitability: How feasible is it to perform real-time object detection with Core ML on the Neural Engine (i.e., can it maintain low latency)? Would the M-series chips (e.g., M1 Pro or newer) offer a tangible benefit for this type of task compared to the A-series in mobile devices? Which A- and M-series chips would be the minimum feasible recommendation for such a task?
Performance Expectations: For continuous, live video object detection, what would be the expected frame rate or latency using an optimized Core ML model? Has anyone benchmarked such applications, and is the M-series required to achieve smooth, real-time processing?
Differences Across Apple Hardware: How does performance scale between the A-series Neural Engine and M-series GPU and Neural Engine? Is the M-series vastly superior for real-time Core ML tasks like object detection on live video feeds?
If anyone has attempted live object detection on these chips, any insights on real-time performance, limitations, or optimizations would be highly appreciated.
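For context, the plan is to load the model so it prefers the Neural Engine, roughly like this (a sketch; "MyDetector" is a placeholder model name):

import CoreML

// Prefer the Neural Engine (with CPU fallback) for real-time inference;
// .all would also let Core ML schedule work on the GPU.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// "MyDetector" is a placeholder for the compiled model in the app bundle.
let modelURL = Bundle.main.url(forResource: "MyDetector", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: configuration)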
Please refer: Apple APIs
Thank you in advance for your help!
Hi all,
When executing an HLO program using the JAX metal PJRT plugin, the program fails due to an unsupported data type returned by the rng_bit_generator operation.
The generated HLO includes:
%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)
The error message indicates that:
Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64.
The use of ui32 seems to be incompatible with Metal’s allowed types.
I'm trying to understand whether the ui32 output is the problem, or whether my use of rng_bit_generator is wrong.
Could you clarify if there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal’s supported types would be greatly appreciated.
My Playground app has been stuck at "Downloading support for Image Playground" for about 4-5 days now. I've reinstalled the app, restarted the phone multiple times, and reset network settings 2-3 times. I've tried LTE/5G and different Wi-Fi networks, and it still shows "Downloading Support".
Anyone else having this issue?
How long does it usually take to get access to Image Playground? It's been about a week since I got the iOS 18.2 public beta and I'm still waiting for access to Image Playground. When I got Apple Intelligence, it only took a few hours.
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.
func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
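This is my current understanding of how it would be used to paste a patch into a canvas; the shapes, the NHWC layout, and the assumption that ends = starts + patch extent are guesses on my part, not something I've confirmed against the docs:

import MetalPerformanceShadersGraph

let graph = MPSGraph()

// Assumed NHWC layout: canvas is 512x512 RGB, patch is 128x128 RGB.
let canvas = graph.placeholder(shape: [1, 512, 512, 3], dataType: .float32, name: "canvas")
let patch  = graph.placeholder(shape: [1, 128, 128, 3], dataType: .float32, name: "patch")

// Paste the patch with its top-left corner at (y: 100, x: 200).
// Assumption: ends = starts + patch extent per dimension, stride 1, no masks.
let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts: [0, 100, 200, 0],
    ends: [1, 228, 328, 3],
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: "paste")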
I requested early access a few days ago, and when I check it still says "we will notify you when it is ready". Can Apple please fix this problem with Image Playground?
I got access, but then when I opened the app it still said I was waiting. I was very sad.
I woke up to a notification saying Playground, Genmoji, etc. were ready, but every time I try to use them it says early access was requested. Has anyone else had this issue? If so, how did you fix it?
I would like to explore restricting the system prompts that share information with ChatGPT on my company's laptops. Is this possible, and if so, how can it be implemented? Should this be managed centrally, or should each user handle it individually?
I have a model that uses a CoreML delegate, and I’m getting the following warning whenever I set the model to nil. My understanding is that CoreML is creating a cache in the app’s storage but is having issues clearing it. As a result, the app’s storage usage increases every time the model is loaded.
This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage
This is a critical issue because the cache will eventually fill up the phone’s storage:
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ : sourceURL= (null) : key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} : identifierSource=0 : cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } : state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 : attr={
ANEFModelDescription = {
ANEFModelInput16KAlignmentArray = (
);
ANEFModelOutput16KAlignmentArray = (
);
ANEFModelProcedures = (
{
ANEFModelInputSymbolIndexArray = (
0
);
ANEFModelOutputSymbolIndexArray = (
0,
1
);
ANEFModelProcedureID = 0;
}
);
kANEFModelInputSymbolsArrayKey = (
"0_0"
);
kANEFModelOutputSymbolsArrayKey = (
"138_0@output",
"142_0@output"
);
kANEFModelProcedureNameToIDMapKey = {
net = 0;
};
};
NetworkStatusList = (
{
LiveInputList = (
{
BatchStride = 393216;
Batches = 1;
Channels = 3;
Depth = 1;
DepthStride = 393216;
Height = 256;
Interleave = 1;
Name = "0_0";
PlaneCount = 3;
PlaneStride = 131072;
RowStride = 512;
Symbol = "0_0";
Type = Float16;
Width = 256;
}
);
LiveOutputList = (
{
BatchStride = 113664;
Batches = 1;
Channels = 111;
Depth = 1;
DepthStride = 113664;
Height = 16;
Interleave = 1;
Name = "138_0@output";
PlaneCount = 111;
PlaneStride = 1024;
RowStride = 64;
Symbol = "138_0@output";
Type = Float16;
Width = 16;
},
{
BatchStride = 227328;
Batches = 1;
Channels = 222;
Depth = 1;
DepthStride = 227328;
Height = 16;
Interleave = 1;
Name = "142_0@output";
PlaneCount = 222;
PlaneStride = 1024;
RowStride = 64;
Symbol = "142_0@output";
Type = Float16;
Width = 16;
}
);
Name = net;
}
);
} : perfStatsMask=0} was not loaded by the client.
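As a stopgap, we're considering clearing leftover compiled models from the app's tmp directory on launch (just a sketch; it assumes the cache really lives under tmp as the modelURL in the log suggests, and that deleting it while no model is loaded is safe, which I haven't confirmed):

import Foundation

// Remove leftover compiled model bundles (.mlmodelc) from tmp while no model is loaded.
let tmp = FileManager.default.temporaryDirectory
if let items = try? FileManager.default.contentsOfDirectory(at: tmp, includingPropertiesForKeys: nil) {
    for url in items where url.pathExtension == "mlmodelc" {
        try? FileManager.default.removeItem(at: url)
    }
}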