Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.


Custom Model Not Working Correctly in the Application #56
I created a model that classifies certain objects using YOLOv8. I noticed that the model is not working properly in my application. While the model works fine in the Xcode preview, in the application it either returns the same result with 99% confidence for every classification or does not return any result at all. In Preview it looks like this:

(Predictions screenshot from the Xcode model preview)

    extension CameraVC: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
            guard let data = photo.fileDataRepresentation() else { return }
            guard let image = UIImage(data: data) else { return }
            guard let cgImage = image.cgImage else { fatalError("Unable to create CIImage") }

            let handler = VNImageRequestHandler(cgImage: cgImage, orientation: CGImagePropertyOrientation(image.imageOrientation))
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([self.viewModel.detectionRequest])
                } catch {
                    fatalError("Failed to perform detection: \(error)")
                }
            }
        }
    }

The detection request (referenced above as viewModel.detectionRequest) is defined like this:

    lazy var detectionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: bestv720().model)
            let request = VNCoreMLRequest(model: model) { [weak self] request, error in
                self?.processDetections(for: request, error: error)
            }
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("Failed to load Vision ML model: \(error)")
        }
    }()

This is where I print the recognized objects:

    func processDetections(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results as? [VNRecognizedObjectObservation] else { return }

            var label = ""
            var all_results = []
            var all_confidence = []
            var true_results = []
            var true_confidence = []

            for result in results {
                for i in 0...results.count {
                    all_results.append(result.labels[i].identifier)
                    all_confidence.append(result.labels[i].confidence)
                    for confidence in all_confidence {
                        if confidence as! Float > 0.7 {
                            true_results.append(result.labels[i].identifier)
                            true_confidence.append(confidence)
                        }
                    }
                }
                label = result.labels[0].identifier
            }

            print("True Results ", true_results)
            print("True Confidence ", true_confidence)
            self.output?.updateView(label: label)
        }
    }

I converted the model like this:

    from ultralytics import YOLO

    model = YOLO(model_path)
    model.export(format='coreml', nms=True, imgsz=[720, 1280])
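Not part of the original post, but one detail stands out in the loop above: result.labels is indexed with 0...results.count (the number of observations), which can read past the end of a label array and keeps appending the same top identifier. A minimal sketch of a loop that walks each observation's own labels instead; the 0.7 threshold comes from the question and everything else is assumed:

    import Vision

    /// Hypothetical helper, not the poster's code: collect every label above a
    /// confidence threshold from a set of recognized-object observations.
    func filteredLabels(from results: [VNRecognizedObjectObservation],
                        threshold: Float = 0.7) -> [(identifier: String, confidence: Float)] {
        var matches: [(identifier: String, confidence: Float)] = []
        for observation in results {
            // Walk this observation's own labels rather than indexing them by
            // the number of observations returned by the request.
            for candidate in observation.labels where candidate.confidence > threshold {
                matches.append((candidate.identifier, candidate.confidence))
            }
        }
        return matches
    }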
Replies: 2 · Boosts: 1 · Views: 595 · Last activity: Jun ’24
No Speedup with CoreML SDPA
I am testing the new scaled dot product attention CoreML op on macOS 15 beta 1. Based on the session video I was expecting to see a speedup when running on the GPU; however, I see roughly equivalent performance to the same model on macOS 14.

I ran tests with two models:

- one that simply repeats y = sdpa(y, k, v) 50 times
- GPT-2 124M converted from nanoGPT (the only change is not returning the loss from the forward method)

I converted both models using coremltools 8.0b1, with minimum deployment targets of macOS 14 and also macOS 15. In Xcode, I can see that the new op was used for the macOS 15 target. Running on macOS 15, both target models take the same time, and that time matches the runtime on macOS 14. Should I be seeing performance improvements?
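Not from the post: a small timing harness of the kind that could be used to compare the two builds outside of Xcode's performance report. The model URL and the ready-made MLFeatureProvider input are assumptions; the point of the sketch is only the explicit compute-unit choice and the warm-up/measure split.

    import CoreML
    import QuartzCore

    // Sketch: load a compiled model with GPU-eligible compute units and report
    // the mean prediction latency, so the macOS 14 and macOS 15 deployment
    // targets of the same network can be compared on one machine.
    func meanLatency(modelURL: URL, input: MLFeatureProvider, iterations: Int = 50) throws -> Double {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuAndGPU            // keep the GPU in play for the SDPA comparison
        let model = try MLModel(contentsOf: modelURL, configuration: config)

        _ = try model.prediction(from: input)       // warm-up run, excluded from the timing
        let start = CACurrentMediaTime()
        for _ in 0..<iterations {
            _ = try model.prediction(from: input)
        }
        return (CACurrentMediaTime() - start) / Double(iterations) * 1000.0   // ms per prediction
    }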
Replies: 2 · Boosts: 3 · Views: 600 · Last activity: Jun ’24
CoreML Text Classifier in Message Filter
Hi all, I'm trying to build scam detection into a Message Filter extension powered by CoreML. I find the ML predictions reliable, and a solution for text fraud and scams is sorely needed. I was able to create a trained MLModel and deploy it in the app. It works in my container app, but when I try to initialise the model in the Message Filter extension, I get an error:

    initialization of text classifier model with model data failed

I have tried putting the model in the container app, in the extension, and even in a shared framework used by both, but to no avail. Every time I invoke the code to initialise my model from the extension, I am met with the same error. Here's my code for initializing the model:

    do {
        let model = try Ace_v24_6(configuration: .init())
        let output = try model.prediction(text: text)
        guard !output.label.isEmpty else { return nil }
        return MessagePrediction(rawValue: output.label)
    } catch {
        return nil
    }

My question is: is it impossible to use CoreML in Message Filter extensions? Cheers
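Not from the post: the catch block above discards the thrown error, so it can help to log it from inside the extension and read the full underlying error chain (which may point at a missing model resource, a memory limit, or something else). A small sketch, assuming the same generated Ace_v24_6 class from the question:

    import CoreML
    import os

    // Hypothetical variant of the poster's initialisation that surfaces the
    // underlying error instead of returning nil silently.
    func loadClassifier() -> Ace_v24_6? {
        do {
            return try Ace_v24_6(configuration: MLModelConfiguration())
        } catch {
            os_log("Message filter model init failed: %{public}@", String(describing: error))
            return nil
        }
    }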
Replies: 1 · Boosts: 0 · Views: 543 · Last activity: Jun ’24
Siri only uses first App Shortcut defined
Using App Shortcuts with App Intents, Siri only responds to the first shortcut defined in the AppShortcutsProvider below.

    struct MementoShortcuts: AppShortcutsProvider {
        @AppShortcutsBuilder
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: SaveLinkIntent(),
                phrases: [
                    "Add a link to \(.applicationName)",
                    "Add \(\.$url) to \(.applicationName)",
                    "Make a new link in \(.applicationName)",
                    "Create a new link in \(.applicationName) from \(\.$url)"
                ],
                shortTitle: "Add Link",
                systemImageName: "link.badge.plus"
            )
            AppShortcut(
                intent: LinkViewedIntent(),
                phrases: [
                    "Mark a link I saved in \(.applicationName) as viewed",
                    "Mark \(\.$link) as viewed in \(.applicationName)",
                    "Set link in \(.applicationName) to viewed",
                    "Change status of \(\.$link) to viewed in \(.applicationName)"
                ],
                shortTitle: "Mark Link as Viewed",
                systemImageName: "book"
            )
        }
    }

I have tried switching the order, and Siri always uses the one that comes first. Both show up in the Shortcuts app as App Shortcuts, but only one shortcut is recognized by Siri, even if I say the other one's phrase.
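Not from the post, and not a confirmed fix: phrases that embed a parameter such as \(\.$link) are only offered to Siri once the system knows the possible values of that parameter, so one thing worth ruling out is that the parameter values have not been (re)donated yet. A minimal sketch, assuming the entities behind $url and $link already have queries:

    import AppIntents

    // Hypothetical call site: ask the system to refresh the parameter values
    // used in the App Shortcut phrases whenever links are added or changed.
    func linksDidChange() {
        MementoShortcuts.updateAppShortcutParameters()
    }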
Replies: 1 · Boosts: 0 · Views: 574 · Last activity: Jun ’24
In iOS 18 beta, the SoundAnalysis framework reports an error when the iPhone is locked
I use SoundAnalysis to analyze background sounds and have enabled background permissions. It worked well on previous iOS versions, but in the new iOS 18 beta a warning appears and sound analysis stops.

Warning list:

    Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background)
    [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
    Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
    CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
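Not from the post, and only a possible workaround: the warning says GPU work may not be submitted from the background, so if the analysis runs a custom Core ML sound classifier, loading it with CPU-only compute units keeps inference off the GPU entirely. Whether this matches the poster's setup (built-in classifier vs. custom model) is an assumption; the URL must point to a compiled .mlmodelc.

    import CoreML
    import SoundAnalysis

    // Sketch: build a sound-classification request whose model never submits
    // GPU work, so background execution does not hit the
    // kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted error.
    func makeBackgroundSafeRequest(compiledModelURL: URL) throws -> SNClassifySoundRequest {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuOnly               // no GPU, no Neural Engine
        let model = try MLModel(contentsOf: compiledModelURL, configuration: config)
        return try SNClassifySoundRequest(mlModel: model)
    }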
Replies: 6 · Boosts: 4 · Views: 688 · Last activity: Jun ’24
Vision and iOS18 - Failed to create espresso context.
I'm playing with the new Vision API in iOS 18, specifically the new CalculateImageAestheticsScoresRequest API. When I try to perform the image observation request I get this error:

    internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

The code is pretty straightforward:

    if let image = image {
        let request = CalculateImageAestheticsScoresRequest()
        Task {
            do {
                let cgImg = image.cgImage!
                let observations = try await request.perform(on: cgImg)
                let description = observations.description
                let score = observations.overallScore
                print(description)
                print(score)
            } catch {
                print(error)
            }
        }
    }

I'm running it on an M2 using the simulator. Is it a bug? What's wrong?
Replies: 1 · Boosts: 0 · Views: 491 · Last activity: Jun ’24
The CoreML runtime is inconsistent.
    for (int i = 0; i < 1000; i++) {
        double st_tmp = CFAbsoluteTimeGetCurrent();
        retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
        double et_tmp = CFAbsoluteTimeGetCurrent();
        NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
    }

When I run a CoreML model using the above code, I notice that the runtime gradually decreases at the beginning. Output:

    [enhance once] 14.965057 ms
    [enhance once] 12.727022 ms
    [enhance once] 12.818098 ms
    [enhance once] 11.829972 ms
    [enhance once] 11.461020 ms
    [enhance once] 10.949016 ms
    [enhance once] 10.712981 ms
    [enhance once] 10.367990 ms
    [enhance once] 10.077000 ms
    [enhance once] 9.699941 ms
    [enhance once] 9.370089 ms
    [enhance once] 8.634090 ms
    [enhance once] 7.659078 ms
    [enhance once] 7.061005 ms
    [enhance once] 6.729007 ms
    [enhance once] 6.603003 ms
    [enhance once] 6.427050 ms
    [enhance once] 6.376028 ms
    [enhance once] 6.509066 ms
    [enhance once] 6.452084 ms
    [enhance once] 6.549001 ms
    [enhance once] 6.616950 ms
    [enhance once] 6.471038 ms
    [enhance once] 6.462932 ms
    [enhance once] 6.443977 ms
    [enhance once] 6.683946 ms
    [enhance once] 6.538987 ms
    [enhance once] 6.628990 ms
    ...

In most deep learning inference frameworks there is usually a warmup process, but typically only the first inference is slower. Why does CoreML show a gradually decreasing runtime at the beginning? Is there a way to make only the first inference take longer, while keeping the rest consistent? I use the CoreML model in the (void)display_pixels:(IJKOverlay *)overlay function.
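Not from the post: a common way to keep measurements consistent is to run a fixed number of throwaway predictions before timing anything. A minimal Swift sketch of that idea, assuming a generic MLModel and a prepared input; whether 30 iterations is enough for this particular model is an assumption.

    import CoreML

    // Run warm-up predictions whose results are discarded, so the gradually
    // decreasing early latencies are excluded from any timing that follows.
    func warmUp(_ model: MLModel, input: MLFeatureProvider, iterations: Int = 30) throws {
        for _ in 0..<iterations {
            _ = try model.prediction(from: input)
        }
    }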
Replies: 1 · Boosts: 1 · Views: 606 · Last activity: May ’24
coreml convert flatten to reshape, but npu does not support reshape
I have a model that uses 'flatten', and when I converted it to a Core ML model and profiled it in Xcode with an iPhone XR, I noticed that 'flatten' was automatically converted to 'reshape'. However, the NPU does not support 'reshape'.

However, when I got the ResNet50 model from Apple's model page and profiled it in Xcode with the same iPhone XR, I can see a 'flatten' operator that runs on the NPU. On the other hand, when I used the following code to convert ResNet50 from PyTorch and ran it through Xcode's performance report, the 'flatten' operation was converted to 'reshape', which then ran on the CPU. So I don't know how to keep the 'flatten' operator when converting to a Core ML model.

Environment: coremltools 7.1, iPhone XR, iOS 17.5.1

    from torchvision import models
    import coremltools as ct
    import numpy as np
    import torch
    import torch.nn as nn

    network_name = "my_resnet50"
    torch_model = models.resnet50(pretrained=True)
    torch_model.eval()

    width = 224
    height = 224
    example_input = torch.rand(1, 3, height, width)
    traced_model = torch.jit.trace(torch_model, (example_input))

    model = ct.convert(
        traced_model,
        convert_to="neuralnetwork",
        inputs=[
            ct.TensorType(
                name="data",
                shape=example_input.shape,
                dtype=np.float32
            )
        ],
        outputs=[
            ct.TensorType(
                name="output",
                dtype=np.float32
            )
        ],
        compute_units=ct.ComputeUnit.CPU_AND_NE,
        minimum_deployment_target=ct.target.iOS14,
    )
    model.save("my_resnet.mlmodel")

(Screenshots: Apple's ResNet50 in Resnet50.mlmodel, and my conversion of ResNet50)
Replies: 1 · Boosts: 0 · Views: 499 · Last activity: Jun ’24
Any luck with DINO conversion to MPS Graph/CoreML ?
The DINO v1/v2 models are particularly interesting to me, as they produce embeddings for the detected objects rather than ordinary classification indexes. That makes them much more useful than the CNN-based models. I would like to prepare some of the models posted on Hugging Face to run on Apple Silicon, but it seems that the default conversion with TorchScript will not work, and the other default conversions I've looked at so far don't work either. Conversion based on an example input doesn't capture enough of the model. I know that some have managed to convert it, since I have a demo with a Core ML model that seems to work, but I would like to know how to do the conversion myself. Has anyone managed to convert any of the DINOv2 models?
Replies: 0 · Boosts: 0 · Views: 387 · Last activity: Jun ’24
tf.function decorator with tensorflow-metal breaks tf.signal.fft3d()
I consistently receive corrupted results from tf.signal.fft3d() when it is called within a function that has a @tf.function decorator. The results are all zero (0.) for entries after a certain x index (see image). Surprisingly, the issue depends on the matrix size: for example, (1023, 1023, 287) works but (1023, 1023, 575) does not. The issue is problematic because it occurs silently and not for all matrix sizes, i.e. it can easily slip through tests.

The error occurs only when tensorflow-metal is installed. The TensorFlow version is 2.16.1. My hardware is a MacBook Pro M3 Max with 40 GPU cores and 128 GB RAM, running macOS Sonoma 14.5 (23F79).

A Python environment to reproduce the bug can be created as follows:

    conda create --name tfmetalbug python=3.11.9
    conda activate tfmetalbug
    pip install tensorflow tensorflow-metal
    conda install matplotlib

The following code reproduces the issue:

    import tensorflow as tf
    import numpy as np
    import matplotlib.pyplot as plt

    # Wrap fft3d with tf.function
    @tf.function
    def fft3d_wrapper_function(x):
        return tf.signal.fft3d(x)

    # Generate a random 3D image and convert it to complex values
    img = tf.random.normal(shape=(1023, 1023, 575), stddev=1., dtype=float)
    img = tf.dtypes.cast(img, tf.complex64)

    # Compute the 3D FFT
    img_fft = fft3d_wrapper_function(img)

    # Visualize the 3D FFT
    plt.imshow(np.real(img_fft)[:, img_fft.shape[1]//2+10, :], cmap="gray", vmin=-0.001, vmax=0.001)
    plt.savefig("fft3d_wrapper_function.png")

For me, removing the @tf.function decorator resolved the issue.
Replies: 0 · Boosts: 0 · Views: 469 · Last activity: Jun ’24
CoreML Crashed in iOS18 Beta
Here is an app that uses the CoreML API with the ML package format. It works fine on iOS 17, but it crashes when calling [MLModel modelWithContentsOfURL:] to load the model on iOS 18. It seems an exception is raised: "Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)". Is this a bug of the iOS 18 beta, and will it be fixed in the future? The stack is below:

    Exception Codes: #0 at 0x1e9280254
    Crashed Thread: 49

    Application Specific Information:
    *** Terminating app due to uncaught exception 'NSGenericException', reason: 'Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)'

    Last Exception Backtrace:
    0   CoreFoundation     0x0000000199466418 __exceptionPreprocess + 164
    1   libobjc.A.dylib    0x00000001967cde88 objc_exception_throw + 76
    2   CoreFoundation     0x0000000199560794 -[NSException initWithCoder:]
    3   CoreML             0x00000001b4fcfa8c -[MLE5ProgramLibraryOnDeviceAOTCompilationImpl createProgramLibraryHandleWithRespecialization:error:] + 1584
    4   CoreML             0x00000001b4fcf3cc -[MLE5ProgramLibrary _programLibraryHandleWithForceRespecialization:error:] + 96
    5   CoreML             0x00000001b4fc23d8 __44-[MLE5ProgramLibrary prepareAndReturnError:]_block_invoke + 60
    6   libdispatch.dylib  0x00000001a12e1160 _dispatch_client_callout + 20
    7   libdispatch.dylib  0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
    8   CoreML             0x00000001b4fc3e98 -[MLE5ProgramLibrary prepareAndReturnError:] + 220
    9   CoreML             0x00000001b4fc3bc0 -[MLE5Engine initWithContainer:configuration:error:] + 220
    10  CoreML             0x00000001b4fc3888 +[MLE5Engine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 344
    11  CoreML             0x00000001b4faf53c +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 364
    12  CoreML             0x00000001b4faedd4 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:] + 540
    13  CoreML             0x00000001b4f9b900 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 424
    14  CoreML             0x00000001b4faaeac +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 460
    15  CoreML             0x00000001b4fb0428 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:] + 240
    16  CoreML             0x00000001b4fb00c4 +[MLLoader loadModelFromAssetAtURL:configuration:error:] + 104
    17  CoreML             0x00000001b5314118 -[MLModelAssetResourceFactoryOnDiskImpl modelWithConfiguration:error:] + 116
    18  CoreML             0x00000001b5418cc0 __60-[MLModelAssetResourceFactory modelWithConfiguration:error:]_block_invoke + 72
    19  libdispatch.dylib  0x00000001a12e1160 _dispatch_client_callout + 20
    20  libdispatch.dylib  0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
    21  CoreML             0x00000001b5418b94 -[MLModelAssetResourceFactory modelWithConfiguration:error:] + 276
    22  CoreML             0x00000001b542919c -[MLModelAssetModelVendor modelWithConfiguration:error:] + 152
    23  CoreML             0x00000001b5380ce4 -[MLModelAsset modelWithConfiguration:error:] + 112
    24  CoreML             0x00000001b4fb0b3c +[MLModel modelWithContentsOfURL:configuration:error:] + 168
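Not from the post, and only an assumption about a possible mitigation: the exception complains about an empty compute-device mask, so one experiment is to load the model with an MLModelConfiguration that names the compute units explicitly rather than relying on the default. Whether this avoids the iOS 18 beta crash is not confirmed.

    import CoreML

    // Hypothetical loading path: state the compute units explicitly when
    // loading the model, instead of the parameterless modelWithContentsOfURL:
    // call shown in the crash stack.
    func loadModel(at compiledModelURL: URL) throws -> MLModel {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuAndNeuralEngine   // or .all / .cpuOnly as an experiment
        return try MLModel(contentsOf: compiledModelURL, configuration: config)
    }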
Replies: 2 · Boosts: 0 · Views: 753 · Last activity: Jun ’24