Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

52 Posts

Post

Replies

Boosts

Views

Activity

CreateML/CoreML Issues with Large Dataset
Hello all, I'm developing a machine learning model for image classification that requires managing an exceptionally large dataset of over 18,000 classes. I've hit several hurdles while using Create ML, and I would appreciate any insights or advice from those who have faced similar challenges.

Current issues:
- Create ML failures with large datasets: the process often fails with errors such as "Failed to create CVPixelBufferPool" when handling particularly large volumes of data.
- Custom implementation struggles: to bypass some of Create ML's limitations, I built a custom solution around MLImageClassifier from the CreateML framework in my own SwiftUI macOS app. Initially I hit the same errors as in Create ML, but I found I could get past the "extracting features" stage without crashing by using a timer to cancel and restart the job every 30 seconds. This workaround is the only way I've been able to finish the extraction phase on large datasets, but it floods the console with errors if it runs too long.
- Lack of progress reporting: with MLJob<MLImageClassifier>, progress reporting stalls after the feature extraction phase. System resources indicate activity, but there is no programmatic feedback on what is occurring.

Things I've tried:
- Data validation: ensured all images in the dataset are valid and non-corrupted, which prevents unnecessary issues from faulty data.
- Custom implementation with the CreateML framework: built a custom solution using MLImageClassifier to gain more control over the training process.
- Timer-based workaround: cancel and restart the job every 30 seconds to get past the "extracting features" phase, allowing progress even with larger datasets.
- Monitoring system resources: observed ongoing resource usage while process feedback stalled, confirming background activity despite the lack of progress reporting.
- Subset testing: successfully created and tested a model on a subset of the data, which validated the approach for smaller datasets and produced a functioning model.
- Router model concept: considered training multiple models on different subsets of the data and adding a "router" model that decides which specialized model to use based on input characteristics.

What I need help with:
- Handling large datasets: strategies or best practices for using Create ML effectively with datasets this large. Any guidance on memory management or alternative methodologies would be immensely helpful.
- Improving progress reporting: ways to obtain more consistent, programmatic progress updates during the training and testing phases.

I'm working on an M1 Pro Mac with 32 GB RAM, so fully on Apple Silicon and within the Apple ecosystem. I'm very grateful for any advice or experience you can share. Thank you! The relevant code is below:

func go() {
    if self.trainingSession == nil {
        self.trainingSession = createTrainingSession()
    }
    if self.startTime == nil {
        self.startTime = Date()
    }
    job = try! MLImageClassifier.resume(self.trainingSession)

    job.phase
        .receive(on: RunLoop.main)
        .sink { phase in
            self.phase = phase
        }
        .store(in: &cancellables)

    job.checkpoints
        .receive(on: RunLoop.main)
        .sink { checkpoint in
            self.state = "\(checkpoint)\n\(self.job.progress)"
            self.progress = self.job.progress.fractionCompleted + 0.2
            self.updateTimeEstimates()
        }
        .store(in: &cancellables)

    job.result
        .receive(on: DispatchQueue.main)
        .sink(receiveCompletion: { completion in
            switch completion {
            case .failure(let error):
                print("Training Failed: \(error.localizedDescription)")
            case .finished:
                print("🎉🎉🎉🎉 TRAINING SESSION FINISHED!!!!")
                self.trainingFinished = true
            }
        }, receiveValue: { classifier in
            Task { await self.saveModel(classifier) }
        })
        .store(in: &cancellables)
}

private func createTrainingSession() -> MLTrainingSession<MLImageClassifier> {
    do {
        print("Initializing training data...")
        let trainingData: MLImageClassifier.DataSource = .labeledDirectories(at: trainingDataURL)
        let modelParameters = MLImageClassifier.ModelParameters(
            validation: .split(strategy: .automatic),
            augmentation: self.augmentations,
            algorithm: .transferLearning(
                featureExtractor: .scenePrint(revision: 2),
                classifier: .logisticRegressor
            )
        )
        let sessionParameters = MLTrainingSessionParameters(
            sessionDirectory: self.sessionDirectoryURL,
            reportInterval: 1,
            checkpointInterval: 100,
            iterations: self.numberOfIterations
        )

        print("Initializing training session...")
        let trainingSession: MLTrainingSession<MLImageClassifier>
        if FileManager.default.fileExists(atPath: self.sessionDirectoryURL.path)
            && isSessionCreated(atPath: self.sessionDirectoryURL.path()) {
            do {
                trainingSession = try MLImageClassifier.restoreTrainingSession(sessionParameters: sessionParameters)
            } catch {
                print("error resuming, exiting.... \(error.localizedDescription)")
                fatalError()
            }
        } else {
            trainingSession = try MLImageClassifier.makeTrainingSession(
                trainingData: trainingData,
                parameters: modelParameters,
                sessionParameters: sessionParameters
            )
        }
        return trainingSession
    } catch {
        print("Failed to initialize training session: \(error.localizedDescription)")
        fatalError()
    }
}
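On the progress-reporting point, a minimal sketch of squeezing more feedback out of what the post already uses, nothing Create ML-specific beyond that: MLJob's progress property is a Foundation Progress object, so fractionCompleted can be observed with KVO independently of the checkpoint publisher, which may surface movement during phases that never emit a checkpoint.

import CreateML
import Foundation

// Hold the observation for as long as the job is running.
var progressObservation: NSKeyValueObservation?

func observeProgress(of job: MLJob<MLImageClassifier>) {
    progressObservation = job.progress.observe(\.fractionCompleted, options: [.new]) { progress, _ in
        // Fires on every KVO change, even during phases that never publish a checkpoint.
        print("fractionCompleted: \(progress.fractionCompleted)")
    }
}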
0
0
150
6d
How to Fine-Tune the SNSoundClassifier for Custom Sound Classification in iOS?
Hi Apple Developer Community, I'm exploring ways to fine-tune the SNSoundClassifier so that users of my iOS app can personalize the model by adding custom sounds or adjusting predictions. Apple's WWDC session on sound classification explains how to train a model from scratch, but I'm specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it. A few questions:

1. Fine-tuning SNSoundClassifier: Is there a way to fine-tune this model programmatically through APIs? The manual approach on macOS shown in the documentation is clear, but how can it be done dynamically, either within the app for users or in a cloud backend (AWS/iCloud)? Are there APIs or classes that support such on-device or cloud-based fine-tuning or incremental learning? If not directly, can the classifier's embeddings be used to train a lightweight custom layer? Training is likely computationally intensive and battery-draining on device, so the cloud may be the right place for it, but I need the right APIs to get this done. Sample code would help.

2. Recommended approach for in-app model customization: If SNSoundClassifier doesn't support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable? Among these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. Architecture retention and accuracy after conversion to a Core ML model also matter.

3. Cost-effective backend setup for training: Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand when a user uploads files (training data)?

4. TensorFlow vs PyTorch: Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I'm also curious about PyTorch's performance when converted to Core ML.

5. Metrics: The criteria I have in mind while picking the model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.

Any insights or recommended approaches would be greatly appreciated. Thanks in advance!
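For reference, the built-in classifier is exposed through the SoundAnalysis framework for inference only; I'm not aware of any public API for fine-tuning it on device. A minimal sketch of running it, in case it helps frame the question:

import SoundAnalysis

final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // Identifiers come from the built-in label set; there is no public hook to extend it.
        print("\(top.identifier): \(top.confidence)")
    }
}

func classify(fileAt url: URL) throws {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = ResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // blocking; there is also an asynchronous completion-handler variant
}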
2
0
194
5d
CreateML
I'm trying to use the Spatial model to perform Object Tracking on a .usdz file that I create. After loading the file, which I can view correctly in the console, I start the training. Initially the disk usage on my machine increases; after several GB the usage stops, but the training progress stays at 0.00% for hours with the message "About 8hr." How can I work out what the issue is? Has anyone else experienced the same problem? Thanks, Diego
0
1
126
2w
Optimizing YOLOv8 for Real-Time Object Detection in a Specific Screen Area
I'm working on real-time object detection using YOLOv8, but I only need to detect objects in roughly 40% of the screen area. Is it possible to limit the captureOutput method to focus solely on that specific region of the screen? If that isn't feasible, I'm considering capturing the full-screen pixel buffer and cropping it to the target area before running detection, but I'm concerned about how this might affect real-time performance. I'd appreciate any insights on maintaining real-time performance, or suggestions for better alternatives. Thank you!
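If the YOLOv8 model is run through Vision rather than directly on raw pixel buffers, one option is VNImageBasedRequest's regionOfInterest, which restricts processing to a normalized sub-rectangle without manual cropping. A minimal sketch, assuming a Core ML conversion of YOLOv8 whose generated class is called YOLOv8Converted here (a hypothetical name):

import CoreML
import Vision

func makeDetectionRequest() throws -> VNCoreMLRequest {
    // Substitute the class name generated for your converted .mlpackage.
    let mlModel = try YOLOv8Converted(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        // Coordinates come back normalized to the region of interest, not the full frame.
        print(observations.compactMap { $0.labels.first?.identifier })
    }
    // Normalized rect with origin at the bottom-left: here roughly the lower 40% of the frame.
    request.regionOfInterest = CGRect(x: 0, y: 0, width: 1.0, height: 0.4)
    request.imageCropAndScaleOption = .scaleFill
    return request
}

func process(_ pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}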
2
0
224
2w
Seeking API for Advanced Auto Image Enhancement Similar to Photos App's Auto Feature
Hi everyone, I've been working with the autoAdjustmentFilters provided by Core Image, which includes filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I’ve noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more advanced manner. Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality? Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
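As far as I know, the Photos app's Auto algorithm itself isn't exposed as an API. For reference, a minimal sketch of chaining the Core Image auto-adjustment filters mentioned above, which is the closest built-in path:

import CoreImage

func autoEnhanced(_ input: CIImage) -> CIImage {
    var image = input
    // Returns preconfigured filters for this image (e.g. CIVibrance, CIToneCurve,
    // CIHighlightShadowAdjust); red-eye correction is skipped here.
    let filters = image.autoAdjustmentFilters(options: [.redEye: false])
    for filter in filters {
        filter.setValue(image, forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            image = output
        }
    }
    return image
}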
0
0
142
3w
Handling YOLOv8 Object Detection in 60FPS UltraWideCamera on iOS: Frame Processing Query
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one. I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed? Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated. Thank you!
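For what it's worth, the frame-delivery behavior is configurable on AVCaptureVideoDataOutput: with alwaysDiscardsLateVideoFrames set to true (the default), frames that arrive while captureOutput is still busy are dropped rather than queued, and each drop is reported through the didDrop delegate callback. A minimal sketch:

import AVFoundation

final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    override init() {
        super.init()
        // true (the default): late frames are dropped, keeping latency low.
        // false: frames queue up, increasing latency and memory pressure instead.
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: queue)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Run inference here; if it takes longer than 1/60 s, later frames are dropped.
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Called for each frame the session discarded because the delegate was still busy.
    }
}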
2
0
284
Oct ’24
Kernel dying issue after installing tensorflow
I was working on my project and when I tried to train a model the kernel crashed. I restarted the kernel and tried again, and got the same crash. I then read a thread about the same issue where Apple support recommended installing tensorflow-macos and tensorflow-metal and following the guide at https://developer.apple.com/metal/tensorflow-plugin/. I did so and tried every single thing, but when I run the test code provided on that page I still get the same error. Here's the code and the output.

Code:

import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)

Output:

Epoch 1/5
The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.

Here is the latter half of the log file (it wasn't captured in full):

metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1
2024-10-06 23:30:49.894405: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 8.00 GB
2024-10-06 23:30:49.894420: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 2.67 GB
2024-10-06 23:30:49.894444: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-10-06 23:30:49.894460: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
2024-10-06 23:30:56.701461: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.
[libprotobuf FATAL google/protobuf/message_lite.cc:353] CHECK failed: target + size == res:
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: target + size == res:

Please respond to this post as soon as possible, as I am working on my project now and keep getting this error. Device: Apple MacBook Air M1.
0
0
263
Oct ’24
The Vision request does not work in simulator with Error "Could not create inference context"
When I use VNGenerateForegroundInstanceMaskRequest to generate a mask in the simulator from SwiftUI, I get the error "Could not create inference context". I then added code to force Vision to run on the CPU:

let request = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(ciImage: inputImage)
#if targetEnvironment(simulator)
if #available(iOS 18.0, *) {
    let allDevices = MLComputeDevice.allComputeDevices
    for device in allDevices {
        if device.description.contains("MLCPUComputeDevice") {
            request.setComputeDevice(.some(device), for: .main)
            break
        }
    }
} else {
    // Fallback on earlier versions
    request.usesCPUOnly = true
}
#endif
do {
    try handler.perform([request])
    if let result = request.results?.first {
        let mask = try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
        return CIImage(cvPixelBuffer: mask)
    }
} catch {
    print(error)
}

Even when I force the simulator to run the request on the CPU, it still fails with "Could not create inference context".
2
0
296
Sep ’24
Apple AI / Data Protection & Processing
Where does the processing power to run certain AI capabilities come from? Is it hosted on the originating device, or does the device send the contents of the originating information to Apple assets for processing, which then return the result to the end user? For example, if I ask AI to summarize an email, will it send the contents of the email to an Apple AI asset to process it and return the summary to the originating device?
0
0
249
Sep ’24
Error in TensorFlow in MacBook Air M1 (macOS Monterey)
I'm getting this error again and again, even after reinstalling:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
  Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
1
0
514
Sep ’24
Idea's to improve Apple Watch
Dear Apple Team, I have a suggestion to enhance the Apple Watch user experience. A new feature could provide personalized recommendations based on weather conditions and the user's mood. For example, during hot weather it could suggest drinking something cold, or if the user is feeling down, it could offer ways to boost their mood. This kind of feature could make the Apple Watch not just a health and fitness tracker but also a more capable personal assistant.

"Improve communication with Apple Watch"
Feature #1: Noise detection and location suggestions. Imagine having your Apple Watch detect ambient noise levels and suggest a quieter location for your call.
Feature #2: Context-aware call response options. If you can't answer a call, your Apple Watch could offer pre-set responses to communicate your status and reduce missed-call anxiety. For example, if you're in a busy restaurant, your Apple Watch could suggest moving to a quieter spot nearby for a better conversation. Or if you're in a movie theater, your Apple Watch could send an automatic "I'm in the movies" text to the caller.

"Improve user experience and app management"
Automated Sleep Notifications: The ability for the Apple Watch to automatically turn off notifications or change the watch face when the user is sleeping would provide a more seamless experience. For instance, when the watch detects that the user is in sleep mode, it could enable Do Not Disturb to silence calls and alerts.
Caller Notification: In addition, it would be great if the Apple Watch could inform callers that the user is currently sleeping. This could help manage expectations for those attempting to reach the user at night.
App Management to Conserve Battery: Implementing a feature that detects battery-draining apps while the user is asleep could further extend battery life. The watch and the iPhone could close or pause apps that are using significant power while the user is not active.

I believe these features could provide valuable advancements in enhancing the Apple Watch's usability for those who prioritize a restful night's sleep. Thank you for considering my suggestions. Best regards, Mahmut Ötgen, Istanbul, Turkey
1
0
915
Aug ’24
CoreML Crash on iOS18 Beta5
Hello, my app works well on iOS 17 and earlier iOS 18 betas, but it crashes on the latest iOS 18 Beta 5 when calling the model's predictionFromFeatures. The crash call stack is:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority MLANEExecutionPriority_Unspecified'

Last Exception Backtrace:
0 CoreFoundation 0x000000019bd6408c __exceptionPreprocess + 164
1 libobjc.A.dylib 0x000000019906b2e4 objc_exception_throw + 88
2 CoreFoundation 0x000000019be5f648 -[NSException initWithCoder:]
3 CoreML 0x00000001b7507340 -[MLE5ExecutionStream _setANEExecutionPriorityWithOptions:] + 248
4 CoreML 0x00000001b7508374 -[MLE5ExecutionStream _prepareForInputFeatures:options:error:] + 248
5 CoreML 0x00000001b7507ddc -[MLE5ExecutionStream executeForInputFeatures:options:error:] + 68
6 CoreML 0x00000001b74ce5c4 -[MLE5Engine _predictionFromFeatures:stream:options:error:] + 80
7 CoreML 0x00000001b74ce7fc -[MLE5Engine _predictionFromFeatures:options:error:] + 208
8 CoreML 0x00000001b74cf110 -[MLE5Engine _predictionFromFeatures:usingState:options:error:] + 400
9 CoreML 0x00000001b74cf270 -[MLE5Engine predictionFromFeatures:options:error:] + 96
10 CoreML 0x00000001b74ab264 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 684
11 CoreML 0x00000001b70991bc -[MLDelegateModel predictionFromFeatures:options:error:] + 124

My model file is an ML package. The source code is as below:

// model
MLModel *_model;
......
// model init
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndNeuralEngine;
_model = [MLModel modelWithContentsOfURL:compileUrl configuration:config error:&error];
.....
// model prediction
MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
id<MLFeatureProvider> outFeatures = [_model predictionFromFeatures:_modelInput options:option error:&error];

Is there anything wrong? Any advice would be appreciated.
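Not a confirmed fix, but since the exception is thrown while Core ML sets the ANE execution priority, one thing worth trying while the beta behaves this way is keeping this model off the Neural Engine. A sketch of the idea, shown here in Swift:

import CoreML

// Sketch only: keeps execution off the Neural Engine, where the
// "Unrecognized ANE execution priority" exception is raised on this beta.
// Whether this is acceptable depends on the model's performance needs.
func loadModelAvoidingANE(at compiledURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // instead of .cpuAndNeuralEngine
    return try MLModel(contentsOf: compiledURL, configuration: config)
}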
3
1
526
Aug ’24
Chat gpt audio in background
Dear Apple Development Team, I’m writing to express my concerns and request a feature enhancement regarding the ChatGPT app for iOS. Currently, the app's audio functionality does not work when the app is in the background. This limitation significantly affects the user experience, particularly for those of us who rely on the app for ongoing, interactive voice conversations. Given that many apps, particularly media and streaming services, are allowed to continue audio playback when minimized, it’s frustrating that the ChatGPT app cannot do the same. This restriction interrupts the flow of conversation, forcing users to stay within the app to maintain an audio connection. For users who multitask on their iPhones, being able to switch between apps while continuing to listen or interact with ChatGPT is essential. The ability to reference notes, browse the web, or even respond to messages while maintaining an ongoing conversation with ChatGPT would greatly enhance the app’s usability and align it with other background-capable apps. I understand that Apple prioritizes resource management and device performance, but I believe there’s a strong case for allowing apps like ChatGPT to operate with background audio. Given its growing importance as a tool for productivity, learning, and communication, adding this capability would provide significant value to users. I hope you will consider this feedback for future updates to iOS, or provide guidance on any existing APIs that could be leveraged to enable such functionality. Thank you for your time and consideration. Best regards, luke yes I used gpt to write this.
1
0
434
Aug ’24
xcrun: error: cannot be used within an App Sandbox.
Helpppp. I installed Krita from the App Store and it works. Then I installed ai_diffusion and I got: xcrun: error: cannot be used within an App Sandbox. Can anybody help me? Thanks.

AttributeError
Python 3.10.7: /Applications/krita.app/Contents/MacOS/krita
Sat Aug 3 18:15:59 2024

A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.

/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py in update_settings(self=<ai_diffusion.ui.region.ActiveRegionWidget object>, key='prompt_translation', value=None)
  345             self._layout_language_button()
  346         elif key == "prompt_translation":
  347             self._update_language()
  348
  349     async def _replace_with_translation(self, client: Client):

/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py in _update_language(self=<ai_diffusion.ui.region.ActiveRegionWidget object>)
  381         enabled = self._root._model.translation_enabled
  382         lang = settings.prompt_translation if enabled else "en"
  383         self._language_button.setText(lang.upper())
  384         if enabled:
  385             text = self._lang_help_enabled

  self = <ai_diffusion.ui.region.ActiveRegionWidget object>
  self._language_button = <PyQt5.QtWidgets.QToolButton object>
  lang = None
  lang.upper undefined

AttributeError: 'NoneType' object has no attribute 'upper'

The above is a description of an error in a Python program. Here is the original traceback:

Traceback (most recent call last):
  File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 347, in update_settings
    self._update_language()
  File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 383, in _update_language
    self._language_button.setText(lang.upper())
AttributeError: 'NoneType' object has no attribute 'upper'
1
0
379
Aug ’24
CreateMl Hand Pose Classifier Preview not showing the Prediction result
I have created and trained a Hand Pose classifier model and am trying to test it. In the WWDC2021 session "Classify hand poses and actions with Create ML", the preview window shows a prediction result based on the live preview or imported images. Mine does not. When I import pictures or run the live test, there is no result: just the wireframe view with nothing underneath it. How do I fix this, please? Thanks.
1
0
383
Jul ’24
Use iPad M1 processor as GPU
Hello, I'm currently working on TinyML / ML on the edge using the Google Colab platform. Having exhausted the free usage of my compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my iPad M1 and my Intel-based Mac. Both devices have Thunderbolt ports capable of sharing connections up to 30GB/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary. I'm looking for assistance or guidance on using the iPad's processor as an eGPU for my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
2
0
674
Sep ’24
On device training of text classifier model
I have made a text classifier model, but I want to train it on device too: when text is classified incorrectly, the user can update the model on device. Code:

//
//  SpamClassifierHelper.swift
//  LearningML
//
//  Created by Himan Dhawan on 7/1/24.
//

import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage

enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {
    // MARK: - Private Type Properties

    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    // The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first!

    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle

    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")

    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods

    static func predictLabelFor(_ value: String) throws -> (predication: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }
        // Create feature provider for the new image
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: [
            "label": MLFeatureValue(string: newEntryText),
            "text": MLFeatureValue(string: spam.rawValue)
        ])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])

        let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL, withIntermediateDirectories: true, attributes: nil)
                // Save the updated model to temporary filename.
                try updatedModel.write(to: tempUpdatedModelURL)
                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL, withItemAt: tempUpdatedModelURL)
                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }
        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }
        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
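A sketch of how the helper above might be driven from calling code, using only the methods defined in the post:

// Classify a message, then let the user correct a wrong prediction,
// which feeds the on-device update path defined above.
func handle(message: String, userSaysSpam: Bool) {
    do {
        let (prediction, confidence) = try SpamClassifierModel.predictLabelFor(message)
        print("Predicted \(prediction ?? "unknown") with \(confidence)% confidence")

        let correctLabel: TextClassifier = userSaysSpam ? .spam : .notASpam
        if prediction != correctLabel.rawValue {
            try SpamClassifierModel.updateModel(newEntryText: message, spam: correctLabel)
        }
    } catch {
        print("Classification/update failed: \(error)")
    }
}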
1
0
481
Jul ’24
Can you match a new photo with existing images?
I'm looking for a solution to take a picture, or point the camera at a piece of clothing, and match that image against the images the user has stored in my app. I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures that are stored in the database, I don't think I can use pre-trained Core ML models. I would like the matching to be done on device if possible, instead of going to an external service; such a service would probably describe the item based on what the AI sees, but then I still couldn't match the item against the stored images in the app. Does anyone know if this is possible with frameworks such as Vision or VisionKit?
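One on-device option that fits this use case is Vision's image feature prints, which measure visual similarity between two images without a custom-trained model. A minimal sketch (assuming the stored Binary Data holds encoded image data):

import Vision

// Compute a feature print for an image held as Data (e.g. the Core Data blob).
func featurePrint(for imageData: Data) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(data: imageData, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Smaller distance means more similar images; rank the stored photos by this value.
func distance(between a: VNFeaturePrintObservation, and b: VNFeaturePrintObservation) throws -> Float {
    var value: Float = 0
    try a.computeDistance(&value, to: b)
    return value
}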
2
0
726
Jul ’24