Where can I find Create ML logs? I'd like to inspect any log lines that exist so I can diagnose what error the app hits when I provide training data for a multi-label image classifier and the UI only displays "Data Analysis stopped".
I do see some crash reports for "MLRecipeExecutionService" in the Console app which seem related, but I haven't spotted anything useful there yet.
Create ML
Create machine learning models for use in your app using Create ML.
Note: I posted this to Feedback Assistant but haven't gotten a response for 3 months =( FB13482199
I am trying to train a large image classifier. The training run covers ~300,000 images across 381 classes; each class has its own folder, and the file names within the folders are somewhat random. I am on an M2 Pro, Sonoma 14.0, running Create ML Version 5.0 (121.1). I would prefer not to pursue the PyTorch/Hugging Face -> coremltools route.
Create ML seems to consistently crash about 25,000-30,000 images into the feature extraction phase with "Unexpected Error". It does not appear to be an out-of-memory issue. I am looking for guidance, since right now it seems impossible to debug why this keeps crashing.
My initial assumption was that it could be due to blank or corrupt files, but I do not think that is the case. I also checked whether there were any special characters in the file or folder names; I wasn't able to go through everything, but some programmatic regex checks didn't turn up anything either.
I attached the sysdiagnose results in Feedback Assistant after the crash happened. When going through /var/logs I did notice a write issue saying that the Mac had written too much to disk. Note: I also tried Xcode 15.2 beta this time and the associated Create ML version.
My questions:
How can I fix this?
How should I go about debugging Create ML errors in the future?
"Unexpected Error": where can I find the exact Create ML logs on my device? This is far too broad an error message.
Please let me know. As a note, I did successfully train a past model on ~100,000 images, and I am planning to scale that by 10-15x if this run succeeds. Please help; I've spent a lot of time gathering the extra data and have been an occasional power user of Create ML. I haven't heard back from Apple since December =/. I assume I'm not the only one with this problem, so I'm looking for any instructions for hands-on debugging that could help others too. Thanks!
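In case it helps others hitting the same wall, here is a minimal pre-flight sketch (the dataset path is a hypothetical placeholder) that asks ImageIO to decode every file, so blank or corrupt images can be ruled out before handing the folder to Create ML:

import Foundation
import CoreGraphics
import ImageIO

// Hypothetical dataset root; point this at the real training-data folder.
let datasetRoot = URL(fileURLWithPath: "/path/to/TrainingData")

let enumerator = FileManager.default.enumerator(
    at: datasetRoot,
    includingPropertiesForKeys: [.isRegularFileKey],
    options: [.skipsHiddenFiles]
)

var suspectFiles: [URL] = []
while let fileURL = enumerator?.nextObject() as? URL {
    // Only look at regular files, not directories.
    guard (try? fileURL.resourceValues(forKeys: [.isRegularFileKey]))?.isRegularFile == true else { continue }

    // Flag a file if ImageIO can't open it or it decodes to a zero-sized image.
    guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
          CGImageSourceGetCount(source) > 0,
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
          image.width > 0, image.height > 0
    else {
        suspectFiles.append(fileURL)
        continue
    }
}

print("Suspect files: \(suspectFiles.count)")
suspectFiles.forEach { print($0.path) }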
For example: we use DockKit for birdwatching, so the field distance and direction are unknown.
Distance = ?
Direction = ?
For example, the observation is made from a fixed point, such as a rock. The task is to recognize the number of birds caught in the frame, draw a detection box, and collect statistics.
Question:
What is the maximum number of frames that can be processed with custom object recognition?
If that is not enough, can I do the calculations myself and hand them over to DockKit for fast movement?
I'm training an activity classifier with Create ML, and when I add samples to the Preview tab, the length the preview displays does not match the sample's actual length.
I have set the prediction window size to 15 and the sample rate to 10 Hz, so one window covers 1.5 seconds, which matches the roughly 1.5 second activity.
When I put a 1.49 second sample into the preview, it says it is 00:00.06 seconds,
and when I put a 12.91 second sample into the preview, it says it is 00:00.52 seconds.
Here is the code I am using to print out sensor data in CSV format:
// Requires `import CoreMotion`; `motionManager` is a CMMotionManager property
// and `startTime` is a Date? set when recording begins.
if motionManager.isDeviceMotionAvailable {
    // A 0.1 s update interval = 10 Hz, matching the classifier's sample rate.
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: .main) { data, error in
        guard let data = data, let startTime = self.startTime else { return }
        let timestamp = Date().timeIntervalSince(startTime)
        let xAcc = data.userAcceleration.x
        let yAcc = data.userAcceleration.y
        let zAcc = data.userAcceleration.z
        let xRotRate = data.rotationRate.x
        let yRotRate = data.rotationRate.y
        let zRotRate = data.rotationRate.z
        let roll = data.attitude.roll
        let pitch = data.attitude.pitch
        let yaw = data.attitude.yaw
        // One CSV row per sensor sample.
        let row = "\(timestamp),\(xAcc),\(yAcc),\(zAcc),\(xRotRate),\(yRotRate),\(zRotRate),\(roll),\(pitch),\(yaw)"
        print(row)
    }
}
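As a side note, printing a header line that mirrors the column order above (the column names are placeholders of my own, not something Create ML requires) makes the CSV easier to import with named columns; a minimal sketch:

// Hypothetical header row matching the values printed in each row above;
// print it once before starting the device-motion updates.
let header = "timestamp,xAcc,yAcc,zAcc,xRotRate,yRotRate,zRotRate,roll,pitch,yaw"
print(header)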
And here is the data for the 1.49 second sample mentioned above:
The What's New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned these new models, capabilities, and tools for iOS 18. So far, all I can find is API documentation. I don't see any other WWDC24 session covering these new time-series forecasting Create ML features.
Is there more substance/documentation on how to use these with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML.
Is there any food truck / donut shop demo or sample code like in the video?
It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory and ordering data.
In the WWDC24 What's New In Create ML session, at 6:03, the presenter introduced TimeSeriesClassifier as a new component of Create ML Components. Where are the documentation and code examples for this feature? My app captures accelerometer time-series data that I want to classify.
Thank you so much!
Is it possible to change the folder where the .blob files are stored when training a model?
I'm trying to train a model to track an object, and I'm limited to the extraordinary 256 GB of my 2024 Mac! :(
Hi everyone,
I'm curious about the capabilities of ARKit's object tracking feature. Specifically, I'd like to know:
Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?
Any insights or examples from your experiences would be greatly appreciated!
Thanks in advance.
I have a DataFrame with one column of features of type MLShapedArray and one column of annotations of type Int.
How can I convert them to the correct input type for TimeSeriesClassifier?
I have made a text classifier model but I want to train it on device too.
When a text is classified incorrectly, the user can update the model on device.
Code:
//
// SpamClassifierHelper.swift
// LearningML
//
// Created by Himan Dhawan on 7/1/24.
//
import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage
enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {
    // MARK: - Private Type Properties
    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    /// The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
                                                               in: .userDomainMask).first!

    /// The URL of the compiled model bundled with the app.
    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle

    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")

    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods
    static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }
        // Create a feature provider for the new text example.
        // Assumes the updatable model's input feature is named "text" and its target is named "label";
        // adjust the keys to match the model's actual training inputs.
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: ["text": MLFeatureValue(string: newEntryText),
                                                                           "label": MLFeatureValue(string: spam.rawValue)])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])
        let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL,
                                                withIntermediateDirectories: true,
                                                attributes: nil)
                // Save the updated model to a temporary location.
                try updatedModel.write(to: tempUpdatedModelURL)
                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL,
                                                  withItemAt: tempUpdatedModelURL)
                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }
        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }
        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
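For completeness, a minimal call-site sketch for the helper above (the message text and corrected label are hypothetical examples):

do {
    let message = "You won a free prize, click here!" // hypothetical example text
    let (label, confidence) = try SpamClassifierModel.predictLabelFor(message)
    print("Predicted \(label ?? "unknown") with \(confidence)% confidence")

    // If the user marks the prediction as wrong, feed the corrected label back
    // so the on-device model gets updated.
    try SpamClassifierModel.updateModel(newEntryText: message, spam: .spam)
} catch {
    print("Classification or update failed: \(error)")
}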
Hello everyone,
I am trying to train using Create ML Version 6.0 Beta (146.1) with the Image Feature Print v2 feature extractor.
I am using 100K images, ~4 GB in total, on my M3 Max with 48 GB (macOS 15.0 Beta (24A5279h)).
The images seem to be correctly read and visualized in the Data Source section (there don't appear to be any images with corrupted data).
When I start training, it's all fine for the first 6k-7k pictures, then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
TimeSeriesClassifier crashes when updating, as follows:
Hi everyone, I attempted to use the MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 dimensions in my example). I aim to obtain multi-dimensional output points (2 points in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I use CreateMLComponents.AnnotatedBatch or [CreateMLComponents.AnnotatedFeature<CoreML.MLShapedArray<Double>, CoreML.MLShapedArray<Double>>] as input.
let sourceMatrix: [[Double]] = [
[0,0.1,0.2,0.3],
[0.5,0.2,0.6,0.2]
]
let referenceMatrix: [[Double]] = [
[0.2,0.7],
[0.9,0.1]
]
Here is some test code for the function (iOS 18.0 beta, Xcode 16.0 beta).
In this example I train the model to learn 2 multi-dimensional points (4 dimensions each), and here are the prediction results:
▿ 2 elements
▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.20000000298023224 0.699999988079071
▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
▿ annotation : 0.2 0.7
▿ _storage : <StandardStorage<Double>: 0x600002b30600>
▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.23158159852027893 0.9509953260421753
▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
▿ annotation : 0.9 0.1
▿ _storage : <StandardStorage<Double>: 0x600002b55f20>
The second prediction, 0.23158159852027893 0.9509953260421753, looks essentially random and should be far closer to [0.9, 0.1].
Here is the test code (I run it on "My Mac (Designed for iPad)"):
ContentView.swift
import SwiftUI // needed for ContentView below
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents

func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
    return MLShapedArray<Double>(scalars: array, shape: shape)
}

func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]], referenceRGB: [[Double]], degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
    // One annotated feature per training point: a 4-dimensional input and a 2-dimensional target.
    let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
        let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
        let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
        return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
    }

    // Flatten sourceRGB and referenceRGB into single-dimensional arrays for the batch.
    var flattenedArray = sourceRGB.flatMap { $0 }
    let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])
    flattenedArray = referenceRGB.flatMap { $0 }
    let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])

    // Create AnnotatedFeature instances
    /* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
        AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
    ] */

    let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)

    var regressor = MultivariateLinearRegressor<Double>()
    regressor.configuration.learningRate = 0.1
    regressor.configuration.maximumIterationCount = 5000
    regressor.configuration.batchSize = 2

    let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
    // var model = try await regressor.fitted(to: annotatedFeatures2)

    // Proceed to prediction once the model is fitted.
    let predictions = try await model.prediction(from: annotatedFeatures2)

    // Process or use the predictions.
    print(predictions)
    print("Predictions:", predictions)
    return model
}

struct ContentView: View {
    var body: some View {
        VStack {}
            .onAppear {
                Task {
                    do {
                        let sourceMatrix: [[Double]] = [
                            [0, 0.1, 0.2, 0.3],
                            [0.5, 0.2, 0.6, 0.2]
                        ]
                        let referenceMatrix: [[Double]] = [
                            [0.2, 0.7],
                            [0.9, 0.1]
                        ]
                        let model = try await calculateTransformationMatrixWithNonlinearity(sourceRGB: sourceMatrix, referenceRGB: referenceMatrix, degree: 2)
                        print("Model fitted successfully:", model)
                    } catch {
                        print("Error:", error)
                    }
                }
            }
    }
}
I've created a text classification project and selected the BERT algorithm with 100 iterations for a JSON file. The JSON file is valid, but training always cancels on iteration 37…
Because the tool does not provide any cancellation reason, I have no clue why it happens. Can I check the reason somehow? Or does anyone know possible reasons or solutions for this?
How do I directly input landmarks to the activity classifier rather than inputting an image/video?
Hello,
I'm currently working on TinyML / ML on the edge using the Google Colab platform. Having exhausted my free compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of sharing connections at up to 30 GB/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary.
I’m looking for assistance or guidance on utilizing the iPad’s processor as an eGPU on my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
How do I use either of these data sources with MLHandActionClassifier on visionOS?
MLHandActionClassifier.DataSource.labeledKeypointsDataFrame
MLHandActionClassifier.DataSource.labeledKeypointsData
visionOS ARKit hand tracking provides 27 joints with 3D coordinates, which differs from the 21-joint, 2D coordinates that these two data sources mention in their documentation.
We can use the Create ML app to build an object tracking model in Xcode 16, but is it possible to use the Create ML framework as well?
I haven't found any documentation of Create ML object tracking yet. The latest documentation I can find is for Xcode 15:
https://developer.apple.com/documentation/CreateML?changes=latest_minor
Really appreciate the new object tracking feature, thank you Apple team.
I'm trying to use the Create ML Spatial template, but an unexpected error occurs within 1-3 minutes. I have tried several times with the same result. Is the Spatial template not available on an M1 Mac?
My development environment is
Apple M1 Pro
macOS: 15.0
Xcode: 16.0 beta
CreateML: 6.0 beta
I have created and trained a Hand Pose classifier model and am trying to test it. I noticed that in the WWDC21 session "Classify hand poses and actions with Create ML", the preview window has a prediction result that shows the prediction based on the live preview or imported images. Mine does not have that. When I try to import pictures or run the live test, there is no result; it's just the wireframe view with nothing under it.
How do I fix this please?
Thanks.