Hello,
I'm attempting to convert a TensorFlow model to Core ML using the coremltools package, but I'm encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (the Model Intermediate Language) when it tries to perform type inference:
AttributeError: 'float' object has no attribute 'astype'
Here is the relevant part of the error traceback:
  File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
    return input_var.val.astype(dtype=type_map[dtype_val])
I've tried converting a model from the yamnet-tensorflow2 repository, and the error occurs when coremltools tries to cast a float value during the conversion of certain operations. I'm using Python 3.10 and TensorFlow 2.x; the exact package versions are listed below.
Has anyone encountered a similar issue or can offer suggestions on how to resolve this?
I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.
Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any help or pointers would be greatly appreciated!
Posts under the Create ML tag
I am developing for Apple Vision Pro to implement object tracking functionality, but each model needs to go through Create ML for training, and the training time is very long. Are there other ways to shorten training time while still obtaining reference files in the same format?
Additionally, can the delay in object tracking be further optimized? Although the refresh rate has been optimized, there is still a noticeable delay.
Hello All,
I'm developing a machine learning model for image classification, which requires managing an exceptionally large dataset comprising over 18,000 classes. I've encountered several hurdles while using Create ML, and I would appreciate any insights or advice from those who have faced similar challenges.
Current Issues:
Create ML Failures with Large Datasets:
When using Create ML, the process often fails with errors such as "Failed to create CVPixelBufferPool." This issue appears when handling particularly large volumes of data.
Custom Implementation Struggles:
To bypass some of the limitations of Create ML, I've developed a custom solution leveraging the MLImageClassifier within the CreateML framework in my own SwiftUI macOS app.
Initially I hit errors similar to those in Create ML, but I discovered I could move beyond the "extracting features" stage without crashing by employing a workaround: using a timer to cancel and restart the job every 30 seconds. This is the only way I've been able to finish the extraction phase, even with large datasets, but it produces many errors in the console if I allow it to run for too long.
Lack of Progress Reporting:
Using MLJob<MLImageClassifier>, I've noticed that progress reporting stalls after the feature extraction phase. Although system resources indicate activity, there is no programmatic feedback on what is occurring.
Things I've Tried:
Data Validation: Ensured that all images in the dataset are valid and non-corrupted, which helps prevent unnecessary issues from faulty data.
Custom Implementation with CreateML Framework: Developed a custom solution using the MLImageClassifier within the CreateML framework to gain more control over the training process.
Timer-Based Workaround: Employed a workaround using a timer to cancel and restart the job every 30 seconds to move past the "extracting features" phase, allowing progress even with larger datasets (a rough sketch follows this list).
Monitoring System Resources: Observed ongoing system resource usage when process feedback stalled, confirming background processing activity despite the lack of progress reporting.
Subset Testing: Successfully created and tested a model on a subset of the data, which validated the approach worked for smaller datasets and could produce a functioning model.
Router Model Concept: Considered training multiple models for different subsets of data and implementing a "router" model to decide which specialized model to utilize based on input characteristics.
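For concreteness, the cancel-and-restart workaround looks roughly like this. This is a minimal sketch, not my production code: it assumes MLJob exposes cancel() and that resuming the same MLTrainingSession picks training back up from the session directory on disk; RestartingTrainer and its wiring are purely illustrative.

import CreateML
import Foundation

// Illustrative sketch of the 30-second cancel/restart workaround described above.
final class RestartingTrainer {
    private let session: MLTrainingSession<MLImageClassifier>
    private var job: MLJob<MLImageClassifier>?
    private var restartTimer: Timer?

    init(session: MLTrainingSession<MLImageClassifier>) {
        self.session = session
    }

    func start() {
        resumeJob()
        // Every 30 seconds, cancel the running job and resume it from the
        // session directory; this is what lets feature extraction eventually
        // finish on very large datasets.
        restartTimer = Timer.scheduledTimer(withTimeInterval: 30, repeats: true) { [weak self] _ in
            self?.job?.cancel()
            self?.resumeJob()
        }
    }

    private func resumeJob() {
        job = try? MLImageClassifier.resume(session)
        // Re-attach the Combine sinks (phase, checkpoints, result) here,
        // as in the full code pasted below.
    }
}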
What I Need Help With:
Handling Large Datasets:
I'm seeking strategies or best practices for effectively utilizing Create ML with large datasets.
Any guidance on memory management or alternative methodologies would be immensely helpful.
Improving Progress Reporting:
I'm looking for ways to obtain more consistent and programmatic progress updates during the training and testing phases.
I'm working on a Mac with an M1 Pro and 32 GB of RAM, and I'm fully integrated within the Apple ecosystem. I am very grateful for any advice or experiences you could share to help overcome these challenges.
Thank you!
I've pasted the relevant code below:
func go() {
    if self.trainingSession == nil {
        self.trainingSession = createTrainingSession()
    }
    if self.startTime == nil {
        self.startTime = Date()
    }

    // Resume the job from the existing training session.
    job = try! MLImageClassifier.resume(self.trainingSession)

    // Publish the current training phase to the UI.
    job.phase
        .receive(on: RunLoop.main)
        .sink { phase in
            self.phase = phase
        }
        .store(in: &cancellables)

    // Update state and progress on every checkpoint.
    job.checkpoints
        .receive(on: RunLoop.main)
        .sink { checkpoint in
            self.state = "\(checkpoint)\n\(self.job.progress)"
            self.progress = self.job.progress.fractionCompleted + 0.2
            self.updateTimeEstimates()
        }
        .store(in: &cancellables)

    // Save the trained classifier when the job finishes.
    job.result
        .receive(on: DispatchQueue.main)
        .sink(receiveCompletion: { completion in
            switch completion {
            case .failure(let error):
                print("Training Failed: \(error.localizedDescription)")
            case .finished:
                print("🎉🎉🎉🎉 TRAINING SESSION FINISHED!!!!")
                self.trainingFinished = true
            }
        }, receiveValue: { classifier in
            Task {
                await self.saveModel(classifier)
            }
        })
        .store(in: &cancellables)
}

private func createTrainingSession() -> MLTrainingSession<MLImageClassifier> {
    do {
        print("Initializing training Data...")
        let trainingData: MLImageClassifier.DataSource = .labeledDirectories(at: trainingDataURL)
        let modelParameters = MLImageClassifier.ModelParameters(
            validation: .split(strategy: .automatic),
            augmentation: self.augmentations,
            algorithm: .transferLearning(
                featureExtractor: .scenePrint(revision: 2),
                classifier: .logisticRegressor
            )
        )
        let sessionParameters = MLTrainingSessionParameters(
            sessionDirectory: self.sessionDirectoryURL,
            reportInterval: 1,
            checkpointInterval: 100,
            iterations: self.numberOfIterations
        )

        print("Initializing training session...")
        let trainingSession: MLTrainingSession<MLImageClassifier>
        if FileManager.default.fileExists(atPath: self.sessionDirectoryURL.path) && isSessionCreated(atPath: self.sessionDirectoryURL.path()) {
            // Restore a previously created session from the session directory.
            do {
                trainingSession = try MLImageClassifier.restoreTrainingSession(sessionParameters: sessionParameters)
            } catch {
                print("error resuming, exiting.... \(error.localizedDescription)")
                fatalError()
            }
        } else {
            // No existing session: create a new one.
            trainingSession = try MLImageClassifier.makeTrainingSession(
                trainingData: trainingData,
                parameters: modelParameters,
                sessionParameters: sessionParameters
            )
        }
        return trainingSession
    } catch {
        print("Failed to initialize training session: \(error.localizedDescription)")
        fatalError()
    }
}
Hi Apple Developer Community,
I’m exploring ways to fine-tune the SNSoundClassifier to allow users of my iOS app to personalize the model by adding custom sounds or adjusting predictions. While Apple’s WWDC session on sound classification explains how to train from scratch, I’m specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it.
Here are a few questions I have:
1. Fine-Tuning on SNSoundClassifier:
Is there a way to fine-tune this model programmatically through APIs? The manual approach using macOS, as shown in the documentation, is clear, but how can it be done dynamically, either within the app for users or in a cloud backend (AWS/iCloud)?
Are there APIs or classes that support such on-device/cloud-based fine-tuning or incremental learning? If not directly, can the classifier’s embeddings be used to train a lightweight custom layer?
Training is likely computationally intensive and would drain too much battery on device, so doing it in the cloud may be the right approach, but I need the right APIs to get this done. Sample code would help; a rough sketch of what I have in mind is after question 5 below.
2. Recommended Approach for In-App Model Customization:
If SNSoundClassifier doesn’t support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable?
Given these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which one would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to see how well the architecture and accuracy are retained after conversion to a Core ML model.
3. Cost-Effective Backend Setup for Training:
Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for a limited number of user requests. Are there better alternatives for deploying and training models on demand when a user uploads files (training data)?
4. TensorFlow vs PyTorch:
Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I’m also curious about PyTorch’s performance when converted to Core ML.
5. Metrics:
The metrics I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.
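For question 1, this is roughly the kind of macOS/cloud-side training run I have in mind. It is only a sketch: it trains a new custom classifier with the CreateML framework (which is macOS-only, so it would run on a Mac backend) rather than fine-tuning the built-in SNSoundClassifier, and the paths are placeholders.

import CreateML
import Foundation

// Assumed layout: one subfolder per sound class, each containing that
// user's uploaded audio files.
let trainingDirectory = URL(fileURLWithPath: "/path/to/user-uploaded-sounds")

// Train a custom sound classifier (this is not fine-tuning SNSoundClassifier;
// it builds a separate model).
let classifier = try MLSoundClassifier(
    trainingData: .labeledDirectories(at: trainingDirectory)
)

// Export a Core ML model that the iOS app can download and run with
// SoundAnalysis alongside the built-in classifier.
try classifier.write(to: URL(fileURLWithPath: "/path/to/CustomSounds.mlmodel"))

On device, the exported model could then be loaded into an SNClassifySoundRequest, while the built-in classifier keeps handling the stock classes.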
Any insights or recommended approaches would be greatly appreciated.
Thanks in advance!
From my early testing, it seems like object tracking works best for static objects. For example, if I am holding something in my hand, the object tracker is slow to update.
Is there anything that can be modified to decrease the tracking latency?
I noticed that the Enterprise APIs have some override features; is this something that can only be done using the Enterprise APIs?
I have created and trained a Hand Pose classifier model and am trying to test it. In the WWDC21 session "Classify hand poses and actions with Create ML," the preview window shows a prediction result based on the live preview or imported images. Mine does not. When I import pictures or run the live test, there is no result; it's just the wireframe view with nothing underneath it.
How do I fix this, please?
Thanks.
We can use the Create ML app to build an object tracking model in Xcode 16, but is it possible to use the CreateML framework as well?
I haven't found any documentation for Create ML object tracking yet. The latest documentation I can find is for Xcode 15.
https://developer.apple.com/documentation/CreateML?changes=latest_minor
I really appreciate the new object tracking feature; thank you, Apple team.
I've created a text classification project and selected the BERT algorithm with 100 iterations for a JSON file. The JSON file is valid, but training always cancels at iteration 37…
Because the tool does not provide any cancellation reason, I have no clue why this happens. Can I check the reason somehow? Or does anyone know possible reasons or solutions for this?
Hi everyone, I attempted to use the MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 dimensions in my example). I aim to obtain multi-dimensional output points (2 points in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I use CreateMLComponents.AnnotatedBatch or [CreateMLComponents.AnnotatedFeature<CoreML.MLShapedArray<Double>, CoreML.MLShapedArray<Double>>] as input.
let sourceMatrix: [[Double]] = [
[0,0.1,0.2,0.3],
[0.5,0.2,0.6,0.2]
]
let referenceMatrix: [[Double]] = [
[0.2,0.7],
[0.9,0.1]
]
Here is some test code to exercise the function (iOS 18.0 beta, Xcode 16.0 beta).
In this example I train the model to learn 2 multi-dimensional points (4 dimensions), and here are the results of the predictions:
▿ 2 elements
  ▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
    ▿ prediction : 0.20000000298023224 0.699999988079071
      ▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
    ▿ annotation : 0.2 0.7
      ▿ _storage : <StandardStorage<Double>: 0x600002b30600>
  ▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
    ▿ prediction : 0.23158159852027893 0.9509953260421753
      ▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
    ▿ annotation : 0.9 0.1
      ▿ _storage : <StandardStorage<Double>: 0x600002b55f20>
0.23158159852027893 0.9509953260421753 is essentially random and should be far closer to [0.9, 0.1].
Here is the test code (I run it on "My Mac (Designed for iPad)"):
ContentView.swift
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents
import SwiftUI // needed for View/ContentView

func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
    return MLShapedArray<Double>(scalars: array, shape: shape)
}

func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]], referenceRGB: [[Double]], degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
    // Build one AnnotatedFeature per (source, reference) pair.
    let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
        let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
        let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
        return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
    }

    // Flatten the source data into a single [2, 4] shaped array.
    var flattenedArray = sourceRGB.flatMap { $0 }
    let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])
    flattenedArray = referenceRGB.flatMap { $0 }
    let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])

    // Create AnnotatedFeature instances
    /* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
        AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
    ] */

    let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)

    var regressor = MultivariateLinearRegressor<Double>()
    regressor.configuration.learningRate = 0.1
    regressor.configuration.maximumIterationCount = 5000
    regressor.configuration.batchSize = 2

    let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
    // var model = try await regressor.fitted(to: annotatedFeatures2)

    // Proceed to prediction once the model is fitted
    let predictions = try await model.prediction(from: annotatedFeatures2)

    // Process or use the predictions
    print(predictions)
    print("Predictions:", predictions)
    return model
}

struct ContentView: View {
    var body: some View {
        VStack {}
            .onAppear {
                Task {
                    do {
                        let sourceMatrix: [[Double]] = [
                            [0, 0.1, 0.2, 0.3],
                            [0.5, 0.2, 0.6, 0.2]
                        ]
                        let referenceMatrix: [[Double]] = [
                            [0.2, 0.7],
                            [0.9, 0.1]
                        ]
                        let model = try await calculateTransformationMatrixWithNonlinearity(
                            sourceRGB: sourceMatrix,
                            referenceRGB: referenceMatrix,
                            degree: 2
                        )
                        print("Model fitted successfully:", model)
                    } catch {
                        print("Error:", error)
                    }
                }
            }
    }
}
Hello everyone,
I am trying to train using Create ML Version 6.0 Beta (146.1) with the Image Feature Print v2 feature extractor.
I am using 100K images, about 4 GB in total, on my M3 Max with 48 GB of RAM (macOS 15.0 Beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (no images with corrupted data appear to be there).
When I start the training, everything is fine for the first 6k to 7k pictures, then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't really have much experience.
Could you help me understand what the problem could be?
Thanks a lot.
I tried to train on some of my USDZs.
For some of them, the Create ML viewport shows the object's dimensions and enables the Train button.
Many others, including the official USDZ for the iPhone 15 Pro (iphone_15_pro_blue_titanium_5G.usdz), do not show the dimensions and keep [Train] disabled.
What do I have to author inside the USDZ to make it work?
There are no lights or cameras in the asset file, as recommended in the release notes, and no animations either.
I'm training an object tracking model in Create ML for visionOS for an object that has fan blades on it, and I'm trying to train while ignoring a section of the geometry. Currently, the model can detect the object when the fan blades are in the same orientation as when it was scanned, but it struggles once they move.
I see there is an "objects to avoid" data source that can be added, but from its description I don't think it does what I need, though I could be wrong. Is there any way to have the training ignore a part of the geometry that has a significant effect on the silhouette of the object?
I have created a Hand Action Classification project following the guidelines, but the Create ML tool reports a very cryptic "Unexpected Error". The feature extraction phase is fine; the error occurs during model training, after the tool reports completion of the first five training iterations and as it moves on to report the next ten.
The project is small, with 3 training classes and 346 items.
I have tried varying the frame rate and action duration, with all augmentations unset, but the error still persists.
Can you please confirm how I may get further error diagnostic information so that I can determine why Create ML is unable to work with my training data?
macOS is Sonoma 14.5 on an iMac 24-inch, M1, 2021. Create ML is Version 5.0 (121.4).
Trouble with Core ML Object Tracking for Spherical Objects Using WWDC Sample Code and Object Capture
Hi everyone,
I'm working with Core ML for object tracking and have successfully trained a couple of models. However, when I try to use the reference object file in the object tracking sample code from the WWDC video, it doesn't work.
I'm training the model on a single-color plastic spherical object. Could this be the reason for the issue? I also attempted to use USDZ 3D assets that resemble the real object. Do these need to be created with the Object Capture app to work properly?
Speaking of the Object Capture app, my experience hasn't been great. When I scanned my spherical object, the result was far from a sphere; it looked more like mushy dough.
Any insights or suggestions would be greatly appreciated!
Hi everyone,
I was wondering how accurate the hand pose classification ML is. For example, is it possible to distinguish the different letters of the sign language alphabet, or is it only capable of recognizing simple poses like a thumbs-up?
When I wanted to load the Reality Composer Pro scene containing object tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this alone is not enough. We need to add some configuration that enables object tracking for the RealityView. What do we need to add?
Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't understand much of it.
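From what I understand of that session, the missing pieces are an ARKitSession running an ObjectTrackingProvider, with anchorUpdates driving the entity's transform. Below is an unverified sketch of how I think it fits together; the .referenceobject file name and the entity wiring are assumptions on my part, not code from the sample project.

import ARKit
import RealityKit
import RealityKitContent
import SwiftUI

struct ObjectTrackingView: View {
    @State private var session = ARKitSession()
    @State private var trackedEntity = Entity()

    var body: some View {
        RealityView { content in
            // Attach the Reality Composer Pro scene to an entity we can move.
            if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                trackedEntity.addChild(model)
            }
            content.add(trackedEntity)
        }
        .task {
            do {
                // Assumption: a reference object exported from Create ML is bundled with the app.
                guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
                let referenceObject = try await ReferenceObject(from: url)
                let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
                try await session.run([provider])
                // Follow the tracked object by copying its transform onto our entity.
                for await update in provider.anchorUpdates {
                    trackedEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                    trackedEntity.isEnabled = update.anchor.isTracked
                }
            } catch {
                print("Object tracking failed: \(error)")
            }
        }
    }
}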
In the WWDC24 session "What's New in Create ML," at 6:03 the presenter introduced TimeSeriesClassifier as a new component of Create ML Components. Where are the documentation and code examples for this feature? My app captures accelerometer time-series data that I want to classify.
Thank you so much!
Hey,
In the "Explore object tracking for visionOS" session we explore how a Globe can be tracked, and objects can be anchored to various positions. My question is if the physical Globe is rotated, will the anchored objects also respond to this in real-time?
I would like to overlap a virtual map on top of a physical globe, so when the user rotates the physical globe, the virtual map also seamlessly responds. Is this possible using Object Tracking?
Thanks
When I try to train an object tracking model in Create ML, the progress bar stays stuck at 0% for about a minute, and then training cancels without an additional error message. I've tried multiple different USDZ files and two different Apple silicon Macs. Is anyone else experiencing this issue?
Thank you!
The "What's New in Create ML" session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned new models, capabilities, and tools for iOS 18. So far, all I can find is API documentation; I don't see any other WWDC24 session covering these new time-series forecasting Create ML features.
Is there more substance/documentation on how to use these with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML.
Is there any sample code like the food truck / donut shop demo in the video?
It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory/ordering data.