Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and explore the possibilities for your app.


jax.lax.conv_transpose not correctly implemented
Good evening! I tried to use Flax nn.ConvTranspose, which calls jax.lax.conv_transpose, but it doesn't appear to be implemented correctly for the METAL backend; the same code works fine on CPU.

  File "/Users/cemlyn/Documents/VCLless/mnist_vae/venv/lib/python3.11/site-packages/flax/linen/linear.py", line 768, in __call__
    y = lax.conv_transpose(
        ^^^^^^^^^^^^^^^^^^^
jaxlib.xla_extension.XlaRuntimeError: UNKNOWN: <unknown>:0: error: type of return operand 0 ('tensor<1x8x8x64xf32>') doesn't match function result type ('tensor<1x14x14x64xf32>') in function @main
<unknown>:0: note: see current operation: "func.return"(%0) : (tensor<1x8x8x64xf32>) -> ()

Versions (pip list | grep jax):
jax 0.4.11
jax-metal 0.0.4
jaxlib 0.4.11
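A minimal reproduction sketch of the failure mode described above, assuming a 7×7 input upsampled with stride 2 (the shapes are inferred from the 8×8 vs. 14×14 mismatch in the error; the poster's actual model code isn't shown):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Decoder(nn.Module):
    @nn.compact
    def __call__(self, x):
        # Upsample 7x7 -> 14x14; the post reports the METAL backend returning
        # an 8x8 result where a 14x14 result is expected.
        return nn.ConvTranspose(features=64, kernel_size=(3, 3),
                                strides=(2, 2), padding="SAME")(x)

x = jnp.ones((1, 7, 7, 32))             # NHWC input; the channel count is a guess
params = Decoder().init(jax.random.PRNGKey(0), x)
y = Decoder().apply(params, x)
print(y.shape)                          # (1, 14, 14, 64) on CPU
```

On the CPU backend this prints (1, 14, 14, 64); with jax-metal 0.0.4 installed, the same call reportedly fails with the shape-mismatch error quoted above.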
2 replies · 0 boosts · 572 views · Oct ’23
Question: Will TensorFlow-Metal and JAX-Metal code be open sourced?
Will TensorFlow-Metal and JAX-Metal code be open sourced?

Reasons why I ask: if it were open sourced on GitHub (or similar), it might make it easier for people to find issues and file new ones when necessary, and the open-source community might be able to help ;) I'd also love to learn how you guys implement some of these operations :P (I know you guys made an Apple tutorial on how to implement a custom TensorFlow op for Metal, which was fire: https://developer.apple.com/documentation/metal/metal_sample_code_library/customizing_a_tensorflow_operation)
0 replies · 1 boost · 397 views · Oct ’23
Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load, and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce: the following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Other relevant versions: PyTorch 2.0

Additional context: this happens for both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use the ANE on iOS 16. If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
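For reference, a self-contained sketch of the export path described in the post, using a stand-in torchvision ResNet-18 rather than the poster's own network (the model, input size, class names, and output filename are assumptions):

```python
import torch
import torchvision
import coremltools as ct

# Stand-in for the poster's PyTorch computer vision model.
model = torchvision.models.resnet18(weights=None).eval()
example = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example)

image_input = ct.ImageType(name="image", shape=example.shape)
class_names = [str(i) for i in range(1000)]  # placeholder labels

# Same structure as the conversion call in the post; per the post, switching
# convert_to to "mlprogram" made no difference to ANE usage on iOS 17.
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    compute_units=ct.ComputeUnit.ALL,
)
coreml_model.save("VisionClassifier.mlmodel")
```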
0 replies · 1 boost · 1.2k views · Oct ’23
Cannot train model with TensorFlow on M2 Mac after upgrading to Sonoma 14.0
I can train a YOLOv3 on an M2 Mac running Ventura with tensorflow-macos==2.9.0 and tensorflow-metal==0.5. But after upgrading the system to Sonoma 14.0 I can no longer train the model; it fails with the errors below. On an M1 I can still train after upgrading to Sonoma 14.0, although it reports the 'anec.gain_offset_control' op error; on M1 there is no error for the final `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DGradientWithWeightsA14. Compute function exceeds available temporary registers: (null)` failure.

When I change my optimizer from Adam to SGD, the 'anec.gain_offset_control' op error disappears, so this error seems to be caused by something in Adam. But I cannot resolve the `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DGradientWithWeightsA14. Compute function exceeds available temporary registers: (null)` failure.

Error info (the first message is repeated several times):

loc("mps_select"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/75428952-3aa4-11ee-8b65-46d450270006/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":294:0)): error: 'anec.gain_offset_control' op result #0 must be 4D/5D memref of 16-bit float or 8-bit signed integer or 8-bit unsigned integer values, but got 'memref<1x1x1x1xi1>'

/AppleInternal/Library/BuildRoots/90c9c1ae-37b6-11ee-a991-46d450270006/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:550: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DGradientWithWeightsA14. Compute function exceeds available temporary registers: (null)
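A minimal sketch of the optimizer swap mentioned above (Adam → SGD), which the post says makes the 'anec.gain_offset_control' error disappear. The toy model and data are placeholders, not the poster's YOLOv3 setup:

```python
import tensorflow as tf

# Tiny stand-in model; the point is only the optimizer choice.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), ...)  # reportedly triggers the ANE error on Sonoma
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = tf.random.normal((8, 32, 32, 3))
y = tf.random.uniform((8,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1)
```

This only addresses the first error; per the post, the MPSKernel "temporary registers" assertion during the convolution gradient remains unresolved.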
0 replies · 1 boost · 720 views · Oct ’23
preferredMetalDevice shows null for MLBoostedTreeRegressor
I had code that ran 7x faster on Ventura compared to how it runs now on Sonoma. For the basic model training I used

let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)

which took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma.

I have tried to investigate why, and have noticed that when I use it I see these results:

useWatchSPIForScribble: NO, allowLowPrecisionAccumulationOnGPU: NO, allowBackgroundGPUComputeSetting: NO, preferredMetalDevice: (null), enableTestVectorMode: NO, parameters: (null), rootModelURL: (null), profilingOptions: 0, usePreloadedKey: NO, trainWithMLCompute: NO, parentModelName: , modelName: Unnamed_Model, experimentalMLE5EngineUsage: Enable, preparesLazily: NO, predictionConcurrencyHint: 0

Why is the preferred Metal device null? If I do

let devices = MTLCopyAllDevices()
for device in devices {
    config.preferredMetalDevice = device
    print(device.name)
}

I can see that the M1 chipset is available but not selected (from reading the literature the default should be nil?). Is this the reason why it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
6 replies · 0 boosts · 948 views · Oct ’23
Recognizing Speech in Live Audio Sample Project
I'm developing a project where I want to transcribe live speech from the user on iOS devices. I wanted to test out the Speech framework by downloading the sample code from https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio. I'm using Xcode 15 and running it on an iPad with iOS 17 installed. I run the app and manage to approve the permissions to use the microphone and live speech transcription, but as soon as I press 'Start Recording', I get the following error in Xcode, and nothing happens on the iPad screen:

+[SFUtilities issueReadSandboxExtensionForFilePath:error:] issueReadSandboxExtensionForFilePath:error:: Inaccessible file (/var/mobile/Containers/Data/Application/1F1AB092-95F2-4E5F-A369-475E15114F26/Library/Caches/Vocab) : error=Error Domain=kAFAssistantErrorDomain Code=203 "Failed to access path: /var/mobile/Containers/Data/Application/1F1AB092-95F2-4E5F-A369-475E15114F26/Library/Caches/Vocab method:issueReadSandboxExtensionForFilePath:error:" UserInfo={NSLocalizedDescription=Failed to access path: /var/mobile/Containers/Data/Application/1F1AB092-95F2-4E5F-A369-475E15114F26/Library/Caches/Vocab method:issueReadSandboxExtensionForFilePath:error:}

Can someone guide me in the right direction to fix this?
1 reply · 0 boosts · 549 views · Oct ’23
Is there any way to make the model run on GPU / Neural Engine?
Hi folks, I'm working on converting a GPT-2 model to Core ML with KV caching enabled. I have the GPT-2 model running on the GPU with a static input shape. It seems that once I enable a flexible shape (i.e. either a range shape or an enumerated shape), the model runs on the CPU according to the performance report. I can see new operators being added (get_shape and general_slice) that are not supported by the GPU / ANE. Is there any way to get around this so the model runs on the GPU / ANE? How does the machine decide whether to run the model on the GPU / Neural Engine? Thanks!
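A sketch of how an enumerated input shape can be declared at conversion time, which is presumably the "flexible shape" path the post refers to. The tiny stand-in module, input name, and shape list are assumptions, not the poster's GPT-2 model:

```python
import numpy as np
import torch
import coremltools as ct

# Tiny stand-in for a traced GPT-2: embedding + linear head over token ids.
class TinyLM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(100, 16)
        self.head = torch.nn.Linear(16, 100)

    def forward(self, ids):
        return self.head(self.emb(ids))

traced = torch.jit.trace(TinyLM().eval(), torch.zeros(1, 64, dtype=torch.long))

# Enumerated sequence lengths instead of a fully static shape.
seq_shapes = ct.EnumeratedShapes(shapes=[[1, 64], [1, 128], [1, 256]],
                                 default=[1, 64])
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=seq_shapes, dtype=np.int32)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,
)
```

Per the post, once a flexible shape like this is used, the Xcode performance report shows the model falling back to the CPU because of the inserted get_shape / general_slice operators.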
1 reply · 0 boosts · 862 views · Oct ’23
Unsuccessful import of TensorFlow
Hi. I have followed the instructions here to install TensorFlow with GPU support for my 16-inch 2019 Intel MacBook Pro (with AMD graphics). The installation process seems to be successful (I get no errors), but when I try to test it by running import tensorflow as tf I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/__init__.py", line 445, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow16TensorShapeProtoC1ERKS0_
  Referenced from: <C62E0AB4-567E-3E14-8F96-9F07A746C4DC> /Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <0B1F231A-6766-3F61-81D9-6782129807A9> /Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so

My environment's packages (excerpt):
numpy 1.26.1
tensorboard 2.14.1
tensorboard-data-server 0.7.1
tensorflow 2.14.0
tensorflow-estimator 2.14.0
tensorflow-io-gcs-filesystem 0.34.0
tensorflow-metal 1.0.0
4 replies · 1 boost · 839 views · Oct ’23
Why is it that every operator is supported by the ANE but the model still runs on the GPU?
Hi everyone, does anyone know how the device decides which compute unit (GPU, CPU, or ANE) to use when the compute units are set to ALL? I'm working on optimizing a GPT-2 model to run on the ANE. I ran the performance report for the existing model and the report showed operators not supported by the ANE. I then removed these operators and converted the model to Core ML again. This time the performance report showed that every operator is supported by the ANE, but the device still prefers the GPU when the compute units are set to ALL, and prefers the CPU when the compute units are set to CPU and ANE. Does anyone know why? Thank you in advance!
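Not an answer to the scheduling question, but a small sketch of how prediction latency can be compared across compute-unit settings from Python; the model path and input name are assumptions, not from the post:

```python
import time
import numpy as np
import coremltools as ct

input_ids = np.zeros((1, 64), dtype=np.int32)   # placeholder input

for units in (ct.ComputeUnit.ALL,
              ct.ComputeUnit.CPU_AND_NE,
              ct.ComputeUnit.CPU_AND_GPU):
    model = ct.models.MLModel("gpt2_ane.mlpackage", compute_units=units)
    start = time.perf_counter()
    model.predict({"input_ids": input_ids})     # predict() requires macOS
    print(units, f"{time.perf_counter() - start:.3f}s")
```

Comparing CPU_AND_NE against ALL this way at least shows whether the scheduler's GPU preference actually costs anything for a given model.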
0 replies · 0 boosts · 686 views · Oct ’23
!! Assistance needed: Create ML - Scan file with multiple tables on it
Objective: I am in the process of developing an application that utilizes machine learning (Core ML) to interact with photographs of documents, specifically focusing on those containing tables.

Step 1: Capturing the Image
The application will initiate by allowing users to take photos of documents. The key here is not just any part of the document, but specifically the sections where tables are present.

Step 2: Image Analysis through Machine Learning
Upon capturing the image, the next phase involves a machine learning model. Using Apple's Create ML tool with Swift, the application will analyze the image. The model's task is two-fold:
Identifying the table: distinguish the table from other document information, ensuring it recognizes and isolates the table structure within the photograph.
Ignoring irrelevant information: concurrently, the model will disregard all non-table content, focusing the application's resources on the table data.

Step 3: Data Extraction and Training
Once the table is identified, the real work begins. The application will engage in detailed scrutiny, where it's trained to understand and recognize row and column data based on specific datasets. This training will enable the application to 'read' the table accurately, much like a human would, by identifying the organization of information into rows and columns.

Step 4: Information Storage
Post-analysis, the application will extract this critical data, storing it in a structured format. Each piece of identifiable information from the rows and columns will be systematically organized into a Dictionary or an Object. This structure is not just for immediate use but also efficient for future data operations within the app.

Conclusion: Through these sequential steps, the application transitions from merely capturing an image to intelligently recognizing, deciphering, and storing table data from within a physical document. This streamlined process is all courtesy of integrating machine learning into the app's functionality, promising significant efficiency and accuracy in data handling.

Realistically, I have not found any good examples out there, so I am attempting to create my own ML model (with no experience 😅), so any guidance or help would be very much appreciated.
1 reply · 1 boost · 655 views · Oct ’23
update already saved model
I followed the video "Composing advanced models with Create ML Components". I created the model with:

let urlParameters = URL(fileURLWithPath: "/path/to/model.pkg")
let (training, validation) = dataFrame.randomSplit(by: 0.8)
let model = try await transformer.fitted(to: DataFrame(training), validateOn: DataFrame(validation)) { event in
    guard let tAccuracy = event.metrics[.trainingAccuracy] as? Double else { return }
    print(tAccuracy)
}
try transformer.write(model, to: urlParameters)
print("done")

The next goal is to read the model and update it with a new DataFrame:

let urlCSV = URL(fileURLWithPath: "path/to/newData.csv")
var model = try transformer.read(from: urlParameters)        // load the created model
let newDataFrame = try DataFrame(contentsOfCSVFile: urlCSV)   // new DataFrame with features and annotations
try await transformer.update(&model, with: newDataFrame)      // I want to keep the previously learned data and update the model with the new data
try transformer.write(model, to: urlParameters)               // the model saves, but only the last added DataFrame is kept; the previous one is simply replaced

But it looks like I only replace the old data with the new data.

The question: how can I add new data to the model I created without losing the old data?
2 replies · 0 boosts · 698 views · Nov ’23
Memory „Leak“ when using cpu+gpu
My app allows the user to select different Stable Diffusion models, and I noticed a very strange issue concerning memory management. When using the StableDiffusionPipeline (https://github.com/apple/ml-stable-diffusion) with cpu+gpu, around 1.5 GB of memory is not properly released after generateImages is called and the pipeline is released. When generating more images with a new StableDiffusionPipeline object, memory is reused and stays stable at around 1.5 GB after inference is complete. Everything, especially the MLModels, is released properly. My guess is that MLModel creates a persistent cache.

Here is the problem: when using a different MLModel afterwards, another 1.5 GB is not released and stays resident. Using a third model, this totals 4.5 GB of unreleased, persistent memory.

At first I thought this would be a bug in the StableDiffusionPipeline, but I was able to reproduce the behaviour in a very minimal Objective-C sample without ARC:

MLArrayBatchProvider *batchProvider = [[MLArrayBatchProvider alloc] initWithFeatureProviderArray:@[<VALID FEATURE PROVIDER>]];
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndGPU;
MLModel *model = [[MLModel modelWithContentsOfURL:[NSURL fileURLWithPath:<VALID PATH TO .mlmodelc SD 1.5 FILE>] configuration:config error:&error] retain];
id<MLBatchProvider> returnProvider = [model predictionsFromBatch:batchProvider error:&error];
[model release];
[config release];
[batchProvider release];

After running this minimal code, 1.5 GB of persistent memory is present that is not released during the lifetime of the app. This only happens on macOS 14(.1) Sonoma and on iOS 17(.1), not on macOS 13 Ventura. On Ventura, everything works as expected and the memory is released when predictionsFromBatch: is done and the model is released.

Some observations:
This only happens using cpu+gpu, not cpu+ane (since that memory is allocated out of process) and not using cpu-only.
It does not matter which Stable Diffusion model is used; I tried custom SD-derived models as well as the Apple-provided SD 1.5 models.
I reproduced the issue on a MacBook Pro 16" M1 Max with macOS 14.1, an iPhone 12 mini with iOS 17.0.3, and an iPad Pro M2 with iPadOS 17.1.
The memory that "leaks" consists mostly of huge malloc blocks of 100–500 MB in size OR IOSurfaces.
This memory is allocated during predictionsFromBatch:, not while loading the model.
Loading and unloading a model does not leak memory; only when predictionsFromBatch: is called is the huge memory chunk allocated, and it is never freed during the lifetime of the app.

Does anybody have any clue what is going on? I highly suspect that I am missing something crucial, but my colleagues and I have looked everywhere trying to find a way of releasing this leaked/cached memory.
2 replies · 0 boosts · 924 views · Nov ’23
Issues with installing Tensorflow on M1 MacBook Pro
I have been following the instructions here: https://developer.apple.com/metal/tensorflow-plugin/. I manage to execute step 1 (set up the environment) and step 2 (install base TensorFlow), but when I try to execute step 3 (install the tensorflow-metal plug-in) with the line python -m pip install tensorflow-metal, I get the following messages:

ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal

What am I missing here? The commands used are as follows:

Step 1
python3 -m venv ~/venv-metal
source ~/venv-metal/bin/activate
python -m pip install -U pip

Step 2
python -m pip install tensorflow

Step 3
python -m pip install tensorflow-metal
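A hedged diagnostic sketch, based on the assumption (not confirmed by the post) that pip reports "no matching distribution" when no tensorflow-metal wheel matches the interpreter's platform or Python version. It only prints what pip matches wheels against:

```python
import platform
import sys

print(sys.version)           # Python version inside ~/venv-metal
print(platform.machine())    # expect "arm64" on Apple silicon; "x86_64" suggests a Rosetta/Intel Python
print(platform.platform())   # full platform string pip derives wheel tags from
```

If the interpreter turns out not to be a native arm64 build, recreating the virtual environment with a natively installed Python would be the first thing to try.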
2 replies · 0 boosts · 630 views · Nov ’23
M1 GPU python process stopped?
I've been running TensorFlow with Python 3.9 to train a CNN model, and the process is accelerated by the GPU. After 80 epochs the process went to sleep (status S) and its GPU usage dropped to 0 percent. I am wondering whether this training process crashed the GPU, or whether the OS is forcing the process to sleep because it takes up too much GPU time? Thanks a lot!
1 reply · 0 boosts · 663 views · Nov ’23