Why is it taking so long to download Apple Intelligence on my iPhone 15 Pro Max?
How long should the download take?
I'm on 5G, and it has been downloading for 4 weeks now.
I was working on my project, and when I tried to train a model the kernel crashed. I restarted the kernel and tried again, but I got the same crash. I then read a thread about the same issue, where Apple support suggested installing tensorflow-macos and tensorflow-metal and following the guide at:
https://developer.apple.com/metal/tensorflow-plugin/
I did so and tried every single thing, but when I ran the test code provided on that page, I got the same error. Here's the code and the output.
Code:
import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
and here's the output:
Epoch 1/5
The Kernel crashed while executing code in the current cell or a previous cell.
Please review the code in the cell(s) to identify a possible cause of the failure.
Click here for more info.
View Jupyter log for further details.
And here's part of the log file (the full log wasn't captured):
metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1
2024-10-06 23:30:49.894405: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 8.00 GB
2024-10-06 23:30:49.894420: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 2.67 GB
2024-10-06 23:30:49.894444: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-10-06 23:30:49.894460: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
2024-10-06 23:30:56.701461: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.
[libprotobuf FATAL google/protobuf/message_lite.cc:353] CHECK failed: target + size == res:
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: target + size == res:
Please respond to this post as soon as possible, as I am working on my project right now and keep getting this error again and again.
Device: Apple MacBook Air M1.
Why does Apple sell the iPhone 16 with AI features that won't work in Europe? You don't sell a car without an engine. Wouldn't it have been better to wait before bringing it to Europe?
I don't see the Apple Intelligence tab, even with iOS 18.1, language and region set to United States, and Siri in English (US). What can I do?
We are experiencing a major issue with the native .version1 classifier of the SoundAnalysis framework in iOS 18, which has left all of our users without recordings. Our core feature relies heavily on sound analysis in the background, and it previously worked flawlessly in prior iOS versions. However, in iOS 18, sound analysis stops working in the background and triggers a critical warning.
Details of the issue:
We are using SoundAnalysis to analyze background sounds and have enabled the necessary background permissions.
We are using the latest Xcode.
A warning now appears, and sound analysis fails in the background. Below is the warning message we are encountering:
Warning Message:
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background)
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
We urgently need guidance or a fix for this, as our application’s main functionality is severely impacted by this background permission error. Please let us know the next steps or if this is a known issue with iOS 18.
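One mitigation we have considered, in case it's relevant: if a custom Core ML sound classifier is an option (this does not apply to the built-in .version1 classifier), forcing CPU-only compute should avoid submitting any GPU work from the background. A rough, untested sketch; "SoundClassifier" is a hypothetical bundled model name:

import CoreML
import SoundAnalysis

// Sketch: build a sound-classification request whose model runs on the CPU only,
// so no GPU command buffers are submitted while the app is in the background.
// "SoundClassifier" is a hypothetical compiled model bundled with the app.
func makeBackgroundSafeRequest() throws -> SNClassifySoundRequest {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuOnly   // avoid GPU (and ANE) dispatch

    let modelURL = Bundle.main.url(forResource: "SoundClassifier",
                                   withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    return try SNClassifySoundRequest(mlModel: model)
}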
I'm trying to generate a JSON file for my training data. I tried writing it manually first and then tried using Roboflow, and I still get the same error:
_annotations.createml.json file contains field "Index 0" that is not of type String.
The JSON format provided by Roboflow was:
[
  {
    "image": "menu1_jpg.rf.44dfacc93487d5049ed82952b44c81f7.jpg",
    "annotations": [
      {
        "label": "100",
        "coordinates": { "x": 497, "y": 431.5, "width": 32, "height": 10 }
      }
    ]
  }
]
Any help would be greatly appreciated.
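In case it helps to see how I'm interpreting the expected format, here's a minimal Swift sketch (my own modeling of the structure shown above, not an official schema) that regenerates the annotations file with Codable:

import Foundation

// My reading of the Create ML object-detection annotation format:
// a top-level JSON array of { image, annotations } objects.
struct AnnotationEntry: Codable {
    let image: String
    let annotations: [Annotation]

    struct Annotation: Codable {
        let label: String
        let coordinates: Coordinates
    }

    struct Coordinates: Codable {
        let x: Double
        let y: Double
        let width: Double
        let height: Double
    }
}

let entries = [
    AnnotationEntry(
        image: "menu1_jpg.rf.44dfacc93487d5049ed82952b44c81f7.jpg",
        annotations: [
            .init(label: "100",
                  coordinates: .init(x: 497, y: 431.5, width: 32, height: 10))
        ])
]

do {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted]
    let data = try encoder.encode(entries)
    try data.write(to: URL(fileURLWithPath: "_annotations.createml.json"))
} catch {
    print("Failed to write annotations: \(error)")
}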
Hey guys,
If your download is still stuck at 99% even after switching Wi-Fi on and off, I believe I have a fix.
Turn your cellular data completely off. This forces the device to use only Wi-Fi.
Then the download will restart and actually complete.
I can use BLAS and LAPACK functions via the Accelerate framework to perform vector and matrix arithmetic and linear algebra calculations. But do these functions take advantage of Apple Silicon features?
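For context, here's a minimal Swift sketch of the kind of call I mean: a plain double-precision matrix multiply (GEMM) through the CBLAS interface that Accelerate provides.

import Accelerate

// Multiply two small row-major matrices with plain CBLAS (double-precision GEMM).
let m = 2, n = 2, k = 2
let a: [Double] = [1, 2,
                   3, 4]                          // m x k
let b: [Double] = [5, 6,
                   7, 8]                          // k x n
var c = [Double](repeating: 0, count: m * n)      // m x n result

cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            Int32(m), Int32(n), Int32(k),
            1.0, a, Int32(k),
            b, Int32(n),
            0.0, &c, Int32(n))

print(c)   // [19.0, 22.0, 43.0, 50.0]

What I'd like to understand is whether a call like this is already routed to Apple Silicon-specific hardware under the hood, or whether different APIs are needed to take advantage of it.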
I understand we can use an MPSImageBatch as input to the
[MPSNNGraph encodeBatchToCommandBuffer: ...]
method.
That said, every input to the MPSNNGraph needs to be encapsulated in an MPSImage.
Suppose I have a machine learning application that trains/infers on thousands of inputs, where each input has 4 feature channels, and Metal Performance Shaders is chosen as the primary AI backbone for real-time use.
Due to the nature of the encodeBatchToCommandBuffer method, I first have to create an MTLTexture as a 2D texture array. The texture has a width of 1 pixel, a height of 1 pixel, and a pixel format of RGBA32Float.
The general setup will be:
#define NumInputDims 4
MPSImageBatch * infBatch = @[];
const uint32_t totalFeatureSets = N;
// Each slice is 4 (RGBA) channels.
const uint32_t totalSlices = (totalFeatureSets * NumInputDims + 3) / 4;
MTLTextureDescriptor * descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatRGBA32Float
width: 1
height: 1
mipmapped: NO];
descriptor.textureType = MTLTextureType2DArray;
descriptor.arrayLength = totalSlices;
id<MTLTexture> texture = [mDevice newTextureWithDescriptor: descriptor];
// bytes per row is `4 * sizeof(float)` since we're doing one pixel of RGBA32F.
[texture replaceRegion: MTLRegionMake3D(0, 0, 0, 1, 1, totalSlices)
mipmapLevel: 0
withBytes: inputFeatureBuffers[0].data()
bytesPerRow: 4 * sizeof(float)];
MPSImage * infQueryImage = [[MPSImage alloc] initWithTexture: texture
featureChannels: NumInputDims];
infBatch = [infBatch arrayByAddingObject: infQueryImage];
The training/inference will be:
MPSNNGraph * mInferenceGraph = /*some MPSNNGraph setup*/;
MPSImageBatch * returnImage = [mInferenceGraph encodeBatchToCommandBuffer: commandBuffer
sourceImages: @[infBatch]
sourceStates: nil
intermediateImages: nil
destinationStates: nil];
// Commit and wait...
// Read the return image for the inferred result.
As you can see, the setup is really ad hoc - a lot of 1x1 pixels just for this sole purpose.
Is there any better way I can achieve the same result while staying on Metal Performance Shaders? I guess a further question would be: can MPS handle general machine learning cases other than CNNs? From the online documentation and the header files, the APIs seem to revolve around convolutional networks.
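To make the question concrete, here's a rough Swift sketch (hypothetical sizes, untested) of the kind of non-image path I'm hoping exists: a dense layer expressed as a plain matrix multiply with MPSMatrix and MPSMatrixMultiplication instead of 1x1 MPSImages.

import Metal
import MetalPerformanceShaders

// Sketch: N inputs with 4 feature channels as an N x 4 matrix, multiplied by a
// 4 x 8 weight matrix. No MPSImage involved.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let batch = 1024, inFeatures = 4, outFeatures = 8

func makeMatrix(rows: Int, columns: Int) -> MPSMatrix {
    let rowBytes = columns * MemoryLayout<Float32>.stride
    let descriptor = MPSMatrixDescriptor.matrixDescriptor(rows: rows,
                                                          columns: columns,
                                                          rowBytes: rowBytes,
                                                          dataType: .float32)
    let buffer = device.makeBuffer(length: rows * rowBytes, options: .storageModeShared)!
    return MPSMatrix(buffer: buffer, descriptor: descriptor)
}

let input   = makeMatrix(rows: batch, columns: inFeatures)
let weights = makeMatrix(rows: inFeatures, columns: outFeatures)
let output  = makeMatrix(rows: batch, columns: outFeatures)
// ... fill input.data and weights.data via their MTLBuffer contents ...

let gemm = MPSMatrixMultiplication(device: device,
                                   transposeLeft: false, transposeRight: false,
                                   resultRows: batch, resultColumns: outFeatures,
                                   interiorColumns: inFeatures,
                                   alpha: 1.0, beta: 0.0)

let commandBuffer = queue.makeCommandBuffer()!
gemm.encode(commandBuffer: commandBuffer,
            leftMatrix: input, rightMatrix: weights, resultMatrix: output)
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
// Read the result back from output.data.contents().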
Any response will be helpful, thank you.
Hi folks, I'm trying to import data to train a model and getting the above error. I'm using the latest Xcode, have double checked the formatting in the annotations file, and used jpgrepair to remove any corruption from the data files. Next step is to try a different dataset, but is this a particular known error? (Or am I doing something obviously wrong?)
2019 Intel Mac, Xcode 15.4, macOS Sonoma 14.1.1
Thanks
Hi everyone!
I appreciate your help. I am a researcher and I use UMAP to cluster my data. Reproducibility is a key requirement for my field, so I set a random seed for reproducibility.
After coming back to my project after some time, I no longer get the same results as before, even though I am working in a virtual environment that I did not change.
When pondering about the reasons, I remembered that I upgraded my OS from Sonoma 14.1.1 to 14.5, so I was wondering whether the change in OS might cause those issues.
I'm sorry if this question is obvious to developer folks, but before I downgrade my OS or create a virtual machine, any tip would be much appreciated. Thank you!
What KPI requirements, if any, does Apple Intelligence place on cellular networks, e.g. regarding latency, throughput, or jitter?
What is the expected effect on iPhone energy consumption?
Trying to experiment with Genmoji per the WWDC documentation and samples, but I don't seem to get the Genmoji keyboard.
I see this error in my log:
Received port for identifier response: <(null)> with error:Error Domain=RBSServiceErrorDomain Code=1 "Client not entitled" UserInfo={RBSEntitlement=com.apple.runningboard.process-state,
NSLocalizedFailureReason=Client not entitled, RBSPermanent=false}
elapsedCPUTimeForFrontBoard couldn't generate a task port
Is anything presently supported for developers? All I have done here is a simple app with a UITextView and code for:
textView.supportsAdaptiveImageGlyph = true
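For reference, my full setup is essentially the sketch below. I'm assuming attributed-text editing also needs to be enabled; that part is a guess on my side rather than something I found in the documentation.

import UIKit

// Minimal setup: a plain UITextView with adaptive image glyph support turned on.
class ViewController: UIViewController {
    let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.allowsEditingTextAttributes = true   // assumption: needed for rich content
        textView.supportsAdaptiveImageGlyph = true    // the line from the sample
        view.addSubview(textView)
    }
}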
Any thoughts?
Hi everyone,
I’m currently in the process of converting and optimizing the Stable Diffusion XL model for iOS 18. I followed the steps from the WWDC 2024 session on model optimization, specifically the one titled "Bring your machine learning and AI models to Apple Silicon."
I utilized the Stable Diffusion XL model and the tools available in the ml-stable-diffusion GitHub repository and ran the following script to convert the model into an .mlpackage:
python3 -m python_coreml_stable_diffusion.torch2coreml \
--convert-unet \
--convert-vae-decoder \
--convert-text-encoder \
--xl-version \
--model-version stabilityai/stable-diffusion-xl-base-1.0 \
--bundle-resources-for-swift-cli \
--refiner-version stabilityai/stable-diffusion-xl-refiner-1.0 \
--attention-implementation SPLIT_EINSUM \
-o ../PotraitModel/ \
--custom-vae-version madebyollin/sdxl-vae-fp16-fix \
--latent-h 128 \
--latent-w 96 \
--chunk-unet
The model conversion worked without any issues. However, when I proceeded to optimize the model in a Jupyter notebook, following the same process shown in the WWDC session, I encountered an error during the post-training quantization step. Here’s the code I used for that:
op_config = cto_coreml.OpPalettizerConfig(
    nbits=4,
    mode="kmeans",
    granularity="per_grouped_channel",
    group_size=16,
)
config = cto_coreml.OptimizationConfig(op_config)
compressed_model = cto_coreml.palettize_weights(mlmodel, config)
Unfortunately, I received the following error:
AssertionError: The IOS16 only supports per-tensor LUT, but got more than one lut on 0th axis. LUT shape: (80, 1, 1, 1, 16, 1)
It appears that the minimum deployment target of the MLModel is set to iOS 16, which might be causing compatibility issues. How can I update the minimum deployment target to iOS 18? If anyone has encountered this issue or knows a workaround, I would greatly appreciate your guidance!
Thanks in advance for any help!
Hi everyone,
I'm working on an iOS app built in Swift using Xcode, where I'm integrating Roboflow's object detection API to extract items from grocery receipts. My goal is to identify key information (like items, total, tax, etc.) from the images of these receipts.
I'm successfully sending images to the Roboflow API and receiving predictions with bounding box data, but when I attempt to extract text from the detected regions (bounding boxes), it appears that the text extraction is failing—no text is being recognized. The issue seems to be that the bounding boxes are either not properly being handled or something is going wrong in the way I process the API response.
Here's a brief breakdown of what I'm doing:
The image is captured, converted to base64, and sent to the Roboflow API.
The API response comes back with bounding boxes for the detected elements (items, date, subtotal, etc.).
The problem occurs when I try to extract the text from the image using the bounding box data—it seems like the bounding boxes are being found, but no text is returned.
I suspect the issue might be happening because the app’s segue to the results view controller is triggered before the OCR extraction completes, or there might be a problem in my code handling the bounding box response.
Response Data:
{
"inference_id": "77134cce-91b5-4600-a59b-fab74350ca06",
"time": 0.09240847699993537,
"image": {
"width": 370,
"height": 502
},
"predictions": [
{
"x": 163.5,
"y": 250.5,
"width": 313.0,
"height": 127.0,
"confidence": 0.9357666373252869,
"class": "Item",
"class_id": 1,
"detection_id": "753341d5-07b6-42a1-8926-ecbc61128243"
},
{
"x": 52.5,
"y": 417.5,
"width": 89.0,
"height": 23.0,
"confidence": 0.8819760680198669,
"class": "Date",
"class_id": 0,
"detection_id": "b4681149-d538-47b1-8700-d9528bf1daa0"
},
...
]
}
And the log showing bounding boxes:
Prediction: ["width": 313, "y": 250.5, "x": 163.5, "detection_id": 753341d5-07b6-42a1-8926-ecbc61128243, "class": Item, "height": 127, "confidence": 0.9357666373252869, "class_id": 1]
No bounding box found in prediction.
I've double-checked the bounding box coordinates, and everything seems fine. Does anyone have experience with using OCR alongside object detection APIs in Swift? Any help on how to ensure the bounding boxes are properly processed and used for OCR would be greatly appreciated!
Also, would it help to delay the segue to the results view controller until OCR is complete?
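In case it helps to see what I mean, below is a simplified sketch of what I'm aiming for: Vision text recognition restricted to one detected box, with the segue performed only after OCR completes. Assumptions on my side: the box values are pixels with a top-left origin (center x/y plus width/height, as in the response above), and "showResults" is a hypothetical segue identifier.

import UIKit
import Vision

// Run text recognition inside a single detected box, then hand the result to a
// completion handler on the main queue (a safe place to trigger the segue).
func recognizeText(in image: UIImage,
                   boxCenterX: CGFloat, boxCenterY: CGFloat,
                   boxWidth: CGFloat, boxHeight: CGFloat,
                   completion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage else { completion(""); return }
    let imageWidth = CGFloat(cgImage.width)
    let imageHeight = CGFloat(cgImage.height)

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")
        DispatchQueue.main.async { completion(text) }
    }
    request.recognitionLevel = .accurate

    // Convert the pixel box (top-left origin) to Vision's normalized
    // regionOfInterest (lower-left origin).
    let originX = (boxCenterX - boxWidth / 2) / imageWidth
    let originY = 1 - (boxCenterY + boxHeight / 2) / imageHeight
    request.regionOfInterest = CGRect(x: originX, y: originY,
                                      width: boxWidth / imageWidth,
                                      height: boxHeight / imageHeight)

    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}

// Usage: only perform the segue once the completion fires.
// recognizeText(in: receiptImage, boxCenterX: 163.5, boxCenterY: 250.5,
//               boxWidth: 313, boxHeight: 127) { text in
//     self.extractedText = text
//     self.performSegue(withIdentifier: "showResults", sender: self) // hypothetical identifier
// }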
Thank you!
The Metal plugin for TensorFlow had its GitHub repo taken down, and on PyPI the last update was a year ago, for TF 2.14. What's the status of the Metal plugin? For now it seems to work fine with TF 2.15, but what's the plan for the future?
When I use VNGenerateForegroundInstanceMaskRequest to generate a mask in the simulator (from SwiftUI), I get the error "Could not create inference context".
I then added code to force the Vision request to run on the CPU:
let request = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(ciImage: inputImage)

#if targetEnvironment(simulator)
if #available(iOS 18.0, *) {
    let allDevices = MLComputeDevice.allComputeDevices
    for device in allDevices {
        if device.description.contains("MLCPUComputeDevice") {
            request.setComputeDevice(.some(device), for: .main)
            break
        }
    }
} else {
    // Fallback on earlier versions
    request.usesCPUOnly = true
}
#endif

do {
    try handler.perform([request])
    if let result = request.results?.first {
        let mask = try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
        return CIImage(cvPixelBuffer: mask)
    }
} catch {
    print(error)
}
Even when I force the simulator to run the request on the CPU, I still get the error "Could not create inference context".
I was just wondering, and I'm not sure if anyone else has thought about this, but different sound output devices project sound in different ways.
Could something be added to the Bluetooth settings that detects when the connected device is an audio device and automatically sets the EQ accordingly (per the user's preference)?
In other words, each audio device would have a specific EQ profile stored for it, recognized via Bluetooth.
I’m currently developing an app that features a main view with a UITableView. When users select a row, they are navigated to a detail view that contains a UITextField. This UITextField already supports Writing Tools.
My question is: when a user long-presses a UITableView cell, is it possible to add a Writing Tools option to the context menu so users can interact with Writing Tools more conveniently, for example to summarize the cell's detail text?
I'm trying to disable Writing Tools for a specific TextField using .writingToolsBehavior(.disabled), but when running the app on my iPhone 16 Pro with Apple Intelligence enabled, I can still use Writing Tools on the text box. I also see no difference with .writingToolsBehavior(.limited).
Is there something I'm doing wrong or is this a bug?
Sample code below:
import SwiftUI
struct ContentView: View {
    @State var text = ""

    var body: some View {
        VStack {
            TextField("Enter Text", text: $text)
                .writingToolsBehavior(.disabled)
        }
        .padding()
    }
}
#Preview {
ContentView()
}