At one point, Apple added a Summarize feature to Mail on the Mac that worked. Now when I click Summarize, I get:
"Summaries Unavailable
Mail summarization is unavailable at this time. Try again later."
I've rebooted, turned Apple Intelligence off and back on, waited a day to see if it was syncing things up, etc.
I'm running the latest version of macOS (Version 15.1 (24B82)).
Any ideas?
Hello, I recently updated my iPhone 16 Pro Max to the iOS 18.2 beta, and I noticed that I have the Playground app. However, my early access request hasn't been approved yet, and it's been almost 24 hours. Is there a solution for this?
I have been stuck at “Early Access Requested” for about 48 hours. Usually it takes about an hour or less for a request to be accepted, but this one seems very slow. Is this an issue on my end or on Apple's?
Please let me know if there is a solution.
Image Playground has been stuck at “Early Access Requested” for more than 24 hours.
Is it just me, or is Image Playground early access unavailable? I've been waiting a little over 24 hours and still have no access. (No rush for the team if something is wrong; they might be busy rolling out the first few Apple Intelligence features in the iOS 18.1 public release.)
I requested early access for Image Playground 24 hours ago and haven't gotten it approved. Are they rolling it out slowly, or is it a problem with my phone?
It would be good if we could talk to Siri with emojis as well. I'm pretty sure emoji is her native language.
I'm hitting a limit when trying to train an Image Classifier.
It fails at about 16k images (in line with the error info) and gives this error:
IOSurface creation failed: e00002be parentID: 00000000 properties: {
IOSurfaceAllocSize = 529984;
IOSurfaceBytesPerElement = 4;
IOSurfaceBytesPerRow = 1472;
IOSurfaceElementHeight = 1;
IOSurfaceElementWidth = 1;
IOSurfaceHeight = 360;
IOSurfaceName = CoreVideo;
IOSurfaceOffset = 0;
IOSurfacePixelFormat = 1111970369;
IOSurfacePlaneComponentBitDepths = (
8,
8,
8,
8
);
IOSurfacePlaneComponentNames = (
4,
3,
2,
1
);
IOSurfacePlaneComponentRanges = (
1,
1,
1,
1
);
IOSurfacePurgeWhenNotInUse = 1;
IOSurfaceSubsampling = 1;
IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)
I feel like I was able to use more images than this before upgrading to Sonoma, but I don't have the receipts...
Is there a way around this?
I have oodles of spare memory on my machine; it's using about 16 GB of 64 when it crashes...
The code to create the model is:

import CreateML

// Transfer learning on the built-in scenePrint features, with a logistic regression head.
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)
let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters
)
I have also tried the same training source in the Create ML app; it runs through 'Extracting features' and crashes at about 16k images processed.
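One workaround I'm considering is capping the number of images per label by copying a random subset into a temporary directory before training. This is only a sketch under my own assumptions; the 12,000 cap and the directory handling below are made up for illustration, not anything Create ML documents:

import Foundation

// Copy at most `cap` images per labeled subdirectory into a fresh directory,
// keeping the total below the apparent per-client IOSurface limit.
func makeCappedTrainingDir(from sourceDir: URL, cap: Int) throws -> URL {
    let fm = FileManager.default
    let cappedDir = fm.temporaryDirectory.appendingPathComponent("capped-training")
    try? fm.removeItem(at: cappedDir)
    try fm.createDirectory(at: cappedDir, withIntermediateDirectories: true)

    for labelDir in try fm.contentsOfDirectory(at: sourceDir, includingPropertiesForKeys: [.isDirectoryKey])
        where labelDir.hasDirectoryPath {
        let destLabelDir = cappedDir.appendingPathComponent(labelDir.lastPathComponent)
        try fm.createDirectory(at: destLabelDir, withIntermediateDirectories: true)
        let images = try fm.contentsOfDirectory(at: labelDir, includingPropertiesForKeys: nil)
        // Keep a random subset of at most `cap` images for this label.
        for image in images.shuffled().prefix(cap) {
            try fm.copyItem(at: image, to: destLabelDir.appendingPathComponent(image.lastPathComponent))
        }
    }
    return cappedDir
}

// Then point the classifier at the capped copy, e.g.:
// let trainingURL = try makeCappedTrainingDir(from: trainingDir.url, cap: 12_000)
// let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingURL), parameters: parameters)

Of course that only sidesteps the limit by training on fewer images rather than raising it.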
Thank you
When Apple Intelligence originally launched, there were some terms and conditions for activating and using it.
To activate Apple Intelligence in China, the iPhone must be a non-Chinese model, meaning it was not purchased in mainland China (this does not include Hong Kong and Macau). Also, a Chinese Apple Account cannot activate Apple Intelligence.
To meet these requirements, I traveled to Hong Kong and purchased an iPhone 16 Pro Max, and I decided to switch my account region from China to the United States.
I started switching my account on October 19 (CST 2:00 am, Shanghai time 3:00 pm). As of now (CST 8:30 am October 24, Shanghai time 9:30 pm) I still can't join the Apple Intelligence waitlist. I have also upgraded my phone to iOS 18.2.
I contacted Apple Support using my Chinese phone number and was transferred to the Philippines Apple Support team, which didn't get me anywhere. They keep saying that the beta version of iOS currently on my phone has a problem. But when I sign out of this Apple Account and switch to a different, UK-based Apple Account, I can successfully enable Apple Intelligence.
What does that tell you?! It shows that my Apple Account has a problem: it didn't switch successfully to the United States region! And the Philippines support team keeps asking me to restore my iPhone like crazy! I've told them that I have used several Apple Accounts from the United States and the United Kingdom that can successfully enable Apple Intelligence.
But the Philippines Apple Support team says my Apple Account doesn't have any problem!
Apple, please solve this problem! Anyone who has faced this kind of issue, please share it with us!!!
Cheers!
I'm using the iOS 18.2 beta on my iPhone 15 Pro Max, but I can't find Apple Intelligence, and the Settings app still shows the old Siri logo.
Hi
I got access to developer beta 18.2. All the features are working fine, but Image Playground is stuck at “Early Access Requested”. Can someone share how long it takes before you're able to access Image Playground? I have been stuck for 3 hours.
So, I was working on organizing my Home Screen. Then I lost track of where I was and planned to just move an app to the App Library, but instead I deleted it. How do I get it back?
Hi, I found that continuously predicting with the same Core ML model at 120 FPS is faster than at 60 FPS.
I use a MacBook Pro M2 with ProMotion turned on to run Core ML model prediction on a 120 FPS video; the average prediction time is 7.46 ms.
But when I turn off ProMotion, set a 60 Hz refresh rate, and run Core ML model prediction on a 60 FPS video, the average prediction time is 10.91 ms.
What could be the technical explanation for these results? Is there any documentation or technical literature that addresses this behavior?
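In case it matters, this is roughly how I'm measuring the averages (a sketch; the model path is a placeholder and `frames` stands in for my decoded video frames):

import CoreML
import QuartzCore

// Load a compiled Core ML model (path is a placeholder).
let model = try MLModel(contentsOf: URL(fileURLWithPath: "/path/to/MyModel.mlmodelc"))

// Average wall-clock time per prediction, in milliseconds.
func averagePredictionTime(frames: [MLFeatureProvider]) throws -> Double {
    var total = 0.0
    for frame in frames {
        let start = CACurrentMediaTime()
        _ = try model.prediction(from: frame)
        total += CACurrentMediaTime() - start
    }
    return total / Double(frames.count) * 1000
}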
Using an iPhone 15 Pro. I'm in Australia on iOS 18.2 beta 1, but there's no mention of Apple Intelligence anywhere on the phone. Settings looks identical, with no Apple Intelligence menu, and there's no Image Playground image generation. It's as if my phone doesn't know it exists.
I'm trying to run a Core ML model.
It's an image classifier generated using:

import CreateML

let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)
let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters
)
I'm trying to run it with the new async Vision API:

import AppKit
import CoreML
import Vision

let model = try MLModel(contentsOf: modelUrl)
guard let modelContainer = try? CoreMLModelContainer(model: model) else {
    fatalError("The model is missing")
}
let request = CoreMLRequest(model: modelContainer)

let image = NSImage(named: "testImage")!
let cgImage = image.toCGImage()!   // toCGImage() is my own NSImage extension
let handler = ImageRequestHandler(cgImage)
do {
    let results = try await handler.perform(request)
    print(results)
} catch {
    print("Failed: \(error)")
}
This gives me
Failed: internalError("Error Domain=com.apple.Vision Code=7 "The VNDetectorProcessOption_ScenePrints required option was not found" UserInfo={NSLocalizedDescription=The VNDetectorProcessOption_ScenePrints required option was not found}")
Please help! Am I missing something?
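In the meantime I may fall back to the older callback-based Vision API to see whether it hits the same error. As far as I can tell it takes roughly this shape (a sketch, reusing the same model and cgImage from above):

import Vision

// Wrap the Core ML model for the classic (pre-async) Vision API.
let vnModel = try VNCoreMLModel(for: model)
let vnRequest = VNCoreMLRequest(model: vnModel) { request, error in
    // Classification results arrive as VNClassificationObservation values.
    if let observations = request.results as? [VNClassificationObservation] {
        for observation in observations {
            print(observation.identifier, observation.confidence)
        }
    }
}
let vnHandler = VNImageRequestHandler(cgImage: cgImage)
try vnHandler.perform([vnRequest])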
I'm using an iPhone 15 Pro Max running developer beta 18.2, released today. I've already been an Apple Intelligence user and have now been able to link it with my PAID ChatGPT account.
HOWEVER, I'm searching for the image features everyone seems to be posting about and cannot find them anywhere.
I'm apparently supposed to sign up for beta access to the image features through some new, natively released Apple app that was supposedly included in this build, which I cannot find.
What gives?!
I downloaded the RC beta on my MacBook and joined the waitlist. So far I haven't received any message or notification that I'm in, but I have a question; it's kind of silly, but I just want confirmation. By joining the beta and Apple Intelligence on my MacBook, will I have Apple Intelligence on my iPhone once the official version is released, since I already joined through my MacBook in the beta? Kind of curious about it.
Hi,
I am currently running an LSTM on TensorFlow. However, when I switched from Keras 2 to Keras 3, the running time increased about 10x; it seems there is no GPU acceleration.
Here is my setup:
batch size = 256
optimiser = adam
activation = tanh
_________________________________________________________________
 Layer (type)                      Output Shape         Param #
=================================================================
 input_1 (InputLayer)              [(None, 7, 16)]      0
 bidirectional (Bidirectional)     (None, 7, 320)       226560
 bidirectional_1 (Bidirectional)   (None, 7, 512)       1181696
 bidirectional_2 (Bidirectional)   (None, 256)          656384
 dense (Dense)                     (None, 1)            257
=================================================================
Total params: 2064897 (7.88 MB)
Trainable params: 2064897 (7.88 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
This is keras 3.6.0 + tensorflow 2.17.0 + tensorflow-metal 1.1.0 training status:
Training------------
Epoch 1/200
28/681 ━━━━━━━━━━━━━━━━━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591
This is keras 2.14.0 + tensorflow 2.14.0 + tensorflow-metal 1.1.0 training status:
Training------------
Epoch 1/200
681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010
Is that because Keras 3 has no GPU support on macOS?
Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, GPU support goes away as well.
My system is macOS 15.0.1 and the code was running on Python 3.11.
I am not sure why these things happen.
Thanks
Hello.
I can't find anything about the SSML dialect used in Apple's speech synthesis.
SSML as documented by Google, Amazon, and the W3C either doesn't work or works incorrectly.
Where is Apple's documentation for its implementation of SSML?
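The only entry point I've found so far is the SSML initializer on AVSpeechUtterance. This is a minimal sketch of what I'm trying, assuming W3C-style markup (the initializer is failable and returns nil when the markup can't be parsed):

import AVFoundation

let ssml = """
<speak>Hello <break time="500ms"/> world.</speak>
"""
if let utterance = AVSpeechUtterance(ssmlRepresentation: ssml) {
    // In real code, keep a strong reference to the synthesizer.
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
} else {
    print("SSML failed to parse")
}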
My iPhone 15 Pro is from Hong Kong (China). I am outside of China, and Asia in general; I have never been to China myself, and the iPhone was activated in another country. And it is not the EU.
My iPhone's language, Siri, and region settings are set to US English, and I've updated to the iOS 18.1 RC, but Apple Intelligence doesn't show up in the Siri settings.