Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics

Post · Replies · Boosts · Views · Activity
CoreML Crash on iOS18 Beta5
Hello, my app works well on iOS 17 and earlier iOS 18 betas, but it crashes on the latest iOS 18 beta 5 when calling the model's predictionFromFeatures. The crash stack is:

```
*** Terminating app due to uncaught exception 'NSInvalidArgumentException',
reason: 'Unrecognized ANE execution priority MLANEExecutionPriority_Unspecified'
Last Exception Backtrace:
0   CoreFoundation   0x000000019bd6408c __exceptionPreprocess + 164
1   libobjc.A.dylib  0x000000019906b2e4 objc_exception_throw + 88
2   CoreFoundation   0x000000019be5f648 -[NSException initWithCoder:]
3   CoreML           0x00000001b7507340 -[MLE5ExecutionStream _setANEExecutionPriorityWithOptions:] + 248
4   CoreML           0x00000001b7508374 -[MLE5ExecutionStream _prepareForInputFeatures:options:error:] + 248
5   CoreML           0x00000001b7507ddc -[MLE5ExecutionStream executeForInputFeatures:options:error:] + 68
6   CoreML           0x00000001b74ce5c4 -[MLE5Engine _predictionFromFeatures:stream:options:error:] + 80
7   CoreML           0x00000001b74ce7fc -[MLE5Engine _predictionFromFeatures:options:error:] + 208
8   CoreML           0x00000001b74cf110 -[MLE5Engine _predictionFromFeatures:usingState:options:error:] + 400
9   CoreML           0x00000001b74cf270 -[MLE5Engine predictionFromFeatures:options:error:] + 96
10  CoreML           0x00000001b74ab264 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 684
11  CoreML           0x00000001b70991bc -[MLDelegateModel predictionFromFeatures:options:error:] + 124
```

My model file is an .mlpackage. The source code is as below:

```objc
// model
MLModel *_model;
......
// model init
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndNeuralEngine;
_model = [MLModel modelWithContentsOfURL:compileUrl configuration:config error:&error];
.....
// model prediction
MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
id<MLFeatureProvider> outFeatures = [_model predictionFromFeatures:_modelInput options:option error:&error];
```

Is there anything wrong? Any advice would be appreciated.
3 replies · 1 boost · 413 views · Aug ’24
Loading CoreML model increases app size?
Hi, I have been noticing some strange issues when using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option. This speeds up transcribing versus Metal. However, every time I use it, the app size shown in iPhone Settings > General > Storage increases, specifically the "Documents & Data" part; the bundle size stays constant. The app seems to grow by roughly the size of the Core ML model, and after a few reloads it can reach over 3-4 GB! I thought the Core ML model (which is in the bundle) might be saved to a file somewhere, but I can't see where. I have tried Instruments and Xcode, plus lots of printing of the cache and temp directories, deleting the caches, etc., with no effect. I have downloaded the app's container from Xcode and inspected it: there are some files stored in the cache, but only a few KB, and even though Settings > Storage shows a few GB, the container is only a few MB. Can someone help, or give me some guidance on how to figure out why "Documents & Data" keeps increasing? Where could this storage live if it is not in the container downloaded from Xcode? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both show the same behaviour when using Core ML. Thanks in advance for any help; I am totally baffled by this behaviour.
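In case it helps anyone reproducing this: once the container has been exported to a Mac via Xcode's "Download Container…", a small stdlib-only Python sketch like the one below can rank its subdirectories by size, which may reveal where compiled model caches (e.g. an .mlmodelc) are accumulating. The container path is a placeholder and this is just a generic disk-audit helper, not an Apple-documented diagnostic.

```python
import os

def largest_subdirs(root, top=5):
    """Walk `root` and return the `top` largest immediate subdirectories by total bytes."""
    sizes = {}
    for entry in os.scandir(root):
        if not entry.is_dir(follow_symlinks=False):
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(entry.path):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # file vanished or unreadable; skip it
        sizes[entry.name] = total
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Example usage (hypothetical export location):
# for name, size in largest_subdirs("/Users/me/Desktop/container/AppData"):
#     print(f"{name}: {size / 1e6:.1f} MB")
```

If the exported container really is only a few MB while Settings reports GB, comparing this output against the Settings figure at least confirms the growth is outside what Xcode exports.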
6 replies · 3 boosts · 907 views · May ’24
TimeSeriesClassifier
In the WWDC24 session "What's New in Create ML", at 6:03, the presenter introduced TimeSeriesClassifier as a new component of Create ML Components. Where are the documentation and code examples for this feature? My app captures accelerometer time-series data that I want to classify. Thank you so much!
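While the TimeSeriesClassifier documentation is pending, it may help to note that time-series classifiers generally consume fixed-length windows of sensor samples, often summarized by simple per-window features. The sketch below is generic signal-processing background in stdlib Python, not Create ML API:

```python
import math

def window_features(samples):
    """Simple per-window features often fed to a time-series classifier:
    mean, standard deviation, and zero-crossing count."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return {"mean": mean, "std": math.sqrt(var), "zero_crossings": zero_crossings}

# Example: one axis of an accelerometer window
# window_features([1.0, -1.0, 1.0, -1.0])
```

Whatever API TimeSeriesClassifier ends up exposing, windowing and labeling the accelerometer stream is the part you can prepare now.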
3 replies · 2 boosts · 551 views · Jun ’24
Using OpenIntents with voice
Hi, I am working with AppIntents and have created an OpenIntent whose target uses a MyAppContact AppEntity that I have created. This works fine when run from Shortcuts, because it pops up a list of options from the `suggestedEntities` method. But it doesn't work well with Siri: Siri invokes the AppIntent but keeps repeatedly asking for the value of the `target` entity, which you can't really pass in by voice. What's the workaround here? Can an OpenIntent be activated by voice as well?
0 replies · 0 boosts · 461 views · Aug ’24
Ideas to improve Apple Watch
Dear Apple Team,

I have a suggestion to enhance the Apple Watch user experience. A new feature could provide personalized recommendations based on weather conditions and the user's mood. For example, during hot weather it could suggest drinking something cold, or if the user is feeling down, it could offer ways to boost their mood. This kind of feature could make the Apple Watch not just a health and fitness tracker but also a more capable personal assistant.

"Improve communication with Apple Watch"

Feature #1: Noise detection and location suggestions. Imagine having your Apple Watch detect ambient noise levels and suggest a quieter location for your call.

Feature #2: Context-aware call response options. If you can't answer a call, your Apple Watch could offer pre-set responses to communicate your status and reduce missed-call anxiety. For example, if you're in a busy restaurant, your Apple Watch could suggest moving to a quieter spot nearby for a better conversation. Or if you're in a movie theater, your Apple Watch could send an automatic "I'm at the movies" text to the caller.

"Improve user experience and app management"

Automated Sleep Notifications: The ability for the Apple Watch to automatically turn off notifications or change the watch face when the user is sleeping would provide a more seamless experience. For instance, when the watch detects that the user is in sleep mode, it could enable Do Not Disturb to silence calls and alerts.

Caller Notification: In addition, it would be great if the Apple Watch could inform callers that the user is currently sleeping. This could help manage expectations for those trying to reach the user at night.

App Management to Conserve Battery: Implementing a feature that detects power-draining apps while the user is asleep could further enhance battery life. The watch and the iPhone could close or pause apps that are using significant power while the user is not active.

I believe these features could provide valuable advancements in enhancing the Apple Watch's usability for those who prioritize a restful night's sleep. Thank you for considering my suggestions.

Best regards,
Mahmut Ötgen
Istanbul, Turkey
1 reply · 0 boosts · 843 views · Aug ’24
iOS 18.1 beta - App crashes at runtime while using Translation.TranslationError in project
I'm trying to cast the error thrown by TranslationSession.translations(from:) to Translation.TranslationError. However, the app crashes at runtime whenever Translation.TranslationError is used anywhere in the project.

Environment:
iOS version: 18.1 beta
Xcode version: 16 beta

```
dyld[14615]: Symbol not found: _$s11Translation0A5ErrorVMa
  Referenced from: <3426152D-A738-30C1-8F06-47D2C6A1B75B> /private/var/containers/Bundle/Application/043A25BC-E53E-4B28-B71A-C21F77C0D76D/TranslationAPI.app/TranslationAPI.debug.dylib
  Expected in: /System/Library/Frameworks/Translation.framework/Translation
```
1 reply · 1 boost · 572 views · Aug ’24
Chat gpt audio in background
Dear Apple Development Team,

I'm writing to express my concerns and request a feature enhancement regarding the ChatGPT app for iOS. Currently, the app's audio functionality does not work when the app is in the background. This limitation significantly affects the user experience, particularly for those of us who rely on the app for ongoing, interactive voice conversations.

Given that many apps, particularly media and streaming services, are allowed to continue audio playback when minimized, it's frustrating that the ChatGPT app cannot do the same. This restriction interrupts the flow of conversation, forcing users to stay within the app to maintain an audio connection.

For users who multitask on their iPhones, being able to switch between apps while continuing to listen or interact with ChatGPT is essential. The ability to reference notes, browse the web, or even respond to messages while maintaining an ongoing conversation with ChatGPT would greatly enhance the app's usability and align it with other background-capable apps.

I understand that Apple prioritizes resource management and device performance, but I believe there's a strong case for allowing apps like ChatGPT to operate with background audio. Given its growing importance as a tool for productivity, learning, and communication, adding this capability would provide significant value to users.

I hope you will consider this feedback for future updates to iOS, or provide guidance on any existing APIs that could be leveraged to enable such functionality. Thank you for your time and consideration.

Best regards,
Luke

(Yes, I used GPT to write this.)
1 reply · 0 boosts · 312 views · Aug ’24
How to deploy Vision Transformer with ANE to Achieve Faster Uncached Load Speed
I wanted to deploy some ViT models on an iPhone. I referred to https://machinelearning.apple.com/research/vision-transformers for deployment and wrote a simple demo based on the code from https://github.com/apple/ml-vision-transformers-ane. However, I found that the uncached load time on the phone is very long. According to the blog, the input is already aligned to 64 bytes, but the load is still very slow. Is there any way to speed it up? This is my test case:

```python
import math

import torch
from torch import nn
import coremltools as ct

class SelfAttn(torch.nn.Module):
    def __init__(self, window_size, num_heads, dim, dim_out):
        super().__init__()
        self.window_size = window_size
        self.num_heads = num_heads
        self.dim = dim
        self.dim_out = dim_out
        self.q_proj = nn.Conv2d(in_channels=dim, out_channels=dim_out, kernel_size=1)
        self.k_proj = nn.Conv2d(in_channels=dim, out_channels=dim_out, kernel_size=1)
        self.v_proj = nn.Conv2d(in_channels=dim, out_channels=dim_out, kernel_size=1)

    def forward(self, x):
        B, HW, C = x.shape
        image_shape = (B, C, self.window_size, self.window_size)
        x_2d = x.permute((0, 2, 1)).reshape(image_shape)       # BCHW
        x_flat = torch.unsqueeze(x.permute((0, 2, 1)), 2)      # BC1L
        q, k, v_2d = self.q_proj(x_flat), self.k_proj(x_flat), self.v_proj(x_2d)
        mh_q = torch.split(q, self.dim_out // self.num_heads, dim=1)  # BC1L
        mh_v = torch.split(
            v_2d.reshape(B, -1, x_flat.shape[2], x_flat.shape[3]),
            self.dim_out // self.num_heads, dim=1
        )
        mh_k = torch.split(
            torch.permute(k, (0, 3, 2, 1)), self.dim_out // self.num_heads, dim=3
        )
        scale_factor = 1 / math.sqrt(mh_q[0].size(1))
        attn_weights = [
            torch.einsum("bchq,bkhc->bkhq", qi, ki) * scale_factor
            for qi, ki in zip(mh_q, mh_k)
        ]
        attn_weights = [torch.softmax(aw, dim=1) for aw in attn_weights]  # softmax applied on channel "C"
        mh_x = [torch.einsum("bkhq,bchk->bchq", wi, vi) for wi, vi in zip(attn_weights, mh_v)]
        x = torch.cat(mh_x, dim=1)
        return x

window_size = 8
path_batch = 1024
emb_dim = 96
emb_dim_out = 96

x = torch.rand(path_batch, window_size * window_size, emb_dim)
qkv_layer = SelfAttn(window_size, 1, emb_dim, emb_dim_out)
jit = torch.jit.trace(qkv_layer, (x))
mlmod_fixed_shape = ct.convert(
    jit,
    inputs=[ct.TensorType("x", x.shape)],
    convert_to="mlprogram",
)
mlmodel_path = "test_ane.mlpackage"
mlmod_fixed_shape.save(mlmodel_path)
```

The uncached load took nearly 36 seconds, and this is just a single matrix multiplication.
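Not an answer to the load-time issue, but for anyone decoding the einsum-heavy forward pass above: each head computes ordinary scaled dot-product attention, softmax(q·kᵀ/√d)·v, just laid out in ANE-friendly BC1L tensors. A stdlib-only sketch of the underlying math for a single head (plain lists instead of tensors, no PyTorch required):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """Scaled dot-product attention for one head.
    q, k, v: lists of vectors (seq_len x dim)."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        out.append([sum(w * vj[c] for w, vj in zip(weights, v)) for c in range(len(v[0]))])
    return out
```

The ANE-optimized version produces the same values; only the tensor layout and the split into per-head einsums differ.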
0 replies · 1 boost · 288 views · Aug ’24
Bug Report: macOS 15 Beta - PyTorch gridsample Not Utilising Apple Neural Engine on MacBook Pro M2
In the macOS 15 beta, the grid_sample function from PyTorch is not executing as expected on the Apple Neural Engine on a MacBook Pro M2. Please find below a Python code snippet that demonstrates the problem:

```python
import coremltools as ct
import torch
import torch.nn as nn
import torch.nn.functional as F

class PytorchGridSample(torch.nn.Module):
    def __init__(self, grids):
        super(PytorchGridSample, self).__init__()
        self.upsample1 = nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1)
        self.upsample2 = nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1)
        self.upsample3 = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)
        self.upsample4 = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
        self.upsample5 = nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1)
        self.grids = grids

    def forward(self, x):
        x = self.upsample1(x)
        x = F.grid_sample(x, self.grids[0], padding_mode='reflection', align_corners=False)
        x = self.upsample2(x)
        x = F.grid_sample(x, self.grids[1], padding_mode='reflection', align_corners=False)
        x = self.upsample3(x)
        x = F.grid_sample(x, self.grids[2], padding_mode='reflection', align_corners=False)
        x = self.upsample4(x)
        x = F.grid_sample(x, self.grids[3], padding_mode='reflection', align_corners=False)
        x = self.upsample5(x)
        x = F.grid_sample(x, self.grids[4], padding_mode='reflection', align_corners=False)
        return x

def convert_to_coreml(model, input_):
    traced_model = torch.jit.trace(model, example_inputs=input_, strict=False)
    coreml_model = ct.converters.convert(
        traced_model,
        inputs=[ct.TensorType(shape=input_.shape)],
        compute_precision=ct.precision.FLOAT16,
        minimum_deployment_target=ct.target.macOS14,
        compute_units=ct.ComputeUnit.ALL
    )
    return coreml_model

def main(pt_model, input_):
    coreml_model = convert_to_coreml(pt_model, input_)
    coreml_model.save("grid_sample.mlpackage")

if __name__ == "__main__":
    input_tensor = torch.randn(1, 512, 4, 4)
    grids = [torch.randn(1, 2*i, 2*i, 2) for i in [4, 8, 16, 32, 64, 128]]
    pt_model = PytorchGridSample(grids)
    main(pt_model, input_tensor)
```
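For readers unfamiliar with what grid_sample actually computes, here is a stdlib-only sketch of the core idea for a single-channel image, using nearest-neighbour sampling with align_corners=True semantics (PyTorch's default interpolation is bilinear and the snippet above uses align_corners=False with reflection padding, so this is a simplification for intuition, not a reimplementation): each grid entry (x, y) in [-1, 1] is mapped to pixel coordinates and the corresponding pixel value is read.

```python
def grid_sample_nearest(image, grid):
    """image: 2-D list (H x W); grid: 2-D list of (x, y) pairs in [-1, 1].
    Nearest-neighbour sampling with align_corners=True semantics."""
    h, w = len(image), len(image[0])
    out = []
    for row in grid:
        out_row = []
        for x, y in row:
            # Map normalised coords [-1, 1] to pixel indices [0, W-1] / [0, H-1].
            px = round((x + 1) / 2 * (w - 1))
            py = round((y + 1) / 2 * (h - 1))
            # Clamp out-of-range coords (a stand-in for a real padding mode).
            px = min(max(px, 0), w - 1)
            py = min(max(py, 0), h - 1)
            out_row.append(image[py][px])
        out.append(out_row)
    return out
```

The data-dependent gather this implies (output pixel reads an arbitrary input pixel) is exactly the kind of op that accelerators like the ANE handle differently from dense convolutions, which may be relevant to why it misbehaves there.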
0 replies · 0 boosts · 267 views · Aug ’24
Upgraded to macOS 15, Core ML models are slower
After I upgraded to macOS 15 beta 4 (M1, 16 GB), the sampling speed of apple/ml-stable-diffusion was about 40% slower than on macOS 14. And when I recompile and run with Xcode 16, the following error appears:

```
loc("EpicPhoto/Unet.mlmodelc/model.mil":2748:12): error: invalid axis: 4294967296, axis must be in range -|rank| <= axis < |rank|
Assertion failed: (0 && "failed to infer output types"), function _inferJITOutputTypes, file GPUBaseOps.mm, line 339.
```

I checked the macOS 15 release notes and saw that the problem of Core ML models running slowly was supposedly fixed, but it doesn't seem to be:

"Fixed: Inference time for large Core ML models is slower than expected on a subset of M-series SOCs (e.g. M1, M1 Max) on macOS. (129682801)"
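One possibly useful observation about the compile error: the reported axis 4294967296 is exactly 2^32, which looks like a 32-bit overflow somewhere in the MIL compiler rather than anything in the model itself. The interpretation is my speculation; only the arithmetic below is verifiable:

```python
import struct

bad_axis = 4294967296
assert bad_axis == 2**32  # exactly one past the unsigned 32-bit range

# For comparison, a negative axis like -1 reinterpreted as unsigned 32-bit
# would wrap to 4294967295, one less than the reported value.
wrapped = struct.unpack("<I", struct.pack("<i", -1))[0]
print(wrapped)  # 4294967295
```

If so, this would be worth including in a Feedback Assistant report alongside the release-note reference.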
2 replies · 0 boosts · 329 views · Aug ’24
translationTask does not execute when content appears
The documentation for translationTask(source:target:action:) says it should translate when content appears, but this isn't happening. I'm only able to translate when I manually associate the task with a configuration and instantiate that configuration. Here's the complete source code:

```swift
import SwiftUI
import Translation

struct ContentView: View {
    @State private var originalText = "The orange fox jumps over the lazy dog"
    @State private var translationTaskResult = ""
    @State private var translationTaskResult2 = ""
    @State private var configuration: TranslationSession.Configuration?

    var body: some View {
        List {
            // THIS DOES NOT WORK
            Section {
                Text(translationTaskResult)
                    .translationTask { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult = response.targetText
                            } catch {
                                print(error)
                            }
                        }
                    }
            }
            // THIS WORKS
            Section {
                Text(translationTaskResult2)
                    .translationTask(configuration) { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult2 = response.targetText
                            } catch {
                                print(error)
                            }
                        }
                    }
                Button(action: {
                    if configuration == nil {
                        configuration = TranslationSession.Configuration()
                        return
                    }
                    configuration?.invalidate()
                }) {
                    Text("Translate")
                }
            }
        }
    }
}
```

How can I automatically translate a given text when it appears, using the new translationTask API?
3 replies · 1 boost · 616 views · Jun ’24
Apple Intelligence feature in macOS 15.1 beta not working despite compatible hardware and system settings
I am attempting to install the macOS 15.1 update along with the Apple Intelligence beta feature, to try it out and then integrate it into the application I am developing. I have a compatible MacBook Air M2 with the region set to the United States and the language set to US English; however, the machine was purchased in mainland China. I have downloaded 15.1, but Apple Intelligence does not appear. Could this be because the Mac was bought in China?
0 replies · 0 boosts · 438 views · Aug ’24
Help Needed: Error Codes in VCPHumanPoseImageRequest.mm[85] and NSArrayM insertObject
Hey all 👋🏼 We're currently working on a video processing project using the Vision framework (face, body, and hand pose detection), and we've encountered a couple of errors that I need help with. We are on Xcode 16 beta 3, testing on an iPhone 14 Pro running the iOS 18 beta. The error messages are as follows:

```
[LOG_ERROR] /Library/Caches/com.apple.xbs/Sources/MediaAnalysis/VideoProcessing/VCPHumanPoseImageRequest.mm[85]: code 18,446,744,073,709,551,598 encountered an unexpected condition: *** -[__NSArrayM insertObject:atIndex:]: object cannot be nil
```

What we've tried:
Debugging: I've tried stepping through the code, but the errors occur before I can gather any meaningful insights.
Searching documentation: Looked through Apple's developer documentation and forums but couldn't find anything related to these specific error codes.
Nil checks: Added checks to ensure objects are not nil before inserting them into arrays, but the error persists.

Here are my questions:
Has anyone encountered similar errors with the Vision framework, specifically related to VCPHumanPoseImageRequest and NSArray operations?
Is there any known issue or bug in the version of the framework I might be using? Could it also be related to the beta?
Are there any additional debug steps or logging mechanisms I can implement to narrow down the cause?
Any suggestions on how to handle nil objects more effectively in this context?

I would greatly appreciate any insights or suggestions you might have. Thank you in advance for your assistance!
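Not an answer to the nil-object part, but the giant error code is probably just a negative 64-bit value logged as an unsigned integer: reinterpreted as signed two's complement it is -18. Which error -18 maps to inside MediaAnalysis is speculation on my part; the arithmetic itself is easy to check:

```python
import struct

logged = 18_446_744_073_709_551_598  # code from the VCPHumanPoseImageRequest log, commas removed

# Reinterpret the unsigned 64-bit value as a signed two's-complement integer.
signed = struct.unpack("<q", struct.pack("<Q", logged))[0]
print(signed)  # -18
```

Searching for the small signed value instead of the 20-digit unsigned one may turn up more relevant results.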
3 replies · 0 boosts · 498 views · Jul ’24