Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
jax-metal fails to install on M1 clean environment
Hi all, I'm having trouble even getting the latest version of jax-metal to install on my M1 MacBook Pro. In a clean conda environment, I pip install jax-metal and get:

In [1]: import jax; print(jax.numpy.arange(10))
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
---------------------------------------------------------------------------
XlaRuntimeError                           Traceback (most recent call last)
[... skipping hidden 1 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:977, in _init_backend(platform)
    976 logger.debug("Initializing backend '%s'", platform)
--> 977 backend = registration.factory()
    978 # TODO(skye): consider raising more descriptive errors directly from backend
    979 # factories instead of returning None.
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:666, in register_plugin.<locals>.factory()
    665 if not xla_client.pjrt_plugin_initialized(plugin_name):
--> 666     xla_client.initialize_pjrt_plugin(plugin_name)
    667 updated_options = {}
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jaxlib/xla_client.py:176, in initialize_pjrt_plugin(plugin_name)
    169 """Initializes a PJRT plugin.
    170
    171 The plugin needs to be loaded first (through load_pjrt_plugin_dynamically or (...)
    174     plugin_name: the name of the PJRT plugin.
    175 """
--> 176 _xla.initialize_pjrt_plugin(plugin_name)
XlaRuntimeError: INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51).

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
Cell In[1], line 1
----> 1 import jax; print(jax.numpy.arange(10))
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:2952, in arange(start, stop, step, dtype)
   2950     ceil_ = ufuncs.ceil if isinstance(start, core.Tracer) else np.ceil
   2951     start = ceil_(start).astype(int)  # type: ignore
-> 2952     return lax.iota(dtype, start)
   2953 else:
   2954     if step is None and start == 0 and stop is not None:
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1282, in iota(dtype, size)
   1277 def iota(dtype: DTypeLike, size: int) -> Array:
   1278     """Wraps XLA's `Iota
   1279     <https://www.tensorflow.org/xla/operation_semantics#iota>`_
   1280     operator.
   1281     """
-> 1282     return broadcasted_iota(dtype, (size,), 0)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1292, in broadcasted_iota(dtype, shape, dimension)
   1289 static_shape = [None if isinstance(d, core.Tracer) else d for d in shape]
   1290 dimension = core.concrete_or_error(
   1291     int, dimension, "dimension argument of lax.broadcasted_iota")
-> 1292 return iota_p.bind(*dynamic_shape, dtype=dtype, shape=tuple(static_shape),
   1293                    dimension=dimension)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:387, in Primitive.bind(self, *args, **params)
    384 def bind(self, *args, **params):
    385     assert (not config.enable_checks.value or
    386             all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 387     return self.bind_with_trace(find_top_trace(args), args, params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:391, in Primitive.bind_with_trace(self, trace, args, params)
    389 def bind_with_trace(self, trace, args, params):
    390     with pop_level(trace.level):
--> 391         out = trace.process_primitive(self, map(trace.full_raise, args), params)
    392     return map(full_lower, out) if self.multiple_results else full_lower(out)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:879, in EvalTrace.process_primitive(self, primitive, tracers, params)
    877     return call_impl_with_key_reuse_checks(primitive, primitive.impl, *tracers, **params)
    878 else:
--> 879     return primitive.impl(*tracers, **params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/dispatch.py:86, in apply_primitive(prim, *args, **params)
     84 prev = lib.jax_jit.swap_thread_local_state_disable_jit(False)
     85 try:
---> 86     outs = fun(*args)
     87 finally:
     88     lib.jax_jit.swap_thread_local_state_disable_jit(prev)
[... skipping hidden 17 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:902, in backends()
    900 else:
    901     err_msg += " (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)"
--> 902 raise RuntimeError(err_msg)
    904 assert _default_backend is not None
    905 if not config.jax_platforms.value:
RuntimeError: Unable to initialize backend 'METAL': INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51). (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)

jax.__version__ is 0.4.27.
4 replies · 0 boosts · 1.2k views · May ’24
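The failure in the post above is a version mismatch between the jax-metal PJRT plugin and the installed jax/jaxlib, not a broken install as such. A hedged sketch of two common workarounds, not taken from the post: the JAX_PLATFORMS=cpu fallback is quoted in the error itself, while ENABLE_PJRT_COMPATIBILITY is mentioned in Apple's jax-metal release notes and should be verified against the current documentation before relying on it.

```python
# Sketch only: work around the PJRT API version mismatch reported above.
# Assumption: the installed jax-metal honors ENABLE_PJRT_COMPATIBILITY;
# check Apple's jax-metal notes for the versions this applies to.
import os

# Option 1: ask the Metal plugin to accept a newer framework PJRT API version.
os.environ["ENABLE_PJRT_COMPATIBILITY"] = "1"

# Option 2 (fallback suggested by the error message): skip the Metal backend.
# os.environ["JAX_PLATFORMS"] = "cpu"

import jax  # the environment variables must be set before this import

print(jax.devices())
print(jax.numpy.arange(10))
```

Pinning jax and jaxlib to the versions a given jax-metal release was built against is the other usual fix; the exact version pairing changes per release, so it is not reproduced here.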
Cannot assign a device for operation RandomUniform on M3 MacBook Pro (macOS 14.4.1)
Cannot assign a device for operation encoder/down1/downs_0/conv1/weight/Initializer/random_uniform/RandomUniform: Could not satisfy explicit device specification '' because the node {{colocation_node encoder/down1/downs_0/conv1/weight/Initializer/random_uniform/RandomUniform}} was colocated with a group of nodes that required incompatible device '/device:GPU:0'. All available devices: [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].

Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[])
Identity: GPU CPU
Mul: GPU CPU
AddV2: GPU CPU
Sub: GPU CPU
RandomUniform: GPU CPU
Assign: CPU
VariableV2: GPU CPU
Const: GPU CPU
0 replies · 0 boosts · 494 views · May ’24
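The colocation error above appears when an op pinned to /device:GPU:0 sits in a group containing an op (here the variable Assign) that the tensorflow-metal GPU device cannot run. A minimal sketch of the usual mitigation, assuming the standard TensorFlow 2 API and that CPU fallback is acceptable for those ops; this is not from the original post.

```python
# Sketch: let TensorFlow fall back to CPU for ops the Metal GPU device
# cannot host, instead of failing with a colocation error.
import tensorflow as tf

tf.config.set_soft_device_placement(True)

# Optional: confirm which devices tensorflow-metal actually exposes.
print(tf.config.list_physical_devices())
```

If the model is built from TF1-style variables (VariableV2/Assign, as in the log), migrating to tf.Variable / Keras layers may also avoid the CPU-only Assign op entirely.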
The CoreML runtime is inconsistent.
for (int i = 0; i < 1000; i++) {
    double st_tmp = CFAbsoluteTimeGetCurrent();
    retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
    double et_tmp = CFAbsoluteTimeGetCurrent();
    NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
}

When I run a CoreML model using the above code, I notice that the runtime gradually decreases at the beginning.

output:
[enhance once] 14.965057 ms
[enhance once] 12.727022 ms
[enhance once] 12.818098 ms
[enhance once] 11.829972 ms
[enhance once] 11.461020 ms
[enhance once] 10.949016 ms
[enhance once] 10.712981 ms
[enhance once] 10.367990 ms
[enhance once] 10.077000 ms
[enhance once] 9.699941 ms
[enhance once] 9.370089 ms
[enhance once] 8.634090 ms
[enhance once] 7.659078 ms
[enhance once] 7.061005 ms
[enhance once] 6.729007 ms
[enhance once] 6.603003 ms
[enhance once] 6.427050 ms
[enhance once] 6.376028 ms
[enhance once] 6.509066 ms
[enhance once] 6.452084 ms
[enhance once] 6.549001 ms
[enhance once] 6.616950 ms
[enhance once] 6.471038 ms
[enhance once] 6.462932 ms
[enhance once] 6.443977 ms
[enhance once] 6.683946 ms
[enhance once] 6.538987 ms
[enhance once] 6.628990 ms
...

In most deep learning inference frameworks there is usually a warmup process, but typically only the first inference is slower. Why does CoreML show a gradually decreasing runtime at the beginning? Is there a way to make only the first inference slow while keeping the rest consistent? I use the CoreML model in the (void)display_pixels:(IJKOverlay *)overlay function.
1 reply · 1 boost · 648 views · May ’24
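One way to separate the slow ramp-up from steady-state latency is to run a fixed number of untimed warm-up predictions before measuring. The Objective-C loop in the post is the authoritative version; the following is only a hedged Python/coremltools analogue for benchmarking the same model on a Mac, with the model path, input name, and input shape as placeholders rather than the poster's actual values.

```python
# Sketch: warm up a Core ML model before timing it, so steady-state latency
# is measured separately from the first (slower) predictions.
# "Model.mlpackage" and "input" are hypothetical placeholders.
import time
import numpy as np
import coremltools as ct

model = ct.models.MLModel("Model.mlpackage")
x = {"input": np.random.rand(1, 3, 256, 256).astype(np.float32)}

for _ in range(20):            # untimed warm-up runs
    model.predict(x)

timings = []
for _ in range(100):           # timed steady-state runs
    start = time.perf_counter()
    model.predict(x)
    timings.append((time.perf_counter() - start) * 1000)

print(f"median latency: {np.median(timings):.2f} ms")
```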
Loading CoreML model increases app size?
Hi, I have been noticing some strange issues with using CoreML models in my app. I am using the whisper.cpp implementation, which has a CoreML option. This speeds up transcribing compared to Metal. However, every time I use it, the app size shown in iPhone Settings -> General -> Storage increases, specifically the "Documents and Data" part; the bundle size stays consistent. The app size seems to increase by the size of the CoreML model, and after a few reloads it can grow to over 3-4 GB! I thought that maybe the CoreML model (which is in the bundle) is being saved to a file, but I can't see where. I have tried Instruments and Xcode, plus lots of printing out of the cache and temp directories, deleting the caches, etc., but with no effect. I downloaded the app's container from Xcode and inspected it: there are some files stored in the cache, but only a few KB, and even though Settings -> Storage shows a few GB, the container is only a few MB. Can someone please help or give me some guidance on how to figure out why Documents and Data keeps increasing? Where could this folder be pointing that is not in the container downloaded from Xcode? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both show the same behaviour when using CoreML. Thanks in advance for any help; I am totally baffled by this behaviour.
6 replies · 3 boosts · 1.1k views · May ’24
jax installation
I followed the instructions at https://developer.apple.com/metal/jax/ and got:

Successfully installed importlib-metadata-7.1.0 jax-0.4.28 jax-metal-0.0.7 jaxlib-0.4.28 opt-einsum-3.3.0 scipy-1.13.0 six-1.16.0 zipp-3.18.2

but the test failed:

python -c 'import jax; print(jax.numpy.arange(10))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/__init__.py", line 37, in <module>
    import jax.core as _core
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/core.py", line 18, in <module>
    from jax._src.core import (
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/core.py", line 39, in <module>
    from jax._src import dtypes
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/dtypes.py", line 33, in <module>
    from jax._src import config
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/config.py", line 27, in <module>
    from jax._src import lib
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/lib/__init__.py", line 84, in <module>
    cpu_feature_guard.check_cpu_features()
RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. You may be able work around this issue by building jaxlib from source.
0 replies · 0 boosts · 873 views · May ’24
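An AVX error like the one above usually means the Python interpreter itself is an x86_64 build running under Rosetta, so pip resolves an Intel jaxlib wheel that requires AVX. A small standard-library check (not from the original post) makes this easy to confirm before reinstalling anything:

```python
# Sketch: verify that this Python is a native arm64 build.
# If this prints "x86_64" on an Apple Silicon Mac, the environment is running
# under Rosetta and pip will install an AVX-built Intel jaxlib wheel.
import platform

print(platform.machine())          # expect "arm64" on Apple Silicon
print(platform.python_version())
```

If it reports x86_64, recreating the virtual environment with a native arm64 Python and reinstalling per the jax-metal instructions is the likely fix.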
Custom Dataset Training With YOLO model on Mac M1 Pro
Hi, I have only recently started working on ML on my Mac M1 Pro; previously I was working on a Windows platform. I am having difficulty getting my machine set up so that it is ready for the super-fast training I was hoping for when I got it. Please help me with this and let me know if and where I am going wrong. I tried custom dataset training using a YOLOv8 model, and I want to train for 100 epochs. The same dataset and hyperparameters take about 2.5 hours on a T4 GPU on Google Colab, whereas I was only at around 60 epochs after 24 hours on my M1 Pro. I have Homebrew, Miniconda, and the PyTorch nightly for Mac installed, and I set the device to mps when training the YOLO model. This feels really slow. What should I be doing differently? Thank you, Lakshmi
0 replies · 1 boost · 671 views · May ’24
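For the training-speed question above, a first step is confirming that PyTorch actually sees the MPS backend and that the training run is really using it rather than falling back to CPU. A hedged sketch, assuming the Ultralytics YOLOv8 API; the dataset YAML name and the image/batch sizes are placeholders, not the poster's configuration:

```python
# Sketch: check MPS availability, then request it explicitly when training.
# "data.yaml", imgsz, and batch are placeholder values.
import torch
from ultralytics import YOLO

print(torch.backends.mps.is_available())   # should be True on Apple Silicon
print(torch.backends.mps.is_built())

model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=640, batch=16, device="mps")
```

Even with MPS active, an M1 Pro GPU is generally expected to train more slowly than a T4, so some gap versus Colab is normal; the check above mainly rules out an accidental CPU run.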
Hundreds of AI models mining and indexing data on macOS.
Hi, this is the third time I'm trying to post this on the forum; the Apple moderators keep ignoring it. I'm a deep learning expert with a specialization in image processing. I want to know why I have hundreds of AI models on my Mac that are indexing everything on my computer while it is idle, using programs like neuralhash that I can't find any information about. I can understand if they are being used to enhance the user experience in Spotlight, Siri, Photos, and other applications, but I couldn't find the necessary information on the web. Usually, (spyware) software like this uses them to classify files in an X/Y coordinate system. This feels like a more advanced version of Stuxnet.

find / -type f -name "*.weights" > ai_models.txt
find / -type f -name "*labels*.txt" > ai_model_labels.txt

Some of the classes from the files:

file_name: SCL_v0.3.1_9c7zcipfrc_558001-labels-v3.txt
document_boarding_pass document_check_or_checkbook document_currency_or_bill document_driving_license document_office_badge document_passport document_receipt document_social_security_number hier_curation hier_document hier_negative curation_meme

file_name: SceneNet5_detection_labels-v8d.txt
CVML_UNKNOWN_999999 aircraft automobile bicycle bird bottle bus canine consumer_electronics feline fruit furniture headgear kite fish computer_monitor motorcycle musical_instrument document people food sign watersport train ungulates watercraft flower appliance sports_equipment tool
4 replies · 2 boosts · 1.6k views · Jun ’24
PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to CoreML using coremltools version 6.2. The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU), which demonstrates the outputs of the PyTorch model (on the left) and the CoreML model (on the right). Could you please confirm whether this drop in accuracy is expected, and suggest any possible solutions to address the issue? Please note that all preprocessing and post-processing techniques remain consistent between the models.

P.S. While converting I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the CoreML model on iOS 17.0, we get these errors:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
2 replies · 0 boosts · 1.3k views · Jun ’24
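Accuracy drops after conversion are often related to the default FP16 compute precision of ML Program models, or to tracing with a non-representative example input (the TracerWarning in the post hints at shape-dependent control flow being frozen into the trace). A hedged sketch of a conversion that forces FP32 to rule out precision as the cause, assuming coremltools with ML Program support; the stand-in module, input shape, and file names are placeholders, not the poster's model.

```python
# Sketch: convert a traced PyTorch model to Core ML with FP32 compute
# precision, to test whether FP16 rounding explains an accuracy drop.
# TinyPoseNet and the (1, 3, 256, 192) input shape are placeholders.
import torch
import torch.nn as nn
import coremltools as ct

class TinyPoseNet(nn.Module):
    """Stand-in for the actual ViT-based pose estimator."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 17, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyPoseNet().eval()
example = torch.rand(1, 3, 256, 192)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example.shape)],
    compute_precision=ct.precision.FLOAT32,  # default for mlprogram is FLOAT16
    convert_to="mlprogram",
)
mlmodel.save("PoseModel.mlpackage")
```

Comparing the FP32 and FP16 packages on the same inputs (for example with predict() on a Mac) should show whether the drop comes from precision or from the trace itself.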
Shortcuts App Intent Only for Active Subscribers
I have a Shortcuts action via an App Intent that I want only active subscribers to be able to use. I have a shared class that handles all the subscription-related things. But for some reason my code only works if the app is active in the background. Once the app is quit and the user performs the Shortcut, the not-subscribed error is thrown, even though the user is subscribed. How can I ensure that my subscription check is done correctly if the app isn't open in the background?

My Code

App Intent excerpt:

@MainActor
func perform() async throws -> some IntentResult & ReturnsValue<MeterIntentEntity> {
    // Validate that the user is subscribed.
    // Cancels action with error message if not subscribed.
    if SubscriptionManager.shared.userIsSubscribed == false {
        throw IntentError.notSubscribed
    }

    // More Code …

    // Finish and pass created value as result.
    return .result(value: something)
}

Subscription Manager excerpt:

class SubscriptionManager: ObservableObject {
    // A singleton for our entire app to use
    static let shared = SubscriptionManager()

    let productIds = ["my_sub1", "my_sub2"]

    @Published private(set) var availableSubscriptions: [Product]
    @Published private(set) var purchasedSubscriptions: [Product] = []

    public var userIsSubscribed: Bool {
        return !self.purchasedSubscriptions.isEmpty
    }

    init() {
        // Initialize empty products, and then do a product request asynchronously to fill them in.
        availableSubscriptions = []
        Task {
            await updatePurchasedProducts()
        }
    }

    @MainActor
    func updatePurchasedProducts() async {
        for await result in Transaction.currentEntitlements {
            do {
                let transaction = try checkVerified(result)
                if let subscription = availableSubscriptions.first(where: { $0.id == transaction.productID }) {
                    purchasedSubscriptions.append(subscription)
                }
            } catch {
                Logger.subscription.error("Error loading user's purchased products.")
            }
        }
    }
1 reply · 0 boosts · 569 views · Jun ’24
"accelerate everyday tasks" in apps without intents?
From https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/:

"Powered by Apple Intelligence, Siri becomes more deeply integrated into the system experience. With richer language-understanding capabilities, Siri is more natural, more contextually relevant, and more personal, with the ability to simplify and accelerate everyday tasks."

From https://developer.apple.com/apple-intelligence/:

"Siri is more natural, more personal, and more deeply integrated into the system. Apple Intelligence provides Siri with enhanced action capabilities, and developers can take advantage of pre-defined and pre-trained App Intents across a range of domains to not only give Siri the ability to take actions in your app, but to make your app's actions more discoverable in places like Spotlight, the Shortcuts app, Control Center, and more. SiriKit adopters will benefit from Siri's enhanced conversational capabilities with no additional work. And with App Entities, Siri can understand content from your app and provide users with information from your app from anywhere in the system."

Based on this, as well as the video at https://developer.apple.com/videos/play/wwdc2024/10133/, my understanding is that in order for Siri to be able to execute tasks in applications, those applications must implement the Siri Intents API. Can someone at Apple please clarify: will it be possible for Siri or some other aspect of Apple Intelligence / Core ML / Create ML to take actions in applications which do not support these APIs (e.g. web apps, Citrix apps, legacy apps)? Thank you!
2 replies · 1 boost · 727 views · Jun ’24
AppIntentVocabulary (INPlayMediaIntent) is unstable.
I am developing an iOS app that supports INPlayMediaIntent. We are trying to increase the recognition rate of content names, which are song titles, using AppIntentVocabulary. As a sample, some extracts are shown below.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IntentPhrases</key>
    <array>
        <dict>
            <key>IntentName</key>
            <string>INPlayMediaIntent</string>
            <key>IntentExamples</key>
            <array>
                <string>Mezamashi Appで湖畔の朝を再生</string>
                <string>湖畔の朝をMezamashi Appで再生して</string>
            </array>
        </dict>
    </array>
    <key>ParameterVocabularies</key>
    <array>
        <dict>
            <key>ParameterNames</key>
            <array>
                <string>INPlayMediaIntent.playlistTitle</string>
            </array>
            <key>ParameterVocabulary</key>
            <array>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID1</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>aogamagaeru</string>
                            <key>VocabularyItemPhrase</key>
                            <string>青ガマガエル</string>
                        </dict>
                    </array>
                </dict>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID2</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>kohon no asa</string>
                            <key>VocabularyItemPhrase</key>
                            <string>湖畔の朝</string>
                        </dict>
                    </array>
                </dict>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>ID3</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPronunciation</key>
                            <string>kumageratachi no uta</string>
                            <key>VocabularyItemPhrase</key>
                            <string>クマゲラたちの歌</string>
                        </dict>
                    </array>
                </dict>
            </array>
        </dict>
    </array>
</dict>
</plist>

When running on the iOS 17.5 simulator in Xcode 15.4, the results are as follows (mediaName = VocabularyItemIdentifier, mediaIdentifier = nil):

<INMediaSearch: 0x6000026212c0> {
    reference = 0;
    mediaType = 0;
    sortOrder = 0;
    albumName = <null>;
    mediaName = ID1;
    genreNames = ( );
    artistName = <null>;
    moodNames = ( );
    releaseDate = <null>;
    mediaIdentifier = <null>;
}

However, when running on an iOS 17.5 device, the following applies (mediaName = VocabularyItemPhrase, mediaIdentifier = VocabularyItemIdentifier):

<INMediaSearch: 0x301efd9e0> {
    reference = 0;
    mediaType = 5;
    sortOrder = 0;
    albumName = <null>;
    mediaName = 青ガマガエル;
    genreNames = ( );
    artistName = <null>;
    moodNames = ( );
    releaseDate = <null>;
    mediaIdentifier = ID1;
}

The results are not stable; for example, sometimes everything else returns null. I have tried everything, but it is just taking a long time. Does anyone have any advice on this?
1 reply · 0 boosts · 494 views · Jun ’24
"Error: Intent of type INStartCallIntent is not supported for this app category"
I am trying to make a VoIP CarPlay app using Siri.

let assistant = CPAssistantCellConfiguration(position: .top, visibility: .always, assistantAction: .startCall)
let siriTmeplate = CPListTemplate(title: "Siri", sections: [sectionItems, loadingSection], assistantCellConfiguration: assistant)
siriTmeplate.tabSystemItem = .recents
siriTmeplate.showsTabBadge = false

Using the above code gives me the error "Error: Intent of type INStartCallIntent is not supported for this app category" on app launch. I have INStartCallIntent in my app's Info.plist, I have all the entitlements, and I have "business" as the app category, yet I can find no help online with this. What does this error really mean, and how can I fix it, please?
2 replies · 0 boosts · 629 views · Jun ’24
How to add support for Siri / Apple Intelligence to my existing AppEntity?
iOS 18 adds a specific macro for exposing your search app intent, app entities, etc. to Siri, but how are you meant to add it to your existing objects without removing them entirely for users on earlier than iOS 18? For example, I get the following error: AssistantIntent(schema:) is only available in iOS 18 or newer. Add @available attribute to enclosing struct. I don't want to do that, since I still want to support iOS 17 users with my existing shortcuts. Do I need to duplicate my entire shortcuts model to add the new macro?
1 reply · 0 boosts · 985 views · Jun ’24