I've been attempting to install tensorflow-metal on my machine so that TensorFlow can use the GPU instead of the CPU. I already have tensorflow-macos installed, and pip and TensorFlow are fully up to date. I'm two months into building and training a TensorFlow CNN, and I'm at the point where training a single epoch takes a week (I have a lot of data I need to use). I desperately need the GPU but am stuck on the CPU for now. I can't get access to a cluster, so the best I can do is keep using my M2 MacBook. Is there any other way to install tensorflow-metal? Failing that, is there any way to use the GPU (rather than the CPU) with TensorFlow?
I keep getting this error message:
"ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none) ERROR: No matching distribution found for tensorflow-metal"
I looked on the Apple forums, tried to download it from GitHub (the page is down), and tried everything else I could think of or find on the internet, but it still isn't installing.
I've used the following commands and still no luck:
python -m pip install tensorflow-metal
pip install https://github.com/apple/tensorflow_metal/releases/download/v0.5.0/tensorflow_metal-0.5.0-py3-none-any.whl
pip install tensorflow-metal
pip3 install tensorflow-metal
SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-metal
SYSTEM_VERSION_COMPAT=0 pip install tensorflow-macos tensorflow-metal
conda install -c anaconda tensorflow-gpu
Any help would be appreciated! Thanks so much!
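One thing worth ruling out first (an assumption on my part, since "from versions: none" usually means pip found no compatible wheel at all): tensorflow-metal ships arm64-only wheels for specific Python versions, so a Python interpreter running under Rosetta as x86_64, or an unsupported Python release, would produce exactly this error. A quick check:
import platform, sys
# tensorflow-metal wheels are arm64-only; under Rosetta this prints 'x86_64'
print(platform.machine())
# wheels also exist only for certain CPython releases, so note the version
print(sys.version)
If this prints x86_64, installing an arm64 (Apple silicon) Python build and recreating the virtual environment may be all that's needed.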
I am searching for a method to remove the background from a video. The source can be a camera session's fileOutput URL or a video from the photo library.
I was able to get a live preview of the removed background using the depth data and some Metal code from the sample Enhancing Live Video by Leveraging TrueDepth Camera Data. However, I couldn't figure out a way to save the result as a video so that I can upload it.
This method is also using over 150% CPU (per Xcode's CPU gauge), which seems like a lot; the device heats up fast and drops frames once it's hot.
I also found something similar, a Core ML example by Dmitry Voitekh on GitHub, which uses less than 40% CPU.
Any information regarding this will be helpful.
Objective: remove the background from a video and save the result.
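For the saved-video case, one possible direction, sketched here under the assumption that a person mask is what's wanted: Vision's VNGeneratePersonSegmentationRequest produces a per-frame mask without TrueDepth data, and its .balanced or .fast quality levels tend to be much cheaper than the depth pipeline. Per-frame masking might look roughly like this:
import Vision
import CoreImage

// Sketch: mask one frame of video against a transparent background.
func maskedFrame(from pixelBuffer: CVPixelBuffer) throws -> CIImage? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced // .fast trades accuracy for less CPU/GPU
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let source = CIImage(cvPixelBuffer: pixelBuffer)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // The mask buffer is smaller than the frame; scale it up to match.
    let scaleX = source.extent.width / mask.extent.width
    let scaleY = source.extent.height / mask.extent.height
    mask = mask.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    return source.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputMaskImageKey: mask,
        kCIInputBackgroundImageKey: CIImage.empty()
    ])
}
To save the result, you could read frames with AVAssetReader, run each through something like the above, and append the rendered buffers to an AVAssetWriter; since this runs offline rather than per camera frame, the sustained CPU load and thermals matter less.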
I am opening the Siri shortcut screen from the viewDidLoad method, as follows:
override func viewDidLoad() {
    super.viewDidLoad()
    // Present the Siri Shortcut screen to add Card Payment Intent
    let viewController = INUIAddVoiceShortcutViewController(shortcut: INShortcut(intent: self.cardPaymentIntent)!)
    viewController.modalPresentationStyle = .pageSheet
    // Setting Delegate
    viewController.delegate = self
    self.present(viewController, animated: true, completion: nil)
}
// Delegate method conformance :: INUIAddVoiceShortcutViewControllerDelegate
@available(iOS 12.0, *)
func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController, didFinishWith voiceShortcut: INVoiceShortcut?, error: Error?) {
    controller.dismiss(animated: true, completion: nil)
    // The issue is here: whether we add the shortcut or dismiss the screen without adding it, this delegate gets called.
}

@available(iOS 12.0, *)
func addVoiceShortcutViewControllerDidCancel(_ controller: INUIAddVoiceShortcutViewController) {
    controller.dismiss(animated: true, completion: nil)
}

// Card Payment intent
public var cardPaymentIntent: CardPaymentIntent {
    let intent = CardPaymentIntent()
    intent.suggestedInvocationPhrase = NSLocalizedString("Pay my credit card", comment: "")
    return intent
}
Whenever I present the Siri shortcut screen, whether I add the shortcut or dismiss the screen without adding it, the shortcut ends up added in both cases, and this method is called every time:
func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController, didFinishWith voiceShortcut: INVoiceShortcut?, error: Error?)
Any solution? When I dismiss the screen without adding, I don't want the shortcut to be added.
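A possible distinction (a sketch based on the delegate's signature rather than confirmed behavior): voiceShortcut should be non-nil only when the user actually added the shortcut, so branching on it may separate the two outcomes:
@available(iOS 12.0, *)
func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController, didFinishWith voiceShortcut: INVoiceShortcut?, error: Error?) {
    if let shortcut = voiceShortcut {
        // The user tapped "Add to Siri"; act on `shortcut` here.
        print("Added: \(shortcut.invocationPhrase)")
    } else if let error = error {
        // Adding failed.
        print("Failed to add shortcut: \(error)")
    }
    // A nil shortcut with no error would mean nothing was added.
    controller.dismiss(animated: true, completion: nil)
}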
Xcode 15.3 AppIntentsSSUTraining warning: missing the definition of locale # variables.1.definitions
Hello!
I've noticed that adding localizations for AppShortcuts triggers the following warnings in Xcode 15.3:
warning: missing the definition of zh-Hans # variables.1.definitions
warning: missing the definition of zh-Hans # variables.2.definitions
This occurs with both legacy strings files and String Catalogs.
Example project: https://github.com/gongzhang/AppShortcutsLocalizationWarningExample
I'm trying to create an App Shortcut so that users can interact with one of my app's features using Siri. I would like to be able to turn this shortcut on or off at runtime using a feature toggle.
Ideally, I would be able to do something like this.
struct MyShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // This shortcut is always available
        AppShortcut(
            intent: AlwaysAvailableIntent(),
            phrases: ["Show my always available intent with \(.applicationName)"],
            shortTitle: "Always Available Intent",
            systemImageName: "infinity"
        )

        // This shortcut is only available when "myCoolFeature" is available
        if FeatureProvider.shared.isAvailable("myCoolFeature") {
            AppShortcut(
                intent: MyCoolFeatureIntent(),
                phrases: ["Show my cool feature in \(.applicationName)"],
                shortTitle: "My Cool Feature Intent",
                systemImageName: "questionmark"
            )
        }
    }
}
However, this does not work because the existing buildOptional implementation is limited to components of type (any _AppShortcutsContentMarker & _LimitedAvailabilityAppShortcutsContentMarker)?.
All my other attempts at making appShortcuts dynamic have resulted in shortcuts not working at all. I've tried:
Creating a makeAppShortcuts method that returns [AppShortcut] and invoking it from within appShortcuts
Extending AppShortcutsBuilder to support a buildOptional block that isn't restricted to a component type of (any _AppShortcutsContentMarker & _LimitedAvailabilityAppShortcutsContentMarker)?
Extending AppShortcutsBuilder to support buildArray and using compactMap(_:) to return an empty array when the feature is disabled
I haven't used SiriKit before but it appears that shortcut suggestions were set at runtime by invoking setShortcutSuggestions(_:), meaning that what I'm trying to do would be possible. I'm not against using SiriKit if I have to but my understanding is that the App Intents framework is meant to be a replacement for SiriKit, and the prompt within Xcode to replace older custom intents with App Intents indicates that that is indeed the case.
Is there something obvious that I'm just missing or is this simply not possible with the App Intent framework? Is the App Intent framework not meant to replace SiriKit and should I just use that instead?
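One workaround, sketched here under the assumption that the shortcut list itself has to stay static: keep the shortcut always registered and gate the behavior inside the intent's perform() instead, returning a dialog when the feature is off (FeatureProvider is the flag service from the snippet above):
struct MyCoolFeatureIntent: AppIntent {
    static var title: LocalizedStringResource = "My Cool Feature"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Runtime gate: the shortcut is always registered with Siri,
        // but the feature flag decides what actually happens.
        guard FeatureProvider.shared.isAvailable("myCoolFeature") else {
            return .result(dialog: "My cool feature isn't available right now.")
        }
        // ...do the actual work here...
        return .result(dialog: "Done!")
    }
}
This trades a hidden shortcut for a graceful refusal, which may or may not be acceptable for your use case.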
Hi,
I am working on creating an EntityPropertyQuery for my app entity. I want the user to be able to use Shortcuts to search by a property of a related entity, but I'm struggling with what the syntax for that looks like.
I know the documentation for EntityPropertyQuery suggests this should be possible with the QueryProperty initializer that takes an entityProvider, but I can't figure out how it works.
For example, my CJPersonAppEntity has emails, of type CJEmailAppEntity, which has a property emailAddress. I want the user to be able to find a person by looking up an email address.
When I try to provide this as a Property to filter by inside CJPersonAppEntityQuery, I get a syntax error:
static var properties = QueryProperties {
    Property(\CJPersonEmailAppEntity.$emailAddress, entityProvider: { person in
        person.emails // error
    }) {
        EqualToComparator { NSPredicate(format: "emailAddress == %@", $0) }
        ContainsComparator { NSPredicate(format: "emailAddress CONTAINS %@", $0) }
    }
}
The error says "Cannot convert value of type '[CJPersonEmailAppEntity]' to closure result type 'CJPersonEmailAppEntity'"
So it's not expecting an array, but an individual email item. But how do I provide that without running the predicate query that's specified in the closure?
So I tried something like this, just returning something without worrying about correctness:
Property(\CJPersonEmailAppEntity.$emailAddress, entityProvider: { person in
    person.emails.first ?? CJPersonEmailAppEntity() // satisfy compiler
}) {
    EqualToComparator { NSPredicate(format: "emailAddress == %@", $0) }
    ContainsComparator { NSPredicate(format: "emailAddress CONTAINS %@", $0) }
}
It built, but then failed on the 'Extracting app intents metadata' build step:
error: Entity CJPersonAppEntity does not contain a property named emailAddress. Ensure that the property is wrapped with an @Property property wrapper
So I'm not sure what the correct syntax for handling this case is, and I can't find any other examples of how it's done. I'd love some feedback on this.
Hi,
I am working with App Intents and have created an OpenIntent with a target that uses a MyAppContact AppEntity I created. This works fine when run from Shortcuts, because it pops up a list of options from the suggestedEntities method. But it doesn't work well with Siri: it invokes the AppIntent, but keeps repeatedly asking for the value of the target entity, which you can't really pass in by voice.
What's the workaround here? Can an OpenIntent be activated by voice as well?
Hi,
I have a 128GB 2024 M2 iPad Air. I upgraded to iPadOS 18 beta 3, but I'm not seeing the option in Settings to join the waitlist for Apple Intelligence. Is this an option they're gradually rolling out to devices? I know my iPad is eligible.
The documentation for translationTask(source:target:action:) says it should translate when content appears, but this isn't happening. I'm only able to translate when I manually associate the task with a configuration and instantiate that configuration.
Here’s the complete source code:
import SwiftUI
import Translation

struct ContentView: View {
    @State private var originalText = "The orange fox jumps over the lazy dog"
    @State private var translationTaskResult = ""
    @State private var translationTaskResult2 = ""
    @State private var configuration: TranslationSession.Configuration?

    var body: some View {
        List {
            // THIS DOES NOT WORK
            Section {
                Text(translationTaskResult)
                    .translationTask { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult = response.targetText
                            } catch { print(error) }
                        }
                    }
            }

            // THIS WORKS
            Section {
                Text(translationTaskResult2)
                    .translationTask(configuration) { session in
                        Task { @MainActor in
                            do {
                                let response = try await session.translate(originalText)
                                translationTaskResult2 = response.targetText
                            } catch { print(error) }
                        }
                    }
                Button(action: {
                    if configuration == nil {
                        configuration = TranslationSession.Configuration()
                        return
                    }
                    configuration?.invalidate()
                }) { Text("Translate") }
            }
        }
    }
}
How can I automatically translate a given text when it appears using the new translationTask API?
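One workaround, sketched from the pattern that already works above (it assumes that instantiating the configuration is what kicks off the session): create the configuration in onAppear so the configuration-based task fires without a button tap.
// Sketch: trigger the working configuration-based task automatically.
Text(translationTaskResult2)
    .translationTask(configuration) { session in
        // same translation closure as in the working Section above
    }
    .onAppear {
        if configuration == nil {
            configuration = TranslationSession.Configuration()
        }
    }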
Controls' actions use App Intents in iOS 18.
However, when executing App Intents from Controls, alert dialogs such as IntentDialog can't be used.
This makes it unclear how to display errors when trying to create Controls that do not launch the app.
Is there any way to handle this?
My application supports both iOS and iPadOS. I would like to add App Intents only on iOS and not on iPadOS. I have not found anything written about this. Is it possible? If so, how can we do it?
If not, what is the best practice to avoid showing Siri shortcuts on iPad?
Hello everybody,
I am running into an error with BNNS.NormalizationLayer. It appears to work only with .vector; matrix shapes throw layerApplyFail during training. Inference doesn't throw, but the output stays the same.
How do I correctly use BNNS.NormalizationLayer with matrix shapes? How do I debug the layerApplyFail exception?
Thanks
let array: [Float32] = [
01, 02, 03, 04, 05, 06,
07, 08, 09, 10, 11, 12,
13, 14, 15, 16, 17, 18,
]
// let inputShape: BNNS.Shape = .vector(6 * 3) // works
let inputShape: BNNS.Shape = .matrixColumnMajor(6, 3)
let input = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)
let output = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)
let beta = BNNSNDArrayDescriptor.allocate(repeating: Float32(0), shape: inputShape, batchSize: 1)
let gamma = BNNSNDArrayDescriptor.allocate(repeating: Float32(1), shape: inputShape, batchSize: 1)
let activation: BNNS.ActivationFunction = .identity
let layer = BNNS.NormalizationLayer(type: .layer(normalizationAxis: 0), input: input, output: output, beta: beta, gamma: gamma, epsilon: 1e-12, activation: activation)!
let layerInput = BNNSNDArrayDescriptor.allocate(initializingFrom: array, shape: inputShape)
let layerOutput = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)
// try layer.apply(batchSize: 1, input: layerInput, output: layerOutput, for: .inference) // No throw
try layer.apply(batchSize: 1, input: layerInput, output: layerOutput, for: .training)
_ = layerOutput.makeArray(of: Float32.self) // All zeros when .inference
Hi there, I'm a Computer Science student with a 2019 MacBook Pro, and I'm thinking of buying a new Mac, either a Mac Studio or a MacBook Pro, to use for ML.
I'm currently building a segmentation model, and I'm wondering if I could use Core ML or the Apple Neural Engine in the new M3 chips to train it. Right now I'm using Colab and TensorFlow to create the model, but it's not doing the job; I keep running out of CUDA memory.
Thanks :)
Hi, all.
I've been writing various computational functions using Metal.
However, in the following kernel, unlike with + and *, there is an accuracy issue with the / operation.
This is a function that divides a matrix of shape [n, x, y] by a scalar of shape [1].
Compared to numpy or torch, if I change the operator in the function below to * or +, I get exactly the same results; but with /, the mean differs by more than 1e-5.
(For reference, this was written with reference to the Metal kernel code in llama.cpp.)
kernel void kernel_div_single_f16(
        device const half * src0,
        device const half * src1,
        device       half * dst,
        constant  int64_t & ne00,
        constant  int64_t & ne01,
        constant  int64_t & ne02,
        constant  int64_t & ne03,
        uint3 tgpig[[threadgroup_position_in_grid]],
        uint3 tpitg[[thread_position_in_threadgroup]],
        uint3 ntg[[threads_per_threadgroup]]) {
    const int64_t i03 = tgpig.z;
    const int64_t i02 = tgpig.y;
    const int64_t i01 = tgpig.x;
    const uint offset = i03*ne02*ne01*ne00 + i02*ne01*ne00 + i01*ne00;
    for (int i0 = tpitg.x; i0 < ne00; i0 += ntg.x) {
        dst[offset + i0] = src0[offset + i0] / *src1;
    }
}
My machine is a MacBook Pro (16-inch, 2021) / macOS 12.5 / Apple M1 Pro.
Are there any known issues with division? Thanks in advance for your reply.
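One likely contributor, offered as an assumption about numerics rather than a confirmed Metal bug: half has an 11-bit significand (roughly 3 decimal digits), so a relative error on the order of 5e-4 per element is expected, and fast-math may lower / to a reciprocal approximation, while + and * stay correctly rounded. Doing the division in float and rounding to half once often tightens the match:
// Sketch: divide in float precision, then round to half a single time.
dst[offset + i0] = half(float(src0[offset + i0]) / float(*src1));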
"Last year, I upgraded to an M2 Max laptop, expecting that tensorflow-metal would facilitate effective local prototyping utilizing the Apple Silicon's capabilities.
It has been quite some time since tensorflow-metal was last updated, and there appear to be several unresolved issues noted by the community here. I've personally observed the following behavior with my setup:
Without tensorflow-metal:
import tensorflow as tf

for _ in range(10):
    print(tf.random.normal((3,)).numpy())
[-1.4213976 0.08230731 -1.1260201 ]
[ 1.2913705 -0.47693467 -1.2886043 ]
[ 0.09144169 -1.0892165 0.9313669 ]
[ 1.1081179 0.9865657 -1.0298151]
[ 0.03328908 -0.00655857 -0.02662632]
[-1.002391 -1.1873596 -1.1168724]
[-1.2135247 -1.2823236 -1.0396363]
[-0.03492929 -0.9228362 0.19147137]
[-0.59353966 0.502279 0.80000925]
[-0.82247525 -0.13076428 0.99579334]
With tensorflow-metal:
import tensorflow as tf

for _ in range(10):
    print(tf.random.normal((3,)).numpy())
[ 1.0031303 0.8095635 -0.0610961]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
[-1.3544159 0.7045493 0.03666191]
Given these observations, it seems there may be an issue with the randomness of tf.random.normal when using tensorflow-metal.
My current setup includes macOS 14.5, tensorflow 2.14.1, and tensorflow-macos 2.14.1. I am interested in understanding if there are known solutions or workarounds for this behavior.
Furthermore, could anyone provide an update on whether tensorflow-metal is still being actively developed, or if alternative approaches are recommended for utilizing the GPU capabilities of this hardware?
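One stopgap, a sketch rather than a confirmed fix: pin the random ops to the CPU device, where the output above shows the RNG behaving correctly, and keep the rest of the model on the GPU.
import tensorflow as tf

# Sketch: generate random tensors on the CPU to sidestep the repeated
# values seen on the Metal device.
with tf.device("/CPU:0"):
    for _ in range(10):
        print(tf.random.normal((3,)).numpy())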
We're using App Intents to launch and control our app via Siri. Siri's responses have been fairly random: some with a "Done" popup, others with a verbal confirmation, others saying "I'm sorry, there's been a problem". The latter is bogus and doesn't look good to potential investors when the app is actually working fine.
There appears to be no way in code that I've been able to find so far to tell Siri to stay quiet and let us handle our own errors.
Otherwise, is there a means to supply Siri with a dictionary of stored messages that could be triggered inside the app?
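A sketch of one way to take ownership of the response (assuming the intents can return a dialog; doWork() here is a hypothetical stand-in for the app's real entry point): returning an explicit dialog from perform() replaces Siri's generic confirmations and error lines with your own wording.
struct LaunchFeatureIntent: AppIntent {
    static var title: LocalizedStringResource = "Launch Feature"

    // Hypothetical app call; stands in for whatever the intent really does.
    private func doWork() throws { }

    func perform() async throws -> some IntentResult & ProvidesDialog {
        do {
            try doWork()
            return .result(dialog: "All set.")
        } catch {
            // Map app errors to your own wording instead of letting
            // Siri report a generic problem.
            return .result(dialog: "Couldn't finish: \(error.localizedDescription)")
        }
    }
}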
Hi,
I have an AppEnum and I'm trying to use a custom SF Symbol as its DisplayRepresentation.Image, but it's not working. I get a blank image when the AppEnum picker appears.
The custom SF Symbol is stored in the target's asset catalog, and it works in the target (e.g. Image(named: "custom_sfsymbol")). The issue occurs only when I try to use it in a DisplayRepresentation:
static var caseDisplayRepresentations: [Self: DisplayRepresentation] = [
    .sample: DisplayRepresentation(title: "sample_title", image: DisplayRepresentation.Image(named: "custom_sfsymbol"))
]
Can we use a custom SF Symbol in a DisplayRepresentation.Image?
I have followed the SoupChef example in migrating custom intents from SiriKit to App Intents. However, we only support one iOS release back, so we can require iOS 17. Thus, I eliminated everything that was strictly for backward compatibility, most notably the SiriKit extension, which required enormous amounts of code to coordinate with the real app and worked poorly anyway.
I tested for example that an NFC tag Automation created in Shortcuts works to execute an AppIntent while the app is backgrounded.
I am now receiving a beta report that indicates someone trying to execute one of our migrated AppIntents from their HomePod is not working, and they say it used to work sometimes (not all the time). I'm sure most such cases used to require the SiriKit Extension in the old SiriKit world. I am terrified that I may need to rebuild that monster once again when the new (to me) AppIntent API seemed so beautiful without it and seemed to work without it. The AppIntent API documentation seems to indicate that SiriKit Extensions are no longer related or required. What is the truth here? Do I need to re-implement everything twice in the SiriKit Extension like a barbarian, or can we live in the new world with AppIntents?
Thank you.
I am very new to App Intents and I am trying to add them to my On Device LLM ChatBot app so my users can get answers to any questions anywhere in iOS.
I have the following code and it is working wonderfully in the Shortcuts app.
import AppIntents

struct AskAi: AppIntent {
    static var openAppWhenRun: Bool = false
    static let title: LocalizedStringResource = "Ask Ai About"
    static let description = "Gets an answer from Ai for your question."

    @Parameter(title: "Question")
    var question: String

    static var parameterSummary: some ParameterSummary {
        Summary("Ask Ai About \(\.$question)")
    }

    @MainActor
    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        let bot: Bot = Bot()
        await bot.respond(to: self.question)
        return .result(
            value: bot.output
        )
    }
}

class AppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: AskAi(),
            phrases: [
                "Ask \(.applicationName) \(\.$question)",
                "Get \(.applicationName) answer for \(\.$question)",
                "Open \(\.$question) using \(.applicationName) ",
                "Using \(.applicationName) get help with \(\.$question)"
            ],
            shortTitle: "Ask Ai",
            systemImageName: "sparkles"
        )
    }
}
I can create a shortcut for this AppIntent, and that allows me to speak the response.
I can invoke my shortcut on iOS 18 beta 1 by the shortcut name I set in the Shortcuts app, and that works.
It does not work at all by just asking Siri any of the phrases I have defined.
The Info.plist has an app name alias defined, just to be sure.
I even added the Siri capability in Xcode-beta.
I also tried using the ProvidesDialog return type.
Whatever I do, the AppIntent is invisible to Siri.
Siri tries to search the web, looks for my app name in Contacts, or errors out about Apple Cash, which has nothing to do with what I asked.
Is there anything else I am missing for setting up iOS AppIntents to work with Siri?
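One possible factor, stated as an assumption to check rather than a diagnosis: as far as I know, parameters interpolated into App Shortcut phrases have to be types Siri can enumerate ahead of time (an AppEnum or AppEntity), so an open-ended String like \(\.$question) may prevent the phrases from ever matching. A sketch of phrases without the free-form parameter; Siri would then prompt for the question through the @Parameter:
phrases: [
    "Ask \(.applicationName)",
    "Ask \(.applicationName) a question"
]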
Enhance Dialogue is not showing for me as an option anywhere. I'm using a 1st-generation Apple TV 4K running the tvOS 18 public beta, a JBL soundbar, and an LG OLED; all have eARC.