Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and learn what's possible for your app.

Post · Replies · Boosts · Views · Activity

Access to sound classification for app running in background
Can my app be given access to SoundAnalysis (the sound classifier built into the next version of macOS, iOS, and watchOS) while it is running in the background on iPhone or Apple Watch? I want to monitor local sounds from an Apple Watch or iPhone and take remote action on out-of-band events (i.e., send an alert to a caregiver if the coughing rate is too high, or if someone has been knocking on the door for more than a minute, etc.).
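For anyone exploring this, here is a minimal sketch of driving the built-in classifier while the app is in the foreground; whether it keeps running in the background is the open question above and likely depends on an audio background mode and per-platform limits. The class name, buffer size, and the caregiver-notification step are illustrative assumptions, not part of the original post.

import AVFoundation
import SoundAnalysis

final class CoughMonitor: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        // Built-in sound classifier; its label set includes sounds such as "cough".
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try analyzer.add(request, withObserver: self)
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
            analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
        }
        try engine.start()
        self.analyzer = analyzer
    }

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // React to high-confidence detections here, e.g. notify a caregiver.
        print("\(top.identifier): \(top.confidence)")
    }
}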
2
0
575
Sep ’21
NLTagger does not enumerate anymore?
I am using NLTagger to tag the lexical classes of words, but it suddenly stopped working. I boiled my code down to the most basic version, but the closure passed to enumerateTags() is never executed. What do I have to change, or what should I try?

for e in sentenceArray {
    let cupcake = "I like you, have a cupcake"
    tagger.string = cupcake
    tagger.enumerateTags(in: cupcake.startIndex..<cupcake.endIndex, unit: .word, scheme: .nameTypeOrLexicalClass) { tag, range in
        print("TAG")
        return true
    }
}
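For comparison, here is a minimal self-contained sketch of the documented usage pattern (not a confirmed diagnosis of the problem above). One thing worth double-checking is that the tagger was created with the same scheme that is later passed to enumerateTags; the sample text and options below are illustrative.

import NaturalLanguage

let text = "I like you, have a cupcake"
let tagger = NLTagger(tagSchemes: [.nameTypeOrLexicalClass])
tagger.string = text

// Skipping punctuation and whitespace keeps the output to real words.
let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .nameTypeOrLexicalClass,
                     options: options) { tag, range in
    print(text[range], tag?.rawValue ?? "none")
    return true
}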
5
1
2.1k
Nov ’21
Getting ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
I am working on the neural network classifier described in the updatable neural network section of coremltools.readme.io (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools 6.2. I converted the model to an mlmodel with .convert only, and the conversion succeeded. However, I then hit an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools package API reference I found this is because the layer type is softmaxND when it should be softmax. The problem is that when I convert the Keras Sequential model to a Core ML model, the layer name and type change, and softmax becomes softmaxND. Has anyone faced this issue?

If I execute builder.inspect_layers(last=4) I get this output:

[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND) Updatable: False Input blobs: ['sequential/dense_1/MatMul'] Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul) Updatable: False Input blobs: ['sequential/dense/Relu'] Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation) Updatable: False Input blobs: ['sequential/dense/MatMul'] Output blobs: ['sequential/dense/Relu']

In make_updatable, when I execute builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity') I get this error:

ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
2
0
1.1k
Apr ’23
Failure of speech recognition when "supportsOnDeviceRecognition" is set to "True".
I am using SFSpeechRecognizer to perform speech recognition, but I am getting the following error:

[SpeechFramework] -[SFSpeechRecognitionTask localSpeechRecognitionClient:speechRecordingDidFail:]_block_invoke Ignoring subsequent local speech recording error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"

Setting requiresOnDeviceRecognition to false works correctly, but it previously worked with true and no error. The value of supportsOnDeviceRecognition was true, so the device reports that it supports on-device recognition. iPad Pro 11-inch, iOS 16.5. Is this expected behavior?
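For context, a minimal sketch of how the two flags relate; the audio setup and task creation are omitted, and the comment about error 1101 is based on reports in threads like this one rather than documentation.

import Speech

func makeRequest(for recognizer: SFSpeechRecognizer) -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    // supportsOnDeviceRecognition only says the capability exists; the request
    // still has to opt in, and error 1101 reportedly appears when the on-device
    // assets are unavailable even though this flag is true.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }
    return request
}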
2
0
2.0k
Jun ’23
SFSpeechRecognitionResult discards previous transcripts with on-device option set to true
Hi everyone, I might need some help with on-device recognition. It seems that the speech recognition task discards whatever it has transcribed once a new sentence starts (or once it believes a new sentence has started) within a single audio session, when requiresOnDeviceRecognition is set to true. This doesn't happen with requiresOnDeviceRecognition set to false. System environment: macOS 14 with Xcode 15, deploying to iOS 17. Thank you all!
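A hedged workaround sketch (not an official fix): keep your own running transcript and commit the previous hypothesis whenever the recognizer appears to start over, so a restarted sentence doesn't wipe what was already captured. The accumulator type and the prefix heuristic are assumptions of mine, not documented behavior.

import Speech

final class TranscriptAccumulator {
    private(set) var committed = ""   // text we have decided to keep
    private var pending = ""          // latest hypothesis, may still be rewritten

    func consume(_ result: SFSpeechRecognitionResult) {
        let text = result.bestTranscription.formattedString
        // If the new hypothesis no longer extends the previous one, assume the
        // recognizer started a fresh sentence and commit what we had so far.
        if !pending.isEmpty && !text.hasPrefix(pending) {
            committed += (committed.isEmpty ? "" : " ") + pending
        }
        pending = text
    }

    var fullText: String {
        [committed, pending].filter { !$0.isEmpty }.joined(separator: " ")
    }
}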
13
4
1.7k
Jun ’23
Siri enters loop of requesting parameter when running AppIntent
I want to add Shortcuts and Siri support using the new AppIntents framework. Running my intent from Shortcuts or Spotlight works fine, as the touch-based UI for disambiguation is shown. However, when I ask Siri to perform this action, she gets into a loop of asking me the question to set the parameter. My AppIntent is implemented as follows:

struct StartSessionIntent: AppIntent {
    static var title: LocalizedStringResource = "start_recording"

    @Parameter(title: "activity", requestValueDialog: IntentDialog("which_activity"))
    var activity: ActivityEntity

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog {
        let activityToSelect: ActivityEntity = self.activity
        guard let selectedActivity = Activity[activityToSelect.name] else {
            return .result(dialog: "activity_not_found")
        }
        ...
        return .result(dialog: "recording_started \(selectedActivity.name.localized())")
    }
}

The ActivityEntity is implemented like this:

struct ActivityEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "activity")

    typealias DefaultQuery = MobilityActivityQuery
    static var defaultQuery: MobilityActivityQuery = MobilityActivityQuery()

    var id: String
    var name: String
    var icon: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(self.name.localized())", image: .init(systemName: self.icon))
    }
}

struct MobilityActivityQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            identifiers.contains(where: { $0 == activity.name }) ? ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon) : nil
        }) ?? []
    }

    func suggestedEntities() async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon)
        }) ?? []
    }
}

Does anyone have an idea what might be causing this and how I can fix this behavior? Thanks in advance.
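One hedged idea, a sketch of a possible cause rather than a confirmed fix: when the parameter is requested by voice, Siri has to match the spoken answer back to an entity, and that matching path can go through a string query. Adding EntityStringQuery conformance to the existing query, so spoken names resolve to entities, may be worth trying; the case-insensitive matching rule below is my own assumption.

extension MobilityActivityQuery: EntityStringQuery {
    // Called when the user answers the parameter prompt by voice or text.
    func entities(matching string: String) async throws -> [ActivityEntity] {
        Activity.all()?.compactMap { activity in
            activity.name.localizedCaseInsensitiveContains(string)
                ? ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon)
                : nil
        } ?? []
    }
}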
3
3
1.4k
Jun ’23
TTS problem iOS 17 beta
I see a lot of crashes on iOS 17 beta regarding some problem of "Text To Speech". Does anybody has a clue why TTS crashes? Anybody else seeing the same problem? Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Subtype: KERN_INVALID_ADDRESS at 0x000000037f729380 Exception Codes: 0x0000000000000001, 0x000000037f729380 VM Region Info: 0x37f729380 is not in any region. Bytes after previous region: 3748828033 Bytes before following region: 52622617728 REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL MALLOC_NANO 280000000-2a0000000 [512.0M] rw-/rwx SM=PRV ---> GAP OF 0xd20000000 BYTES commpage (reserved) fc0000000-1000000000 [ 1.0G] ---/--- SM=NUL ...(unallocated) Termination Reason: SIGNAL 11 Segmentation fault: 11 Terminating Process: exc handler [36389] Triggered by Thread: 9 ..... Thread 9 name: Thread 9 Crashed: 0 libobjc.A.dylib 0x000000019eeff248 objc_retain_x8 + 16 1 AudioToolboxCore 0x00000001b2da9d80 auoop::RenderPipeUser::~RenderPipeUser() + 112 (AUOOPRenderPipePool.mm:400) 2 AudioToolboxCore 0x00000001b2e110b4 -[AUAudioUnit_XPC internalDeallocateRenderResources] + 92 (AUAudioUnit_XPC.mm:904) 3 AVFAudio 0x00000001bfa4cc04 AUInterfaceBaseV3::Uninitialize() + 60 (AUInterface.mm:524) 4 AVFAudio 0x00000001bfa894bc AVAudioEngineGraph::PerformCommand(AUGraphNodeBaseV3&, AVAudioEngineGraph::ENodeCommand, void*, unsigned int) const + 772 (AVAudioEngineGraph.mm:3317) 5 AVFAudio 0x00000001bfa93550 AVAudioEngineGraph::_Uninitialize(NSError**) + 132 (AVAudioEngineGraph.mm:1469) 6 AVFAudio 0x00000001bfa4b50c AVAudioEngineImpl::Stop(NSError**) + 396 (AVAudioEngine.mm:1081) 7 AVFAudio 0x00000001bfa4b094 -[AVAudioEngine stop] + 48 (AVAudioEngine.mm:193) 8 TextToSpeech 0x00000001c70b3c5c __55-[TTSSynthesisProviderAudioEngine renderSpeechRequest:]_block_invoke + 1756 (TTSSynthesisProviderAudioEngine.m:613) 9 libdispatch.dylib 0x00000001ae4b0740 _dispatch_call_block_and_release + 32 (init.c:1519) 10 libdispatch.dylib 0x00000001ae4b2378 _dispatch_client_callout + 20 (object.m:560) 11 libdispatch.dylib 0x00000001ae4b990c _dispatch_lane_serial_drain + 748 (queue.c:3885) 12 libdispatch.dylib 0x00000001ae4ba470 _dispatch_lane_invoke + 432 (queue.c:3976) 13 libdispatch.dylib 0x00000001ae4c5074 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913) 14 libdispatch.dylib 0x00000001ae4c48e8 _dispatch_workloop_worker_thread + 404 (queue.c:6507) ... Thread 9 crashed with ARM Thread State (64-bit): x0: 0x0000000283309360 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x00000002833093c0 x4: 0x00000002833093c0 x5: 0x0000000101737740 x6: 0x0000000000000013 x7: 0x00000000ffffffff x8: 0x0000000283309360 x9: 0x3c788942d067009a x10: 0x0000000101547000 x11: 0x0000000000000000 x12: 0x00000000000007fb x13: 0x00000000000007fd x14: 0x000000001ee24020 x15: 0x0000000000000020 x16: 0x0000b1037f729360 x17: 0x000000037f729360 x18: 0x0000000000000000 x19: 0x0000000000000000 x20: 0x00000001016a8de8 x21: 0x0000000283e21d00 x22: 0x0000000283b3f1f8 x23: 0x0000000283098000 x24: 0x00000001bfb4fc35 x25: 0x00000001bfb4fc43 x26: 0x000000028033a688 x27: 0x0000000280c93090 x28: 0x0000000000000000 fp: 0x000000016fc86490 lr: 0x00000001b2da9d80 sp: 0x000000016fc863e0 pc: 0x000000019eeff248 cpsr: 0x1000 esr: 0x92000006 (Data Abort) byte read Translation fault
21
2
6.8k
Jun ’23
metal 0.5.0: converges; metal 1.0.1: fails to converge
MacBook Pro M2 Max / 64 GB / macOS 13.2.1 (22D68)

import tensorflow as tf

def runMnist(device = '/device:CPU:0'):
    with tf.device(device):
        #tf.config.set_default_device(device)
        mnist = tf.keras.datasets.mnist
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train, x_test = x_train / 255.0, x_test / 255.0
        model = tf.keras.models.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10)
        ])
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
        model.fit(x_train, y_train, epochs=10)

runMnist(device = '/device:CPU:0')
runMnist(device = '/device:GPU:0')
4
1
1.5k
Aug ’23
Crash when trying to use tensorflow in a macOS VM
I created a macOS 14 VM using https://github.com/s-u/macosvm which uses the Virtualization Framework. I want to check if I can use paravirtualized graphics for tensorflow workloads. I followed the steps from https://developer.apple.com/metal/tensorflow-plugin/ but when I run the script from step 4. Verify, I get a segmentation fault (see below). Did anyone try to get this kind of GPU compute in a VM and succeed? /Users/teuf/venv-metal/lib/python3.9/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020 warnings.warn( 2023-11-20 07:41:11.723578: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple Paravirtual device 2023-11-20 07:41:11.723620: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 10.00 GB 2023-11-20 07:41:11.723626: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 0.50 GB 2023-11-20 07:41:11.723700: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2023-11-20 07:41:11.723968: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>) zsh: segmentation fault python3 ./tensorflow-test.py Thread 0 Crashed:: Dispatch queue: metal gpu stream 0 MPSCore 0x1999598f8 MPSDevice::GetMPSLibrary_DoNotUse(MPSLibraryInfo const*) + 92 1 MPSCore 0x19995c544 0x199927000 + 218436 2 MPSCore 0x19995c908 0x199927000 + 219400 3 MetalPerformanceShadersGraph 0x1fb696a58 0x1fb583000 + 1129048 4 MetalPerformanceShadersGraph 0x1fb6f0cc8 0x1fb583000 + 1498312 5 MetalPerformanceShadersGraph 0x1fb6ef2dc 0x1fb583000 + 1491676 6 MetalPerformanceShadersGraph 0x1fb717ea0 0x1fb583000 + 1658528 7 MetalPerformanceShadersGraph 0x1fb717ce4 0x1fb583000 + 1658084 8 MetalPerformanceShadersGraph 0x1fb6edaac 0x1fb583000 + 1485484 9 MetalPerformanceShadersGraph 0x1fb7a85e0 0x1fb583000 + 2250208 10 MetalPerformanceShadersGraph 0x1fb7a79f0 0x1fb583000 + 2247152 11 MetalPerformanceShadersGraph 0x1fb6602b4 0x1fb583000 + 905908 12 MetalPerformanceShadersGraph 0x1fb65f7b0 0x1fb583000 + 903088 13 libmetal_plugin.dylib 0x1156dfdcc invocation function for block in metal_plugin::runMPSGraph(MetalStream*, MPSGraph*, NSDictionary*, NSDictionary*) + 164 14 libdispatch.dylib 0x18e79b910 _dispatch_client_callout + 20 15 libdispatch.dylib 0x18e7aacc4 _dispatch_lane_barrier_sync_invoke_and_complete + 56 16 libmetal_plugin.dylib 0x1156dfd14 metal_plugin::runMPSGraph(MetalStream*, MPSGraph*, NSDictionary*, NSDictionary*) + 108 17 libmetal_plugin.dylib 0x115606634 metal_plugin::MPSStatelessRandomUniformOp<float>::ProduceOutput(metal_plugin::OpKernelContext*, metal_plugin::Tensor*) + 876 18 libmetal_plugin.dylib 0x115607620 metal_plugin::MPSStatelessRandomOpBase::Compute(metal_plugin::OpKernelContext*) + 620 19 libmetal_plugin.dylib 0x1156061f8 void metal_plugin::ComputeOpKernel<metal_plugin::MPSStatelessRandomUniformOp<float>>(void*, TF_OpKernelContext*) + 44 20 libtensorflow_framework.2.dylib 0x10b807354 tensorflow::PluggableDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*) + 148 21 libtensorflow_framework.2.dylib 0x10b7413e0 tensorflow::(anonymous 
namespace)::SingleThreadedExecutorImpl::Run(tensorflow::Executor::Args const&) + 2100 22 libtensorflow_framework.2.dylib 0x10b70b820 tensorflow::FunctionLibraryRuntimeImpl::RunSync(tensorflow::FunctionLibraryRuntime::Options, unsigned long long, absl::lts_20230125::Span<tensorflow::Tensor const>, std::__1::vector<tensorflow::Tensor, std::__1::allocator<tensorflow::Tensor>>*) + 420 23 libtensorflow_framework.2.dylib 0x10b715668 tensorflow::ProcessFunctionLibraryRuntime::RunMultiDeviceSync(tensorflow::FunctionLibraryRuntime::Options const&, unsigned long long, std::__1::vector<std::__1::variant<tensorflow::Tensor, tensorflow::TensorShape>, std::__1::allocator<std::__1::variant<tensorflow::Tensor, tensorflow::TensorShape>>>*, std::__1::function<absl::lts_20230125::Status (tensorflow::ProcessFunctionLibraryRuntime::ComponentFunctionData const&, tensorflow::ProcessFunctionLibraryRuntime::InternalArgs*)>) const + 1336 24 libtensorflow_framework.2.dylib 0x10b71a8a4 tensorflow::ProcessFunctionLibraryRuntime::RunSync(tensorflow::FunctionLibraryRuntime::Options const&, unsigned long long, absl::lts_20230125::Span<tensorflow::Tensor const>, std::__1::vector<tensorflow::Tensor, std::__1::allocator<tensorflow::Tensor>>*) const + 848 25 libtensorflow_cc.2.dylib 0x2801b5008 tensorflow::KernelAndDeviceFunc::Run(tensorflow::ScopedStepContainer*, tensorflow::EagerKernelArgs const&, std::__1::vector<std::__1::variant<tensorflow::Tensor, tensorflow::TensorShape>, std::__1::allocator<std::__1::variant<tensorflow::Tensor, tensorflow::TensorShape>>>*, tsl::CancellationManager*, std::__1::optional<tensorflow::EagerFunctionParams> const&, std::__1::optional<tensorflow::ManagedStackTrace> const&, tsl::CoordinationServiceAgent*) + 572 26 libtensorflow_cc.2.dylib 0x28016613c tensorflow::EagerKernelExecute(tensorflow::EagerContext*, absl::lts_20230125::InlinedVector<tensorflow::TensorHandle*, 4ul, std::__1::allocator<tensorflow::TensorHandle*>> const&, std::__1::optional<tensorflow::EagerFunctionParams> const&, tsl::core::RefCountPtr<tensorflow::KernelAndDevice> const&, tensorflow::GraphCollector*, tsl::CancellationManager*, absl::lts_20230125::Span<tensorflow::TensorHandle*>, std::__1::optional<tensorflow::ManagedStackTrace> const&) + 452 27 libtensorflow_cc.2.dylib 0x2801708ec tensorflow::ExecuteNode::Run() + 396 28 libtensorflow_cc.2.dylib 0x2801b0118 tensorflow::EagerExecutor::SyncExecute(tensorflow::EagerNode*) + 244 29 libtensorflow_cc.2.dylib 0x280165ac8 tensorflow::(anonymous namespace)::EagerLocalExecute(tensorflow::EagerOperation*, tensorflow::TensorHandle**, int*) + 2580 30 libtensorflow_cc.2.dylib 0x2801637a8 tensorflow::DoEagerExecute(tensorflow::EagerOperation*, tensorflow::TensorHandle**, int*) + 416 31 libtensorflow_cc.2.dylib 0x2801631e8 tensorflow::EagerOperation::Execute(absl::lts_20230125::Span<tensorflow::AbstractTensorHandle*>, int*) + 132
2
0
854
Nov ’23
Loss unable to converge with tensorflow-metal > 0.8.0
Hi, I've found that the training loss often fails to converge when training a model on an M2 Max or M1 GPU. After finding many similar reports in this forum and no answer from Apple or anyone else, I tried to identify which package combinations fail and which ones work. The issue seems to depend on the combination of TF (tensorflow), TFM (tensorflow-metal), and batch size. The most recent combination that seems to work in every situation is TensorFlow 2.12 with tensorflow-metal 0.8.0. They must be installed with pip (not conda-forge), like this:

pip install tensorflow-macos==2.12
pip install tensorflow-metal==0.8.0

Every other recent combination failed to get the training loss to converge. Here is the code to reproduce the issue. Sometimes the divergence appears clearly with only 10 epochs; sometimes the epoch count must be increased up to 30 to see it more clearly.

import tensorflow as tf
import pandas as pd
from tensorflow.keras import datasets

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

epochs = 20
batch_size = 128

with tf.device('gpu:0'):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same', input_shape=train_images.shape[1:]),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    history = model.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size, validation_data=(test_images, test_labels))

pd.DataFrame(history.history).plot(subplots=(['loss','val_loss'],['accuracy','val_accuracy']), layout=(1,2), figsize=(15,5));

Here are the results for some combinations:

TF      TF Metal  Batch Size  Epochs  Training Loss Convergence
2.14.0  1.1.0     128         10      NO
2.14.0  1.1.0     512         10      YES
2.13.0  1.0.0     128         20      NO
2.13.0  1.0.0     512         20      YES
2.12.0  1.0.0     128         20      NO
2.12.0  1.0.0     512         30      NO
2.12.0  0.8.0     128         20      YES
2.12.0  0.8.0     512         30      YES

As an example, here are the loss and accuracy curves for TF 2.14 and TFM 1.1.0 with batch size 128: the training loss (blue line) goes up. For TF 2.12, TFM 1.0.0, batch size 128, the training loss (blue line) also goes up. And here is the one that works as expected: TF 2.12, TFM 0.8.0, batch size 128. So, Apple, can you please fix this in the next release? I also suggest that, before publishing a release, you implement a simple automated test that trains a few models like this with various batch sizes and epoch counts and analyzes the loss history to detect major training loss divergence. Thank you. Best regards
0
1
580
Nov ’23
CreateML API training soft-locks at 90%
Hello, I'm trying to train an MLImageClassifier in Swift using the MLImageClassifier.train function. The dataset size doesn't matter (I have the same problem with a smaller one): when training reaches a completedUnitCount of 9 out of 10, even though CPU usage is still high, it seems to hit a soft lock that never brings the model to completion (or to an error). The dataset is made of jpg images, and training the same data in the Create ML app shows no problem. Is there any known issue with the Create ML training APIs around part 9 of the process? Is there any information about this part of the training job? Thank you
1
0
855
Nov ’23
Updating CoreML model on device gives negative mean squared error loss
I converted a toy PyTorch regression model to a Core ML mlmodel using coremltools and set it to be updatable with mean_squared_error_loss. But when testing the training, context.metrics[.lossValue] can give a negative value, which is impossible. Furthermore, the context.metrics[.lossValue] result is very different from my own computed training loss, as shown in the screenshot attached. I was wondering whether I am extracting the training loss from the context the wrong way. Does context.metrics[.lossValue] really give MSE if I used the coremltools function set_mean_squared_error_loss to set the loss? Any suggestion is appreciated. Since the validation loss decreases as the epochs go on, the model does seem to be updated correctly. I am using coremltools==7.0 and Xcode 15.0.1.

Here is my code to convert the PyTorch model to an updatable Core ML model:

import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams, AdamParams
from coremltools.models import datatypes

# Load the model specification
spec = coremltools.utils.load_spec('regression.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_output_features()  # Name: linear_1

# Make layers updatable
builder.make_updatable(['linear_0', 'linear_1'])

# Manually add a mean squared error loss layer
feature = ('linear_1', datatypes.Array(1))
builder.set_mean_squared_error_loss(name='lossLayer', input_feature=feature)

# define the optimizer (Adam in this example)
adam_params = AdamParams(lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, batch=16)
builder.set_adam_optimizer(adam_params)

# Set the number of epochs
builder.set_epochs(100)

# Save the updated model
updated_model = coremltools.models.MLModel(spec)
updated_model.save('updatable_regression30.mlmodel')

Here is the code I use to try to update the saved updatable_regression30.mlmodel:

import CoreML
import GameKit

func generateSampleData(numSamples: Int, seed: UInt64) -> ([MLMultiArray], [MLMultiArray]) {
    // simple regression: y = 10 * sum(x) + 1
    var inputArray = [MLMultiArray]()
    var outputArray = [MLMultiArray]()

    // Create a random number generator with a fixed seed
    let randomSource = GKLinearCongruentialRandomSource(seed: seed)
    let randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: 0, highestValue: 1000)

    for _ in 0..<numSamples {
        do {
            let input = try MLMultiArray(shape: [1, 2], dataType: .float32)
            let output = try MLMultiArray(shape: [1], dataType: .float32)
            var sumInput: Float = 0
            for i in 0..<input.shape[1].intValue {
                // Generate random value using the fixed seed generator
                let inputValue = Float(randomDistribution.nextInt()) / 1000.0
                input[[0, i] as [NSNumber]] = NSNumber(value: inputValue)
                sumInput += inputValue
            }
            output[0] = NSNumber(value: 10.0 * sumInput + 1.0)
            inputArray.append(input)
            outputArray.append(output)
        } catch {
            print("Error occurred while creating MLMultiArrays: \(error)")
        }
    }
    return (inputArray, outputArray)
}

func computeLoss(model: MLModel, data: ([MLMultiArray], [MLMultiArray])) -> Double {
    let (inputData, outputData) = data
    var totalLoss: Double = 0
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        if let prediction = try? model.prediction(from: MLDictionaryFeatureProvider(dictionary: ["x": MLFeatureValue(multiArray: input)])),
           let predictedOutput = prediction.featureValue(for: "linear_1")?.multiArrayValue {
            let loss = (output[0].doubleValue - predictedOutput[0].doubleValue)
            totalLoss += loss * loss // squared error
        }
    }
    return totalLoss / Double(inputData.count) // mean of squared errors
}

func trainModel() {
    // Load the updatable model
    guard let updatableModelURL = Bundle.main.url(forResource: "updatable_regression30", withExtension: "mlmodelc") else {
        print("Failed to load the updatable model")
        return
    }

    // Generate sample data
    let (inputData, outputData) = generateSampleData(numSamples: 200, seed: 8)
    let validationData = generateSampleData(numSamples: 100, seed: 18)

    // Create an MLArrayBatchProvider from the sample data
    var featureProviders = [MLFeatureProvider]()
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        let dataPointFeatures: [String: MLFeatureValue] = [
            "x": MLFeatureValue(multiArray: input),
            "linear_1_true": MLFeatureValue(multiArray: output)
        ]
        if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) {
            featureProviders.append(provider)
        }
    }
    let batchProvider = MLArrayBatchProvider(array: featureProviders)

    // Define progress handlers
    let progressHandlers = MLUpdateProgressHandlers(forEvents: [.trainingBegin, .epochEnd],
        progressHandler: { context in
            switch context.event {
            case .trainingBegin:
                print("Training began.")
            case .epochEnd:
                let loss = context.metrics[.lossValue] as! Double
                let validationLoss = computeLoss(model: context.model, data: validationData)
                let computedTrainLoss = computeLoss(model: context.model, data: (inputData, outputData))
                print("Epoch \(context.metrics[.epochIndex]!) ended. Training Loss: \(loss), Computed Training Loss: \(computedTrainLoss), Validation Loss: \(validationLoss)")
            default:
                break
            }
        }
    )

    // Create an update task with progress handlers
    let updateTask = try! MLUpdateTask(forModelAt: updatableModelURL,
                                       trainingData: batchProvider,
                                       configuration: nil,
                                       progressHandlers: progressHandlers)

    // Start the update task
    updateTask.resume()
}

// call trainModel() to start training
1
0
785
Nov ’23
Compilation error generating files caused by AppEntity in a Widget Extension on iOS 17
When I add AppEntity to my model, I receive this error, repeated for each attribute in the model. The models are already marked for the Widget Extension in Target Membership. I have already cleaned and restarted; nothing works. Does anyone know what I'm doing wrong?

Unable to find matching source file for path "@_swiftmacro_21HabitWidgetsExtension0A05ModelfMm.swift"

import SwiftData
import AppIntents

enum FrecuenciaCumplimiento: String, Codable {
    case diario
    case semanal
    case mensual
}

@Model
final class Habit: AppEntity {
    @Attribute(.unique) var id: UUID
    var nombre: String
    var descripcion: String
    var icono: String
    var color: String
    var esHabitoPositivo: Bool
    var valorObjetivo: Double
    var unidadObjetivo: String
    var frecuenciaCumplimiento: FrecuenciaCumplimiento

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Hábito"
    static var defaultQuery = HabitQuery()

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(nombre)")
    }

    static var allHabits: [Habit] = [
        Habit(id: UUID(), nombre: "uno", descripcion: "", icono: "circle", color: "#BF0000", esHabitoPositivo: true, valorObjetivo: 1.0, unidadObjetivo: "", frecuenciaCumplimiento: .mensual),
        Habit(id: UUID(), nombre: "dos", descripcion: "", icono: "circle", color: "#BF0000", esHabitoPositivo: true, valorObjetivo: 1.0, unidadObjetivo: "", frecuenciaCumplimiento: .mensual)
    ]

    /*
    static func loadAllHabits() async throws {
        do {
            let modelContainer = try ModelContainer(for: Habit.self)
            let descriptor = FetchDescriptor<Habit>()
            allHabits = try await modelContainer.mainContext.fetch(descriptor)
        } catch {
            // Error handling if needed
            print("Error al cargar hábitos: \(error)")
            throw error
        }
    }
    */

    init(id: UUID = UUID(), nombre: String, descripcion: String, icono: String, color: String, esHabitoPositivo: Bool, valorObjetivo: Double, unidadObjetivo: String, frecuenciaCumplimiento: FrecuenciaCumplimiento) {
        self.id = id
        self.nombre = nombre
        self.descripcion = descripcion
        self.icono = icono
        self.color = color
        self.esHabitoPositivo = esHabitoPositivo
        self.valorObjetivo = valorObjetivo
        self.unidadObjetivo = unidadObjetivo
        self.frecuenciaCumplimiento = frecuenciaCumplimiento
    }

    @Relationship(deleteRule: .cascade) var habitRecords: [HabitRecord] = []
}

struct HabitQuery: EntityQuery {
    func entities(for identifiers: [Habit.ID]) async throws -> [Habit] {
        //try await Habit.loadAllHabits()
        return Habit.allHabits.filter { identifiers.contains($0.id) }
    }

    func suggestedEntities() async throws -> [Habit] {
        //try await Habit.loadAllHabits()
        return Habit.allHabits // .filter { $0.isAvailable }
    }

    func defaultResult() async -> Habit? {
        try? await suggestedEntities().first
    }
}
3
2
639
Nov ’23
Is there a way to apply a formatting option to a DataFrame column outside of the explicit description(options:) method?
I'm building up a data frame for the sole purpose of using that lovely textual grid output. I'm getting output without any issue, but I'm trying to sort out how I might apply a formatter to a specific column so that print(dataframeInstance) "just works" nicely. In my use case, I'm running a function, collecting its output, appending that into a frame, and then using TabularData to get nice output in a unit test, so I can see the patterns within it. I found https://developer.apple.com/documentation/tabulardata/column/description(options:), but wasn't able to find any way to "pre-bind" that to a DataFrame column when creating it. (I have some double values that get a bit "excessive" in length due to the joys of floating-point rounding.) Is there a way of setting a formatter on a column at creation time, or afterwards (using a property), that could basically use the same pattern as that description method above?
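I'm not aware of a per-column formatter hook either; one hedged workaround, assuming you only need the printed grid and not a numeric column, is to format the doubles into strings before they go into the frame so that print(df) shows the truncated values. The column names and sample data below are made up, and the append(column:) pattern is the usual DataFrame-building approach rather than anything specific to this question.

import TabularData

let measurements = [0.123456789, 1.0 / 3.0, 2.718281828]

// Pre-format the doubles so the grid output stays compact.
let formatted = measurements.map { String(format: "%.4f", $0) }

var df = DataFrame()
df.append(column: Column(name: "run", contents: Array(1...measurements.count)))
df.append(column: Column(name: "elapsed", contents: formatted))

print(df) // the "elapsed" column now prints with 4 decimal places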
1
0
848
Nov ’23
Error in variable assignment
x = tf.Variable(tf.ones(3))
x[1].assign(5)

The code above results in:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation ResourceStridedSliceAssign: Could not satisfy explicit device specification '/job:localhost/replica:0/task:0/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=1 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
ResourceStridedSliceAssign: CPU
_Arg: GPU CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
ref (_Arg) framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
ResourceStridedSliceAssign (ResourceStridedSliceAssign) /job:localhost/replica:0/task:0/device:GPU:0

Op: ResourceStridedSliceAssign
Node attrs: ellipsis_mask=0, Index=DT_INT32, T=DT_FLOAT, shrink_axis_mask=1, end_mask=0, begin_mask=0, new_axis_mask=0
Registered kernels:
device='XLA_CPU_JIT'; Index in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN, DT_INT4, DT_UINT4]
device='DEFAULT'; T in [DT_INT32]
device='CPU'; T in [DT_UINT64]
device='CPU'; T in [DT_INT64]
device='CPU'; T in [DT_UINT32]
device='CPU'; T in [DT_UINT16]
device='CPU'; T in [DT_INT16]
device='CPU'; T in [DT_UINT8]
device='CPU'; T in [DT_INT8]
device='CPU'; T in [DT_INT32]
device='CPU'; T in [DT_HALF]
device='CPU'; T in [DT_BFLOAT16]
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_DOUBLE]
device='CPU'; T in [DT_COMPLEX64]
device='CPU'; T in [DT_COMPLEX128]
device='CPU'; T in [DT_BOOL]
device='CPU'; T in [DT_STRING]
device='CPU'; T in [DT_RESOURCE]
device='CPU'; T in [DT_VARIANT]
device='CPU'; T in [DT_QINT8]
device='CPU'; T in [DT_QUINT8]
device='CPU'; T in [DT_QINT32]
device='CPU'; T in [DT_FLOAT8_E5M2]
device='CPU'; T in [DT_FLOAT8_E4M3FN]
[[{{node ResourceStridedSliceAssign}}]] [Op:ResourceStridedSliceAssign] name: strided_slice/_assign

I am starting to regret my MacBook purchase. There are so many issues with tensorflow-metal: Adam is slow, values are inconsistent with the CPU, and now this. I saw a post about this, but it was a year old. So, are MacBooks not even good for learning anymore?
0
0
531
Nov ’23