Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore what's possible for your app here.

All subtopics

Country list for iOS 18 features?
Hi, does anyone know if there is a country list for the iOS 18 features, specifically Apple Intelligence? As a developer in Switzerland it seems pretty confusing when only terms like "EU" are used without naming specific countries. For example: Apple Intelligence is not available in the EU or China for now, while third-party app stores are only available in the EU, and Switzerland has neither. So does Apple consider us part of the EU, part of the EEA, geographically in Europe, or something different on a feature-by-feature basis? I'm aware it can get confusing because Switzerland is not part of the EU yet has many individual agreements with it, and that this can make things complicated in betas. Still, a clear list of feature availability by country for iOS 18 must exist somewhere; I just haven't found it yet :)
Replies: 0 · Boosts: 3 · Views: 1.3k · Activity: Jul ’24
PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to Core ML using coremltools 6.2. The model was trained on a custom dataset. However, when running the converted model on iOS, I observed a significant drop in accuracy. This video (https://youtu.be/EfGFrOZQGtU) shows the outputs of the PyTorch model (left) and the Core ML model (right). Could you please confirm whether this drop in accuracy is expected and suggest possible solutions? Note that all preprocessing and post-processing steps are identical between the two models.

P.S. While converting, I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the Core ML model on iOS 17.0, we get these errors:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
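A possible way to narrow this down: run the converted model with different compute-unit settings and compare both against the PyTorch output. This is a diagnostic sketch only; the model file name is a placeholder for the compiled model in your bundle.

import CoreML

// Load the converted model twice: once restricted to the CPU (typically float32
// execution) and once with all compute units (may use float16 on the GPU/Neural
// Engine). Running identical inputs through both and comparing against the
// PyTorch reference helps separate conversion errors from reduced-precision execution.
func loadPoseModels() throws -> (cpuOnly: MLModel, allUnits: MLModel) {
    // Placeholder name; substitute the actual compiled model in your bundle.
    guard let url = Bundle.main.url(forResource: "PoseEstimator", withExtension: "mlmodelc") else {
        fatalError("Model not found in bundle")
    }

    let cpuConfig = MLModelConfiguration()
    cpuConfig.computeUnits = .cpuOnly

    let allConfig = MLModelConfiguration()
    allConfig.computeUnits = .all

    let cpuModel = try MLModel(contentsOf: url, configuration: cpuConfig)
    let allModel = try MLModel(contentsOf: url, configuration: allConfig)
    return (cpuModel, allModel)
}

If the .cpuOnly output tracks PyTorch closely while .all does not, the gap is probably float16 execution on the GPU/Neural Engine rather than a conversion bug, and re-converting with float32 compute precision in coremltools may be worth trying.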
Replies: 2 · Boosts: 0 · Views: 1.1k · Activity: Jun ’24
Disable AppIntent for iPadOS
My application supports both iOS and iPadOS. I would like to add App Intents only on iOS, not on iPadOS, but I have not found any documentation or blog posts about this. Is it possible? If so, how can we do it? If not, what is the best practice to avoid showing Siri Shortcuts for the app on iPad?
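I'm not aware of a supported per-platform switch for App Intents; one experiment worth trying (unverified, since intent metadata is extracted at build time and discoverability behavior should be checked on a real iPad) is gating isDiscoverable on the interface idiom. The intent below is a placeholder:

import AppIntents
import UIKit

// Hypothetical intent; the runtime idiom check is an experiment, not a confirmed solution.
struct OpenFavoritesIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Favorites"

    // Hide the intent from Shortcuts/Siri suggestions when running on iPad.
    static var isDiscoverable: Bool {
        UIDevice.current.userInterfaceIdiom != .pad
    }

    func perform() async throws -> some IntentResult {
        // App-specific work would go here.
        .result()
    }
}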
Replies: 1 · Boosts: 0 · Views: 388 · Activity: Jul ’24
Dynamic AssistantSchema.CameraEnum
Hi, I have some questions about the new AssistantSchema.CameraEnum.captureDevice introduced with iOS 18 beta 4. Here's the context. I create an intent:

@AssistantIntent(schema: .camera.setDevice)
struct SetDeviceIntent {
    var device: CaptureDevice

    func perform() async throws -> some IntentResult {
        .result()
    }
}

@AssistantEnum(schema: .camera.captureDevice)
enum CaptureDevice: String {
    case front
    case back
    case ultrawide
}

Some CaptureDevice cases are not available on some devices. For example, CaptureDevice.ultrawide is only available on iPhone, not on iPad. How can we make CaptureDevice dynamic? I don't think AppEnum supports @Dependency or anything similar.
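Since AppEnum cases are fixed at compile time, one hedged fallback (a sketch, not a confirmed recommendation) is to keep the enum static and reject unsupported cases at perform() time with a user-readable error. This is a variant of the intent from the post; the iPad/ultra-wide rule mirrors the example above and should be replaced by however the app actually probes camera availability:

import AppIntents
import UIKit

// A user-visible error for unsupported selections.
enum CaptureDeviceError: Error, CustomLocalizedStringResourceConvertible {
    case unsupportedOnThisDevice

    var localizedStringResource: LocalizedStringResource {
        "That camera isn't available on this device."
    }
}

@AssistantIntent(schema: .camera.setDevice)
struct SetDeviceIntent {
    var device: CaptureDevice

    func perform() async throws -> some IntentResult {
        // Runtime guard: the enum stays static, but unsupported cases fail gracefully.
        if device == .ultrawide && UIDevice.current.userInterfaceIdiom == .pad {
            throw CaptureDeviceError.unsupportedOnThisDevice
        }
        return .result()
    }
}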
Replies: 0 · Boosts: 2 · Views: 354 · Activity: Jul ’24
Dependency between AssistantSchema.CameraEnum
Hi, I have some questions about the new AssistantSchema.CameraEnum.captureMode and AssistantSchema.CameraEnum.captureDevice introduced with iOS 18 beta 4. Here's the context. I create an intent:

@AssistantIntent(schema: .camera.startCapture)
struct StartCaptureIntent {
    var captureMode: CaptureMode
    var timerDuration: CaptureDuration?
    var device: CaptureDevice?

    func perform() async throws -> some IntentResult {
        .result()
    }
}

And these app enums:

@AssistantEnum(schema: .camera.captureDevice)
enum CaptureDevice: String {
    case front
    case back
    case ultrawide
}

@AssistantEnum(schema: .camera.captureMode)
enum CaptureMode: String {
    case modeA
    case modeB
}

Some CaptureDevice cases are not available in some CaptureMode. For example, CaptureMode.modeA only supports CaptureDevice.back and CaptureDevice.front. In a classic AppIntent I would create an AppEntity to represent CaptureDevice and use @IntentParameterDependency<CapturePhotoIntent>(\.$captureMode) to create a dependency between the captureMode and captureDevice parameters. How can we create this dependency between two @AssistantEnum? I'm not sure this is possible, since @AssistantEnum creates an AppEnum.
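For reference, here is a rough sketch of the classic pattern the post mentions (an AppEntity whose suggestions are filtered via @IntentParameterDependency), with illustrative type names; whether the same dependency can be expressed between two @AssistantEnum values is exactly the open question:

import AppIntents

// Classic (non-AssistantSchema) sketch: a CaptureDevice entity whose suggested
// values depend on the capture mode already chosen in the host intent.
struct CaptureDeviceEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Capture Device"
    static var defaultQuery = CaptureDeviceQuery()

    var id: String
    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(id)")
    }
}

struct CaptureDeviceQuery: EntityQuery {
    // Reads the already-resolved captureMode parameter of the host intent.
    @IntentParameterDependency<ClassicStartCaptureIntent>(\.$captureMode)
    var startCapture

    func entities(for identifiers: [String]) async throws -> [CaptureDeviceEntity] {
        identifiers.map { CaptureDeviceEntity(id: $0) }
    }

    func suggestedEntities() async throws -> [CaptureDeviceEntity] {
        // Filter devices based on the selected mode (the modeA rule from the post).
        guard let mode = startCapture?.captureMode else { return [] }
        let ids = (mode == .modeA) ? ["front", "back"] : ["front", "back", "ultrawide"]
        return ids.map { CaptureDeviceEntity(id: $0) }
    }
}

struct ClassicStartCaptureIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Capture"

    @Parameter(title: "Mode")
    var captureMode: CaptureMode

    @Parameter(title: "Device")
    var device: CaptureDeviceEntity?

    func perform() async throws -> some IntentResult {
        .result()
    }
}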
Replies: 0 · Boosts: 1 · Views: 303 · Activity: Jul ’24
Documentation and usage of BNNS.NormalizationLayer
Hello everybody, I am running into an error with BNNS.NormalizationLayer. It appears to work only with .vector shapes; matrix shapes throw layerApplyFail during training, and inference doesn't throw but the output stays the same. How do I correctly use BNNS.NormalizationLayer with matrix shapes, and how can I debug the layerApplyFail exception? Thanks.

let array: [Float32] = [
    01, 02, 03, 04, 05, 06,
    07, 08, 09, 10, 11, 12,
    13, 14, 15, 16, 17, 18,
]

// let inputShape: BNNS.Shape = .vector(6 * 3) // works
let inputShape: BNNS.Shape = .matrixColumnMajor(6, 3)

let input = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)
let output = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)
let beta = BNNSNDArrayDescriptor.allocate(repeating: Float32(0), shape: inputShape, batchSize: 1)
let gamma = BNNSNDArrayDescriptor.allocate(repeating: Float32(1), shape: inputShape, batchSize: 1)
let activation: BNNS.ActivationFunction = .identity

let layer = BNNS.NormalizationLayer(type: .layer(normalizationAxis: 0),
                                    input: input,
                                    output: output,
                                    beta: beta,
                                    gamma: gamma,
                                    epsilon: 1e-12,
                                    activation: activation)!

let layerInput = BNNSNDArrayDescriptor.allocate(initializingFrom: array, shape: inputShape)
let layerOutput = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: inputShape)

// try layer.apply(batchSize: 1, input: layerInput, output: layerOutput, for: .inference) // No throw
try layer.apply(batchSize: 1, input: layerInput, output: layerOutput, for: .training)

_ = layerOutput.makeArray(of: Float32.self) // All zeros when .inference
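Not an explanation of the layerApplyFail, but a possible workaround sketch under the assumption that .layer(normalizationAxis: 0) on a 6x3 column-major matrix is meant to normalize each length-6 column independently; it reuses the .vector shape that the post reports as working:

import Accelerate

// Sketch: layer-normalize each column of a 6x3 column-major matrix separately,
// using a vector-shaped layer. Wrap the throwing calls in do/catch in a real target.
let rows = 6, cols = 3
let matrix: [Float32] = Array(1...18).map(Float32.init)   // column-major storage
var normalized = [Float32]()

let columnShape: BNNS.Shape = .vector(rows)
let colInput = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: columnShape)
let colOutput = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: columnShape)
let colBeta = BNNSNDArrayDescriptor.allocate(repeating: Float32(0), shape: columnShape, batchSize: 1)
let colGamma = BNNSNDArrayDescriptor.allocate(repeating: Float32(1), shape: columnShape, batchSize: 1)

let columnLayer = BNNS.NormalizationLayer(type: .layer(normalizationAxis: 0),
                                          input: colInput,
                                          output: colOutput,
                                          beta: colBeta,
                                          gamma: colGamma,
                                          epsilon: 1e-12,
                                          activation: .identity)!

for c in 0..<cols {
    // Slice out one column and run it through the vector-shaped layer.
    let column = Array(matrix[(c * rows)..<((c + 1) * rows)])
    let columnInput = BNNSNDArrayDescriptor.allocate(initializingFrom: column, shape: columnShape)
    let columnOutput = BNNSNDArrayDescriptor.allocateUninitialized(scalarType: Float32.self, shape: columnShape)
    try columnLayer.apply(batchSize: 1, input: columnInput, output: columnOutput, for: .inference)
    normalized.append(contentsOf: columnOutput.makeArray(of: Float32.self) ?? [])
}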
Replies: 1 · Boosts: 0 · Views: 385 · Activity: Jul ’24
AssistantIntent for Photos without library access
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering whether it is possible to create intents that do not require full library access. Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an app intent to allow interactions like "Open this image in BeCasso and apply preset X". Would that be possible without full library access?
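For reference, a minimal sketch of the permission-free PHPicker path the post describes (configuration values are illustrative); whether .photos.openAsset can work without full library access is the open question:

import PhotosUI
import UIKit

// The picker runs out of process, so no photo-library permission prompt is shown.
final class ImagePickerPresenter: NSObject, PHPickerViewControllerDelegate {
    func present(from presenter: UIViewController) {
        var configuration = PHPickerConfiguration()
        configuration.filter = .images
        configuration.selectionLimit = 1

        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self
        presenter.present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        guard let provider = results.first?.itemProvider,
              provider.canLoadObject(ofClass: UIImage.self) else { return }

        provider.loadObject(ofClass: UIImage.self) { image, _ in
            // Hand the UIImage off to the editing pipeline here.
            _ = image as? UIImage
        }
    }
}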
Replies: 0 · Boosts: 0 · Views: 322 · Activity: Jul ’24
Create ML Hand Pose Classifier preview not showing the prediction result
I have created and trained a hand pose classifier model and am trying to test it. In the WWDC21 session "Classify hand poses and actions with Create ML", the preview window shows a prediction result based on the live preview or imported images. Mine does not. When I import pictures or run the live test there is no result; it's just the wireframe view, and underneath it there is nothing. How do I fix this? Thanks.
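As a way to sanity-check the trained model outside the Create ML preview, it can be driven from code: detect the hand with Vision, convert the observation with keypointsMultiArray(), and feed that to the exported model. The model file name and the "poses"/"label" feature names below are assumptions; check them in the model's Xcode inspector.

import Vision
import CoreML
import UIKit

// Sketch: classify a hand pose in a single test image, bypassing the Create ML preview.
func classifyHandPose(in image: UIImage) throws -> String? {
    guard let cgImage = image.cgImage else { return nil }

    // 1. Detect the hand and extract the 21 keypoints Vision provides.
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // 2. Convert to the MLMultiArray layout Create ML hand pose classifiers expect.
    let keypoints = try observation.keypointsMultiArray()

    // 3. Run the exported classifier (placeholder file name; input/output names are assumptions).
    guard let url = Bundle.main.url(forResource: "HandPoseClassifier", withExtension: "mlmodelc") else {
        return nil
    }
    let model = try MLModel(contentsOf: url)
    let input = try MLDictionaryFeatureProvider(dictionary: ["poses": keypoints])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}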
Replies: 1 · Boosts: 0 · Views: 302 · Activity: Jul ’24
Training a Segmentation model
Hi there, I'm a computer science student with a 2019 MacBook Pro, and I'm thinking of buying a new Mac, either a Mac Studio or a MacBook Pro, to use for ML. I'm currently building a segmentation model, and I'm wondering whether I could use Core ML or the Apple Neural Engine in the new M3 chips to train it. Right now I'm using Colab and TensorFlow to create the model, but it's not doing the job; I keep running out of CUDA memory. Thanks :)
Replies: 1 · Boosts: 0 · Views: 380 · Activity: Jul ’24
Div calculation issue in Metal
Hi, all. I've been writing various compute functions using Metal. However, in the following kernel, unlike + and *, the / operation shows an accuracy issue. The function divides a matrix of shape [n, x, y] by a scalar [1]. Compared to numpy or torch, if I change the operator to * or + instead of /, I get exactly the same results, but with / the mean differs by more than 1e-5. (For reference, this was written with reference to the Metal kernel code in llama.cpp.)

kernel void kernel_div_single_f16(
        device const half * src0,
        device const half * src1,
        device       half * dst,
        constant int64_t & ne00,
        constant int64_t & ne01,
        constant int64_t & ne02,
        constant int64_t & ne03,
        uint3 tgpig[[threadgroup_position_in_grid]],
        uint3 tpitg[[thread_position_in_threadgroup]],
        uint3 ntg[[threads_per_threadgroup]]) {
    const int64_t i03 = tgpig.z;
    const int64_t i02 = tgpig.y;
    const int64_t i01 = tgpig.x;
    const uint offset = i03*ne02*ne01*ne00 + i02*ne01*ne00 + i01*ne00;
    for (int i0 = tpitg.x; i0 < ne00; i0 += ntg.x) {
        dst[offset + i0] = src0[offset + i0] / *src1;
    }
}

My machine is a MacBook Pro (16-inch, 2021) / macOS 12.5 / Apple M1 Pro. Are there any known issues with division? Thanks in advance for your reply.
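One possibility worth ruling out (an assumption, not a confirmed diagnosis): the kernel divides in half precision, which carries roughly three decimal digits, so per-element relative errors on the order of 1e-3 are expected, and a mean difference above 1e-5 versus a float32 reference is plausible. A quick CPU-side check in Swift:

import Foundation

// Quick check of half-precision division error on the CPU (Float16 is available
// natively in Swift on Apple silicon). If the Metal kernel's deviation is on the
// same order, it is likely float16 rounding rather than a bug in the / operator.
let scalar: Float = 7.3
var maxRelativeError: Float = 0
for value in stride(from: Float(0.1), through: 100, by: 0.37) {
    let reference = value / scalar                        // float32 division
    let half = Float(Float16(value) / Float16(scalar))    // float16 division
    let relativeError = abs(half - reference) / abs(reference)
    maxRelativeError = max(maxRelativeError, relativeError)
}
print("max relative error:", maxRelativeError)   // typically around 1e-3 for float16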
Replies: 1 · Boosts: 0 · Views: 384 · Activity: Jul ’24
Using MLHandActionClassifier with visionOS
How do I use either of these data sources with MLHandActionClassifier on visionOS?

MLHandActionClassifier.DataSource.labeledKeypointsDataFrame
MLHandActionClassifier.DataSource.labeledKeypointsData

visionOS ARKit hand tracking provides 27 joints with 3D coordinates, which differs from the 21 joints with 2D coordinates that these two data sources mention in their documentation.
Replies: 1 · Boosts: 0 · Views: 379 · Activity: Jul ’24
VisionKit crashes on iOS 16.4.
The app crashes on iOS 16.4 when it uses the ImageAnalysisInteraction API from VisionKit. It crashes before it even starts. Here is the output:

dyld[3240]: Symbol not found: _$s9VisionKit24ImageAnalysisInteractionC7subject2atAC7SubjectVSgSo7CGPointV_tYaFTu
Referenced from: <BAD7A699-FB4E-3D0E-8CD4-45CC9FC3D5E5> /Users/sereza/Library/Developer/CoreSimulator/Devices/B64EAF39-0DD9-49EC-A3F7-69675C94B8BE/data/Containers/Bundle/Application/F4E30E86-ED4D-4748-AB99-434208D55483/VisionKitChecker.app/VisionKitChecker
Expected in: <F05E3A17-D74A-3EE2-BC8D-DDCC23E48707> /Library/Developer/CoreSimulator/Volumes/iOS_20E247/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 16.4.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/VisionKit.framework/VisionKit

Here is enough code to reproduce the crash. Note that this code is never called; it is enough that it exists in the project:

import VisionKit

@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}
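The missing symbol demangles to ImageAnalysisInteraction.subject(at:), which appears to be newer than iOS 16.4, so a hedged mitigation sketch is to constrain the type that touches those APIs with an availability annotation and guard call sites with #available:

import Combine
import VisionKit

// Sketch: marking the type as iOS 17-only keeps the newer subject-lifting symbols
// from being bound at launch on iOS 16.4. Call sites then need an #available check.
@available(iOS 17.0, *)
@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}

// Example call site on a deployment target below iOS 17.
func liftIfPossible() {
    if #available(iOS 17.0, *) {
        Task { try? await LiftHelper().doSomething() }
    }
}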
Replies: 1 · Boosts: 0 · Views: 344 · Activity: Jul ’24
tensorflow-metal problems (tf.random.normal) and disappointments
"Last year, I upgraded to an M2 Max laptop, expecting that tensorflow-metal would facilitate effective local prototyping utilizing the Apple Silicon's capabilities. It has been quite some time since tensorflow-metal was last updated, and there appear to be several unresolved issues noted by the community here. I've personally observed the following behavior with my setup: Without tensorflow-metal: import tensorflow as tf for _ in range(10): print(tf.random.normal((3,)).numpy()) [-1.4213976 0.08230731 -1.1260201 ] [ 1.2913705 -0.47693467 -1.2886043 ] [ 0.09144169 -1.0892165 0.9313669 ] [ 1.1081179 0.9865657 -1.0298151] [ 0.03328908 -0.00655857 -0.02662632] [-1.002391 -1.1873596 -1.1168724] [-1.2135247 -1.2823236 -1.0396363] [-0.03492929 -0.9228362 0.19147137] [-0.59353966 0.502279 0.80000925] [-0.82247525 -0.13076428 0.99579334] With tensorflow-metal: import tensorflow as tf for _ in range(10): print(tf.random.normal((3,)).numpy()) [ 1.0031303 0.8095635 -0.0610961] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] [-1.3544159 0.7045493 0.03666191] Given these observations, it seems there may be an issue with the randomness of tf.random.normal when using tensorflow-metal. My current setup includes MacOS 14.5, tensorflow 2.14.1, and tensorflow-macos 2.14.1. I am interested in understanding if there are known solutions or workarounds for this behavior. Furthermore, could anyone provide an update on whether tensorflow-metal is still being actively developed, or if alternative approaches are recommended for utilizing the GPU capabilities of this hardware?
Replies: 1 · Boosts: 0 · Views: 558 · Activity: Jul ’24