Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency.
Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration still returns 'true' and no error is produced.
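For reference, a minimal Swift sketch of the kind of configuration involved (values are illustrative; the example project itself is Objective-C++, where the call returns a BOOL rather than throwing):

import AVFoundation

// Minimal sketch, not the example project's code: request a small IO buffer and
// compare it with what the session actually grants. No error is thrown even when
// the preferred duration is not honored.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(48_000)
    try session.setPreferredIOBufferDuration(0.005805) // the duration requested in the steps below
    try session.setActive(true)
} catch {
    print("AVAudioSession configuration failed: \(error)")
}
// With Sound Recognition or Vocal Shortcuts enabled, this prints a much larger
// value (e.g. ~0.023 s) than the requested 0.005805 s.
print("granted IO buffer duration: \(session.ioBufferDuration)")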
Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18, with at least one shortcut enabled (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset, launch the application, and observe console logs showing:
   - a requested buffer duration of 0.005805 (256 samples @ 48 kHz)
   - an actual buffer duration of 0.023220 (1104 samples @ 48 kHz; this is regularly the resulting buffer size in all of our tests)
5. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
6. Launch the application and observe:
   - the same result as in step 4
   - a mismatch between the hardware buffer size (1104) and the recorded frame count (1024)
   - mismatched playbackCount and recordCount
7. Quit the app and disable Vocal Shortcuts.
8. Launch the app and observe the IOBufferDuration matching the requested duration and matching input/output buffer sizes (the expected behavior).
Expected results:
The requested IOBufferDuration is respected, or setPreferredIOBufferDuration returns false / produces an error
Input and output buffer sizes match
Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
Accessibility
Make your apps function for a broad range of users using Accessibility APIs across all Apple platforms.
Posts under Accessibility tag
124 Posts
Hi Team,
We are integrating SwiftUI Charts' BarMark. The UI looks good, but when we set custom accessibility (ADA) labels/values manually, they are not reflected and do not override the defaults.
Is this an iOS defect, or is there a workaround?
Thanks in advance.
Sample:
Chart(data) {
    BarMark(
        x: .value("Category", $0.department),
        y: .value("Profit", $0.profit)
    )
    .foregroundStyle(by: .value("Product Category", $0.productCategory))
    .accessibilityIdentifier("BarMark")
    .accessibilityLabel("Dep: \($0.department)")
    .accessibilityValue("Profit: \($0.profit) Category: \($0.productCategory)")
}
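One possible workaround (a sketch under assumptions, not a confirmed fix for the BarMark label issue) is to describe the whole chart to VoiceOver with an AXChartDescriptor via .accessibilityChartDescriptor. The DepartmentProfit model below is a hypothetical stand-in for the data type used above:

import SwiftUI
import Charts
import Accessibility

// Hypothetical data model matching the fields used in the sample above.
struct DepartmentProfit: Identifiable {
    let id = UUID()
    let department: String
    let profit: Double
    let productCategory: String
}

// Describes the chart as a whole for VoiceOver, independent of per-mark labels.
struct ProfitChartDescriptor: AXChartDescriptorRepresentable {
    let data: [DepartmentProfit]

    func makeChartDescriptor() -> AXChartDescriptor {
        let xAxis = AXCategoricalDataAxisDescriptor(
            title: "Department",
            categoryOrder: data.map(\.department)
        )
        let yAxis = AXNumericDataAxisDescriptor(
            title: "Profit",
            range: 0...(data.map(\.profit).max() ?? 0),
            gridlinePositions: []
        ) { "Profit: \($0)" }
        let points = data.map {
            AXDataPoint(x: $0.department, y: $0.profit,
                        label: "Dep: \($0.department), Category: \($0.productCategory)")
        }
        let series = AXDataSeriesDescriptor(name: "Profit by department",
                                            isContinuous: false,
                                            dataPoints: points)
        return AXChartDescriptor(title: "Profit by department",
                                 summary: nil,
                                 xAxis: xAxis,
                                 yAxis: yAxis,
                                 additionalAxes: [],
                                 series: [series])
    }
}

// Attach it to the chart:
// Chart(data) { ... }.accessibilityChartDescriptor(ProfitChartDescriptor(data: data))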
Hello everyone,
I’m experiencing an issue with the accessibility feature "Keyboard Navigation" in iOS 18.0. Specifically, when enabling the "Allow Full Keyboard Access" option and attempting to navigate to inline navigation titles within stock iOS apps, the navigation doesn’t seem to work as expected.
Here’s how to replicate the issue:
Enable the accessibility option: Allow Full Keyboard Access (Settings > Accessibility > Keyboards > Full Keyboard Access).
Open any stock iOS app that uses an inline navigation style (for example, the Mail or Settings app).
Press the Tab key to cycle through items on the inline navigation bar.
In iOS 18.0, pressing the Tab key does not allow navigation to the inline navigation title, which was previously possible in iOS 16. The issue is specific to iOS 17 and iOS 18; it works fine on iOS 16.
Has anyone else encountered this issue or have suggestions for a workaround? Would love to hear your thoughts.
Prior to iOS 18.1, App-prefs:Wallpaper deeplinked to the Wallpaper settings. On iOS 18.1, it now only opens the Settings app at its root. Is there a new URL to deeplink to the Wallpaper settings?
The following URLs have been tested and do not work on iOS 18.1:
App-prefs:Wallpaper
App-prefs:wallpaper
App-prefs:root=wallpaper
App-prefs:root=Wallpaper
App-prefs:root=WALLPAPER
App-prefs:root=General&path=Wallpaper
prefs:root=Wallpaper
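For reference, a minimal sketch of how such candidates can be exercised (the App-prefs scheme is undocumented, so none of these paths are guaranteed to keep working):

import UIKit

// The candidates mirror part of the list above; the App-prefs scheme is undocumented.
let candidates = [
    "App-prefs:Wallpaper",
    "App-prefs:root=Wallpaper",
    "prefs:root=Wallpaper"
]
for string in candidates {
    guard let url = URL(string: string),
          UIApplication.shared.canOpenURL(url) else { continue } // scheme must be in LSApplicationQueriesSchemes
    UIApplication.shared.open(url) // on iOS 18.1 this opens the Settings root, not Wallpaper
}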
My app does not automatically switch languages (voices) in VoiceOver when VoiceOver is on and the screen includes both English and Spanish content. Instead of switching to the correctly accented voice, the content is announced with whatever my manual Voices rotor setting happens to be. I can manually switch the voice in the rotor to make words sound intelligible, but my main concern is that language changes are not auto-detected even though that feature is turned on in my Settings.
VO does detect language changes in other apps, so I think there must be either misplaced or missing accessibilityLanguage strings somewhere in my app. Or is it more than that, for localization considerations?
I reached out to the Apple Accessibility team and was directed to open a ticket here, as my question is about the underlying code.
I am a novice developer and primarily an accessibility SME; I expect that when "detect languages" is on in the user's VoiceOver settings, the voice used for screen reader speech output will automatically switch to the correct language/accent. I recognize there is a problem but am not sure where the breakdown is. I would like guidance on how to fix it to relay to my teams.
https://developer.apple.com/documentation/objectivec/nsobject/1615192-accessibilitylanguage
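In case it helps, a minimal UIKit sketch (assumed, not taken from the app) of the two places a language hint can be attached so VoiceOver picks the matching voice:

import UIKit

// Per-element hint: tell VoiceOver the label's text is Spanish.
let spanishLabel = UILabel()
spanishLabel.text = "Bienvenido a la aplicación"
spanishLabel.accessibilityLanguage = "es-ES"

// Per-range hint inside one string, using the speech-language attribute.
let mixed = NSMutableAttributedString(string: "Welcome. Bienvenido.")
mixed.addAttribute(.accessibilitySpeechLanguage,
                   value: "es-ES",
                   range: NSRange(location: 9, length: 11)) // the "Bienvenido." range
let mixedLabel = UILabel()
mixedLabel.attributedText = mixed
mixedLabel.accessibilityAttributedLabel = mixed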
Since iOS 18.1 was released, users of our app, which relies on strobing the device torch, have been experiencing issues.
We have narrowed this down to being caused on devices with adaptive true-tone flash and have submitted a radar: FB15787160.
The issue seems to be caused by ambient light levels. If run in a dark room, the torch strobes exactly as effectively as in previous iOS versions, if run in a light room, or outdoors, or near a window, the strobe will run for ~1s and then the torch will get stuck on for half a second or so (less frequently it gets stuck off) and then it will strobe again for ~1s and this behaviour repeats indefinitely.
If we go to a darker environment, and background and then foreground the app (this is required) the issue is resolved, until moving to an area with higher ambient light levels again. We have done a lot of debugging, and also discovered that turning off "Auto-Brightness" from Settings -> Accessibility -> Display & Text Size resolves the issue.
We have also viewed logs from Console.app at the time of the issue occurring and it seems to be that there are quite sporadic ambient light level readings at the time at which the issue occurs. The light readings transition from ~100 Lux to ~8000 Lux at the point that the issue starts occurring (seemingly caused by the rear sensor being affected by the torch). With "Auto-Brightness" turned off, it seems these readings stay at lower levels.
This is rendering the primary use case of our app essentially useless, so it would be great to get to the bottom of it! We can't even really detect the condition in-app, as I believe SensorKit is restricted to research applications and requires a review process with Apple before access is granted.
Edit: It's worth noting this is also affecting other apps with strobe functionality in the exact same way
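For context, a minimal sketch of the kind of torch strobing involved, assuming AVCaptureDevice torch control; the timing, level, and function name are illustrative, not necessarily the app's actual implementation:

import AVFoundation

// Strobe the torch on/off a fixed number of times (illustrative timing).
func strobeTorch(times: Int, interval: TimeInterval) {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    DispatchQueue.global().async {
        for _ in 0..<times {
            do {
                try device.lockForConfiguration()
                try device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
                device.unlockForConfiguration()
                Thread.sleep(forTimeInterval: interval)

                try device.lockForConfiguration()
                device.torchMode = .off   // on iOS 18.1 + adaptive flash, the torch reportedly sticks here
                device.unlockForConfiguration()
                Thread.sleep(forTimeInterval: interval)
            } catch {
                print("Torch configuration failed: \(error)")
            }
        }
    }
}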
I have a stack view which has 2 labels:
class TextView: UIView {
    @IBOutlet private weak var stackView: UIStackView! {
        didSet {
            stackView.isAccessibilityElement = true
            stackView.accessibilityLabel = (label1.text ?? "") + (label2.text ?? "")
        }
    }
    @IBOutlet private weak var label1: UILabel! {
        didSet {
            label1.accessibilityIdentifier = "label1"
        }
    }
    @IBOutlet private weak var label2: UILabel! {
        didSet {
            label2.accessibilityIdentifier = "label2"
        }
    }
}
My goal here is to have a combined accessibility label for the stack view while still being able to access the accessibilityIdentifier of the child elements for automation.
VoiceOver does not support the plist property CFBundleSpokenName. This is wrong and should be fixed.
Ultimately the issue I am dealing with is that our app name is UWCU, and instead of VoiceOver pronouncing each letter, it tries to read this as a word and horribly butchers our organization's/app's name.
Alternatives such as using U.W.C.U. and U W C U are not acceptable.
@Apple, I know your first response is going to be "no, it is working perfectly," but quite frankly you are wrong. I know you feel strongly about this, given your response in posts like this:
https://forums.developer.apple.com/forums/thread/734545?answerId=760084022
HOWEVER, with iOS 18, your argument that "VoiceOver should read what's on the screen" doesn't hold water anymore. With iOS 18, you at Apple have added a new feature that lets users customize their home screens and completely remove the names of apps. Here's your own guide:
https://support.apple.com/guide/iphone/customize-apps-and-widgets-on-the-home-screen-iph385473442/ios
Quoted from your guide:
Make the icons bigger: Tap Large. (In large size, the names of the apps disappear.)
With large icons + VoiceOver turned on, VoiceOver still reads the app name even though it has disappeared from the screen. So, your own argument "VoiceOver should read the text as it appears on the screen" is invalid, because there is NO text on the screen.
If you can't tell, I'm pretty peeved about all this. There's a reason why screen readers support aria attributes to help deliver the right accessible experience. It's a simple ask for VoiceOver to do the same thing.
In iOS 17, calling UIApplication.shared.open("App-prefs:ACCESSIBILITY&path=HEARING_AID_TITLE") opened the device Settings directly at Accessibility > Hearing Devices, which was very helpful.
Now in iOS 18, this call only opens the device Settings at the root.
I would like to know how to replace the URL so that it works like before.
canOpenURL does return true, so I'm wondering if something is broken, or if canOpenURL is kind of lying a bit.
I also tried other paths to go to other screens and they don't work either.
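For reference, a minimal sketch of the check being described (canOpenURL reports true, yet open lands on the Settings root); the scheme has to be declared for the query to succeed:

import UIKit

let url = URL(string: "App-prefs:ACCESSIBILITY&path=HEARING_AID_TITLE")!

// "App-prefs" must be listed under LSApplicationQueriesSchemes in Info.plist
// for canOpenURL to report true.
if UIApplication.shared.canOpenURL(url) {
    // Returns true on iOS 18, but Settings opens at its root
    // instead of Accessibility > Hearing Devices.
    UIApplication.shared.open(url)
}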
So, I'm trying to create my own text-to-speech setup. The problem I'm having is that whenever I do a test run, the speech gets a bit choppy at the start, skipping over maybe a word or a few characters.
A few details:
I've essentially built a separate class for handling the speech events.
AVSpeechSynthesizer is set up as a private variable for the class so I don't expect deallocation to be the issue. Especially since it's a problem at the start.
I've got a queue set up for what it's worth so that shouldn't be a problem.
I'd appreciate any advice.
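For reference, a minimal sketch of the kind of setup described above; the class and property names are assumptions, and the preUtteranceDelay is only a possible mitigation for the clipped first word, not a confirmed fix:

import AVFoundation

// The synthesizer is held strongly by the class, so deallocation is not a factor,
// and AVSpeechSynthesizer queues utterances on its own.
final class SpeechManager {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        // A small pre-utterance delay sometimes masks the choppy start.
        utterance.preUtteranceDelay = 0.1
        synthesizer.speak(utterance)
    }
}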
Hi, is there any way to interact with the eye tracking accessibility feature in iOS and iPadOS? I want to be able to trigger an eye tracker recalibration programmatically. Also if possible, I would like to be able to retrieve gaze location data.
These are not intended to be features on an app being published to the app store but rather a custom made accessibility app.
I have a record button that either starts or stops a recording using the default action. When the user is recording, I want to add a custom action to discard the recording instead of saving it. That all works fine with the following code:
if isRecording {
    recordButton.accessibilityCustomActions = [
        .init(name: String(localized: "discard recording"), actionHandler: { [weak delegate] _ in
            delegate?.discardRecording()
            return true
        })
    ]
    recordButton.accessibilityLabel = String(localized: "stop recording", comment: "accessibility label")
} else {
    recordButton.accessibilityCustomActions = []
    recordButton.accessibilityLabel = String(localized: "start recording", comment: "accessibility label")
}
The problem I have is that after a user chooses "discard recording", it remains the selected action the next time the user records, and instead of stopping and saving the recording, the user might accidentally discard the next one as well.
How can I programmatically reset the selected action on this recordButton to the default action?
I want to create a utility to import a list of words into the Voice Control custom vocabulary. Is there an API to do this? I noticed that if you use the built-in export vocabulary functionality (Settings > Accessibility > Voice Control > ...), the exported file is a plist document. If there is no API to add words programmatically, should I just create a utility that generates a plist file and import it using the built-in import vocabulary functionality (Settings > Accessibility > Voice Control > ...)?
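If it comes down to generating the file, a generic sketch of writing a word list as an XML plist with PropertyListEncoder; the entry structure and key name below are placeholders and would need to be matched against a file actually exported from Settings > Accessibility > Voice Control:

import Foundation

// Hypothetical entry shape; the real schema should be copied from an exported vocabulary file.
struct VocabularyEntry: Codable {
    let phrase: String
}

let words = ["widget", "deeplink", "sideload"]
let entries = words.map(VocabularyEntry.init)

let encoder = PropertyListEncoder()
encoder.outputFormat = .xml
do {
    let data = try encoder.encode(entries)
    try data.write(to: URL(fileURLWithPath: "/tmp/CustomVocabulary.plist"))
} catch {
    print("Failed to write plist: \(error)")
}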
When the native info panel (which displays the title, subtitle, description, and custom buttons) opens, the focus immediately shifts to the first button. As a result, VoiceOver skips the description, which is crucial for users relying on accessibility features.
I haven’t found a way to detect when it opens. Knowing this would allow me to trigger custom VoiceOver announcements or adjust the focus order dynamically.
Are any other people experiencing this issue, and how do we solve it?
Last November 13, I came up with a phone keyboard layout (strategy) that makes the keys bigger and hence reduces mistyping.
The typical phone keyboard looks like this:
My proposed keyboard looks like this:
Essentially, it's a split keyboard with the left-hand part stacked above/below the right-hand part. Key size/width/height and the vertical distance between the left-hand part and right-hand part may be adjustable to suit different phone widths and user hand sizes.
You guys can show the proposed keyboard's image on your phone and fit this keyboard to your phone width so you can actually simulate typing on it to see how it feels. On my phone, the letter keys in it are a little too big for my thumbs to reach the farthest keys, but as I said, key size should be adjustable to suit different phone widths and user hand sizes.
I frequently use my iPad to develop remotely in VSCode or prototype designs in Figma. This is currently all done in the browser. Given that many of these experiences rely heavily on the keyboard I was hoping there would be a solution to make a keyboard persistent on the screen, or at least a few hot keys.
Is it possible for me to develop an accessibility tool that could stay persistent on the screen? Perhaps something that would talk with assistive touch? Or is that in Apple’s no no square?
My app uses PDFKit, and I don't know how to solve this bug at all. On the same iOS version and device model, some users' devices crash while our own devices work normally.
The following is the stack trace of the crash:
0 libsystem_platform.dylib __os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib __os_unfair_lock_lock_slow + 308
2 CoreGraphics _CGPDFPageCopyRootTaggedNode + 56
3 PDFKit -[PDFPageViewAccessibility accessibilityElements] + 76
4 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityElements] + 56
5 UIAccessibility -[NSObjectAccessibility accessibilityElementCount] + 68
6 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityHasOrderedChildren] + 44
7 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityFrameForSorting] + 216
8 UIAccessibility -[NSObject _accessibilityCompareGeometry:] + 116
9 UIAccessibility -[NSObject(AXPrivCategory) accessibilityCompareGeometry:] + 52
10 CoreFoundation ___CFSimpleMergeSort + 100
11 CoreFoundation ___CFSimpleMergeSort + 248
12 CoreFoundation _CFSortIndexes + 260
13 CoreFoundation -[NSArray sortedArrayFromRange:options:usingComparator:] + 732
14 CoreFoundation -[NSMutableArray sortedArrayFromRange:options:usingComparator:] + 60
15 CoreFoundation -[NSArray sortedArrayUsingSelector:] + 168
16 UIAccessibility ___57-[NSObject(AXPrivCategory) _accessibilityFindDescendant:]_block_invoke + 268
17 UIAccessibility ___96-[NSObject(AXPrivCategory) _accessibilityEnumerateAXDescendants:passingTest:byYieldingElements:]_block_invoke + 140
18 UIAccessibility -[NSObject _accessibilityEnumerateAXDescendants:passingTest:byYieldingElements:] + 244
19 UIAccessibility -[NSObject _accessibilityFindFirstAXDescendantPassingTest:byYieldingElements:] + 272
20 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityFindDescendant:] + 100
21 UIAccessibility __axuiElementForNotificationData + 276
22 UIAccessibility __massageAssociatedElementBeforePost + 36
23 UIAccessibility __UIAXBroadcastMainThread + 292
24 libdispatch.dylib __dispatch_call_block_and_release + 32
25 libdispatch.dylib __dispatch_client_callout + 20
26 libdispatch.dylib __dispatch_main_queue_drain + 980
27 libdispatch.dylib __dispatch_main_queue_callback_4CF + 44
28 CoreFoundation ___CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
29 CoreFoundation ___CFRunLoopRun + 1996
30 CoreFoundation _CFRunLoopRunSpecific + 572
31 GraphicsServices _GSEventRunModal + 164
32 UIKitCore -[UIApplication _run] + 816
33 UIKitCore _UIApplicationMain + 340
34 SwiftUI closure #1 (Swift.UnsafeMutablePointer<Swift.UnsafeMutablePointer<Swift.Int8>?>) -> Swift.Never in SwiftUI.(KitRendererCommon in _ACC2C5639A7D76F611E170E831FCA491)(Swift.AnyObject.Type) -> Swift.Never + 168
35 SwiftUI SwiftUI.runApp(A) -> Swift.Never + 100
36 SwiftUI static (extension in SwiftUI):SwiftUI.App.main() -> () + 180
I have a UIControl, and I want to make it behave like a custom UIAccessibilityElement.
UIControl *control = [[UIControl alloc] init];
control.isAccessibilityElement = NO;
CustomAccessibilityElement *elem = [[CustomAccessibilityElement alloc] initWithAccessibilityContainer:control];
elem.isAccessibilityElement = YES;
// some custom setting here
control.accessibilityElements = @[elem];
It worked well on an iPhone 13 (iOS 15.5.1), but crashed on an iPhone SE (iOS 15.4.1), and the crash message is:
"-[UIAccessibilityElement _addAccessibilityElementsAndOrderedContainersWithOptions:toCollection:]: unrecognized selector sent to instance 0x283b7c680"
Can you tell me the reason?
Thanks a lot.
accessibilityUserInputLabels is working fine with any view I have tried it on, meaning that the control can be toggled with the provided alternative names when using Voice Control.
When setting this property on any UIBarButtonItem though, it seems Voice Control ignores the alternative names provided by setting accessibilityUserInputLabels. For comparison, accessibilityLabel works perfectly when set on UIBarButtonItem.
Is anyone facing the same issue?
Using Xcode 16.0 (16A242) on iOS 18
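For reference, a minimal sketch of the configuration being described; the item title and the label strings are illustrative:

import UIKit

let sendItem = UIBarButtonItem(title: "Send",
                               style: .plain,
                               target: nil,
                               action: nil)
sendItem.accessibilityLabel = "Send message"                // announced by VoiceOver as expected
sendItem.accessibilityUserInputLabels = ["Send", "Submit"]  // Voice Control appears to ignore these on UIBarButtonItem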
I have an issue when unlocking the app from Single App mode: the device Dock pops up on top of my app, which disrupts the app's experience.
I have already done the MDM configuration, and the intended functionality works fine with the code below; the app successfully switches into Single App mode and back.
The sample code below reproduces the issue.
Tap on Lock: the completion block returns true and the app successfully switches into Single App mode.
Tap on Unlock: the completion block returns true and the app successfully switches out of Single App mode.
Now the device Dock has popped up on top of the app.
struct LockApp: App {
    var body: some Scene {
        WindowGroup {
            VStack {
                Button("Lock", systemImage: "lock") {
                    UIAccessibility.requestGuidedAccessSession(enabled: true) { success in
                        print("App has been locked", success)
                    }
                }
                .buttonStyle(.borderedProminent)
                Button("UnLock", systemImage: "lock.open") {
                    UIAccessibility.requestGuidedAccessSession(enabled: false) { success in
                        print("App has been unlocked", success)
                    }
                }
                .buttonStyle(.borderedProminent)
            }
        }
    }
}
Xcode Version: 16.0
Device: iPad Pro 12.9 inch 6th gen. OS: 18.1
Is this intended behaviour? Has anyone come across this issue?