The Accessibility UI server turns on right after the iPhone boots, starting with iOS 18 beta 3. Because of this issue the volume is forced to maximum and can't be changed.
Explore best practices for creating inclusive apps for users of Apple accessibility features and users from diverse backgrounds.
Hi,
we are encountering an issue while using VoiceOver in our app.
On a given page, VoiceOver runs through the items, but at some random point the VoiceOver volume decreases dramatically.
The only way to restore the original volume is to kill the app and launch it again.
It does not happen systematically and is pretty difficult to reproduce, but it does happen on a regular basis.
There is no interaction with other apps (music or navigation apps) and no interaction with Bluetooth or AirPlay devices.
Our code does not make any VoiceOver-related changes.
We are looking for any clue that could lead us to the root cause.
Thanks for the help!
I'm joining in on the FaceTime background setting I have been advocating for a long time: an Accessibility Lens that would let us change the background of FaceTime video calls. I want to make suggestions and comments on the features we would like to have. Is there somewhere on this forum to discuss the FaceTime ecosystem specifically?
I am using WKWebView and loading HTML content with a couple of tags. However, I do not see any numbers over the links when I say "Show Numbers" with Voice Control turned on.
Any help would be appreciated. I am testing with iOS 17.4.1.
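For reference, this is roughly how the content gets loaded (a simplified sketch; the view controller name and HTML are stand-ins for the real thing):

import UIKit
import WebKit

final class LinkedContentViewController: UIViewController {
    private let webView = WKWebView()

    override func viewDidLoad() {
        super.viewDidLoad()
        webView.frame = view.bounds
        webView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(webView)

        // Simplified stand-in for the HTML we actually load.
        let html = """
        <html><body>
          <p>Read the <a href="https://example.com/terms">terms</a> and the
             <a href="https://example.com/privacy">privacy policy</a>.</p>
        </body></html>
        """
        webView.loadHTMLString(html, baseURL: nil)
    }
}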
I am developing an application that needs to interact with active editable text boxes on macOS, similar to how Grammarly functions. Specifically, my application needs to:
Read and write text in the currently active editable text box where the keyboard input is directed.
Move the text cursor and select (mark) text within the text box.
Ideally, the solution should support cross-platform functionality, covering both macOS and Windows.
Does anyone know of any toolkits, libraries, or APIs that can facilitate this kind of functionality on macOS?
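On the macOS side, the closest I have found so far is the HIServices accessibility API; here is a minimal sketch of reading the focused element's text (assuming the app has been granted permission under System Settings > Privacy & Security > Accessibility):

import ApplicationServices

// Read the value of the text element that currently has keyboard focus.
func focusedEditableText() -> String? {
    let systemWide = AXUIElementCreateSystemWide()

    var focusedRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(systemWide,
                                        kAXFocusedUIElementAttribute as CFString,
                                        &focusedRef) == .success else { return nil }
    let focused = focusedRef as! AXUIElement

    var valueRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(focused,
                                        kAXValueAttribute as CFString,
                                        &valueRef) == .success else { return nil }
    return valueRef as? String
}

Writing text and moving the cursor appear to work the same way through AXUIElementSetAttributeValue on kAXValueAttribute and kAXSelectedTextRangeAttribute, but I would still like to hear what toolkits or higher-level libraries people use for this, especially anything that also covers Windows.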
Greetings,
I am currently conceptualizing an application designed to interface with CarPlay, enabling control over aftermarket automotive fragrance systems and ambient lighting within vehicles. Having perused your development guidelines, I am interested in understanding how my project might be classified within your framework. Specifically, I am exploring whether it would be appropriate to categorize this endeavor under the 'Driving Task' category, given its direct interaction with the vehicle environment.
Your insights on this matter would be greatly appreciated.
Best regards!
I have one project with an XYZ scheme and target. Localizable.strings sits under the XYZ target for localization. I want to create an ABC target (a duplicate of XYZ) and set up custom language support for it. Say XYZ has English, French, and German; I want Hindi, Japanese, and Chinese for ABC. I did the steps below:
I went to Manage Schemes and duplicated XYZ (the duplicate scheme is ABC).
I added a new localization file only for ABC (LocalizationForABC.strings), made sure it is reflected in the File Inspector -> Target Membership (only ABC selected), and also checked Build Phases -> Copy Bundle Resources (LocalizationForABC.strings is there).
When I run the ABC target under, say, French, it works fine, but when I build ABC after removing French from XYZ, ABC is broken and only runs in English.
Am I missing something here?
My iPhone's battery health was at 100% for the 4 months since I bought it. After updating to iOS 17.5 it is draining very quickly: it was at 100% when I updated, and now battery health drops about 1% every 4 days. I updated my iPhone on the 21st of May, and after a month it is at 92%. Any solution for this?
I have added a UI test that uses the newish app.performAccessibilityAudit() on each of the screens in my SwiftUI app.
I get a handful of test failures for various accessibility issues. Some of them are related to iOS issues (like the "legal" button in a map view not having a big enough tappable area; and, I think, a contrast issue with selected tabs in a tab bar, though maybe that is a false positive).
Two of the error types I get are around Text elements that use default font styles (like headline) but that apparently get clipped at times, with the message "Dynamic Type font sizes are unsupported".
For some of the errors it supplies a screenshot of the whole screen along with an image of the element that is triggering the error. For these text errors it only provides the overall image. Looking at the video, I saw that part of the test cranks the text size all the way up, and at the largest size or two the text was getting cut off despite using default settings for the frames on all the Text elements.
After a bit of trial and error I managed to get the text to wrap properly even at the largest of sizes and most of the related errors went away running on an iPhone simulator. I switched to an iPad simulator and the errors re-appeared even though the video doesn't show the text getting clipped.
Does anyone have any tips for actually fixing these issues? Or am I in false positive land here? Also, is there any way to get more specific info out of the test failures? Looking at the entire screen and trying to guess where the issue is kinda sucks.
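In case it helps, here is a reduced version of the audit test; I added the issue handler while trying to pull more detail out of the failures (my reading of the return value is that true means "ignore this issue"):

import XCTest

final class AccessibilityAuditTests: XCTestCase {
    func testDynamicTypeAudit() throws {
        let app = XCUIApplication()
        app.launch()

        // Restrict the audit to Dynamic Type checks and log details for every issue.
        try app.performAccessibilityAudit(for: .dynamicType) { issue in
            print("audit issue: \(issue.compactDescription)")
            print("  type: \(issue.auditType)")
            if let element = issue.element {
                print("  element: \(element.debugDescription)")
            }
            // Returning true would tell the audit to ignore this issue (e.g. a known false positive).
            return false
        }
    }
}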
About half the time or more when dictating text, if dictation is manually deactivated immediately after I finish speaking, the last word is duplicated.
For example, say you dictate a text message (without using Siri) using the microphone button on the keyboard, saying, ‘I’m on my way, be there soon.’
If you hit send or stop dictation as soon as you are done talking, the dictated text will read:
‘I’m on my way, Be there soon soon.’
I'm currently running iOS 18 beta 1 and have experienced this multiple times.
Hello, I'm trying to leverage Personal Voice to read a phrase in my iOS application. My implementation works correctly on an iPhone 15, but does not work when I run the iOS application on an M2 MacBook Air.
Here are some snippets from my implementation:
// This is how I request Personal Voice
// (personalVoices is a stored property so the result outlives this closure)
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    if status == .authorized {
        self.personalVoices = AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.voiceTraits.contains(.isPersonalVoice) }
    }
}
// This is how I'm attempting to read
let utterance = AVSpeechUtterance(string: textToRead)
if let voice = personalVoices.first {
    utterance.voice = voice
}
// synthesizer is a stored AVSpeechSynthesizer property so it isn't deallocated mid-speech
synthesizer.speak(utterance)
I get the following error messages when I try this:
Cannot use AVSpeechSynthesizerBufferCallback with Personal Voices, defaulting to output channel.
Caller does not have kTCCServiceVoiceBanking access to personal voices. No speech will be generated
Voice not allowed to render speech! Will not set up synthesizer. Bailing now
Any suggestions on how to mitigate this issue?
Hey Apple,
We (our customer support teams) have received feedback from customers complaining that their hearing devices (hearing aids) appear to be connected via MFi and the OS controls work, but the audio stream does not, and the app is unable to resolve a connection to the device via CBCentralManager.retrieveConnectedPeripherals(withServices:).
The issues appear to go away once the user unpairs and re-pairs the hearing devices under Accessibility > Hearing Devices (they might also need to uninstall and reinstall the app, as it gets stuck in an invalid state). However, we are unable to replicate the issue in our environments: it is intermittent, and once we have upgraded a device to iOS 17.5.1 we have no mechanism to revert it to an earlier version.
So far, we have received about 30 reports over the past 2 weeks. Most of our customers complain about the app not connecting to the devices, but the fact that the audio stream is not happening either could hint at a problem deeper than our app.
Are you aware of a problem affecting how MFi hearing devices are restored after the OS upgrade process?
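For reference, this is a simplified version of the lookup that comes back empty on affected devices (the service UUID below is a placeholder for the one our accessory actually exposes):

import CoreBluetooth

final class HearingDeviceLocator: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private let accessoryService = CBUUID(string: "0000FFF0-0000-1000-8000-00805F9B34FB") // placeholder

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        let connected = central.retrieveConnectedPeripherals(withServices: [accessoryService])
        print("connected hearing devices: \(connected)") // empty on affected devices until re-pairing
    }
}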
Hello everyone,
I'm currently working on an iOS application and I want to ensure it is as accessible as possible for all users, including those with disabilities. While I have a basic understanding of iOS accessibility features, I'm looking to deepen my knowledge and apply best practices comprehensively.
Specifically, I would like to know:
VoiceOver Optimization: What are the best practices for ensuring all UI elements are properly labeled and navigable with VoiceOver?
Color Contrast and Visual Design: How can I effectively check and adjust color contrast ratios to accommodate users with visual impairments?
Dynamic Type and Font Sizes: What are the recommended approaches for supporting Dynamic Type, and how can I ensure a consistent and readable layout across different text sizes?
Accessibility in Custom UI Elements: How can I make custom controls (e.g., custom buttons, sliders) accessible, especially when they are not standard UIKit components? (I've included a sketch of my current approach after this list.)
Testing Tools and Techniques: Which tools and techniques are most effective for QA to test accessibility on iOS?
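For context, here is roughly how I am exposing one of my custom controls today (a SwiftUI sketch; the control and its names are illustrative), and I would welcome pointers on whether this is the right pattern:

import SwiftUI

// Illustrative custom "volume" dial drawn with shapes rather than a standard control.
struct VolumeDial: View {
    @Binding var level: Double // 0...1

    var body: some View {
        Circle()
            .trim(from: 0, to: level)
            .stroke(Color.accentColor, lineWidth: 8)
            .frame(width: 80, height: 80)
            .accessibilityElement()
            .accessibilityLabel("Volume")
            .accessibilityValue("\(Int(level * 100)) percent")
            .accessibilityAddTraits(.isAdjustable)
            .accessibilityAdjustableAction { direction in
                switch direction {
                case .increment: level = min(level + 0.1, 1)
                case .decrement: level = max(level - 0.1, 0)
                default: break
                }
            }
    }
}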
Thank you in advance for your insights and advice.
Best regards,
Ale
My recently developed app, M4G Radio, shuts off when the screen times out. It does not do this on Android, only on the Apple side. Is there something I can do to remedy this?
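In case it is relevant: the app streams audio, so I am wondering whether it is simply being suspended when the screen locks. From what I have read, continued playback seems to require the "Audio, AirPlay, and Picture in Picture" background mode plus an active playback audio session, along these lines (a sketch of my current understanding, not yet verified):

import AVFoundation

// In addition to enabling the audio background mode on the target.
func configureBackgroundPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}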
I'd like to use the eye tracking feature in the latest iPadOS 18 update as more than an accessibility feature. i.e. another input modality that can be detected by event + enum checks similar to how we can detect and distinguish between touches and Apple pencil inputs. This might make it a lot easier to control and interact with iPad-based AR experiences that involve walking around, regardless of whether eye-tracking is enabled for accessibility. When walking, it's challenging to hold the device and interact with the screen with touch or pencil at all. Eye tracking + speech as input modalities could assist here.
Also, this would help us create non-immersive AR experiences that parallel visionOS experiences that use eye tracking.
I propose an API option for enabling eye tracking (and an optional in-app calibration dialogue), as well as a specific UIControl class that simply detects when the eye looks at the control, using the standard (begin/changed/end) events.
My specific use case is that I'd like to treat eye-tracking-enabled UI elements or game objects differently depending on whether something is looked at with the eyes.
For example, to select game objects while using speech recognition, suppose we have 4 buttons with the same name in 4 corners of the screen. Call them "corner" buttons. If I have my proposed invisible UI element for gaze detection, I can create 4 large rectangular regions on the screen. Then if the user says "select the corner" the system could parse this command and disambiguate between the 4 corners by checking which of the rectangular regions I'm currently looking at. (Note: the idea would be to make the gaze regions rather large to compensate for error.)
The above is just a simple example, but the advantage over other methods like dwell is that it could be a lot faster.
Another simple example:
Using the same rectangular regions, instead of speech input I could hold a button placed in just one spot on the screen and look around the screen with my gaze to produce a laser beam for some kind of game, or draw curves (which I might smooth out to reduce inaccuracy). It could also serve someone who does not have their hands available.
This would require us to have the ability to get the coordinates of the eye gaze, but otherwise the other approach of just opting to trigger uicontrol elements might work for coarse selection.
Would other developers find this useful as well? I'd like to propose this feature in Feedback Assistant, but I'm also opening up a little discussion here in case someone sees this.
In short, I propose:
a formal eye-tracking API for iPadOS 18+ that allows for turning on/off the tracking within the app, with the necessary user permissions
the API should produce begin/changed/ended events similar to the existing events in UIKit, including screen coordinates. There should be a way to identify that an event came from eye-tracking.
alternatively, we should have at minimum an invisible UIControl subclass that can detect when the eyes enter/leave the region.
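To make the shape of this concrete, here is a purely hypothetical sketch of the UIControl-style piece; none of these types or callbacks exist in UIKit today, and the names are invented for illustration only:

import UIKit

// Hypothetical: an invisible control that reports began/changed/ended-style events
// as the user's gaze enters, moves within, and leaves its bounds.
final class GazeRegion: UIControl {
    var onGazeBegan: ((CGPoint) -> Void)?   // screen-space gaze coordinates (the optional capability)
    var onGazeChanged: ((CGPoint) -> Void)?
    var onGazeEnded: (() -> Void)?
}

// "Four corners" example: one large GazeRegion per corner; when the user says
// "select the corner", pick whichever region most recently reported a gaze event.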
Hello, I can't update to the iOS 18 beta; the beta option doesn't even appear for me. I have an iPhone 12 Pro.
Is there a way for developers to generate IPA notation from user voice input like in the Settings app (Accessibility > VoiceOver > Speech > Pronunciations)?
Thought this might be a useful option for AAC apps.
I'm waiting to buy my next Apple Watch until I can take calls on it. That can only work if the audio goes through my hearing aids like it does on the iPhone.
I build apps that act as screen readers to 1) add Vim motions everywhere on macOS, 2) click (and more) AX Elements through the keyboard, and 3) scroll through the keyboard. This works extremely well with native apps. With non-native apps, I need to blast them with some extra AX attributes (AXManualAccessibility, AXEnhancedUserInterface) to get them to expose their AX Elements. But there are a couple of apps that I can't get to expose their AX Elements programmatically. The weird thing is that as soon as I start VoiceOver, those apps open up; for some of them, if I go through their AX Elements with the Accessibility Inspector, they also start opening up. So I'm wondering: is there a publicly known way that I'm missing to open up those apps, or is Apple using private APIs? Is there any way I could make my apps behave like VoiceOver or the Accessibility Inspector to force those recalcitrant apps to open up?
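For reference, this is the kind of nudge I currently send (it works for most non-native apps, but not for the couple of holdouts mentioned above):

import ApplicationServices

// Ask a running app (by pid) to expose its accessibility tree.
// Electron/Chromium-based apps look for "AXManualAccessibility";
// some others respond to "AXEnhancedUserInterface".
func requestAccessibilityTree(for pid: pid_t) {
    let app = AXUIElementCreateApplication(pid)
    _ = AXUIElementSetAttributeValue(app, "AXManualAccessibility" as CFString, kCFBooleanTrue)
    _ = AXUIElementSetAttributeValue(app, "AXEnhancedUserInterface" as CFString, kCFBooleanTrue)
}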
Thanks in advance.
Hello everyone. Is iPadOS 18's eye-tracking feature available on the 2018 iPad Pro 12.9"? If not, where can I find the list of devices on which eye tracking is available?