Question:
Hi everyone,
I'm developing a Vision Pro app using the latest visionOS 2, and I've encountered some issues with the new hand gestures introduced in this update. My app is designed to display a UI element when a user's palm is detected. However, the new hand gestures for navigating key functions like Home View, Control Center, and adjusting the volume are interfering with my app's functionality.
What I'm Trying to Achieve
Detect when a user's palm is open and display a UI element.
Ensure that my app's custom hand gestures are not disturbed by the new default gestures in visionOS 2.
Problem
The new hand gestures in visionOS 2 (such as those for Home View, Control Center, and volume adjustment) are activating while my app is open, causing disruptions to my app's functionality. I want to disable these system-level gestures when my app is running.
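For context, the palm detection itself looks roughly like the sketch below, using ARKit's HandTrackingProvider in an immersive space. The joint choice and the 0.15 m threshold are just illustrative assumptions, not my exact code.
import ARKit
import SwiftUI
import simd

// Minimal sketch of the palm detection, assuming an ImmersiveSpace and the
// hand-tracking entitlement. The "open palm" heuristic (distance from wrist
// to middle fingertip) and the 0.15 m threshold are illustrative only.
@MainActor
final class PalmDetector: ObservableObject {
    @Published var isPalmOpen = false

    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    func start() async {
        guard HandTrackingProvider.isSupported else { return }
        do {
            try await session.run([handTracking])
        } catch {
            print("Failed to start hand tracking: \(error)")
            return
        }

        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

            // Rough heuristic: if the middle fingertip is far from the wrist,
            // treat the hand as an open palm and show the UI element.
            let wrist = skeleton.joint(.wrist).anchorFromJointTransform.columns.3
            let tip = skeleton.joint(.middleFingerTip).anchorFromJointTransform.columns.3
            let spread = simd_distance(SIMD3(wrist.x, wrist.y, wrist.z),
                                       SIMD3(tip.x, tip.y, tip.z))
            isPalmOpen = spread > 0.15
        }
    }
}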
I'm waiting to buy my next Apple Watch until I can take calls on it. That can only work if it goes through my hearing aids like the iPhone does.
Hello everyone. Is iPadOS 18's Eye Tracking feature available on the iPad Pro 12.9" (2018)? If not, where can I find the list of compatible devices on which Eye Tracking is available?
Hello,
When I run the following view on my iPhone with VoiceOver enabled, I encounter an issue with the VoiceOver cursor when I perform the following steps:
Move the VoiceOver cursor to the TabView dots for paging.
Swipe up with one finger to go to the next tab.
--> The TabView moves to the next tab.
--> The VoiceOver cursor jumps up to the tab.
However, instead of the actual tab, the previous tab is shown in the TabView again.
You can also see that the border of the VoiceOver cursor extends into the previous tab.
I suspect this happens because, despite the .clipped() modifier, the size of the image remains the same and extends into the previous tab.
struct ContentView: View {
    var body: some View {
        VStack {
            TabView {
                ForEach(1..<6) { index in
                    Button(
                        action: { },
                        label: {
                            ZStack {
                                GeometryReader { geo in
                                    Image(systemName: "\(index).circle")
                                        .resizable()
                                        .scaledToFill()
                                        .frame(width: geo.size.width)
                                        .clipped()
                                }
                            }
                        }
                    )
                    .buttonStyle(PlainButtonStyle())
                    .padding([.bottom], 50)
                }
            }
            .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
            .indexViewStyle(PageIndexViewStyle(backgroundDisplayMode: .always))
        }
        .padding()
    }
}
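One variant I'm experimenting with (just a sketch; I haven't verified that it changes the frame VoiceOver uses) also gives the image an explicit height and constrains the hit-test shape inside the GeometryReader:
// Sketch only: constrain the visible and interactive area to the tab's frame.
// I don't know yet whether this also shrinks the VoiceOver cursor's frame.
GeometryReader { geo in
    Image(systemName: "\(index).circle")
        .resizable()
        .scaledToFill()
        .frame(width: geo.size.width, height: geo.size.height)
        .clipped()
        .contentShape(Rectangle())
}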
How can I fix this?
Best regards
Pawel
I build apps that act as screen readers to 1) add Vim motions everywhere on macOS, 2) click (and more) AX Elements through the keyboard, and 3) scroll through the keyboard. This works extremely well with native apps. With non-native apps, I need to blast them with some extra AX attributes (AXManualAccessibility, AXEnhancedUserInterface) to get them to expose their AX Elements. There are a couple of apps, though, that I can't get to expose their AX Elements programmatically. The weird thing is that as soon as I start VoiceOver, those apps open up. For some of them, going through their AX Elements with the Accessibility Inspector also gets them to open up. So I'm wondering: is there a publicly known way that I'm missing to open up those apps, or is Apple using private APIs? Is there any way I could make my apps behave like VoiceOver or the Accessibility Inspector to force those recalcitrant apps to open up?
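For reference, this is roughly how I set those extra attributes today (simplified sketch, given the target app's pid):
import ApplicationServices

// Simplified sketch: set the extra accessibility attributes on a running app,
// identified by its process identifier.
func enableEnhancedAccessibility(for pid: pid_t) {
    let appElement = AXUIElementCreateApplication(pid)
    for attribute in ["AXManualAccessibility", "AXEnhancedUserInterface"] {
        let result = AXUIElementSetAttributeValue(appElement, attribute as CFString, kCFBooleanTrue)
        if result != .success {
            print("Failed to set \(attribute): \(result)")
        }
    }
}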
thanks in advance.
Hi,
I'd like to mark views that are inside a LazyVStack as headers for VoiceOver (make them appear in the headings rotor).
In a VStack, you just have to add .accessibilityAddTraits(.isHeader) to your header view. However, if your view is in a LazyVStack, that won't work when the view is not visible. As its name implies, LazyVStack is lazy, so that makes sense.
There is very little information online about system rotors, but it seems you are supposed to use .accessibilityRotor() with the headings system rotor (.accessibilityRotor(.headings)) outside of the LazyVStack. Something like the following.
.accessibilityRotor(.headings) {
    ForEach(entries) { entry in
        // entry.id must be the same as the id of the SwiftUI view it is about
        AccessibilityRotorEntry(entry.name, id: entry.id)
    }
}
It kind of works, but only kind of. When using .accessibilityAddTraits(.isHeader) in a VStack, the view is in the headings rotor as soon as you change screens. However, when using .accessibilityRotor(.headings), the headers (headings?) are not in the headings rotor at the time the screen appears. You have to move the accessibility focus inside the screen before your headers show up.
I'm a beginner in regards to VoiceOver, so I don't know how a blind user used to VoiceOver would perceive this, but it feels to me that having to move the focus before the headers are in the headings rotor would mean some users would miss them.
So my question is: is there a way to have headers inside a LazyVStack (which are not necessarily visible at first) appear in the headings rotor as soon as the screen appears? (be it using .accessibilityRotor(.headings) or anything else)
The "SwiftUI Accessibility: Beyond the basics" talk from WWDC 2021 mentions custom rotors, not system rotors, but that should be close enough. It mentions that for accessibilityRotor to work properly it has to be applied on an accessibility container, so just in case I tried to move my .accessibilityRotor(.headings) to multiple places, with and without the accessibilityElement(children: .contain) modifier, but that did not seem to change the behavior (and I could not understand why accessibilityRotor could not automatically make the view it is applied on an accessibility container if needed).
Also, a related question: when using .accessibilityRotor(.headings) on a screen, is it fine to mix uses of .accessibilityRotor(.headings) and .accessibilityAddTraits(.isHeader)? In a screen with multiple types of content (something like ScrollView { VStack { MyHeader(); LazyVStack { /* some content */ }; LazyVStack { /* something else */ } } }), having to declare all headers in one place would make code reusability harder.
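To make that concrete, the kind of screen I mean looks roughly like this (the titles and section contents are placeholders):
import SwiftUI

// Hypothetical screen: a regular header plus two lazy sections, each of which
// contains headers that should also appear in the headings rotor.
struct MixedContentScreen: View {
    var body: some View {
        ScrollView {
            VStack {
                Text("Overview")
                    .font(.title)
                    .accessibilityAddTraits(.isHeader)   // not inside a LazyVStack, so this works
                LazyVStack {
                    Text("Section A")
                        .accessibilityAddTraits(.isHeader)   // only picked up while visible
                    // first section content
                }
                LazyVStack {
                    Text("Section B")
                        .accessibilityAddTraits(.isHeader)   // only picked up while visible
                    // second section content
                }
            }
        }
    }
}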
Thanks
Is there a way for developers to generate IPA notation from user voice input like in the Settings app (Accessibility > VoiceOver > Speech > Pronunciations)?
Thought this might be a useful option for AAC apps.
Hi, I ran into an issue with accessibility in SwiftUI. Has anyone seen this before?
I set an accessibilityValue on the button, but when the focus moves to the button, VoiceOver always says "1 update item". I want VoiceOver to say the value I set. How can I achieve this?
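A simplified sketch of the setup (the label and value strings here are placeholders):
import SwiftUI

// Simplified sketch: the button has an explicit accessibilityValue,
// but VoiceOver announces "1 update item" instead.
struct UpdatesButton: View {
    var body: some View {
        Button("Updates") {
            // handle tap
        }
        .accessibilityLabel("Updates")
        .accessibilityValue("3 new updates available")   // the value I expect VoiceOver to read
    }
}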
Hello,
I have to develop an application for a customer running on iOS (iPhone). The app needs to read data from and send data to a custom HID device connected via USB-C.
As I read the docs, I thought HIDDriverKit (https://developer.apple.com/documentation/hiddriverkit) would be perfect for that, but it is only available on macOS.
Is there anything similar for iOS?
Or can I go with DriverKit (https://developer.apple.com/documentation/driverkit)? Are there any code examples? Has anybody done something like that before?
Is it possible to access a custom HID device on iOS at all?
Any help appreciated!!!
Immanuel
The triple-click accessibility shortcut is assigned to trigger the AssistiveTouch icon. This very rarely works as intended and usually triggers the shut down screen instead.
I have tried assigning different shortcuts, but the shut down screen is always triggered.
On restarting the phone, it seems to work a few more times as intended, but then it reverts to the bugged state again.
I created a feedback report (FB13459492) for this in December 2023 but haven't received any reply. I recently updated it with a screen recording.
Very excited about the new eye tracking in iPadOS and iOS 18. Some general eye tracking questions.
Does the initial iPadOS 18 beta include eye tracking? If not, in which beta will it be included?
Do developers need to do anything to their app for users to control their app using eye tracking?
Will all standard UIKit and SwiftUI views and controls work with eye tracking without code changes?
Will custom subclasses of UIControl work with eye tracking without code changes?
Looking forward to testing eye tracking.
Is there a way to identify the process that is handling a global keyboard shortcut? Let's say I press CMD-ALT-L in any application: some file shows up on disk, and there is no visual indication of any kind of where it's coming from. How do I find the offender?
I'm wondering if we can expect an enhancement to the text-to-speech functionality anytime soon. AVSpeechSynthesizer is a little outdated compared to many speech synthesizers nowadays, which sound more natural and can distinguish between acronyms and words. It would be an awesome upgrade, and it seems like something Apple could do very well. Thanks.
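For context, the kind of basic usage I mean is below; it works fine, but the result sounds dated next to newer synthesizers:
import AVFoundation

// Basic AVSpeechSynthesizer usage, for reference. "NASA" is the kind of
// acronym where smarter pronunciation handling would help.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "NASA launched the rocket at 9 a.m.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
synthesizer.speak(utterance)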
I am currently developing a Share Extension for iOS and have run into an issue where the NSExtensionActionWantsFullScreenPresentation attribute doesn't seem to be working as expected. I've set this property to true in my Info.plist.
I'm trying to access the Persona avatar. Camera access permission has been set and allowed manually via Settings > Privacy > Persona Virtual Camera > MY_APP => allowed.
I would like to load the Persona image into an image view.
Looking forward to a solution.
Thanks in advance
Is there any way to give blind users free access to features that everyone else has to pay for?
We currently have an odd issue where VoiceOver spells a word letter by letter, while the same word is spoken as a whole word for other items.
The app is in German.
I have a view in SwiftUI whose button traits are removed; then a label "Start Tab 1 von 5" is added. "Tab" is spoken as a whole word here, all fine.
If I change the label to "Tab-Schaltfläche" or, for example, "SimplyGo Tab 3 von 5", then "Tab" is spoken as "T A B", letter by letter. Is there a way to force VoiceOver to speak it as a whole word?
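One workaround I'm considering (untested, so this is only a sketch) is to attach an explicit pronunciation to the word "Tab" through the accessibility speech IPA attribute on an AttributedString label:
import SwiftUI

// Untested sketch: give "Tab" an explicit pronunciation so VoiceOver does not
// spell it out letter by letter. The IPA value here is only an illustrative guess.
struct TabLabelView: View {
    private var spokenLabel: AttributedString {
        var label = AttributedString("SimplyGo Tab 3 von 5")
        if let range = label.range(of: "Tab") {
            label[range].accessibilitySpeechIPANotation = "tɛp"
        }
        return label
    }

    var body: some View {
        Text("SimplyGo Tab 3 von 5")
            .accessibilityLabel(Text(spokenLabel))
    }
}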
Hi everyone,
I'm developing a visionOS app using SwiftUI and RealityKit. I'm facing a challenge with accessing the children of a USDZ model.
Scenario:
I can successfully load and display a USDZ model in my RealityView.
When I import the model into Reality Composer Pro, I can access and manipulate its individual parts (children) using the scene hierarchy.
However, if I directly load the USDZ model from the Navigation tab in my project, I cannot seem to access the children programmatically.
Question:
Is there a way to access and manipulate the children of a USDZ model loaded directly from the Navigation tab in SwiftUI for visionOS?
Additional Information:
I've explored using Entity(named: "childName", in: realityKitContentBundle), but this only works for the main entity.
I'm open to alternative approaches if directly accessing children isn't feasible.
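To make the question concrete, this is roughly what I'd like to end up with (the entity names are placeholders, and findEntity(named:) is just one way I imagine reaching a child):
import SwiftUI
import RealityKit
import RealityKitContent

// Sketch of the goal: load the USDZ as one entity, then reach a child by name.
// "Robot" and "Robot_Arm" are placeholder names from my scene hierarchy.
struct ModelView: View {
    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "Robot", in: realityKitContentBundle) {
                content.add(model)

                // What I'd like to do: reach an individual child by name and manipulate it.
                if let arm = model.findEntity(named: "Robot_Arm") {
                    arm.position.y += 0.1
                }
            }
        }
    }
}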
Thanks in advance for any insights or suggestions!
Best,
Siddharth
Hello, I have an issue with "shortcut keys" that I have added to a ViewController. When I navigate to the ViewController, I cannot use the "shortcut keys" right away. I first need to tap anywhere on the screen before the shortcut works.
I use this code:
- (void)viewDidLoad {
    [super viewDidLoad];

    UIKeyCommand *commandA = [UIKeyCommand keyCommandWithInput:@"a" modifierFlags:UIKeyModifierCommand action:@selector(handleCommand:)];
    [self addKeyCommand:commandA];
}

- (void)handleCommand:(UIKeyCommand *)keyCommand {
    // Hangup call.
    [self btnIncomingAnswerPressed:NULL];
}
I am not sure why this happens, but it would be really good if a user could use a shortcut key as soon as they navigate to the ViewController, instead of first having to tap anywhere in the app.
Hi everyone,
I'm currently developing an application for visionOS and I'm interested in implementing Dwell Control to improve accessibility for users with limited mobility. Specifically, I would like to include a toggle within my app's interface that allows users to enable or disable Dwell Control at the app level.
I've gone through the visionOS documentation and the general accessibility guidelines, but I couldn't find detailed information on how to programmatically enable or disable Dwell Control within an app.
Here are my main questions:
Is it possible to programmatically enable or disable Dwell Control from within a visionOS app?
If so, what are the specific API calls or methods needed to achieve this functionality?
Are there any best practices or additional resources for implementing Dwell Control in visionOS that you could point me to?
Thanks!