Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


AVSpeechSynthesizer Broken on iOS 17
I just upgraded to iOS 17 and it looks like AVSpeechSynthesizer is now broken. I noticed that when feeding certain strings to AVSpeechUtterance, it just flat out stops after speaking only a portion of the string. For example, I fed it a string of approx. 1200 words and it speaks up to around 300 words or so and then just stops. The synthesizer delegate method -speechSynthesizer:didFinishSpeechUtterance: is called when this happens, as if this were the end, even though it is not even close to being finished. This was working fine on iOS 16. FWIW, I create the AVSpeechUtterance with -initWithString:.
15 replies · 8 boosts · 4.3k views · Sep ’23
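One workaround that is sometimes suggested for this kind of truncation (an assumption, not a confirmed fix) is to split long text into shorter utterances and let the synthesizer's internal queue play them back to back. A minimal sketch, where speechChunks is a hypothetical helper and the 500-character limit is an arbitrary choice:

```swift
import Foundation

// Hypothetical helper: split text into chunks of at most maxLength
// characters, breaking only at sentence boundaries.
func speechChunks(from text: String, maxLength: Int = 500) -> [String] {
    var chunks: [String] = []
    var current = ""
    // Walk the text sentence by sentence using Foundation's tokenizer.
    text.enumerateSubstrings(in: text.startIndex..<text.endIndex,
                             options: .bySentences) { sentence, _, _, _ in
        guard let sentence = sentence else { return }
        // Flush the current chunk before it would exceed maxLength.
        if !current.isEmpty && current.count + sentence.count > maxLength {
            chunks.append(current)
            current = ""
        }
        current += sentence
    }
    if !current.isEmpty { chunks.append(current) }
    return chunks
}
```

Each chunk would then be queued as its own utterance, e.g. `for chunk in speechChunks(from: longText) { synthesizer.speak(AVSpeechUtterance(string: chunk)) }` — speak(_:) queues utterances, so playback stays continuous apart from any per-utterance pre/post delays.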
AVSpeechSynthesizer is broken on iOS 17 in Xcode 15
When you initialize AVSpeechSynthesizer as a View property in a SwiftUI project in Xcode 15 with the iOS 17 simulator, you get some comments in the console:

Failed to get sandbox extensions
Query for com.apple.MobileAsset.VoiceServicesVocalizerVoice failed: 2
#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder

When you try to run an utterance inside a Button as synthesizer.speak(AVSpeechUtterance(string: "iOS 17 broke TextToSpeech")), you get an endless stream of warnings that repeats on and on in the console like this:

AddInstanceForFactory: No factory registered for id <CFUUID 0x60000024f200> F8BB1C28-BAE8-11D6-9C31-00039315CD46
Cannot find executable for CFBundle 0x600003b2cd20 </Library/Developer/CoreSimulator/Volumes/iOS_21A328/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/TextToSpeechMauiSupport.framework> (not loaded)
Failed to load first party audio unit from TextToSpeechMauiSupport.framework
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)" (repeated over and over)
Couldn't find audio unit for request SSML Length: 40, Voice: [AVSpeechSynthesisProviderVoice 0x600002127e30] Name: Samantha, Identifier: com.apple.voice.compact.en-US.Samantha, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
VoiceProvider: Could not start synthesis for request SSML Length: 40, Voice: [AVSpeechSynthesisProviderVoice 0x600002127e30] Name: Samantha, Identifier: com.apple.voice.compact.en-US.Samantha, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600003709680] iOS 17 broke TextToSpeech language: en-US footprint: compact rate: 0.500000 pitch: 1.000000 volume: 1.000000
Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha
CPU is under pressure (more than 100%).

AVSpeechSynthesizer doesn't speak. All works fine on iOS 16. The code of the View:

import SwiftUI
import AVFoundation

struct ContentView: View {
    let synthesizer = AVSpeechSynthesizer()

    var body: some View {
        VStack {
            Button {
                synthesizer.speak(AVSpeechUtterance(string: "iOS 17 broke TextToSpeech"))
            } label: {
                Text("speak")
            }
            .buttonStyle(.borderedProminent)
        }
        .padding()
    }
}

#Preview {
    ContentView()
}

On a real device nothing happens at all. The same happens in my production app; I have so many crashes related to TextToSpeech and iOS 17. What's going on?
32 replies · 17 boosts · 11k views · Sep ’23
AVAudioMixerNode not mixing >1 node with voice processing formats
Hi there, I'm having some trouble with AVAudioMixerNode only working when there is a single input, and outputting silence or a very quiet buzzing when more than one input node is connected. My setup has voice processing enabled, input going to a sink, and N source nodes going to the main mixer node, which goes to the output node. In all cases I am connecting nodes in the graph with the same declared format: 48kHz, 1 channel, Float32 PCM. This works great for 1 source node, but as soon as I add a second it breaks. I can reproduce this behaviour in the SignalGenerator sample when the same format is used everywhere. Again, it works fine with 1 source node even in this configuration, but add another and there's silence. Am I doing something wrong with formats here? Is this expected? As I understood it, with voice processing on and use of a mixer node, I should be able to use my own format essentially everywhere in my graph? My modified SignalGenerator repro example follows:

import Foundation
import AVFoundation

// True replicates my real app's behaviour, which is broken.
// You can remove one source node connection
// to make it work even when this is true.
let showBrokenState: Bool = true

// SignalGenerator constants.
let frequency: Float = 440
let amplitude: Float = 0.5
let duration: Float = 5.0
let twoPi = 2 * Float.pi

let sine = { (phase: Float) -> Float in
    return sin(phase)
}
let whiteNoise = { (phase: Float) -> Float in
    return ((Float(arc4random_uniform(UINT32_MAX)) / Float(UINT32_MAX)) * 2 - 1)
}

// My "application" format.
let format: AVAudioFormat = .init(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 48000,
                                  channels: 1,
                                  interleaved: true)!

// Engine setup.
let engine = AVAudioEngine()
let mainMixer = engine.mainMixerNode
let output = engine.outputNode
try! output.setVoiceProcessingEnabled(true)
let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(format.sampleRate)
let inputFormat = format

var currentPhase: Float = 0
let phaseIncrement = (twoPi / sampleRate) * frequency

let srcNodeOne = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = sine(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi { currentPhase -= twoPi }
        if currentPhase < 0.0 { currentPhase += twoPi }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

let srcNodeTwo = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = whiteNoise(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi { currentPhase -= twoPi }
        if currentPhase < 0.0 { currentPhase += twoPi }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

engine.attach(srcNodeOne)
engine.attach(srcNodeTwo)
engine.connect(srcNodeOne, to: mainMixer, format: inputFormat)
engine.connect(srcNodeTwo, to: mainMixer, format: inputFormat)
engine.connect(mainMixer, to: output, format: showBrokenState ? inputFormat : outputFormat)

// Put the input node to a sink just to match the formats and make VP happy.
let sink: AVAudioSinkNode = .init { timestamp, numFrames, data in
    .zero
}
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: showBrokenState ? inputFormat : outputFormat)

mainMixer.outputVolume = 0.5

try! engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(duration), false)
engine.stop()
2 replies · 1 boost · 1.1k views · Sep ’23
AVAudioEngine: audio input does not work on iOS 17 simulator
Hello, I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on the simulator. inputNode has a 0-channel, 0 kHz input format, and connecting the input node to any node or installing a tap on it fails systematically. What we tested:

Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on the iOS 17.0 simulator with Xcode 15.
Everything works fine on an iOS 17.0 device with Xcode 15.

More details on this here: https://github.com/Fesongs/InputNodeFormat Any idea on this? Something I'm missing? Thanks for your help 🙏 Tom PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here 😉
6 replies · 5 boosts · 2.6k views · Sep ’23
Add icon to DEXT based on AudioDriverKit
Dear Sirs, I'd like to add an icon to my audio driver based on AudioDriverKit. This icon should show up to the left of my audio device in the Audio Devices dialog. For an Audio Server plug-in I managed to do this using the property kAudioDevicePropertyIcon and CFBundleCopyResourceURL(...), but how would you do this with AudioDriverKit? Should I use IOUserAudioCustomProperty or IOUserAudioControl, and how would I refer to the bundle? Is there an example available somewhere? Thanks and best regards, Johannes
6 replies · 0 boosts · 836 views · Sep ’23
Play sounds during an active call on iOS 17
I made a SwiftUI app where I use AVAudioSession, AVAudioPlayer and CallKit. I want to play small sounds when pressing a button during the active call, but apparently it is not possible on iOS 17. This was working on previous iOS versions. I have already tried all the different audio session categories and modes and nothing seems to work. I tried to find change logs or relevant docs/issues talking about this but I was unable to find any reference. Expected: init call, play sound, hear sound. Actual: init call, play sound, hear nothing. Without the active call, I can hear the sound. Am I doing or understanding something wrong? What could possibly be happening? Thank you, and I would appreciate it if someone can provide an insight on this!
3 replies · 3 boosts · 1k views · Oct ’23
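For comparison, a session configuration that is sometimes tried for in-call playback looks like the sketch below. The category, mode, and option names are real AVFoundation API; whether this particular combination restores playback during a CallKit call on iOS 17 is an open assumption, not a verified fix:

```swift
import AVFoundation

// Sketch: ask to mix with (and duck under) the call audio instead of
// taking over the session, which CallKit owns during an active call.
func configureInCallPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.mixWithOthers, .duckOthers])
    try session.setActive(true)
}
```

If this configuration is rejected or ignored mid-call, that would point to an iOS 17 behavior change around session arbitration rather than a category mistake.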
Color conversion matrix YCbCr422 and YCbCr420 10 bit to RGB
I need to know the correct color conversion matrices for converting YCbCr422 and YCbCr420 10-bit video range sample buffers (BT.2020 color space) to RGB. The AVFoundation framework mentions AVVideoYCbCrMatrix_ITU_R_2020, which is a string constant, but I need the full matrix that can be used to perform the color conversion. I have this matrix for full-range BT.2020; I am not sure if it is correct, or what the correct way is to adapt it to video range:

let colorMatrixBT2020_fullRange = ColorConversion(
    matrix: matrix_float3x3(columns: (
        simd_float3(1.0, 1.0, 1.0),
        simd_float3(0.000, -0.11156702/0.6780, 1.8814),
        simd_float3(1.4746, -0.38737742/0.6780, 0.000))),
    offset: vector_float3(0.0, -0.5, -0.5))
0 replies · 0 boosts · 368 views · Oct ’23
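For reference, the full-range matrix in the post above can be derived from the BT.2020 luma coefficients, and a video-range variant follows by rescaling. A sketch of the derivation (the 10-bit video-range code points 64/940/960 come from Rec. ITU-R BT.2020; the ColorConversion layout RGB = M × (YCbCr + offset) is assumed from the post, and the normalization convention should be verified against a known test pattern):

```swift
// BT.2020 luma coefficients (Rec. ITU-R BT.2020).
let kr = 0.2627
let kb = 0.0593
let kg = 1.0 - kr - kb                        // 0.6780

// Full-range coefficients, matching the matrix in the post
// (signals normalized to [0,1], offset (0, -0.5, -0.5)).
let crToR = 2 * (1 - kr)                      // 1.4746
let cbToB = 2 * (1 - kb)                      // 1.8814
let cbToG = -2 * kb * (1 - kb) / kg           // ≈ -0.16455
let crToG = -2 * kr * (1 - kr) / kg           // ≈ -0.57135

// 10-bit video range: luma occupies code points 64...940 and
// chroma 64...960 centered on 512 (values normalized by 1023).
let lumaScale    = 1023.0 / (940.0 - 64.0)    // ≈ 1.1678
let chromaScale  = 1023.0 / (960.0 - 64.0)    // ≈ 1.1417
let lumaOffset   = -64.0 / 1023.0
let chromaOffset = -512.0 / 1023.0

// Video-range matrix: scale the Y column by lumaScale, the Cb/Cr
// columns by chromaScale, and replace the offset vector:
//   offset    = (lumaOffset, chromaOffset, chromaOffset)
//   column Y  = (lumaScale, lumaScale, lumaScale)
//   column Cb = (0, cbToG * chromaScale, cbToB * chromaScale)
//   column Cr = (crToR * chromaScale, crToG * chromaScale, 0)
```

So for 10-bit video range the Cr→R term becomes roughly 1.4746 × 1.1417 ≈ 1.684, and similarly for the other chroma terms.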
I keep getting crashes on my M1 Mac while using Logic Pro X
Translated Report (Full Report Below)

Process: Logic Pro X [1524]
Path: /Applications/Logic Pro X.app/Contents/MacOS/Logic Pro X
Identifier: com.apple.logic10
Version: 10.7.7 (5762)
Build Info: MALogic-5762000000000000~2 (1A85)
App Item ID: 634148309
App External ID: 854029738
Code Type: ARM-64 (Native)
Parent Process: launchd [1]
User ID: 502
Date/Time: 2023-10-10 12:52:02.8675 +0100
OS Version: macOS 13.0 (22A380)
Report Version: 12
Anonymous UUID: D3A4AE8C-2CA2-CC80-A569-39459CA10192
Time Awake Since Boot: 4400 seconds
System Integrity Protection: enabled

Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000010
Exception Codes: 0x0000000000000001, 0x0000000000000010
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [1524]

VM Region Info: 0x10 is not in any region. Bytes before following region: 105553518919664
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
---> MALLOC_NANO (reserved) 600018000000-600020000000 [128.0M] rw-/rwx SM=NUL ...(unallocated)

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 Logic Pro X 0x1045c2794 0x10412c000 + 4810644
1 Logic Pro X 0x1045c273c 0x10412c000 + 4810556
2 Logic Pro X 0x1045c37c8 0x10412c000 + 4814792
3 Logic Pro X 0x10492bc40 0x10412c000 + 8387648
4 Logic Pro X 0x10492bd30 0x10412c000 + 8387888
5 Logic Pro X 0x1045c23d0 0x10412c000 + 4809680
6 Logic Pro X 0x1049838ac 0x10412c000 + 8747180
7 Logic Pro X 0x10498340c 0x10412c000 + 8745996
8 Logic Pro X 0x1044feb18 0x10412c000 + 4008728
9 CoreFoundation 0x191222fd0 NSDICTIONARY_IS_CALLING_OUT_TO_A_BLOCK + 24
10 CoreFoundation 0x19125f4b4 -[__NSDictionaryM enumerateKeysAndObjectsWithOptions:usingBlock:] + 212
11 Logic Pro X 0x104ae37b4 0x10412c000 + 10188724
12 Logic Pro X 0x1044fe7ac 0x10412c000 + 4007852
13 Logic Pro X 0x104ae9eb8 0x10412c000 + 10215096
14 Logic Pro X 0x10498987c 0x10412c000 + 8771708
15 Logic Pro X 0x104989434 0x10412c000 + 8770612
16 Logic Pro X 0x10521497c 0x10412c000 + 17729916
17 Foundation 0x19219c67c __NSFireTimer + 104
18 CoreFoundation 0x191277578 CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION + 32
19 CoreFoundation 0x191277220 __CFRunLoopDoTimer + 940
20 CoreFoundation 0x191276d78 __CFRunLoopDoTimers + 356
21 CoreFoundation 0x19125c760 __CFRunLoopRun + 1896
22 CoreFoundation 0x19125b8a4 CFRunLoopRunSpecific + 612
23 HIToolbox 0x19a8cf3bc RunCurrentEventLoopInMode + 292
24 HIToolbox 0x19a8cf200 ReceiveNextEventCommon + 672
25 HIToolbox 0x19a8cef48 _BlockUntilNextEventMatchingListInModeWithFilter + 72
26 AppKit 0x1944b4630 _DPSNextEvent + 632
27 AppKit 0x1944b37c0 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 728
28 Logic Pro X 0x1055198b8 0x10412c000 + 20895928
29 AppKit 0x1944a7bf0 -[NSApplication run] + 464
30 AppKit 0x19447f058 NSApplicationMain + 880
31 Logic Pro X 0x104a6a7a8 0x10412c000 + 9693096
32 dyld 0x190e53e50 start + 2544

Thread 1:: caulk.messenger.shared:17
0 libsystem_kernel.dylib 0x19113ed6c semaphore_wait_trap + 8
1 caulk 0x19a5f6cfc caulk::mach::semaphore::wait_or_error() + 28
2 caulk 0x19a5d9634 caulk::concurrent::details::worker_thread::run() + 56
3 caulk 0x19a5d9278 void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::)(), std::__1::tuplecaulk::concurrent::details::worker_thread* > >(void) + 96
4 libsystem_pthread.dylib 0x19117e06c _pthread_start + 148
5 libsystem_pthread.dylib 0x191178e2c thread_start + 8

Thread 2:: com.apple.NSEventThread
0 libsystem_kernel.dylib 0x19113edf0 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1911508d8 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x191147638 mach_msg_overwrite + 540
3 libsystem_kernel.dylib 0x19113f16c mach_msg + 24
4 CoreFoundation 0x19125dbdc __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19125c4c8 __CFRunLoopRun + 1232
6 CoreFoundation 0x19125b8a4 CFRunLoopRunSpecific + 612
7 AppKit 0x1945de248 _NSEventThread + 172
8 libsystem_pthread.dylib 0x19117e06c _pthread_start + 148
9 libsystem_pthread.dylib 0x191178e2c thread_start + 8

Thread 3:: MIDIClientNotificationThread
0 libsystem_kernel.dylib 0x19113edf0 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1911508d8 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x191147638 mach_msg_overwrite + 540
3 libsystem_kernel.dylib 0x19113f16c mach_msg + 24
4 CoreFoundation 0x19125dbdc __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19125c4c8 __CFRunLoopRun + 1232
6 CoreFoundation 0x19125b8a4 CFRunLoopRunSpecific + 612
7 Foundation 0x192163e58 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
8 Foundation 0x1921d83b4 -[NSRunLoop(NSRunLoop) runUntilDate:] + 100
9 Logic Pro X 0x1045a8a74 0x10412c000 + 4704884
10 libsystem_pthread.dylib 0x19117e06c _pthread_start + 148
11 libsystem_pthread.dylib 0x191178e2c thread_start + 8

Thread 4:: SeqTimer
1 reply · 0 boosts · 619 views · Oct ’23
AVAssetWriter error -12743 appending HDR Metadata
It seems AVAssetWriter is rejecting CVPixelBuffers with error -12743 when appending NSData for kCVImageBufferAmbientViewingEnvironmentKey for HDR videos. Here is my code:

var ambientViewingEnvironment: CMFormatDescription.Extensions.Value?
var ambientViewingEnvironmentData: NSData?

ambientViewingEnvironment = sampleBuffer.formatDescription?.extensions[.ambientViewingEnvironment]
let plist = ambientViewingEnvironment?.propertyListRepresentation
ambientViewingEnvironmentData = plist as? NSData

And then attaching this data:

CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, ambientViewingEnvironmentData! as CFData, .shouldPropagate)

No matter what I do, including copying the attachment from sourcePixelBuffer to destinationPixelBuffer as it is, the error remains!

var attachmentMode: CVAttachmentMode = .shouldPropagate
let attachment = CVBufferCopyAttachment(sourcePixelBuffer!, kCVImageBufferAmbientViewingEnvironmentKey, &attachmentMode)
NSLog("Attachment \(attachment!), mode \(attachmentMode)")
CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, attachment!, attachmentMode)

I need to know if there is anything wrong in the way the metadata is copied.
2 replies · 0 boosts · 797 views · Oct ’23
Audio Workgroups and Mach Ports - Access Violation
I'm battling with Audio Workgroups on macOS. I've got it working for Standalone apps, getting the workgroup from the HAL/Device, and for AUv2/AUv3 plugins. I can verify that my plugin/app's processing threads are executing together with the main workgroup thread, using P-cores. So far so good! Now, I'm trying to get this working over IPC with my ***** app. From the documentation, I figured that I can get the mach port from the main audio workgroup (in my Audio Unit) using the os_workgroup_copy_port call. Then I pass this port over IPC to my ***** process, where I want to create a new workgroup from this mach port (which should be slaved to the master workgroup), using the os_workgroup_create_with_port call. However, when doing this, I get an access violation error in my external process. In my test case, I'm hosting an AUv2 in the AUXPC_arrow process (with Logic), and sending the mach port id over to my ***** App, which is also signed with the appropriate entitlements for accessing mach ports (I think): com.apple.security.temporary-exception.mach-lookup.global-name Now, the question is, should this automagically allow me to use a mach port owned by the AUXPC process? Does that process ALSO have to use some specific entitlement? I of course cannot change the entitlements of Apple's bundles. Many thanks for any assistance.
0 replies · 0 boosts · 648 views · Oct ’23
Deadlock when creating AVAudioSession
There are some crashes in my app.

WatchdogVisibility: Foreground
WatchdogCPUStatistics: (
"Elapsed total CPU time (seconds): 3.670 (user 2.300, system 1.370), 3% CPU",
"Elapsed application CPU time (seconds): 0.404, 0% CPU"
)
reportType:CrashLog maxTerminationResistance:Interactive>}

Thread 0 Crashed:
0 681B06A0-7F6B-3FA3-A2CE-063DC1DA7B1B 0x27e00 __ulock_wait
1 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xa0ff4 _dlock_wait
2 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xa0f24 _dispatch_once_wait
3 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x6e7c8 +[AVAudioSession sharedInstance]

And I found another [AVAudioSession sharedInstance] symbol in another thread:

ProcessVisibility: Foreground
ProcessState: Running
WatchdogEvent: scene-create
WatchdogVisibility: Foreground
WatchdogCPUStatistics: (
"Elapsed total CPU time (seconds): 3.670 (user 2.300, system 1.370), 3% CPU",
"Elapsed application CPU time (seconds): 0.404, 0% CPU"
)
reportType:CrashLog maxTerminationResistance:Interactive>}

0 681B06A0-7F6B-3FA3-A2CE-063DC1DA7B1B 0x272c8 mach_msg2_trap
1 681B06A0-7F6B-3FA3-A2CE-063DC1DA7B1B 0x3a194 mach_msg2_internal
2 681B06A0-7F6B-3FA3-A2CE-063DC1DA7B1B 0x3a46c mach_msg_overwrite
3 681B06A0-7F6B-3FA3-A2CE-063DC1DA7B1B 0x27808 mach_msg
4 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xbb964 _dispatch_mach_send_and_wait_for_reply
5 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xbbcf4 dispatch_mach_send_with_result_and_wait_for_reply
6 A0D322A3-D772-3260-9690-71F28737C43C 0x8ea8c xpc_connection_send_message_with_reply_sync
7 1715DE5D-0893-3AF0-B0C0-550BB14F91EC 0x4dc3b4 __NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__
8 1715DE5D-0893-3AF0-B0C0-550BB14F91EC 0x46f760 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]
9 A900B459-0127-379E-9CBA-0EAB9C5D559F 0x1214a0 ___forwarding___
10 A900B459-0127-379E-9CBA-0EAB9C5D559F 0x187ef8 __forwarding_prep_0___
11 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x7d700 -[AVAudioSession privateCreateSessionInServerUsingXPC]
12 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x7e874 -[AVAudioSession initWithSpecification:]
13 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x6e1b8 -[AVAudioSession initWithSessionType:]
14 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x6e158 __32+[AVAudioSession sharedInstance]_block_invoke
15 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xa0604 _dispatch_client_callout
16 5D16936B-4E4C-3276-BA7A-69C9BC760ABA 0xa1e44 _dispatch_once_callout
17 BFBF140A-DB1B-3B5B-ACE4-1C990763332A 0x6e7c8 +[AVAudioSession sharedInstance]

Why does a deadlock happen in [AVAudioSession sharedInstance]? I try to get the volume of [AVAudioSession sharedInstance] in my didFinishLaunchingWithOptions.
0 replies · 0 boosts · 459 views · Oct ’23
Monitoring Sound Input on Output Devices with the Lowest Possible Latency on Mac and iPhone
I am trying to monitor sound input on an output device with the lowest possible latency on Mac and iPhone. I would like to know if it is possible to send the input buffer to the output device without having to do it through the callbacks of both processes, that is, as close as possible to redirecting them by hardware. I am using the Core Audio API, specifically Audio Queue Services, to achieve this. I also use the HAL for configuration, but I would not like to depend too much on the HAL since I understand that it is not accessible from iOS.
0 replies · 0 boosts · 544 views · Oct ’23
Stereo recording and audio quality
Hello, I started to set up stereo audio recording (both audio and video are recorded) and the audio quality seems to be lower than the quality obtained with the native camera application (configured for stereo). Using Console to check the log, I found a difference between the camera app and mine regarding MXSessionMode (of mediaserverd): the camera application gives MXSessionMode = SpatialRecording while mine gives MXSessionMode = VideoRecording. How can I configure the capture session to finally get MXSessionMode = SpatialRecording? Any suggestion? Best regards
2 replies · 0 boosts · 1.1k views · Oct ’23
CATextLayer not updating foreground color property on appearance change
I have an issue with my macOS Objective-C app that uses CATextLayer instances that adapt to the app appearance (dark, light). I had no problem with Ventura, so I suppose this is a Sonoma bug. But maybe I'm not doing the right things. Within -updateLayer, I call stringLayer.foregroundColor = NSColor.textColor.CGColor (stringLayer is an instance of CATextLayer). NSColor.textColor should adapt to the app appearance, but the color doesn't always change when the app appearance changes. So the text would turn black in dark mode (hence illegible) and white in light mode when I toggle the mode in the system preferences. To investigate whether the issue was specific to the system text color, I tried (again, within -updateLayer):

NSColor *color = [NSApp.effectiveAppearance.name isEqualToString:NSAppearanceNameDarkAqua] ? NSColor.redColor : NSColor.greenColor;
stringLayer.foregroundColor = color.CGColor;

I basically get the same issue. The correct color shows when the app launches, but doesn't change the first time I toggle the mode (dark/light). So the wrong color gets associated with the app appearance (red with light mode and green with dark mode). The code below works:

NSColor *color = [NSApp.effectiveAppearance.name isEqualToString:NSAppearanceNameDarkAqua] ? NSColor.redColor : NSColor.greenColor;
NSDictionary *dic = @{NSFontAttributeName: [NSFont labelFontOfSize:10.0], NSForegroundColorAttributeName: color};
NSAttributedString *string = [[NSAttributedString alloc] initWithString:@"test" attributes:dic];
stringLayer.string = string;

But the code below doesn't; the layer text color doesn't change when the app appearance changes.

NSDictionary *dic = @{NSFontAttributeName: [NSFont labelFontOfSize:10.0], NSForegroundColorAttributeName: NSColor.textColor};
NSAttributedString *string = [[NSAttributedString alloc] initWithString:@"test" attributes:dic];
stringLayer.string = string;

Note that the issue appears to be specific to the foreground color. The background color (which I update in the same method) is always set properly.
0 replies · 0 boosts · 715 views · Oct ’23
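One pattern that is often suggested for this kind of stale-CGColor problem (an assumption on my part, not a confirmed Sonoma fix) is to resolve the dynamic color while the view's effective appearance is the current drawing appearance, so the CGColor snapshot matches the newly toggled mode. A minimal Swift sketch, where StringLayerView and its stringLayer property are hypothetical stand-ins for the poster's setup:

```swift
import AppKit

final class StringLayerView: NSView {
    let stringLayer = CATextLayer()

    override var wantsUpdateLayer: Bool { true }

    override func updateLayer() {
        // Re-resolve the dynamic color under the view's current effective
        // appearance so the snapshotted CGColor matches light/dark mode.
        effectiveAppearance.performAsCurrentDrawingAppearance {
            stringLayer.foregroundColor = NSColor.textColor.cgColor
        }
    }
}
```

performAsCurrentDrawingAppearance(_:) is real AppKit API (macOS 11+); whether it resolves the specific Sonoma regression described above would need testing.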
AVPlayer fails to start playback during active VOIP call
In the app, I have VOIP functionality along with AVPlayer for playing videos from remote URLs. Once a VOIP call is established, AVPlayer gets AVPlayerRateDidChangeReasonSetRateFailed right after AVPlayerRateDidChangeReasonSetRateCalled in the AVPlayer.rateDidChangeNotification observer when trying to start a video using the play() method. As a result, the video does not start. I checked AVAudioSession.interruptionNotification; it does not get fired. AVPlayer functionality works as expected before and after the call. The issue is observable on iOS 17 only. Any help would be appreciated.
0 replies · 0 boosts · 639 views · Oct ’23