Sometimes when I call AudioWorkIntervalCreate, the call hangs with the following stack trace. The call is made on the main thread.
mach_msg2_trap 0x00007ff801f0b3ce
mach_msg2_internal 0x00007ff801f19d80
mach_msg_overwrite 0x00007ff801f12510
mach_msg 0x00007ff801f0b6bd
HALC_Object_AddPropertyListener 0x00007ff8049ea43e
HALC_ProxyObject::HALC_ProxyObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff8047f97f2
HALC_ProxyObjectMap::_CreateObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff80490f69c
HALC_ProxyObjectMap::CopyObjectByObjectID(unsigned int) 0x00007ff80490ecd6
HALC_ShellPlugIn::_ReconcileDeviceList(bool, bool, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&) 0x00007ff8045d68cf
HALB_CommandGate::ExecuteCommand(void () block_pointer) const 0x00007ff80492ed14
HALC_ShellObject::ExecuteCommand(void () block_pointer) const 0x00007ff80470f554
HALC_ShellPlugIn::ReconcileDeviceList(bool, bool) 0x00007ff8045d6414
HALC_ShellPlugIn::ConnectToServer() 0x00007ff8045d74a4
HAL_HardwarePlugIn_InitializeWithObjectID(AudioHardwarePlugInInterface**, unsigned int) 0x00007ff8045da256
HALPlugInManagement::CreateHALPlugIn(HALCFPlugIn const*) 0x00007ff80442f828
HALSystem::InitializeDevices() 0x00007ff80442ebc3
HALSystem::CheckOutInstance() 0x00007ff80442b696
AudioObjectAddPropertyListener_mac_imp 0x00007ff80469b431
auoop::WorkgroupManager_macOS::WorkgroupManager_macOS() 0x00007ff8040fc3d5
auoop::gWorkgroupManager() 0x00007ff8040fc245
AudioWorkIntervalCreate 0x00007ff804034a33
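Not an answer, but one mitigation that might be worth trying (a sketch, not a confirmed fix): the trace shows the HAL being checked out lazily inside AudioWorkIntervalCreate, so pre-warming the HAL off the main thread at launch may move the blocking coreaudiod handshake off the critical path.
import Foundation
import CoreAudio
// Sketch: touch the HAL device list from a background queue at launch so
// HALSystem::CheckOutInstance() (visible in the hang's stack) runs early,
// off the main thread.
DispatchQueue.global(qos: .utility).async {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    _ = AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                       &address, 0, nil, &size)
}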
I am using AVFoundation to capture a photo. This was all working fine until I realized all the photos were saving to the photo library in portrait orientation. I wanted them to save in the orientation the device was in when the camera took the picture, much as the built-in camera app does on iOS. So I added this code:
if let videoConnection = photoOutput.connection(with: .video),
   videoConnection.isVideoOrientationSupported {
    // From() is just a helper to get video orientations from the device orientation.
    videoConnection.videoOrientation = .from(UIDevice.current.orientation)
    print("Photo orientation set to \(videoConnection.videoOrientation).")
}
With this addition, the first photo taken after a device rotation logs this error in the debugger:
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:866) - (err=-12784)
Subsequent photos will not repeat the error. Once you rotate the device again, same behavior. Photos taken after the app loads, but before any rotations have been made, do not produce this error.
I have tried many things, no dice. If I comment this code out it works without error, but of course the photos are all saved in portrait mode again.
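For reference, here is a hypothetical version of the from() helper referenced above (the poster's actual implementation isn't shown); the mapping is the usual one, with the landscape cases swapped because UIDeviceOrientation and AVCaptureVideoOrientation are mirrored:
import AVFoundation
import UIKit
// Hypothetical helper: map device orientation to capture-connection orientation.
extension AVCaptureVideoOrientation {
    static func from(_ deviceOrientation: UIDeviceOrientation) -> AVCaptureVideoOrientation {
        switch deviceOrientation {
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        case .landscapeLeft: return .landscapeRight   // note the left/right swap
        case .landscapeRight: return .landscapeLeft
        default: return .portrait // face up/down and unknown fall back to portrait
        }
    }
}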
I'm running into an issue where even unencrypted video playback fails with status .failed:
Error Domain=CoreMediaErrorDomain Code=-12927 "(null)"
I am unable to find any info on the above error code.
Is there some way to look it up?
A sample master M3U8 is shared below.
Note: if I use any variant M3U8 directly, it plays fine.
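A diagnostic sketch (not a fix): AVPlayerItem keeps an error log that sometimes carries more detail than the terse CoreMediaErrorDomain code, so dumping it may narrow down which request in the master playlist fails. Here, item is assumed to be the AVPlayerItem whose status became .failed.
import AVFoundation
// Sketch: dump whatever diagnostics the player item collected.
func logPlaybackFailure(for item: AVPlayerItem) {
    print("item error:", item.error ?? "none")
    for event in item.errorLog()?.events ?? [] {
        // errorStatusCode often repeats the -12927; uri/serverAddress narrow it down.
        print("errorLog:", event.errorStatusCode,
              event.errorDomain,
              event.errorComment ?? "",
              event.uri ?? "")
    }
}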
I connect two AVAudioNodes by using
- (void)connectMIDI:(AVAudioNode *)sourceNode to:(AVAudioNode *)destinationNode format:(AVAudioFormat * __nullable)format eventListBlock:(AUMIDIEventListBlock __nullable)tapBlock
and add an AUMIDIEventListBlock tap block to it to capture the MIDI events.
Both AUAudioUnits of the AVAudioNodes involved in this connection are set to use MIDI 1.0 UMP events:
[[avAudioUnit AUAudioUnit] setHostMIDIProtocol:(kMIDIProtocol_1_0)];
But all the MIDI voice channel events received are automatically converted to UMP MIDI 2.0 format. Is there something else I need to set so that the tap receives MIDI 1.0 UMPs?
(Note: my app can handle MIDI 2.0, so it is not really a problem. This question is mainly to find out whether I forgot to set the protocol somewhere.)
Thanks!!
I'm trying to expose my native ShazamKit code to the host React Native app.
The implementation works fine in a separate Swift project, but it fails when I try to integrate it into a React Native app.
Exception 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)' was thrown while invoking exposed on target ShazamIOS with params (
1682,
1683
)
callstack: (
0 CoreFoundation 0x00007ff80049b761 __exceptionPreprocess + 242
1 libobjc.A.dylib 0x00007ff800063904 objc_exception_throw + 48
2 CoreFoundation 0x00007ff80049b56b +[NSException raise:format:] + 0
3 AVFAudio 0x00007ff846197929 _Z19AVAE_RaiseExceptionP8NSStringz + 156
4 AVFAudio 0x00007ff8461f2e90 _ZN17AUGraphNodeBaseV318CreateRecordingTapEmjP13AVAudioFormatU13block_pointerFvP16AVAudioPCMBufferP11AVAudioTimeE + 766
5 AVFAudio 0x00007ff84625f703 -[AVAudioNode installTapOnBus:bufferSize:format:block:] + 1456
6 muse 0x000000010a313dd0 $s4muse9ShazamIOSC6record33_35CC2309E4CA22278DC49D01D96C376ALLyyF + 496
7 muse 0x000000010a313210 $s4muse9ShazamIOSC5startyyF + 288
8 muse 0x000000010a312d03 $s4muse9ShazamIOSC7exposed_6rejectyyypSgXE_ySSSg_AGs5Error_pSgtXEtF + 83
9 muse 0x000000010a312e47 $s4muse9ShazamIOSC7exposed_6rejectyyypSgXE_ySSSg_AGs5Error_pSgtXEtFTo + 103
10 CoreFoundation 0x00007ff8004a238c __invoking___ + 140
11 CoreFoundation 0x00007ff80049f6b3 -[NSInvocation invoke] + 302
12 CoreFoundation 0x00007ff80049f923 -[NSInvocation invokeWithTarget:] + 70
13 muse 0x000000010a9210ef -[RCTModuleMethod invokeWithBridge:module:arguments:] + 2495
14 muse 0x000000010a925cb4 _ZN8facebook5reactL11invokeInnerEP9RCTBridgeP13RCTModuleDatajRKN5folly7dynamicEiN12_GLOBAL__N_117SchedulingContextE + 2036
15 muse 0x000000010a925305 _ZZN8facebook5react15RCTNativeModule6invokeEjON5folly7dynamicEiENK3$_0clEv + 133
16 muse 0x000000010a925279 ___ZN8facebook5react15RCTNativeModule6invokeEjON5folly7dynamicEi_block_invoke + 25
17 libdispatch.dylib 0x000000010e577747 _dispatch_call_block_and_release + 12
18 libdispatch.dylib 0x000000010e5789f7 _dispatch_client_callout + 8
19 libdispatch.dylib 0x000000010e5808c9 _dispatch_lane_serial_drain + 1127
20 libdispatch.dylib 0x000000010e581665 _dispatch_lane_invoke + 441
21 libdispatch.dylib 0x000000010e58e76e _dispatch_root_queue_drain_deferred_wlh + 318
22 libdispatch.dylib 0x000000010e58db69 _dispatch_workloop_worker_thread + 590
23 libsystem_pthread.dylib 0x000000010da67b84 _pthread_wqthread + 327
24 libsystem_pthread.dylib 0x000000010da66acf start_wqthread + 15
)
RCTFatal
facebook::react::invokeInner(RCTBridge*, RCTModuleData*, unsigned int, folly::dynamic const&, int, (anonymous namespace)::SchedulingContext)
facebook::react::RCTNativeModule::invoke(unsigned int, folly::dynamic&&, int)::$_0::operator()() const
invocation function for block in facebook::react::RCTNativeModule::invoke(unsigned int, folly::dynamic&&, int)
This is my Swift file; the error happens in the record function.
import Foundation
import ShazamKit
import AVFoundation // needed for AVAudioSession / AVAudioEngine

@objc(ShazamIOS)
class ShazamIOS: NSObject {
    @Published var matching: Bool = false
    @Published var mediaItem: SHMatchedMediaItem?
    @Published var error: Error? {
        didSet {
            hasError = error != nil
        }
    }
    @Published var hasError: Bool = false

    private lazy var audioSession: AVAudioSession = .sharedInstance()
    private lazy var session: SHSession = .init()
    private lazy var audioEngine: AVAudioEngine = .init()
    private lazy var inputNode = self.audioEngine.inputNode
    private lazy var bus: AVAudioNodeBus = 0

    override init() {
        super.init()
        session.delegate = self
    }

    @objc
    func exposed(_ resolve: RCTPromiseResolveBlock, reject: RCTPromiseRejectBlock) {
        start()
        resolve("ios code executed")
    }

    func start() {
        switch audioSession.recordPermission {
        case .granted:
            self.record()
        case .denied:
            DispatchQueue.main.async {
                // ShazamError is a custom error type defined elsewhere in the project.
                self.error = ShazamError.recordDenied
            }
        case .undetermined:
            audioSession.requestRecordPermission { granted in
                DispatchQueue.main.async {
                    if granted {
                        self.record()
                    } else {
                        self.error = ShazamError.recordDenied
                    }
                }
            }
        @unknown default:
            DispatchQueue.main.async {
                self.error = ShazamError.unknown
            }
        }
    }

    private func record() {
        do {
            self.matching = true
            let format = self.inputNode.outputFormat(forBus: bus)
            self.inputNode.installTap(onBus: bus, bufferSize: 8192, format: format) { [weak self] (buffer, time) in
                self?.session.matchStreamingBuffer(buffer, at: time)
            }
            self.audioEngine.prepare()
            try self.audioEngine.start()
        } catch {
            self.error = error
        }
    }

    func stop() {
        self.audioEngine.stop()
        self.inputNode.removeTap(onBus: bus)
        self.matching = false
    }

    @objc
    static func requiresMainQueueSetup() -> Bool {
        return true
    }
}

extension ShazamIOS: SHSessionDelegate {
    func session(_ session: SHSession, didFind match: SHMatch) {
        DispatchQueue.main.async { [self] in
            if let mediaItem = match.mediaItems.first {
                self.mediaItem = mediaItem
                self.stop()
            }
        }
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        DispatchQueue.main.async { [self] in
            self.error = error
            self.stop()
        }
    }
}
The Objective-C file:
#import <Foundation/Foundation.h>
#import "React/RCTBridgeModule.h"
@interface RCT_EXTERN_MODULE(ShazamIOS, NSObject);
RCT_EXTERN_METHOD(exposed:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject)
@end
And this is how I consume the exposed function in React Native:
const {ShazamModule, ShazamIOS} = NativeModules;
const onPressIOSButton = () => {
  ShazamIOS.exposed().then(result => console.log(result)).catch(e => console.log(e.message, e.code));
};
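A possible culprit (hedged, since the React Native host may manage the audio session differently): IsFormatSampleRateAndChannelCountValid(format) typically fails when inputNode.outputFormat(forBus:) reports a 0 Hz / 0-channel format, which happens when the AVAudioSession has not been configured and activated for recording before the tap is installed. A minimal sketch of a method that could be called at the start of record(), intended as a method on ShazamIOS (it uses the class's audioSession property):
// Sketch: configure and activate a record-capable session before asking the
// input node for its format; otherwise the format can be 0 Hz / 0 channels.
private func configureSessionForRecording() throws {
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
}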
The above is the extra_data in the lhvC box of the 3D format from Apple's Vision Pro; it contains only SPS/PPS.
I can tell that 0xa1 is the SPS NAL type, 0x00 0x01 is the number of SPSs, and 0x00 0x17 is the length.
But what is the 0x01 f0 00 fc c3 02 at the beginning? I can't find the corresponding definition.
I have an app that has the camera continuously running, as it is doing its own AI; I have zero need for Apple's video effects, and I am seeing a 200% performance hit after updating to Sonoma. The video effects are the "heaviest stack trace" when profiling my app with the Instruments CPU Profiler (see below).
Is forcing your software onto developers not something Microsoft would do? Is there really no way to opt out?
6671 Jamscape_exp (23038)
2697 start_wqthread
2697 _pthread_wqthread
2183 _dispatch_workloop_worker_thread
2156 _dispatch_root_queue_drain_deferred_wlh
2153 _dispatch_lane_invoke
2146 _dispatch_lane_serial_drain
1527 _dispatch_client_callout
1493 _dispatch_call_block_and_release
777 __88-[PTHandGestureDetector initWithFrameSize:asyncInitQueue:externalHandDetectionsEnabled:]_block_invoke
777 -[VCPHandGestureVideoRequest initWithOptions:]
508 -[VCPHandGestureClassifier initWithMinHandSize:]
508 -[VCPCoreMLRequest initWithModelName:]
506 +[MLModel modelWithContentsOfURL:configuration:error:]
506 -[MLModelAsset modelWithError:]
506 -[MLModelAsset load:]
506 +[MLLoader loadModelFromAssetAtURL:configuration:error:]
506 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:]
505 +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:]
505 +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
445 +[MLMultiFunctionProgramEngine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
333 -[MLMultiFunctionProgramEngine initWithProgramContainer:configuration:error:]
333 -[MLNeuralNetworkEngine initWithContainer:configuration:error:]
318 -[MLNeuralNetworkEngine _setupContextAndPlanWithConfiguration:usingCPU:reshapeWithContainer:error:]
313 -[MLNeuralNetworkEngine _addNetworkToPlan:error:]
313 espresso_plan_add_network
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t)
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t, std::__1::shared_ptr<Espresso::net>)
313 Espresso::load_network(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::compute_path, bool)
235 Espresso::reload_network_on_context(std::__1::shared_ptr<Espresso::net> const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::compute_path)
226 Espresso::load_and_shape_network(std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptr<Espresso::blob_storage_abstract> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)
214 Espresso::load_network_layers_internal(std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::network_shape const&, std::__1::basic_istream<char, std::__1::char_traits<char>>, Espresso::compute_path, bool, std::__1::shared_ptr<Espresso::blob_storage_abstract> const&)
208 Espresso::run_dispatch_v2(std::__1::shared_ptr<Espresso::abstract_context>, std::__1::shared_ptr<Espresso::net>, std::__1::vector<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::allocator<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits<char>>)
141 try_dispatch(std::__1::shared_ptr<Espresso::abstract_context>, std::__1::shared_ptr<Espresso::net>, std::__1::vector<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::allocator<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits<char>>, Espresso::platform const&, Espresso::compute_path const&)
131 Espresso::get_net_info_ir(std::__1::shared_ptr<Espresso::abstract_context>, std::__1::shared_ptr<Espresso::net>, std::__1::vector<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::allocator<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>>> const&, Espresso::network_shape const&, Espresso::compute_path const&, Espresso::platform const&, Espresso::compute_path const&, std::__1::shared_ptr<Espresso::cpu_context_transfer_algo_t>&, std::__1::shared_ptr<Espresso::net_info_ir_t>&, std::__1::shared_ptr<Espresso::kernels_validation_status_t>&)
131 Espresso::cpu_context_transfer_algo_t::create_net_info_ir(std::__1::vector<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::allocator<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>>> const&, std::__1::shared_ptr<Espresso::abstract_context>, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptr<Espresso::net_info_ir_t>)
120 Espresso::cpu_context_transfer_algo_t::check_all_kernels_availability_on_context(std::__1::vector<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::allocator<std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>>> const&, std::__1::shared_ptr<Espresso::abstract_context>&, Espresso::compute_path, std::__1::shared_ptr<Espresso::net_info_ir_t>&)
120 is_kernel_available_on_engine(unsigned long, std::__1::shared_ptr<Espresso::base_kernel>, Espresso::kernel_info_t const&, std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::shared_ptr<Espresso::abstract_context>, Espresso::compute_path, std::__1::shared_ptr<Espresso::net_info_ir_t>, std::__1::shared_ptr<Espresso::kernels_validation_status_t>)
83 Espresso::ANECompilerEngine::mix_reshape_kernel::is_valid_for_engine(std::__1::shared_ptr<Espresso::kernels_validation_status_t>, Espresso::base_kernel::validate_for_engine_args_t const&) const
45 int ValidateLayer<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void*, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
45 void ValidateLayer_Impl<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void*, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
(...)
This code, which writes UIImage data as HEIC, works in the iOS simulator with iOS < 17.5:
import AVFoundation
import UIKit

extension UIImage {
    public var heic: Data? { heic() }

    public func heic(compressionQuality: CGFloat = 1) -> Data? {
        let mutableData = NSMutableData()
        guard let destination = CGImageDestinationCreateWithData(mutableData, AVFileType.heic as CFString, 1, nil),
              let cgImage = cgImage else {
            return nil
        }
        let options: NSDictionary = [
            kCGImageDestinationLossyCompressionQuality: compressionQuality,
            kCGImagePropertyOrientation: cgImageOrientation.rawValue,
        ]
        CGImageDestinationAddImage(destination, cgImage, options)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }

    public var isHeicSupported: Bool {
        (CGImageDestinationCopyTypeIdentifiers() as! [String]).contains("public.heic")
    }

    var cgImageOrientation: CGImagePropertyOrientation { .init(imageOrientation) }
}

extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: fatalError()
        }
    }
}
But with the iOS 17.5 simulator it seems to be broken.
The call to CGImageDestinationFinalize writes this error to the console:
writeImageAtIndex:962: *** CMPhotoCompressionSessionAddImage: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')
On physical devices it still seems to work.
Is there any known workaround for the iOS simulator?
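One defensive pattern that might serve while the simulator's 'hvc1' encoder is unavailable (a sketch built on the isHeicSupported property above, not a real fix): fall back to JPEG whenever HEIC encoding is unsupported or finalization fails.
import UIKit
// Sketch: prefer HEIC, but fall back to JPEG when the encoder is missing
// (as in the iOS 17.5 simulator) or CGImageDestinationFinalize fails.
func encodedImageData(for image: UIImage, quality: CGFloat = 1) -> Data? {
    if image.isHeicSupported, let heic = image.heic(compressionQuality: quality) {
        return heic
    }
    return image.jpegData(compressionQuality: quality)
}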
When I receive the interruption-began notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause music playback.
When I receive the interruption-ended notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume playback.
However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684): if some misbehaving app holds on to the audio session, a third-party app's attempt to resume playback fails with this error.
In this case, can we find out which apps are hogging the audio?
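As far as I know there is no public API that names the app holding the session; the usual coping strategy is to catch the error and retry activation a little later. A sketch, assuming a resumePlayback() entry point:
import AVFoundation
// Sketch: retry session activation when another app still owns audio focus.
func resumePlayback() {
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        // restart the player here
    } catch let error as NSError
        where error.code == AVAudioSession.ErrorCode.cannotInterruptOthers.rawValue {
        // Another session is still active; retry shortly instead of giving up.
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { resumePlayback() }
    } catch {
        print("Failed to reactivate session: \(error)")
    }
}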
Hi,
I am running into a trap. Please check the stack trace below; how can I fix this?
Regards, Joël
Stack trace with ExtAudioFileWrite
Hello,
I can't wrap my head around the following problem:
I have an external USB microphone capable of sample rates of up to 500 kHz. I want to capture the samples and do analysis and display; no playback required. I cannot find a way to run the microphone at its maximum sample rate: I always get 48 kHz.
I would like to stick with AVAudioEngine if possible.
Any pointers welcome.
thx!
volker
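If this is on macOS: AVAudioEngine follows the device's current nominal sample rate, so one approach (a sketch, not verified against a 500 kHz device) is to raise the rate through the HAL before starting the engine. deviceID here is assumed to be the AudioObjectID of the USB microphone.
import CoreAudio
// Sketch: set the device's nominal sample rate via the HAL. The rate must be
// one the device advertises in kAudioDevicePropertyAvailableNominalSampleRates.
func setNominalSampleRate(_ rate: Float64, on deviceID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var value = rate
    return AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                      UInt32(MemoryLayout<Float64>.size), &value)
}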
In my app, I only get one interruption notification when a phone call comes in, and nothing after that. The app uses AVAudioEngine. Is this a bug?
A very simple repro is to just register for the notification, but not do anything else with audio:
import SwiftUI
import AVFAudio

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onReceive(NotificationCenter.default.publisher(for: AVAudioSession.interruptionNotification)) { event in
            handleAudioInterruption(event: event)
        }
    }

    private func handleAudioInterruption(event: Notification) {
        print("handleAudioInterruption")
        guard let info = event.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
            print("missing the stuff")
            return
        }
        if type == .began {
            print("interruption began")
        } else if type == .ended {
            print("interruption ended")
            guard let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
            if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
                print("should resume")
            }
        }
    }
}
And do this in the app's init:
@main
struct InterruptionsApp: App {
    init() {
        try! AVAudioSession.sharedInstance().setCategory(.playback, options: [])
        try! AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
Hi there,
I am encountering an issue in my project, which uses a speech recognizer and occasionally plays audio files. The problem arises when I configure the AVAudioSession and enable voice processing: the system volume changes unexpectedly and becomes uncontrollable. Specifically, the volume is excessively loud on iPhone but quite low on iPad.
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth, .interruptSpokenAudioAndMixWithOthers])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
try audioEngine.inputNode.setVoiceProcessingEnabled(true)
try audioEngine.outputNode.setVoiceProcessingEnabled(true)
I have provided a sample project here: Sample Project.
To reproduce the issue, please follow these steps on a real device:
Click on "Play recording" to hear the sound at normal volume.
Click on "Start recording" to set up the category and speech recognizer.
Click on "Stop recording" to stop the recording.
Click on "Play recording" again and observe that the sound volume has changed.
Thank you for your assistance.
Is there a way to capture audio samples from the selected output device on macOS using ScreenCaptureKit?
Thank you
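In case it helps to frame the question: as far as I can tell, ScreenCaptureKit (macOS 13+) captures the mixed system audio rather than one specific output device. A sketch of the audio path, assuming that is acceptable:
import ScreenCaptureKit
import CoreMedia
// Sketch: receive system-audio sample buffers from an SCStream.
final class SystemAudioCapturer: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        let content = try await SCShareableContent.current
        guard let display = content.displays.first else { return }
        let filter = SCContentFilter(display: display, excludingWindows: [])
        let config = SCStreamConfiguration()
        config.capturesAudio = true // audio arrives alongside (ignorable) video
        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .audio, sampleHandlerQueue: .global())
        try await stream.startCapture()
        self.stream = stream
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .audio else { return }
        // Pull PCM data out of sampleBuffer here.
    }
}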
Hi,
we have multiple threads in our CoreAudio server plugin carrying out necessary asynchronous work (namely handling USB callbacks and shuffling the required data to the IO).
Although these threads have been set up with the appropriate THREAD_TIME_CONSTRAINT_POLICY (which does improve things), on M* processors there is an extremely high, non-realtime amount of jitter of >10 ms(!).
Either the run-loop notification from the USB stack arrives that late, or the thread driving the run loop hasn't been set up to handle the callbacks in a timely manner.
Since AudioUnit threads that have to meet the frame deadlines can join the workgroup of the audio device, is there a similar opportunity for CoreAudio server plugin threads? And if so, how should these be set up correctly?
Thanks for any hints! Or pointing me to the docs :)
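From what I can tell, the intended mechanism is the device's OS workgroup: the HAL publishes it via kAudioDevicePropertyIOThreadOSWorkgroup (macOS 11+), and a time-constraint thread can join it with os_workgroup_join(). A sketch, unverified from inside a server plugin, where the property may behave differently than in a client process:
import CoreAudio
import os
// Sketch: fetch a device's IO-thread workgroup and join the calling thread to
// it. os_workgroup_join() must be called on the realtime thread doing the work.
func joinIOWorkgroup(of deviceID: AudioObjectID) -> os_workgroup_t? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyIOThreadOSWorkgroup,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var workgroup: os_workgroup_t?
    var size = UInt32(MemoryLayout<os_workgroup_t?>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &workgroup)
    guard status == noErr, let wg = workgroup else { return nil }
    var token = os_workgroup_join_token_s()
    return os_workgroup_join(wg, &token) == 0 ? wg : nil
}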
Using the hardware volume buttons on the iPhone, you have 16 steps you can adjust your volume to. I want to implement a volume-control slider in my app, updating its value from AVAudioSession.sharedInstance().outputVolume. The problem is that the returned values are rounded so that the second decimal digit is always 0 or 5 (i.e., to the nearest 0.05). This makes the slider jump around. .formatted() is not causing this problem.
You can recreate the problem using code below.
import SwiftUI
import AVFAudio

@main
struct VolumeTestApp: App {
    init() {
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var volume = Double()
    @State private var difference = Double()

    var body: some View {
        VStack {
            Text("The volume changed by \(difference.formatted())")
            Slider(value: $volume, in: 0...1)
        }
        .onReceive(AVAudioSession.sharedInstance().publisher(for: \.outputVolume), perform: { value in
            volume = Double(value)
        })
        .onChange(of: volume) { oldValue, newValue in // Only used to make the problem more obvious
            if oldValue > newValue {
                difference = oldValue - newValue
            } else {
                difference = newValue - oldValue
            }
        }
    }
}
Here is a video of the problem in action:
https://share.icloud.com/photos/00fmp7Vq1AkRetxcIP5EXeAZA
What am I doing wrong or what can I do to avoid this?
Thank you
Tested with library songs on an app targeted to Mac (Designed for iPad).
The same app running on iOS queries the same library songs and the duration is expressed correctly in seconds, as expected for the TimeInterval type.
Xcode 15.3
macOS 14.5
FB13821671
We are using a VoiceProcessingIO audio unit in our VoIP application on Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
We are seeing the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns.
We also see the following errors in the log multiple times:
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
and these right after (though I don't know whether they matter to this issue):
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
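Not a fix for the delay itself, but one mitigation we are considering is the asynchronous variant of instantiation, so that no application thread blocks for those seconds. A sketch in Swift, using the same component description as the code above:
import AudioToolbox
// Sketch: AudioComponentInstantiate is the asynchronous counterpart of
// AudioComponentInstanceNew; the (sometimes multi-second) setup happens
// in the background and calls back when the unit is ready.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
if let component = AudioComponentFindNext(nil, &desc) {
    AudioComponentInstantiate(component, []) { instance, status in
        guard status == noErr, let unit = instance else { return }
        // Store `unit` and continue the VoIP setup here.
        _ = unit
    }
}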
I'm developing an app which uses the "System Audio Recording Only" API to capture system audio.
Is there any API to check whether the app is authorized, so that I can instruct the user to grant the permission?
Thanks.
Just installed macOS Sequoia and observed that the mClientID and mProcessID fields of the AudioServerPlugInClientInfo structure are empty when the AddDeviceClient and RemoveDeviceClient functions of the AudioServerPlugInDriverInterface are called.
This data is essential for identifying the connected client, and its absence breaks basic functionality of HAL plugins.
FB13858951 ticket filed.