Core Audio


Interact with the audio hardware of a device using Core Audio.

Posts under the Core Audio tag

53 Posts

Thread safety of AudioObject APIs
Are the AudioObject APIs (such as AudioObjectGetPropertyData, AudioObjectSetPropertyData, etc.) thread-safe? That is, for the same AudioObjectID, is it safe to:
- get a property on one thread while setting the same property on another thread
- set the same property on two different threads
- add and remove property listeners on different threads
Put differently, is there any internal synchronization or mutex for this kind of usage, or is the burden on the caller? I was unable to find documentation either way, which makes me think that the APIs are not thread-safe.
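A minimal sketch, assuming the burden is on the caller: funnel all property access for a given AudioObjectID through one serial dispatch queue. The queue label and wrapper function are illustrative, not part of any Apple API.

#include <CoreAudio/AudioHardware.h>
#include <dispatch/dispatch.h>

// Hypothetical wrapper: serialize all property access for an AudioObjectID on a
// single serial queue, assuming the HAL does not synchronize on our behalf.
static dispatch_queue_t gPropertyQueue;

static OSStatus GetPropertySerialized(AudioObjectID objectID,
                                      const AudioObjectPropertyAddress *address,
                                      UInt32 *ioSize, void *outData) {
    __block OSStatus status = noErr;
    dispatch_sync(gPropertyQueue, ^{
        status = AudioObjectGetPropertyData(objectID, address, 0, NULL, ioSize, outData);
    });
    return status;
}

int main(void) {
    gPropertyQueue = dispatch_queue_create("com.example.coreaudio.properties",
                                           DISPATCH_QUEUE_SERIAL);
    AudioObjectPropertyAddress addr = { kAudioHardwarePropertyDefaultOutputDevice,
                                        kAudioObjectPropertyScopeGlobal,
                                        kAudioObjectPropertyElementMain };
    AudioObjectID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    OSStatus status = GetPropertySerialized(kAudioObjectSystemObject, &addr, &size, &device);
    return status == noErr ? 0 : 1;
}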
0 replies · 2 boosts · 120 views · 1w

macOS Sequoia issue with AudioServerPlugInDriverInterface
Just installed macOS Sequoia and observed that the mClientID and mProcessID fields of the AudioServerPlugInClientInfo structure are empty when the AddDeviceClient and RemoveDeviceClient functions of the AudioServerPlugInDriverInterface are called. This data is essential for identifying the connected client, and its absence breaks the basic functionality of HAL plugins. FB13858951 ticket filed.
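For reference, a minimal sketch of the callback in question; the field names come from AudioServerPlugInClientInfo in AudioServerPlugIn.h, the logging is purely illustrative. Under the reported bug, mClientID and mProcessID arrive as zero.

#include <CoreAudio/AudioServerPlugIn.h>
#include <os/log.h>

// Sketch of a HAL plug-in's AddDeviceClient callback, logging the incoming client info.
static OSStatus MyDriver_AddDeviceClient(AudioServerPlugInDriverRef inDriver,
                                         AudioObjectID inDeviceObjectID,
                                         const AudioServerPlugInClientInfo *inClientInfo) {
    os_log(OS_LOG_DEFAULT,
           "AddDeviceClient: device %u, clientID %u, pid %d, isNativeEndian %d",
           (unsigned)inDeviceObjectID,
           (unsigned)inClientInfo->mClientID,
           (int)inClientInfo->mProcessID,
           (int)inClientInfo->mIsNativeEndian);
    return kAudioHardwareNoError;
}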
2 replies · 0 boosts · 228 views · 2w

AudioComponentInstanceNew takes up to five seconds to complete
We are using a VoiceProcessingIO audio unit in our VoIP application on the Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We initialize the audio unit with the following code:

OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);

We see the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns. We also see the following error in the log multiple times:

AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed

and these right after (which I don't know whether they matter to this issue):

KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
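If the immediate goal is only to keep the calling thread responsive while the system does its work, the asynchronous AudioComponentInstantiate variant is one option to evaluate; it does not shorten the underlying delay, it only moves the wait off the caller's thread. A sketch:

#include <AudioUnit/AudioUnit.h>

// Sketch: create the voice-processing unit asynchronously so the caller is not
// blocked while the component is instantiated; the completion block runs later.
static void CreateVoiceUnitAsync(void (^ready)(AudioUnit unit, OSStatus status)) {
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_VoiceProcessingIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
        .componentFlags = 0,
        .componentFlagsMask = 0
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstantiate(comp, kAudioComponentInstantiation_LoadInProcess,
                              ^(AudioComponentInstance instance, OSStatus status) {
        ready(instance, status);   // may arrive seconds later in the reported scenario
    });
}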
2 replies · 0 boosts · 218 views · 2w

Sound Device Audio Mixing Issues
Hello, I need to write control software for a multi-channel sound card. On the Windows side, mixing is implemented through the ASIO driver. Say I have an OUTPUT A channel and an OUTPUT B channel: on Windows I can share the audio from channel A to channel B, so that I hear it on channel B as well even though I chose channel A as the playback channel. Now I need to implement this function on the Mac, but I have no prior experience with this kind of development, so I would like to ask whether Core Audio or a driver can implement it.
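One Core Audio route worth evaluating before writing a driver is a stacked aggregate device, which mirrors one playback stream to several outputs. A sketch, assuming the dictionary keys from AudioHardware.h; the device UIDs are placeholders for the real ones read from kAudioDevicePropertyDeviceUID:

#include <CoreAudio/CoreAudio.h>

// Sketch: build a stacked ("multi-output") aggregate device that mirrors
// playback to two existing output devices.
static CFDictionaryRef SubDeviceEntry(CFStringRef uid) {
    CFStringRef key = CFSTR(kAudioSubDeviceUIDKey);
    return CFDictionaryCreate(NULL, (const void **)&key, (const void **)&uid, 1,
                              &kCFTypeDictionaryKeyCallBacks,
                              &kCFTypeDictionaryValueCallBacks);
}

static AudioObjectID CreateMirroredOutput(void) {
    CFDictionaryRef subs[2] = { SubDeviceEntry(CFSTR("OutputA-UID")),
                                SubDeviceEntry(CFSTR("OutputB-UID")) };
    CFArrayRef subList = CFArrayCreate(NULL, (const void **)subs, 2, &kCFTypeArrayCallBacks);

    int stacked = 1;                       // stacked = mirror the same stream, not concatenate
    CFNumberRef stackedRef = CFNumberCreate(NULL, kCFNumberIntType, &stacked);
    CFStringRef keys[4] = { CFSTR(kAudioAggregateDeviceUIDKey),
                            CFSTR(kAudioAggregateDeviceNameKey),
                            CFSTR(kAudioAggregateDeviceIsStackedKey),
                            CFSTR(kAudioAggregateDeviceSubDeviceListKey) };
    const void *values[4] = { CFSTR("com.example.mirror"), CFSTR("Mirror A+B"),
                              stackedRef, subList };
    CFDictionaryRef desc = CFDictionaryCreate(NULL, (const void **)keys, values, 4,
                                              &kCFTypeDictionaryKeyCallBacks,
                                              &kCFTypeDictionaryValueCallBacks);

    AudioObjectID aggregate = kAudioObjectUnknown;
    OSStatus status = AudioHardwareCreateAggregateDevice(desc, &aggregate);

    CFRelease(desc); CFRelease(subList); CFRelease(stackedRef);
    CFRelease(subs[0]); CFRelease(subs[1]);
    return status == noErr ? aggregate : kAudioObjectUnknown;
}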
1 reply · 0 boosts · 134 views · 2w

(Audio) WorkGroup in CoreAudio server plugin
Hi, we have multiple threads in our CoreAudio server plugin carrying out necessary asynchronous work (namely handling USB callbacks and shuffling the required data to the IO). Although these threads have been set up with the appropriate THREAD_TIME_CONSTRAINT_POLICY (which does improve things), on M* processors there is an extremely high, non-realtime amount of jitter of more than 10 ms(!). Either the run-loop notification from the USB stack arrives that late, or the thread driving the run loop has not been set up to handle the callbacks in a timely manner. Since AudioUnit threads that have to meet frame deadlines can join the workgroup of the audio device, is there a similar opportunity for CoreAudio server plugin threads? And if so, how should these be set up correctly? Thanks for any hints! Or for pointing me to the docs :)
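A sketch of the device-workgroup route, assuming the plug-in can see the device's AudioObjectID; kAudioDevicePropertyIOThreadOSWorkgroup and the os_workgroup join/leave calls come from AudioHardware.h and os/workgroup.h, everything else is illustrative:

#include <CoreAudio/AudioHardware.h>
#include <os/workgroup.h>

// Sketch: fetch the device's I/O thread workgroup and join the calling
// (real-time) thread to it for the duration of its deadline-sensitive work.
static bool RunInsideDeviceWorkgroup(AudioObjectID deviceID, void (*work)(void)) {
    AudioObjectPropertyAddress addr = { kAudioDevicePropertyIOThreadOSWorkgroup,
                                        kAudioObjectPropertyScopeGlobal,
                                        kAudioObjectPropertyElementMain };
    os_workgroup_t workgroup = NULL;
    UInt32 size = sizeof(workgroup);
    if (AudioObjectGetPropertyData(deviceID, &addr, 0, NULL, &size, &workgroup) != noErr)
        return false;

    os_workgroup_join_token_s token;
    if (os_workgroup_join(workgroup, &token) != 0) {   // fails if the workgroup is cancelled
        os_release(workgroup);
        return false;
    }

    work();                                            // deadline-sensitive work happens here

    os_workgroup_leave(workgroup, &token);
    os_release(workgroup);                             // the property returns a retained object
    return true;
}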
0 replies · 0 boosts · 232 views · 3w

CoreAudio audio output doesn't work anymore after signing application
Hi, My application doesn't start playback anymore after signing it with entitlements.

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <key>com.apple.security.files.user-selected.read-only</key>
    <true/>
    <key>com.apple.security.device.audio-input</key>
    <true/>
    <key>com.apple.security.device.microphone</key>
    <true/>
    <key>com.apple.security.assets.music.read-write</key>
    <true/>
    <key>com.apple.security.network.server</key>
    <true/>
</dict>
</plist>

regards, Joël
1 reply · 0 boosts · 385 views · Apr ’24

kAudioHardwarePropertyDevices does not list AirPlay sound output device
The device-listing Core Audio API kAudioHardwarePropertyDevices does not list the AirPlay device if our virtual audio driver is selected as the sound output device in System Settings. This virtual audio driver is developed by us and is named BoomAudio. We need BoomAudio selected as the system sound output so that we can capture system audio and apply Boom effects/enhancements. But whenever BoomAudio is selected as the sound output, the AirPlay device does not appear in the device-list API, so we cannot play through to the AirPlay sound output device.
Steps:
- Select BoomAudio as the sound output in System Settings. (The issue also occurs if any other sound output device, such as headphones or internal speakers, is selected.)
- If an Apple TV is connected, do not AirPlay the system display; only the system's sound output should be AirPlayed.
- Build and run the attached sample project “SampleAirplayAudio”.
- Click the button “Sound Output Device List”.
Output: in the Xcode console, the AirPlay device does not get listed.
BoomAudio can be installed from the following path: https://d3jbf8nvvpx3fh.cloudfront.net/gdassets/airplaydts/Boom+2+Installer.zip
The sample project 'SampleAirplayAudio' is available at this path: https://d3jbf8nvvpx3fh.cloudfront.net/gdassets/airplaydts/SampleAirplayAudio.zip
We have already raised a bug report in Feedback Assistant: FB7543204.
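For anyone reproducing this, a minimal sketch of the enumeration in question, filtering by transport type to spot AirPlay devices; kAudioDevicePropertyTransportType and kAudioDeviceTransportTypeAirPlay come from AudioHardware.h, the rest is illustrative:

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>
#include <stdlib.h>

// Sketch: list all devices and report which of them use the AirPlay transport.
int main(void) {
    AudioObjectPropertyAddress devicesAddr = { kAudioHardwarePropertyDevices,
                                               kAudioObjectPropertyScopeGlobal,
                                               kAudioObjectPropertyElementMain };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddr, 0, NULL, &size);
    UInt32 count = size / sizeof(AudioObjectID);
    AudioObjectID *devices = malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddr, 0, NULL, &size, devices);

    for (UInt32 i = 0; i < count; i++) {
        AudioObjectPropertyAddress transportAddr = { kAudioDevicePropertyTransportType,
                                                     kAudioObjectPropertyScopeGlobal,
                                                     kAudioObjectPropertyElementMain };
        UInt32 transport = 0, transportSize = sizeof(transport);
        AudioObjectGetPropertyData(devices[i], &transportAddr, 0, NULL, &transportSize, &transport);
        printf("device 0x%x transport '%c%c%c%c'%s\n", (unsigned)devices[i],
               (int)((transport >> 24) & 0xFF), (int)((transport >> 16) & 0xFF),
               (int)((transport >> 8) & 0xFF), (int)(transport & 0xFF),
               transport == kAudioDeviceTransportTypeAirPlay ? "  <-- AirPlay" : "");
    }
    free(devices);
    return 0;
}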
0 replies · 0 boosts · 295 views · Apr ’24

Recording Unprocessed Audio From Multiple Microphones iOS
Hi all, I'm working on an app that involves measuring the heading of one iPhone relative to another iPhone. I need to be able to record audio from at least two of the built-in data sources at once. Does anyone know how I can achieve this? I've found that, when using the .measurement mode for an AVAudioSession, the stereo polar pattern is not available. It also doesn't seem possible to select multiple data sources. Is there something I'm missing? If this is not possible, why not?
0 replies · 0 boosts · 372 views · Mar ’24

Failed to start VoiceProcessingIO AudioUnit on Vision Pro (visionOS 1.1.1)
Hello,
We are trying to use audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We set up the AVAudioSession as follows:

[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionAllowBluetoothA2DP | AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];

We create our audio unit as follows:

AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;
AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit), "couldn't create a new instance of Apple Voice Processing IO.");

UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, audioUnitElementIOInput, &one_, sizeof(one_)), "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, audioUnitElementIOOutput, &one_, sizeof(one_)), "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");

UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_), "couldn't get max frames per slice on Apple Voice Processing IO");

AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, 0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");

As soon as we try to start the AudioUnit we get the following error:

PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}

We do not use the PHASE framework on our side, and the error is not clear to us nor documented anywhere. We also tried an AudioUnit that only does playback, which works perfectly, but as soon as we try to record from an AudioUnit the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suppose that somehow inside PHASE a VoIP IO audio unit is running that prevents us from stopping/killing it when we try to create our own, and that stops the whole flow. It used to work on visionOS 1.0.1.
Regards, Summit-tech
0 replies · 0 boosts · 493 views · Mar ’24

wot WHAT is what? Unexpected CoreAudio get property errors/behaviors
I see unexpected behavior when using AudioObjectGetPropertyData() to get the Channel Number Name or the Channel Category Name for the iPhone Microphone or the MacBook Pro Microphone audio devices. I am running macOS 14.4 Sonoma on an Intel MacBook Pro 15" 2019. I have a test program that loops through all audio devices on a system, and all channels on each device. It uses AudioObjectGetPropertyData() to get the device name and manufacturer name and then iterates over the input and output channels, getting Channel Number Name, Channel Name, and Channel Category. I would expect some of these values (as Channel Name frequently is) to be empty CFStrings, or for AudioObjectHasProperty() to return FALSE if the driver does not implement the property. And that is how things behave for most devices... except for the MacBook Pro Microphone and iPhone Microphone devices. There, AudioObjectHasProperty() returns TRUE, but an AudioObjectGetPropertyData() call with the exact same AudioObjectPropertyAddress returns error code 'WHAT'. It took me a little while to realize the error code being returned was 'WHAT', not 'what', and I added a modified checkError() function to capture that and more. So what surprised me is: if AudioObjectHasProperty() returns TRUE, then I expect the matching AudioObjectGetPropertyData() to work. And what the heck is 'WHAT'? I assume it is supposed to mean 'what', aka kAudioHardwareUnspecifiedError. Why is that actual error value not returned? Are there other places that return 'WHAT' or capitalized versions of the standard OSStatus CoreAudio errors? The example program is not complex but is too long for here, so it's on GitHub at https://github.com/Darryl-Ramm/Wot. Here is some output from that program showing the unexpected behavior: output.txt
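The post's checkError() helper isn't shown; a minimal sketch of one common way to print an OSStatus as a four-character code (so 'WHAT' versus 'what' becomes visible) might look like this:

#include <CoreAudio/AudioHardware.h>
#include <ctype.h>
#include <stdio.h>

// Sketch: print an OSStatus either as a four-character code (when all four
// bytes are printable) or as a plain decimal number.
static void checkError(OSStatus error, const char *operation) {
    if (error == noErr) return;
    char code[7] = { '\'', 0, 0, 0, 0, '\'', 0 };
    code[1] = (char)((error >> 24) & 0xFF);
    code[2] = (char)((error >> 16) & 0xFF);
    code[3] = (char)((error >> 8) & 0xFF);
    code[4] = (char)(error & 0xFF);
    if (isprint((unsigned char)code[1]) && isprint((unsigned char)code[2]) &&
        isprint((unsigned char)code[3]) && isprint((unsigned char)code[4]))
        fprintf(stderr, "Error: %s (%s)\n", operation, code);
    else
        fprintf(stderr, "Error: %s (%d)\n", operation, (int)error);
}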
3 replies · 0 boosts · 462 views · Mar ’24

Is AudioObjectShow() supposed to work? On what audio objects?
Hi,
Hopefully a simple question. I just reached for AudioObjectShow() to help debug stuff and it does not appear to work on audio devices or audio streams. It prints nothing for them. It does work on kAudioObjectSystemObject; I've not explored what else it does or does not work on. I could not find any other posts about this. Is it expected to work, and on all audio objects? I'm on macOS 14.4. Here is a simple demo. AudioObjectShow() prints out info for kAudioObjectSystemObject but then prints nothing as we loop through the audio devices in the system (and the same for all streams on all these devices, but I'm not showing that here).

#include <CoreAudio/AudioHardware.h>

static const AudioObjectPropertyAddress devicesAddress = {
    kAudioHardwarePropertyDevices,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

static const AudioObjectPropertyAddress nameAddress = {
    kAudioObjectPropertyName,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

int main(int argc, const char *argv[]) {
    UInt32 size;
    AudioObjectID *audioDevices;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size);
    audioDevices = (AudioObjectID *) malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size, audioDevices);
    UInt32 nDevices = size / sizeof(AudioObjectID);
    printf("--- AudioObjectShow(kAudioObjectSystemObject):\n");
    AudioObjectShow(kAudioObjectSystemObject);
    for (int i = 0; i < nDevices; i++) {
        printf("-------------------------------------------------\n");
        printf("audioDevices[%d] = 0x%x\n", i, audioDevices[i]);
        AudioObjectGetPropertyDataSize(audioDevices[i], &nameAddress, 0, NULL, &size);
        CFStringRef cfString = malloc(size);
        AudioObjectGetPropertyData(audioDevices[i], &nameAddress, 0, NULL, &size, &cfString);
        CFShow(cfString);
        // AudioObjectShow() give us anything?
        printf("--- AudioObjectShow(audioDevices[%d]=0x%x):\n", i, audioDevices[i]);
        AudioObjectShow(audioDevices[i]);
        printf("---\n");
    }
}

Start of output...

AudioObjectID: 0x1
Class: Audio System Object
Name: The Audio System Object
-------------------------------------------------
audioDevices[0] = 0xd2
Darryl’s iPhone Microphone
--- AudioObjectShow(audioDevices[0]=0xd2):
---
-------------------------------------------------
audioDevices[1] = 0x41
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[1]=0x41):
---
-------------------------------------------------
audioDevices[2] = 0x3b
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[2]=0x3b):
---
-------------------------------------------------
audioDevices[3] = 0x5d
BlackHole 16ch
--- AudioObjectShow(audioDevices[3]=0x5d):
---
-------------------------------------------------
1 reply · 0 boosts · 429 views · Mar ’24

Issue with kAudioProcessPropertyDevices property
The CoreAudio framework has a process-class property, kAudioProcessPropertyDevices, which is used to obtain an array of AudioObjectIDs representing the devices a process is currently using for input or output. But this property behaves incorrectly: if a process switches from one microphone to another while streaming, the property returns the output device ID as the input device ID.
Steps to reproduce:
- run FaceTime
- select "MacBook Pro Microphone" as the input device from the FaceTime menu
- select "MacBook Pro Speaker" as the output device from the FaceTime menu
- start a call
- get kAudioProcessPropertyDevices for the input scope: returns ID1, "MacBook Pro Microphone" [CORRECT]
- get kAudioProcessPropertyDevices for the output scope: returns ID2, "MacBook Pro Speaker" [CORRECT]
- change the input device in the FaceTime menu to any other microphone ("AirPods Pro", ID3)
- get kAudioProcessPropertyDevices for the input scope: returns ID2, "MacBook Pro Speaker", but should be ID3, "AirPods Pro" [INCORRECT]
- get kAudioProcessPropertyDevices for the output scope: returns ID2, "MacBook Pro Speaker" [CORRECT]
Monitoring the property change for kAudioProcessPropertyDevices could provide a means to track audio-streaming processes, but its current flaw renders it unusable. So I'm curious whether the macOS developers plan to address this issue in future releases, or whether anyone can come up with a reliable alternative for identifying processes and the associated audio devices being used for playback or recording.
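For context, a sketch of how this property is typically queried: translate a PID to its process object, then read kAudioProcessPropertyDevices per scope. Both properties are recent process-object additions to AudioHardware.h, and the PID value below is a placeholder:

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>

// Sketch: read the input-scope device list of one process (the PID is hypothetical).
int main(void) {
    pid_t pid = 12345;   // placeholder for the FaceTime PID
    AudioObjectPropertyAddress translateAddr = { kAudioHardwarePropertyTranslatePIDToProcessObject,
                                                 kAudioObjectPropertyScopeGlobal,
                                                 kAudioObjectPropertyElementMain };
    AudioObjectID process = kAudioObjectUnknown;
    UInt32 size = sizeof(process);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &translateAddr,
                               sizeof(pid), &pid, &size, &process);

    AudioObjectPropertyAddress devicesAddr = { kAudioProcessPropertyDevices,
                                               kAudioObjectPropertyScopeInput,
                                               kAudioObjectPropertyElementMain };
    AudioObjectID devices[16];
    size = sizeof(devices);
    if (AudioObjectGetPropertyData(process, &devicesAddr, 0, NULL, &size, devices) == noErr) {
        for (UInt32 i = 0; i < size / sizeof(AudioObjectID); i++)
            printf("input device in use: 0x%x\n", (unsigned)devices[i]);
    }
    return 0;
}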
0 replies · 1 boost · 360 views · Mar ’24

With the macOS Sonoma 14.4 update there are no rights to relaunch coreaudiod
Some of our installers suddenly became broken for users running the latest version of macOS. I found that the reason is that we install a Core Audio HAL driver and, because I wanted to avoid a system reboot, I relaunched the Core Audio daemon from a pkg post-install script:

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod

With the OS update the command fails if the computer has SIP enabled (which is the default):

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted

It would be super nice if either the change could be reverted, OR I and people in a similar situation could learn a workaround for how to hot-plug (and unplug) such a HAL driver.
5 replies · 1 boost · 3.1k views · Apr ’24

Audio Workgroups: Aux threads joined to workgroup executed on E-Cores when App is in background
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core but has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
0 replies · 0 boosts · 451 views · Mar ’24

Audio Server Plugin entitlements and communication
I am currently planning a multi-component software system that consists of an Audio Server plug-in and an application for user interaction. I have very little experience with IPC/XPC and its performance implications, so I hope I can find a little guidance here. The Audio Server plug-in publishes a number of multi-channel output devices on which it should perform computations and pass the result on to a different Core Audio device. My concerns here are:
- Can the plug-in directly access other Core Audio devices for audio output, or is this prohibited by the sandboxing?
- If it cannot, would relaying the audio data via XPC be a good idea in terms of low-latency stability?
- Can I use Metal compute from within the Audio Server plug-in? I have not found any information about Metal-related sandboxing entitlements, and I am also concerned about the performance implications, as above.
Regarding the user-interface application, I would like to know:
- whether a process that has not been started by launchd can communicate with the Audio Server plug-in using XPC;
- if not, whether a user agent instead of an app would be a better choice, or whether there are other communication channels that would work with sandboxing.
Thank you very much! Andreas
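On the XPC question, a minimal sketch of the C XPC API as seen from the plug-in side, assuming a launchd-registered Mach service; the service name "com.example.audiocontrol" and the message keys are hypothetical, and bulk audio data would likely need a different path (e.g. shared memory), since per-buffer XPC messages add latency:

#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

// Sketch: connect to a hypothetical helper registered with launchd under a
// Mach service name, and send a small control message.
static xpc_connection_t ConnectToHelper(void) {
    xpc_connection_t conn = xpc_connection_create_mach_service(
        "com.example.audiocontrol", NULL, 0);
    xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
        (void)event;   // handle replies and connection errors here
    });
    xpc_connection_resume(conn);
    return conn;
}

static void SendGain(xpc_connection_t conn, double gain) {
    xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(msg, "command", "set-gain");   // hypothetical message keys
    xpc_dictionary_set_double(msg, "value", gain);
    xpc_connection_send_message(conn, msg);
    xpc_release(msg);
}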
0 replies · 0 boosts · 578 views · Feb ’24

CoreAudio server plugin: 24bit big endian audio streams
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the CoreAudio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers for our CoreAudio server plug-in. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to CoreAudio? Thanks, hagen.
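For reference, a sketch of how an AudioStreamBasicDescription for packed 24-bit big-endian integer PCM is usually filled out; whether the HAL actually accepts it as a physical format is the open question here, and the sample rate and channel count are illustrative:

#include <CoreAudio/CoreAudioTypes.h>

// Sketch: packed 24-bit big-endian signed integer PCM, stereo, 48 kHz.
// Three bytes per sample, no padding; the byte counts follow directly from that.
static AudioStreamBasicDescription Make24BitBigEndianASBD(void) {
    const UInt32 channels = 2;
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 48000.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsBigEndian
                           | kAudioFormatFlagIsSignedInteger
                           | kAudioFormatFlagIsPacked;
    asbd.mBitsPerChannel   = 24;
    asbd.mChannelsPerFrame = channels;
    asbd.mBytesPerFrame    = 3 * channels;          // 24 bits = 3 bytes per sample
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    return asbd;
}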
0 replies · 0 boosts · 469 views · Feb ’24

ClassInfo Audio Unit Property not being set
I have a music player that is able to save and restore AU parameters using the kAudioUnitProperty_ClassInfo property. For non-Apple AUs this works fine. But for any of the Apple units, the class info can be set only the first time after the audio graph is built; subsequent sets of the property do not stick, even though the OSStatus code is 0 upon return. Previously this had worked fine, but at some point (I'm not sure when) the Apple-provided AUs changed their behavior, and this is now causing me problems. Can anyone help shed light on this? Thanks in advance for the help. Jeff Frey
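For context, a sketch of the usual save/restore sequence with the v2 C API; the AUParameterListenerNotify step is the commonly documented way to tell listeners that a restored preset changed the parameters, not a claim about why the Apple units ignore later sets:

#include <AudioToolbox/AudioToolbox.h>
#include <AudioToolbox/AudioUnitUtilities.h>

// Sketch: capture an audio unit's state and restore it later.
static CFPropertyListRef SaveState(AudioUnit unit) {
    CFPropertyListRef classInfo = NULL;
    UInt32 size = sizeof(classInfo);
    OSStatus err = AudioUnitGetProperty(unit, kAudioUnitProperty_ClassInfo,
                                        kAudioUnitScope_Global, 0, &classInfo, &size);
    return err == noErr ? classInfo : NULL;   // caller owns the returned property list
}

static OSStatus RestoreState(AudioUnit unit, CFPropertyListRef classInfo) {
    OSStatus err = AudioUnitSetProperty(unit, kAudioUnitProperty_ClassInfo,
                                        kAudioUnitScope_Global, 0,
                                        &classInfo, sizeof(classInfo));
    if (err == noErr) {
        // Tell any parameter listeners that everything may have changed.
        AudioUnitParameter changed = { .mAudioUnit = unit,
                                       .mParameterID = kAUParameterListener_AnyParameter };
        AUParameterListenerNotify(NULL, NULL, &changed);
    }
    return err;
}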
0 replies · 0 boosts · 534 views · Jan ’24