AudioToolbox
Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.
Posts under AudioToolbox tag
20 Posts
At a 48 kHz sampling rate, Vision Pro cannot capture recorded data above 16 kHz (up to 24 kHz). Why? Or can you tell me how to configure it?
Hi,
we have multiple threads in our CoreAudio server plugin carrying out necessary asynchronous work (namely handling USB callbacks and shuffling the required data to the IO).
Although these threads have been set up with the appropriate THREAD_TIME_CONSTRAINT_POLICY (which does improve things), on M* processors they exhibit an extremely high, non-realtime amount of jitter of more than 10 ms(!).
Either the runloop notification from the USB stack arrives that late, or the thread driving the runloop hasn't been set up to handle the callbacks in a timely manner.
Since AudioUnit threads that must meet frame deadlines can join the workgroup of the audio device, is there a similar opportunity for CoreAudio server plugin threads? And if so, how should they be set up correctly?
Thanks for any hints! Or pointing me to the docs :)
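For reference, the app-side pattern for joining a device's workgroup looks like the sketch below (Swift; joinWorkgroup(of:) is a hypothetical helper, and whether the same mechanism is available to server plugin threads is exactly the open question above). It fetches kAudioDevicePropertyIOThreadOSWorkgroup (macOS 11+) and joins the calling thread with os_workgroup_join:
import CoreAudio
import os

// Hypothetical helper: fetch the OS workgroup of an audio device's IO
// thread and join the calling thread to it. This is the documented
// app-side pattern for realtime worker threads.
func joinWorkgroup(of deviceID: AudioObjectID) -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyIOThreadOSWorkgroup,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var workgroup: os_workgroup_t?
    var size = UInt32(MemoryLayout<os_workgroup_t?>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &workgroup) == noErr,
          let wg = workgroup else { return false }
    // Join the current thread so the scheduler can coordinate it with the
    // device's IO deadlines; leave later with os_workgroup_leave.
    var token = os_workgroup_join_token_s()
    return os_workgroup_join(wg, &token) == 0
}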
Hi,
I am hitting a trap. Please check the stack trace; how do I fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
When I connect my MacBook to my living room AirPort (older gen wallwart) via Music app, the music output in both rooms is synced.
When I try to set up a Multi-Output Device in Audio MIDI Setup, I'm not able to get them synced. I'm outputting to the same devices, they're all on the same sample rate, and I've played with the various settings (Primary Clock Source and Drift Sync). What gives? How are these connections different?
Intel MacBook Pro 2018 running Sonoma 14.5
Sometimes when I call AudioWorkIntervalCreate the call hangs with the following stack trace. The call is made on the main thread.
mach_msg2_trap 0x00007ff801f0b3ce
mach_msg2_internal 0x00007ff801f19d80
mach_msg_overwrite 0x00007ff801f12510
mach_msg 0x00007ff801f0b6bd
HALC_Object_AddPropertyListener 0x00007ff8049ea43e
HALC_ProxyObject::HALC_ProxyObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff8047f97f2
HALC_ProxyObjectMap::_CreateObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff80490f69c
HALC_ProxyObjectMap::CopyObjectByObjectID(unsigned int) 0x00007ff80490ecd6
HALC_ShellPlugIn::_ReconcileDeviceList(bool, bool, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&) 0x00007ff8045d68cf
HALB_CommandGate::ExecuteCommand(void () block_pointer) const 0x00007ff80492ed14
HALC_ShellObject::ExecuteCommand(void () block_pointer) const 0x00007ff80470f554
HALC_ShellPlugIn::ReconcileDeviceList(bool, bool) 0x00007ff8045d6414
HALC_ShellPlugIn::ConnectToServer() 0x00007ff8045d74a4
HAL_HardwarePlugIn_InitializeWithObjectID(AudioHardwarePlugInInterface**, unsigned int) 0x00007ff8045da256
HALPlugInManagement::CreateHALPlugIn(HALCFPlugIn const*) 0x00007ff80442f828
HALSystem::InitializeDevices() 0x00007ff80442ebc3
HALSystem::CheckOutInstance() 0x00007ff80442b696
AudioObjectAddPropertyListener_mac_imp 0x00007ff80469b431
auoop::WorkgroupManager_macOS::WorkgroupManager_macOS() 0x00007ff8040fc3d5
auoop::gWorkgroupManager() 0x00007ff8040fc245
AudioWorkIntervalCreate 0x00007ff804034a33
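Since the backtrace shows the first AudioWorkIntervalCreate call lazily bootstrapping the HAL client (HALSystem::CheckOutInstance and friends), one hedged workaround is to issue that first call off the main thread so the blocking Mach message exchange cannot stall the UI. A minimal sketch; the interval name is made up:
import AudioToolbox
import Foundation

// Sketch: trigger the lazy HAL initialization away from the main thread.
// "com.example.render" is a made-up interval name.
DispatchQueue.global(qos: .userInitiated).async {
    let workgroup = AudioWorkIntervalCreate("com.example.render",
                                            OS_CLOCK_MACH_ABSOLUTE_TIME, nil)
    // Hand the workgroup off to whatever owns the realtime threads.
    _ = workgroup
}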
Hey all!
I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)
When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that, compared to the stock iOS camera app, they are mono, not stereo.
I wonder how recording in stereo audio works; are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
  ReactLogger.log(level: .info, message: "Configuring Audio Session...")

  // Prevent iOS from automatically configuring the Audio Session for us
  audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  let enableAudio = configuration.audio != .disabled

  // Check microphone permission
  if enableAudio {
    let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    if audioPermissionStatus != .authorized {
      throw CameraError.permission(.microphone)
    }
  }

  // Remove all current inputs
  for input in audioCaptureSession.inputs {
    audioCaptureSession.removeInput(input)
  }
  audioDeviceInput = nil

  // Audio Input (Microphone)
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio input...")
    guard let microphone = AVCaptureDevice.default(for: .audio) else {
      throw CameraError.device(.microphoneUnavailable)
    }
    let input = try AVCaptureDeviceInput(device: microphone)
    guard audioCaptureSession.canAddInput(input) else {
      throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
    }
    audioCaptureSession.addInput(input)
    audioDeviceInput = input
  }

  // Remove all current outputs
  for output in audioCaptureSession.outputs {
    audioCaptureSession.removeOutput(output)
  }
  audioOutput = nil

  // Audio Output
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    let output = AVCaptureAudioDataOutput()
    guard audioCaptureSession.canAddOutput(output) else {
      throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
    }
    output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
    audioCaptureSession.addOutput(output)
    audioOutput = output
  }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])
if #available(iOS 14.5, *) {
  // prevents the audio session from being interrupted by a phone call
  try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
  // allow system sounds (notifications, calls, music) to play while recording
  try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono.
Is there anything I'm missing here?
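One avenue worth checking (a sketch, not a confirmed fix): with automaticallyConfiguresApplicationAudioSession disabled, the shared AVAudioSession won't request a stereo input on its own, so you can ask for two input channels explicitly before starting the session and then verify what the route actually provides:
import AVFoundation

// Sketch: request a stereo input on the shared audio session before the
// capture session starts. Whether the route really supports two channels
// depends on hardware; check inputNumberOfChannels afterwards.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .videoRecording,
                        options: [.allowBluetoothA2DP, .defaultToSpeaker])
try session.setActive(true)
if session.maximumInputNumberOfChannels >= 2 {
    try session.setPreferredInputNumberOfChannels(2)
}
print("Input channels:", session.inputNumberOfChannels)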
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089 DataSource read failed
MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105 OpenFromDataSource failed
AudioFileObject.cpp:80 Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
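One thing the minimal examples rely on implicitly, and a plausible match for the empty "0 ch, 0 bits/channel" source format in the log, is an active, record-capable AVAudioSession. A sketch reusing the post's recorder/tempFileURL() names (an assumption, not a confirmed diagnosis):
import AVFoundation

// Sketch: activate a record-capable session before creating the recorder;
// without one, the input side can come up with an empty source format.
func startRecording() throws
{
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
    recorder?.delegate = self
    recorder?.record()
}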
I'm trying to add a USB mic to my Mac mini running the latest Sonoma release, but the audio is full of crackles. Why isn't it clean?
I'm developing an iOS application that uses Core Audio. When I run the app on an Apple Silicon MacBook, the first time I call AudioUnitSetProperty the following error is logged:
CARP violation: using HAL semantics (AUIOImpl_Base)
Are others seeing this, and is it part of the normal process?
I'm also getting AQMEIO_HAL.cpp:862 kAudioDevicePropertyMute returned err 2003332927 when I set kAudioOutputUnitProperty_EnableIO for input.
There is a CustomPlayer class that uses MTAudioProcessingTap to modify the audio buffer.
Say there are two instances, A and B, of the CustomPlayer class.
While A and B are both running, the moment A finishes its operation and is deallocated, B's MTAudioProcessingTap process callback stops and its finalize callback fires, even though B still has work left to do.
On iOS 17.0 and lower, the same code in the same project does not behave this way: when A is terminated, B completes its task without any impact.
What changed in iOS 17.1 to cause this? I'd appreciate an answer on how to avoid the issue.
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap,
                                                            numberFrames,
                                                            bufferListInOut,
                                                            flagsOut,
                                                            &timeRange,
                                                            numberFramesOut)
            if noErr == status {
                ....
            }
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault,
                                            &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tap)
    guard noErr == status else {
        return
    }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
How do I add AudioToolbox to a project in Xcode 15.2?
Is there a way to generate tones, etc., in code?
Regards, Patrick
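Yes. One option is AVAudioSourceNode (iOS 13/macOS 10.15 and later), which pulls samples from a render callback you supply. A minimal sine-wave sketch following the common signal-generator pattern:
import AVFoundation

// Minimal sine generator using AVAudioSourceNode.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let frequency = 440.0
var phase = 0.0

let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * Double.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase)) * 0.25  // keep the level modest
        phase += increment
        if phase >= 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: nil)
try engine.start()  // propagate or handle the error in real code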
We've been doing the following in our app for years without issues:
[[NSSound soundNamed:@"Basso"] play]
Suddenly we're seeing hundreds of crashes from macOS 14.0 users, and we're not sure what's causing them. There are no memory leaks within the app, and all the stack traces are around NSSound:
0 AudioToolbox 0x1f558 MEDeviceStreamClient::RemoveRunningClient(AQIONodeClient&, bool, bool) + 3096
1 AudioToolbox 0x1e8fc AQMEDevice::RemoveRunningClient(AQIONodeClient&, bool) + 108
2 AudioToolbox 0x1e854 AQMixEngine_Base::RemoveRunningClient(AQIONodeClient&, bool) + 76
3 AudioToolbox 0xcdd78 AudioQueueObject::StopRunning(AQIONode*, bool) + 244
4 AudioToolbox 0xcbdd0 AudioQueueObject::Stop(bool, bool, int*) + 736
5 AudioToolbox 0xf1840 AudioQueueXPC_Server::Stop(unsigned int, bool) + 172
6 AudioToolbox 0x1418b4 ___ZN20AudioQueueXPC_Bridge4StopEjb_block_invoke + 72
7 libdispatch.dylib 0x3910 _dispatch_client_callout + 20
8 libdispatch.dylib 0x130f8 _dispatch_sync_invoke_and_complete_recurse + 64
9 AudioToolbox 0x141844 AudioQueueXPC_Bridge::Stop(unsigned int, bool) + 184
10 AudioToolbox 0xa09b0 AQ::API::V2Impl::AudioQueueStop(OpaqueAudioQueue*, unsigned char) + 492
11 AVFAudio 0xbe12c AVAudioPlayerCpp::disposeQueue(bool) + 188
12 AVFAudio 0x341dc -[AudioPlayerImpl dealloc] + 72
13 AVFAudio 0x358a0 -[AVAudioPlayer dealloc] + 36
14 AppKit 0x1b13b4 -[NSAVAudioPlayerSoundEngine dealloc] + 44
15 AppKit 0x1b132c -[NSSound dealloc] + 164
16 libobjc.A.dylib 0xf418 AutoreleasePoolPage::releaseUntil(objc_object**) + 196
17 libobjc.A.dylib 0xbaf0 objc_autoreleasePoolPop + 260
18 CoreFoundation 0x3c57c _CFAutoreleasePoolPop + 32
19 Foundation 0x30e88 -[NSAutoreleasePool drain] + 140
20 Foundation 0x31f94 _NSAppleEventManagerGenericHandler + 92
21 AE 0xbd8c _AppleEventsCheckInAppWithBlock + 13808
22 AE 0xb6b4 _AppleEventsCheckInAppWithBlock + 12056
23 AE 0x4cc4 aeProcessAppleEvent + 488
24 HIToolbox 0x402d4 AEProcessAppleEvent + 68
25 AppKit 0x3a29c _DPSNextEvent + 1440
26 AppKit 0x80db94 -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 716
27 AppKit 0x2d43c -[NSApplication run] + 476
28 AppKit 0x4708 NSApplicationMain + 880
29 ??? 0x180739058 (Missing)
Hi!
I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and working fine (meaning that it is regularly called by the system).
os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size = sizeof(os_workgroup);  // must be initialized to the buffer size
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0,
                                  &os_workgroup, &os_workgroup_index_size);
    status != noErr)
{
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " +
                        to_string(status));
}
However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup. The returned status, however, is a solid 0.
Can anyone help?
Hello developers,
we have an issue opening an Apple MPEG-4 audio file that apparently has a correct header but then no actual audio data. The file is 594 bytes, completely freezes the app's main thread, and never returns from any of these calls:
NSURL *fileURL = [NSURL fileURLWithPath:filePath];
NSError *error;
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error]; // freezes (call stack below)
AVAudioFile *audioFile = [[AVAudioFile alloc] initForReading:fileURL error:&error]; // freezes
AudioFileID audioFileID;
OSStatus result = AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &audioFileID); // freezes
Pausing the debugger reveals where it is stuck:
#0 0x00007ff81b7683f9 in MP4BoxParser_Track::GetSampleTableBox() ()
#1 0x00007ff81b76785a in MP4BoxParser_Track::GetInfoFromTrackSubBoxes() ()
#2 0x00007ff81b93fde5 in MP4AudioFile::UseAudioTrack(void*, unsigned int, unsigned int) ()
#3 0x00007ff81b93ab2c in MP4AudioFile::OpenFromDataSource() ()
#4 0x00007ff81b72ee85 in AudioFileObject::Open(__CFURL const*, signed char, int) ()
#5 0x00007ff81b72ed9d in AudioFileObject::DoOpen(__CFURL const*, signed char, int) ()
#6 0x00007ff81b72e1f0 in AudioFileOpenURL ()
#7 0x00007ffa382e8183 in -[AVAudioPlayer initWithContentsOfURL:fileTypeHint:error:] ()
With any of the three calls the call stack is slightly different, but in the end all of them get stuck forever in MP4BoxParser_Track::GetSampleTableBox().
I'm attaching the offending audio file to the post (just rename it back to .m4a):
Audio_21072023_10462282.crash
How can we avoid this and verify that an audio file is openable and playable? Previously, we checked whether a file we believe to be audio contains any data; if so, we created an AVAudioPlayer with it and checked that it returned no errors and that the duration was greater than 0. This bug breaks that fundamental logic, and we have now added a hotfix hack that checks whether the data is at least 600 bytes long. How do we solve this correctly if none of the methods above return an error but instead all hang?
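Until the parser hang itself is fixed, one defensive option is to probe the file off the main thread with a timeout so a stuck open can no longer freeze the UI. A sketch in Swift for brevity, with an arbitrary 2-second timeout; note that a hung worker thread still leaks until the call eventually returns:
import AVFoundation

// Workaround sketch: attempt the open on a background queue and give up
// after `timeout` seconds, keeping the main thread responsive.
func canOpenAudioFile(at url: URL, timeout: TimeInterval = 2.0) -> Bool {
    let semaphore = DispatchSemaphore(value: 0)
    var openable = false
    DispatchQueue.global(qos: .utility).async {
        if let player = try? AVAudioPlayer(contentsOf: url), player.duration > 0 {
            openable = true
        }
        semaphore.signal()
    }
    return semaphore.wait(timeout: .now() + timeout) == .success && openable
}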
I keep getting this error and I've been stuck for days. Any advice? I cannot figure this out: Asset validation failed: Missing Info.plist value. A value for the Info.plist key 'CFBundleIconName' is missing in the bundle. Apps built with the iOS 11 or later SDK must supply app icons in an asset catalog and must also provide a value for this Info.plist key.
This crash only occurs on iOS 17.0.
Crash Stack:
#136 Thread
SIGSEGV
SEGV_ACCERR
0 AudioToolbox AQ::API::V2Impl::AllocateBuffer(OpaqueAudioQueue*, void*, unsigned int, AudioStreamPacketDescription*, unsigned int, AudioQueueBuffer**) + 772
1 AudioToolbox AQ::API::V2Impl::AllocateBuffer(OpaqueAudioQueue*, void*, unsigned int, AudioStreamPacketDescription*, unsigned int, AudioQueueBuffer**) + 584
Related code:
- (instancetype)init {
    if (self = [super init]) {
        // settings
        int Channels = 2;
        int bytesPerFrame = 2 * Channels;

        AudioStreamBasicDescription streamDesc;
        streamDesc.mSampleRate = 44100;
        streamDesc.mFormatID = kAudioFormatLinearPCM;
        streamDesc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        streamDesc.mBytesPerPacket = bytesPerFrame;
        streamDesc.mFramesPerPacket = 1;
        streamDesc.mBytesPerFrame = bytesPerFrame;
        streamDesc.mChannelsPerFrame = Channels;
        streamDesc.mBitsPerChannel = 16;
        streamDesc.mReserved = 0;

        // queue
        AudioQueueNewOutput(&streamDesc, AEAudioQueueOutputCallback, (__bridge void * _Nullable)(self), nil, nil, 0, &_playQueue);
        AudioQueueSetParameter(_playQueue, kAudioQueueParam_Volume, 1.0);

        // buffers
        for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
            AudioQueueAllocateBuffer(_playQueue, MIN_SIZE_PER_FRAME, _bufferList + i);
        }
    }
    return self;
}
Hello,
I've encountered a recurring issue while trying to play back live streams using AVPlayer in an iOS app. The video stream is being delivered via Amazon Kinesis Video Streams (KVS) using HLS.
The specific issue is that audio frequently gets interrupted during playback. The video continues to play just fine, but the audio stops. This issue seems to occur only on iOS devices and not on other platforms or players.
When I check the console logs, I see a number of error messages that may be related to the issue:
2023-05-11 20:57:27.494719+0200 Development[53868:24121620] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2023-05-11 20:57:27.534340+0200 Development[53868:24121620] [aqme] AQMEIO.cpp:199 timed out after 0.011s (6269 6269); suspension count=0 (IOSuspensions: )
2023-05-11 20:57:30.592067+0200 Development[53868:24122309] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2023-05-11 20:57:30.592400+0200 Development[53868:24122309] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
I've attempted to troubleshoot this issue in various ways, including trying different iOS devices and networks. I've also attempted to use VLC's player on iOS, which doesn't have the audio interruption issue, but it does encounter other problems.
I believe there might be some compatibility issue between AVPlayer and KVS. I've posted a similar issue on the Amazon KVS GitHub repo but I am reaching out here to see if anyone has faced a similar issue with AVPlayer and has found a solution or can provide some guidance.
Has anyone encountered this issue before, or does anyone have suggestions on how to address it? Any help would be greatly appreciated!
I am trying to set up a kAudioUnitSubType_VoiceProcessingIO audio unit for a VoIP macOS app. I tried kAudioUnitSubType_HALOutput first, but it was not suitable due to echo and noise, which makes sense.
kAudioUnitSubType_VoiceProcessingIO seemed promising. However, when using it with a bluetooth headset for output and built-in mic for input, volume gets really low compared to other device setups such as both internal input and output, or both bluetooth headset input and output. The alternative setups seem to work fine, but we can't request users to avoid specific setups. This makes kAudioUnitSubType_VoiceProcessingIO unusable for a macOS app.
Is kAudioUnitSubType_VoiceProcessingIO production ready for current macOS apps? Is there any way to avoid the volume issues?
Adding manual gain is not a workaround because voice becomes really distorted.
Thanks.
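One variable that may be worth isolating (an experiment sketch, not a confirmed fix) is the voice unit's built-in automatic gain control, which can be toggled with kAUVoiceIOProperty_VoiceProcessingEnableAGC; here voiceUnit stands for the configured VoiceProcessingIO instance:
import AudioToolbox

// Experiment: turn off the voice unit's automatic gain control and compare
// input levels across the problematic and the working device routes.
var enableAGC: UInt32 = 0
let status = AudioUnitSetProperty(voiceUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global, 0,
                                  &enableAGC,
                                  UInt32(MemoryLayout<UInt32>.size))
assert(status == noErr, "failed to toggle AGC")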