Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post | Replies | Boosts | Views | Activity

Bluetooth speaker makes installTap stop calling back after the first few seconds

If I have a Bluetooth speaker connected and installTap installed on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route-change or other notification handler called in between. engine.inputNode.removeTap(onBus: 0) engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in guard let channelData = buffer.floatChannelData else { return } // This callback stops firing after some time. } Not sure if this is expected, but some other applications seem to work fine. If I remove the Bluetooth device, my input works fine. I also have no issues with output on the speaker.
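One frequent culprit when a Bluetooth device is involved (an assumption, since the session setup isn't shown) is a route or sample-rate change shortly after the engine starts, which can silently stop a tap installed with a stale format. A minimal sketch of configuring the session explicitly and rebuilding the tap on every route change:

```swift
import AVFoundation

// Sketch, not a confirmed fix: configure the session explicitly and rebuild the tap on
// every route change, using the node's *current* format so a Bluetooth-induced
// sample-rate change doesn't leave a stale tap behind. .allowBluetooth enables the
// HFP route (the only Bluetooth route with a microphone); A2DP is output-only.
func startTapping(engine: AVAudioEngine, handler: @escaping (AVAudioPCMBuffer) -> Void) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default,
                            options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try session.setActive(true)

    func installTap() {
        let input = engine.inputNode
        input.removeTap(onBus: 0)
        let format = input.outputFormat(forBus: 0)   // always use the current hardware format
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            handler(buffer)
        }
    }

    installTap()
    NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                           object: nil, queue: .main) { _ in
        installTap()
    }
}
```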
2
0
251
Sep ’24
iOS 18.0 bug - initial AVAudioSession.outputVolume returns zero
Is anyone experiencing an issue with the initial AVAudioSession.sharedInstance().outputVolume returning 0 after updating to iOS 18.0? I'm observing the outputVolume of AVAudioSession, and when I adjust the device volume, the changed value is reported correctly. However, when I read AVAudioSession.sharedInstance().outputVolume to get the current volume before any change has been observed, it returns 0, even though the device volume is not actually 0.
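If the value is only populated once the session is running (an assumption, not a confirmed iOS 18 behavior), a minimal sketch is to activate the session before the first read and rely on KVO with the .initial option rather than on an ad-hoc property read:

```swift
import AVFoundation

// Sketch: activate the session before the first read, then observe changes via KVO.
let session = AVAudioSession.sharedInstance()
try? session.setActive(true)
print("initial volume:", session.outputVolume)

let observation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
    if let volume = change.newValue {
        print("output volume:", volume)
    }
}
```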
1
1
276
Sep ’24
I/O buffer sizes for an audio driver based on IOUserAudioDevice

Dear Sirs, I've written an audio driver based on IOUserAudioDevice. In my IOOperationHandler I can receive and send the audio samples as expected. Is there any way to configure the number of samples transferred in each call? Currently it seems to be around 512 samples per call, which corresponds to about 10.7 ms at a 48 kHz sample rate. I'd like to achieve something like 48 or 96 samples per call. I did some experiments and tried calls to SetOutputLatency() etc., but so far I haven't found the right way to change the in_io_buffer_frame_size in the callback. I'd like to do this because smaller buffer sizes would allow lower latencies for the subsequent audio processing. Thanks and best regards, Johannes
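One possible angle (an assumption, not a driver-side answer): the I/O buffer size of a HAL device is normally requested per process by the client application via kAudioDevicePropertyBufferFrameSize, so a smaller in_io_buffer_frame_size may have to be asked for from the host side rather than set inside the driver. A minimal client-side sketch:

```swift
import CoreAudio

// Sketch: the host app requests a smaller I/O buffer on the device before starting I/O.
// Whether the hardware/driver honors the request is reported by the returned OSStatus
// and by reading the property back afterwards.
func requestBufferFrameSize(_ frames: UInt32, on deviceID: AudioObjectID) -> OSStatus {
    var size = frames
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyBufferFrameSize,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    return AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                      UInt32(MemoryLayout<UInt32>.size), &size)
}

// Usage: requestBufferFrameSize(96, on: myDeviceID)
```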
3
0
478
Jul ’24
MusicKit & React Native app, overlay part of a song to a video?
I have an app on which users learn choreography. To avoid copyright infringement we currently only have audio instructions and no music in the app. Could we enable those who are subscribed to Apple Music to listen to the part of a song that corresponds to the choreography? Usually these parts are 60 seconds long. The app is in React Native. Would it be possible to implement it so that opening a dance video automatically triggers playback of that song from, e.g., second 32 to 95? Since the video is looping, could it then start playing from second 32 again? I'm also looking for devs with experience integrating MusicKit for this use case, if it turns out to be possible.
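In principle this looks feasible with MusicKit on the native side, wrapped in a React Native module. A minimal sketch of the Swift half, assuming an active Apple Music subscription and a known catalog song ID (the 32–95 s window and the looping come from the question; the function name and polling interval are illustrative):

```swift
import MusicKit

// Sketch: play a catalog song, seek into the choreography window, and loop it by polling
// the playback position. A React Native bridge would call this from JS.
func playSegment(songID: MusicItemID,
                 from start: TimeInterval = 32,
                 to end: TimeInterval = 95) async throws {
    let request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: songID)
    guard let song = try await request.response().items.first else { return }

    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [song])
    try await player.play()
    player.playbackTime = start

    // Simple loop: jump back to the start of the window whenever the end is reached.
    Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
        if player.playbackTime >= end {
            player.playbackTime = start
        }
    }
}
```

Note that playback requires MusicKit authorization and an Apple Music subscriber; for non-subscribers only 30-second previews are available, which would not cover a 60-second segment.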
0
0
227
Sep ’24
ExtAudioFileRead throwing AVAudioSessionErrorCodeResourceNotAvailable error on iOS and iPadOS 18
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release. The following is also printed to the Xcode 16 console:
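As a small diagnostic aside (not a fix), OSStatus values from the audio frameworks are usually four-character codes; 561145203 decodes to "!res", which is the AVAudioSession "resource not available" code. A sketch of a helper for logging such codes:

```swift
import Foundation

// Render an OSStatus as its four-character code, e.g. 561145203 → "!res".
func fourCharCode(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((n >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}
```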
2
1
361
Sep ’24
Audio / Video sync issue on iOS using AVSampleBufferRenderSynchronizer
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronising two renderers: an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers, and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers. The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time set to the presentation timestamp of the first decoded image. Presentation timestamps of video and audio sample buffers are consistent, and on most streams the audio and video are correctly synchronized. However, on some network streams on iOS, the audio and video aren't synchronized, with a time difference that seems to increase over time. With the same player code and network streams on macOS, on the other hand, the synchronization always works fine. This reminds me of something I've read about cases where an AVSampleBufferRenderSynchronizer cannot synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again. So, any help / hints on this sync problem will be greatly appreciated! :)
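Without knowing the root cause, one small diagnostic sketch is to compare the synchronizer's clock against the audio renderer's own timebase once per second; if the two diverge on the problematic iOS streams but not on macOS, that would support the drifting-clock suspicion (names and the one-second interval are illustrative):

```swift
import AVFoundation
import CoreMedia

// Diagnostic sketch, not a fix: log how far the audio renderer's timebase drifts from
// the synchronizer's clock over time.
func startDriftMonitor(synchronizer: AVSampleBufferRenderSynchronizer,
                       audioRenderer: AVSampleBufferAudioRenderer) -> Timer {
    Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        let syncTime = synchronizer.currentTime()
        let audioTime = audioRenderer.timebase.time
        let drift = CMTimeSubtract(syncTime, audioTime)
        print("sync:", syncTime.seconds, "audio:", audioTime.seconds, "drift:", drift.seconds)
    }
}
```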
1
0
833
May ’23
CarPlay issue after iOS 18 update
After upgrading to iOS 18, CarPlay with a 2023 Lexus and an iPhone 15 Pro Max shows multiple issues: • speakers reduced to mono sound (going back to normal after some minutes and then degrading again) • no speaker sound at all • touching / moving the phone while driving results in "on and off" sound. No reboot / shutdown helps, and no cable connection works. @Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look like a professional approach. Please solve this dangerous (traffic!) and annoying issue ASAP! Thanks - Torsten
1
0
437
Sep ’24
Understanding the number of input channels in Core Audio
Hello everyone, I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device, using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, and not with 20 channels. To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm getting information that there is 1 channels. But when I probe the input audio unit, I'm getting information that there are 2 input channels. Here's my program to demonstrate the issue: // InputDeviceChannels.m // Compile with: // clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m // // On my system, this prints: // Device Name: MacBook Pro Microphone // Number of Channels (Stream Format): 1 // Number of Elements (Element Count): 2 #import <AudioToolbox/AudioToolbox.h> #import <AudioUnit/AudioUnit.h> #import <CoreAudio/CoreAudio.h> #import <Foundation/Foundation.h> void printDeviceInfo(AudioUnit audioUnit) { UInt32 size; OSStatus err; AudioStreamBasicDescription streamFormat; size = sizeof(streamFormat); err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &streamFormat, &size); if (err != noErr) { printf("Error getting stream format\n"); exit(1); } int numChannels = streamFormat.mChannelsPerFrame; UInt32 elementCount; size = sizeof(elementCount); err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &elementCount, &size); if (err != noErr) { printf("Error getting element count\n"); exit(1); } printf("Number of Channels (Stream Format): %d\n", numChannels); printf("Number of Elements (Element Count): %d\n", elementCount); } void printDeviceName(AudioDeviceID deviceID) { UInt32 size; OSStatus err; CFStringRef deviceName = NULL; size = sizeof(deviceName); err = AudioObjectGetPropertyData( deviceID, &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain}, 0, NULL, &size, &deviceName); if (err != noErr) { printf("Error getting device name\n"); exit(1); } char deviceNameStr[256]; if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr), kCFStringEncodingUTF8)) { printf("Error converting device name to C string\n"); exit(1); } CFRelease(deviceName); printf("Device Name: %s\n", deviceNameStr); } int main(int argc, const char *argv[]) { @autoreleasepool { OSStatus err; // Get the default input device ID AudioDeviceID input_device_id = kAudioObjectUnknown; { UInt32 property_size = sizeof(input_device_id); AudioObjectPropertyAddress input_device_property = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain, }; err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL, &property_size, &input_device_id); if (err != noErr || input_device_id == kAudioObjectUnknown) { printf("Error getting default input device ID\n"); exit(1); } } // Print the device name using the input device ID 
printDeviceName(input_device_id); // Open audio unit for the input device AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0}; AudioComponent component = AudioComponentFindNext(NULL, &desc); AudioUnit audioUnit; err = AudioComponentInstanceNew(component, &audioUnit); if (err != noErr) { printf("Error creating AudioUnit\n"); exit(1); } // Enable IO for input on the AudioUnit and disable output UInt32 enableInput = 1; UInt32 disableOutput = 0; err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enableInput, sizeof(enableInput)); if (err != noErr) { printf("Error enabling input on AudioUnit\n"); exit(1); } err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &disableOutput, sizeof(disableOutput)); if (err != noErr) { printf("Error disabling output on AudioUnit\n"); exit(1); } // Set the current device to the input device err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id)); if (err != noErr) { printf("Error setting device for AudioUnit\n"); exit(1); } // Initialize AudioUnit err = AudioUnitInitialize(audioUnit); if (err != noErr) { printf("Error initializing AudioUnit\n"); exit(1); } // Print device info printDeviceInfo(audioUnit); // Clean up AudioUnitUninitialize(audioUnit); AudioComponentInstanceDispose(audioUnit); } return 0; } It prints: Device Name: MacBook Pro Microphone Number of Channels (Stream Format): 1 Number of Elements (Element Count): 2 I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output. Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs. Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus. How should I understand this? Is it that that audio unit is an abstraction layer and automatically converts mono input into stereo input? Can I ask AUHAL to provide me the same number of input channels that the audio device has?
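One way to reconcile the two numbers (a sketch under assumptions, not a definitive answer): the element count of AUHAL's input element is a fixed property of the unit, while the format your render code actually receives is the client format on the output scope of element 1. You can query the device for its real input channel count and apply that to the client format, so AUHAL hands back as many channels as the hardware reports. Function names are mine, and the sketch assumes the default non-interleaved Float32 client format:

```swift
import CoreAudio
import AudioToolbox

// Sketch: read the device's input stream configuration, then set that channel count on
// the output scope of element 1 (the side of the input element your app pulls from).
// Call this before AudioUnitInitialize.
func matchInputChannels(device: AudioObjectID, audioUnit: AudioUnit) -> OSStatus {
    // 1. Ask the device how many input channels it actually has.
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    var err = AudioObjectGetPropertyDataSize(device, &address, 0, nil, &size)
    if err != noErr { return err }

    let raw = UnsafeMutableRawPointer.allocate(byteCount: Int(size),
                                               alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }
    let listPtr = raw.bindMemory(to: AudioBufferList.self, capacity: 1)
    err = AudioObjectGetPropertyData(device, &address, 0, nil, &size, listPtr)
    if err != noErr { return err }

    let channels = UnsafeMutableAudioBufferListPointer(listPtr)
        .reduce(UInt32(0)) { $0 + $1.mNumberChannels }

    // 2. Re-apply AUHAL's client format with the device's channel count.
    var format = AudioStreamBasicDescription()
    var formatSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Output, 1, &format, &formatSize)
    if err != noErr { return err }
    format.mChannelsPerFrame = channels
    return AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1, &format, formatSize)
}
```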
1
0
463
Sep ’24
iOS 18 arm64 simulator disables audio output with unknown "AudioConverterService" error
Hello, I'm getting an unknown, never-before-seen error at application launch when running my iOS SpriteKit game on the iOS 18 arm64 simulator from Xcode 16.0 (16A242d) — AudioConverterOOP.cpp:847 Failed to prepare AudioConverterService: -302 This occurs on all iOS 18 simulator devices, between application(_:didFinishLaunchingWithOptions:) and the first applicationDidBecomeActive(_:) — the SKScene object may already have been initialized by SpriteKit, but the scene's didMove(to:) method hasn't been called yet. Also, note that the error message is emitted from a secondary (non-main) thread, obviously not created by the app. After the error occurs, no SKScene is able to play audio — this had never occurred on iOS versions prior to 18, neither on physical devices nor on the simulator. Has anyone seen anything like this on a physical device running 18? Unfortunately, at the moment I cannot test myself on an iOS 18 device, only on the simulator... Thank you, D.
2
0
390
Sep ’24
Silent output MP4 when using AVAssetReader
Hello, I am trying to create an MP4 by obtaining the content from another source MP4. The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter. I wanted to do partial tests: first, I placed only the video in the output MP4. Now, I am trying to place only the audio in the output MP4. I even managed to get the output MP4 to have the same length (in seconds) as the source MP4. But the problem is simple: the output MP4 is simply silent. Naturally, I want it to have audio. Below are two excerpts from the source code. Reading and writing. Note: The variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged. Snippet: let semaphore = DispatchSemaphore(value: 0) func writeVideo() { var audioReaderBuffers = [CMSampleBuffer]() // File directory url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4") guard let url = url else { return } try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true) if FileManager.default.fileExists(atPath: url.path()) { try FileManager.default.removeItem(at: url) } if let videoURL = videoURL { let videoAsset = AVAsset(url: videoURL) Task { let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first! let reader = try AVAssetReader(asset: videoAsset) let audioSettings = [ AVFormatIDKey: kAudioFormatLinearPCM, AVSampleRateKey: 44100, AVNumberOfChannelsKey: 2 ] as [String : Any] let audioOutputTest = try await audioTrack.getAudioSettings() let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings) reader.add(readerAudioOutput) reader.startReading() while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() { audioReaderBuffers.append(sampleBuffer) } semaphore.signal() } } semaphore.wait() let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0]) let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4) assetWriter.add(audioInput) assetWriter.startWriting() assetWriter.startSession(atSourceTime: .zero) for (index, buffer) in audioReaderBuffers.enumerated() { while !audioInput.isReadyForMoreMediaData { usleep(1000) } audioInput.append(buffer) } assetWriter.finishWriting { switch assetWriter.status { case .completed: print("Operation completed successfully: \(url.absoluteString)") case .failed: if let error = assetWriter.error { print("Error description: \(error.localizedDescription)") } else { print("Error not found.") } default: print("Error not found.") } } } Below is the createAudioInput method: func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput { let audioSettings = [ AVFormatIDKey: kAudioFormatMPEG4AAC, AVSampleRateKey: 48000, AVEncoderBitRatePerChannelKey: 64000, AVNumberOfChannelsKey: 1 ] as [String : Any] let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: sampleBuffer.formatDescription) audioInput.expectsMediaDataInRealTime = false return audioInput } I await your help, please.
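Two details in this flow commonly produce a silent track: the writer session is anchored at .zero while the first audio buffer's presentation timestamp is usually non-zero, and the input is never marked as finished before finishWriting. A minimal sketch of the writing half with those two changes, assuming the buffers were read as in the question (the encoder settings are illustrative, not the ones the app must use):

```swift
import AVFoundation
import CoreMedia

// Sketch: anchor the session at the first sample's timestamp, append, then mark the
// input finished before finishing the writer.
func writeAudio(buffers: [CMSampleBuffer], to url: URL) throws {
    guard let first = buffers.first else { return }

    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2
    ]
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    input.expectsMediaDataInRealTime = false
    writer.add(input)

    writer.startWriting()
    // Key point: audio tracks rarely start exactly at zero, so use the real timestamp.
    writer.startSession(atSourceTime: first.presentationTimeStamp)

    for buffer in buffers {
        while !input.isReadyForMoreMediaData { usleep(1000) }
        input.append(buffer)
    }
    input.markAsFinished()   // without this the track can end up empty
    writer.finishWriting {
        print("status:", writer.status.rawValue, writer.error ?? "no error")
    }
}
```

It may also be worth matching the channel count between the reader settings (2 channels) and the writer settings (1 channel in the original createAudioInput), and dropping the sourceFormatHint when outputSettings are provided.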
0
0
270
Sep ’24
tvOS 18.0 update harmed passthrough of multichannel audio for previously compatible hardware
tvOS 18 doesn't provide passthrough of multichannel audio for streaming apps offering content where it is promoted as available. This is true for devices on which the functionality existed before the 18.0 tvOS update. What's more, the 18.1 Public Beta did not provide a resolution for the issue. All streaming apps appear to be affected. Notably, Home Sharing does not appear to be affected and continues to provide multichannel audio as it did before the 18.0 update.
1
2
419
Sep ’24
Using non-modular libraries in an audio-unit
I've got a bunch of Audio Units I've been developing over the last few months - in Xcode 14 based on the Audio Unit template that ships in Xcode. The DSP heavy-lifting is all done in Rust libraries, which I build for Intel and Apple Silicon, merge using lipo and build XCFrameworks from. There are several of these - one with the DSP code, and several others used from the UI - a mix of SwiftUI and Cocoa. Getting the integration of Rust, Objective C, C++ and Swift working and automated took a few weeks (my first foray into C++ since the 1990s), but has been solid, reliable and working well - everything loads in Logic Pro and Garage Band and works. But now I'm attempting the (woefully underdocumented) process of making 13 audio unit projects able to be loaded in-process - which means moving all of the actual code into a Framework. And therein lies the problem: None of the xcframeworks are modular, which leads to the dreaded "Include of non-modular header inside framework module". Imported directly into the app extension project with "Allow Non-modular Includes in Framework Modules" works fine - but in a framework this seems to be verboten. So, the obvious solution would be to add a module map to each xcframework, and then, poof, they're modular. BUT... due to a peculiar limitation of Xcode's build system I've spent days searching for a workaround for, it is only possible to have ONE xcframework containing a module.modulemap file in any project. More than that and xcodebuild will try to clobber them with each other and the build will fail. And there appears to be NO WAY to name the file anything other than module.modulemap inside an xcframework and have it be detected. So I cannot modularize my frameworks, because there is more than one of them. How the heck does one work around this? A custom module map file somewhere (that the build should find and understand applies to four xcframeworks - how?)? Something else? I've seen one dreadful workaround - https://medium.com/@florentmorin/integrating-swift-framework-with-non-modular-libraries-d18098049e18 - given that I'm generating a lot of the C and Objective C code for the audio in Rust, I suppose I could write a tool that parses the header files and generates Objective C code that imports each framework and declares one method for every single Rust call. But it seems to me there has to be a correct way to do this without jumping through such hoops. Thoughts?
2
0
250
Sep ’24
Implementing Enhanced Dialogue on tvOS 18
How does a third-party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18? If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
0
0
294
Sep ’24
MIDIFlushOutput does not cancel events scheduled over network
Calling MIDIFlushOutput on a network endpoint is not cancelling events scheduled with future timestamps -- they continue to send. e.g. func send(eventLists: [MIDIEventList]) { let outputPortRef = ... let networkDestination = ... for var eventList in eventLists { MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList) } } ... MIDIFlushOutput(networkDestination.objectRef) I'm seeing that MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI Network connection?
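If the network session really does ignore MIDIFlushOutput, one hedged workaround is to keep future-dated events on the app side and only hand them to CoreMIDI at (or near) their send time, so cancellation stays under the app's control; the class below is a hypothetical sketch of that idea, at the cost of CoreMIDI's sample-accurate scheduling:

```swift
import CoreMIDI
import Foundation

// Sketch: schedule sends from cancellable work items instead of time-stamping them into
// the endpoint's output queue. Assumes each event list fits in the fixed-size
// MIDIEventList struct (a single packet).
final class CancellableMIDIScheduler {
    private var pending: [DispatchWorkItem] = []

    func schedule(eventList: MIDIEventList, after delay: TimeInterval,
                  port: MIDIPortRef, destination: MIDIEndpointRef) {
        var list = eventList
        let item = DispatchWorkItem {
            MIDISendEventList(port, destination, &list)
        }
        pending.append(item)
        DispatchQueue.global().asyncAfter(deadline: .now() + delay, execute: item)
    }

    func cancelAll() {
        pending.forEach { $0.cancel() }
        pending.removeAll()
    }
}
```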
2
2
337
Aug ’24
How best to handle AirPods audio glitches in Game Mode?
Hello! The new lower latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency. My use case is macOS where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in game mode, but I understand not wanting to give developers that control. Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens. BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs. And here's a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Is there any additional technical details about how this latency reduction works, or exactly how much of a reduction is achieved (or said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
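For the gap detection itself, here is a minimal sketch of the mSampleTime check the question describes, assuming a plain C-style AURenderCallback (the threshold and the ramp length are arbitrary values, and the actual gain ramp would be applied where ioData is filled):

```swift
import AudioToolbox

// Globals rather than captures, since a C-convention render callback cannot capture context.
var expectedSampleTime: Float64 = -1
var maskGainRampFrames = 0

let renderCallback: AURenderCallback = { _, _, inTimeStamp, _, inNumberFrames, _ in
    let now = inTimeStamp.pointee.mSampleTime
    if expectedSampleTime >= 0, abs(now - expectedSampleTime) > 1 {
        // Discontinuity detected: fade the next ~50 ms (at 48 kHz) instead of cutting in hard.
        maskGainRampFrames = 2400
    }
    expectedSampleTime = now + Float64(inNumberFrames)
    return noErr
}
```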
1
0
360
Sep ’24
Song releaseDate always nil
I am fetching playlist songs from the user's library and also need the releaseDate (or year) of each song for my use case. However, the releaseDate is always nil since I upgraded to Sequoia. I am pretty sure this was working before the upgrade, but I couldn't find any documentation on changes related to this. Furthermore, I noticed the IDs now seem to be the catalog IDs instead of the global ones like i.PkdZbQXsPJ4DX04. Here's, in a nutshell, what I am doing: func fetchSongs(playlist: Playlist) async throws { let detailedPlaylist = try await playlist.with([.tracks]) var currentTracks: MusicItemCollection<Track>? = detailedPlaylist.tracks repeat { for track in currentTracks! { guard case .song(let song) = track else { print("This is not a song") continue } print(song.releaseDate) } currentTracks = try await currentTracks?.nextBatch() } while currentTracks != nil }
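As a possible workaround (assuming the library item simply omits the property), the catalog version of the same song can be fetched by ID and its releaseDate read from there; a minimal sketch:

```swift
import MusicKit

// Fall back to the catalog item when the library song has no releaseDate.
func releaseDate(for song: Song) async throws -> Date? {
    if let date = song.releaseDate { return date }
    let request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: song.id)
    return try await request.response().items.first?.releaseDate
}
```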
1
0
198
Sep ’24
CarPlay audio lost after iOS 18

iPhone 13 mini updated to iOS 18. CarPlay is wired on my 2021 RAM Laramie. After the update, premium audio is lost and I can only hear low-quality audio. When I manually change to Bluetooth instead of USB in the car, the audio comes out in speaker mode on my phone and not on the truck.
2
2
249
Sep ’24