Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


Toggling AVMusicTrack isMuted
Hi! I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents. If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting isMuted back to false, the events only trigger on the next pass of the loop, not instantly. Is this intended behaviour, and is there some way to get the events to trigger immediately after toggling isMuted back to false?
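For concreteness, the toggle in question looks roughly like this (a minimal sketch; `sequencer` and the track index are illustrative, not from the original post):

```swift
import AVFAudio

// Assumes `sequencer` is an AVAudioSequencer whose tracks hold AVParameterEvents.
func setTrackMuted(_ sequencer: AVAudioSequencer, trackIndex: Int, muted: Bool) {
    guard sequencer.tracks.indices.contains(trackIndex) else { return }
    sequencer.tracks[trackIndex].isMuted = muted
    // Behaviour reported in the post: muting takes effect immediately,
    // but unmuting only delivers events on the next loop iteration.
}
```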
1 reply · 0 boosts · 223 views · Oct ’24
Audio Interruption Not Being Intercepted in AVAudioSession with Classification
Hi everyone, I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document]. The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered. The listener is invoked before starting sound analysis, as suggested in [2_Classifier]:

```swift
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
```

FYI, the one change I have made for classification is the following, which works fine in all cases:

```swift
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
```

I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions? Any suggestions or guidance would be greatly appreciated!

Platform: Swift 5, Xcode 16, iOS 18.
References: [1] Document, [2] Classifier, [3] Cam.
Best Regards
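For reference, the interruption hook Apple's guidelines describe is an observer on AVAudioSession.interruptionNotification; a minimal sketch (the function name mirrors the post, the body is my assumption):

```swift
import AVFAudio

func startListeningForAudioSessionInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let rawType = notification.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
        switch type {
        case .began:
            print("Interruption began")   // pause sound analysis here
        case .ended:
            print("Interruption ended")   // optionally resume if .shouldResume is set
        @unknown default:
            break
        }
    }
}
```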
1 reply · 0 boosts · 223 views · Oct ’24
iOS 18 Apple CarPlay Audio Error
After updating to the iOS 18.1 beta, I have a lot of issues with Apple CarPlay, as follows. Audio quality is really bad (it plays on the voice channel instead of the media channel). Sometimes it plays through the phone speakers even though CarPlay is connected via cable. It does not respond well to the car's steering-wheel controls, such as volume up/down and the skip button. After a phone call, it goes back to the original audio quality, but when my phone screen locks, it goes bad again. I hope these problems are fixed soon, as I love to play music when I drive around.
1 reply · 0 boosts · 242 views · Oct ’24
SoundRecognition causes Input/Output callbacks to have varying Buffer sizes and introduces Glitching
Hello, we have noticed an issue with Sound Recognition that causes glitching with our AudioUnit setup in Smule. Input and output frame sizes are inconsistent, and the input frame size does not match [AVAudioSession sharedInstance].IOBufferDuration. My best guess is that Sound Recognition influences the input frame size but not the output frame size. To reproduce, use the example app here: https://github.com/MarkoGill/SoundRecognitionBug

Hardware/OS:
iPhone 14 Pro on iOS 18 → experiences the problem
iPhone 11 on iOS 18 → experiences the problem
iPhone 15 on iOS 18 → does not experience the problem

Reproduction steps:
1. Enable Sound Recognition (Settings > Accessibility > Sound Recognition > On).
2. Enable a sound for detection (Sounds > Dog > On).
3. Open the example app with a headset (it routes input to output).
4. Notice that glitching occurs.
5. Check the logs: record and playback buffer sizes vary.

Example log:

```
AU input sample rate: 48000.000000
AU output sample rate: 48000.000000
hardware sample rate: 48000.000000
hardware buffer size: 1104.000000
updated record frame counts: 1024
updated playback frame counts: 1104
```

Note: if you disable Sound Recognition and restart the app, playback behaves correctly.
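To make the mismatch visible, a tap on the input node can compare the session's nominal buffer size against the frames actually delivered (a minimal sketch, not taken from the linked repro app):

```swift
import AVFAudio

let session = AVAudioSession.sharedInstance()
let engine = AVAudioEngine()
let input = engine.inputNode

// Nominal frames per callback implied by the session configuration.
let expectedFrames = session.sampleRate * session.ioBufferDuration

input.installTap(onBus: 0, bufferSize: 4096, format: input.outputFormat(forBus: 0)) { buffer, _ in
    // With Sound Recognition enabled, frameLength reportedly diverges
    // from the session's nominal buffer size.
    print("expected ~\(expectedFrames) frames, got \(buffer.frameLength)")
}
try engine.start()
```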
3 replies · 1 boost · 510 views · Oct ’24
Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).

The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:

```
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
```

It is easy to reproduce. The problem occurs even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings from an AVOutputSettingsAssistant preset. I’m running macOS 15.0 (24A335). I’ve filed a feedback including a demo project → FB15333298 🎟️

I would greatly appreciate any help! Have a great day, Martin
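For context, the pipeline described has roughly this shape (a hedged sketch under my own assumptions; device selection, preset choice, and writer handling are simplified and not the poster's actual code):

```swift
import AVFoundation

// Capture from the default audio input, write AAC with AVAssetWriter.
final class Recorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var writer: AVAssetWriter?
    var writerInput: AVAssetWriterInput?

    func start(to url: URL) throws {
        guard let mic = AVCaptureDevice.default(for: .audio) else { return }
        session.addInput(try AVCaptureDeviceInput(device: mic))

        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.queue"))
        session.addOutput(output)

        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        // Audio settings taken from a preset, as the post describes.
        let settings = AVOutputSettingsAssistant(preset: .preset1280x720)?.audioSettings
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        self.writer = writer
        self.writerInput = input
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if writer?.status == .unknown {
            writer?.startWriting()
            writer?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if writerInput?.isReadyForMoreMediaData == true {
            writerInput?.append(sampleBuffer)
        }
    }
}
```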
5 replies · 0 boosts · 339 views · Oct ’24
AVAssetWriterInput -- inserting sample buffers with pauses in between not working
Hi, I'm trying to insert CMSampleBuffers into an AVAssetWriterInput that has been configured with expectsMediaDataInRealTime = false, with pauses. That is, I insert fixed-length audio at specific (increasing and non-overlapping) time points with large gaps in between, e.g., 5 seconds of audio at t=3.0, 5 seconds of audio at t=12.0, etc. The first audio sample plays at t=3 in the final output video as expected, but then all the other samples are bunched up immediately after it instead of being scheduled at the correct time. Below is my code. I'm just loading the asset and then readjusting its timestamps to be correct in the absolute timeline. Why do they not get scheduled correctly when the timestamps and durations are definitely correct and non-overlapping?

```swift
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == outputSize else { return }
    let frameTime = CMTimeMake(value: frameCount, timescale: frameRate)
    if videoInput?.isReadyForMoreMediaData == true {
        pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        frameCount += 1
        currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    defer {
        try? FileManager.default.removeItem(at: tempURL)
    }
    try audioData.write(to: tempURL)

    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!

    let audioReader = try AVAssetReader(asset: asset)
    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
    audioReader.add(audioReaderOutput)

    guard audioReader.startReading() else {
        throw NSError(domain: "AudioReaderError", code: 0,
                      userInfo: [NSLocalizedDescriptionKey: "Failed to start reading audio"])
    }

    // Capture the current video time when the method is called.
    let baseInsertionTime = currentTime.convertScale(duration.timescale, method: .default)
    print("Adding audio clip at \(baseInsertionTime.seconds) seconds, duration: \(duration.seconds) seconds")

    var audioTime = CMTime.zero
    var totalDuration: Double = 0

    while let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
        let bufferDuration = CMSampleBufferGetDuration(sampleBuffer)
        let adjustedBuffer = adjustTimeStamp(of: sampleBuffer, by: baseInsertionTime)

        while !audioInput!.isReadyForMoreMediaData {
            try await Task.sleep(nanoseconds: 100_000_000) // 0.1 second
        }
        audioInput!.append(adjustedBuffer)

        print("  Adjusted time: \(adjustedBuffer.presentationTimeStamp.seconds)")
        audioTime = CMTimeAdd(audioTime, bufferDuration)
        totalDuration += bufferDuration.seconds
    }

    print("Finished adding audio clip. Last sample at: \(CMTimeAdd(baseInsertionTime, audioTime).seconds) seconds")
    print("  totalDuration=\(totalDuration)")
}

private func adjustTimeStamp(of sampleBuffer: CMSampleBuffer, by timeOffset: CMTime) -> CMSampleBuffer {
    var count: CMItemCount = 0
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: nil, entriesNeededOut: &count)

    var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &timingInfo, entriesNeededOut: nil)

    for i in 0..<count {
        timingInfo[i].presentationTimeStamp = CMTimeAdd(timingInfo[i].presentationTimeStamp, timeOffset)
        if timingInfo[i].decodeTimeStamp != .invalid {
            timingInfo[i].decodeTimeStamp = CMTimeAdd(timingInfo[i].decodeTimeStamp, timeOffset)
        } else {
            timingInfo[i].decodeTimeStamp = timingInfo[i].presentationTimeStamp
        }
    }

    var adjustedBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer,
                                          sampleTimingEntryCount: count, sampleTimingArray: &timingInfo,
                                          sampleBufferOut: &adjustedBuffer)
    return adjustedBuffer!
}
```
0 replies · 0 boosts · 226 views · Oct ’24
Now Playing info error with CarPlay on iOS 18
Our app, Universalis (Apple ID 284942719), plays audio successfully on all versions of iOS up to and including iOS 17. It uses the old MediaPlayer interface because it targets versions all the way down to iOS 12.

On iOS 18, it plays audio, but CarPlay fails to show the Now Playing screen. Instead, a message box pops up in CarPlay saying "There was a problem loading this content", with an OK button. Nevertheless, the audio plays correctly. This has been reported in the wild by a user of iOS 18 with CarPlay. I am also able to reproduce it locally, running the app in Xcode with the CarPlay Simulator, with an iPhone using iOS 18.0 or iOS 18.1. Earlier versions work correctly.

Looking at the console log in CarPlay, the following error message appears about 10 seconds before the message box pops up:

```
MSVEntitlementUtilities - Process Universalis PID[1173] - Group: (null) - Entitlement: com.apple.mediaremote.external-artwork-validation - Entitled: NO - Error: (null)
```

The message has an orange background, which appears to mean that it does not come directly from NSLog in the app. The message appears immediately after the request handler of MPMediaItemArtwork has been called requesting a 128 x 128 image and has successfully returned a 128 x 128 UIImage object.

This has been reported through Feedback Assistant: bug report ID FB15343941. How can we work around this error?
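For reference, the artwork handler being described has roughly this shape in the old MediaPlayer interface (a minimal sketch; `publishNowPlaying` and `coverImage` are illustrative names, not the app's code):

```swift
import MediaPlayer
import UIKit

// Assumes `coverImage` is the UIImage exposed to CarPlay / Now Playing.
func publishNowPlaying(title: String, coverImage: UIImage) {
    let artwork = MPMediaItemArtwork(boundsSize: coverImage.size) { requestedSize in
        // Per the post, CarPlay on iOS 18 requests 128 x 128 here just before the error.
        return coverImage
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtwork: artwork
    ]
}
```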
3 replies · 0 boosts · 383 views · Oct ’24
Why Aren't All Songs Being Added to the Queue?
Hi, I've recently been working with the Apple Music API and have had success in loading all the playlists on my account, loading songs from each playlist, and adding songs to the ApplicationMusicPlayer.shared.queue. The problem I'm running into is that not all songs from the playlist are being added to the queue, despite confirming that all the songs are loaded, based on the PlaybackView.swift I'm about to share with you. I would also like answers to some other underlying questions, if possible, and I am open to any other suggestions. In this scenario, we're also assuming isShuffled is true every time.

1. How can I determine when a song has ended?
2. How can I get the album title information?
3. How can I get the current song title, album information, and artist information without pressing play? I can only seem to update the screen when I press play, meaning ApplicationMusicPlayer.shared.play() is being called.
4. How do I get the endTime of the song? I believe it should be ApplicationMusicPlayer.shared.queue.currentEntry.endTime, but this doesn't seem to work.

```swift
//
//  PlayBackView.swift
//
//  Created by Justin on 8/16/24.
//

import SwiftUI
import MusicKit
import Foundation

enum PlayState {
    case play
    case pause
}

struct PlayBackView: View {
    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var isFirstPlay = true

    private let player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Song Title Not Found")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // How do I get AlbumTitle from here??

                    // Artist Name
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            Spacer()

            // Progress View
            // endTime doesn't work and not sure why.
            ProgressView(value: player.playbackTime,
                         total: player.queue.currentEntry?.endTime ?? 1.00)
                .progressViewStyle(.linear)
                .tint(.red.opacity(0.5))

            // Duration View
            HStack {
                Text(durationStr(from: player.playbackTime))
                    .font(.caption)
                Spacer()
                if let duration = player.queue.currentEntry?.endTime {
                    Text(durationStr(from: duration))
                        .font(.caption)
                }
            }

            Spacer()

            Button {
                Task {
                    do {
                        try await player.skipToNextEntry()
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            } label: {
                Label("", systemImage: "forward.fill")
                    .tint(.white)
            }

            Spacer()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        .onAppear {
            if isShuffled {
                songs = songs?.shuffled()
                if let songs, let firstSong = songs.first {
                    player.queue = .init(for: songs, startingAt: firstSong)
                    player.state.shuffleMode = .songs
                }
            }
        }
        .onDisappear {
            player.stop()
            player.queue = []
            player.playbackTime = .zero
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
            } else {
                playState = .play
                await playTrack()
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }

    private func durationStr(from duration: TimeInterval) -> String {
        let seconds = Int(duration)
        let minutes = seconds / 60
        let remainder = seconds % 60
        // Format the string to ensure two digits for the remainder (seconds)
        return String(format: "%d:%02d", minutes, remainder)
    }
}

//#Preview {
//    PlayBackView()
//}
```
2 replies · 0 boosts · 314 views · Oct ’24
CMSampleBuffer: audio PCM to MP4 AAC
Hello, as explained in this link, AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format. I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion. Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC?

Example:

```swift
extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}
```

I've tried multiple times but without success.

Software: iOS 18.1
Xcode: 16.0

Thank you!
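One direction worth sketching (not a verified drop-in answer: it converts at the AVAudioPCMBuffer level via AVAudioConverter, the input format is an assumption, and repackaging the result as a CMSampleBuffer is a further step left out here):

```swift
import AVFAudio

// A minimal sketch: PCM -> AAC using AVAudioConverter.
func convertToAAC(_ pcm: AVAudioPCMBuffer) -> AVAudioCompressedBuffer? {
    guard let aacFormat = AVAudioFormat(settings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2
    ]), let converter = AVAudioConverter(from: pcm.format, to: aacFormat) else { return nil }

    let out = AVAudioCompressedBuffer(format: aacFormat,
                                      packetCapacity: 8,
                                      maximumPacketSize: converter.maximumOutputPacketSize)
    var fed = false
    var error: NSError?
    let status = converter.convert(to: out, error: &error) { _, outStatus in
        if fed {
            outStatus.pointee = .noDataNow // only one buffer to offer in this sketch
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return pcm
    }
    return status == .error ? nil : out
}
```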
1 reply · 0 boosts · 283 views · Oct ’24
AVAudioEngine change playback rate in real time (AVAudioUnitVarispeed is non real time)
I am using AVAudioEngine to play back samples in an iOS game. I would like to change the playback rate of a sample in real time. When using AVAudioUnitVarispeed to change the playback rate, it creates stutters in the game, as it isn't processed in real time (as stated here: AVAudioUnitTimeEffect). The other option I found is to use an AVAudioEnvironmentNode and change the rate of the AVAudioPlayerNode. That works without creating stutters but limits the valid values for the rate to 0.5 through 2.0 (I need rates higher than 2.0). See here: AVAudio3DMixing. Are there any other ways to play back a sample with rate control in real time?
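For reference, the environment-node approach described above looks roughly like this (a minimal sketch; the rate stays clamped to 0.5 through 2.0, which is exactly the limitation in question):

```swift
import AVFAudio

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let environment = AVAudioEnvironmentNode()

engine.attach(player)
engine.attach(environment)
// Routing through AVAudioEnvironmentNode exposes AVAudio3DMixing on the player.
engine.connect(player, to: environment, format: nil)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

player.rate = 1.5 // real-time rate change, but the valid range is only 0.5...2.0
```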
0 replies · 0 boosts · 164 views · Oct ’24
AudioConverterFillComplexBuffer not working for (E)AC3 in tvOS 18
Since upgrading to tvOS 18, AudioConverterFillComplexBuffer isn't working for me when converting a stream in these formats ((E)AC-3). It does work in decoding AAC, however. https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer?language=objc

I pass a valid ioOutputDataPacketSize in, but it always comes out as zero. Has anyone else observed this too? I wonder if this is related to the issue being discussed widely about 5.1 sound being broken for many people after upgrading to tvOS 18: https://discussions.apple.com/thread/255769102?login=true&sortBy=rank

EDIT: Further information: the callback gets called once, asking for 1 packet (which is OK). I give it one packet and return noErr. However, after this, the callback is never invoked again. Must be a bug?

EDIT 2: The same code continues to work correctly on macOS when decoding the same audio stream.
4 replies · 2 boosts · 295 views · Oct ’24
Apple Music Bug
Since the iOS 18 update, there's been a bug that always occurs when listening to music through Apple Music: when music is playing and the iPhone enters or exits Standby mode, the music pauses by itself for one second. The conditions are otherwise identical to before the update, when this problem didn't exist.
2 replies · 0 boosts · 285 views · Oct ’24
AVPlayer Error in iOS 18.0
When attempting to play a video using AVPlayer on iOS 18.0, I am encountering an error that does not occur on versions earlier than 18.0. Could you please advise what might be causing this issue?

Error codes:

```
Error Domain=AVFoundationErrorDomain Code=-11828
Error Domain=NSOSStatusErrorDomain Code=-12847 "(null)"
```

This is the URL information I retrieved using the curl command:

```
HTTP/1.1 200
Content-Disposition: inline;filename="sample.mp4"
Accept-Ranges: bytes
ETag: sample.mp4
Last-Modified: Tue, 20 Jan 1970 23:52:10 GMT
Expires: Mon, 07 Oct 2024 09:15:49 GMT
Content-Range: bytes 0-987561/987561
Content-Type: application/octet-stream
Content-Length: 987561
Date: Mon, 30 Sep 2024 09:15:49 GMT
```
2 replies · 0 boosts · 268 views · Oct ’24
Call cannot be disconnected due to delay observed in AudioOutputUnitStop API
We have a VoIP calling application that releases resources at the end of a call. When the AudioOutputUnitStop API is invoked, it sometimes takes up to 700 ms to return. If we comment out that API call as a test, then the AudioUnitUninitialize API takes up to 700 ms instead. Once the cleanup is done, as part of the call flow, the application sends a BYE SIP message. Hence, in cases where the API takes more than 200 ms, the BYE message is sent with that much delay and gets blocked by the server's DDoS settings. (The DDoS timer starts as soon as the UDP socket is disconnected from the client and times out within 200 ms; a BYE request arriving after that gets blocked.) We need to understand why a delay of more than 200 ms is sometimes observed, while in other cases it takes less than 50 ms.
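A simple way to quantify the variance described (a minimal sketch; `outputUnit` stands in for the app's actual output AudioUnit):

```swift
import AudioToolbox

// Measure how long AudioOutputUnitStop blocks before the SIP BYE is sent.
func stopAndMeasure(_ outputUnit: AudioUnit) {
    let start = CFAbsoluteTimeGetCurrent()
    let status = AudioOutputUnitStop(outputUnit)
    let elapsedMs = (CFAbsoluteTimeGetCurrent() - start) * 1000
    print("AudioOutputUnitStop returned \(status) after \(elapsedMs) ms")
}
```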
2 replies · 0 boosts · 220 views · Sep ’24
CarPlay music issues iOS 18.1 (22B5054e)
When my CarPlay connects and tries to play music, it plays through the phone's speakers. If I make a phone call and then hang up, it pushes the sound back to the vehicle's stereo speakers. Does anyone have a fix for this? It is very annoying. Sometimes it just switches to the phone speakers again and I have to repeat the whole process.
2 replies · 1 boost · 302 views · Sep ’24
A short starter guide for AVAudioEngine and AVAudioSession on iOS
AVAudioEngine and AVAudioSession

Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**). Why? So that those who run into this issue have the possibility of finding this post through search engines! This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details, but if you're trying to figure out how to fix a crash, you may find a common way to fix it in this post.

Is it possible to use AVAudioEngine and AVAudioSession together? The answer is yes, but you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. Something that helped me fix a crash was this example project by Apple, which uses both AVAudioEngine and AVAudioSession: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing

How can I fix AVAudioEngineImpl::Initialize(NSError**)? It depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either. I will mention the common cases I encountered.

inputNode (https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode): You apparently need an inputNode, and you need to access it, or else there won't be one. If there isn't one, AVAudioEngine.start will most likely crash. The audio engine creates a singleton on demand when this property is first accessed; accessing it early prevented this common issue for me.

.prepare deallocates and can cause a crash if you restart your audio engine: Another issue I faced was handling .prepare wrongly. You don't strictly need .prepare, but if you use installTap or similar, I think you do. One thing to note: a previously initialized inputNode can be gone after using .prepare. You have to ensure you access AVAudioEngine.inputNode again (or whatever node you need) before calling .start(). The voice-processing project does this by wrapping AVAudioEngine in a managing controller with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.

AVAudioSession's setCategory: You have to experiment with it. The crashes can be very weird; sometimes your app will crash only once, and then only after you reinstall or relaunch it. You can in fact use .setActive and .setCategory with AVAudioEngine. Just do not call .setActive(false) before you've stopped the engine, as it will fail. Sometimes I ran into issues with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue.

```swift
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
```

Experiment with it, but .multiRoute and .mixWithOthers allowed me to use AVAudioEngine to make a test recording, and I can even switch the data sources and polar patterns without any issues. Sometimes you can get away without calling .setActive at all; I am not sure whether AVAudioEngine does it automatically.

Short summary:

If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (This can differ from case to case.)
Only call .setActive(false) after you have used .stop. Otherwise, I believe it has no chance of stopping the engine.
AVAudioSession's setCategory is important. Try .multiRoute or experiment with all the modes. If you manage to solve your crash, you'll indeed be able to change the data sources, polar patterns and more.
Check isRunning before using .start; this will save you from another crash. If you call .start while the engine is already running, I think try/catch won't save you here; you have to ensure you're not starting it twice.

I hope this short breakdown helps you resolve your crash. A sketch condensing these points follows below. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes; I have yet to figure out how to solve those. I have a lot of trouble putting my testing app on my iPhone, so I am sorry if this guide didn't cover every detail.

A HUGE tip from me: check the documentation. For example, when I read the documentation for inputNode, I learned why my app crashed: I had never accessed and initialized one. The developer documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you access if you believe it causes issues. I also recommend finding example projects like the voice-processing one, as there aren't any code examples in the documentation.
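Pulling the points above together, a minimal setup-order sketch (my own condensation of the advice in this post, not code from it):

```swift
import AVFAudio

let engine = AVAudioEngine()

func startEngine() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute, mode: .default,
                            options: [.defaultToSpeaker, .mixWithOthers])
    try session.setActive(true)

    // Access inputNode BEFORE prepare/start so the singleton exists (see above).
    let input = engine.inputNode
    input.installTap(onBus: 0, bufferSize: 1024,
                     format: input.outputFormat(forBus: 0)) { _, _ in }

    engine.prepare()
    guard !engine.isRunning else { return } // never start twice
    try engine.start()
}

func stopEngine() throws {
    engine.inputNode.removeTap(onBus: 0)
    engine.stop()
    // Deactivate only AFTER the engine has stopped (see above).
    try AVAudioSession.sharedInstance().setActive(false)
}
```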
0 replies · 0 boosts · 307 views · Sep ’24
Bluetooth speaker makes installTap stop calling back after the first few seconds
If I have a Bluetooth speaker connected and installTap called on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route change or notification handler called in between.

```swift
engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
    onBus: 0,
    bufferSize: 4096,
    format: format
) { buffer, _ in
    guard let channelData = buffer.floatChannelData else { return }
    // This callback stops firing after some time.
}
```

Not sure if this is expected, but I noticed some other applications that seem to work fine. If I remove the Bluetooth device, my input works fine. Also, I have no issues with output on the speaker.
2 replies · 0 boosts · 251 views · Sep ’24