Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post · Replies · Boosts · Views · Activity

ApplicationMusicPlayer.shared doesn't play explicit content on MacCatalyst
I am developing an app that runs on iOS/iPadOS and on macOS via Mac Catalyst. It uses ApplicationMusicPlayer.shared to play music from Apple Music. However, on the Mac, songs with contentRating == .explicit do not play. I get the following error (translated here from the German localization):

Failed to prepareToPlay error=<MPMusicPlayerControllerErrorDomain.6 "Failed to prepare to play" {}>
Error playing item: The operation could not be completed. (MPMusicPlayerControllerErrorDomain error 6.)

On iOS/iPadOS these songs play correctly. What can I do to also play explicit songs using Mac Catalyst? Thanks, Dirk
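Until the underlying restriction is understood, one hedged fallback (not from the original post) is to detect Mac Catalyst at runtime and keep explicit items out of the queue instead of letting playback fail. The sketch below assumes MusicKit Song values; the helper name and the filtering policy are illustrative.

import MusicKit

/// Returns the subset of songs expected to play on the current platform.
/// On Mac Catalyst, explicit songs currently fail to prepare, so they are filtered out here.
func playableSongs(from songs: [Song]) -> [Song] {
    #if targetEnvironment(macCatalyst)
    return songs.filter { $0.contentRating != .explicit }
    #else
    return songs
    #endif
}

// Usage sketch:
// let player = ApplicationMusicPlayer.shared
// player.queue = ApplicationMusicPlayer.Queue(for: playableSongs(from: songs))
// try await player.play()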
Replies: 1 · Boosts: 0 · Views: 327 · Aug ’24
ApplicationMusicPlayer's endSeeking() Fails to Stop Fast Forward - Seeking Solutions
I've encountered a critical issue while developing a music player app using SwiftUI and MusicKit: the endSeeking() method of ApplicationMusicPlayer fails to stop the fast-forward operation as expected, and the problem persists across multiple devices and iOS versions.

Development environment: Xcode 16 Beta 6, macOS 15.0 Beta 7 (24A5327a)
Affected devices: iPhone 11 Pro Max (iOS 17.6), iPhone SE 3 (iOS 18.0 Beta 7)

Here's the relevant code snippet:

Image(systemName: "forward.end.circle")
    .foregroundStyle(.accent)
    .gesture(
        TapGesture()
            .onEnded { _ in
                vm.nextTrack()
            }
    )
    .simultaneousGesture(
        LongPressGesture(minimumDuration: 0.5)
            .onChanged { isPressing in
                if isPressing {
                    vm.player.beginSeekingForward()
                }
            }
            .onEnded { _ in
                vm.player.endSeeking()
            }
    )

The issue manifests when the long press ends: despite invoking the endSeeking() method, the fast-forward operation persists. To troubleshoot, I've taken the following steps:
1. Confirmed that vm.player is set to ApplicationMusicPlayer.shared.
2. Attempted to combine endSeeking() with beginSeekingForward(), as per the documentation guidelines.
Despite these efforts, the problem persists across all tested devices and OS versions. This leads me to two critical questions: Has anyone else encountered a similar issue? Could this potentially be an undocumented bug in the latest MusicKit implementation?
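Until the cause is clear, a hedged workaround is to drive the seek yourself rather than relying on beginSeekingForward()/endSeeking(), so releasing the long press always stops it. The class name, step size, and interval below are illustrative, and stepping playbackTime in fixed increments is less smooth than native seeking.

import Foundation
import MusicKit

/// Fallback seeker: advances playbackTime on a timer while the long press is held.
final class ManualSeeker {
    private var timer: Timer?
    private let player = ApplicationMusicPlayer.shared

    func startSeekingForward(step: TimeInterval = 5, interval: TimeInterval = 0.25) {
        stopSeeking()
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            guard let self else { return }
            self.player.playbackTime += step   // jump ahead in fixed increments
        }
    }

    func stopSeeking() {
        timer?.invalidate()
        timer = nil
    }
}

// Call startSeekingForward() from LongPressGesture's onChanged and stopSeeking() from onEnded.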
Replies: 2 · Boosts: 1 · Views: 335 · Aug ’24
What methods in what Framework to separate an audio file into two files?
I'm having trouble using SFSpeechRecognizer & SFSpeechRecognitionTask to transcribe the words from an audio file. I found a solution on Stack Overflow that suggests separating the audio file into smaller segments. How would I do that programmatically in Swift for a macOS app Xcode project? I would prefer not to split the file into smaller files; I will submit another post with more information about that.
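For the question as asked, here is a minimal sketch of splitting one audio file into two using AVAudioFile from AVFoundation. It assumes a simple split at the midpoint; the chunk size, URLs, and output settings (copied from the source file) are illustrative.

import AVFoundation

/// Splits an audio file into two halves by copying PCM buffers into two new files.
func splitAudioFile(at sourceURL: URL, firstHalfURL: URL, secondHalfURL: URL) throws {
    let source = try AVAudioFile(forReading: sourceURL)
    let format = source.processingFormat
    let totalFrames = AVAudioFrameCount(source.length)
    let splitFrame = totalFrames / 2
    let chunkSize: AVAudioFrameCount = 32_768

    func copyFrames(_ count: AVAudioFrameCount, to destinationURL: URL) throws {
        let destination = try AVAudioFile(forWriting: destinationURL, settings: source.fileFormat.settings)
        var remaining = count
        while remaining > 0 {
            let chunk = min(remaining, chunkSize)
            guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: chunk) else { break }
            try source.read(into: buffer, frameCount: chunk)
            if buffer.frameLength == 0 { break }      // reached end of file
            try destination.write(from: buffer)
            remaining -= buffer.frameLength
        }
    }

    try copyFrames(splitFrame, to: firstHalfURL)                 // first half
    try copyFrames(totalFrames - splitFrame, to: secondHalfURL)  // remainder (read continues where it left off)
}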
Replies: 3 · Boosts: 0 · Views: 452 · Aug ’24
iOS Audio Lockscreen Problem in PWA
Description
When running a PWA on iOS, playing audio from the lock screen works as expected until you leave the audio paused for 30 seconds. After this, the audio will cease to function until you return the PWA to the foreground.

Reproduction
1. In a PWA, create an HTML5 audio element.
2. Load an audio file into it.
3. Set navigator.mediaSession data and action handlers for play and pause.
4. Everything is in working order and your audio plays and pauses from the lock screen.
5. Pause your audio and wait for 30 seconds.
6. Now, press the play button. Your audio will no longer function.
At this point, the only way to get the audio to function again is to bring the PWA into the foreground. Once you do this, the audio will be in working order.

What is expected
In step 6, when you press the play button, the audio should play. The lock screen audio should not enter a non-functional state, or there should be some way to "wake up" the PWA.

Closing
If you follow these steps exactly on Android, you will see that the problem does not exist on those devices.
Replies: 2 · Boosts: 0 · Views: 428 · Aug ’24
AVAudioPlayerNode scheduleBuffer & Swift 6 Concurrency
We do custom audio buffering in our app. A very minimal version of the relevant code would look something like:

import AVFoundation

class LoopingBuffer {
    private var playerNode: AVAudioPlayerNode
    private var audioFile: AVAudioFile

    init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
        self.playerNode = playerNode
        self.audioFile = audioFile
    }

    func scheduleBuffer(_ frames: AVAudioFrameCount) async {
        let audioBuffer = AVAudioPCMBuffer(
            pcmFormat: audioFile.processingFormat,
            frameCapacity: frames
        )!
        try! audioFile.read(into: audioBuffer, frameCount: frames)
        await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
    }
}

We are in the process of migrating to Swift 6 concurrency and have moved a lot of our audio code into a global actor. So now we have something along the lines of:

import AVFoundation

@globalActor
public actor AudioActor: GlobalActor {
    public static let shared = AudioActor()
}

@AudioActor
class LoopingBuffer {
    private var playerNode: AVAudioPlayerNode
    private var audioFile: AVAudioFile

    init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
        self.playerNode = playerNode
        self.audioFile = audioFile
    }

    func scheduleBuffer(_ frames: AVAudioFrameCount) async {
        let audioBuffer = AVAudioPCMBuffer(
            pcmFormat: audioFile.processingFormat,
            frameCapacity: frames
        )!
        try! audioFile.read(into: audioBuffer, frameCount: frames)
        await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
    }
}

Unfortunately this now causes an error:

error: sending 'audioBuffer' risks causing data races
| `- note: sending global actor 'AudioActor'-isolated 'audioBuffer' to nonisolated instance method 'scheduleBuffer(_:completionCallbackType:)' risks causing data races between nonisolated and global actor 'AudioActor'-isolated uses

I understand the error; what I don't understand is how I can safely use this API. AVAudioPlayerNode is not marked as @MainActor, so it seems like it should be safe to schedule a buffer from a custom actor as long as we don't send it anywhere else. Is that right? AVAudioPCMBuffer is not Sendable, so I don't think it's possible to make this call site ever work from an isolated context. Even forgetting about the custom actor, if you instead annotate the class with @MainActor the same error is still present. I think the AVAudioPlayerNode.scheduleBuffer() function should have a sending annotation to make clear that the buffer can't be used after it's sent. I think that would make the compiler happy, but I'm not certain. Am I overlooking something, holding it wrong, or is this API just pretty much unusable in Swift 6? My current workaround is just to import AVFoundation with @preconcurrency, but it feels dirty and I am worried there may be a real issue here that I am missing in doing so.
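A narrower escape hatch than @preconcurrency (not from the post, and a manual promise rather than a compiler-verified one) is a small @unchecked Sendable box that carries the buffer across the isolation boundary. It only shifts responsibility to you: the buffer must not be touched again after scheduling. The type and method names below are illustrative.

import AVFoundation

/// Carries a non-Sendable value across an isolation boundary.
/// @unchecked Sendable means we guarantee exclusive access, not the compiler.
struct SendableBox<Value>: @unchecked Sendable {
    let value: Value
}

extension AVAudioPlayerNode {
    /// Schedules a buffer that the caller promises not to reuse after this call.
    func scheduleHandedOffBuffer(_ box: SendableBox<AVAudioPCMBuffer>,
                                 completionCallbackType type: AVAudioPlayerNodeCompletionCallbackType) async {
        await scheduleBuffer(box.value, completionCallbackType: type)
    }
}

// Inside the actor-isolated scheduleBuffer(_:) above, the call site would become:
// await playerNode.scheduleHandedOffBuffer(SendableBox(value: audioBuffer),
//                                          completionCallbackType: .dataConsumed)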
Replies: 3 · Boosts: 0 · Views: 490 · Aug ’24
Configuration issues with CMBlockBuffer for AAC Audio
I am trying to achieve AAC playback. I have stripped off the ADTS header using a function, and I am not being shown any errors by the Apple API, but I cannot hear any playback. My ASBD is shown below; my sample is definitely 44.1 kHz and AAC-LC. Here is the file for reference: https://dl.espressif.com/dl/audio/ff-16b-2c-44100hz.aac

Here are some relevant snippets of the code:

AudioStreamBasicDescription desc = {0};
desc.mSampleRate = 44100;                // Sample rate
desc.mFormatID = kAudioFormatMPEG4AAC;   // Format ID for AAC
desc.mChannelsPerFrame = 2;              // Stereo audio
desc.mFramesPerPacket = 1024;            // AAC typically uses 1024 frames per packet
desc.mBitsPerChannel = 0;
desc.mBytesPerPacket = 0;
desc.mBytesPerFrame = 0;

OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                 &desc,
                                                 inlayout_size, // 32, corresponding to stereo
                                                 inlayout_buf,
                                                 kAudioFormatMPEG4AAC,
                                                 nil,
                                                 nil,
                                                 &_fmtDesc);

const CMBlockBufferCustomBlockSource blockSource = {
    .version = kCMBlockBufferCustomBlockSourceVersion,
    .FreeBlock = customBlock_Free,
    .refCon = block,
};

OSStatus status;
CMSampleBufferRef sampleBuffer;
CMBlockBufferRef blockBuf;

status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                            (block->p_buffer),  // memoryBlock
                                            (block->i_buffer),  // blockLength
                                            kCFAllocatorNull,   // blockAllocator
                                            &blockSource,       // customBlockSource
                                            0,                  // offsetToData
                                            (block->i_buffer),  // dataLength
                                            0,                  // flags
                                            &blockBuf);

const CMSampleTimingInfo timeInfo = {
    .duration = kCMTimeInvalid,
    .presentationTimeStamp = CMTimeMake(_ptsSamples, _sampleRate),
    .decodeTimeStamp = kCMTimeInvalid,
};

status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                   blockBuf,        // dataBuffer
                                   _fmtDesc,        // formatDescription
                                   1024,            // numSamples
                                   1,               // numSampleTimingEntries
                                   &timeInfo,       // sampleTimingArray
                                   1,               // numSampleSizeEntries
                                   &_bytesPerFrame, // sampleSizeArray
                                   &sampleBuf);

The renderer then handles this sampleBuf, which is working correctly as I have tested it for other formats. I have verified the hex dump of p_buffer and it matches that of the .aac file with the ADTS header removed.
Here is an example of the output. Hex dump of p_buffer (which is being passed to CMBlockBufferCreateWithMemoryBlock):

4CFE1DE0: 21 1A 8F 20 63 E7 FF FF 11 72 A3 20 C5 E3 B7 E9
4CFE1DF0: 42 F5 3D 9A D1 77 D2 F0 9A 00 00 B2 32 53 84 8C
4CFE1E00: E8 24 ED DF 23 04 3D CF A6 51 A8 D2 8F EE B3 FB
4CFE1E10: F4 CC 17 F9 7C 8B 75 06 29 8D D6 95 98 78 9D 87
4CFE1E20: 9C B4 9D 8E 2B 6C D2 90 D7 E3 C4 37 05 97 85 C1
4CFE1E30: F7 5E 7F D8 F3 DD 20 B5 73 31 C5 EC 3D 6F AC 5E
4CFE1E40: 45 AF CC 38 0D 5B 98 F5 F9 3B 3E D7 C3 8E 1B 38
4CFE1E50: F8 F1 9A 6F 96 05 15 CE 39 D6 2B 06 60 33 8A C4
4CFE1E60: EE 4F 6B B3 C9 CF F2 BF 3F B1 96 69 B9 62 34 62
4CFE1E70: CD 41 1C 08 CF 80 5F A4 60 BD 45 36 AC 66 00 40
4CFE1E80: 42 F6 95 F4 89 8A A2 24 11 01 74 08 82 33 94 D1
4CFE1E90: 0B 24 51 4A 55 28 06 21 78 85 D4 B5 13 49 1D AA
4CFE1EA0: 44 02 32 E9 42 61 8C 59 4A 65 96 4D BC BC AE D2
4CFE1EB0: F1 D0 00 00 D4 A2 F8 87 A0 FD C8 93 87 59 A2 CB
4CFE1EC0: BE B3 AB 49 C6 37 60 2B 50 26 D3 0C 1D 29 45 81
4CFE1ED0: D9 4E 62 5E 29 8E 27 19 75 FB 62 0B 3B C0 B9 E6
4CFE1EE0: EB A0 3F B8 D5 7E 77 90 C1 E2 9C D9 4E 5B 82 ED
4CFE1EF0: CF BC 55 1C 55 1B F2 DE CC B2 13 25 CB ED F5 B5
4CFE1F00: 6E F9 EF 38 DE 8C C4 38 C2 60 CF DA F3 F2 1F 80
4CFE1F10: C5 23 0C 3E 57 31 0D 5E EB 63 58 1A 28 38 7B B2
4CFE1F20: 0B F3 5B 33 96 59 55 44 4A 09 55 73 EC 94 A0 F3
4CFE1F30: FC F4 70 F9 76 FB FF 8D AD 13 01 30 05 C0 90 01
4CFE1F40: B2 37 27 24 44 B9 F0 24 4E C5 D4 25 D6 F7 20 4D
4CFE1F50: 39 92 5D 31 71 5B 4A B2 A4 C1 59 D4 42 60 1C 00
4CFE1F60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

i_buffer: 383
CMBlockBufferIsEmpty check: 0
Block not null in wrapBuffer
CMBlockBufferCreateWithMemoryBlock status: 0

I did try multiple configurations; no log shows any errors, yet I cannot hear playback. Please help me identify what is wrong here. I have used this as a reference, which seems to be based on previous Apple documentation: https://github.com/UFOooX/iOSAACStreamPlayer
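Silent playback with no errors can have several causes, but one thing worth checking is whether the decoder ever receives the AAC AudioSpecificConfig (the "magic cookie"): in the snippet above, kAudioFormatMPEG4AAC appears to be passed in the position where CMAudioFormatDescriptionCreate expects magicCookieSize, and no cookie is supplied. Below is a Swift sketch of building the format description with a cookie; the two-byte config 0x12 0x10 assumes AAC-LC, 44.1 kHz, stereo, and the helper name is illustrative.

import CoreMedia
import AudioToolbox

/// Builds an AAC format description that carries an AudioSpecificConfig,
/// so the decoder knows the exact stream configuration.
func makeAACFormatDescription() -> CMAudioFormatDescription? {
    var asbd = AudioStreamBasicDescription()
    asbd.mSampleRate = 44100
    asbd.mFormatID = kAudioFormatMPEG4AAC
    asbd.mChannelsPerFrame = 2
    asbd.mFramesPerPacket = 1024

    // AudioSpecificConfig for AAC-LC / 44.1 kHz / stereo; derive from your own stream if it differs.
    let audioSpecificConfig: [UInt8] = [0x12, 0x10]

    var formatDescription: CMAudioFormatDescription?
    let status = audioSpecificConfig.withUnsafeBytes { cookie -> OSStatus in
        CMAudioFormatDescriptionCreate(
            allocator: kCFAllocatorDefault,
            asbd: &asbd,
            layoutSize: 0,
            layout: nil,                 // channel layout is optional when a cookie is present
            magicCookieSize: cookie.count,
            magicCookie: cookie.baseAddress,
            extensions: nil,
            formatDescriptionOut: &formatDescription)
    }
    return status == noErr ? formatDescription : nil
}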
Replies: 1 · Boosts: 0 · Views: 336 · Aug ’24
MIDIFlushOutput does not cancel events scheduled over network
Calling MIDIFlushOutput on a network endpoint is not cancelling events scheduled with future timestamps; they continue to send. For example:

func send(eventLists: [MIDIEventList]) {
    let outputPortRef = ...
    let networkDestination = ...
    for var eventList in eventLists {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
}
...
MIDIFlushOutput(networkDestination.objectRef)

I'm seeing that MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI Network connection?
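A hedged workaround while this behaves differently for network endpoints: keep future-dated events in an app-side scheduler and only hand them to Core MIDI at (or shortly before) their fire time, so cancelling is just a matter of clearing your own queue. The class name is illustrative, the send closure stands in for your existing MIDISendEventList call, and timing precision will be coarser than Core MIDI's own timestamps.

import Foundation

/// App-side scheduler: events destined for a network endpoint are held here and
/// dispatched when they are due, so they can still be cancelled.
final class PendingMIDIScheduler {
    private var pending: [UUID: DispatchWorkItem] = [:]
    private let queue = DispatchQueue(label: "midi.pending.scheduler")

    /// `send` should perform the immediate MIDISendEventList call for one event list.
    func schedule(after delay: TimeInterval, send: @escaping () -> Void) -> UUID {
        let id = UUID()
        let work = DispatchWorkItem { [weak self] in
            send()
            self?.queue.async { self?.pending[id] = nil }   // clean up after sending
        }
        queue.async { self.pending[id] = work }
        queue.asyncAfter(deadline: .now() + delay, execute: work)
        return id
    }

    /// Replacement for MIDIFlushOutput on the network endpoint: drop everything not yet sent.
    func cancelAll() {
        queue.async {
            self.pending.values.forEach { $0.cancel() }
            self.pending.removeAll()
        }
    }
}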
Replies: 2 · Boosts: 2 · Views: 337 · Aug ’24
Bluetooth and microphone
Whenever I have any Bluetooth devices connected (radio, car, earphones) and want to record a voice message, the phone assumes I am recording from those devices, both in the Messages app and in any other app. Half of the devices I own don't even have a microphone, so no message gets recorded. Can you implement a choice of microphone to be used when recording something? Some apps don't even have the option to pick the audio output, which is annoying, but having to disable Bluetooth to record something is definitely worse.
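For app developers hitting the same routing problem (this doesn't change the system-wide behavior the post asks about), AVAudioSession does allow preferring the built-in microphone while Bluetooth output stays connected. A sketch, with the function name and category options as illustrative choices:

import AVFoundation

/// Prefer the device's built-in microphone for recording, regardless of connected Bluetooth accessories.
func preferBuiltInMicrophone() throws {
    let session = AVAudioSession.sharedInstance()
    // .allowBluetoothA2DP keeps high-quality Bluetooth output available without
    // forcing input over the accessory's (possibly nonexistent) microphone.
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try session.setActive(true)

    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtInMic)
    }
}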
Replies: 1 · Boosts: 0 · Views: 252 · Aug ’24
In iOS 18, the seek bar operation in a music player app is not working
I have developed and operate a music player app, but when I installed the iOS 18 public beta on my device and checked the app's behavior, I found that the seek bar stops immediately after starting playback, and I cannot change the playback position with the seek bar. Checking the logs, the following error is output when the seek bar stops:

ERROR AudioQueueCreateTimeline status=1953330284

This is a value I have never seen before, and this issue did not occur on iOS 17 or earlier. I would like to know if this issue can be resolved, and if not, how I should handle it.
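Many Core Audio OSStatus values are four-character codes packed into an integer; decoding them makes a log like this searchable. The helper below is a generic debugging aid, not specific to this issue, and which AudioQueue condition the decoded code corresponds to isn't established here, so treat it as a clue rather than a diagnosis.

import Foundation

/// Decodes an OSStatus into its four-character code (when it is one),
/// which makes Core Audio errors easier to search for.
func fourCharCode(from status: OSStatus) -> String? {
    let value = UInt32(bitPattern: status)
    let bytes = [UInt8(value >> 24 & 0xFF), UInt8(value >> 16 & 0xFF),
                 UInt8(value >> 8 & 0xFF), UInt8(value & 0xFF)]
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }) else { return nil }   // printable ASCII only
    return String(bytes: bytes, encoding: .ascii)
}

// fourCharCode(from: 1953330284) == "tmtl"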
Replies: 2 · Boosts: 1 · Views: 271 · Aug ’24
IPCAUClient.cpp:139 IPCAUClient: can't connect to server (-66748) <0x104309130>
When using AVSpeechSynthesizer(), I get an error after a couple of seconds: "IPCAUClient.cpp:139 IPCAUClient: can't connect to server (-66748) <0x104309130>", and then it speaks the text. The second time I call speak, there is no delay or error and it speaks immediately. Where does this error and delay come from, and how can I resolve it?

Initialization code:

self.audioSession = AVAudioSession.sharedInstance()
// 2) handle audio session first, before trying to read the text
do {
    try audioSession.setCategory(.playback, mode: .voicePrompt, options: .duckOthers)
    try audioSession.setActive(false)
} catch let error {
    Logger.model.debug("❓\(error.localizedDescription)")
}
speechSynthesizer = AVSpeechSynthesizer()
speechSynthesizer.usesApplicationAudioSession = true

Speak code:

let utterance = AVSpeechUtterance(string: text)
utterance.preUtteranceDelay = 0.1
utterance.rate = 0.5
utterance.pitchMultiplier = 0.75
utterance.prefersAssistiveTechnologySettings = false
self.speechSynthesizer.speak(utterance)

The last statement produces this error message.
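The first-utterance delay looks like lazy setup of the underlying speech audio unit. One hedged workaround some developers use is to keep a single synthesizer alive for the app's lifetime and "warm it up" at launch with an empty utterance; whether that suppresses the IPCAUClient log in every configuration isn't guaranteed, and the class below is only a sketch.

import AVFoundation

/// Keeps one synthesizer alive and primes it once so the first real utterance
/// doesn't pay the audio-unit spin-up cost.
final class SpeechService {
    static let shared = SpeechService()
    private let synthesizer = AVSpeechSynthesizer()

    private init() {
        // Speaking an empty utterance nudges the underlying audio plumbing to connect early.
        synthesizer.speak(AVSpeechUtterance(string: ""))
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = 0.5
        synthesizer.speak(utterance)
    }
}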
Replies: 1 · Boosts: 4 · Views: 534 · Aug ’24
ApplicationMusicPlayer Audio Session Issue When Switching to AVAudioEngine in Background
Hi! I'm developing a music player app that interchanges between ApplicationMusicPlayer and AVAudioEngine. I'm facing an issue when switching from playback via ApplicationMusicPlayer to AVAudioEngine while the app is in the background. Based on testing, it seems the issue has to do with being unable to take audio focus in the background, causing the error AVAudioSessionErrorCodeCannotInterruptOthers. I would like to check whether ApplicationMusicPlayer has its own audio focus, separate from the app's own audio focus. If it does, is there anything I can do to ensure that ApplicationMusicPlayer returns focus to the app? (I notice the issue does not occur when moving playback from AVAudioEngine to ApplicationMusicPlayer; I'm not sure why the opposite direction doesn't work.)
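The pattern below is not a confirmed fix, but it makes the failure explicit: try to (re)activate the app's own AVAudioSession right before starting AVAudioEngine, and treat cannotInterruptOthers as a signal to defer the switch until the app is foregrounded. The function name and return convention are illustrative.

import AVFoundation

/// Attempts to take over playback with AVAudioEngine; returns false when the session
/// can't be activated in the background, so the caller can retry on foreground.
func startEnginePlayback(_ engine: AVAudioEngine) -> Bool {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
        try engine.start()
        return true
    } catch let error as NSError where error.code == AVAudioSession.ErrorCode.cannotInterruptOthers.rawValue {
        // Defer: observe UIApplication.didBecomeActiveNotification and call this again.
        return false
    } catch {
        print("Engine start failed: \(error)")
        return false
    }
}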
Replies: 1 · Boosts: 0 · Views: 387 · Aug ’24
Recordings on iOS 18.0 beta start with stuttering.
I'm experiencing stuttering every time I record something with my iOS app on the iOS 18 beta. The code ran fine on previous iOS versions. The stuttering occurs for the first 2 seconds. Here's an example: https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering

The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap are configured:

let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()
engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
    guard let self = self else { return }
    do {
        // Write recording to disk
        try audioFile.write(from: buffer)
    } catch {
        // ...
    }
}

I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
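As a debugging aid (not a fix), you can compare each tap buffer's timestamp with the end of the previous one; a jump larger than a buffer during the first couple of seconds would confirm that frames are being dropped upstream of the file writer. This is a sketch against the recorderNode tap shown above; the class name is illustrative.

import AVFoundation

/// Logs discontinuities between successive tap buffers so dropped audio shows up in the console.
final class TapGapDetector {
    private var expectedNextSampleTime: AVAudioFramePosition?

    func inspect(buffer: AVAudioPCMBuffer, time: AVAudioTime) {
        guard time.isSampleTimeValid else { return }
        if let expected = expectedNextSampleTime, time.sampleTime != expected {
            let gapFrames = time.sampleTime - expected
            print("Audio gap of \(gapFrames) frames at sample time \(time.sampleTime)")
        }
        expectedNextSampleTime = time.sampleTime + AVAudioFramePosition(buffer.frameLength)
    }
}

// In the tap:
// recorderNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
//     gapDetector.inspect(buffer: buffer, time: time)
//     try? audioFile.write(from: buffer)
// }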
Replies: 4 · Boosts: 0 · Views: 594 · Aug ’24
How to load Media Player album artwork and release memory after
Media Player album artwork images do not release memory after loading, so memory accumulates, eventually impacting performance and causing crashes.

Steps to reproduce:
1. Download the example project and run on device
2. Grant access to a library of 100+ albums
3. Scroll through albums on any tab screen
4. Observe steady memory increase in the Xcode debug navigator and a crash as you scroll
5. Observe the same memory accumulation on other screens that use different methods to get album artwork images

Observations: Various methods of obtaining album artwork insufficiently release memory and impact app performance.
1. Artwork Image releases memory sometimes if the images are small and you scroll slowly, but the app will still accumulate memory, drop frames when scrolling, and eventually crash.
2. Value For Property behaves similarly to Artwork Image, even when using the implicit size of the artwork bounds.
3. Image From Disk: the problem seems related to the perform() method, since you can remove UIImage and memory still accumulates.
All three methods result in higher retain counts than expected for artwork objects, which could be preventing memory from releasing.
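A pattern that sometimes keeps the working set down (hedged; it does not address whatever the Media Player framework caches internally) is to request artwork strictly at display size, inside an autoreleasepool, and hold only the resulting UIImage. The function name and scale handling are illustrative.

import MediaPlayer
import UIKit

/// Renders album artwork at the exact display size and lets intermediate
/// Media Player objects drain immediately instead of accumulating.
func thumbnail(for item: MPMediaItem, side: CGFloat, scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    autoreleasepool {
        item.artwork?.image(at: CGSize(width: side * scale, height: side * scale))
    }
}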
Replies: 0 · Boosts: 0 · Views: 353 · Aug ’24
ios sound recognition: to what extent can developers access apple's built-in sound recognition?
Hi, I am currently developing an app whose core functionality relies on detecting user laughter in the background. In our early stages we noticed Apple's built-in Sound Recognition functionality. At its core, I am guessing that Sound Recognition requires permission from the user to access the microphone 24/7. Currently, using the conventional avenue of background audio recording, a yellow indicator is present at the top of the iPhone screen to indicate recording; this is not the case for Sound Recognition. If all sound processing/recognition is kept on-device, is there any way to avoid the yellow dot and detect laughter in a way similar to how Apple's Sound Recognition does it? In the Sound Recognition settings interface available to the user in the Settings app, the only detectable "people" sounds are baby crying, coughing, and shouting. Is it also possible to add laughter to this list somehow? Thank you in advance.
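The system sound classifier is exposed to third-party apps through the SoundAnalysis framework (SNClassifySoundRequest with the built-in .version1 classifier), whose label set is much larger than the Accessibility settings list; laughter is assumed here to be among its labels, so verify against the request's knownClassifications at runtime. Note that a third-party app recording from the microphone will still show the recording indicator; the behavior you describe for the system feature isn't something apps can opt into. A live-classification sketch:

import AVFoundation
import SoundAnalysis

/// Observes results from the built-in sound classifier and reports laughter confidence.
final class LaughterObserver: NSObject, SNResultsObserving {
    var onLaughter: ((Double) -> Void)?

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              // "laughter" is an assumed identifier; check SNClassifySoundRequest.knownClassifications.
              let laughter = result.classification(forIdentifier: "laughter") else { return }
        onLaughter?(laughter.confidence)
    }
}

func startLaughterDetection(engine: AVAudioEngine, observer: LaughterObserver) throws {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    let analyzer = SNAudioStreamAnalyzer(format: format)
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    try analyzer.add(request, withObserver: observer)

    // The tap closure retains the analyzer; keep the observer alive at the call site.
    input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
        analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
    try engine.start()
}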
Replies: 2 · Boosts: 0 · Views: 445 · Aug ’24
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiroute?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around using AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift. To make matters worse, this is one of the very few examples on how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiroute's health, with a recent poster noting that volume buttons don't work in this mode, contacting DTS and finding that there's no fix; another finding that it just doesn't work for certain devices, etc. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation. tl;dr - Without using .multiroute, is there a way to allow an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
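On the second question only: .multiRoute aggregates every attached output (and the input) into one route, and your audio graph then decides which output channels feed which piece of hardware. A hedged sketch of the session setup and route inspection follows; the channel-mapping step itself still has to happen in your engine or audio unit, and the function name is illustrative.

import AVFoundation

/// Activates the multi-route category and prints the aggregated outputs with their channel offsets,
/// which is the information needed to steer different streams to speaker vs. headphones.
func configureMultiRouteSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute, options: [])
    try session.setActive(true)

    var channelOffset = 0
    for output in session.currentRoute.outputs {
        let channels = output.channels?.count ?? 0
        print("Output: \(output.portType.rawValue), channels \(channelOffset)..<\(channelOffset + channels)")
        channelOffset += channels
    }
}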
Replies: 1 · Boosts: 0 · Views: 464 · Aug ’24
Configuring Apple Vision Pro's microphones to effectively pick up other speaker's voice
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app on a Vision Pro, the device seemed to only pick up the user's voice (in other words, the voice of the wearer of the Vision Pro). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up the speech well.

I tried to set the AVAudioSession to be equally sensitive in all directions:

private func configureAudioSession() {
    do {
        try audioSession.setCategory(.record, mode: .measurement)
        try audioSession.setActive(true)
        if #available(visionOS 1.0, *) {
            let availableDataSources = audioSession.availableInputs?.first?.dataSources
            if let omniDirectionalSource = availableDataSources?.first(where: { $0.preferredPolarPattern == .omnidirectional }) {
                try audioSession.setInputDataSource(omniDirectionalSource)
            }
        }
    } catch {
        print("Failed to set up audio session: \(error)")
    }
}

And here is how I set up the speech recognition and configure the microphone input:

private func startSpeechRecognition(completion: @escaping (String) -> Void) {
    do {
        // Cancel the previous task if it's running.
        if let recognitionTask = recognitionTask {
            recognitionTask.cancel()
            self.recognitionTask = nil
        }

        // The AudioSession is already active; create the input node.
        let inputNode = audioEngine.inputNode
        try inputNode.setVoiceProcessingEnabled(false)

        // Create and configure the speech recognition request
        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create a recognition request")
        }
        recognitionRequest.shouldReportPartialResults = true

        // Keep speech recognition data on device
        if #available(iOS 13, *) {
            recognitionRequest.requiresOnDeviceRecognition = true
        }

        // Create a recognition task for the speech recognition session.
        // Keep a reference to the task so that it can be canceled.
        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
            // var isFinal = false
            if let result = result {
                // Update the recognizedText
                completion(result.bestTranscription.formattedString)
            } else if let error = error {
                completion("Recognition error: \(error.localizedDescription)")
            }
            if error != nil || result?.isFinal == true {
                // Stop recognizing speech if there is a problem
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)
                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        }

        // Configure the microphone input
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        completion("Audio engine could not start: \(error.localizedDescription)")
    }
}
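One more thing worth trying (hedged; whether Vision Pro exposes an omnidirectional pattern on its microphone array isn't established by the post): rather than only matching on preferredPolarPattern, enumerate each input's data sources, explicitly set the preferred polar pattern on a source that supports .omnidirectional, and make that input and source preferred before activating the session. The function name is illustrative.

import AVFoundation

/// Asks the available inputs for a data source that supports omnidirectional pickup
/// and makes it the preferred source, so nearby talkers are not filtered out.
func preferOmnidirectionalPickup() throws {
    let session = AVAudioSession.sharedInstance()
    for input in session.availableInputs ?? [] {
        guard let source = input.dataSources?.first(where: {
            $0.supportedPolarPatterns?.contains(.omnidirectional) == true
        }) else { continue }

        try source.setPreferredPolarPattern(.omnidirectional)
        try session.setPreferredInput(input)
        try input.setPreferredDataSource(source)
        return
    }
    print("No omnidirectional data source available on this device")
}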
Replies: 0 · Boosts: 0 · Views: 479 · Jul ’24