Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


Interference between MTAudioProcessingTap instances on iOS 17.1 and later
There is a CustomPlayer class that uses an MTAudioProcessingTap internally to modify the audio buffer. Say there are two instances, A and B, of this CustomPlayer class. While A and B are both running, as soon as A finishes its work and its instance is terminated, B's MTAudioProcessingTap process callback stops and its finalize callback fires, even though B still has work left to do. The same code in the same project does not behave this way on iOS 17.0 or earlier: there, terminating A has no impact on B, and B completes its task. What change in iOS 17.1 leads to this behavior? I'd appreciate any advice on how to avoid the issue.

let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, &timeRange, numberFramesOut)
            if noErr == status {
                ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard noErr == status else { return }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
Replies: 1 · Boosts: 0 · Views: 712 · Feb ’24
Help with crash report for AUv3 plugin
Is this an uncaught C++ exception that could have originated from my code, or something else? (This report is from a tester.) (Also, why can't the crash reporter tell you anything about which exception wasn't caught?) (Per the instructions here, you'll need to rename the attached .txt to .ips to view the crash report.) Thanks! AudulusAU-2024-02-14-020421.txt
Replies: 1 · Boosts: 0 · Views: 734 · Feb ’24
Why has AVSpeechSynthesizer quit speaking?
I have AVSpeechSynthesizer built into 6 apps for iPad/iOS that were working fine until recently. Sometime between November 2023 and February 2024 they just quit speaking, in all of the apps, for no apparent reason. There have been both Xcode and iOS updates in the interim, but I cannot be sure which caused it. It doesn't work in the Xcode simulator, nor on devices. What did Apple change? Xcode 15.2, iOS 17+, SwiftUI.

let synth = AVSpeechSynthesizer()
var thisText = ""

func sayit(thisText: String) {
    let utterance = AVSpeechUtterance(string: thisText)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = 0.4
    utterance.preUtteranceDelay = 0.1
    synth.speak(utterance)
}
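Not a confirmed diagnosis, but two things are worth ruling out before blaming the OS update: the synthesizer being deallocated while it is still speaking (easy to hit in SwiftUI if it is recreated along with a view), and AVSpeechSynthesisVoice(language: "en-US") returning nil because the voice asset is not installed, which leaves the utterance with no usable voice. A minimal sketch that keeps the synthesizer in a long-lived object and logs the resolved voice; the SpeechBox type is made up for illustration:

import SwiftUI
import AVFoundation

// Hypothetical helper: one AVSpeechSynthesizer that outlives any single view.
final class SpeechBox: ObservableObject {
    private let synth = AVSpeechSynthesizer()

    func say(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        // AVSpeechSynthesisVoice(language:) can return nil if the en-US voice asset isn't available.
        let voice = AVSpeechSynthesisVoice(language: "en-US")
        print("Resolved voice:", voice?.identifier ?? "nil - no en-US voice available")
        utterance.voice = voice
        utterance.rate = 0.4
        utterance.preUtteranceDelay = 0.1
        synth.speak(utterance)
    }
}

If the log shows a valid voice and speech still stays silent only on device, the audio session configuration is the next thing to check.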
Replies: 2 · Boosts: 1 · Views: 677 · Feb ’24
Using AVAudioEngine to monitor audio in visionOS
I am writing code to monitor the incoming audio levels in visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips. I took out some of the code so it's a bit shorter; it fails in setupAudioEngine when I try to start the engine, with this error:

Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)

Thanks in advance! Here is my code:

class AudioInputMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?
    @Published var inputLevel: Float = 0

    init() {
        requestMicrophonePermission()
    }

    private func requestMicrophonePermission() {
        AVAudioApplication.requestRecordPermission { granted in
            DispatchQueue.main.async {
                if granted {
                    self.setupAudioSessionAndEngine()
                } else {
                    print("Microphone permission not granted")
                    // Handle the case where permission is not granted
                }
            }
        }
    }

    private func setupAudioSessionAndEngine() {
        do {
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
            try audioSession.setActive(true)
            self.setupAudioEngine()
        } catch {
            print("Failed to set up the audio session: \(error)")
        }
    }

    private func setupAudioEngine() {
        audioEngine = AVAudioEngine()
        guard let inputNode = audioEngine?.inputNode else {
            print("Failed to get the audio input node")
            return
        }
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
            self?.analyzeAudio(buffer: buffer)
        }
        do {
            try audioEngine?.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    private func analyzeAudio(buffer: AVAudioPCMBuffer) {
        // removed to be brief
    }

    func stopMonitoring() {
        // removed to be brief
    }
}
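A hedged observation rather than a confirmed fix: 561145187 is the four-character code '!rec', i.e. AVAudioSessionErrorCodeCannotStartRecording, which usually means the session was not permitted to begin recording on the device, for example when NSMicrophoneUsageDescription is missing from the Info.plist or microphone access was never actually granted. A small check like the following, run before audioEngine?.start(), can help narrow it down; the printed hints are assumptions:

import AVFAudio

// Sketch: confirm the recording preconditions that '!rec' usually points at.
func checkRecordingPreconditions() {
    switch AVAudioApplication.shared.recordPermission {
    case .granted:
        print("Record permission granted")
    case .denied:
        print("Record permission denied - check Settings > Privacy & Security > Microphone")
    case .undetermined:
        print("Permission not requested yet - AVAudioApplication.requestRecordPermission must run first")
    @unknown default:
        break
    }
    // '!rec' can also appear if the active category does not allow input.
    print("Category:", AVAudioSession.sharedInstance().category.rawValue) // expect playAndRecord or record
}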
Replies: 1 · Boosts: 0 · Views: 1k · Feb ’24
Trouble Getting RealityKit audio to play
I can't figure out how to get audio from my RealityKitContentBundle to play on Vision Pro. I have a scene in Reality Composer Pro called "WinterVivarium" which contains a 3D model of a tree, a particle emitter, a ChannelAudio entity, and an audio file (m4a) with 30 minutes of nature sounds. The 3D model and particle emitter load up just fine on my device, but I'm getting an error when I try to load the audio. The Swift file is below. When I run the app and this file gets called, it throws the following error:

"Error loading winter vivarium model and/or audio: The operation couldn’t be completed. (RealityKit.__REAsset.LoadError error 2.)"

ChatGPT tells me error code 2 likely means "file not found", but I'm not sure about that. Please help!

import SwiftUI
import RealityKit
import RealityKitContent

struct WinterVivarium: View {
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        RealityView { content in
            let audioFilePath = "/Root/back-yard-feb-7am.m4a"
            let audioEntity = Entity()
            do {
                let entity = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
                content.add(entity)
                let resource = try await AudioFileResource.load(named: audioFilePath, from: "WinterVivarium.usda", in: RealityKitContent.realityKitContentBundle)
                let audioController = audioEntity.playAudio(resource)
            } catch {
                print("Error loading winter vivarium model and/or audio: \(error.localizedDescription)")
            }
        }
    }
}

#Preview {
    WinterVivarium()
}
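Not a verified fix, but a hedged sketch of the two things that stand out in the snippet: the audioEntity that plays the resource is never added to the scene, and the path passed to AudioFileResource.load has to match how Reality Composer Pro names the audio inside WinterVivarium.usda. Something along these lines inside the RealityView closure may behave better; the "/Root/back-yard-feb-7am.m4a" path is taken from the post, and the "ChannelAudio" entity name is an assumption:

// Sketch: add the loaded scene first, then play the audio on an entity that is
// actually part of that scene rather than on a detached Entity().
let scene = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
content.add(scene)

// The resource name must match the audio's path inside the scene; verify it against
// the hierarchy shown in Reality Composer Pro.
let resource = try await AudioFileResource.load(named: "/Root/back-yard-feb-7am.m4a",
                                                from: "WinterVivarium.usda",
                                                in: realityKitContentBundle)

// Play on the scene's ChannelAudio entity if it can be found, otherwise on the scene root.
let target = scene.findEntity(named: "ChannelAudio") ?? scene
target.playAudio(resource)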
Replies: 5 · Boosts: 0 · Views: 1.5k · Feb ’24
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode?
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:

"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)

If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:

1. Does AVAudioEngine / AVAudioPlayerNode require the AVAudioPCMBuffer to be in the Float32 format? Is there a way I can configure it to accept another format instead for my application?
2. If 1 is YES, is this documented anywhere?
3. If 1 is YES, is this required format subject to change at any point?

Thanks! I was also looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014, but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
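Not an authoritative answer, but a common reading of -10868 (kAudioUnitErr_FormatNotSupported): the player node is connected to the mixer using the engine's standard deinterleaved Float32 format, so a scheduled Int16 buffer fails the format negotiation. Rather than reconfiguring the graph, the usual workaround is to convert the buffer once with AVAudioConverter before scheduling it. A hedged sketch, assuming only the Int16 to Float32 sample-format change with no sample-rate conversion:

import AVFAudio

// Sketch: convert an Int16 PCM buffer into the Float32 format the player/mixer path expects.
func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) throws -> AVAudioPCMBuffer? {
    guard let floatFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                          sampleRate: int16Buffer.format.sampleRate,
                                          channels: int16Buffer.format.channelCount,
                                          interleaved: false),
          let converter = AVAudioConverter(from: int16Buffer.format, to: floatFormat),
          let outBuffer = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                           frameCapacity: int16Buffer.frameLength) else {
        return nil
    }
    // Same sample rate on both sides, so the simple buffer-to-buffer convert is enough.
    try converter.convert(to: outBuffer, from: int16Buffer)
    return outBuffer
}

The returned Float32 buffer can then be scheduled on the AVAudioPlayerNode as usual.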
Replies: 0 · Boosts: 0 · Views: 717 · Feb ’24
CARP violation when calling AudioUnitSetProperty on an Apple silicon MacBook
I'm developing an iOS application that uses Core Audio. When I run the app on an Apple silicon MacBook, the first time I call AudioUnitSetProperty the following error is logged:

CARP violation: using HAL semantics (AUIOImpl_Base)

Are others seeing this, and is it part of the normal process? I'm also getting

AQMEIO_HAL.cpp:862 kAudioDevicePropertyMute returned err 2003332927

when I set kAudioOutputUnitProperty_EnableIO for input.
Replies: 7 · Boosts: 0 · Views: 1.5k · Feb ’24
Mixing ScreenCaptureKit audio with microphone audio
Hi, I'm new to AVAudioEngine (and macOS programming in general). I'm trying to mix microphone audio with ScreenCaptureKit audio using AVAudioEngine, without playing it back. I've created an AVAudioPlayerNode and I'm scheduling buffers in my SCStream handler:

playerNode.scheduleBuffer(samples)

and I have connected the playerNode to the mainMixerNode:

audioEngine.connect(audioEngine.inputNode, to: audioEngine.mainMixerNode, format: micFormat)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: format)

The problem is that mainMixerNode plays the audio to the speaker, creating a feedback loop. How can I prevent the mixer output from being played back? Also: is this the best way of mixing microphone input with some other input? I ran into AVAudioEngine's manual rendering mode, which seems like the way to go for mixing audio without playing it back. However, I couldn't figure out how to connect microphone input to the AVAudioEngine in manual rendering mode.
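Not the only possible answer, but a hedged workaround that avoids manual rendering entirely: mix the two sources in an intermediate AVAudioMixerNode, grab the mix from a tap on that node, and set the main mixer's outputVolume to 0 so nothing reaches the speakers. A sketch under those assumptions, reusing the audioEngine, playerNode, micFormat and format variables from the post:

// Sketch: mix mic + ScreenCaptureKit audio in a submixer, tap the submixer,
// and silence the path to the hardware so there is no feedback loop.
let submixer = AVAudioMixerNode()
audioEngine.attach(submixer)

audioEngine.connect(audioEngine.inputNode, to: submixer, format: micFormat)
audioEngine.connect(playerNode, to: submixer, format: format)
audioEngine.connect(submixer, to: audioEngine.mainMixerNode, format: submixer.outputFormat(forBus: 0))

// Mute the output; the tap below still sees the full-level mix from the submixer.
audioEngine.mainMixerNode.outputVolume = 0

submixer.installTap(onBus: 0, bufferSize: 4096, format: submixer.outputFormat(forBus: 0)) { buffer, time in
    // buffer holds the mixed microphone + ScreenCaptureKit audio;
    // hand it to a file writer or encoder here.
    print("mixed \(buffer.frameLength) frames at \(time.sampleTime)")
}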
Replies: 0 · Boosts: 0 · Views: 831 · Feb ’24
AudioDriverKit: avoiding IOUserAudioIOOperationWriteEnd calls for input loopback
Dear Sirs, I've written an audio driver based on AudioDriverKit. In my audio callback function I'm receiving calls with the io operations IOUserAudioIOOperationWriteEnd and IOUserAudioIOOperationBeginRead as expected, i.e. I see IOUserAudioIOOperationWriteEnd operations during playback in an application like VLC or the browser, and I see IOUserAudioIOOperationBeginRead when recording in Audacity etc. But when I open System Settings, go to Sound, and select my driver as input, I also see calls with IOUserAudioIOOperationWriteEnd, which seem to carry the input data that was just read. I can also observe this when starting up Teams. I think the purpose is to add the (mic) input to the output so you have the chance to listen to yourself. Nevertheless I'd like to fully avoid this, but I don't see a way to distinguish between the playback audio data and the input audio data inside this callback. How could I do this? Or, even better, is there a switch which would completely turn off these callbacks that forward the input to the output? Thanks and best regards, Johannes
Replies: 2 · Boosts: 0 · Views: 644 · Feb ’24
Save/Restore properties on reboot in audio driver based on AudioDriverKit
Dear Sirs, when writing an AudioServerPlugin I can use the host's WriteToStorage/CopyFromStorage functions to save and restore custom properties across a restart of the machine. Are there corresponding functions for an audio driver based on AudioDriverKit? What would be the recommended way to save and restore properties so that they are available again after a reboot in an audio driver based on AudioDriverKit? Thanks and best regards, Johannes
Replies: 1 · Boosts: 0 · Views: 519 · Feb ’24
USB audio multi route input - AVAudioEngine inputNode
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to two or more different audio nodes (e.g. mixers). I'm able to get a specific input channel from the engine's inputNode with

OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit,
                                    kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output,
                                    1,
                                    outputChannelMap,
                                    propSize);

but this changes the routing for the whole input node and therefore for all destination mixer nodes. How can I send channel 1 of the inputNode to a mixerNode1 and channel 2 to another mixerNode2?
Replies: 0 · Boosts: 0 · Views: 704 · Feb ’24
iOS CallKit system screen: no audio if I wait for the connection before calling CXAnswerCallAction::fulfill
I'm trying to integrate CallKit into a Flutter app that uses WebRTC for calls, and I have an issue with taking calls on the locked screen. CXAnswerCallAction requires action.fulfill() to be called after the connection is established. Here is a piece of code without waiting for establishment of the connection:

    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
        self.configureAudioSession()
    }
    action.fulfill()
}

This causes the connection-time counter to be visible on the screen immediately, but the user still has to wait for the connection to be established and can't hear anything. Here is the code that waits for the establishment of the connection before calling action.fulfill():

    if self.awaitedConnection.uuid != uuid {
        action.fail()
    } else if self.awaitedConnection.isConnected {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
            self.configureAudioSession()
        }
        action.fulfill()
    } else {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
            self.waitForConnection(uuid: uuid, action: action)
        }
    }
}

public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    self.awaitedConnection.uuid = action.callUUID
    self.awaitedConnection.isConnected = false
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    waitForConnection(uuid: action.callUUID, action: action)
}

Unfortunately, though it works great on iOS 15.7, on 17.3 it results in no audio at all: no sound and no recording. I also can't enable it later while the call is ongoing. For reference:

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.allowBluetooth)
        try session.setMode(self.getAudioSessionMode(data?.audioSessionMode ?? "voiceChat"))
        try session.setActive(data?.audioSessionActive ?? true)
        try session.setPreferredSampleRate(data?.audioSessionPreferredSampleRate ?? 44100.0)
        try session.setPreferredIOBufferDuration(data?.audioSessionPreferredIOBufferDuration ?? 0.005)
    } catch {
        print(error)
    }
}

I can see in the docs of action.fulfill() that "You should only call this method from the implementation of a CXProviderDelegate method". Is this the reason for the issue? But how can I do it if I need to wait for the connection asynchronously and the provider method is synchronous?
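Not a confirmed diagnosis, but the interaction between fulfill() timing and audio-session activation is the usual suspect: for an answered CallKit call, the system activates the AVAudioSession on the app's behalf, and audio tends to work reliably only when the answer action is fulfilled promptly and the audio units are started from provider(_:didActivate:), rather than after an arbitrary delay. A hedged sketch of that shape, where startAudioUnits()/stopAudioUnits() stand in for the WebRTC-side calls:

public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    // Configure (but don't force-activate) the session, then fulfill right away;
    // the signalling/connection work continues asynchronously.
    configureAudioSession()
    action.fulfill()
}

public func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    // CallKit has activated the session for this call; start the WebRTC audio here.
    startAudioUnits()   // placeholder
}

public func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    stopAudioUnits()    // placeholder
}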
Replies: 1 · Boosts: 0 · Views: 723 · Mar ’24
How to Add .sf2 Instruments to Multiple Channels Using AVFAudio?
Hello everyone, I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels. However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work. If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file? Thank you in advance for any assistance! Best regards, Melih
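Not a definitive answer, but a pattern that is often used for this: attach one AVAudioUnitSampler per part, and have each sampler load a different preset from the same SF2 via loadSoundBankInstrument(at:program:bankMSB:bankLSB:). For melodic presets the bank MSB is normally kAUSampler_DefaultMelodicBankMSB (0x79) with LSB 0, and a soundfont editor such as Polyphone can confirm which program and bank numbers the file actually contains. A hedged sketch; the file name and program numbers are examples:

import AVFAudio
import AudioToolbox

// Sketch: two independent "channels", each backed by its own sampler and SF2 preset.
func setUpSamplers(engine: AVAudioEngine, soundFontURL: URL) throws -> (piano: AVAudioUnitSampler, strings: AVAudioUnitSampler) {
    let piano = AVAudioUnitSampler()
    let strings = AVAudioUnitSampler()

    for sampler in [piano, strings] {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    }
    try engine.start()

    // Program numbers are General MIDI examples; check the SF2 for the real ones.
    try piano.loadSoundBankInstrument(at: soundFontURL,
                                      program: 0,                                     // e.g. Acoustic Grand
                                      bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                      bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    try strings.loadSoundBankInstrument(at: soundFontURL,
                                        program: 48,                                  // e.g. String Ensemble
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    return (piano, strings)
}

Each returned sampler then responds to startNote(_:withVelocity:onChannel:) independently, which gives per-instrument control without relying on MIDI channel routing inside a single sampler.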
Replies: 1 · Boosts: 0 · Views: 876 · Mar ’24
Audio Position locked on App start for Fully Immersive Unity builds. No head tracked audio. Often starts hard panned to left or right
I’ve noticed that audio is locked into position with fully immersive Unity builds. It sometimes comes out of the left ear, other times the right. I occasionally get it working with both, but the audio isn’t moving with head tracking. I've seen other reports of this issue in the Unity support forums too. Any ideas on how to fix this? It's being reported in my app reviews as a negative.
Replies: 0 · Boosts: 0 · Views: 596 · Mar ’24
Audio Workgroups: aux threads joined to the workgroup run on E-cores when the app is in the background
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with audio workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core but has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even when a different app is currently in focus.
Replies: 0 · Boosts: 0 · Views: 685 · Mar ’24
Example Spatial/Immersive Video Player
I know there have been a lot of questions about playing back spatial/immersive/MV-HEVC video content on the Vision Pro. Today, I released an example player on GitHub that might answer some questions. Of course, without official documentation on some of these formats, it could be that Apple will eventually do something a little different. We'll just have to wait. In the meantime: https://github.com/mikeswanson/SpatialPlayer
Replies: 1 · Boosts: 0 · Views: 741 · Mar ’24
AVAudioSession error code: AVAudioSessionErrorCodeCannotInterruptOthers
Background: when I receive the interruption-began notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause the music. When I receive the interruption-ended notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume the music. However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684).

Some solutions: I searched Stack Overflow; there are some similar questions, but the proposed solutions are not very satisfying, because I don't want my app to mix with others, and, once again, it all works most of the time. My app already uses remote control events, so that doesn't solve anything either.

Questions:
1. Has anyone else encountered this problem?
2. Can this problem be solved, and how?
3. In addition, I noticed there's a property named otherAudioPlaying on AVAudioSession, so we can tell that another app is playing; the question is whether we can find out which app is playing.
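Not a definitive fix, but a pattern that is often suggested for this error: reactivate the session from the interruption-ended handler only when the .shouldResume option is present, and treat AVAudioSessionErrorCodeCannotInterruptOthers as "another non-mixable session is still active", to be retried when the app comes back to the foreground. On the last question, as far as I know there is no public API that reveals which app is playing; otherAudioPlaying and secondaryAudioShouldBeSilencedHint only report that something is. A sketch under those assumptions (pausePlayback/resumePlayback are placeholders for the app's own logic):

import AVFAudio

func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        pausePlayback()                       // placeholder
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        guard options.contains(.shouldResume) else { return }   // the other audio won; don't fight it
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            resumePlayback()                  // placeholder
        } catch {
            // 560557684 ('!int') lands here; retry when the app becomes active again.
            print("Could not reactivate session: \(error)")
        }
    @unknown default:
        break
    }
}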
Replies: 0 · Boosts: 0 · Views: 430 · Mar ’24