AVAudioEngine

Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

Posts under AVAudioEngine tag

48 Posts
Post · Replies · Boosts · Views · Activity

Short small starter guide for AVAudioEngine and AVAudioSession on iOS
AVAudioEngine and AVAudioSession

Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**). Why? I want people who run into this issue to be able to find this post through search engines. This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details, but if you're trying to figure out how to fix a crash, you may find a common fix in this post.

Is it possible to use AVAudioEngine and AVAudioSession together? The answer is yes, but you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. I don't know how it is with an IDE, but with just a .app and an iPhone it took plenty of trial and error. Something that helped me fix a crash was this sample project: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing It uses both AVAudioEngine and AVAudioSession.

How can I fix AVAudioEngineImpl::Initialize(NSError**)? It depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either. These are the common cases I encountered:

inputNode (https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode): You need an input node, and you need to access it, or else there won't be one. If there isn't one, AVAudioEngine.start will most likely crash. The audio engine creates a singleton on demand when this property is first accessed. Accessing it up front has prevented this common crash for me.

.prepare deallocates and can cause a crash if you restart your audio engine: Another issue I faced was handling .prepare wrong. You don't strictly need .prepare, but if you use installTap or similar, I think you do. One thing to note: if you had previously initialized inputNode, it could be gone after using .prepare. Make sure you access AVAudioEngine.inputNode (or whatever node you need) again before calling .start(). The voice processing project does this by creating a managing controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.

AVAudioSession's setCategory: You have to experiment with it. The crashes can be very weird; sometimes your app will only crash once, and then only after you reinstall it or when it starts up. You can use .setActive and .setCategory together with AVAudioEngine. Just do not call .setActive(false) before you've stopped the audio engine, as it will fail. Sometimes I'd run into an issue with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue or not.

try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])

Experiment with it, but .multiRoute and .mixWithOthers allowed me to use AVAudioEngine to make a test recording, and I can even switch the data sources and polar patterns without any issues. Sometimes you can get away without calling .setActive at all; I'm not sure if AVAudioEngine does it automatically.

Short summary:
If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (This can differ from setup to setup.)
Only call .setActive(false) after you've called .stop; otherwise I believe it has no chance of stopping the engine.
AVAudioSession's setCategory is important. Try .multiRoute, or experiment with the other categories and modes. If you manage to solve your crash, you'll indeed be able to change data sources, polar patterns, and more.
Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I don't think try/catch will save you, so make sure you're not starting it twice.

I hope this short breakdown helps you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes that I have yet to figure out how to solve. I have a lot of trouble getting my testing app onto my iPhone, so I'm sorry if this guide didn't cover every detail. A huge tip from me is to check the documentation. For example, when I read the documentation for inputNode I learned why my app crashed: I never accessed and initialized one. The developer documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you access if you believe it causes issues. I also recommend finding example projects like the voice processing one, since there aren't any code examples in the documentation itself.
0
0
110
3d
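Not part of the starter guide above, just a minimal sketch of the ordering it describes (access inputNode before prepare/start, stop the engine before deactivating the session, never start twice). The class and method names are illustrative, and the category/options still need the experimentation the post talks about.

import AVFoundation

final class RecorderController {
    private let engine = AVAudioEngine()
    private let session = AVAudioSession.sharedInstance()

    func start() throws {
        // 1. Configure and activate the session before touching the engine.
        try session.setCategory(.playAndRecord, mode: .default,
                                options: [.defaultToSpeaker, .mixWithOthers])
        try session.setActive(true)

        // 2. Access inputNode so the engine actually creates it (see the guide above).
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.removeTap(onBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // Handle captured buffers here.
        }

        // 3. Prepare, then start, and never start an engine that is already running.
        engine.prepare()
        if !engine.isRunning {
            try engine.start()
        }
    }

    func stop() throws {
        // Stop the engine first; only then deactivate the session.
        engine.stop()
        engine.inputNode.removeTap(onBus: 0)
        try session.setActive(false, options: .notifyOthersOnDeactivation)
    }
}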
Bluetooth Speaker makes installTap fail to callback after first few seconds
If I have a Bluetooth speaker connected and installTap called on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route change or notification handler called in between.

engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
    onBus: 0,
    bufferSize: 4096,
    format: format
) { buffer, _ in
    guard let channelData = buffer.floatChannelData else { return }
    // This callback fails after some time.
}

Not sure if this is expected, but I noticed some other applications seem to work fine. If I remove the Bluetooth device, my input works fine. I also have no issues with output on the speaker.
2
0
82
4d
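A diagnostic sketch related to the Bluetooth tap question above, not a confirmed fix: observe AVAudioSession.routeChangeNotification to see whether the route actually changes when the callbacks stop, and try the Bluetooth session options explicitly. The category and options used here are assumptions.

import AVFoundation

let session = AVAudioSession.sharedInstance()
// Worth trying: allow Bluetooth input/output explicitly before starting the engine.
try? session.setCategory(.playAndRecord, options: [.allowBluetooth, .allowBluetoothA2DP])

NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    let reason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    print("route change reason:", reason ?? 0)
    print("current inputs:", session.currentRoute.inputs.map { $0.portName })
    // If a change does fire here, removing and reinstalling the tap with the
    // new inputNode format is one thing to test.
}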
ExtAudioFileRead throwing AVAudioSessionErrorCodeResourceNotAvailable error on iOS and iPadOS 18
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release. The following is also printed to the Xcode 16 console:
2
1
191
6d
Compressing AVAudioPCMBuffer within AVAudioEngine Tap
Hi everyone, I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network. Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this. Any advice or code examples would be greatly appreciated! Thanks in advance, Ondřej

My current code without the compression:

inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
    guard let self else { return }

    // 1. Send data
    // a) Convert the buffer into the desired format
    if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
        // b) Use the converted buffer
        // TODO: compress it into a different format
        if let data = outputBuffer.convertToData() {
            self.sendAudio(data)
        }
    }

    // 2. Get sound level
    self.visualizeRecorderBuffer(buffer)
}

func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
    let outputFrameCapacity = AVAudioFrameCount(
        round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
    )

    guard
        let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
        let converter = AVAudioConverter(from: format, to: outputFormat)
    else { return nil }

    converter.convert(to: outputBuffer, error: nil) { packetCount, status in
        status.pointee = .haveData
        return self
    }

    return outputBuffer
}

static private let websocketInputFormat = AVAudioFormat(
    commonFormat: .pcmFormatInt16,
    sampleRate: 16000,
    channels: 1,
    interleaved: false
)!
1
0
254
3w
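Related to the compression question above: I can't confirm that AVAudioConverter will encode Opus on iOS, so this sketch shows the general tap-side pattern with AAC (kAudioFormatMPEG4AAC) and AVAudioCompressedBuffer instead; for Opus specifically, a third-party encoder such as libopus may be needed. Creating the converter once and reusing it across tap callbacks would be more efficient than this per-buffer version.

import AVFoundation

// Hypothetical helper: compress one PCM buffer into AAC packets and return the raw bytes.
func compressToAAC(_ pcmBuffer: AVAudioPCMBuffer) -> Data? {
    guard let aacFormat = AVAudioFormat(settings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: pcmBuffer.format.sampleRate,
        AVNumberOfChannelsKey: pcmBuffer.format.channelCount
    ]),
    let converter = AVAudioConverter(from: pcmBuffer.format, to: aacFormat) else { return nil }

    let outBuffer = AVAudioCompressedBuffer(format: aacFormat,
                                            packetCapacity: 8,
                                            maximumPacketSize: converter.maximumOutputPacketSize)
    var fed = false
    var error: NSError?
    let status = converter.convert(to: outBuffer, error: &error) { _, outStatus in
        if fed {
            outStatus.pointee = .noDataNow   // Nothing more to provide for this call.
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return pcmBuffer
    }
    guard status != .error else { return nil }
    return Data(bytes: outBuffer.data, count: Int(outBuffer.byteLength))
}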
ApplicationMusicPlayer Audio Session Issue When Switching to AVAudioEngine in Background
Hi! I'm developing a music player app that switches between ApplicationMusicPlayer and AVAudioEngine. I'm facing an issue when switching from playback via ApplicationMusicPlayer to AVAudioEngine while the app is in the background. Based on testing, it seems like the issue has to do with being unable to take audio focus in the background, causing error AVAudioSessionErrorCodeCannotInterruptOthers. I would like to check whether ApplicationMusicPlayer has its own audio focus, separate from the app's own audio focus. If so, is there anything I can do to ensure that ApplicationMusicPlayer returns focus to the app? (I notice that the issue does not occur when moving playback from AVAudioEngine to ApplicationMusicPlayer; I'm not sure why the opposite direction does not work.)
1
0
272
Aug ’24
AVAudioSession gets interrupted when closing a window
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again. This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
2
1
343
Aug ’24
Recordings on iOS 18.0 beta start with stuttering.
I'm experiencing stuttering every time I record something with my iOS app on iOS 18 beta. The code ran fine on previous iOS versions. The stuttering occurs for the first 2 seconds. Here's an example: https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap is configured:

let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()
engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
    guard let self = self else { return }
    do {
        // Write recording to disk
        try audioFile.write(buffer)
    } catch {
        // ...
    }
}

I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
4
0
388
Aug ’24
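Not from the post above, and not verified against the iOS 18 behavior being reported: one thing worth ruling out is doing the AVAudioFile write directly inside the tap callback, since blocking the audio thread with disk I/O is a classic source of glitches. A sketch that hands the buffer off to a serial queue; the queue label is made up, and audioFile is assumed to be an optional property on the same class.

let writeQueue = DispatchQueue(label: "recorder.file-writer")

recorderNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { [weak self] buffer, _ in
    guard let self else { return }
    // If buffers turn out to be reused, copy before dispatching.
    writeQueue.async {
        do {
            // AVAudioFile's writer API is write(from:).
            try self.audioFile?.write(from: buffer)
        } catch {
            print("write failed: \(error)")
        }
    }
}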
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiroute?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around using AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift. To make matters worse, this is one of the very few examples on how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiroute's health, with a recent poster noting that volume buttons don't work in this mode, contacting DTS and finding that there's no fix; another finding that it just doesn't work for certain devices, etc. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation. tl;dr - Without using .multiroute, is there a way for an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
1
0
364
Aug ’24
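Not an answer to the multiroute question above, just a starting-point sketch: set the .multiRoute category, activate the session, and inspect currentRoute to confirm that the headphones and built-in speaker appear as separate output ports. Actually routing different AVAudioEngine signals to specific ports then requires channel mapping, which this does not cover.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.multiRoute, options: [.mixWithOthers])
    try session.setActive(true)
} catch {
    print("multiroute setup failed: \(error)")
}

// With wired headphones attached, more than one output port should be listed.
for output in session.currentRoute.outputs {
    print("output:", output.portType.rawValue, output.portName,
          "channels:", output.channels?.count ?? 0)
}
for input in session.currentRoute.inputs {
    print("input:", input.portType.rawValue, input.portName)
}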
Configuring Apple Vision Pro's microphones to effectively pick up other speaker's voice
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app with a Vision Pro, the device seemed to only pick up the user's voice (in other words, the voice of the wearer of the Vision Pro device). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up the speech well. I tried to set the AVAudioSession to be equally sensitive to all directions:

private func configureAudioSession() {
    do {
        try audioSession.setCategory(.record, mode: .measurement)
        try audioSession.setActive(true)
        if #available(visionOS 1.0, *) {
            let availableDataSources = audioSession.availableInputs?.first?.dataSources
            if let omniDirectionalSource = availableDataSources?.first(where: { $0.preferredPolarPattern == .omnidirectional }) {
                try audioSession.setInputDataSource(omniDirectionalSource)
            }
        }
    } catch {
        print("Failed to set up audio session: \(error)")
    }
}

And here is how I set up the speech recognition and configure the microphone inputs:

private func startSpeechRecognition(completion: @escaping (String) -> Void) {
    do {
        // Cancel the previous task if it's running.
        if let recognitionTask = recognitionTask {
            recognitionTask.cancel()
            self.recognitionTask = nil
        }

        // The AudioSession is already active, creating input node.
        let inputNode = audioEngine.inputNode
        try inputNode.setVoiceProcessingEnabled(false)

        // Create and configure the speech recognition request
        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create a recognition request") }
        recognitionRequest.shouldReportPartialResults = true

        // Keep speech recognition data on device
        if #available(iOS 13, *) {
            recognitionRequest.requiresOnDeviceRecognition = true
        }

        // Create a recognition task for the speech recognition session.
        // Keep a reference to the task so that it can be canceled.
        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
            // var isFinal = false
            if let result = result {
                // Update the recognizedText
                completion(result.bestTranscription.formattedString)
            } else if let error = error {
                completion("Recognition error: \(error.localizedDescription)")
            }
            if error != nil || result?.isFinal == true {
                // Stop recognizing speech if there is a problem
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)
                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        }

        // Configure the microphone input
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        completion("Audio engine could not start: \(error.localizedDescription)")
    }
}
0
0
325
Jul ’24
Issue with AVAudioEngine and AVAudioSession after Interruption and Background Transition - 561145187 error code
Description: I am developing a recording-only application that supports background recording using AVAudioEngine. The app segments the recording into 60-second files for further processing. For example, a 10-minute recording results in ten 60-second files.

Problem: The application functions as expected in the background. However, after the app receives an interruption (such as a phone call) and the interruption ends, I can successfully restart the recording. The problem arises when the app then transitions to the background; it fails to restart the recording. Specifically, after ending the call and transitioning the app to the background, the app encounters an error and is unable to restart AVAudioSession and AVAudioEngine. The only resolution is to close and restart the app, which is not ideal for user experience.

Steps to Reproduce:
1. Start recording using AVAudioEngine.
2. The app records and saves 60-second segments.
3. Receive an interruption (e.g., an incoming phone call).
4. End the call.
5. Transition the app to the background.
6. Transition the app to the foreground and the session will be activated again.
7. Attempt to restart the recording.

Expected Behavior: The app should resume recording seamlessly after the interruption and background transition.

Actual Behavior: The app fails to restart AVAudioSession and AVAudioEngine, resulting in a continuous error. The recording cannot be resumed without closing and reopening the app.

How I’m Starting the Recording:

Configuration:

internal func setAudioSessionCategory() {
    do {
        try audioSession.setCategory(
            .playAndRecord,
            mode: .default,
            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetooth]
        )
    } catch {
        debugPrint(error)
    }
}

internal func setAudioSessionActivation() {
    if UIApplication.shared.applicationState == .active {
        do {
            try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            if audioSession.isInputGainSettable {
                try audioSession.setInputGain(1.0)
            }
            try audioSession.setPreferredIOBufferDuration(0.01)
            try setBuiltInPreferredInput()
        } catch {
            debugPrint(error)
        }
    }
}

Starting AVAudioEngine:

internal func setupEngine() {
    if callObserver.onCall() {
        return
    }
    inputNode = audioEngine.inputNode
    audioEngine.attach(audioMixer)
    audioEngine.connect(inputNode, to: audioMixer, format: AVAudioFormat.validInputAudioFormat(inputNode))
}

internal func beginRecordingEngine() {
    audioMixer.removeTap(onBus: 0)
    audioMixer.installTap(onBus: 0, bufferSize: 1024, format: AVAudioFormat.validInputAudioFormat(inputNode)) { [weak self] buffer, _ in
        guard let self = self, let file = self.audioFile else { return }
        write(file, buffer: buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
        recordingTimer = Timer.scheduledTimer(withTimeInterval: recordingInterval, repeats: true) { [weak self] _ in
            self?.handleRecordingInterval()
        }
    } catch {
        debugPrint(error)
    }
}

On the try audioEngine.start() call, I receive error code 561145187 in the catch block.

Logs/Error Messages:
• Error code: 561145187

Request: I would appreciate any guidance or solutions to ensure the app can resume recording after interruptions and background transitions without requiring a restart. Thank you for your assistance.
2
0
413
Aug ’24
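Not from the post above and not a confirmed fix for error 561145187: a sketch of restarting from the interruption notification itself, honoring the shouldResume option, rather than gating session reactivation on applicationState == .active (which never holds while the app is in the background). The audioEngine property name follows the post; assume this runs inside the same recorder class.

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { [weak self] note in
    guard let self,
          let info = note.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        self.audioEngine.pause()
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        guard options.contains(.shouldResume) else { break }
        do {
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            try self.audioEngine.start()
        } catch {
            debugPrint(error)   // Still failing here narrows the problem to activation in the background.
        }
    @unknown default:
        break
    }
}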
Setting Audio Input node for AVAudioEngine causes outside audio to stop
I'm building an app that will allow users to record voice notes. The functionality of all that is working great; I'm now trying to implement changes to the audio session to manage possible audio streams from other apps. I want it so that if there is audio playing from a different app and the user opens my app, that audio keeps playing. When we start recording, any third-party app audio should stop, and it can then resume again when we stop recording. This is my main audio setup code:

private var audioEngine: AVAudioEngine!
private var inputNode: AVAudioInputNode!

func setupAudioEngine() {
    audioEngine = AVAudioEngine()
    inputNode = audioEngine.inputNode
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attach(audioPlayerNode)

    let format = AVAudioFormat(standardFormatWithSampleRate: AUDIO_SESSION_SAMPLE_RATE, channels: 1)
    audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: format)
}

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
        try audioSession.setPreferredSampleRate(AUDIO_SESSION_SAMPLE_RATE)
        try audioSession.setPreferredIOBufferDuration(0.005) // 5ms buffer for lower latency
        try audioSession.setActive(true)

        // Add observers
        setupInterruptionObserver()
    } catch {
        audioErrorMessage = "Failed to set up audio session: \(error)"
    }
}

This is all called upon app startup so we're ready to record whenever the user presses the record button. However, currently when this happens, any outside audio stops playing. I isolated the issue to this line:

inputNode = audioEngine.inputNode

When that's commented out, the audio will play, but I obviously need this for recording functionality. Is this a bug? Expected behavior?
0
0
340
Jul ’24
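A sketch related to the question above, not confirmed as the intended behavior: a non-mixable playAndRecord session with an active input is expected to interrupt other audio, so one approach is to keep the session passive while idle and only touch inputNode when recording actually starts. Method names are illustrative; audioEngine is assumed to be a property as in the post.

func startRecording() throws {
    let session = AVAudioSession.sharedInstance()
    // Switch to a recording category only now; this is the point where other audio is expected to pause.
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)

    let input = audioEngine.inputNode          // First access of inputNode happens here, not at app launch.
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // Handle recorded audio.
    }
    audioEngine.prepare()
    try audioEngine.start()
}

func stopRecording() throws {
    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)
    // Deactivate with notifyOthersOnDeactivation so the interrupted app gets a chance to resume.
    try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    // Back to a mixable category while idle.
    try AVAudioSession.sharedInstance().setCategory(.playback, options: [.mixWithOthers])
}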
Implementing Multi-Channel Audio Recording on iOS with Built-In and External Mics
Hi there community, First and foremost, a big thank you to everyone who takes the time to read this.

TL;DR: How, if even possible, can I record multiple audio streams simultaneously on an iOS application (iPad/iPhone)?

I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.

As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio node at a time, making multi-channel recording impossible (perhaps you can prove me wrong; it would make my day). Next, I transitioned to using AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.

Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:

Listing Available Input Devices: I used AVAudioSession to enumerate all available input devices.
Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
Setting Up Callbacks: I set up input and output callbacks to handle the audio processing.

Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:

let audioUnitCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else { return noErr }

    print("Input callback invoked")

    let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee

    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: 0,
            mData: nil
        )
    )

    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status != noErr {
        print("AudioUnitRender failed: \(status)")
        return status
    }

    // Copy rendered data to output buffer
    let buffer = UnsafeMutableAudioBufferListPointer(ioData)[0]
    buffer.mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
    buffer.mDataByteSize = bufferList.mBuffers.mDataByteSize

    print("Rendered audio data")
    return noErr
}

let outputCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else { return noErr }

    print("Output callback invoked")

    // Process the output data if needed
    return noErr
}

In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated. Thank you once again for your time and assistance!
0
0
306
Jul ’24
How to show VoIP calls on Apple Watch (WatchOS9) with CallKit?
My main app already supports VoIP with CallKit, and now I want to support VoIP in the watch app as well, but I have some questions:
1. How should the watch app handle the audio data and the network connection for VoIP? Can it depend on the iPhone app?
2. How can VoIP with CallKit be used on a watch that is connected only via Bluetooth?
3. If the main app already supports VoIP with CallKit, what is needed to support CallKit on the watch? Do we need to implement CallKit, networking, and audio again, independently, on the watch?
4. How can a third-party app support dialing a number from the watch, the same way it works on the iPhone, where the user can place a call from the system call history?
Any help is appreciated, thanks in advance.
1
0
408
Jul ’24
Help Needed with AVAudioSession in Unity for Consistent Sound Output on iOS Devices
Hello, I hope this message finds you well. I am currently working on a Unity-based iOS application that requires continuous microphone input while also producing sound outputs. For this we need to use iOS echo cancellation, so some sounds need to be played via the iOS layer with echo cancellation, and I am manually setting up the audio session after the app starts, using the .playAndRecord mode of AVAudioSession. However, I am facing an issue where the volume of the sound output is inconsistent across different iOS devices and scenarios. The process is quite simple: for each AudioClip we are about to play via Unity, we copy the buffer data to our iOS Swift layer, which then does all the processing and plays the audio via the native layer. Here are the specific issues I am encountering:

The volume level for the game sound effects fluctuates between a normal audible volume and a very low volume.
The sound output behaves differently depending on whether the app is launched with the device at full volume or on mute, and whether the app is put into the background and brought to the foreground afterwards.
The volume inconsistency affects my game negatively, as it is very hard to hear some audio, regardless of the device or its initial volume state.

I have followed the basic setup for AVAudioSession as per the documentation, but the inconsistencies persist. I'm also aware that Unity uses FMOD to set up the audio routing on iOS; we configure our custom routing after that. We tried tweaking the output volume prior to playing an audio clip so there isn't much discrepancy, and this seems to align the output volume, but there are still some cases where the volume is super low. I've looked into the waveforms in Unity and they all seem consistent; there is no reason why the volume would take a dip.

private var audioPlayer = AVAudioPlayerNode()

@objc public func Play() {
    audioPlayer.volume = AVAudioSession.sharedInstance().outputVolume * 0.25
    audioPlayer.play()
}

We also explored changing the audio session options to see if we had any luck, but unfortunately nothing has changed.

private func ConfigAudioSession() {
    let audioSession = AVAudioSession.sharedInstance();
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker]);
        try audioSession.setMode(.spokenAudio)
        try audioSession.setActive(true);
    } catch {
        // Treat error
    }
}

Could anyone provide guidance or suggest best practices to ensure a stable and consistent volume output in this scenario? Any advice on this issue would be greatly appreciated. Thank you in advance for your help!
0
0
451
Jul ’24
iOS18 web audio lock foucs and not release
In our app, we open an H5 page and use a web player to play the video in the page. When we then play video or audio with the app's own player, it can't get audio focus: the sound plays for a moment and then stops, while the player is still running. Calling setCategory and setActive before each playback in the app doesn't help either. This problem exists in beta 1 through beta 3. What does the WebKit player do that keeps the audio focus locked, and what can the app do to take the focus back?
1
0
386
Jul ’24
Device Volume Changes After Setting AVAudioSession Category
Hi there, I am encountering an issue in my project which utilizes a speech recognizer and occasionally plays audio files. The problem arises when I configure the AVAudioSession and enable voice processing. The system volume changes unexpectedly and becomes uncontrollable. Specifically, the volume is excessively loud on iPhone but quite low on iPad.

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth, .interruptSpokenAudioAndMixWithOthers])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
try audioEngine.inputNode.setVoiceProcessingEnabled(true)
try audioEngine.outputNode.setVoiceProcessingEnabled(true)

I have provided a sample project here: Sample Project. To reproduce the issue, please follow these steps on a real device:
Click on "Play recording" to hear the sound at normal volume.
Click on "Start recording" to set up the category and speech recognizer.
Click on "Stop recording" to stop the recording.
Click on "Play recording" again and observe that the sound volume has changed.
Thank you for your assistance.
0
0
498
Jun ’24
USB microphone with high samplerate and AVAudioEngine
Hello, I can't get my head wrapped around the following problem: I have an external USB microphone capable of sample rates of up to 500 kHz. I want to capture the samples and do analysis and display - no playback required. I cannot find a way to run the microphone at its maximum sample rate; I always get 48 kHz. I would like to stick to AVAudioEngine if possible. Any pointers welcome. Thanks! volker
2
0
634
May ’24
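Not from the thread above, and no guarantee it gets past 48 kHz (the session is free to clamp the rate): the first thing worth checking is asking AVAudioSession for the higher rate before starting the engine, then reading back what was actually granted and what the input node reports.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.record)
    // Ask for the device's native rate; the system may grant something lower.
    try session.setPreferredSampleRate(500_000)
    try session.setActive(true)
} catch {
    print("session setup failed: \(error)")
}
print("granted session rate:", session.sampleRate)

let engine = AVAudioEngine()
let hwFormat = engine.inputNode.inputFormat(forBus: 0)
print("inputNode hardware format:", hwFormat.sampleRate, "Hz,", hwFormat.channelCount, "ch")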
Voice Processing in multiple apps simultaneously
Hi everyone! We are wondering whether it's possible to have two macOS apps use the Voice Processing from Audio Engine at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the input stream from the other, only if it has Voice Processing enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom. We are utilizing the Voice Processing from Audio Engine like in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing We have also noticed this behaviour in Safari recording audios with the Javascript Web Audio API, which also seems to use Voice Processing under the hood due to the Echo Cancellation. Any leads on this would be greatly appreciated! Thanks
0
0
472
May ’24
watchOS: Resume recording from AudioInterruption in background mode
Hi, I have a watchOS app that records audio for an extended period of time and, because the mic is active, continues to record in background mode when the watch face is off. However, when a call comes in or Siri is activated, recording stops because of an audio interruption. Here is my code for setting up the session:

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.overrideMutedMicrophoneInterruption])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("Audio Session error: \(error)")
    }
}

Before this I register an interruption handler that holds a reference to my AudioEngine (which I start and stop each time recording is activated by the user):

_audioInterruptionHandler = AudioInterruptionHandler(audioEngine: _audioEngine)

And here is how this class implements recovery:

fileprivate class AudioInterruptionHandler {
    private let _audioEngine: AVAudioEngine

    public init(audioEngine: AVAudioEngine) {
        _audioEngine = audioEngine

        // Listen to interrupt notifications
        NotificationCenter.default.addObserver(self, selector: #selector(handleAudioInterruption(notification:)), name: AVAudioSession.interruptionNotification, object: nil)
    }

    @objc private func handleAudioInterruption(notification: Notification) {
        guard let userInfo = notification.userInfo,
              let interruptionTypeRawValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
              let interruptionType = AVAudioSession.InterruptionType(rawValue: interruptionTypeRawValue) else {
            return
        }

        switch interruptionType {
        case .began:
            print("[AudioInterruptionHandler] Interruption began")
        case .ended:
            print("[AudioInterruptionHandler] Interruption ended")
            print("Interruption ended")
            do {
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("[AudioInterruptionHandler] Error resuming audio session: \(error.localizedDescription)")
            }
        default:
            print("[AudioInterruptionHandler] Unknown interruption: \(interruptionType.rawValue)")
        }
    }
}

Unfortunately, it fails with:

Error resuming audio session: Session activation failed

Is this even possible to do on watchOS? This code worked for me on iOS.

Thank you,
-- B.
2
0
649
Apr ’24