I tried to play music on my iPhone, but it keeps skipping over all of the songs and never plays any music.
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
I'm building an app that will allow users to record voice notes. That functionality is all working great; I'm now trying to change the AVAudioSession configuration to manage possible audio streams from other apps. I want it so that if audio is playing from a different app and the user opens my app, that audio keeps playing. When we start recording, any third-party app audio should stop, and it can then resume again when we stop recording.
This is my main audio setup code:
private var audioEngine: AVAudioEngine!
private var inputNode: AVAudioInputNode!
private var audioPlayerNode: AVAudioPlayerNode!

func setupAudioEngine() {
    audioEngine = AVAudioEngine()
    inputNode = audioEngine.inputNode
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attach(audioPlayerNode)

    // Mono format at the session's preferred sample rate.
    let format = AVAudioFormat(standardFormatWithSampleRate: AUDIO_SESSION_SAMPLE_RATE, channels: 1)
    audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: format)
}

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
        try audioSession.setPreferredSampleRate(AUDIO_SESSION_SAMPLE_RATE)
        try audioSession.setPreferredIOBufferDuration(0.005) // 5 ms buffer for lower latency
        try audioSession.setActive(true)

        // Add observers
        setupInterruptionObserver()
    } catch {
        audioErrorMessage = "Failed to set up audio session: \(error)"
    }
}
This is all called upon app startup so we're ready to record whenever the user presses the record button.
However, currently when this happens, any outside audio stops playing.
I isolated the issue to this line: inputNode = audioEngine.inputNode
When that's commented out, the audio will play -- but I obviously need this for recording functionality.
Is this a bug? Expected behavior?
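One direction I'm experimenting with (just a sketch, untested, so treat the option choices as my assumptions) is to keep the session in a mixable playback configuration at launch, and defer any access to audioEngine.inputNode and the switch to .playAndRecord until the user actually starts recording:

// Sketch only (untested): keep other apps' audio alive until recording starts.
// At launch: a mixable, playback-style configuration.
private func setupMixableSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        audioErrorMessage = "Failed to set up mixable session: \(error)"
    }
}

// Called only when the user taps record: switch to playAndRecord, then touch inputNode.
private func beginRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)

    audioEngine = AVAudioEngine()
    inputNode = audioEngine.inputNode   // deferred until recording actually begins
    // ... attach the player node, install a tap, start the engine ...
}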
I'm having issues with volume after installing iOS 18 beta 4. The volume control in Control Centre is turned up to full volume and is disabled. It only becomes enabled after playing music; as soon as I pause the music it is disabled again and my iPhone goes mute. I'm also having an issue when playing music on AirPods: as soon as I turn my screen on, the music pauses. It happens every time I do this.
Hi there community,
First and foremost, a big thank you to everyone who takes the time to read this.
TL;DR: How, if even possible, can I record multiple audio streams simultaneously on an iOS application (iPad/iPhone)?
I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.
As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio input at a time, making multi-channel recording impossible (perhaps you can prove me wrong; that would make my day). Next, I transitioned to AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.
Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:
Listing Available Input Devices: I used AVAudioSession to enumerate all available input devices (a sketch of this step is shown after this list).
Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
Setting Up Callbacks: I set up input and output callbacks to handle the audio processing.
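As a rough illustration of the first step, this is the kind of enumeration I mean (sketch only; the printed fields are just the ones I found useful):

// Sketch of the device-enumeration step; the printed fields are illustrative.
import AVFoundation

func listAvailableInputs() {
    let session = AVAudioSession.sharedInstance()
    guard let inputs = session.availableInputs else {
        print("No inputs available")
        return
    }
    for input in inputs {
        // USB interfaces such as the Scarlett appear as USB audio ports;
        // the built-in mic reports the port type MicrophoneBuiltIn.
        print("Port: \(input.portName), type: \(input.portType.rawValue), UID: \(input.uid)")
        for source in input.dataSources ?? [] {
            print("  Data source: \(source.dataSourceName)")
        }
    }
}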
Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:
let audioUnitCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }
    print("Input callback invoked")

    let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee
    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: 0,
            mData: nil
        )
    )
    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status != noErr {
        print("AudioUnitRender failed: \(status)")
        return status
    }

    // Copy the rendered data to the output buffer
    let outputBuffers = UnsafeMutableAudioBufferListPointer(ioData)
    outputBuffers[0].mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
    outputBuffers[0].mDataByteSize = bufferList.mBuffers.mDataByteSize
    print("Rendered audio data")
    return noErr
}

let outputCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }
    print("Output callback invoked")
    // Process the output data if needed
    return noErr
}
In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated.
Thank you once again for your time and assistance!
Some iOS apps (depending on their signature or bundle ID) receive the AVAudioSessionInterruptionTypeBegan notification when headphones are disconnected, but never receive the corresponding AVAudioSessionInterruptionTypeEnded notification. Not every bundle ID exhibits this issue.
May I ask why different bundle IDs produce this difference, and what settings bound to a bundle ID affect the delivery of AVAudioSessionInterruptionType notifications?
Hello, I am building a new iOS app that uses AVSpeechSynthesizer and should mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession active on its own, but it does not deactivate the audio session afterwards. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking.
I have implemented deactivating the audio session myself, which "works" in that it allows other audio sources to become un-ducked, but it throws this exception each time, even though the deactivation appears to succeed:
Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
It appears to be a bug in how AVSpeechSynthesizer handles activating and deactivating the audio session.
Below is a minimal example that illustrates the problem. It has two buttons: one manually deactivates the audio session, which throws the exception but otherwise works; the other leaves audio session management to the AVSpeechSynthesizer but never un-ducks other audio.
If you play some audio from another app (for example, Music), you'll see that the button which throws and catches the exception successfully ducks and un-ducks the audio, while the one that does not attempt to deactivate the session ducks but never un-ducks it.
import SwiftUI
import AVFoundation
struct ContentView: View {
    let workingSynthesizer = UnduckingSpeechSynthesizer()
    let brokenSynthesizer = BrokenSpeechSynthesizer()

    init() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        } catch {
            print("Setup error info: \(error)")
        }
    }

    var body: some View {
        VStack {
            Button("Works Correctly") {
                workingSynthesizer.speak(text: "Hello planet")
            }
            Text("-------")
            Button("Does not work") {
                brokenSynthesizer.speak(text: "Hello planet")
            }
        }
        .padding()
    }
}

class UnduckingSpeechSynthesizer: NSObject {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}

extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        do {
            try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            // always throws an error:
            // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
            print("Deactivate error info: \(error)")
        }
    }
}

class BrokenSpeechSynthesizer {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}
(I have a separate issue where the first speech attempt takes a few seconds but I don't think it's related)
After integrating MusicKit, I have an issue with the watchdog. The crash log points to this stack trace:
ProcessState: Running
WatchdogEvent: scene-update
WatchdogVisibility: Background
WatchdogCPUStatistics: (
"Elapsed total CPU time (seconds): 72.560 (user 49.970, system 22.590), 39% CPU",
"Elapsed application CPU time (seconds): 11.270, 6% CPU"
) reportType:CrashLog maxTerminationResistance:Interactive>
Triggered by Thread: 0
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x1dfa74808 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1dfa78008 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x1dfa77f20 mach_msg_overwrite + 436
3 libsystem_kernel.dylib 0x1dfa77d60 mach_msg + 24
4 libdispatch.dylib 0x19e884b18 _dispatch_mach_send_and_wait_for_reply + 544
5 libdispatch.dylib 0x19e884eb8 dispatch_mach_send_with_result_and_wait_for_reply + 60
6 libxpc.dylib 0x1f386bac8 xpc_connection_send_message_with_reply_sync + 264
7 Foundation 0x195853998 __NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ + 16
8 Foundation 0x195850004 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] + 2160
9 Foundation 0x1958c820c -[NSXPCConnection _sendSelector:withProxy:arg1:] + 116
10 Foundation 0x1958c7e80 _NSXPCDistantObjectSimpleMessageSend1 + 60
11 MediaPlayer 0x1a8c0ff24 -[MPMusicPlayerController _validateServer] + 128
12 MediaPlayer 0x1a8c3f4f8 -[MPMusicPlayerApplicationController _establishConnectionIfNeeded] + 2144
13 MediaPlayer 0x1a8c0fbb8 -[MPMusicPlayerController onServer:] + 52
14 MediaPlayer 0x1a8c0ec94 -[MPMusicPlayerController _nowPlaying] + 372
15 MediaPlayer 0x1a8c161a4 -[MPMusicPlayerController nowPlayingItem] + 24
16 MusicKit 0x213253e78 -[MusicKit_SoftLinking_MPMusicPlayerController nowPlayingItem] + 24
17 MusicKit 0x2136ec1bc 0x2131b9000 + 5452220
18 MusicKit 0x2136ec70c 0x2131b9000 + 5453580
19 MusicKit 0x2136ed839 0x2131b9000 + 5457977
20 MusicKit 0x213221c65 0x2131b9000 + 429157
21 MusicKit 0x21354b741 0x2131b9000 + 3745601
22 libswift_Concurrency.dylib 0x1a1d0e775 completeTaskWithClosure(swift::AsyncContext*, swift::SwiftError*) + 1
According to the log, the app is in the background and the stack trace contains only MusicKit frames. How can we disable or avoid this activity so that we avoid the watchdog issue?
We are a music app, and we have encountered a scenario in which there is no way to resume playing music, so I would like to ask about the technical approach for achieving it.
For example, when another app plays a video, we pause our music; when the video is closed, we should resume playing the music.
Our code listens for AVAudioSessionInterruptionNotification. When we receive the notification and it includes AVAudioSessionInterruptionOptionShouldResume, we try to play music again, but Error 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers) is reported. We are very confused.
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:&error];
[audioSession setActive:YES error:&error];
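In Swift terms, the flow we are attempting looks roughly like this (a sketch of the handling described above, not our exact production code):

// Sketch of the interruption-ended handling described above (illustrative only).
NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                       object: AVAudioSession.sharedInstance(),
                                       queue: .main) { notification in
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue),
          type == .ended,
          let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }

    let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
    if options.contains(.shouldResume) {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
            try AVAudioSession.sharedInstance().setActive(true)
            // Resume our player here -- this is where we hit
            // AVAudioSessionErrorCodeCannotInterruptOthers (560557684).
        } catch {
            print("Resume failed: \(error)")
        }
    }
}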
We compared with the Apple Music app and found that Apple Music can resume playing.
Here is a video of the behavior in our app:
https://drive.google.com/file/d/1J94S2kxkEpNvG536yzCnKmE7IN3cGzIJ/view?usp=sharing
Here is a video of the Apple Music behavior:
https://drive.google.com/file/d/1c1Kdgkn2nhy8SdDvRJAFF2sPvqJ8fL48/view?usp=sharing
We want to improve our user experience. How can we do that?
I am looking for a way to know how much of the text is remaining (i.e., a progress bar) when synthesizer.speak is called. I looked at this but it does not seem to provide any progress. Is there any way to get the progress?
I have this code:
class SpeechSynthesizerDelegate: NSObject, AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("Speech finished.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        print("Speech canceled.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        print("Speech started.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didPause utterance: AVSpeechUtterance) {
        print("Speech paused.")
        ...
that I try to use like this
let synthesizer = AVSpeechSynthesizer()
let delegate = SpeechSynthesizerDelegate()
synthesizer.delegate = delegate
but when I call
synthesizer.speak(utterance)
the delegate methods are not being called. I am running this on macOS Ventura. How can I fix this?
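For the progress part of the question, the delegate callback I am now looking at is speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:), which reports the character range about to be spoken. A rough sketch of deriving a fraction from it (untested, and it assumes the delegate is actually being called):

// Rough sketch (untested): estimate progress from the range about to be spoken.
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                       willSpeakRangeOfSpeechString characterRange: NSRange,
                       utterance: AVSpeechUtterance) {
    let total = utterance.speechString.utf16.count   // NSRange counts UTF-16 units
    guard total > 0 else { return }
    let progress = Double(characterRange.location + characterRange.length) / Double(total)
    print("Approximate progress: \(Int(progress * 100))%")
}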
I am running the code sample from https://developer.apple.com/documentation/avfoundation/speech_synthesis/ in a REPL on macOS Ventura.
import AVFoundation
// Create an utterance.
let utterance = AVSpeechUtterance(string: "The quick brown fox jumped over the lazy dog.")
// Configure the utterance.
utterance.rate = 0.57
utterance.pitchMultiplier = 0.8
utterance.postUtteranceDelay = 0.2
utterance.volume = 0.8
// Retrieve the British English voice.
let voice = AVSpeechSynthesisVoice(language: "en-GB")
// Assign the voice to the utterance.
utterance.voice = voice
// Create a speech synthesizer.
let synthesizer = AVSpeechSynthesizer()
// Tell the synthesizer to speak the utterance.
synthesizer.speak(utterance)
It runs without errors, but I don't hear any sound, and the call to
synthesizer.speak
returns immediately. How can I fix this? Note that I am running in a REPL, so synthesizer is not going out of scope and being deallocated.
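One thing I plan to try (just a guess on my part, assuming the problem is that speak(_:) is asynchronous and nothing keeps the run loop turning) is to pump the run loop until the delegate reports the utterance finished:

// Sketch (assumption: speech needs a running run loop to play and to call the delegate).
import AVFoundation

final class Waiter: NSObject, AVSpeechSynthesizerDelegate {
    var finished = false
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        finished = true
    }
}

let waiter = Waiter()
synthesizer.delegate = waiter
synthesizer.speak(utterance)

// Keep pumping the run loop until speech completes.
while !waiter.finished {
    RunLoop.current.run(until: Date().addingTimeInterval(0.1))
}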
Hi,
I'm developing an iOS app that accepts an audio signal as input, with the goal of analyzing the signal.
For my experiments I purchased a cheap ADC-DAC made by Sabrent.
It works well, but its sampling rate is 44.1 kHz and I need at least 96 kHz.
I'm looking around, but I mostly find DACs intended for driving headphones.
Can any of you suggest an ADC-DAC, preferably not too expensive, with a sampling rate of at least 96 kHz that works with iPhones?
Are there any plans to give developers access to the iPhone 15 series' 24MP photo capture? I wonder whether third-party apps can support it, or only the built-in Camera app.
Music app stops playing when switching to the background
In our app, which plays music files, if you move to the home screen or switch to another app while the app is running, music playback stops.
Our app does not contain any code that stops playback when it moves to the background.
We are guessing that some people experience this and others do not.
We usually guide users to reboot their devices and try again.
How can this phenomenon be fixed in code?
Or is this a bug or error in the OS?
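For context, this is the standard background-audio configuration as I understand it (a sketch of the prerequisites, not a guaranteed fix): the UIBackgroundModes array in Info.plist must include audio, and the session must use the playback category and be active before the app moves to the background.

// Sketch of the usual background-audio prerequisites
// (in addition to the "audio" entry under UIBackgroundModes in Info.plist).
import AVFoundation

func configureForBackgroundPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}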
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-centered friendship and dating app. I have gone as far as purchasing the book "Exploring MusicKit" by Rudrank Riyam, but to no avail.
Hello Apple Community,
I am developing an iOS app and would like to add a feature that allows users to play and organize Audible.com files within the app. Does Audible or the App Store provide any API or SDK for third-party apps to access and manage Audible content? If so, could you please provide some guidance on how to integrate it into my app?
Thank you for your assistance!
Best regards,
Yes it labs
Hello,
I hope this message finds you well. I am currently working on a Unity-based iOS application that requires continuous microphone input while also producing sound output. For this we need iOS echo cancellation, so some sounds need to be played through the iOS layer with echo cancellation applied. I set up the audio session manually after the app starts, using the .playAndRecord category of AVAudioSession. However, I am facing an issue where the volume of the sound output is inconsistent across different iOS devices and scenarios.
The process is quite simple: for each AudioClip we are about to play via Unity, we copy the buffer data to our iOS Swift layer, which then does all the processing and plays the audio via the native layer.
Here are the specific issues I am encountering:
The volume level of the game sound effects fluctuates between a normal, audible level and a very low level.
The sound output behaves differently depending on whether the app is launched with the device at full volume or muted, and on whether the app has been sent to the background and brought back to the foreground.
The volume inconsistency affects my game negatively, as some audio clips are very hard to hear, regardless of the device or its initial volume state. I have followed the basic AVAudioSession setup from the documentation, but the inconsistencies persist.
I'm also aware that Unity uses FMOD to set up the audio routing on iOS; we configure our custom routing after that.
We tried adjusting the output volume just before playing each clip so there isn't much discrepancy. This seems to align the output volume, but there are still places where the volume is extremely low. I've looked at the waveforms in Unity and they all seem consistent, so there is no obvious reason why the volume would dip.
private var audioPlayer = AVAudioPlayerNode()

@objc public func Play() {
    audioPlayer.volume = AVAudioSession.sharedInstance().outputVolume * 0.25
    audioPlayer.play()
}
We also explored changing the audio session options to see if we had any luck but unfortunately nothing has changed.
private func ConfigAudioSession() {
    let audioSession = AVAudioSession.sharedInstance();
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker]);
        try audioSession.setMode(.spokenAudio)
        try audioSession.setActive(true);
    }
    catch {
        // Treat error
    }
}
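One more idea on our list (a sketch, based on the assumption that AVAudioSession.outputVolume can be tracked with key-value observing) is to mirror hardware volume changes onto the player node continuously, instead of sampling outputVolume only inside Play():

// Sketch: observe hardware volume changes via KVO and mirror them on the player node.
private var volumeObservation: NSKeyValueObservation?

private func observeOutputVolume() {
    let session = AVAudioSession.sharedInstance()
    volumeObservation = session.observe(\.outputVolume, options: [.initial, .new]) { [weak self] _, change in
        guard let newVolume = change.newValue else { return }
        self?.audioPlayer.volume = newVolume * 0.25
    }
}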
Could anyone provide guidance or suggest best practices to ensure a stable and consistent volume output in this scenario? Any advice on this issue would be greatly appreciated.
Thank you in advance for your help!
I have an iPad Pro 12.9". I am looking to make an app that can record audio from two different microphones simultaneously. I want to be able to specify which of the five built-in microphones each audio stream should use; ideally one should be the microphone on the left side of the iPad, and the other should be one of the mics at the top of the iPad. Is this possible to achieve with the API?
The end goal here is to be able to use the two audio streams and do some DSP on the recordings to determine the approximate direction a particular sound comes from.
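For what it's worth, the closest API I have found so far only selects a single built-in data source at a time; whether two independent streams can be captured at once is exactly my question. Still, here is a sketch (untested) of selecting a specific built-in mic by its orientation:

// Sketch (untested): pick a specific built-in mic data source by orientation.
import AVFoundation

func selectBuiltInMic(orientation: AVAudioSession.Orientation) throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else {
        return
    }
    try session.setPreferredInput(builtInMic)

    // Each built-in mic is exposed as a data source with an orientation (.top, .bottom, .left, ...).
    if let source = builtInMic.dataSources?.first(where: { $0.orientation == orientation }) {
        try builtInMic.setPreferredDataSource(source)
    }
}

// e.g. try selectBuiltInMic(orientation: .top)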
In our app, we open an H5 (web) page and play the video in it with the web player.
Then, when we use the app's own player to play a video or audio file,
we cannot take the audio focus: the sound plays for a moment and then stops, even though the player is still running.
Calling setCategory and setActive before every playback in the app does not help either.
This problem exists in beta 1 through beta 3.
What does the WebKit player do that keeps the audio focus locked, and what can the app do to take the focus back?
I'm a newbie at tvOS and want to know whether it is possible to override some system settings.
In particular, I want to override the audio output device selection.
At the moment (using Apple TV 4K and tvOS 17) you can only select one device (TV, eARC, AirPods), and I need a way to output simultaneously to AirPods and the TV or eARC.
Is such programming possible?