Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio

iOS 18.0 bug - initial AVAudioSession.outputVolume returns zero
Is anyone experiencing an issue with the initial AVAudioSession.sharedInstance().outputVolume returning 0 after updating to iOS 18.0? I'm observing the outputVolume of AVAudioSession, and when I adjust the device volume, the changed value is reported correctly. However, when I call AVAudioSession.sharedInstance().outputVolume to get the current volume before any change has been reported, it returns 0, even though the device volume is not actually 0.
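
A minimal sketch of the observation setup being described, with hypothetical type and method names, assuming the session is activated before the first read (one workaround sometimes suggested for an initial 0 reading):

import AVFoundation

final class OutputVolumeWatcher {
    private var observation: NSKeyValueObservation?
    private let session = AVAudioSession.sharedInstance()

    func start() throws {
        // Activate the session before the first outputVolume read.
        try session.setCategory(.playback)
        try session.setActive(true)

        print("Initial volume: \(session.outputVolume)")

        // outputVolume is key-value observable, so subsequent changes arrive here.
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            if let newValue = change.newValue {
                print("Volume changed to \(newValue)")
            }
        }
    }
}
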
Replies: 1 · Boosts: 1 · Views: 275 · Activity: Sep ’24

iOS: I need to convert CMSampleBuffer Linear PCM audio to MPEG-4 AAC
Hello, I am a deaf-blind wheelchair user, and I program in Swift using a braille display. I’m reaching out for your help on an issue I’ve been struggling to solve. Basically, when I extract a CMSampleBuffer from an AVAsset of a video, it comes with the Audio Format ID as Linear PCM. However, when I try to pass this CMSampleBuffer to write another video using AVAssetWriter, the video ends up muted. The audio settings of the output video are configured to MPEG-4 AAC, but the input CMSampleBuffer has the Audio Format ID as Linear PCM. I would like to request an extension for CMSampleBuffer that converts Linear PCM audio to MPEG-4 AAC. I’ve searched extensively and couldn’t find anything. Looking forward to your help. Thank you.
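
One possible direction, sketched under the assumption that an AAC-encoded audio track in the output file is the real goal (the helper name below is mine, not an existing API): when an AVAssetWriterInput is created with MPEG-4 AAC output settings, AVAssetWriter performs the Linear PCM to AAC encoding itself as buffers are appended, so the CMSampleBuffer does not need to be converted beforehand.

import AVFoundation

// Hypothetical helper: builds an audio writer input that encodes incoming
// Linear PCM sample buffers to MPEG-4 AAC during writing.
func makeAACWriterInput(sampleRate: Double = 44_100, channels: Int = 2) -> AVAssetWriterInput {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: sampleRate,
        AVNumberOfChannelsKey: channels,
        AVEncoderBitRateKey: 128_000
    ]
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    input.expectsMediaDataInRealTime = false
    return input
}
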
Replies: 1 · Boosts: 0 · Views: 235 · Activity: Sep ’24

How to get Song Information From Queue?
I have a recent post outlining a similar question here. This time, though, I'm confident that inserting an array of Track into ApplicationMusicPlayer.shared.queue works, but now I'm not sure how to initialize the queue so it displays, for example, the song title and artwork. I'm also not sure how to get the current queue item's artist and album information, which I feel should be easy to do, so maybe I'm missing something obvious. I hope this paints a picture of what I'm trying to do; I'm posting the necessary code here to help debug/figure out this problem.

import SwiftUI
import MusicKit

struct PlayBackView: View {
    @Environment(\.scenePhase) var scenePhase
    @Environment(\.openURL) private var openURL

    // Adding Enum Here for Question Sake
    enum PlayState {
        case play
        case pause
    }

    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var songTimer: Int = Int.random(in: 5...30)
    @State private var roundTimer: Int = 5
    @State private var isTimerActive = false
    // @State private var volumeValue = VolumeObserver()
    @State private var isFirstPlay = true
    @State private var isDancing = false
    @State private var player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var playPauseImage: String {
        switch playState {
        case .play:
            "pause.fill"
        case .pause:
            "play.fill"
        }
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    /* This is where I want to display song title, album title, and artist name for example */

                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Unable to Find Song Title")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // Album Title
                    // Text(player.queue.currentEntry ?? "Album Title Not Found")
                    //     .font(.caption)
                    //     .fixedSize(horizontal: false, vertical: true)

                    // I don't know what the subtitle actually grabs
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        // Maybe I should use the `.task` modifier here?
        .onAppear {
            // I'm sure this code could be improved but don't think it'll help answer the question at the moment.
            Task {
                if let songs = songs {
                    do {
                        if isShuffled {
                            let shuffledSongs = songs.shuffled()
                            try await player.queue.insert(shuffledSongs, position: .tail)
                            handlePlayButton()
                        } else {
                            try await player.queue.insert(songs, position: .tail)
                        }
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            }
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
                isTimerActive = false
            } else {
                playState = .play
                await playTrack()
                isTimerActive = true
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }
}

// #Preview {
//     PlayBackView()
// }
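
A minimal sketch of reading the current entry's metadata, assuming the queue entries wrap Song or MusicVideo items; the helper below is hypothetical and not part of the post's code:

import MusicKit

// Hypothetical helper: switch on the queue entry's underlying item to reach
// artist and album information for the currently playing entry.
func currentSongInfo(from player: ApplicationMusicPlayer) -> (title: String, artist: String, album: String)? {
    guard let entry = player.queue.currentEntry, let item = entry.item else { return nil }
    switch item {
    case .song(let song):
        return (song.title, song.artistName, song.albumTitle ?? "Unknown Album")
    case .musicVideo(let video):
        return (video.title, video.artistName, video.albumTitle ?? "Unknown Album")
    default:
        return nil
    }
}
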
Replies: 2 · Boosts: 0 · Views: 315 · Activity: Sep ’24

MusicKit & React Native app, overlay part of a song to a video?
I have an app on which users learn choreography. To avoid copyright infringement we currently have only audio instructions and no music in the app. Could we enable users who are subscribed to Apple Music to listen to the part of a song that corresponds to the choreography? Usually these sections are 60 seconds long. The app is in React Native. Would it be possible to implement it so that opening a dance video automatically triggers playback of that song from, e.g., second 32 to 95? Since the video is looping, could it then start playing from second 32 again? I'm also looking for devs with experience integrating MusicKit for this use case, if it turns out to be possible.
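
A rough sketch of what a native module bridged into React Native might do, assuming an Apple Music subscriber, that MusicKit's playbackTime is settable for seeking, and purely illustrative timings of 32-95 seconds; the function and its names are hypothetical:

import Foundation
import MusicKit

// Hypothetical helper: play a 60-second window of a catalog song and loop back
// to `start` whenever `end` is reached, mirroring the looping dance video.
@MainActor
func playSection(of song: Song, start: TimeInterval = 32, end: TimeInterval = 95) async throws {
    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [song])
    try await player.play()
    player.playbackTime = start

    // Poll the playback position and jump back to the start of the section.
    while player.state.playbackStatus == .playing {
        try await Task.sleep(nanoseconds: 500_000_000)
        if player.playbackTime >= end {
            player.playbackTime = start
        }
    }
}
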
Replies: 0 · Boosts: 0 · Views: 226 · Activity: Sep ’24

CarPlay issue after iOS 18 update
After upgrading to iOS 18, CarPlay with a 2023 Lexus and an iPhone 15 Pro Max shows multiple issues:
• speakers reduced to mono sound (going back to normal after some minutes and then reducing again)
• no speaker sound at all
• touching / moving the phone while driving results in the sound cutting in and out
No reboot / shutdown helps. No cable connection works. @Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look at all like a professional approach. Please solve this dangerous (traffic!) and annoying topic ASAP! Thanks - Torsten
Replies: 1 · Boosts: 0 · Views: 437 · Activity: Sep ’24

Silent output MP4 when using AVAssetReader
Hello, I am trying to create an MP4 by obtaining the content from another source MP4. The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter. I wanted to do partial tests: first, I placed only the video in the output MP4. Now, I am trying to place only the audio in the output MP4. I even managed to get the output MP4 to have the same length (in seconds) as the source MP4. But the problem is simple: the output MP4 is simply silent. Naturally, I want it to have audio. Below are two excerpts from the source code. Reading and writing. Note: The variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged. Snippet:

let semaphore = DispatchSemaphore(value: 0)

func writeVideo() {
    var audioReaderBuffers = [CMSampleBuffer]()

    // File directory
    url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4")
    guard let url = url else { return }
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true)
    if FileManager.default.fileExists(atPath: url.path()) {
        try FileManager.default.removeItem(at: url)
    }

    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)
            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String: Any]
            let audioOutputTest = try await audioTrack.getAudioSettings()
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
            reader.add(readerAudioOutput)
            reader.startReading()

            while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() {
                audioReaderBuffers.append(sampleBuffer)
            }
            semaphore.signal()
        }
    }
    semaphore.wait()

    let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0])

    let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
    assetWriter.add(audioInput)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: .zero)

    for (index, buffer) in audioReaderBuffers.enumerated() {
        while !audioInput.isReadyForMoreMediaData {
            usleep(1000)
        }
        audioInput.append(buffer)
    }

    assetWriter.finishWriting {
        switch assetWriter.status {
        case .completed:
            print("Operation completed successfully: \(url.absoluteString)")
        case .failed:
            if let error = assetWriter.error {
                print("Error description: \(error.localizedDescription)")
            } else {
                print("Error not found.")
            }
        default:
            print("Error not found.")
        }
    }
}

Below is the createAudioInput method:

func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput {
    let audioSettings = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 48000,
        AVEncoderBitRatePerChannelKey: 64000,
        AVNumberOfChannelsKey: 1
    ] as [String: Any]
    let audioInput = AVAssetWriterInput(mediaType: .audio,
                                        outputSettings: audioSettings,
                                        sourceFormatHint: sampleBuffer.formatDescription)
    audioInput.expectsMediaDataInRealTime = false
    return audioInput
}

I await your help, please.
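
A small verification sketch (my own addition, not from the post): after finishWriting completes, checking whether the output file actually contains an audio track and what duration it reports can help narrow down whether the writer dropped the appended samples or the track is present but silent.

import AVFoundation

// Hypothetical helper: inspect the finished output file.
func inspectOutput(at url: URL) async throws {
    let asset = AVAsset(url: url)
    let audioTracks = try await asset.loadTracks(withMediaType: .audio)
    let duration = try await asset.load(.duration)
    print("Audio tracks: \(audioTracks.count), duration: \(CMTimeGetSeconds(duration)) s")
}
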
Replies: 0 · Boosts: 0 · Views: 270 · Activity: Sep ’24

tvOS 18.0 update harmed passthrough of multichannel audio for previously compatible hardware
tvOS 18 doesn't provide passthrough of multichannel audio for streaming apps offering content where it is promoted as available. This is true for devices on which the functionality existed before the 18.0 tvOS update. What's more, the 18.1 public beta did not provide a resolution for the issue. All streaming apps appear to be affected. Notably, Home Sharing does not appear to be affected and continues to provide multichannel audio as it did before the 18.0 update.
Replies: 1 · Boosts: 2 · Views: 418 · Activity: Sep ’24

How To Play Audio Through Headphones on WatchOS 11?
I have an app that plays audio, and its behaviour has changed in watchOS 11: I can no longer figure out how to play the audio through headphones. To play audio I do:

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
let activated = try await session.activate()
if activated {
    // play audio
}

In previous versions, try await session.activate() would bring up a route picker where the user could select their headphones. Now on watchOS 11 it just plays the audio out of the speaker. Maybe that's what some people want, but if they do want it to play out of the headphones, I can't see how I can give them that option now. There's no AVRoutePickerView available on watchOS for selecting it.

I've tried setting the category to .multiRoute instead of .playback, and that does bring up the picker, but selecting the speaker results in an error code and selecting the headphones results in it saying it cannot find my headphones (which shouldn't be the case, since Apple Music on watchOS finds them).

I also tried overriding the output with try session.overrideOutputAudioPort(.speaker), but the compiler complains that .speaker isn't available on watchOS, which is strange because, if I understand correctly, it's now possible to play through the speaker on at least some Apple Watches.

So is this a bug, or is there some way I've not found of playing audio through the headphones?
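
For reference, a self-contained sketch of the long-form activation flow described above, using the completion-handler form of activate(options:completionHandler:); whether watchOS 11 still presents the route picker from this call is exactly the open question here, and the function name is hypothetical:

import AVFoundation

// Hypothetical helper: activate the session with the long-form audio policy,
// then start playback only if activation succeeded.
func activateForHeadphonePlayback(then play: @escaping () -> Void) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    } catch {
        print("Failed to set category: \(error.localizedDescription)")
        return
    }
    session.activate(options: []) { success, error in
        if success {
            play()
        } else if let error {
            print("Activation failed: \(error.localizedDescription)")
        }
    }
}
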
Replies: 1 · Boosts: 0 · Views: 276 · Activity: Sep ’24

Implementing Enhanced Dialogue on tvOS 18
How does a third-party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18? If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
Replies: 0 · Boosts: 0 · Views: 293 · Activity: Sep ’24

Using non-modular libraries in an audio-unit
I've got a bunch of Audio Units I've been developing over the last few months - in Xcode 14, based on the Audio Unit template that ships in Xcode. The DSP heavy lifting is all done in Rust libraries, which I build for Intel and Apple Silicon, merge using lipo, and build XCFrameworks from. There are several of these - one with the DSP code, and several others used from the UI - a mix of SwiftUI and Cocoa. Getting the integration of Rust, Objective-C, C++ and Swift working and automated took a few weeks (my first foray into C++ since the 1990s), but it has been solid, reliable and working well - everything loads in Logic Pro and GarageBand and works.

But now I'm attempting the (woefully underdocumented) process of making 13 audio unit projects able to be loaded in-process - which means moving all of the actual code into a framework. And therein lies the problem: none of the xcframeworks are modular, which leads to the dreaded "Include of non-modular header inside framework module". Imported directly into the app extension project with "Allow Non-modular Includes in Framework Modules", this works fine - but in a framework it seems to be verboten.

So the obvious solution would be to add a module map to each xcframework, and then, poof, they're modular. BUT... due to a peculiar limitation of Xcode's build system I've spent days searching for a workaround for, it is only possible to have ONE xcframework containing a module.modulemap file in any project. More than that and xcodebuild will try to clobber them with each other and the build will fail. And there appears to be NO WAY to name the file anything other than module.modulemap inside an xcframework and have it be detected. So I cannot modularize my frameworks, because there is more than one of them.

How the heck does one work around this? A custom module map file somewhere (that the build should find and understand applies to four xcframeworks - how?)? Something else? I've seen one dreadful workaround - https://medium.com/@florentmorin/integrating-swift-framework-with-non-modular-libraries-d18098049e18 - given that I'm generating a lot of the C and Objective-C code for the audio in Rust, I suppose I could write a tool that parses the header files and generates Objective-C code that imports each framework and declares one method for every single Rust call. But it seems to me there has to be a correct way to do this without jumping through such hoops. Thoughts?
Replies: 2 · Boosts: 0 · Views: 248 · Activity: Sep ’24

ExtAudioFileRead throwing AVAudioSessionErrorCodeResourceNotAvailable error on iOS and iPadOS 18
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release. The following is also printed to the Xcode 16 console:
Replies: 2 · Boosts: 1 · Views: 360 · Activity: Sep ’24

Anyone having issues with voice control and high-quality visuals on the Vision Pro?
We are working on an app for the Vision Pro which has a high polygon count and lots of high-resolution textures. Everything looks smooth and, in general, very good. The issue is that the moment we turn on Voice Control, even if it is not being used, the visuals at the center start to stutter left to right. Has anyone seen this? It must be a bug; is there any workaround? Thanks, Guillermo
Replies: 1 · Boosts: 0 · Views: 267 · Activity: Sep ’24

While AirPlaying, MPVolumeView snaps to different volume when I play my app media
I have an app that plays sound files stored locally. I'm using a single SwiftUI view with an MPVolumeView so the user can control system volume from the player in my app. When I'm playing the sound file on the iPhone, my volume slider operates as expected. When I AirPlay to my Apple TV, the volume slider still works to control the volume, but when I hit play in my app, the slider snaps to a different value, while the actual sound volume doesn't change. Control still works. Flipping to Control Center, I see a volume mismatch between the system volume and the MPVolumeView. Here's the code that I use to put the slider in my app:

struct VolumeSlider: UIViewRepresentable {
    func makeUIView(context: Context) -> MPVolumeView {
        let vv = MPVolumeView(frame: .zero)
        vv.showsVolumeSlider = true
        vv.setVolumeThumbImage(UIImage(), for: UIControl.State.normal)
        return vv
    }

    func updateUIView(_ uiView: MPVolumeView, context: Context) {
        // No need to update the view in this case
    }
}

I'm using AVFoundation and AVAudioPlayer to play back the sound file. I'm using MediaPlayer to tell MPNowPlayingInfoCenter the track info and album art. Audio control via Control Center works perfectly. It does the same if I target iOS 16 or 17. Is this a bug with MPVolumeView or with the way I added it to the app?
Replies: 2 · Boosts: 0 · Views: 285 · Activity: Sep ’24

Sound quality
I am a musician/DJ. I jumped from a 14 Pro Max on iOS 17 to a 16 Pro Max on iOS 18.1 b4. For each audio source (Music app, YT Music, etc.) I compared the same track/EQ/volume side by side. With the new device, overall it's a bit muffled, and the music gets damped when highs and lows are mixed. It's most noticeable when listening to high vocals and acoustic instruments. Drum and bass sound much like on an old Nokia. On the 14 Pro it's nothing like that. Thank you
Replies: 1 · Boosts: 0 · Views: 226 · Activity: Sep ’24

How to fetch all albums after performing a library request
Hi, in my app I am using MusicLibraryRequest<Artist> to fetch all of the artists in someone's library collection. With this response I then fetch each artist's albums: artist.with([.albums]). The response from this only gives albums in the user's library collection. I would like to augment it with all of the albums for an artist from the full catalogue. I'm using MusicKit and targeting iOS 18 and visionOS 2. Could someone please point me towards the best way to approach this?
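
A rough sketch of one possible approach, assuming a catalog search by artist name is an acceptable way to resolve the library artist (the helper name and the first-match choice are simplifying assumptions):

import MusicKit

// Hypothetical helper: find the catalog artist matching a library artist's name,
// then load that catalog artist's full album list.
func catalogAlbums(forArtistNamed name: String) async throws -> [Album] {
    var request = MusicCatalogSearchRequest(term: name, types: [Artist.self])
    request.limit = 1
    let response = try await request.response()

    guard let catalogArtist = response.artists.first else { return [] }
    let detailedArtist = try await catalogArtist.with([.albums])
    if let albums = detailedArtist.albums {
        return Array(albums)
    }
    return []
}
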
Replies: 1 · Boosts: 0 · Views: 286 · Activity: Sep ’24

How best to handle AirPods audio glitches in Game Mode?
Hello! The new lower-latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency.

My use case is macOS, where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in Game Mode, but I understand not wanting to give developers that control.

Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens.

BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs.

And here are a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Is there any additional technical detail about how this latency reduction works, or exactly how much of a reduction is achieved (or, said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
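
On the final question, a sketch of reading kAudioDevicePropertyLatency with Core Audio; note that this selector reports the device latency in frames, and that stream latency and the safety offset are reported by separate properties, which may be part of why the single value appears constant. The function name is mine:

import CoreAudio

// Hypothetical helper: read the output-scope device latency, in frames.
func deviceLatencyFrames(for deviceID: AudioDeviceID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyLatency,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain
    )
    var latency: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &latency)
    return status == noErr ? latency : nil
}
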
Replies: 1 · Boosts: 0 · Views: 360 · Activity: Sep ’24
