Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post | Replies | Boosts | Views | Activity

iOS 18 arm64 simulator disables audio output with unknown "AudioConverterService" error
Hello, I'm getting an unknown, never-before-seen error at application launch when running my iOS SpriteKit game on the iOS 18 arm64 simulator from Xcode 16.0 (16A242d):

    AudioConverterOOP.cpp:847 Failed to prepare AudioConverterService: -302

This occurs on all iOS 18 simulator devices, between application(_:didFinishLaunchingWithOptions:) and the first applicationDidBecomeActive(_:) — the SKScene object may have already been initialized by SpriteKit, but the scene's didMove(to:) method hasn't been called yet. Also note that the error message is emitted from a secondary (non-main) thread, obviously not created by the app. After the error occurs, no SKScene is able to play audio — this never happened on iOS versions prior to 18, neither on physical devices nor on the simulator. Has anyone seen anything like this on a physical device running 18? Unfortunately, at the moment I cannot test on an 18 device myself, only on the simulator... Thank you, D.
2 replies, 0 boosts, 388 views, Sep ’24
CarPlay audio lost after iOS 18
iPhone 13 mini updated to iOS 18. CarPlay is wired on my 2021 RAM Laramie. After the update, premium audio is lost and I can only hear low-quality audio. When I manually switch the car from USB to Bluetooth, the audio plays through my phone's speaker instead of the truck.
2 replies, 2 boosts, 249 views, Sep ’24
Song releaseDate always nil
I am fetching playlist songs from the user's library and also need the releaseDate (or year) of each song for my use case. However, releaseDate is always nil since I upgraded to Sequoia. I am pretty sure this was working before the upgrade, but I couldn't find any documentation on changes related to this. Furthermore, I noticed the IDs now seem to be the catalog IDs instead of the global ones like i.PkdZbQXsPJ4DX04. Here's in a nutshell what I am doing:

    func fetchSongs(playlist: Playlist) async throws {
        let detailedPlaylist = try await playlist.with([.tracks])
        var currentTracks: MusicItemCollection<Track>? = detailedPlaylist.tracks
        repeat {
            for track in currentTracks! {
                guard case .song(let song) = track else {
                    print("This is not a song")
                    continue
                }
                print(song.releaseDate)
            }
            currentTracks = try await currentTracks?.nextBatch()
        } while currentTracks != nil
    }
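One workaround that may be worth trying, given that the IDs now appear to be catalog IDs: when the library track comes back with a nil releaseDate, fall back to a catalog lookup for the same ID and read the date from there. This is a minimal sketch, not a confirmed fix for the Sequoia behavior; whether the track's ID resolves in the catalog is an assumption.

    import MusicKit

    // Hypothetical fallback: if the library track has no releaseDate,
    // look the song up in the catalog by its ID and read it from there.
    func releaseDate(for track: Track) async throws -> Date? {
        guard case .song(let song) = track else { return nil }
        if let date = song.releaseDate { return date }

        let request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: song.id)
        let response = try await request.response()
        return response.items.first?.releaseDate
    }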
1 reply, 0 boosts, 198 views, Sep ’24
Custom AudioObjectPropertySelector on audio plugins to get the data
I successfully retrieved strings, arrays, and other data through a custom AudioObjectPropertySelector, but I can only return fixed values. Whenever I modify it to use dynamic data, it results in an error. Below is my code:

    case kPlugIn_CustomPropertyID:
    {
        *((CFStringRef*)outData) = CFSTR("qin@@@123");
        *outDataSize = sizeof(CFStringRef);
    }
    break;

    case kPlugIn_ContainDic:
    {
        CFMutableDictionaryRef mutableDic1 = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(mutableDic1, CFSTR("xingming"), CFSTR("qinmu"));
        *((CFDictionaryRef*)outData) = mutableDic1;
        *outDataSize = sizeof(CFPropertyListRef);
        // *((CFPropertyListRef*)outData) = mutableDic;
    }
    break;

    case kPlugIn_ContainArray:
    {
        CFMutableArrayRef mutableArray = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
        CFArrayAppendValue(mutableArray, CFSTR("Hello"));
        CFArrayAppendValue(mutableArray, CFSTR("World"));
        *((CFArrayRef*)outData) = mutableArray;
        *outDataSize = sizeof(CFArrayRef);
    }
    break;

These are fixed returns, and there are no issues when I retrieve the data. When I change the return data in kPlugIn_ContainDic to the following, the first time after restarting the Core Audio service the data is retrieved fine. However, when I attempt to retrieve it again, it results in an error:

    case kPlugIn_ContainDic:
    {
        *outDataSize = sizeof(CFPropertyListRef);
        *((CFPropertyListRef*)outData) = mutableDic;
    }
    break;

Error code:

    HALC_ShellDevice::CreateIOContextDescription: failed to get a description from the server
    HAL_HardwarePlugIn_ObjectGetPropertyData: no object
    HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 560947818 (!obj)

The declaration and usage of mutableDic are as follows:

    static CFMutableDictionaryRef mutableDic;

    static OSStatus BlackHole_Initialize(AudioServerPlugInDriverRef inDriver, AudioServerPlugInHostRef inHost)
    {
        OSStatus theAnswer = 0;
        gPlugIn_Host = inHost;
        if (mutableDic == NULL) {
            mutableDic = CFDictionaryCreateMutable(kCFAllocatorDefault, 100, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        }
    }

    static OSStatus BlackHole_AddDeviceClient(AudioServerPlugInDriverRef inDriver, AudioObjectID inDeviceObjectID, const AudioServerPlugInClientInfo* inClientInfo)
    {
        CFStringRef string = CFStringCreateWithFormat(kCFAllocatorDefault, NULL, CFSTR("%u"), inClientInfo->mClientID);
        CFMutableDictionaryRef dic = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(dic, CFSTR("clientID"), string);
        CFDictionarySetValue(dic, CFSTR("bundleID"), inClientInfo->mBundleID);
        CFDictionarySetValue(mutableDic, string, dic);
    }

Can someone tell me why?
0 replies, 0 boosts, 222 views, Sep ’24
How To Add Multiple Songs/Playlist to the Queue?
A couple of weeks ago I got help here to play one song, and the solution to my problem was that I wasn't adding the song (Track type) to the queue correctly. Now I want to be able to add a playlist's worth of songs to the queue. The problem is that when I try to add an array of the Track type, I get an error. The other part of this issue for me is: how do I access an individual song off of the queue after I add it? I see I can do ApplicationMusicPlayer.shared.queue.currentItem, but I think I'm missing/misunderstanding something here. Anyway, I'll post the code I have to show how I'm attempting to do this at the moment. In this scenario we're getting passed in a playlist from another view.

    import SwiftUI
    import MusicKit

    struct PlayBackView: View {
        @State var song: Track?
        @State private var songs: [Track] = []
        @State var playlist: Playlist
        private let player = ApplicationMusicPlayer.shared

        var body: some View {
            VStack {
                // Album Cover
                HStack(spacing: 20) {
                    if let artwork = player.queue.currentEntry?.artwork {
                        ArtworkImage(artwork, height: 100)
                    } else {
                        Image(systemName: "music.note")
                            .resizable()
                            .frame(width: 100, height: 100)
                    }

                    VStack(alignment: .leading) {
                        // Song Title
                        Text(player.queue.currentEntry?.title ?? "Song Title Not Found")
                            .font(.title)
                            .fixedSize(horizontal: false, vertical: true)
                    }
                }
            }
            .padding()
            .task {
                await loadTracks()

                // It's here I thought I could do something like this
                player.queue = tracks
                // Since I can do this with one singular track
                player.queue = [song]

                do {
                    try await player.queue.insert(songs, position: .afterCurrentEntry)
                } catch {
                    print(error.localizedDescription)
                }
            }
        }

        @MainActor
        private func loadTracks() async {
            do {
                let detailedPlaylist = try await playlist.with([.tracks])
                let tracks = detailedPlaylist.tracks ?? []
                setTracks(tracks)
            } catch {
                print(error.localizedDescription)
            }
        }

        @MainActor
        private func setTracks(_ tracks: MusicItemCollection<Track>) {
            songs = Array(tracks)
        }
    }
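In case it helps, here is a minimal sketch of one way to hand a whole collection of tracks to the player and then step through the queue. It assumes the tracks have already been loaded as in the code above; it is not taken from a confirmed answer in this thread.

    import MusicKit

    // Sketch: build the queue from the loaded tracks, optionally starting at one of them.
    func startPlayback(of tracks: [Track], startingAt first: Track? = nil) async {
        let player = ApplicationMusicPlayer.shared
        if let first {
            player.queue = ApplicationMusicPlayer.Queue(for: tracks, startingAt: first)
        } else {
            player.queue = ApplicationMusicPlayer.Queue(for: tracks)
        }
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
        // Individual entries are then available from the queue itself,
        // e.g. player.queue.currentEntry?.title, or by iterating player.queue.entries.
    }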
1 reply, 0 boosts, 323 views, Sep ’24
MusicKit UPCs changing and handling that
I use Universal Product Codes (UPC) in my app to reliably identify albums, after having used albumIDs for a time. AlbumIDs can change over time for no obvious reason (see here for songIDs), so I switched to UPCs since I believed they cannot change. Well, apparently they can. A few days ago I populated a JSON with UPCs, including 196871067713. Today, performing a MusicCatalogResourceRequest for that UPC does not return anything. Putting that UPC into an Apple Music link redirects to https://music.apple.com/de/album/folge-89-im-geistergarten/1683337782?l=en-GB, so I assume the UPC has changed from 196871067713 to 1683337782. Apple Music can handle that and redirects to the new UPC both in the app and as a website, but a MusicCatalogResourceRequest cannot do that. I filed a suggestion for that (FB15167146) but I need a solution quicker. Can I somehow detect where the URL is redirecting to? Is there a way MusicCatalogResourceRequest can do this? Performing a MusicCatalogSearchRequest could be an option, but it seems unreliable when using the title as the search term. Other ideas? Thank you
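For the redirect question specifically, one option (a sketch, under the assumption that the old web URL still resolves and redirects) is to issue a plain HTTP request and inspect the final URL after redirects; URLSession follows them by default, so the resolved album identifier can be pulled out of the response URL.

    import Foundation

    // Hypothetical helper: resolve an Apple Music URL and return the URL it redirects to.
    func resolvedAlbumURL(for original: URL) async throws -> URL? {
        var request = URLRequest(url: original)
        request.httpMethod = "HEAD"                    // no body needed, only the redirect chain
        let (_, response) = try await URLSession.shared.data(for: request)
        return (response as? HTTPURLResponse)?.url     // final URL after redirects
    }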
1 reply, 1 boost, 295 views, Sep ’24
Audio Muted After Building Unity App to iOS Device – Only Resetting Device Settings Fixes It
Hi, I'm facing an issue with my Unity-based app when deploying it to the AVP. Often, after building and running the app on the device, the audio gets muted. I couldn't find any setting that lets me unmute it. The only solution I've found is to reset the device settings, which makes the audio work again. Here are a few things I've noticed:
- The sound works fine after I reset my device's settings.
- I haven't changed any sound or audio settings on the device before or after deploying the app.
- The issue doesn't always occur immediately, but when it does, resetting settings seems to be the only fix.
Could there be something in the AVP audio configuration that causes this problem? I'd appreciate any advice or suggestions. Thanks!
0 replies, 0 boosts, 140 views, Sep ’24
MusicKit: How to search for a single song by ID
How can I search for a single song by using its song ID? I tried the following code but it's not working:

    MusicCatalogResourceRequest(matching: Song.self, equalTo: song.id.rawValue)

I get the following errors in Xcode:

    Cannot convert value of type 'Song.Type' to expected argument type 'KeyPath<MusicItemType.FilterType, String>'
    Generic parameter 'MusicItemType' could not be inferred

I have the song ID saved as a string inside song, so I just want to grab the full MusicKit MusicItemType from the API. I looked at the documentation but it doesn't make any sense to me. Please help!
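A minimal sketch of the filter-based form the error message is pointing at: the request wants a key path into the song filter type plus a MusicItemID, not the raw string. The function name and the idea of wrapping the stored string are mine, not from the thread.

    import MusicKit

    // Assuming `songIDString` is the saved string ID.
    func fetchSong(withID songIDString: String) async throws -> Song? {
        let request = MusicCatalogResourceRequest<Song>(
            matching: \.id,
            equalTo: MusicItemID(songIDString)
        )
        let response = try await request.response()
        return response.items.first
    }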
1 reply, 0 boosts, 302 views, Sep ’24
AVAudioPlayer init very slow on iOS 18
On Xcode 16 (16A242), app execution and the UI will stall / lag as soon as an AVAudioPlayer is initialized.

    let audioPlayer = try AVAudioPlayer(contentsOf: URL)
    audioPlayer.volume = 1.0
    audioPlayer.delegate = self
    audioPlayer.prepareToPlay()

Typically you would not notice this in a music app, for example, but it is especially noticeable in games where multiple sounds are being played using multiple instances of AVAudioPlayer. The entire app slows down because of it. This is similar to this issue from last year. I have reported it to Apple in FB15144369, as this messes up my production games where fps drops to nothing when sounds are enabled. Unfortunately I cannot find a solution. Anyone?
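Not a fix for the regression itself, but a commonly used mitigation sketch: create and prepare the players off the main thread ahead of time so the stall does not land in the render/UI loop. Whether this actually avoids the iOS 18 slowdown is an assumption; the class and names are illustrative.

    import AVFoundation

    final class SoundBank {
        private var players: [String: AVAudioPlayer] = [:]

        // Preload on a background queue at launch; only the dictionary write hops back to main.
        func preload(urls: [String: URL]) {
            DispatchQueue.global(qos: .userInitiated).async {
                for (name, url) in urls {
                    guard let player = try? AVAudioPlayer(contentsOf: url) else { continue }
                    player.prepareToPlay()
                    DispatchQueue.main.async { self.players[name] = player }
                }
            }
        }

        func play(_ name: String) {
            players[name]?.play()
        }
    }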
2 replies, 0 boosts, 314 views, Sep ’24
About nullAudio AudioObjectPropertySelector custom attributes
1. I saw the NullAudio custom property static const AudioObjectPropertySelector kPlugIn_CustomPropertyID = 'PCst'; but I don't know how to use it in a project.
2. What is the difference between the PlugIn's and the Device's custom properties?
3. When I try to customize the PropertySelector for the device: after adding kAudioObjectPropertyCustomPropertyInfoList to the NullAudio_HasDeviceProperty method, compiling again, and restarting the Core Audio service, I find that the virtual devices no longer show up.
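On question 1, a sketch of how a client process might read that custom selector once the plug-in publishes it via kAudioObjectPropertyCustomPropertyInfoList. The selector value and the CFString data type mirror the NullAudio sample; obtaining the right AudioObjectID, and everything else here, is illustrative.

    import CoreAudio
    import CoreFoundation

    // 'PCst' as a four-char code, matching kPlugIn_CustomPropertyID in the plug-in.
    let kPlugInCustomPropertyID: AudioObjectPropertySelector = 0x50437374

    // Hypothetical read of the object's custom CFString property.
    // `objectID` is assumed to be the AudioObjectID of the plug-in (or device) that owns the property.
    func readCustomString(from objectID: AudioObjectID) -> String? {
        var address = AudioObjectPropertyAddress(
            mSelector: kPlugInCustomPropertyID,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        var value: CFString? = nil
        var size = UInt32(MemoryLayout<CFString?>.size)
        let status = withUnsafeMutablePointer(to: &value) { ptr in
            AudioObjectGetPropertyData(objectID, &address, 0, nil, &size, ptr)
        }
        guard status == noErr else { return nil }
        return value as String?
    }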
0 replies, 0 boosts, 286 views, Sep ’24
FairPlay crash on 16.X.X OS versions & unable to download file
Here are the crash logs:

    NSInvalidArgumentException - *** -[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse

We observed that a few 16.X.X devices are not able to download audio media content, and if they try multiple times the app crashes with the error above. Please let us know if there are any known issues.
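The exception text suggests the key response type does not match the session's key system. A minimal sketch of the matching pair, assuming a FairPlay Streaming flow; the certificate, asset identifier, and key-server call are placeholders, not from the post.

    import AVFoundation

    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        // Placeholders; in a real app these come from your app certificate and key server.
        let appCertificate = Data()
        let assetIDData = Data()

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            keyRequest.makeStreamingContentKeyRequestData(
                forApp: appCertificate,
                contentIdentifier: assetIDData,
                options: nil
            ) { [weak self] spcData, error in
                guard let spcData, error == nil else { return }
                // Hypothetical key-server exchange; replace with your own.
                let ckcData = self?.fetchCKC(for: spcData) ?? Data()
                // The response type must match the session's key system (.fairPlayStreaming here).
                let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
                keyRequest.processContentKeyResponse(response)
            }
        }

        private func fetchCKC(for spc: Data) -> Data { Data() } // stub
    }

    // The session is created with the same key system as the response above.
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)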
1 reply, 0 boosts, 263 views, Sep ’24
Haptic to Audio (synthesize audio file from haptic patterns?)
Haptics are often represented as audio for presentation purposes by Apple in videos and learning resources. I am curious if:
- Apple has released, or is willing to release, any tools that may have been used to synthesize audio representing haptic patterns (such as in their WWDC19 Audio-Haptic presentation)?
- there are any current tools available that take haptic instructions as input (like AHAP) and output an audio file?
- there is some low-level access to the signal that drives the Taptic Engine, so that it can be repurposed as an audio stream?
- you have any other suggestions!
I can imagine some crude solutions that hack together preexisting synthesizers and fudge together a process to convert AHAP to MIDI instructions, dialing in some synth settings to mimic the behaviour of an actuator, but I'm not too interested in that rabbit hole just yet. Any thoughts? Very curious what the process was for the WWDC videos and audio examples in their documentation... Thank you!
2 replies, 0 boosts, 290 views, Sep ’24
Content items not updating when using MediaPlayer API for CarPlay on iOS18
We are using the MediaPlayer API to provide CarPlay support. Starting in iOS 18 we are having issues updating the content list. The initial list of items will populate on a fresh instance, but soon thereafter an error shows up saying we are not entitled to "com.apple.mediaremote.external-artwork-validation". From that point onwards no changes we make to our MPPlayableContentDataSource are reflected in CarPlay, even after restarting the device. While the MediaPlayer API is marked as deprecated, we are still using it to provide CarPlay support going back to iOS 10. Has anyone else run into this or have suggestions for workarounds?
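Not an answer to the entitlement error itself, but since MPPlayableContentDataSource is deprecated, here is a minimal sketch of the replacement path (the CarPlay framework with a CarPlay audio entitlement), in case migrating sidesteps the iOS 18 behavior. The template contents are illustrative; whether migration resolves this specific error is an assumption.

    import UIKit
    import CarPlay

    final class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate {
        var interfaceController: CPInterfaceController?

        func templateApplicationScene(_ scene: CPTemplateApplicationScene,
                                      didConnect interfaceController: CPInterfaceController) {
            self.interfaceController = interfaceController

            // Illustrative content; real items would come from your existing data source.
            let item = CPListItem(text: "Recently Played", detailText: nil)
            let section = CPListSection(items: [item])
            let list = CPListTemplate(title: "Library", sections: [section])
            interfaceController.setRootTemplate(list, animated: true, completion: nil)
        }

        func templateApplicationScene(_ scene: CPTemplateApplicationScene,
                                      didDisconnectInterfaceController interfaceController: CPInterfaceController) {
            self.interfaceController = nil
        }
    }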
3 replies, 1 boost, 412 views, Sep ’24
Setting the default output device using Core Audio stops working after using continuity with AirPods
Hi, I made an app which manages the sound input/output for the user, and I'm facing an unexpected behavior which can be replicated using Apple's HALLab tool too. Initially the app is able to set the input/output AudioDevice and everything works as expected; however, if you switch from your Mac to your iPhone while using AirPods Pro and then switch back again, the app is no longer able to set the output device. There's no error, it simply switches back to AirPods immediately. You can replicate the issue in HALLab (an app provided by Apple as "Additional Tools for Xcode"). How to:
1. Open HALLab, put on your AirPods and play some media.
2. Try changing the input and output source and study the expected behavior.
3. Unlock your iPhone, play some media and wait for the AirPods Pro to switch to the iPhone.
4. Go back to your Mac, play some media and wait for the AirPods to switch to the Mac.
5. Try changing the output source in HALLab; notice that the source immediately reverts back to AirPods, which is the unexpected behavior.
Changing the source from System Settings keeps working as expected. Any ideas on what's going on and how to handle it? I'm on macOS 15.1, using the following code to set the device:

    private func setDevice(deviceID: AudioDeviceID, forType type: AudioDeviceType) throws {
        guard isDeviceAvailable(deviceID) else {
            throw AudioDeviceError.deviceNotAvailable
        }
        print("setting the device.")

        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: type == .input ? kAudioHardwarePropertyDefaultInputDevice : kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        let dataSize = UInt32(MemoryLayout<AudioDeviceID>.size)
        var mutableDeviceID = deviceID // Create a mutable copy

        let status = AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, dataSize, &mutableDeviceID)
        guard status == noErr else {
            throw AudioDeviceError.deviceNotSet(status: status)
        }
    }
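One direction to experiment with (an assumption, not a confirmed fix): listen for changes to the default output device and re-apply your selection when something else, such as the AirPods continuity hand-off, flips it back. A sketch using the same Core Audio calls as above; the function name and re-apply strategy are illustrative.

    import Foundation
    import CoreAudio

    // Hypothetical: observe kAudioHardwarePropertyDefaultOutputDevice and re-apply a preferred device.
    func observeDefaultOutputDevice(preferred deviceID: AudioDeviceID,
                                    reapply: @escaping (AudioDeviceID) -> Void) -> OSStatus {
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        return AudioObjectAddPropertyListenerBlock(
            AudioObjectID(kAudioObjectSystemObject),
            &address,
            DispatchQueue.main
        ) { _, _ in
            // Something (possibly the continuity hand-off) changed the default output; re-assert if desired.
            reapply(deviceID)
        }
    }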
2 replies, 0 boosts, 353 views, Sep ’24
How often do you need to update MPNowPlayingInfo?
I am using an AVQueuePlayer to play a series of audio files. While implementing the now playing functionality for the iOS lock screen, I got a little confused with how to use MPNowPlayingInfo. When you update MPNowPlayingInfo, one of the fields is MPNowPlayingInfoPropertyElapsedPlaybackTime. This leads me to believe you need to call it at least once a second to keep that up-to-date. But if you don't call it that frequently, the Now Playing UI does update correctly as it's playing, so that makes me think you only need to call it once you start playing? It feels very expensive to keep on calling it every time in my periodic time observer, but is that the correct approach? Or do you just call it when you play/pause/skip, etc. ?
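For what it's worth, a sketch of the event-driven pattern: set the elapsed time and playback rate only when the state changes, and let the system extrapolate in between. The function name and the AVQueuePlayer parameter are mine; treating this as the intended usage is my reading of the keys, not something stated above.

    import MediaPlayer
    import AVFoundation

    func updateNowPlaying(for player: AVQueuePlayer, title: String, duration: TimeInterval) {
        let info: [String: Any] = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyPlaybackDuration: duration,
            // Snapshot of where playback currently is...
            MPNowPlayingInfoPropertyElapsedPlaybackTime: player.currentTime().seconds,
            // ...and the rate the system should extrapolate with (0.0 when paused).
            MPNowPlayingInfoPropertyPlaybackRate: Double(player.rate)
        ]
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }

    // Call this on play, pause, seek, and track change rather than from a periodic time observer.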
1 reply, 0 boosts, 316 views, Sep ’24
Compressing AVAudioPCMBuffer within AVAudioEngine Tap
Hi everyone, I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network. Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this. Any advice or code examples would be greatly appreciated! Thanks in advance, Ondřej

My current code without the compression:

    inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
        guard let self else { return }

        // 1. Send data
        // a) Convert the buffer into the desired format
        if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
            // b) Use the converted buffer
            // TODO: compress it into a different format
            if let data = outputBuffer.convertToData() {
                self.sendAudio(data)
            }
        }

        // 2. Get sound level
        self.visualizeRecorderBuffer(buffer)
    }

    func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
        let outputFrameCapacity = AVAudioFrameCount(
            round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
        )

        guard
            let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
            let converter = AVAudioConverter(from: format, to: outputFormat)
        else { return nil }

        converter.convert(to: outputBuffer, error: nil) { packetCount, status in
            status.pointee = .haveData
            return self
        }

        return outputBuffer
    }

    static private let websocketInputFormat = AVAudioFormat(
        commonFormat: .pcmFormatInt16,
        sampleRate: 16000,
        channels: 1,
        interleaved: false
    )!
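One route that may be worth trying inside the tap (a sketch only): convert the PCM buffer into an AVAudioCompressedBuffer with AVAudioConverter using kAudioFormatOpus, then send its bytes. Whether the built-in Opus encoder is available on every OS version you target, and whether the zeroed ASBD fields are acceptable to it, are assumptions to verify; the 16 kHz mono input matches the format in the code above.

    import AVFoundation

    // Sketch: encode one PCM buffer to Opus packets with AVAudioConverter.
    func encodeToOpus(_ pcmBuffer: AVAudioPCMBuffer) -> Data? {
        var desc = AudioStreamBasicDescription(
            mSampleRate: 16000, mFormatID: kAudioFormatOpus,
            mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 0,
            mBytesPerFrame: 0, mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0
        )
        guard let opusFormat = AVAudioFormat(streamDescription: &desc),
              let converter = AVAudioConverter(from: pcmBuffer.format, to: opusFormat) else {
            return nil
        }
        let outBuffer = AVAudioCompressedBuffer(format: opusFormat,
                                                packetCapacity: 8,
                                                maximumPacketSize: converter.maximumOutputPacketSize)
        var error: NSError?
        var fed = false
        let status = converter.convert(to: outBuffer, error: &error) { _, outStatus in
            if fed {
                outStatus.pointee = .noDataNow   // hand the buffer over only once per call
                return nil
            }
            fed = true
            outStatus.pointee = .haveData
            return pcmBuffer
        }
        guard status != .error, error == nil, outBuffer.byteLength > 0 else { return nil }
        return Data(bytes: outBuffer.data, count: Int(outBuffer.byteLength))
    }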
2 replies, 0 boosts, 516 views, Sep ’24
Understanding the number of input channels in Core Audio
Hello everyone, I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device, using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, and not with 20 channels. To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm told there is 1 channel. But when I probe the input audio unit, I'm told there are 2 input channels. Here's my program to demonstrate the issue:

    // InputDeviceChannels.m
    // Compile with:
    //   clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
    //
    // On my system, this prints:
    //   Device Name: MacBook Pro Microphone
    //   Number of Channels (Stream Format): 1
    //   Number of Elements (Element Count): 2

    #import <AudioToolbox/AudioToolbox.h>
    #import <AudioUnit/AudioUnit.h>
    #import <CoreAudio/CoreAudio.h>
    #import <Foundation/Foundation.h>

    void printDeviceInfo(AudioUnit audioUnit) {
        UInt32 size;
        OSStatus err;

        AudioStreamBasicDescription streamFormat;
        size = sizeof(streamFormat);
        err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &streamFormat, &size);
        if (err != noErr) {
            printf("Error getting stream format\n");
            exit(1);
        }
        int numChannels = streamFormat.mChannelsPerFrame;

        UInt32 elementCount;
        size = sizeof(elementCount);
        err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &elementCount, &size);
        if (err != noErr) {
            printf("Error getting element count\n");
            exit(1);
        }

        printf("Number of Channels (Stream Format): %d\n", numChannels);
        printf("Number of Elements (Element Count): %d\n", elementCount);
    }

    void printDeviceName(AudioDeviceID deviceID) {
        UInt32 size;
        OSStatus err;
        CFStringRef deviceName = NULL;

        size = sizeof(deviceName);
        err = AudioObjectGetPropertyData(
            deviceID,
            &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMain},
            0, NULL, &size, &deviceName);
        if (err != noErr) {
            printf("Error getting device name\n");
            exit(1);
        }

        char deviceNameStr[256];
        if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr), kCFStringEncodingUTF8)) {
            printf("Error converting device name to C string\n");
            exit(1);
        }
        CFRelease(deviceName);

        printf("Device Name: %s\n", deviceNameStr);
    }

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            OSStatus err;

            // Get the default input device ID
            AudioDeviceID input_device_id = kAudioObjectUnknown;
            {
                UInt32 property_size = sizeof(input_device_id);
                AudioObjectPropertyAddress input_device_property = {
                    kAudioHardwarePropertyDefaultInputDevice,
                    kAudioObjectPropertyScopeGlobal,
                    kAudioObjectPropertyElementMain,
                };
                err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL, &property_size, &input_device_id);
                if (err != noErr || input_device_id == kAudioObjectUnknown) {
                    printf("Error getting default input device ID\n");
                    exit(1);
                }
            }

            // Print the device name using the input device ID
            printDeviceName(input_device_id);

            // Open audio unit for the input device
            AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0};
            AudioComponent component = AudioComponentFindNext(NULL, &desc);
            AudioUnit audioUnit;
            err = AudioComponentInstanceNew(component, &audioUnit);
            if (err != noErr) {
                printf("Error creating AudioUnit\n");
                exit(1);
            }

            // Enable IO for input on the AudioUnit and disable output
            UInt32 enableInput = 1;
            UInt32 disableOutput = 0;
            err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enableInput, sizeof(enableInput));
            if (err != noErr) {
                printf("Error enabling input on AudioUnit\n");
                exit(1);
            }
            err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &disableOutput, sizeof(disableOutput));
            if (err != noErr) {
                printf("Error disabling output on AudioUnit\n");
                exit(1);
            }

            // Set the current device to the input device
            err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
            if (err != noErr) {
                printf("Error setting device for AudioUnit\n");
                exit(1);
            }

            // Initialize AudioUnit
            err = AudioUnitInitialize(audioUnit);
            if (err != noErr) {
                printf("Error initializing AudioUnit\n");
                exit(1);
            }

            // Print device info
            printDeviceInfo(audioUnit);

            // Clean up
            AudioUnitUninitialize(audioUnit);
            AudioComponentInstanceDispose(audioUnit);
        }
        return 0;
    }

It prints:

    Device Name: MacBook Pro Microphone
    Number of Channels (Stream Format): 1
    Number of Elements (Element Count): 2

I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output. Note 1: I know that I can ignore one channel, etc., etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs. Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus. How should I understand this? Is it that the audio unit is an abstraction layer and automatically converts mono input into stereo input? Can I ask AUHAL to provide me the same number of input channels that the audio device has?
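On the 1-vs-2 question, one more probe that may clarify things (an addition of mine, in Swift for brevity rather than the Objective-C used above): ask the device itself for its input stream configuration via kAudioDevicePropertyStreamConfiguration, which reports the channels per stream independently of whatever format the AUHAL unit exposes on its elements.

    import CoreAudio

    // Sketch: total input channels as reported by the device's stream configuration.
    func inputChannelCount(of deviceID: AudioDeviceID) -> Int? {
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyStreamConfiguration,
            mScope: kAudioDevicePropertyScopeInput,
            mElement: kAudioObjectPropertyElementMain
        )
        var size: UInt32 = 0
        guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &size) == noErr else { return nil }

        let rawPtr = UnsafeMutableRawPointer.allocate(byteCount: Int(size),
                                                      alignment: MemoryLayout<AudioBufferList>.alignment)
        defer { rawPtr.deallocate() }
        guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, rawPtr) == noErr else { return nil }

        let bufferList = UnsafeMutableAudioBufferListPointer(rawPtr.assumingMemoryBound(to: AudioBufferList.self))
        return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
    }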
1 reply, 0 boosts, 462 views, Sep ’24
Unable to Play Music With Music Kit
I'm trying to create an app that uses the Apple Music API, and while I can fetch playlists as I desire, when selecting a song from a playlist the music does not play. First I'm getting the playlists, then showing those playlists in a list in a view, then passing that "song", which is a Track type, into a PlayBackView. There are several UI components in that view, but I want to boil it down here for simplicity's sake to better understand my problem.

    struct PlayBackView: View {
        @State private var playState: PlayState = .pause
        private let player = ApplicationMusicPlayer.shared
        @State var song: Track

        private var isPlaying: Bool {
            return (player.state.playbackStatus == .playing)
        }

        var body: some View {
            VStack {
                AsyncImage(url: song.artwork?.url(width: 100, height: 100)) { image in
                    image
                        .resizable()
                        .frame(maxWidth: .infinity)
                        .aspectRatio(1.0, contentMode: .fit)
                } placeholder: {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                // Song Title
                Text(song.title)
                    .font(.title)

                // Album Title
                Text(song.albumTitle ?? "Album Title Not Found")
                    .font(.caption)

                // Play/Pause Button
                Button(action: {
                    handlePlayButton()
                }, label: {
                    Image(systemName: playPauseImage)
                })
                .padding()
                .foregroundStyle(.white)
                .font(.largeTitle)

                Image(systemName: airplayImage)
                    .font(ifDeviceIsConnected ? .largeTitle : .title3)
            }
            .padding()
        }

        private func handlePlayButton() {
            Task {
                if isPlaying {
                    player.pause()
                    playState = .play
                } else {
                    playState = .pause
                    await playTrack(song: song)
                }
            }
        }

        @MainActor
        public func playTrack(song: Track) async {
            do {
                try await player.play()
                playState = .play
            } catch {
                print(error.localizedDescription)
            }
        }
    }

These are the errors I'm seeing printed in the console in Xcode:

    prepareToPlay failed [no target descriptor]
    The operation couldn’t be completed. (MPMusicPlayerControllerErrorDomain error 1.)
    ASYNC-WATCHDOG-1: Attempting to wake up the remote process
    ASYNC-WATCHDOG-2: Tearing down connection
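One thing that stands out: player.play() is called without the queue ever being set in this view, and the "prepareToPlay failed" error reads to me like an empty or unprepared queue. That diagnosis is my assumption, not a confirmed answer; a sketch of assigning the track first, written as a drop-in variant of the playTrack function above:

    @MainActor
    private func playTrack(song: Track) async {
        do {
            // Give the player something to play before calling play().
            player.queue = ApplicationMusicPlayer.Queue(for: [song])
            try await player.prepareToPlay()   // optional; play() also prepares
            try await player.play()
            playState = .play
        } catch {
            print(error.localizedDescription)
        }
    }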
2 replies, 0 boosts, 457 views, Sep ’24
Detect Dolby Atmos programmatically
Hi, I am trying to detect if an audio stream is Dolby Atmos. I have existing code that determines whether a stream is Dolby Atmos based on the following:
- Channel count is greater than or equal to 8
- Binaural is true
- Immersive is true
- Downmix is false
I am trying to determine if these rules are correct, and to find documentation that specifies these rules that I can reference in the future. Any help you can provide is greatly appreciated. Regards, John
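Purely as a restatement of the rules listed above in code form (the rules themselves are the poster's heuristic; I cannot confirm them against documentation, and the type and field names are illustrative):

    // The four conditions from the post, expressed as a single check.
    struct StreamTraits {
        var channelCount: Int
        var isBinaural: Bool
        var isImmersive: Bool
        var isDownmix: Bool
    }

    func looksLikeDolbyAtmos(_ t: StreamTraits) -> Bool {
        t.channelCount >= 8 && t.isBinaural && t.isImmersive && !t.isDownmix
    }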
1 reply, 0 boosts, 468 views, Sep ’24