I successfully retrieved strings, arrays, and other data through a custom AudioObjectPropertySelector, but only with fixed return values. Whenever I modify it to return dynamic data, I get an error. Below is my code.
// Cases handled in the plug-in's property data getter:
case kPlugIn_CustomPropertyID:
{
    *((CFStringRef*)outData) = CFSTR("qin@@@123");
    *outDataSize = sizeof(CFStringRef);
}
break;

case kPlugIn_ContainDic:
{
    CFMutableDictionaryRef mutableDic1 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                                   0,
                                                                   &kCFTypeDictionaryKeyCallBacks,
                                                                   &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(mutableDic1, CFSTR("xingming"), CFSTR("qinmu"));
    *((CFDictionaryRef*)outData) = mutableDic1;
    *outDataSize = sizeof(CFPropertyListRef);
    // *((CFPropertyListRef*)outData) = mutableDic;
}
break;

case kPlugIn_ContainArray:
{
    CFMutableArrayRef mutableArray = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
    CFArrayAppendValue(mutableArray, CFSTR("Hello"));
    CFArrayAppendValue(mutableArray, CFSTR("World"));
    *((CFArrayRef*)outData) = mutableArray;
    *outDataSize = sizeof(CFArrayRef);
}
break;
These are fixed returns, and there are no issues when I retrieve the data.
When I change kPlugIn_ContainDic to return the following, the first retrieval after restarting the Core Audio service works fine. However, when I attempt to retrieve it again, it results in an error:
case kPlugIn_ContainDic:
{
    *outDataSize = sizeof(CFPropertyListRef);
    *((CFPropertyListRef*)outData) = mutableDic;
}
break;
Error output:
HALC_ShellDevice::CreateIOContextDescription: failed to get a description from the server
HAL_HardwarePlugIn_ObjectGetPropertyData: no object
HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 560947818 (!obj)
The declaration and usage of mutableDic are as follows:
static CFMutableDictionaryRef mutableDic;

static OSStatus BlackHole_Initialize(AudioServerPlugInDriverRef inDriver, AudioServerPlugInHostRef inHost)
{
    OSStatus theAnswer = 0;
    gPlugIn_Host = inHost;
    if (mutableDic == NULL) {
        mutableDic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                               100,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    }
    return theAnswer;
}
static OSStatus BlackHole_AddDeviceClient(AudioServerPlugInDriverRef inDriver, AudioObjectID inDeviceObjectID, const AudioServerPlugInClientInfo* inClientInfo)
{
    CFStringRef string = CFStringCreateWithFormat(kCFAllocatorDefault, NULL, CFSTR("%u"), inClientInfo->mClientID);
    CFMutableDictionaryRef dic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                           0,
                                                           &kCFTypeDictionaryKeyCallBacks,
                                                           &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dic, CFSTR("clientID"), string);
    CFDictionarySetValue(dic, CFSTR("bundleID"), inClientInfo->mBundleID);
    CFDictionarySetValue(mutableDic, string, dic);
    return 0;
}
Can someone tell me why this happens?
Hi,
I'm facing an issue with my Unity-based app when deploying it to the Apple Vision Pro (AVP). Often, after building and running the app on the device, the audio gets muted. I couldn't find any setting that lets me unmute it. The only solution I've found is to reset the device settings, which makes the audio work again.
Here are a few things I’ve noticed:
The sound works fine when I reset my device’s settings.
I haven't changed any sound or audio settings on the device before or after deploying the app.
The issue doesn’t always occur immediately, but when it does, resetting settings seems to be the only fix.
Could there be something in the AVP audio configuration that causes this problem? I’d appreciate any advice or suggestions.
Thanks!
I have a user who is reporting an error and has been kind enough to share screen recordings to help diagnose it. I am not experiencing this error myself, nor am I able to replicate it on other devices I've tried, so I'm stuck trying to fix it. His device and the other devices tested were all running iOS 17.5.1. Any details on the cause of this error, or potential workarounds I could use to resolve it, would be greatly appreciated.
try await ApplicationMusicPlayer.shared.play()
throws:
The operation couldn't be completed (MPMusicPlayerControllerErrorDomain error 6.)
MusicAuthorization.currentStatus is .authorized
ApplicationMusicPlayer.shared.isPreparedToPlay is false
ApplicationMusicPlayer.shared.queue.currentEntry is nil (I've noticed this to be the case even when I am able to successfully play as well)
Queue was loaded using ApplicationMusicPlayer.shared.queue = [album] but I also tried ApplicationMusicPlayer.shared.queue = ApplicationMusicPlayer.Queue(album:startingAt:) and it made no difference. album.playParameters are correct. He experiences the error when attempting to play any album.
Any and all help is truly appreciated. The Feedback Assistant report I filed has gone unanswered.
How can I search for a single song by using its song ID? I tried the following code but it's not working:
MusicCatalogResourceRequest(matching: Song.self, equalTo: song.id.rawValue)
I get the following errors in Xcode:
Cannot convert value of type 'Song.Type' to expected argument type 'KeyPath<MusicItemType.FilterType, String>'
Generic parameter 'MusicItemType' could not be inferred
I have the song ID saved as a string inside song so I just want to grab the full MusicKit MusicItemType from the API. I looked at the documentation but it doesn't make any sense to me.
Please help!
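For reference, here is a minimal sketch of the keyed form that initializer expects (assumptions: the stored string is a catalog song ID, and fetchSong is a hypothetical helper name). The compiler errors above come from passing Song.self where a key path into Song.FilterType, such as \.id, is expected.

import MusicKit

// Fetch a single catalog song by its ID string.
func fetchSong(withID idString: String) async throws -> Song? {
    let request = MusicCatalogResourceRequest<Song>(
        matching: \.id,
        equalTo: MusicItemID(idString)
    )
    let response = try await request.response()
    return response.items.first
}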
1. In the NullAudio sample I saw the custom property static const AudioObjectPropertySelector kPlugIn_CustomPropertyID = 'PCst';, but I don't know how to use this in a project (a client-side sketch follows below).
2. What is the difference between the PlugIn's and the Device's custom properties?
3. When I try to define a custom PropertySelector for the device, after adding kAudioObjectPropertyCustomPropertyInfoList to the NullAudio_HasDeviceProperty method, recompiling, and restarting the Core Audio service, I find that the virtual device no longer shows up.
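For question 1, a minimal client-side sketch (assumptions: the plug-in publishes 'PCst' as a CFString-valued custom property, readCustomStringProperty is a hypothetical helper, and you already have the plug-in's AudioObjectID, e.g. via kAudioHardwarePropertyTranslateBundleIDToPlugIn). A custom property is read with the same AudioObjectGetPropertyData call as any built-in property:

import CoreAudio

func readCustomStringProperty(from objectID: AudioObjectID) -> CFString? {
    var address = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(0x50437374), // FourCC 'PCst'
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var dataSize = UInt32(MemoryLayout<CFString?>.size)
    var value: CFString? = nil
    // The plug-in's getter fills in a retained CFString for this selector.
    let status = AudioObjectGetPropertyData(objectID, &address, 0, nil, &dataSize, &value)
    return status == noErr ? value : nil
}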
Haptics are often represented as audio for presentation purposes by Apple in videos and learning resources.
I am curious if:
...Apple has released, or is willing to release, any tools that may have been used to synthesize audio representing haptic patterns (such as in their WWDC19 audio-haptic presentation)?
...there are any current tools available that take haptic instruction as input (like AHAP) and outputs an audio file?
...there is some low-level access to the signal that drives the Taptic Engine, so that it can be repurposed as an audio stream?
...you have any other suggestions!
I can imagine some crude solutions that hack together preexisting synthesizers and fudging together a process to convert AHAP to MIDI instructions, dialing in some synth settings to mimic the behaviour of an actuator, but I'm not too interested in that rabbit hole just yet.
Any thoughts? Very curious what the process was for the WWDC videos and audio examples in their documentation...
Thank you!
Here are the crash logs:
NSInvalidArgumentException - *** -[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse
We observed that a few devices on iOS 16.x are not able to download audio media content, and if they try multiple times, the app crashes with the error above.
Please let us know if there are any known issues.
I've encountered a critical issue while developing a music player app using SwiftUI and MusicKit. The problem persists across multiple devices and iOS versions, specifically with the endSeeking() method of ApplicationMusicPlayer, which fails to stop the fast-forward operation as expected.
Development Environment:
Xcode 16 Beta 6
macOS Sequoia 15.0 Beta 7 (24A5327a)
Affected Devices:
iPhone 11 Pro Max (iOS 17.6)
iPhone SE 3 (iOS 18.0 Beta 7)
Here's the relevant code snippet:
Image(systemName: "forward.end.circle")
.foregroundStyle(.accent)
.gesture(
TapGesture()
.onEnded { _ in
vm.nextTrack()
}
)
.simultaneousGesture(
LongPressGesture(minimumDuration: 0.5)
.onChanged { isPressing in
if isPressing {
vm.player.beginSeekingForward()
}
}
.onEnded { _ in
vm.player.endSeeking()
}
)
The issue manifests when the long press ends: despite invoking the endSeeking() method, the fast-forward operation persists.
To troubleshoot, I've taken the following steps:
Confirmed that vm.player is set to ApplicationMusicPlayer.shared.
Attempted to combine endSeeking() with beginSeekingForward(), as per the documentation guidelines.
Despite these efforts, the problem persists across all tested devices and OS versions. This leads me to two critical questions:
Has anyone else encountered a similar issue?
Could this potentially be an undocumented bug in the latest MusicKit implementation?
Setting the default output device using Core Audio stops working after using continuity with AirPods
Hi, I made an app that manages the sound input/output for the user, and I'm facing unexpected behavior that can be replicated with Apple's HALLab tool too.
Initially the app is able to set the input/output AudioDevice and everything works as expected. However, if you switch from your Mac to your iPhone while using AirPods Pro and then switch back again, the app is no longer able to set the output device. There's no error; it simply switches back to the AirPods immediately.
You can replicate the issue in HALLab (an app provided by Apple as part of the "Additional Tools for Xcode" download).
How to:
1. Open HALLab, put on your AirPods, and play some media.
2. Try changing the input and output source and observe the expected behavior.
3. Unlock your iPhone, play some media, and wait for the AirPods Pro to switch to the iPhone.
4. Go back to your Mac, play some media, and wait for the AirPods to switch to the Mac.
5. Try changing the output source in HALLab; notice that the source immediately reverts to the AirPods, which is the unexpected behavior.
Changing the source from the System Settings keeps working as expected.
Any ideas on what's going on and how to handle that?
I'm on macOS 15.1, using the following code to set the device:
private func setDevice(deviceID: AudioDeviceID, forType type: AudioDeviceType) throws {
    guard isDeviceAvailable(deviceID) else {
        throw AudioDeviceError.deviceNotAvailable
    }
    print("setting the device.")

    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: type == .input ? kAudioHardwarePropertyDefaultInputDevice : kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    let dataSize = UInt32(MemoryLayout<AudioDeviceID>.size)
    var mutableDeviceID = deviceID // Create a mutable copy

    let status = AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, dataSize, &mutableDeviceID)
    guard status == noErr else {
        throw AudioDeviceError.deviceNotSet(status: status)
    }
}
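Not an answer to why the route reverts, but as a diagnostic aid or workaround sketch (assumption: re-applying the user's choice is acceptable for your app, and observeDefaultOutputDevice is a hypothetical helper), you can listen for changes to the default output device and log or re-apply your selection when it flips back:

private func observeDefaultOutputDevice() {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    // Fires whenever the system default output changes, including when it
    // snaps back to the AirPods; inspect the new default here and decide
    // whether to re-apply your selection.
    let status = AudioObjectAddPropertyListenerBlock(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        DispatchQueue.main
    ) { _, _ in
        print("Default output device changed")
    }
    if status != noErr {
        print("Failed to install listener: \(status)")
    }
}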
I am using an AVQueuePlayer to play a series of audio files. While implementing the now playing functionality for the iOS lock screen, I got a little confused with how to use MPNowPlayingInfo. When you update MPNowPlayingInfo, one of the fields is MPNowPlayingInfoPropertyElapsedPlaybackTime. This leads me to believe you need to call it at least once a second to keep that up-to-date.
But if you don't call it that frequently, the Now Playing UI does update correctly as it's playing, so that makes me think you only need to call it once you start playing?
It feels very expensive to keep calling it every time in my periodic time observer, but is that the correct approach? Or do you just call it when you play/pause/skip, etc.?
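In case it helps to see the pattern concretely, here is a sketch of what I believe is intended (assumptions: updateNowPlaying is a hypothetical helper called from the play/pause/seek/track-change handlers rather than from the periodic observer). Elapsed time and playback rate are set together, and the system extrapolates the moving progress bar between updates, so a once-per-second update should not be necessary:

import AVFoundation
import MediaPlayer

func updateNowPlaying(for player: AVQueuePlayer, title: String) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyTitle] = title
    if let duration = player.currentItem?.duration.seconds, duration.isFinite {
        info[MPMediaItemPropertyPlaybackDuration] = duration
    }
    // Set these only on state changes; the lock screen derives the
    // progress bar position from elapsed time + playback rate.
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
    info[MPNowPlayingInfoPropertyPlaybackRate] = player.rate
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}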
Hi everyone,
I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network.
Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this.
Any advice or code examples would be greatly appreciated!
Thanks in advance,
Ondřej
--
My current code without the compression:
inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
    guard let self else {
        return
    }

    // 1. Send data
    // a) Convert the buffer into the desired format
    if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
        // b) Use the converted buffer
        // TODO: compress it into a different format
        if let data = outputBuffer.convertToData() {
            self.sendAudio(data)
        }
    }

    // 2. Get sound level
    self.visualizeRecorderBuffer(buffer)
}
func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
    let outputFrameCapacity = AVAudioFrameCount(
        round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
    )

    guard
        let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
        let converter = AVAudioConverter(from: format, to: outputFormat)
    else {
        return nil
    }

    converter.convert(to: outputBuffer, error: nil) { packetCount, status in
        status.pointee = .haveData
        return self
    }

    return outputBuffer
}
static private let websocketInputFormat = AVAudioFormat(
    commonFormat: .pcmFormatInt16,
    sampleRate: 16000,
    channels: 1,
    interleaved: false
)!
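For the compression step, one approach worth sketching (untested here, and it assumes the OS exposes an Opus encoder through AVAudioConverter; if it does not, a third-party encoder such as libopus is the usual fallback) is to convert each PCM buffer into an AVAudioCompressedBuffer. compressToOpus is a hypothetical helper name:

import AVFoundation

// Compress a 16 kHz mono PCM buffer into Opus packets, if an encoder is available.
func compressToOpus(_ pcmBuffer: AVAudioPCMBuffer) -> Data? {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatOpus,
        AVSampleRateKey: 16000,
        AVNumberOfChannelsKey: 1
    ]
    guard let opusFormat = AVAudioFormat(settings: settings),
          let converter = AVAudioConverter(from: pcmBuffer.format, to: opusFormat) else {
        return nil
    }
    let outputBuffer = AVAudioCompressedBuffer(
        format: opusFormat,
        packetCapacity: 8,
        maximumPacketSize: converter.maximumOutputPacketSize
    )
    var fedInput = false
    var conversionError: NSError?
    converter.convert(to: outputBuffer, error: &conversionError) { _, outStatus in
        if fedInput {
            outStatus.pointee = .noDataNow   // only this one buffer is available per call
            return nil
        }
        fedInput = true
        outStatus.pointee = .haveData
        return pcmBuffer
    }
    guard conversionError == nil, outputBuffer.byteLength > 0 else { return nil }
    return Data(bytes: outputBuffer.data, count: Int(outputBuffer.byteLength))
}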
Hello! I'm making an app that will show a waveform of the frequency content of whatever is playing on a Mac. The question is whether it's possible to get access to the media signal and run an FFT on it.
I'm trying to create an app that uses the Apple Music API. While I can fetch playlists as I desire, when I select a song from a playlist the music does not play. First I get the playlists, then show those playlists in a list view, then pass that "song" (which is a Track) into a PlayBackView. There are several UI components in that view, but I want to boil it down here for simplicity's sake to better understand my problem.
struct PlayBackView: View {
    @State private var playState: PlayState = .pause
    private let player = ApplicationMusicPlayer.shared

    @State var song: Track

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    var body: some View {
        VStack {
            AsyncImage(url: song.artwork?.url(width: 100, height: 100)) { image in
                image
                    .resizable()
                    .frame(maxWidth: .infinity)
                    .aspectRatio(1.0, contentMode: .fit)
            } placeholder: {
                Image(systemName: "music.note")
                    .resizable()
                    .frame(width: 100, height: 100)
            }

            // Song Title
            Text(song.title)
                .font(.title)

            // Album Title
            Text(song.albumTitle ?? "Album Title Not Found")
                .font(.caption)

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
            }, label: {
                Image(systemName: playPauseImage)
            })
            .padding()
            .foregroundStyle(.white)
            .font(.largeTitle)

            Image(systemName: airplayImage)
                .font(ifDeviceIsConnected ? .largeTitle : .title3)
        }
        .padding()
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .play
            } else {
                playState = .pause
                await playTrack(song: song)
            }
        }
    }

    @MainActor
    public func playTrack(song: Track) async {
        do {
            try await player.play()
            playState = .play
        } catch {
            print(error.localizedDescription)
        }
    }
}
These are the errors I see printed in the Xcode console:
prepareToPlay failed [no target descriptor]
The operation couldn’t be completed. (MPMusicPlayerControllerErrorDomain error 1.)
ASYNC-WATCHDOG-1: Attempting to wake up the remote process
ASYNC-WATCHDOG-2: Tearing down connection
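One thing that stands out in the snippet above is that play() is called but the player's queue is never set from the selected Track, which is what "no target descriptor" often points to. A minimal sketch of the variant worth trying (assumption: song is the Track passed into the view, and this replaces the playTrack function above):

@MainActor
public func playTrack(song: Track) async {
    do {
        // Load the selected track into the queue before asking the player to play.
        player.queue = [song]
        try await player.prepareToPlay() // optional, but surfaces queue errors earlier
        try await player.play()
        playState = .play
    } catch {
        print(error.localizedDescription)
    }
}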
Hi,
I am trying to detect if an audio stream is Dolby Atmos. I have existing code that determines if a stream is Dolby Atmos based on the following:
Channel count is greater than or equal to 8
Binaural is true
Immersive is true
Downmix is false
I am trying to determine whether these rules are correct, and to find documentation specifying them that I can reference in the future.
Any help you can provide is greatly appreciated.
Regards,
John
Is it possible to play audio in the background or when the app is terminated? If yes, how can I do that in iOS using Swift? I receive an audio link in a Firebase notification, and I want to play that audio when the app is in the background or has been terminated.
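Not a full answer, but a sketch of the pieces that appear to be involved (assumptions: the "Audio, AirPlay, and Picture in Picture" background mode is enabled on the target, and startBackgroundPlayback is a hypothetical helper). Playback started in the foreground can continue in the background with a .playback session; starting playback from the background, or after the app has been terminated, is restricted and, as far as I know, often fails at session activation:

import AVFoundation

func startBackgroundPlayback(from url: URL) throws -> AVPlayer {
    // A .playback session keeps audio running when the app moves to the background,
    // provided the audio background mode is enabled in the target's capabilities.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)

    let player = AVPlayer(url: url)
    player.play()
    return player
}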
I am developing an app running on iOS/iPadOS and on macOS using MacCatalyst. It uses ApplicationMusicPlayer.shared to play music from Apple Music. However, on the Mac, songs with contentRating == .explicit do not work.
I will get the following error (sorry, German localization):
Failed to prepareToPlay error=<MPMusicPlayerControllerErrorDomain.6 "Failed to prepare to play" {}>
Error playing item: Der Vorgang konnte nicht abgeschlossen werden. (MPMusicPlayerControllerErrorDomain-Fehler 6.)
On iOS/iPadOS these songs play correctly. What can I do to also play explicit songs using MacCatalyst?
Thanks,
Dirk
I'm experiencing stuttering every time I record something with my iOS app on iOS 18 beta. The code ran fine on previous iOS versions.
The stuttering occurs for the first 2 seconds. Here's an example:
https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering
The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap are configured:
let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()

engine.attach(recorderNode)

engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
    guard let self = self else { return }
    do {
        // Write recording to disk
        try audioFile.write(buffer)
    } catch {
        // ...
    }
}
I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
We do custom audio buffering in our app. A very minimal version of the relevant code would look something like:
import AVFoundation
class LoopingBuffer {
    private var playerNode: AVAudioPlayerNode
    private var audioFile: AVAudioFile

    init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
        self.playerNode = playerNode
        self.audioFile = audioFile
    }

    func scheduleBuffer(_ frames: AVAudioFrameCount) async {
        let audioBuffer = AVAudioPCMBuffer(
            pcmFormat: audioFile.processingFormat,
            frameCapacity: frames
        )!
        try! audioFile.read(into: audioBuffer, frameCount: frames)
        await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
    }
}
We are in the migration process to Swift 6 concurrency and have moved a lot of our audio code into a global actor. So now we have something along the lines of:
import AVFoundation
@globalActor public actor AudioActor: GlobalActor {
    public static let shared = AudioActor()
}

@AudioActor
class LoopingBuffer {
    private var playerNode: AVAudioPlayerNode
    private var audioFile: AVAudioFile

    init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
        self.playerNode = playerNode
        self.audioFile = audioFile
    }

    func scheduleBuffer(_ frames: AVAudioFrameCount) async {
        let audioBuffer = AVAudioPCMBuffer(
            pcmFormat: audioFile.processingFormat,
            frameCapacity: frames
        )!
        try! audioFile.read(into: audioBuffer, frameCount: frames)
        await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
    }
}
Unfortunately this now causes an error:
error: sending 'audioBuffer' risks causing data races
| `- note: sending global actor 'AudioActor'-isolated 'audioBuffer' to nonisolated instance method 'scheduleBuffer(_:completionCallbackType:)' risks causing data races between nonisolated and global actor 'AudioActor'-isolated uses
I understand the error; what I don't understand is how I can safely use this API.
AVAudioPlayerNode is not marked as @MainActor so it seems like it should be safe to schedule a buffer from a custom actor as long as we don't send it anywhere else. Is that right?
AVAudioPCMBuffer is not Sendable so I don't think it's possible to make this callsite ever work from an isolated context. Even forgetting about the custom actor, if you instead annotate the class with @MainActor the same error is still present.
I think the AVAudioPlayerNode.scheduleBuffer() function should have a sending annotation to make clear that the buffer can't be used after it's sent. I think that would make the compiler happy but I'm not certain.
Am I overlooking something, holding it wrong, or is this API just pretty much unusable in Swift 6?
My current workaround is just to import AVFoundation with @preconcurrency but it feels dirty and I am worried there may be a real issue here that I am missing in doing so.
I am trying to achieve AAC playback; I have stripped off the ADTS header using a function.
I am not shown any errors by the Apple APIs; however, I cannot hear any playback.
Here is my ASBD (shown in the code below).
My sample is definitely 44.1 kHz, AAC-LC.
Here is the file for reference: https://dl.espressif.com/dl/audio/ff-16b-2c-44100hz.aac
Here are some relevant snippets of the code:
AudioStreamBasicDescription desc = {0};
desc.mSampleRate = 44100;              // Sample rate
desc.mFormatID = kAudioFormatMPEG4AAC; // Format ID for AAC
desc.mChannelsPerFrame = 2;            // Stereo audio
desc.mFramesPerPacket = 1024;          // AAC typically uses 1024 frames per packet
desc.mBitsPerChannel = 0;
desc.mBytesPerPacket = 0;
desc.mBytesPerFrame = 0;

OSStatus status =
    CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                   &desc,
                                   inlayout_size, // 32, corresponding to stereo
                                   inlayout_buf,
                                   kAudioFormatMPEG4AAC,
                                   nil,
                                   nil,
                                   &_fmtDesc);
const CMBlockBufferCustomBlockSource blockSource = {
    .version = kCMBlockBufferCustomBlockSourceVersion,
    .FreeBlock = customBlock_Free,
    .refCon = block,
};

OSStatus status;
CMSampleBufferRef sampleBuf;
CMBlockBufferRef blockBuf;

status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                            (block->p_buffer), // memoryBlock
                                            (block->i_buffer), // blockLength
                                            kCFAllocatorNull,  // blockAllocator
                                            &blockSource,      // customBlockSource
                                            0,                 // offsetToData
                                            (block->i_buffer), // dataLength
                                            0,                 // flags
                                            &blockBuf);

const CMSampleTimingInfo timeInfo = {
    .duration = kCMTimeInvalid,
    .presentationTimeStamp = CMTimeMake(_ptsSamples, _sampleRate),
    .decodeTimeStamp = kCMTimeInvalid,
};

status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                   blockBuf,        // dataBuffer
                                   _fmtDesc,        // formatDescription
                                   1024,            // numSamples
                                   1,               // numSampleTimingEntries
                                   &timeInfo,       // sampleTimingArray
                                   1,               // numSampleSizeEntries
                                   &_bytesPerFrame, // sampleSizeArray
                                   &sampleBuf);
The renderer then handles this sampleBuf, which works correctly, as I have tested it with other formats.
I have verified the hex dump of the p_buffer and it matches with that of the .aac file having removed the ADTS header.
Here is an output example
Hex dump of p_buffer(which is being passed to CMBlockBufferCreateWithMemoryBlock):
4CFE1DE0: 21 1A 8F 20 63 E7 FF FF 11 72 A3 20 C5 E3 B7 E9
4CFE1DF0: 42 F5 3D 9A D1 77 D2 F0 9A 00 00 B2 32 53 84 8C
4CFE1E00: E8 24 ED DF 23 04 3D CF A6 51 A8 D2 8F EE B3 FB
4CFE1E10: F4 CC 17 F9 7C 8B 75 06 29 8D D6 95 98 78 9D 87
4CFE1E20: 9C B4 9D 8E 2B 6C D2 90 D7 E3 C4 37 05 97 85 C1
4CFE1E30: F7 5E 7F D8 F3 DD 20 B5 73 31 C5 EC 3D 6F AC 5E
4CFE1E40: 45 AF CC 38 0D 5B 98 F5 F9 3B 3E D7 C3 8E 1B 38
4CFE1E50: F8 F1 9A 6F 96 05 15 CE 39 D6 2B 06 60 33 8A C4
4CFE1E60: EE 4F 6B B3 C9 CF F2 BF 3F B1 96 69 B9 62 34 62
4CFE1E70: CD 41 1C 08 CF 80 5F A4 60 BD 45 36 AC 66 00 40
4CFE1E80: 42 F6 95 F4 89 8A A2 24 11 01 74 08 82 33 94 D1
4CFE1E90: 0B 24 51 4A 55 28 06 21 78 85 D4 B5 13 49 1D AA
4CFE1EA0: 44 02 32 E9 42 61 8C 59 4A 65 96 4D BC BC AE D2
4CFE1EB0: F1 D0 00 00 D4 A2 F8 87 A0 FD C8 93 87 59 A2 CB
4CFE1EC0: BE B3 AB 49 C6 37 60 2B 50 26 D3 0C 1D 29 45 81
4CFE1ED0: D9 4E 62 5E 29 8E 27 19 75 FB 62 0B 3B C0 B9 E6
4CFE1EE0: EB A0 3F B8 D5 7E 77 90 C1 E2 9C D9 4E 5B 82 ED
4CFE1EF0: CF BC 55 1C 55 1B F2 DE CC B2 13 25 CB ED F5 B5
4CFE1F00: 6E F9 EF 38 DE 8C C4 38 C2 60 CF DA F3 F2 1F 80
4CFE1F10: C5 23 0C 3E 57 31 0D 5E EB 63 58 1A 28 38 7B B2
4CFE1F20: 0B F3 5B 33 96 59 55 44 4A 09 55 73 EC 94 A0 F3
4CFE1F30: FC F4 70 F9 76 FB FF 8D AD 13 01 30 05 C0 90 01
4CFE1F40: B2 37 27 24 44 B9 F0 24 4E C5 D4 25 D6 F7 20 4D
4CFE1F50: 39 92 5D 31 71 5B 4A B2 A4 C1 59 D4 42 60 1C 00
4CFE1F60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
i_buffer: 383
CMBlockBufferIsEmpty check: 0
Block not null in wrapBuffer
CMBlockBufferCreateWithMemoryBlock status: 0
I did try multiple configurations; no log shows any errors, yet I cannot hear playback.
Please help me identify what is wrong here.
I have used this as a reference, which seems to be based on previous Apple documentation
https://github.com/UFOooX/iOSAACStreamPlayer
I want my media app to support Siri when using the phone, and CarPlay when in the car, but I get an error during installation saying it's not possible because two extensions both use INPlayMediaIntent.
Can someone explain why this is bad? I don't understand why I can't have both.