Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post · Replies · Boosts · Views · Activity

Help with corrupted audio plugin authentication
Somehow I have a corrupted audio plugin authentication problem. I'm on an Apple silicon M1 Mac, and two audio plugins that were installed and working will no longer authenticate. Both vendors have been unable to troubleshoot, and I think the issue is a corrupted low-level file. One product authenticates correctly when I create a new user, but the other plugin only authenticates on the original user account and not on the newly created one. Reinstalling the plugins and macOS does not fix the issue. Any thoughts?
Replies: 0 · Boosts: 0 · Views: 168 · Activity: 2w
Help with CoreAudio Input level monitoring
I have spent the past 2 weeks diving into CoreAudio and have seemingly run into a wall. For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user-chosen audio input device (multi-channel in my case). I was not able to find any way to monitor input levels of a single AUHAL input device, so I decided to create a simple AUGraph for input level monitoring. The graph looks like:

[AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]

where B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower. B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't really want to pass audio through to an actual output, I have the volume muted on the MatrixMixerAU output channel. I tried using a GenericOutputAU instead of the default system output; however, the GenericOutputAU never seems to pull data from the ring buffer (the graph renderProc is never called if a GenericOutputAU is used instead of the AUHAL default output device).

I have not been able to get this simple graph to work. I do not see any errors when creating and initializing the graph, and I have verified that the inputProc is being called to fill the ring buffer, but when I read the level of the MatrixMixer, the levels are always -758 (silence). I have posted my demo project on GitHub in hopes of finding someone with CoreAudio expertise to help with this problem. I am willing to move this to DTS Code Level Support if there is someone in DTS with CoreAudio experience.

Notes:
My app is not sandboxed in this test.
I have tried with and without hardened runtime, with Audio Input checked.
The multichannel audio device I am using for testing is the Audient iD14 USB-C audio interface. It supports 12 input channels and 6 output channels. All input channels have been tested and are working in Ableton Live and Logic Pro.
Of particular interest, I can't even get the Apple CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence.
The MatrixMixerTest from the Apple documentation archives does work, but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.

Link to GitHub project page. Diagram of AUGraph for initial test (code that is on GitHub).

Once I get audio input level metering to work, my plan is to implement something like Phase 2 below, with the purpose of capturing a stereo input stream, mixing to mono, and sending it to lowpass, bandpass, and highpass AUs, again using the MatrixMixer to monitor the levels out of each filter. I have no plans for pass-through audio (sending actual audio out to devices); I am simply monitoring input levels. Diagram of ultimate scope: rendering audio levels of a stereo-to-mono stream after passing through various filters.
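For illustration, a minimal Swift sketch of how the metering read-back described above might look, assuming mixerUnit is the MatrixMixerAU's AudioUnit obtained from the graph (e.g. via AUGraphNodeInfo); the element numbering and scope depend on how the matrix is configured, so treat this as a starting point rather than a verified fix:

import AudioToolbox

// Enable metering on a matrix mixer and read the post-average power (in dB) for one element.
// Assumes the graph is already built and initialized.
func readPostAveragePower(mixerUnit: AudioUnit, element: AudioUnitElement) -> Float32? {
    // Metering must be switched on, otherwise the meters report silence.
    var meteringOn: UInt32 = 1
    var status = AudioUnitSetProperty(mixerUnit,
                                      kAudioUnitProperty_MeteringMode,
                                      kAudioUnitScope_Input,
                                      element,
                                      &meteringOn,
                                      UInt32(MemoryLayout<UInt32>.size))
    guard status == noErr else { return nil }

    // Read the averaged post-mix level for this element (silence is a very negative dB value).
    var level: AudioUnitParameterValue = 0
    status = AudioUnitGetParameter(mixerUnit,
                                   kMatrixMixerParam_PostAveragePower,
                                   kAudioUnitScope_Input,
                                   element,
                                   &level)
    return status == noErr ? level : nil
}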
Replies: 0 · Boosts: 0 · Views: 184 · Activity: 2w
Recording audio from a microphone using the AVFoundation framework does not work after reconnecting the microphone
There are different microphones that can be connected via a 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same. The code below gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session; at that point the microphone-in-use icon is displayed. The capture of the audio device (microphone) continues for a few seconds, then the session stops and the microphone-in-use icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start an audio capture session, and the microphone-in-use icon is displayed again. After a few seconds, access to the microphone stops, the audio capture session stops, and the icon disappears.

Now repeat the same steps, but after the first stop of access to the microphone, pull the microphone plug out of the connector and insert it back before starting the second session. In this case, the second access attempt begins and the running part of the program does not return errors, but the microphone-in-use icon is not displayed, and this is the problem. After the program is quit and restarted, the icon is displayed again. This problem is only the tip of the iceberg, since it also means it is not possible to record sound from the microphone after reconnecting it until the program is restarted.

Is this normal behavior of the AVFoundation framework? Is it possible to make access to the microphone work correctly after reconnecting it, with the usage indicator displayed? What additional actions should the programmer perform in this case? Is this behavior described somewhere in the documentation?

Below is the code that demonstrates the described behavior. I am also attaching an example of the microphone usage indicator icon. Computer description: MacBook Pro 13-inch 2020, Intel Core i7, macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>

AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;

std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;

void receiveResponse() {
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse() {
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}

void requestPermissions() {
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}

void timer(int timeSec) {
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining) {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

bool updateAudioInput() {
    [m_captureSession beginConfiguration];

    if (m_audioOutput) {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }

    if (m_audioInput) {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }

    AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID:[NSString stringWithUTF8String:"BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice) {
        std::cout << "Error input audio device creating" << std::endl;
        return false;
    }

    // m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
    // NSError *error = nil;
    NSError *error = [[NSError alloc] init];
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error) {
        const auto code = [error code];
        const auto domain = [error domain];
        const char* domainC = domain ? [domain UTF8String] : nullptr;
        std::cout << code << " " << domainC << std::endl;
    }

    if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    } else {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }

    if (!m_audioOutput) {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput]) {
            [m_captureSession addOutput:m_audioOutput];
        } else {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }

    [m_captureSession commitConfiguration];
    return true;
}

void start() {
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult) {
        std::cout << "Error, while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop() {
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}

int main() {
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
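One pattern worth trying, sketched here in Swift rather than the Objective-C++ above: listen for AVCaptureDevice connect notifications and rebuild the session's input from the freshly enumerated device object instead of reusing the old unique ID. The notification name is real AVFoundation API; whether this restores the in-use indicator on macOS 15 is exactly the open question, so this is a sketch, not a confirmed workaround.

import AVFoundation

// Sketch: rebuild the capture input whenever an audio device is (re)connected.
final class ReconnectHandler {
    private var observers: [NSObjectProtocol] = []

    func watch(session: AVCaptureSession) {
        let center = NotificationCenter.default
        observers.append(center.addObserver(forName: .AVCaptureDeviceWasConnected,
                                            object: nil, queue: .main) { note in
            guard let device = note.object as? AVCaptureDevice, device.hasMediaType(.audio) else { return }
            session.beginConfiguration()
            // Remove stale audio inputs that still point at the unplugged device.
            for input in session.inputs.compactMap({ $0 as? AVCaptureDeviceInput })
            where input.device.hasMediaType(.audio) {
                session.removeInput(input)
            }
            if let input = try? AVCaptureDeviceInput(device: device), session.canAddInput(input) {
                session.addInput(input)
            }
            session.commitConfiguration()
        })
    }
}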
Replies: 1 · Boosts: 0 · Views: 243 · Activity: 2w
Anyone know the output power of the headphone jack of a MacBook Pro for each percentage of volume?
Hello! I'm trying to create a headphone-safety prototype that gives warnings if I listen to music too loud, based on my headphones' impedance, sensitivity, and a target SPL level. All I need is data on the amount of power output at each volume percentage (I'm assuming the MacBook Pro has a 1-100% volume scale). If anyone has this info, or can direct me to someone who has it, that would be great! Also, do I contact Apple Support for things like this? I'm not too sure... Thanks!!
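The per-step output power of the jack isn't published, but the target side of the calculation follows directly from the headphone specs mentioned above. A small Swift sketch, with made-up example numbers (32 ohm impedance, 100 dB SPL/mW sensitivity) purely for illustration:

import Foundation

// Power (in mW) needed to reach a target SPL, given sensitivity in dB SPL per mW:
// SPL = sensitivity + 10 * log10(P_mW)  =>  P_mW = 10^((SPL - sensitivity) / 10)
func powerNeeded(targetSPL: Double, sensitivityDBPerMilliwatt: Double) -> Double {
    pow(10, (targetSPL - sensitivityDBPerMilliwatt) / 10)
}

// Corresponding RMS voltage across the headphone load: P = V^2 / R.
func voltageNeeded(powerMilliwatts: Double, impedanceOhms: Double) -> Double {
    sqrt(powerMilliwatts / 1000 * impedanceOhms)
}

// Hypothetical headphones: 32 ohm, 100 dB SPL/mW, target 85 dB SPL.
let p = powerNeeded(targetSPL: 85, sensitivityDBPerMilliwatt: 100)   // ~0.032 mW
let v = voltageNeeded(powerMilliwatts: p, impedanceOhms: 32)         // ~0.032 V RMS
print(p, v)

Mapping a macOS volume percentage to an output voltage still requires the jack's maximum output level and volume curve, which is the data the post is asking for.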
Replies: 0 · Boosts: 0 · Views: 135 · Activity: 2w
MusicKit UPCs changing and handling that
I use Universal Product Codes (UPCs) in my app to reliably identify albums, after having used album IDs for a time. Album IDs can change over time for no obvious reason (see here for song IDs), so I switched to UPCs since I believed they cannot change. Well, apparently they can. A few days ago I populated a JSON with UPCs, including 196871067713. Today, performing a MusicCatalogResourceRequest for that UPC does not return anything. Putting that UPC into an Apple Music link redirects to https://music.apple.com/de/album/folge-89-im-geistergarten/1683337782?l=en-GB, so I assume the UPC has changed from 196871067713 to 1683337782. Apple Music can handle that and redirects to the new UPC both in the app and on the website, but a MusicCatalogResourceRequest cannot. I filed a suggestion for that (FB15167146), but I need a solution sooner. Can I somehow detect where the URL is redirecting to? Is there a way MusicCatalogResourceRequest can do this? Performing a MusicCatalogSearchRequest could be an option but seems unreliable when using the title as the search term. Other ideas? Thank you
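One way to see where such a link lands is to issue the request yourself and watch the redirect. A hedged Swift sketch using URLSession's delegate callback; the specific Apple Music URL you feed it is assumed, and parsing the new identifier out of the final path is left to the caller:

import Foundation

// Follows an Apple Music link and reports the final URL after redirects,
// so the new album identifier can be read out of the path.
final class RedirectSpy: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    willPerformHTTPRedirection response: HTTPURLResponse,
                    newRequest request: URLRequest,
                    completionHandler: @escaping (URLRequest?) -> Void) {
        print("Redirected to: \(request.url?.absoluteString ?? "nil")")
        completionHandler(request)  // keep following the redirect chain
    }
}

func resolveAlbumURL(_ url: URL) async throws -> URL? {
    let session = URLSession(configuration: .default, delegate: RedirectSpy(), delegateQueue: nil)
    let (_, response) = try await session.data(from: url)
    return (response as? HTTPURLResponse)?.url
}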
Replies: 1 · Boosts: 1 · Views: 294 · Activity: Sep '24
Capturing system audio no longer works with macOS Sequoia
Our capture application records system audio via a HAL plugin; however, with the latest macOS 15 Sequoia, all audio buffer values are zero. I am attaching sample code that replicates the problem. Compile it as a Command Line Tool application with Xcode.

STEPS TO REPRODUCE
Install the BlackHole 2ch audio driver: https://existential.audio/blackhole/download/?code=1579271348
Start some system audio, e.g. YouTube.
Compile and run the sample application.
On macOS up to Sonoma, you will hear audio via loopback and see audio values in the debug/console window. On macOS Sequoia, you will not hear audio and the audio values are 0.

#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudio.h>

#define BLACKHOLE_UID @"BlackHole2ch_UID"
#define DEFAULT_OUTPUT_UID @"BuiltInSpeakerDevice"

@interface AudioCaptureDelegate : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
@end

void setDefaultAudioDevice(NSString *deviceUID);

@implementation AudioCaptureDelegate

// receive samples from the CoreAudio/HAL driver and print amplitude values for testing
// this is where samples would normally be copied and passed downstream for further processing,
// which is not needed in this simple sample application
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Access the audio data in the sample buffer
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (!blockBuffer) {
        NSLog(@"No audio data in the sample buffer.");
        return;
    }

    size_t length;
    char *data;
    CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &length, &data);

    // Process the audio samples to calculate the average amplitude
    int16_t *samples = (int16_t *)data;
    size_t sampleCount = length / sizeof(int16_t);
    int64_t sum = 0;
    for (size_t i = 0; i < sampleCount; i++) {
        sum += abs(samples[i]);
    }

    // Calculate and log the average amplitude
    float averageAmplitude = (float)sum / sampleCount;
    NSLog(@"Average Amplitude: %f", averageAmplitude);
}

@end

// set the default audio device to BlackHole while testing, or to the speakers when done
// called by main
void setDefaultAudioDevice(NSString *deviceUID) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID = kAudioObjectUnknown;
    UInt32 size;
    CFStringRef uidString = (__bridge CFStringRef)deviceUID;

    // Gets the device corresponding to the given UID.
    AudioValueTranslation translation;
    translation.mInputData = &uidString;
    translation.mInputDataSize = sizeof(uidString);
    translation.mOutputData = &deviceID;
    translation.mOutputDataSize = sizeof(deviceID);
    size = sizeof(translation);
    address.mSelector = kAudioHardwarePropertyDeviceForUID;
    address.mScope = kAudioObjectPropertyScopeGlobal; //????
    address.mElement = kAudioObjectPropertyElementMain;

    OSStatus status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, &size, &translation);
    if (status != noErr) {
        NSLog(@"Error: Could not retrieve audio device ID for UID %@. Status code: %d", deviceUID, (int)status);
        return;
    }

    AudioObjectPropertyAddress propertyAddress;
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;

    status = AudioObjectSetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, sizeof(AudioDeviceID), &deviceID);
    if (status == noErr) {
        NSLog(@"Default audio device set to %@", deviceUID);
    } else {
        NSLog(@"Failed to set default audio device: %d", status);
    }
}

// sets the BlackHole device as default and configures it as an AVCaptureDeviceInput
// sets the speakers as loopback so we can hear what is being captured
// sets up a queue to receive capture samples
// runs the session for 30 seconds, then restores the speakers as the default output
int main(int argc, const char * argv[]) {
    @autoreleasepool {
        // Create the capture session
        AVCaptureSession *session = [[AVCaptureSession alloc] init];

        // Select the audio device
        AVCaptureDevice *audioDevice = nil;
        NSString *audioDriverUID = nil;
        audioDriverUID = BLACKHOLE_UID;
        setDefaultAudioDevice(audioDriverUID);
        audioDevice = [AVCaptureDevice deviceWithUniqueID:audioDriverUID];
        if (!audioDevice) {
            NSLog(@"Audio device %s not found!", [audioDriverUID UTF8String]);
            return -1;
        } else {
            NSLog(@"Using Audio device: %s", [audioDriverUID UTF8String]);
        }

        // Configure the audio input with the selected device (BlackHole)
        NSError *error = nil;
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
        if (error || !audioInput) {
            NSLog(@"Failed to create audio input: %@", error);
            return -1;
        }
        [session addInput:audioInput];

        // Configure the audio data output
        AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        AudioCaptureDelegate *delegate = [[AudioCaptureDelegate alloc] init];
        dispatch_queue_t queue = dispatch_queue_create("AudioCaptureQueue", NULL);
        [audioOutput setSampleBufferDelegate:delegate queue:queue];
        [session addOutput:audioOutput];

        // Set audio settings
        NSDictionary *audioSettings = @{
            AVFormatIDKey: @(kAudioFormatLinearPCM),
            AVSampleRateKey: @48000,
            AVNumberOfChannelsKey: @2,
            AVLinearPCMBitDepthKey: @16,
            AVLinearPCMIsFloatKey: @NO,
            AVLinearPCMIsNonInterleaved: @NO
        };
        [audioOutput setAudioSettings:audioSettings];

        AVCaptureAudioPreviewOutput *loopback_output = nil;
        loopback_output = [[AVCaptureAudioPreviewOutput alloc] init];
        loopback_output.volume = 1.0;
        loopback_output.outputDeviceUniqueID = DEFAULT_OUTPUT_UID;
        [session addOutput:loopback_output];
        const char *deviceID = loopback_output.outputDeviceUniqueID ? [loopback_output.outputDeviceUniqueID UTF8String] : "nil";
        NSLog(@"session addOutput for preview/loopback: %s", deviceID);

        // Start the session
        [session startRunning];
        NSLog(@"Capturing audio data for 30 seconds...");
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:30.0]];

        // Stop the session
        [session stopRunning];
        NSLog(@"Capture session stopped.");
        setDefaultAudioDevice(DEFAULT_OUTPUT_UID);
    }
    return 0;
}
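Since the delegate above interprets the buffer as interleaved 16-bit integers, one quick sanity check (sketched here in Swift for an equivalent delegate) is to read the actual stream description off each sample buffer before averaging; a stream that looks like all zeros can also be a format mismatch rather than missing audio.

import AVFoundation

// Sketch: inspect the real format of an incoming CMSampleBuffer inside
// captureOutput(_:didOutput:from:) before treating the bytes as Int16.
func logStreamFormat(of sampleBuffer: CMSampleBuffer) {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee else {
        print("No format description on this buffer")
        return
    }
    let isFloat = (asbd.mFormatFlags & kAudioFormatFlagIsFloat) != 0
    print("sampleRate=\(asbd.mSampleRate) channels=\(asbd.mChannelsPerFrame) " +
          "bits=\(asbd.mBitsPerChannel) float=\(isFloat)")
}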
Replies: 4 · Boosts: 0 · Views: 279 · Activity: 3w
Play audio data coming from serial port
Hi. I want to read ADPCM-encoded audio data coming from an external device to my Mac via a serial port (/dev/cu.usbserial-0001) in 256-byte chunks and feed it into an audio player. So far I am using Swift and SwiftSerial (GitHub - mredig/SwiftSerial: A Swift Linux and Mac library for reading and writing to serial ports) to get the data via serialPort.asyncBytes() into an AsyncStream, but I am struggling to understand how to feed the stream to an AVAudioPlayer or similar. I am new to Swift and macOS audio development, so any help to get me on the right track is greatly appreciated. Thx
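AVAudioPlayer only plays complete files, so for a continuous stream the usual route is AVAudioEngine with an AVAudioPlayerNode that gets PCM buffers scheduled as chunks arrive. A minimal sketch, assuming the ADPCM has already been decoded to Int16 PCM elsewhere; the 8 kHz mono format is a placeholder for whatever the device actually sends:

import AVFoundation

// Plays a continuous stream of decoded PCM chunks through AVAudioEngine.
final class StreamPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    // Assumed format of the decoded samples; adjust to the device's real rate/channel count.
    private let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)!

    func start() throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // Call for every chunk after ADPCM decoding (the decode step is not shown here).
    func enqueue(_ samples: [Int16]) {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(samples.count)) else { return }
        buffer.frameLength = AVAudioFrameCount(samples.count)
        let channel = buffer.floatChannelData![0]
        for (i, s) in samples.enumerated() {
            channel[i] = Float(s) / Float(Int16.max)   // Int16 -> Float32 in [-1, 1]
        }
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}

The AsyncStream loop would simply decode each 256-byte chunk and call enqueue(_:) with the result.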
Replies: 0 · Boosts: 0 · Views: 131 · Activity: 3w
Get audio volume from microphone
Hello. We are trying to get the audio volume from the microphone. We have two questions.

1. Can anyone tell me about AVAudioEngine.InputNode.volume? I expect it to return 0 in silence and a float value up to 1.0 depending on the volume, but it looks like 1.0 (the default value) is returned at all times. In which cases does it return 0.5 or 0? Sample code is below; the microphone works correctly.

// instance members
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!

// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()

// volume getter
print("\(self.node.volume)")

2. What is the best practice to get the audio volume from the microphone? Requirements:
Without AVAudioRecorder. We use it for streaming audio.
It should withstand high-frequency access.

Testing info:
Device: iPhone XR
OS version: iOS 18

Best Regards.
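A note for context: volume on the input node is a mixing gain (how loudly the input is mixed downstream), not a live level meter. One common way to measure the incoming level is to install a tap on the input node and compute RMS per buffer; a sketch, with the RMS-to-level mapping being my own choice rather than any documented scale:

import AVFoundation

// Sketch: measure input level by tapping the input node and computing RMS per buffer.
func startLevelMetering(engine: AVAudioEngine, onLevel: @escaping (Float) -> Void) throws {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let channel = buffer.floatChannelData?[0] else { return }
        let n = Int(buffer.frameLength)
        var sum: Float = 0
        for i in 0..<n { sum += channel[i] * channel[i] }
        let rms = sqrt(sum / Float(max(n, 1)))   // ~0.0 in silence, approaching 1.0 at full scale
        onLevel(rms)
    }
    engine.prepare()
    try engine.start()
}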
Replies: 2 · Boosts: 0 · Views: 292 · Activity: 3w
arm64 Logic Leaking Plugins (Not Calling AP_Close)
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects. I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep). Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
Replies: 1 · Boosts: 1 · Views: 166 · Activity: 4w
SFCustomLanguageModelData.CustomPronunciation and X-SAMPA string conversion
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation? I am following the example from WWDC23: https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/ For this kind of custom pronunciation we need the X-SAMPA string of the specific word. There are tools available on the web to do this: Word to IPA: https://openl.io/ IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo. For example, "Winawer" is converted to "w I n aU @r" in the demo, while the online tools give "/wI"nA:w@r/".
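For reference, the shape of the WWDC23 usage looks roughly like the sketch below; the identifier and file path are placeholders, and note the phonemes array expects space-separated X-SAMPA symbols without the surrounding slashes or stress marks that IPA converters tend to emit.

import Speech

// Sketch following the WWDC23 sample shape; identifier and path are hypothetical.
let data = SFCustomLanguageModelData(locale: Locale(identifier: "en_US"),
                                     identifier: "com.example.SampleApp",
                                     version: "1.0") {
    // X-SAMPA, one symbol per space, no slashes and no stress marks:
    SFCustomLanguageModelData.CustomPronunciation(grapheme: "Winawer",
                                                  phonemes: ["w I n aU @r"])
}

func exportModel() async throws {
    try await data.export(to: URL(filePath: "CustomLMData.bin", directoryHint: .notDirectory))
}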
Replies: 0 · Boosts: 1 · Views: 177 · Activity: 3w
Is there a recommended approach to safeguarding against audio recording losses via app crash?
AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording, and I need to be able to recover. I'm thinking of streaming the recording to disk. I know that is possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky or you're the person who built it. Does Apple have a recommended strategy for failsafe audio recordings? I'm thinking of chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
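The chunking idea described above can also be approximated with a single AVAudioEngine tap that writes each buffer straight into an AVAudioFile, so everything written so far is already on disk if the app dies. A minimal sketch; the file naming and format choices are mine, not a recommended Apple strategy:

import AVFoundation

// Sketch: write incoming audio to disk buffer-by-buffer so a crash only loses
// the samples that were still in memory.
final class CrashSafeRecorder {
    private let engine = AVAudioEngine()
    private var file: AVAudioFile?

    func start(to url: URL) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        file = try AVAudioFile(forWriting: url, settings: format.settings)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            // Each write lands on disk incrementally; ignore individual write errors here.
            try? self?.file?.write(from: buffer)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        file = nil   // releasing the file closes it
    }
}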
Replies: 1 · Boosts: 0 · Views: 177 · Activity: 4w
MusicKit currentEntry.item is nil but currentEntry exists.
I'm trying to get the item that's assigned to the currentEntry when playing any song; it's currently coming up nil while the song is playing. Note that currentEntry returns: MusicPlayer.Queue.Entry(id: "evn7UntpH::ncK1NN3HS", title: "SONG TITLE") I'm a bit stumped on the API usage: if the song is playing, how can the queue item be nil?

if queueObserver == nil {
    queueObserver = ApplicationMusicPlayer.shared.queue.objectWillChange
        .sink { [weak self] in
            self?.handleNowPlayingChange()
        }
}

private func handleNowPlayingChange() {
    if let currentItem = ApplicationMusicPlayer.shared.queue.currentEntry {
        if let song = currentItem.item {
            self.currentlyPlayingID = song.id
            self.objectWillChange.send()
            print("Song ID: \(song.id)")
        } else {
            print("NO ITEM: \(currentItem)")
        }
    } else {
        print("No Entries: \(ApplicationMusicPlayer.shared.queue.currentEntry)")
    }
}
Replies: 0 · Boosts: 0 · Views: 161 · Activity: 4w
Detect when (internal or external) microphone is being used
Hello, I use kAudioDevicePropertyDeviceIsRunningSomewhere to check whether an internal or external microphone is being used. My code works well for the internal microphone and for microphones connected with a cable. External microphones connected via Bluetooth do not report their status: the status is always requested successfully, but it is always reported as inactive. The main relevant parts of my code:

static inline AudioObjectPropertyAddress makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID, AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status = AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; // this line never gets executed in my tests; the call above always succeeds, but it always gives back a "false" status.
    }
    return static_cast<BOOL>(prop == 1);
}

...

__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});

What could cause this, and how could it be fixed? Thank you for your help in advance!
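Rather than polling each device, it may also be worth registering a property listener so changes in the running state arrive as callbacks. A Swift sketch of that setup; whether Bluetooth input devices ever fire it is exactly what the post is questioning, so this is an experiment, not a fix:

import CoreAudio

// Sketch: get notified when a device's "is running somewhere" state changes.
func observeRunningState(of deviceID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    return AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var readAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var running: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        if AudioObjectGetPropertyData(deviceID, &readAddress, 0, nil, &size, &running) == noErr {
            print("Device \(deviceID) running somewhere: \(running == 1)")
        }
    }
}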
Replies: 3 · Boosts: 0 · Views: 856 · Activity: Nov '23
API to use for high-level audio playback to a specific audio device?
I'm working on a little light and sound controller in Swift, driving DMX lights and audio. For the audio portion, I need to play a bunch of looping sounds (long-duration MP3s), and occasionally play sound effects (short-duration sounds, varying formats). I want all of this mixed into selected channels on specific devices. That is, I might have one audio stream going to the left channel, and a completely different one going to the right channel. What's the right API to do this from Swift? Core Audio? AVPlayer stuff?
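One workable combination on macOS, sketched below, is AVAudioEngine (which mixes multiple AVAudioPlayerNodes) plus pointing the engine's output unit at a specific device via kAudioOutputUnitProperty_CurrentDevice; the device-lookup step is omitted, and this is one option among several rather than the single right API:

import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch: route an AVAudioEngine to a specific output device, then play a file panned hard left.
// `outputDeviceID` would come from the HAL (e.g. kAudioHardwarePropertyDevices).
func makeEngine(outputDeviceID: AudioDeviceID, fileURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()

    // Bind the engine's output AudioUnit to the chosen device (macOS only).
    if let outputUnit = engine.outputNode.audioUnit {
        var deviceID = outputDeviceID
        AudioUnitSetProperty(outputUnit,
                             kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global,
                             0,
                             &deviceID,
                             UInt32(MemoryLayout<AudioDeviceID>.size))
    }

    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)
    player.pan = -1.0   // hard left; a second player node could be panned right

    let file = try AVAudioFile(forReading: fileURL)
    player.scheduleFile(file, at: nil, completionHandler: nil)

    try engine.start()
    player.play()
    return engine
}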
Replies: 0 · Boosts: 0 · Views: 149 · Activity: 4w
How to observe AVCaptureDevice.DiscoverySession devices property?
In the Apple Developer documentation, https://developer.apple.com/documentation/avfoundation/avcapturedevice/discoverysession you can find the sentence "You can also key-value observe this property to monitor changes to the list of available devices." But how do you use it? I tried it with the code below and tested on my MacBook with EarPods. When I disconnect the EarPods, nothing happened. MacBook Air M2, macOS Sequoia 15.0.1, Xcode 16.0

import Foundation
import AVFoundation

let discovery_session = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.microphone], mediaType: .audio, position: .unspecified)
let devices = discovery_session.devices
for device in devices {
    print(device.localizedName)
}

let device = devices[0]
let observer = Observer()
discovery_session.addObserver(observer, forKeyPath: "devices", options: [.new, .old], context: nil)

let input = try! AVCaptureDeviceInput(device: device)
let queue = DispatchQueue(label: "queue")
var output = AVCaptureAudioDataOutput()
let delegate = OutputDelegate()
output.setSampleBufferDelegate(delegate, queue: queue)

var session = AVCaptureSession()
session.beginConfiguration()
session.addInput(input)
session.addOutput(output)
session.commitConfiguration()
session.startRunning()

let group = DispatchGroup()
let q = DispatchQueue(label: "", attributes: .concurrent)
q.async(group: group, execute: DispatchWorkItem() {
    sleep(10)
    session.stopRunning()
})
_ = group.wait(timeout: .distantFuture)

class Observer: NSObject {
    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        print("Change")
        if keyPath == "devices" {
            if let newDevices = change?[.newKey] as? [AVCaptureDevice] {
                print("New devices: \(newDevices.map { $0.localizedName })")
            }
            if let oldDevices = change?[.oldKey] as? [AVCaptureDevice] {
                print("Old devices: \(oldDevices.map { $0.localizedName })")
            }
        }
    }
}

class OutputDelegate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("Output")
    }
}
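An alternative worth trying (sketched here, not verified against Sequoia's behavior) is Swift's block-based KVO, which keeps the observation token alive explicitly and avoids the stringly-typed key path:

import AVFoundation

// Sketch: block-based KVO on the discovery session's devices list.
let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.microphone],
                                                        mediaType: .audio,
                                                        position: .unspecified)

// Keep a strong reference to the token; the observation stops when it is deallocated.
let token = discoverySession.observe(\.devices, options: [.old, .new]) { _, change in
    let old = change.oldValue?.map(\.localizedName) ?? []
    let new = change.newValue?.map(\.localizedName) ?? []
    print("Devices changed: \(old) -> \(new)")
}

// Keep the process alive long enough to plug/unplug a device and see the callback.
RunLoop.main.run(until: Date(timeIntervalSinceNow: 30))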
Replies: 6 · Boosts: 0 · Views: 275 · Activity: Oct '24
Ford Puma Sync 3 problems with iOS 18
Good afternoon. Since installing iOS 18 on my iPhone 15 Pro, I have problems using Apple CarPlay with my Ford Puma with Sync 3. More specifically, problems with voice audio commands, selecting audio tracks, Bluetooth, etc. Are you aware of this? Thanks a lot. Regards, Alberto
Replies: 0 · Boosts: 0 · Views: 223 · Activity: Oct '24
How to get Song Information From Queue?
I have a recent post kind of outlining a similar question here. This time, though, I'm confident that inserting an array of Track works when inserting into ApplicationMusicPlayer.shared.queue, but now I'm not sure how I can initialize the queue to display the song title and artwork, for example. I'm also not sure how to get the current queue item's artist information and album information, which I feel should be easy to do, so maybe I'm missing something obvious. Hope this paints a picture of what I'm trying to do; I'm posting the necessary code here to help debug/figure out this problem.

import SwiftUI
import MusicKit

struct PlayBackView: View {
    @Environment(\.scenePhase) var scenePhase
    @Environment(\.openURL) private var openURL

    // Adding Enum Here for Question Sake
    enum PlayState {
        case play
        case pause
    }

    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var songTimer: Int = Int.random(in: 5...30)
    @State private var roundTimer: Int = 5
    @State private var isTimerActive = false
    // @State private var volumeValue = VolumeObserver()
    @State private var isFirstPlay = true
    @State private var isDancing = false
    @State private var player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var playPauseImage: String {
        switch playState {
        case .play:
            "pause.fill"
        case .pause:
            "play.fill"
        }
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    /* This is where I want to display song title, album title, and artist name for example */

                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Unable to Find Song Title")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // Album Title
                    // Text(player.queue.currentEntry ?? "Album Title Not Found")
                    //     .font(.caption)
                    //     .fixedSize(horizontal: false, vertical: true)

                    // I don't know what the subtitle actually grabs
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        // Maybe I should use the `.task` modifier here?
        .onAppear {
            // I'm sure this code could be improved but don't think it'll help answer the question at the moment.
            Task {
                if let songs = songs {
                    do {
                        if isShuffled {
                            let shuffledSongs = songs.shuffled()
                            try await player.queue.insert(shuffledSongs, position: .tail)
                            handlePlayButton()
                        } else {
                            try await player.queue.insert(songs, position: .tail)
                        }
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            }
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
                isTimerActive = false
            } else {
                playState = .play
                await playTrack()
                isTimerActive = true
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }
}

//#Preview {
//    PlayBackView()
//}
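One way to get the richer metadata the post asks about, sketched below; the property names come from MusicKit's Song/MusicVideo types and the enum cases are my reading of MusicPlayer.Queue.Entry.Item, so treat it as a sketch rather than confirmed behavior:

import MusicKit

// Sketch: pull title / artist / album off the currently playing queue entry.
func currentSongInfo() -> (title: String, artist: String, album: String)? {
    guard let entry = ApplicationMusicPlayer.shared.queue.currentEntry,
          let item = entry.item else {
        return nil   // the entry exists but its item may not be resolved yet
    }
    switch item {
    case .song(let song):
        return (song.title, song.artistName, song.albumTitle ?? "Unknown Album")
    case .musicVideo(let video):
        return (video.title, video.artistName, video.albumTitle ?? "Unknown Album")
    @unknown default:
        return nil
    }
}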
Replies: 2 · Boosts: 0 · Views: 315 · Activity: Sep '24
How to fetch all albums after performing a library request
Hi, in my app I am using MusicLibraryRequest<Artist> to fetch all of the artists in someone's library collection. With this response I then fetch each artist's albums: artist.with([.albums]). The response from this only gives albums in the user's library collection. I would like to augment it with all of the albums for an artist from the full catalog. I'm using MusicKit and targeting iOS 18 and visionOS 2. Could someone please point me towards the best way to approach this?
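One approach that may work (hedged; I have not verified it on visionOS 2, and the with(_:preferredSource:) overload is an assumption about the MusicKit surface) is to ask MusicKit to resolve the albums relationship against the catalog rather than the library:

import MusicKit

// Sketch: fetch a library artist, then try loading its albums from the catalog side.
func catalogAlbums(forLibraryArtistNamed name: String) async throws -> MusicItemCollection<Album>? {
    let request = MusicLibraryRequest<Artist>()
    let artists = try await request.response().items
    guard let artist = artists.first(where: { $0.name == name }) else { return nil }

    // preferredSource asks MusicKit to resolve the relationship against the catalog.
    let detailed = try await artist.with([.albums], preferredSource: .catalog)
    return detailed.albums
}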
Replies: 1 · Boosts: 0 · Views: 286 · Activity: Sep '24