We're integrating a web-based group calling application into a native iOS application and finding that every time a CallKit session is fully established, the web-based media streams break, rendering as gray with no audio.
Up to iOS 18 we worked around it by not fulfilling the call start action, but that's no longer an option because the audio no longer gets automatically redirected to the speaker. We would now need the CXProvider's didActivateAudioSession callback, but that would break the video.
The sample project loads up a simple webpage in a WKWebView which contains a video tag streaming the media from the device's camera.
At the same time it sets up a new CallKit session by requesting and fulfilling a CXStartCallAction transaction.
You will notice that the media doesn't render and, if you follow the warnings we left, you will find that not fulfilling the CXStartCallAction fixes it.
Unfortunately that's not a workaround we can use, as we need the CXProvider delegate to inform us about audio session changes so we can redirect the audio to the speaker (so the proximity sensor doesn't activate and locking the screen doesn't end the call).
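For context, the relevant CallKit pieces look roughly like this (a condensed sketch, not the exact sample code; class and property names are placeholders):

import CallKit
import AVFAudio

final class CallManager: NSObject, CXProviderDelegate {
    private let provider = CXProvider(configuration: CXProviderConfiguration())
    private let callController = CXCallController()

    func startCall(handle: String) {
        provider.setDelegate(self, queue: nil)
        let action = CXStartCallAction(call: UUID(), handle: CXHandle(type: .generic, value: handle))
        callController.request(CXTransaction(action: action)) { error in
            if let error { print("Start call failed: \(error)") }
        }
    }

    // Fulfilling this action is what breaks the WKWebView media streams for us.
    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        action.fulfill()
    }

    func providerDidReset(_ provider: CXProvider) {}

    // The callback we need so we can route audio to the speaker.
    func provider(_ provider: CXProvider, didActivateAudioSession audioSession: AVAudioSession) {
        try? audioSession.overrideOutputAudioPort(.speaker)
    }
}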
Any insights or workarounds would be greatly appreciated.
Hello,
Using ShazamKit, based on a Shazam catalog result, would it be possible to detect the speed (FPS) at which the audio was recorded?
I'm thinking that a Shazam catalog created from an audio file could be used to compare against the speed of live-recorded audio.
Thank you!
The official iOS Photos app can display some EXIF-related metadata (e.g. camera and lens info, ISO, shutter speed, F-number) even when photos are offloaded to iCloud and the device is not connected to the internet (e.g. airplane mode).
However, with Photos.framework, we need to download the photos to retrieve that metadata (which means it does not work in airplane mode).
I tried the following methods, but none of them worked when photos were offloaded to iCloud and the device was in airplane mode:
Requesting data with PHImageManager.default().requestImageDataAndOrientation
Result: It does not return Data if the photo is not stored locally on the device, even with options.deliveryMode = .fastFormat
Converting PHAsset#localIdentifier to an AssetsLibrary.framework URL (assets-library://asset/...)
(I am aware that AssetsLibrary.framework is deprecated, but this was just a test.)
Result: If PHImageManager does not return Data, ALAsset#defaultRepresentation().metadata() returns an empty NSDictionary
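For reference, the first approach (requestImageDataAndOrientation with .fastFormat) was along these lines; this is a simplified sketch, and the EXIF parsing at the end is just to show the goal:

import Photos
import ImageIO

func fetchMetadata(for asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat
    options.isNetworkAccessAllowed = false  // airplane mode: no iCloud download possible anyway

    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, info in
        // data is nil here when the original is offloaded to iCloud
        guard let data,
              let source = CGImageSourceCreateWithData(data as CFData, nil),
              let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] else {
            print("No local data available: \(String(describing: info))")
            return
        }
        print(properties[kCGImagePropertyExifDictionary] ?? "no EXIF")
    }
}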
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB.
For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).
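The recording pipeline is set up roughly like this (a simplified sketch under my assumptions; the AudioRecorder name is a placeholder, and error handling and finishing the writer are omitted):

import AVFoundation

final class AudioRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureAudioDataOutput()
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?

    func start(device: AVCaptureDevice, to url: URL) throws {
        session.addInput(try AVCaptureDeviceInput(device: device))
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture.queue"))
        session.addOutput(output)

        // Audio settings taken from an AVOutputSettingsAssistant preset
        let assistant = AVOutputSettingsAssistant(preset: .preset1920x1080)
        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: assistant?.audioSettings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        self.writer = writer
        self.writerInput = input

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let writer, let writerInput else { return }
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }
}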
The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app).
The problem: Noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
The problem is easy to reproduce, even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings from an AVOutputSettingsAssistant preset. I’m running macOS 15.0 (24A335).
I’ve filed a feedback including a demo project → FB15333298 🎟️
I would greatly appreciate any help!
Have a great day,
Martin
I use the iTunesLibrary framework in one of my apps. Starting with macOS Sequoia 15.1, I can't create the ITLibrary object anymore; it fails with the following error:
Connection to amplibraryd was interrupted. clientName:iTunesLibrary(ITLibraryLoader)
Error connecting to the server via proxy object Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
configure failed: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
I created a new sandboxed macOS app, added the Music folder read permission, and it reproduced the error:
import SwiftUI
import iTunesLibrary

@main
struct ITLibraryLoaderApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        Button("Load ITLibrary") {
            loadLibrary()
        }
    }

    func loadLibrary() {
        do {
            let _ = try ITLibrary(apiVersion: "1.0", options: .none)
        }
        catch {
            print(error)
        }
    }
}
I restarted my developer machine and the Music app with no luck.
After an Album, Playlist, or collection of songs has been added to the ApplicationMusicPlayer queue, clearing the queue can easily be accomplished with:
ApplicationMusicPlayer.shared.queue.entries = []
This transitions the player to a paused state with an empty queue.
After queueing a Station, the same code cannot be used to clear the queue. Instead, it causes the queue to be refilled with a current and next MusicItem from the Station.
What's the correct way to detect that the ApplicationMusicPlayer is in the state where it's being refilled by a Station and clear it? I've tried the following approaches with no luck:
// Reinitialize queue
ApplicationMusicPlayer.shared.queue = ApplicationMusicPlayer.Queue()

// Create empty Queue
let songs: [Song] = []
let emptyQueue = ApplicationMusicPlayer.Queue(for: songs)
ApplicationMusicPlayer.shared.queue = emptyQueue
We have a universal iOS/tvOS app that also supports iOS App on Mac.
In our AVPlayer-based video player we support AirPlay with AVRouteDetector and AVRoutePickerView. We play HLS streams.
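The AirPlay-related part of the player is set up roughly like this (a simplified sketch; the stream URL is a placeholder):

import AVKit
import UIKit

final class PlayerViewController: UIViewController {
    let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!)  // placeholder HLS URL
    let routeDetector = AVRouteDetector()

    override func viewDidLoad() {
        super.viewDidLoad()
        routeDetector.isRouteDetectionEnabled = true

        let picker = AVRoutePickerView()
        picker.prioritizesVideoDevices = true
        view.addSubview(picker)

        player.allowsExternalPlayback = true
        player.play()
    }
}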
When we try to AirPlay from an iOS device to an Apple TV or a Mac that has our app installed, it doesn't work. The receiver is marked as active in the route picker UI but the video doesn't show up on the receiver and playback stops.
When our app isn't installed on the receiver device, everything works as expected.
Has anyone encountered the same issue? Any solutions available for this?
https://api.media.apple.com/v1/feed/exports/song_2024-11-02T16-02/parts?limit=200&offset=400
This is the API used to get the Parquet file URLs. I need all the URLs in one API call; right now, if I don't provide the limit, it defaults to 100, and the maximum is 200.
How can I get all the records in one call? Or at least the count of Parquet records in one call?
Hi,
I need to get the total number of Parquet files present in the Apple Music Feed API for songs and artists. There are limit and offset parameters, but the limit is capped at 200 records and the required offset is uncertain.
How can I get the total number of Parquet files without querying the Apple Music Feed API multiple times?
Need help regarding this. Thanks!
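To make the question concrete, this is roughly the per-page request I currently have to repeat with offset += 200 until a page comes back empty (a sketch; the developer token handling is simplified and the response decoding is omitted, since the schema isn't shown here):

import Foundation

// Fetch one page of part URLs from the exports endpoint.
func fetchPartsPage(offset: Int, developerToken: String) async throws -> Data {
    var components = URLComponents(string: "https://api.media.apple.com/v1/feed/exports/song_2024-11-02T16-02/parts")!
    components.queryItems = [
        URLQueryItem(name: "limit", value: "200"),          // current maximum per request
        URLQueryItem(name: "offset", value: String(offset))
    ]
    var request = URLRequest(url: components.url!)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return data  // decode and collect the Parquet URLs from here
}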
The media services used for HLS streaming in an AVPlayer seem to crash if your segments are too large.
Anything over 20 Mbps seems to cause a crash. I have also tried adjusting the segment length to 1 second, and it didn't help.
I am remuxing Dolby Vision and HDR video and want to avoid transcoding and losing any metadata. However the segments are too large.
Is there a workaround for this? Otherwise it seems AVFoundation is not suited to high bitrate HLS and I should be using MPV or similar.
I'm using iCloud Music Library on macOS 14.1 (23B74) and iOS 17.1.
I'm using MusicKit to find songs that do not have artwork. On iOS, Song.artwork is nil for items I know have no artwork. On macOS, Song.artwork is not nil, yet when those songs are shown in Music.app they have no artwork. Is this expected? Alternatively, is there a more correct way to determine that a Song has no artwork?
I have also filed FB13315721.
Thank you for any tips!
Hey,
I am fairly new to working with AVFoundation. As far as I could research on my own, if I want to get metadata from, say, a .m4a audio file, I have to fetch the data and then create an AVAsset from it. My files are all on local servers, so I can't just pass in the URL.
The extraction of the metadata works fine; however, those AVAssets create a huge overhead in storage consumption. To my knowledge, the Data instance for each audio file and its AVAsset should only live inside the function I call to extract the metadata, yet something clearly persists on disk: the app's storage footprint grows by multiple gigabytes (equal to the library size I test with). The only data I purposefully save with SwiftData is the album artwork.
Is this normal behavior for AVAssets or am I missing some detail?
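For illustration, this is the kind of pattern I mean (a hypothetical sketch, not my exact code): the downloaded Data is written to a temporary file, the metadata is read, and the file is removed again.

import AVFoundation
import Foundation

// Write the downloaded Data to a temporary file, load the metadata, then delete
// the temporary copy so nothing should remain on disk.
func extractMetadata(from data: Data) async throws -> [AVMetadataItem] {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("m4a")
    try data.write(to: tempURL)
    defer { try? FileManager.default.removeItem(at: tempURL) }  // clean up the temp copy

    let asset = AVURLAsset(url: tempURL)
    return try await asset.load(.metadata)
}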
PS. If I forgot to mention something important, please ask. This is my first ever post, so I'm not too sure what is worth mentioning.
Thank you in advance!
Denis
We are experiencing an issue with our HLS MPEG-TS streams on Apple devices, where the AVPlayer in our iOS app and in Safari jumps back to the start when the player automatically changes quality. This occurs despite the stream still indicating that it is live and there being no change in the seek bar. After testing our streams with the Apple HLS Validator, the only problem that occurred was a "Measured peak bitrate compared to multivariant playlist declared value exceeds error tolerance" error.
On Chrome and in our Android app this playback bug does not happen. Has anyone else experienced similar issues with AVPlayer?
I am attempting to do batch transcription of audio files exported from Voice Memos, and I am running into an interesting issue. If I only transcribe a single file it works every time, but if I try to batch it, only the last one works and the others fail with "No speech detected". I assumed it must be something about concurrency, so I implemented what I think should remove any chance of transcriptions running in parallel. With a mocked-up unit of work, everything looked good. So I added the transcription back in, and:
1: It still fails on all but the last file. This happens whether I am processing 10 files or just 2.
2: It no longer processes in order; any file can be the last one that succeeds. It doesn't seem to be related to file size: I have had paragraph-sized notes finish last, but also a single short sentence.
I left the mocked processFile() for reference.
Any insights would be greatly appreciated.
import Speech
import SwiftUI

struct ContentView: View {
    @State private var processing: Bool = false
    @State private var fileNumber: String?
    @State private var fileName: String?
    @State private var files: [URL] = []

    let locale = Locale(identifier: "en-US")
    let recognizer: SFSpeechRecognizer?

    init() {
        self.recognizer = SFSpeechRecognizer(locale: self.locale)
    }

    var body: some View {
        VStack {
            if files.count > 0 {
                ZStack {
                    ProgressView()
                    Text(fileNumber ?? "-")
                        .bold()
                }
                Text(fileName ?? "-")
            } else {
                Image(systemName: "folder.badge.minus")
                Text("No audio files found")
            }
        }
        .onAppear {
            files = getFiles()
            Task {
                await processFiles()
            }
        }
    }

    private func getFiles() -> [URL] {
        do {
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let path = documentsURL.appendingPathComponent("Voice Memos").absoluteURL
            let contents = try FileManager.default.contentsOfDirectory(at: path, includingPropertiesForKeys: nil, options: [])
            let files = (contents.filter { $0.pathExtension == "m4a" }).sorted { url1, url2 in
                url1.path < url2.path
            }
            return files
        }
        catch {
            print(error.localizedDescription)
            return []
        }
    }

    private func processFiles() async {
        var fileCount = files.count
        for file in files {
            fileNumber = String(fileCount)
            fileName = file.lastPathComponent
            await processFile(file)
            fileCount -= 1
        }
    }

    // private func processFile(_ url: URL) async {
    //     let seconds = Double.random(in: 2.0...10.0)
    //     await withCheckedContinuation { continuation in
    //         DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
    //             continuation.resume()
    //             print("\(url.lastPathComponent) \(seconds)")
    //         }
    //     }
    // }

    private func processFile(_ url: URL) async {
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
        recognitionRequest.requiresOnDeviceRecognition = false
        recognitionRequest.shouldReportPartialResults = false

        await withCheckedContinuation { continuation in
            recognizer?.recognitionTask(with: recognitionRequest) { (transcriptionResult, error) in
                guard transcriptionResult != nil else {
                    print("\(url.lastPathComponent.uppercased())")
                    print(error?.localizedDescription ?? "")
                    return
                }
                if ((transcriptionResult?.isFinal) == true) {
                    if let finalText: String = transcriptionResult?.bestTranscription.formattedString {
                        print("\(url.lastPathComponent.uppercased())")
                        print(finalText)
                    }
                }
            }
            continuation.resume()
        }
    }
}
I have an app that edits photos in your library. When I call
try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)
The following is logged to the console:
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
What does that mean and how can I resolve it?
Xcode Version 16.0 (16A242d)
iOS 18.1 (22B82)
Somehow I have a corrupted audio plug-in authentication problem. I'm on an Apple silicon M1 Mac, and two audio plug-ins that were installed and working will now not authenticate. Both vendors are unable to troubleshoot, and I think the issue is a corrupted low-level file. One product authenticates correctly when I create a new user, but the other plug-in only authenticates on the original user account and not on the newly created one. Reinstalling the plug-ins and macOS does not fix the issue. Any thoughts?
How can I capture the audio being played by Apple Music on iOS and combine it with an FFT to achieve audio visualization?
I am experiencing a bug when using an AVCapturePhotoBracketSettings object to capture a bracketed photo sequence on iPhone 16 Pro.
Specifically, when I pass in an array of exposure values [-x, 0, +x] where x >= 3, the high-exposure photo capture returns a black image.
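The bracket settings are created roughly like this (a simplified sketch of the approach, not the exact sample code; photoOutput is assumed to be an already-configured AVCapturePhotoOutput):

import AVFoundation

func captureBracket(with photoOutput: AVCapturePhotoOutput, delegate: any AVCapturePhotoCaptureDelegate) {
    // Three auto-exposure brackets at -3, 0 and +3 EV
    let exposureValues: [Float] = [-3, 0, 3]
    let bracketedSettings = exposureValues.map {
        AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
    }
    let settings = AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,  // no RAW
        processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc],
        bracketedSettings: bracketedSettings
    )
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}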
STEPS TO REPRODUCE
Run the sample app I have provided on an iPhone 16 Pro
Notice that bracketed images captured where the eV is set to [-3,0,+3], [-4,0,+4], or [-5,0,+5] return a black image for the high exposure photo.
Notice that on other iOS devices (like iPhone 13 Pro), the high exposure photo is returned as high brightness as expected.
I have also added two folders in the sample project that show screenshots of the bug: iPhone13Pro & iPhone16Pro
Sample Project:
https://www.icloud.com/iclouddrive/090O_68Z0Nh2UOxmPRwu56Tmw#Focused16ProBracketedCaptureBug
I'm very excited about the new MusicLibrary API, but after a couple of days of playing around with it, I have to say that I find the implementation of filtering MusicLibraryRequests a little confusing. MPMediaQuery has a fairly extensive list of predicates that can be applied, including string and persistentID comparisons for artist, album artist, genre, and more. It also lets you filter on an item’s title. MusicLibraryRequests let you filter on the item’s ID, or on its MusicKit Artist and Genre relationships. To me, this seems like it adds an extra step.
With an MPMediaQuery, if I wanted to fetch every album by a given artist, I’d apply an MPMediaPropertyPredicate looking at MPMediaItemPropertyAlbumArtist and compare the string. It was also easy to change the MPMediaPredicateComparison to .contains to match more widely. If I wanted to surface albums by “Aesop Rock” or “Aesop Rock & Blockhead,” I could use that.
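For reference, that MPMediaQuery approach is just a few lines (a quick sketch from memory):

import MediaPlayer

// Albums whose album artist contains "Aesop Rock" (also matches "Aesop Rock & Blockhead")
let predicate = MPMediaPropertyPredicate(
    value: "Aesop Rock",
    forProperty: MPMediaItemPropertyAlbumArtist,
    comparisonType: .contains
)
let albumsQuery = MPMediaQuery.albums()
albumsQuery.addFilterPredicate(predicate)
let albums = albumsQuery.collections ?? []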
In the MusicLibraryRequest implementation, it looks like I need to perform a MusicLibraryRequest<Artist> first in order to get the Artist objects. There’s no filter for the name property, so if I don’t have their IDs, I’ve got to use filter(text:). From there, I can take the results of that request and apply them to my MusicLibraryRequest<Album> using the filter(matching:memberOf) function.
I could use filter(text:) on the MusicLibraryRequest<Album>, but that filters across multiple properties (title and artistName?) and is less precise than defining the actual property I want to match against.
I think my ideal version of the MusicLibraryRequest API would offer something like filter(matching:equalTo:) or filter(matching:contains:) that worked off of KeyPaths rather than relationships. That seems more intuitive to me. I’m not saying we need every property from every filterable MPMediaItemProperty key, but I’d love to be able to do it on title, artistName, and other common metadata. That might look something like:
filter(matching: \.title, contains: “Abbey Road”)
filter(matching: \.artistName, equalTo: “Between The Buried And Me”)
I noticed that filter(text:) is case insensitive, which is awesome, and something I’ve wanted for a long time in MPMediaPropertyPredicate. As a bonus, it would be great if a KeyPath based filter API supported a case sensitivity flag. This is less of a problem when dealing with Apple Music catalog content, but users’ libraries are a harsh environment, and you might have an artist “Between The Buried And Me” and one called “Between the Buried and Me.” It would be great to get albums from both with something like:
filter(matching: \.artistName, equalTo: “Between The Buried And Me”, caseSensitive: false)
I've submitted the above as FB10185685. I also submitted another feedback this morning regarding filter(text:) and repeating text as FB10184823.
My last wishlist item for this API (for the time being!) is exposing the MPMediaItemPropertyAlbumPersistentID as an available filter attribute. I know, I know… hear me out. If you take a look at the other thread I made today, you’ll see that due to missing metadata in MusicKit, I still have some use cases where I need to be able to reference an MPMediaItem and might need to fetch its containing MPMediaItemCollection to get at other tracks on the album. It would be nice to seamlessly be able to fetch the MPMediaItemCollection or the library Album using a shared identifier, especially when it comes to being able to play the album in MusicKit’s player rather than Media Player’s.
I've submitted that last bit as FB10185789.
Thanks for bearing with my walls of text today. Keep up the great work!
I'm creating an app that listens to another app's sound; in this use case, screen data is not needed.
However, if I don't call SCStream#addStreamOutput(_, type: .screen, ...), the console shows this error:
[ERROR] _SCStream_RemoteVideoQueueOperationHandlerWithError:701 stream output NOT found. Dropping frame
Currently I'm setting SCStreamConfiguration#minimumFrameInterval to a large value (e.g. 0.1 fps) as a workaround, but it would be good if I could completely disable screen capture for best performance.
Is there any way to disable screen capture and capture only the app's audio?
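For reference, my current setup is roughly the following (a simplified sketch; the .screen output and the low frame rate are the parts I'd like to drop):

import ScreenCaptureKit
import CoreMedia

final class AudioCapturer: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        guard let display = content.displays.first else { return }
        let filter = SCContentFilter(display: display, excludingWindows: [])

        let configuration = SCStreamConfiguration()
        configuration.capturesAudio = true
        configuration.excludesCurrentProcessAudio = true
        // Workaround: keep the (unwanted) video path as cheap as possible (0.1 fps)
        configuration.minimumFrameInterval = CMTime(value: 10, timescale: 1)

        let stream = SCStream(filter: filter, configuration: configuration, delegate: nil)
        let queue = DispatchQueue(label: "capture.queue")
        try stream.addStreamOutput(self, type: .audio, sampleHandlerQueue: queue)
        // Without this .screen output, the "stream output NOT found" error above is logged
        try stream.addStreamOutput(self, type: .screen, sampleHandlerQueue: queue)
        try await stream.startCapture()
        self.stream = stream
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        guard type == .audio else { return }  // screen frames are simply dropped
        // process the audio sample buffer...
    }
}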