Hello,
As explained in this link, AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format.
I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion.
Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC?
Example:
extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}
I've tried multiple times but without success.
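For comparison, here is a minimal sketch of a writer-side alternative (this assumes the ultimate goal is an AAC audio track in the output file): an AVAssetWriterInput configured with AAC output settings accepts linear PCM sample buffers directly and performs the encoding itself, so no manual CMSampleBuffer conversion would be required. The helper name and bit rate below are illustrative, not an established API.

import AVFoundation

// Hypothetical helper: a writer input that encodes appended LPCM buffers to AAC.
func makeAACAudioInput(sampleRate: Double = 44_100, channels: Int = 2) -> AVAssetWriterInput {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: sampleRate,
        AVNumberOfChannelsKey: channels,
        AVEncoderBitRateKey: 128_000
    ]
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    input.expectsMediaDataInRealTime = false
    return input
}

// Usage sketch: append the LPCM buffers from AVAssetReaderTrackOutput unchanged.
// while let buffer = readerOutput.copyNextSampleBuffer(), aacInput.isReadyForMoreMediaData {
//     aacInput.append(buffer)
// }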
Software: iOS 18.1
Xcode: 16.0
Thank you!
Hi,
I'm trying to insert CMSampleBuffers, with pauses between them, into an AVAssetWriterInput that has been configured with expectsMediaDataInRealTime = false. That is, I insert fixed-length audio at specific (increasing and non-overlapping) time points with large gaps in between, e.g. 5 seconds of audio at t=3.0, 5 seconds of audio at t=12.0, etc.
The first audio sample plays at t=3 in the final output video as expected. But then all the other samples are bunched up immediately after it instead of being scheduled at the correct time. Below is my code.
I'm just loading the asset and then readjusting its timestamps to be correct in the absolute timeline. Why do they not get scheduled correctly when the timestamps and durations are definitely correct and non-overlapping?
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == outputSize else { return }
    let frameTime = CMTimeMake(value: frameCount, timescale: frameRate)
    if videoInput?.isReadyForMoreMediaData == true {
        pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        frameCount += 1
        currentTime = frameTime
    }
}
func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    defer {
        try? FileManager.default.removeItem(at: tempURL)
    }
    try audioData.write(to: tempURL)
    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!
    let audioReader = try AVAssetReader(asset: asset)
    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
    audioReader.add(audioReaderOutput)
    guard audioReader.startReading() else {
        throw NSError(domain: "AudioReaderError", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to start reading audio"])
    }
    let baseInsertionTime = currentTime.convertScale(duration.timescale, method: .default) // Capture the current video time when the method is called
    print("Adding audio clip at \(baseInsertionTime.seconds) seconds, duration: \(duration.seconds) seconds")
    var audioTime = CMTime.zero
    var totalDuration: Double = 0
    while let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
        let bufferDuration = CMSampleBufferGetDuration(sampleBuffer)
        let adjustedBuffer = adjustTimeStamp(of: sampleBuffer, by: baseInsertionTime)
        while !audioInput!.isReadyForMoreMediaData {
            try await Task.sleep(nanoseconds: 100_000_000) // 0.1 second
        }
        audioInput!.append(adjustedBuffer)
        print(" Adjusted time: \(adjustedBuffer.presentationTimeStamp.seconds)")
        audioTime = CMTimeAdd(audioTime, bufferDuration)
        totalDuration += bufferDuration.seconds
    }
    print("Finished adding audio clip. Last sample at: \(CMTimeAdd(baseInsertionTime, audioTime).seconds) seconds")
    print(" totalDuration=\(totalDuration)")
}
private func adjustTimeStamp(of sampleBuffer: CMSampleBuffer, by timeOffset: CMTime) -> CMSampleBuffer {
    var count: CMItemCount = 0
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: nil, entriesNeededOut: &count)
    var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &timingInfo, entriesNeededOut: nil)
    for i in 0..<count {
        timingInfo[i].presentationTimeStamp = CMTimeAdd(timingInfo[i].presentationTimeStamp, timeOffset)
        if timingInfo[i].decodeTimeStamp != .invalid {
            timingInfo[i].decodeTimeStamp = CMTimeAdd(timingInfo[i].decodeTimeStamp, timeOffset)
        } else {
            timingInfo[i].decodeTimeStamp = timingInfo[i].presentationTimeStamp
        }
    }
    var adjustedBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer, sampleTimingEntryCount: count, sampleTimingArray: &timingInfo, sampleBufferOut: &adjustedBuffer)
    return adjustedBuffer!
}
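One possible workaround, sketched below under the assumption that the reader settings above (16-bit interleaved stereo at 44.1 kHz) are kept, is to append silent LPCM sample buffers covering each gap so that the audio media stays contiguous. The helper is hypothetical, not part of the code above.

import AVFoundation
import CoreMedia

// Hypothetical helper: build a silent LPCM sample buffer of a given duration
// at a given presentation time, matching the 16-bit interleaved stereo format.
func makeSilentBuffer(duration: CMTime,
                      at presentationTime: CMTime,
                      sampleRate: Double = 44_100,
                      channels: UInt32 = 2) -> CMSampleBuffer? {
    var asbd = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2 * channels,
        mFramesPerPacket: 1,
        mBytesPerFrame: 2 * channels,
        mChannelsPerFrame: channels,
        mBitsPerChannel: 16,
        mReserved: 0)

    var formatDescription: CMAudioFormatDescription?
    CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &asbd,
                                   layoutSize: 0, layout: nil,
                                   magicCookieSize: 0, magicCookie: nil,
                                   extensions: nil,
                                   formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return nil }

    let frameCount = Int(duration.seconds * sampleRate)
    let dataLength = frameCount * Int(asbd.mBytesPerFrame)

    // Allocate a zero-filled block buffer for the silence.
    var blockBuffer: CMBlockBuffer?
    CMBlockBufferCreateWithMemoryBlock(allocator: kCFAllocatorDefault, memoryBlock: nil,
                                       blockLength: dataLength, blockAllocator: nil,
                                       customBlockSource: nil, offsetToData: 0,
                                       dataLength: dataLength,
                                       flags: kCMBlockBufferAssureMemoryNowFlag,
                                       blockBufferOut: &blockBuffer)
    guard let block = blockBuffer else { return nil }
    CMBlockBufferFillDataBytes(with: 0, blockBuffer: block, offsetIntoDestination: 0, dataLength: dataLength)

    var sampleBuffer: CMSampleBuffer?
    CMAudioSampleBufferCreateReadyWithPacketDescriptions(
        allocator: kCFAllocatorDefault, dataBuffer: block, formatDescription: format,
        sampleCount: frameCount, presentationTimeStamp: presentationTime,
        packetDescriptions: nil, sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}

// Usage sketch: before appending a clip at baseInsertionTime, append
// makeSilentBuffer(duration: gap, at: endOfPreviousAudio) to audioInput.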
Our app, Universalis (Apple ID 284942719) plays audio successfully on all versions of iOS up to and including iOS 17. It uses the old MediaPlayer interface because it is targeted at versions all the way down to iOS 12.
On iOS 18, it plays audio but CarPlay fails to show the Now Playing screen. Instead, a message box pops up in CarPlay saying "There was a problem loading this content", with an OK button. Nevertheless the audio plays correctly.
This has been reported in the wild by a user of iOS 18 with CarPlay. I am also able to reproduce it locally, running the app in Xcode with the CarPlay Simulator, with an iPhone using iOS 18.0 or iOS 18.1. Earlier versions work correctly.
Looking at the console log in CarPlay, the following error appears about 10 seconds before the message box pops up:
MSVEntitlementUtilities - Process Universalis PID[1173] - Group: (null) - Entitlement: com.apple.mediaremote.external-artwork-validation - Entitled: NO - Error: (null)
The message has an orange background which appears to mean that it does not come directly from NSLog in the app. The message appears immediately after the request handler of MPMediaItemArtwork has been called requesting a 128 x 128 image and has successfully returned a 128x128 UIImage object.
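For context, the artwork is supplied to the Now Playing info in the usual way; a minimal sketch of that setup is below (the bounds size and image source are illustrative, not the app's actual code).

import MediaPlayer
import UIKit

// Minimal Now Playing artwork setup; the request handler is the closure that
// CarPlay calls with a 128 x 128 size just before the error appears.
func setNowPlayingArtwork(image: UIImage) {
    let artwork = MPMediaItemArtwork(boundsSize: image.size) { _ in
        // Return the image for the requested size (CarPlay asks for 128 x 128).
        return image
    }
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyArtwork] = artwork
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}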
This has been reported through Feedback Assistant: Bug report ID is FB15343941
How can we work around this error?
Hello,
I am getting the following error while attempting to run my LockedCameraCapture compatible app on an iOS 15 device:
dyld[434]: Library not loaded: '/System/Library/Frameworks/LockedCameraCapture.framework/LockedCameraCapture'
Referenced from: '/private/var/containers/Bundle/Application/.../MyApp.app/MyApp.debug.dylib'
Reason: tried: '/System/Library/Frameworks/LockedCameraCapture.framework/LockedCameraCapture' (no such file)
Of course iOS 15 doesn't have the library for LockedCameraCapture, but I have had no issue including Lock Screen Widgets (which require iOS 16), so I am not sure why the error is popping up.
Thank you!
The app needs to play remote videos. Sometimes it takes a very long time (~10 seconds) to load the media and start playback with AVPlayer.
So I use a timer to check, and try to play the next video if loading takes more than 5 seconds:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil]; // line 1
NSArray *keys = @[@"playable"];
mediaLoaded = NO;
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() { // line 2
    mediaLoaded = YES; // line 4
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.player replaceCurrentItemWithPlayerItem:[AVPlayerItem playerItemWithAsset:asset]];
        [self.player playImmediatelyAtRate:playSpeed];
    });
}];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    if (!mediaLoaded) {
        [self playNextVideo]; // line 3
    }
});
So the flow is: line 1 (of video 1) - line 2 (of video 1) - line 3 (if over 5 seconds and video 1 is not playing) - line 1 (of video 2) - ...
Now the problem is that line 2 seems to block line 1: only after line 4 (for video 1, roughly 10 seconds later), i.e. once the completionHandler has executed, will line 2 (for video 2) be executed.
Can anybody give any insight?
Thx!
Hi,
I'm recording videos frame by frame and occasionally a sound plays (from an MP3 asset). I want to composite these sounds into the video at the correct timings. But this doesn't work. Really pulling my hair out here. I've tried everything, including adding one after another and then inserting silence in between (allegedly this pushes subsequent clips back) but nothing works.
Here, _currentTime is the current time according to the video frames added, which are added at 20Hz.
You can see I am adding silence long enough to cover the time from the end of the last audio clip to now, plus extra padding to contain the audio we are about to add. It doesn't matter if I remove this; it just doesn't work. Sometimes I can get two pieces of audio to play, but never a third, and usually only the first audio plays and then nothing after.
I'm completely stumped.
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == _outputSize else { return }
    let frameTime = CMTimeMake(value: Int64(_frameCount), timescale: _frameRate)
    if _videoInput?.isReadyForMoreMediaData == true {
        _pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        _frameCount += 1
        _currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    try audioData.write(to: tempURL)
    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!
    let currentAudioTime = _currentTime.convertScale(duration.timescale, method: .default)
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: _lastAudioClipEndTime, end: currentAudioTime))
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: currentAudioTime, end: CMTimeAdd(currentAudioTime, duration)))
    let timeRange = CMTimeRangeMake(start: .zero, duration: duration)
    try _audioTrack?.insertTimeRange(timeRange, of: audioTrack, at: currentAudioTime)
    _lastAudioClipEndTime = CMTimeAdd(currentAudioTime, duration)
    try FileManager.default.removeItem(at: tempURL)
    _audioClipTimeRanges.append(CMTimeRangeMake(start: _currentTime, duration: duration))
}
Thank you,
-- B.
Since updating to iOS 18, the display automatically dims while the iPhone is on a video call with LINE, and it keeps doing so.
I upgraded both of my phones to iOS 18, and neither allows me to rate my music with the stars I want. I disabled the option and tried it, then re-enabled it, and nothing... Something may be wrong with iOS 18.
I recently had a situation where, with the new beta update, the sound of my AirPods Pro 2 suddenly became all glitchy and spatial audio didn't work at all.
I've already checked my AirPods with other devices and they're completely fine.
Hope this helps!
We have a VoIP calling application that releases resources at the end of a call. When the AudioOutputUnitStop API is invoked, it sometimes takes up to 700 ms to return.
If we comment out that API call as a test, then the AudioUnitUninitialize API takes up to 700 ms instead.
Once the cleanup is done, as part of the call flow, the application sends a SIP BYE message. Hence, in cases where the API takes more than 200 ms, the BYE message is sent with that much delay and gets blocked by the server's DDoS settings. (The DDoS timer starts as soon as the UDP socket is disconnected from the client and times out within 200 ms; a BYE request arriving after that time gets blocked.)
We need to understand why a delay of more than 200 ms is sometimes observed, while in other cases it takes less than 50 ms.
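One possible mitigation, sketched below under the assumption that the SIP BYE does not depend on the audio unit having been stopped first, is to send the BYE before tearing down the audio unit and to run the teardown off the signaling path, so a slow AudioOutputUnitStop cannot push the BYE past the 200 ms window. The function and parameter names are illustrative.

import AudioToolbox
import Dispatch

// Hypothetical call-teardown sketch: signal first, clean up audio afterwards.
func endCall(audioUnit: AudioUnit, sendBye: @escaping () -> Void) {
    sendBye() // send the SIP BYE immediately
    DispatchQueue.global(qos: .utility).async {
        AudioOutputUnitStop(audioUnit)        // occasionally takes up to ~700 ms
        AudioUnitUninitialize(audioUnit)
        AudioComponentInstanceDispose(audioUnit)
    }
}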
I have an app that plays audio, and its behaviour has changed in watchOS 11: I can no longer figure out how to play the audio through headphones.
To play audio I do the following:
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback,
                        mode: .default,
                        policy: .longFormAudio,
                        options: [])
let activated = try await session.activate()
if activated {
    // play audio
}
In previous versions, try await session.activate() would bring up a route picker where the user could select their headphones. Now on watchOS 11 it just plays the audio out of the speaker.
Maybe that's what some people want but if they do want it to play out of the headphones I can't see how I can give that option now? There's no AVRoutePickerView available on watchOS for selecting it.
I've tried setting the category to .multiRoute instead of .playback and that does bring up the picker but selecting the speaker results in an error code and selecting the headphones results in it saying it cannot find my headphones (which shouldn't be the case since Apple Music on watchOS finds them).
I tried overriding the output with try session.overrideOutputAudioPort(.speaker), but the compiler complains that .speaker isn't available on watchOS, which is strange because, if I understand correctly, it's now possible to play through the speaker on at least some Apple Watches.
So is this a bug or is there some way I've not found of playing audio through the headphones?
Hello,
I noticed that SFSpeechRecognizer is broken on iOS 18. During a recognition task, it keeps dropping the recognized text on every pause. For example, if you say "how are you fine", it will drop the "how are you" part and only give you "fine" as the result.
Say "how are you <pause> fine"
// iOS 17 ✅ (perfect final result)
How
How are
How are you
How are you.
How are you. Fine.
// iOS 18 ❌
How
How are
How are you
How are you
Fine
(The text before the pause is dropped, and punctuation is no longer recognized.)
Reproducing the issue:
1. Download the official sample project.
2. Run it on an iOS 18 device or simulator.
3. Say "how are you fine".
4. Only "fine" will be displayed.
I am using AVAudioEngine to play back samples in an iOS game. I would like to change the playback rate of a sample in real time.
When using AVAudioUnitVarispeed to change the playback rate, it creates stutters in the game because it isn't processed in real time (as stated here: AVAudioUnitTimeEffect).
The other option I found to change the rate is to use an AVAudioEnvironmentNode and change the rate of the AVAudioPlayerNode. That works without creating stutters but limits the valid values for the rate to 0.5 through 2.0 (I need rates higher than 2.0). See here: AVAudio3DMixing.
Are there any other ways to play back a sample with real-time rate control?
Hello,
I am trying to get the video from an HDMI USB capture card and show it in a preview layer at 60 fps. The device I am using (ShadowCast 2) supports 1080p at 60 fps in "yuvs" and "420v".
This is my code for building the previewLayer, with the uninteresting parts stripped away and error handling removed.
I am using the AVFrameRateRange because the capture device does not directly support 60.00 fps, but rather <AVFrameRateRange: 0x600000875680 60.00 - 60.00 (1000000 / 60000240 - 1000000 / 60000240)>.
@Observable
final class AVFoundationService: AVService {
    // Live View
    private let session: AVCaptureSession = .init()

    var previewLayer: AVCaptureVideoPreviewLayer {
        let layer = AVCaptureVideoPreviewLayer(session: session)
        layer.videoGravity = .resizeAspect
        return layer
    }

    var activeVideoDevice: AVCaptureDevice? {
        // TODO: implement correct logic
        if let device = videoDevices.first(where: { $0.localizedName.contains("Shadow") }) {
            return device
        }
        return AVCaptureDevice.default(for: .video)
    }

    func setupStreamDemo(completion: @escaping (Error?) -> Void) {
        session.beginConfiguration()
        if let device = activeVideoDevice {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("explode")
                }
                for format in device.formats {
                    let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
                    if dimensions.width == 1920 && dimensions.height == 1080 && format.formatDescription.mediaSubType.description == "'yuvs'" {
                        let foundFPS = format.videoSupportedFrameRateRanges.first {
                            Int($0.minFrameRate) == 60 && Int($0.maxFrameRate) == 60
                        }
                        try device.lockForConfiguration()
                        device.activeFormat = format
                        device.activeVideoMinFrameDuration = foundFPS!.minFrameDuration
                        device.activeVideoMaxFrameDuration = foundFPS!.minFrameDuration
                        device.unlockForConfiguration()
                    }
                }
            } catch {
                return completion(error)
            }
        }
        session.commitConfiguration()
        session.startRunning()
        completion(nil)
    }
}
I am using the following code in SwiftUI to show the AVCaptureVideoPreviewLayer.
struct VideoPreviewView: NSViewRepresentable {
    private let previewLayer: AVCaptureVideoPreviewLayer

    func makeNSView(context: Context) -> NSView {
        let view = NSView()
        view.layer = self.previewLayer
        view.layer?.frame = view.bounds
        return view
    }

    func updateNSView(_ nsView: NSView, context: Context) {
        if let layer = nsView.layer as? AVCaptureVideoPreviewLayer {
            layer.session = self.previewLayer.session
        }
    }
}
When I now run my app, it ignores whatever I set for device.activeVideoMinFrameDuration and/or device.activeVideoMaxFrameDuration. If I set it to 10 fps, it runs at 30; if I set 60, it still runs at 30.
If I start QuickTime in parallel with my app and start a "Recording" from my USB capture card, it switches to 60 fps mode.
I am on macOS Sequoia 15.0 with Xcode 16.0.
What am I doing wrong?
I use AVPlayer to play HLS video successfully on macOS Sonoma, but I encountered this error on macOS Sequoia. Please help me:
Error Domain=AVFoundationErrorDomain Code=-11833 ‘Cannot Decode’ UserInfo={NSUnderlyingError=0x600001e57330 {Error Domain=CoreMediaErrorDomain Code=-12906 ‘(null)’}, NSLocalizedFailureReason=The decoder required for this media cannot be found., AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}
Thanks!
I am facing an issue with the voicemail feature. When someone calls me, the call goes to voicemail only if I tap the voicemail icon. If I do not respond at all to the incoming call, the caller is not given the option to record a voicemail. I have tried switching voicemail off and on; it works on other devices (iPhone 14 and 15) for my family members, just not for me. How can I resolve this?
When attempting to play a video using AVPlayer on iOS 18.0, I am encountering an error that does not occur on versions earlier than 18.0. Could you please advise what might be causing this issue?
Error Domain=AVFoundationErrorDomain Code=-11828
Error Domain=NSOSStatusErrorDomain Code=-12847 "(null)"
These are the error codes.
This is the response information for the URL, retrieved using the curl command:
HTTP/1.1 200
Content-Disposition: inline;filename="sample.mp4"
Accept-Ranges: bytes
ETag: sample.mp4
Last-Modified: Tue, 20 Jan 1970 23:52:10 GMT
Expires: Mon, 07 Oct 2024 09:15:49 GMT
Content-Range: bytes 0-987561/987561
Content-Type: application/octet-stream
Content-Length: 987561
Date: Mon, 30 Sep 2024 09:15:49 GMT
Since the iOS 18 update, there's been a bug that always occurs when listening to music through Apple Music: when music is playing and the iPhone enters or exits standby mode, the music pauses by itself for 1 second. The conditions are otherwise the same as before, when this problem didn't exist.
Hello,
I used the following technical note to develop an app that records a .mov file with SMPTE timecode.
https://developer.apple.com/library/archive/technotes/tn2310/_index.html
As a result, a timecode track is present within the .mov file (the other tracks are audio and video).
Unfortunately, QuickTime Player doesn't display the timecode information.
Analyser tools like mediainfo, or online services such as https://media-analyzer.pro/app, show that the timecode track has a null duration (and so no "time code of last frame").
Example 1 of the TC track:
Other
ID : 3
Type : Time code
Format : QuickTime TC
Frame rate : 60.000 FPS
Time code of first frame : 17:39:59:00
Time code, stripped : Yes
Title : Core Media Time Code
Encoded date : 2024-09-10 15:39:46 UTC
Tagged date : 2024-09-10 15:39:59 UTC
Example 2 of the timecode track:
0000569562Quicktime Timecode #0
00007f6b8a'trak' Track atom #1
00007f6b92'tkhd' Track header atom #2
size 92 (0x5C)
type 'tkhd' (hex 74 6B 68 64)
version 0
flags 15 (0xF)
creation_time 0xE30618C2, '2024-09-10 15:39:46'
modification_time 0xE30618CF, '2024-09-10 15:39:59'
track_ID 3
reserved 0
duration 0
reserved [0, 0]
In each case, the duration is reported as null even though the recording is more than 20 s long.
STEPS TO REPRODUCE
1. Use AVAssetWriter for video and audio.
2. Create an AVAssetWriterInput for timecode and associate it with the video track (see the sketch after these steps).
3. Just before stopping the recording, a sample buffer containing the SMPTE timecode is generated and appended.
4. All tracks are marked as finished before stopping the recording with finishWritingWithCompletionHandler.
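For reference, a minimal sketch of the writer setup described in the steps above (the output URL and the use of passthrough outputSettings are illustrative assumptions, not the app's actual code):

import AVFoundation

// Sketch: a timecode input associated with the video track.
func makeTimecodeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil)      // passthrough for brevity
    let timecodeInput = AVAssetWriterInput(mediaType: .timecode, outputSettings: nil)
    videoInput.expectsMediaDataInRealTime = true
    timecodeInput.expectsMediaDataInRealTime = true
    writer.add(videoInput)
    writer.add(timecodeInput)
    // The video track references the timecode track via a timecode association.
    videoInput.addTrackAssociation(withTrackOf: timecodeInput,
                                   type: AVAssetTrack.AssociationType.timecode.rawValue)
    return (writer, videoInput, timecodeInput)
}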
I'm attempting to use AVExternalStorageDevice.requestAccess on iOS 18 using Xcode 16.
When calling requestAccess, a dialog does appear, but the completionHandler closure is never called to indicate whether access was granted. If using the async version, the function just never returns.
Calling requestAccess also results in a mediaServicesWereReset (-11819) error without fail.
Supposedly, "the system only presents the dialog to a person the first time your app calls the method." That also doesn't appear to be the case. The dialog appears every time requestAccess is called, regardless of previous invocations and whether "Allow" or "Don't Allow" was selected.
The dialog itself says "You can change this in Privacy settings." I cannot find this permission anywhere in the Settings app, neither under Privacy & Security nor under the app-specific settings page.
Has anyone else experienced these issues? Am I missing something here? I did suspect permissions issues and tried adding a NSRemovableVolumesUsageDescription entry to the app. This did not appear to change anything.
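For reference, this is the shape of the call in question (the availability guard and logging are illustrative; the completion-handler variant reportedly hangs in the same way).

import AVFoundation

// Request access to external storage devices; on iOS 18 the await reportedly never returns.
func requestExternalStorageAccess() {
    if #available(iOS 17.0, *) {
        Task {
            let granted = await AVExternalStorageDevice.requestAccess()
            print("External storage access granted:", granted)
        }
    }
}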