Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post

Replies

Boosts

Views

Activity

How to preserve kCMFormatDescriptionExtension_FieldXXX after encoding?
I am currently working with AVAssetWriterInput and an interlaced source. I want to know the proper procedure for preserving the 'fiel' format description extension through an AVAssetWriterInput compression operation. Does anyone have an answer on this?

Reference: Technical Note TN2162, "Uncompressed Y´CbCr Video in QuickTime Files", section "The 'fiel' ImageDescription Extension: Field/Frame Information".

What I have tried so far:

First, decode into a sample buffer with field/frame information:
- Decode the DV-NTSC source and get the current FieldCount/FieldDetail of the source data.
- Create a kCVPixelFormatType_422YpCbCr8 CVImageBuffer from the source.
- CMSetAttachments() kCVImageBufferFieldCountKey/kCVImageBufferFieldDetailKey on the image buffer with FieldCount=2/FieldDetail=14, same as the DV-NTSC source.
- CMVideoFormatDescriptionCreateForImageBuffer()
- CMSampleBufferCreateForImageBuffer()
- The decompressed sample buffer now has the kCMFormatDescriptionExtension_FieldXXX extensions, same as the source.

Second, encode using ProRes 422:
- I create an AVAssetWriterInput to transcode from _422YpCbCr8 into AVVideoCodecAppleProRes422.
- AVAssetWriter can produce a ProRes 422 .mov file, but it does not contain the format description extension 'fiel'.
- The ProRes 422 encoder does preserve 'colr', 'pasp', and 'clap' same as the source, but no 'fiel' is there.

I have also tried to supply a format description carrying kCMFormatDescriptionExtension_FieldXXX via initWithMediaType:outputSettings:sourceFormatHint:, but with no success.
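For context, a minimal Swift sketch of the attachment/format-description step described above (illustrative, not the poster's code). The pixel buffer and timing info are assumed inputs, and kCVImageBufferFieldDetailSpatialFirstLineLate is assumed to correspond to the 'fiel' detail value 14 mentioned in the post.

import CoreMedia
import CoreVideo

// Sketch: attach field info to a decoded 422YpCbCr8 frame and build a
// sample buffer whose format description carries the field extensions.
func makeInterlacedSampleBuffer(from pixelBuffer: CVPixelBuffer,
                                timing: CMSampleTimingInfo) -> CMSampleBuffer? {
    // Two fields, bottom field first (assumed to match 'fiel' detail 14).
    let fieldAttachments: [CFString: Any] = [
        kCVImageBufferFieldCountKey: 2,
        kCVImageBufferFieldDetailKey: kCVImageBufferFieldDetailSpatialFirstLineLate
    ]
    CMSetAttachments(pixelBuffer,
                     attachments: fieldAttachments as CFDictionary,
                     attachMode: kCMAttachmentMode_ShouldPropagate)

    // The format description created from the buffer picks up the field extensions.
    var formatDescription: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescriptionOut: &formatDescription)
    guard status == noErr, let format = formatDescription else { return nil }

    var timingInfo = timing
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                       imageBuffer: pixelBuffer,
                                       dataReady: true,
                                       makeDataReadyCallback: nil,
                                       refcon: nil,
                                       formatDescription: format,
                                       sampleTiming: &timingInfo,
                                       sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}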
1
0
784
Sep ’16
Problems with AVAudioPlayerNode's scheduleBuffer function
Hi,

I'm having two problems using the scheduleBuffer function of AVAudioPlayerNode.

Background: my app generates audio programmatically, which is why I am using this function. I also need low latency, so my strategy is to schedule a small number of buffers and use the completion handler to keep the process moving forward by scheduling one more buffer for each one that completes.

I'm seeing two problems with this approach:

One: the total memory consumed by my app grows steadily while the audio is playing, which suggests that the audio buffers are never being deallocated or some other runaway process is underway. (The Leaks tool doesn't detect any leaks, however.)

Two: audio playback sometimes stops, particularly on slower devices. By "stops" I mean that at some point I schedule a buffer and the completion block for that buffer is never called. When this happens, I can't even clear the problem by stopping the player.

Regarding the first issue, I suspected that if my completion block recursively scheduled another buffer with another completion block, I would probably end up blowing out the stack with infinite recursion. To get around this, instead of scheduling the buffer directly in the completion block, I set it up to enqueue the scheduling on a dispatch queue. However, this doesn't seem to solve the problem.

Any advice would be appreciated. Thanks.
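For reference, a minimal Swift sketch of the scheduling strategy described above; makeNextBuffer() is a hypothetical stand-in for the app's audio generator, and this is an illustration rather than the poster's actual code.

import AVFoundation

// Prime the player with a few buffers, then keep it fed: each completion
// hops to a serial queue and schedules exactly one more buffer.
final class GeneratedAudioScheduler {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let scheduleQueue = DispatchQueue(label: "audio.schedule")

    func start(format: AVAudioFormat,
               makeNextBuffer: @escaping () -> AVAudioPCMBuffer) throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()

        for _ in 0..<3 {
            scheduleNext(makeNextBuffer)
        }
        player.play()
    }

    private func scheduleNext(_ makeNextBuffer: @escaping () -> AVAudioPCMBuffer) {
        let buffer = makeNextBuffer()
        player.scheduleBuffer(buffer) { [weak self] in
            // Avoid unbounded recursion by re-scheduling on a dispatch queue.
            self?.scheduleQueue.async {
                self?.scheduleNext(makeNextBuffer)
            }
        }
    }
}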
9
0
4.0k
Feb ’17
AVAssetReaderOutput.copyNextSampleBuffer() sometimes hangs forever
I'm using AVAssetReaders with AVSampleBufferDisplayLayers to display multiple videos at once. I'm seeing this issue on iOS 13.1.3 and 13.2b2, on various hardware such as the iPad 10.5 and iPad 12.9.

It works well for a while, then a random call to copyNextSampleBuffer never returns, blocking that thread indefinitely and eating up resources.

I have tried different threading approaches, to no avail:

- If copyNextSampleBuffer() and reader.cancelReading() are done on the same queue, then copyNextSampleBuffer() gets stuck and the cancelReading() never gets processed because the queue is blocked. If I manually (with the debugger) jump in on that blocked queue and execute cancelReading(), an EXC_BREAKPOINT immediately crashes the app.
- If copyNextSampleBuffer() and reader.cancelReading() are done on different queues, then copyNextSampleBuffer() crashes with EXC_BAD_ACCESS.

Here's the stack trace (same-queue approach). I don't understand why it's stuck; my expectation is that copyNextSampleBuffer should always return (i.e. with nil in the error case).

VideoPlayerView: UIView with AVSampleBufferDisplayLayer
AVAssetFactory: singleton with the queue that creates & manages all AVAssetReader / AVAsset* objects

* thread #22, queue = 'AVAssetFactory'
  frame #0: 0x00000001852355f4 libsystem_kernel.dylib`mach_msg_trap + 8
  frame #1: 0x0000000185234a60 libsystem_kernel.dylib`mach_msg + 72
  frame #2: 0x00000001853dc068 CoreFoundation`__CFRunLoopServiceMachPort + 216
  frame #3: 0x00000001853d7188 CoreFoundation`__CFRunLoopRun + 1444
  frame #4: 0x00000001853d68bc CoreFoundation`CFRunLoopRunSpecific + 464
  frame #5: 0x000000018f42b6ac AVFoundation`-[AVRunLoopCondition _waitInMode:untilDate:] + 400
  frame #6: 0x000000018f38f1dc AVFoundation`-[AVAssetReaderOutput copyNextSampleBuffer] + 148
  frame #7: 0x000000018f3900f0 AVFoundation`-[AVAssetReaderTrackOutput copyNextSampleBuffer] + 72
* frame #8: 0x0000000103309d98 Photobooth`closure #1 in AVAssetFactory.nextSampleBuffer(reader=0x00000002814016f0, retval=(Swift.Optional<CoreMedia.CMSampleBuffer>, Swift.Optional<AVFoundation.AVAssetReader.Status>) @ 0x000000016dbd1cb8) at AVAssetFactory.swift:108:34
  frame #9: 0x0000000102f4f480 Photobooth`thunk for @callee_guaranteed () -> () at <compiler-generated>:0
  frame #10: 0x0000000102f4f4a4 Photobooth`thunk for @escaping @callee_guaranteed () -> () at <compiler-generated>:0
  frame #11: 0x000000010bfe6c04 libdispatch.dylib`_dispatch_client_callout + 16
  frame #12: 0x000000010bff5888 libdispatch.dylib`_dispatch_lane_barrier_sync_invoke_and_complete + 124
  frame #13: 0x0000000103309a5c Photobooth`AVAssetFactory.nextSampleBuffer(reader=0x00000002814016f0, self=0x0000000281984f60) at AVAssetFactory.swift:101:20
  frame #14: 0x00000001032ab690 Photobooth`closure #1 in VideoPlayerView.setRequestMediaLoop(self=0x000000014b8da1d0, handledCompletion=false) at VideoPlayerView.swift:254:70
  frame #15: 0x0000000102dce978 Photobooth`thunk for @escaping @callee_guaranteed () -> () at <compiler-generated>:0
  frame #16: 0x000000018f416848 AVFoundation`-[AVMediaDataRequester _requestMediaDataIfReady] + 80
  frame #17: 0x000000010bfe5828 libdispatch.dylib`_dispatch_call_block_and_release + 24
  frame #18: 0x000000010bfe6c04 libdispatch.dylib`_dispatch_client_callout + 16
  frame #19: 0x000000010bfedb74 libdispatch.dylib`_dispatch_lane_serial_drain + 744
  frame #20: 0x000000010bfee744 libdispatch.dylib`_dispatch_lane_invoke + 500
  frame #21: 0x000000010bff9ae4 libdispatch.dylib`_dispatch_workloop_worker_thread + 1324
  frame #22: 0x000000018517bfa4 libsystem_pthread.dylib`_pthread_wqthread + 276

I've tried all kinds of other things, like making sure the AVAssets and all related objects are created on one queue, and stopping the AVAssetReaders from ever deallocating to see if that helps. Nothing works. Any ideas?
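For context, a minimal Swift sketch of the kind of read loop described above (illustrative names, not the poster's code); the call that is observed to block forever is the copyNextSampleBuffer() inside the requestMediaDataWhenReady block.

import AVFoundation

// The display layer pulls decoded samples from an AVAssetReaderTrackOutput
// on a dedicated serial queue.
func startPlayback(asset: AVAsset, layer: AVSampleBufferDisplayLayer) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange])
    reader.add(output)
    guard reader.startReading() else { return }

    let readerQueue = DispatchQueue(label: "asset-reader")
    layer.requestMediaDataWhenReady(on: readerQueue) {
        while layer.isReadyForMoreMediaData {
            // This is the call that can hang indefinitely on some devices.
            guard let sample = output.copyNextSampleBuffer() else {
                layer.stopRequestingMediaData()
                reader.cancelReading()
                return
            }
            layer.enqueue(sample)
        }
    }
}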
4
0
2.5k
Oct ’19
How to cache an HLS video while playing it
Hi, I'm working on an app where a user can scroll through a feed of short videos (a bit like TikTok). I do some pre-fetching of videos a couple of positions ahead of the user's scroll position, so the video can start playing as soon as he or she scrolls to it. Currently, I just pre-fetch by initializing a few AVPlayers. However, I'd like to add a better caching system.

I'm looking for the best way to get videos to start playing as quickly as possible, while minimizing re-downloading if a user scrolls away from a video and back to it. Is there a way to cache the contents of an AVPlayer that has loaded an HLS video?

Alternatively, I've explored using AVAssetDownloadTask to download the HLS videos. My issue is that I can't download the full video and then play it; I need to start playing the video as soon as the user scrolls to it, even if it's not done downloading. Is there a way to start an HLS download with an AVAssetDownloadTask, and then start playing the video while it continues downloading? Thank you!
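For reference, a minimal Swift sketch of the AVAssetDownloadTask approach mentioned above. The session identifier, the asset title, and the delegate handling are illustrative assumptions; the sketch uses the download task's urlAsset to create a player item so playback can start before the download finishes.

import AVFoundation

final class HLSPrefetcher: NSObject, AVAssetDownloadDelegate {
    private lazy var downloadSession: AVAssetDownloadURLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "hls-prefetch")
        return AVAssetDownloadURLSession(configuration: config,
                                         assetDownloadDelegate: self,
                                         delegateQueue: .main)
    }()

    // Starts downloading the HLS stream and returns a player item backed by
    // the same asset, so playback can begin while the download proceeds.
    func prefetchAndPlay(url: URL) -> AVPlayerItem? {
        let asset = AVURLAsset(url: url)
        guard let task = downloadSession.makeAssetDownloadTask(asset: asset,
                                                               assetTitle: "feed-video",
                                                               assetArtworkData: nil,
                                                               options: nil) else { return nil }
        task.resume()
        return AVPlayerItem(asset: task.urlAsset)
    }

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location` (a relative .movpkg path) for cached/offline playback.
    }
}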
10
0
7.2k
Jun ’20
AVAssetWriter leading to huge kernel_task memory usage
I am currently working on a macOS app which will be creating very large video files with up to an hour of content. However, generating the images and adding them to AVAssetWriter leads to VTDecoderXPCService using 16+ GB of memory and the kernel_task using 40+ GB (the max I saw was 105 GB). It seems like the generated video is not streamed onto disk but rather kept in memory and written to disk all at once. How can I force it to stream the data to disk while the encoding is happening?

By the way, my app itself consistently needs around 300 MB of memory, so I don't think I have a memory leak here.

Here is the relevant code:

func analyse() {
    self.videoWritter = try! AVAssetWriter(outputURL: outputVideo, fileType: AVFileType.mp4)
    let writeSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoSize.width,
        AVVideoHeightKey: videoSize.height,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 10_000_000,
        ]
    ]
    self.videoWritter!.movieFragmentInterval = CMTimeMake(value: 60, timescale: 1)
    self.frameInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: writeSettings)
    self.frameInput?.expectsMediaDataInRealTime = true
    self.videoWritter!.add(self.frameInput!)

    if self.videoWritter!.startWriting() == false {
        print("Could not write file: \(self.videoWritter!.error!)")
        return
    }
}

func writeFrame(frame: Frame) {
    /* some more code to determine the correct frame presentation time stamp */
    let newSampleBuffer = self.setTimeStamp(frame.sampleBuffer, newTime: self.nextFrameStartTime!)
    self.frameInput!.append(newSampleBuffer)
    /* some more code */
}

func setTimeStamp(_ sample: CMSampleBuffer, newTime: CMTime) -> CMSampleBuffer {
    var timing: CMSampleTimingInfo = CMSampleTimingInfo(
        duration: CMTime.zero,
        presentationTimeStamp: newTime,
        decodeTimeStamp: CMTime.zero
    )
    var newSampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(
        allocator: kCFAllocatorDefault,
        sampleBuffer: sample,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleBufferOut: &newSampleBuffer
    )
    return newSampleBuffer!
}

My specs: MacBook Pro 2018, 32 GB memory, macOS Big Sur 11.1
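For comparison, a sketch of the pull model AVAssetWriterInput offers for non-real-time sources (requestMediaDataWhenReady plus isReadyForMoreMediaData). nextFrame() is a hypothetical frame source, the input is assumed to have been created with expectsMediaDataInRealTime = false and the writer already started, and whether this resolves the memory growth above is untested here.

import AVFoundation

func writeFrames(to input: AVAssetWriterInput,
                 writer: AVAssetWriter,
                 nextFrame: @escaping () -> CMSampleBuffer?) {
    let queue = DispatchQueue(label: "frame-writer")
    input.requestMediaDataWhenReady(on: queue) {
        // Append only while the input can buffer more; return otherwise so
        // pending data can drain instead of piling up in memory.
        while input.isReadyForMoreMediaData {
            guard let sample = nextFrame() else {
                input.markAsFinished()
                writer.finishWriting { /* inspect writer.status / writer.error */ }
                return
            }
            input.append(sample)
        }
    }
}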
5
0
2.3k
Dec ’20
What causes "issue_type = overload" in coreaudiod with USB audio interface?
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below).

The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload." I've added a short snippet from Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105

I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528

Does anyone know what could be causing this issue? Thanks for any help.

Cheers, Flux aka Andy.

Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
  HostApplicationDisplayID = "com.apple.Music";
  cause = Unknown;
  deadline = 2615019;
  "input_device_source_list" = Unknown;
  "input_device_transport_list" = USB;
  "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
  "io_buffer_size" = 512;
  "io_cycle" = 1;
  "is_prewarming" = 0;
  "is_recovering" = 0;
  "issue_type" = overload;
  lateness = "-535";
  "output_device_source_list" = Unknown;
  "output_device_transport_list" = USB;
  "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
37
12
17k
Apr ’21
AVAudioPCMBuffer Memory Management
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup. To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:

private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer). These crashes are in production and fairly rare — I can’t reproduce them locally. I never modify the audio buffers, only read them for analysis.

My questions are: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
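For reference, a minimal Swift sketch of the tap-to-publisher setup described above (illustrative, not the poster's code); downstream subscribers receive the same AVAudioPCMBuffer instance the tap delivers, which is exactly the assumption in question.

import AVFoundation
import Combine

final class MicrophoneStream {
    private let engine = AVAudioEngine()
    private let subject = PassthroughSubject<AVAudioPCMBuffer, Never>()

    var publisher: AnyPublisher<AVAudioPCMBuffer, Never> { subject.eraseToAnyPublisher() }

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        // The tap block runs on an audio-owned thread; buffers are forwarded as-is.
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [subject] buffer, _ in
            subject.send(buffer)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}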
1
1
914
Oct ’21
Background audio issues with AVPictureInPictureController
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have that all squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController.

If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the home screen, you can't perform the detachment operation, otherwise the PiP display fails.

On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio will pause and you have to manually tap play on the lock screen controls; this is the same between iPad and iPhone devices.

My questions are:
- Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
- Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
1
0
1.4k
Dec ’21
How to obtain an AVAudioFormat for a canonical format?
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this. Otherwise each developer will have to redefine it for each platform the code will run on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function which can produce such a format, ASBD, or settings.

The smartest way I could think of, which does not work:

AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
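For illustration, a Swift sketch of one possible conversion path, assuming the "standard" deinterleaved float PCM layout is an acceptable target for the mixer; this is an assumption, not a confirmed answer to the canonical-format question.

import AVFoundation

// Convert a buffer into AVAudioFormat's "standard" layout at the same sample
// rate and channel count, using AVAudioConverter.
func convertToStandardFormat(_ buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let sourceFormat = buffer.format
    guard let standardFormat = AVAudioFormat(standardFormatWithSampleRate: sourceFormat.sampleRate,
                                             channels: sourceFormat.channelCount),
          let converter = AVAudioConverter(from: sourceFormat, to: standardFormat),
          let output = AVAudioPCMBuffer(pcmFormat: standardFormat,
                                        frameCapacity: buffer.frameLength)
    else { return nil }

    var error: NSError?
    var consumed = false
    _ = converter.convert(to: output, error: &error) { _, inputStatus in
        // Feed the single source buffer once, then signal end of stream.
        if consumed {
            inputStatus.pointee = .endOfStream
            return nil
        }
        consumed = true
        inputStatus.pointee = .haveData
        return buffer
    }
    return error == nil ? output : nil
}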
4
0
2.0k
Feb ’22
`videoChat` AVAudioSession.Mode Issues on iPhone 14 Pro Max
I work on a video conferencing application which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode.

This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.

I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:

try session.setCategory(.playAndRecord, options: .defaultToSpeaker)

to:

try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)

This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only.

I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but am not sure if it is related to the videoChat issue or not:

2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71 Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225 Invalid input size for property 1684431725

I am assuming 1684431725 is 'dfcm' but I am not sure what Audio Session property that might be.
13
2
4.4k
Sep ’22
iOS 16 RemoteIO: Input data proc returned inconsistent 2 packets
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

AURenderCallbackStruct callbackStruct;
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get 0.02322 seconds of buffer duration (IOBufferDuration) and 93 frames in each callback. This is expected, because:

number of frames / buffer duration = sampling rate
93 / 0.02322 ≈ 4000 Hz

However, in iOS 16 I am getting the aforementioned error in the callback:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames is equal to the number of packets, I am getting only 1 or 2 frames in the callback, while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal.

number of frames / buffer duration = sampling rate
2 / 0.02322 ≈ 86 Hz

That doesn't make any sense. This error appears for other sampling rates too (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed; the sampling rate was still 44100 after calling that function.

Any help on this issue would be appreciated.
10
3
4.2k
Sep ’22
CoreMediaErrorDomain Code=-12884
I have an AVPlayerViewController in my app playing video and my largest source of errors is NSURLErrorDomain Code=-1008. The underlying error they provide is Error Domain=CoreMediaErrorDomain Code=-12884 "(null)". I couldn't find what the error implies or is caused by - osstatus.com also does not have a reference to it.
1
0
1.7k
Nov ’22
ShazamKit SDK crash on Android
We are using the ShazamKit SDK for Android and our application sometimes crashes when performing an audio recognition. We get the following logs:

Cause: null pointer dereference
backtrace:
#00 pc 000000000000806c /data/app/lib/arm64/libsigx.so (SHAZAM_SIGX::reset()) (BuildId: 40e0b3c4250b21f23f7c4ec7d7b88f954606d914)
#01 pc 00000000000dc324 /data/app//oat/arm64/base.odex
at libsigx.SHAZAM_SIGX::reset()(reset:0)
at base.0xdc324(Native Method)
1
2
1.4k
Nov ’22
AVPlayer 'Auto(Recommended)' option for subtitles
I use HLS to play video in AVPlayer. My .m3u8 file has the following content:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA:TYPE=SUBTITLES, DEFAULT=YES,GROUP-ID="subs",NAME="English",LANGUAGE="eng",URI="some/path"
....

There are three subtitle options on the screen: "Off", "Auto (Recommended)", and "English". By default the "Auto" option is chosen in the player, but no English subtitles are displayed (the phone language is English). I have tried setting the FORCED attribute to YES, but it does not help (the subtitle icon becomes hidden).

I have two questions:
- Is it possible to display subtitles when the user selects the "Auto" option?
- What criteria are used to determine the subtitle language or subtitle visibility?
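For comparison, a Swift sketch of programmatic subtitle selection via the legible media selection group; this only illustrates manual selection and does not reproduce whatever heuristic the "Auto (Recommended)" option applies.

import AVFoundation

// Select an English legible option on the player item, falling back to the
// group's default option if no English-locale option is found.
func enableEnglishSubtitles(for playerItem: AVPlayerItem) {
    let asset = playerItem.asset
    guard let group = asset.mediaSelectionGroup(forMediaCharacteristic: .legible) else { return }

    let englishOptions = AVMediaSelectionGroup.mediaSelectionOptions(
        from: group.options,
        with: Locale(identifier: "en"))
    if let option = englishOptions.first ?? group.defaultOption {
        playerItem.select(option, in: group)
    }
}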
2
0
990
Jan ’23
Saving plugin presets using fullState in Audio Units
Hi, I'm having trouble saving user presets in the plugin for Audio Units. Saving user presets works well in the host, but I get an error when trying to save them in the plugin.

I'm not using a parameter tree; instead I use the fullState getter and setter for saving and retrieving a dictionary with the state. With some simplified parameters it looks something like this:

var gain: Double = 0.0
var frequency: Double = 440.0

private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}

This works perfectly well for storing user presets when saved in the host. When trying to save them in the plugin, to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM.

I could not find any documentation on what the missing key is or how to get around this. Any ideas?
2
1
1.1k
Mar ’23
v3 AudioUnit, itself an audio unit host cannot see v3 audio units when hosted "out of process"
Back at the start of January this year we filed a bug report that still hasn't been acknowledged. Before we file a code-level support ticket, we would love to know if there's anyone else out there who has experienced anything similar. We have read the documentation (repeatedly) and searched the web, and still found no solution; this issue does look like it could be a bug in the system (or in our coding) rather than proper behaviour.

The app is a host for a v3 audio unit which is itself a workstation that can host other audio units. The host and audio unit are both working well in all other areas. Note: this is not running on Catalyst, it is a native macOS app (not iOS).

The problem is that when an AUv3 is hosted out of process (on the Mac) and then goes to fetch a list of all the other available installed audio units, the list returned from the system does not contain any of the other v3 audio units in the system. It only contains v2.

We see this issue if we load our AU out of process in our own bespoke host, and also when it loads into Logic Pro, which gives no choice but to load out of process. This means that, as it stands, when we release the app our users will have limited functionality in Logic Pro, and possibly by then other hosts too.

In our own host we can load our hosting AU in process, and then it can find and use all the available units, both v2 and v3. So no issue there, but sadly when loaded into the only other host that can do anything like this (Logic Pro at the time of posting) it won't be able to use v3 units, which is quite a serious limitation.

SUMMARY
- v3 audio unit being hosted out of process.
- The audio unit fetches the list of available audio units on the system.
- v3 audio units are not provided in the list. Only v2 are presented.

EXPECTED
In some ways this seems to be the opposite of the behaviour we would expect. We would expect to see and host ALL the other units that are installed on the system. "Out of process" suggests the safer of the two options, so this looks like it could be related to some kind of sandboxing issue. But sadly we cannot work out a solution, hence this report.

Is Quinn "The Eskimo!" still around? Can you help us out here?
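For context, a Swift sketch of the enumeration step described above, assuming the hosting AU uses AVAudioUnitComponentManager to list installed effect components; the v2/v3 flag check is only there to illustrate the reported gap, not to reproduce the poster's code.

import AVFoundation
import AudioToolbox

// List installed effect components and report whether each is a v2 or v3 unit.
func listInstalledEffects() {
    var description = AudioComponentDescription()
    description.componentType = kAudioUnitType_Effect   // other fields 0 = wildcard

    let components = AVAudioUnitComponentManager.shared().components(matching: description)
    for component in components {
        let flags = component.audioComponentDescription.componentFlags
        let isV3 = (flags & AudioComponentFlags.isV3AudioUnit.rawValue) != 0
        print(component.manufacturerName, component.name, isV3 ? "v3" : "v2")
    }
}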
3
0
1.3k
Apr ’23
Audio / Video sync issue on iOS using AVSampleBufferRenderSynchronizer
My current app implements a custom video player based on an AVSampleBufferRenderSynchronizer synchronizing two renderers: an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers, and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers.

The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time set to the presentation timestamp of the first decoded image.

The presentation timestamps of video and audio sample buffers are consistent, and on most streams the audio and video are correctly synchronized. However, on some network streams on iOS, the audio and video aren't synchronized, with a time difference that seems to increase over time. On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.

This reminds me of something I've read about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again. So, any help / hints on this sync problem will be greatly appreciated! :)
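For reference, a minimal Swift sketch of the player topology described above (illustrative names, not the poster's code): one synchronizer driving a video display layer and an audio renderer, started at the first video presentation timestamp.

import AVFoundation

final class SampleBufferPlayer {
    let videoLayer = AVSampleBufferDisplayLayer()
    let audioRenderer = AVSampleBufferAudioRenderer()
    private let synchronizer = AVSampleBufferRenderSynchronizer()

    init() {
        synchronizer.addRenderer(videoLayer)
        synchronizer.addRenderer(audioRenderer)
    }

    // Called once the first decoded video frame (in presentation order) is enqueued.
    func start(atFirstVideoTime firstPTS: CMTime) {
        synchronizer.setRate(1.0, time: firstPTS)
    }

    func enqueueVideo(_ sample: CMSampleBuffer) { videoLayer.enqueue(sample) }
    func enqueueAudio(_ sample: CMSampleBuffer) { audioRenderer.enqueue(sample) }
}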
1
0
747
May ’23
Suggested guidance for creating MV-HEVC video files?
After taking a look at the Deliver Video Content for Spatial Experiences session, alongside the Destination Video sample code, I'm a bit unclear on how one might go about creating stereoscopic content that can be bundled up as a MV-HEVC file and played on Vision Pro. I see the ISO Base Media File Format and Apple HEVC Stereo Video format specifications, alongside new mvhevc1440x1440 output presets in AVFoundation, but I'm unclear what sort of camera equipment could be used to create stereoscopic content and how one might be able to create a MV-HEVC using a command-line tool that leverages AVFoundation/VideoToolbox, or something like Final Cut Pro. Is such guidance available on how to film and create this type of file? Thanks!
6
8
4.3k
Jun ’23
MacOS echo cancellation (AUVoiceProcessing) trouble with device gain/volume
Hi community,

I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IOProcs and have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation.

The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, and that affects the volume of the two devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same.

Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
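For reference, a sketch of how the AGC property mentioned above is typically set; voiceUnit is assumed to be an already-created kAudioUnitSubType_VoiceProcessingIO instance, and whether this removes the gain reduction being reported is exactly the open question.

import AudioToolbox

// Disable automatic gain control on a voice-processing IO unit.
func disableAutomaticGainControl(on voiceUnit: AudioUnit) -> OSStatus {
    var enableAGC: UInt32 = 0
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,                     // element 0
                                &enableAGC,
                                UInt32(MemoryLayout<UInt32>.size))
}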
3
0
1.7k
Jul ’23
AudioQueue player mix with other sounds has a 0.5s mute
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, and mixing it with another sound (192 kHz, 24-bit), referred to as sound2 below. The AVAudioSession is configured with category AVAudioSessionCategoryPlayback and options AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers.

When the PCM file plays, sound2 should be ducked to a lower volume, as per the settings. But there is an absolute mute lasting about 0.5 seconds while sound2 is being ducked, and only after that 0.5-second mute does the ducked sound come out. This only happens in the sound2 situation (192 kHz, 24-bit); if sound2's quality is lower, everything is OK.
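For reference, a minimal Swift sketch of the session configuration described above (mix with and duck other audio while the AudioQueue plays); the .default mode is an assumption, and error handling is left to the caller.

import AVFoundation

func configurePlaybackSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers, .duckOthers])
    try session.setActive(true)
}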
1
1
1.2k
Jul ’23