Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (the 10-bit YCbCr 4:2:2 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:
// Device is already locked via lockForConfiguration() at this point
device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

// First output: full-resolution sample buffers for recording
let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

// Second output: preview-sized buffers
let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()
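When the session refuses to start, I log the pixel format the device actually negotiated. This small helper turns a CoreVideo FourCC code into a readable string (a hypothetical utility of mine, not part of AVFoundation):

```swift
import Foundation

// Hypothetical helper: render a CoreVideo pixel-format code (FourCC) as text,
// e.g. kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange is 'x422'.
func fourCCString(_ code: UInt32) -> String {
    // Extract the four bytes, most significant first, and decode as ASCII.
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

// Usage on device:
// let desc = device.activeFormat.formatDescription
// NSLog("Active pixel format: \(fourCCString(CMFormatDescriptionGetMediaSubType(desc)))")
```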
It works with other formats, such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations, or am I misusing the API?
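For context on the frame drops: the writer input is configured with expectsMediaDataInRealTime = true, and appends are gated on isReadyForMoreMediaData, roughly like the sketch below (the DropCounter helper is hypothetical, just to separate writer-side drops from the capture-side didDrop callbacks):

```swift
// Sketch of the append path, assuming an AVAssetWriterInput created with
// expectsMediaDataInRealTime = true. DropCounter is a hypothetical helper
// that tracks how many frames the writer input could not accept.
final class DropCounter {
    private(set) var appended = 0
    private(set) var dropped = 0

    /// Returns true when the frame should be handed to the writer input.
    func shouldAppend(writerReady: Bool) -> Bool {
        if writerReady {
            appended += 1
            return true
        }
        dropped += 1
        return false
    }
}

// In captureOutput(_:didOutput:from:), running on samplesQueue:
// if counter.shouldAppend(writerReady: writerInput.isReadyForMoreMediaData) {
//     writerInput.append(sampleBuffer)
// }
```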