Performant alternative to scaling a CIImage / PixelBuffer

Hey, I’m building a camera app that applies real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I scale down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but the Metal profiler shows that the scale transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this?

The simplified chain looks like this (a rough code sketch follows the list):

  1. Scale down viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)

  2. Scale up depthMap CVPixelBuffer to match viewFinder size (CIFilter.lanczosScaleTransform)

  3. Create CIImages from both CVPixelBuffers

  4. Apply VariableDepthBlur (CIFilter.maskedVariableBlur)

  5. Scale up final image to the Metal view size (CIFilter.lanczosScaleTransform)

  6. Render CIImage to a MTKView using CIRenderDestination
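
For context, here's roughly what steps 1–4 look like in code. This is a trimmed-down sketch, not my exact code: the downscale factor, blur radius, and function name are placeholders.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Sketch of steps 1-4 above. `downscale`, the blur radius, and the
// buffer names are placeholders; error handling is omitted.
func makeBlurredImage(viewFinder: CVPixelBuffer,
                      depthMap: CVPixelBuffer,
                      downscale: Float) -> CIImage? {
    // Step 3 effectively happens first: CIFilters operate on CIImages,
    // which just wrap the pixel buffers until render time.
    let video = CIImage(cvPixelBuffer: viewFinder)
    let depth = CIImage(cvPixelBuffer: depthMap)

    // 1. Scale down the viewfinder image.
    let downFilter = CIFilter.lanczosScaleTransform()
    downFilter.inputImage = video
    downFilter.scale = downscale

    // 2. Scale the depth map up to the downscaled viewfinder size.
    let depthFilter = CIFilter.lanczosScaleTransform()
    depthFilter.inputImage = depth
    depthFilter.scale = downscale * Float(video.extent.width / depth.extent.width)

    // 4. Depth-driven variable blur.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = downFilter.outputImage
    blur.mask = depthFilter.outputImage
    blur.radius = 8
    return blur.outputImage
}
```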

From some research, I wonder if scaling the CVPixelBuffer with the Accelerate framework would be faster. Also, instead of scaling the final image in Core Image, perhaps I could offload that step to the Metal view? The sketch below shows what I mean by the second idea.
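
As far as I understand, Core Image can fold a plain affine upscale into the final render pass instead of running it as a separate Lanczos pass. This is an untested sketch with placeholder names, replacing steps 5 and 6:

```swift
import CoreImage
import MetalKit

// Untested sketch: render the blurred (still small) image straight into the
// drawable, letting a plain affine scale handle the upsizing instead of a
// separate CIFilter.lanczosScaleTransform pass.
func render(_ blurred: CIImage,
            to view: MTKView,
            context: CIContext,
            commandQueue: MTLCommandQueue) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // 5. Affine upscale (bilinear) in place of the Lanczos pass.
    let sx = view.drawableSize.width / blurred.extent.width
    let sy = view.drawableSize.height / blurred.extent.height
    let scaled = blurred.transformed(by: CGAffineTransform(scaleX: sx, y: sy))

    // 6. Render into the MTKView's drawable via CIRenderDestination.
    let destination = CIRenderDestination(width: Int(view.drawableSize.width),
                                          height: Int(view.drawableSize.height),
                                          pixelFormat: view.colorPixelFormat,
                                          commandBuffer: commandBuffer) {
        drawable.texture
    }
    _ = try? context.startTask(toRender: scaled, to: destination)
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```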

Any pointers greatly appreciated!


Accepted Answer (from Engineer)

There are a handful of articles and sample code projects here that discuss using vImage to work with CVPixelBuffers.
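
For example, a CPU-side scale with the classic vImage API might look something like this. This is a rough sketch: it assumes a 4-channel, 8-bit source (e.g. BGRA) and a preallocated destination CVPixelBuffer, and skips most error handling.

```swift
import Accelerate
import CoreVideo

// Rough sketch: scales a 4-channel, 8-bit (e.g. BGRA) CVPixelBuffer into a
// preallocated destination buffer using vImage's high-quality resampler.
func scale(_ source: CVPixelBuffer, into destination: CVPixelBuffer) -> vImage_Error {
    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
        CVPixelBufferUnlockBaseAddress(destination, [])
    }

    var src = vImage_Buffer(data: CVPixelBufferGetBaseAddress(source),
                            height: vImagePixelCount(CVPixelBufferGetHeight(source)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(source)),
                            rowBytes: CVPixelBufferGetBytesPerRow(source))

    var dst = vImage_Buffer(data: CVPixelBufferGetBaseAddress(destination),
                            height: vImagePixelCount(CVPixelBufferGetHeight(destination)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(destination)),
                            rowBytes: CVPixelBufferGetBytesPerRow(destination))

    // Channel order doesn't matter for scaling, so ARGB8888 also covers BGRA.
    return vImageScale_ARGB8888(&src, &dst, nil, vImage_Flags(kvImageNoFlags))
}
```

Whether this beats the GPU path depends on the device and the resolutions involved, since moving the work to the CPU has its own costs, so it's worth profiling both approaches.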
