Core Image

Use built-in or custom filters to process still and video images using Core Image.

Posts under Core Image tag (50 posts)

IOSurface vs. IOSurfaceRef on Catalyst
I have an IOSurface and I want to turn that into a CIImage. However, the constructor of CIImage takes an IOSurfaceRef instead of an IOSurface. On most platforms this is not an issue because the two types are toll-free bridgeable... except for Mac Catalyst, where this fails. I observed the same back in Xcode 13 on macOS, but there I could force-cast the IOSurface to an IOSurfaceRef:

let image = CIImage(ioSurface: surface as! IOSurfaceRef)

This cast fails at runtime on Catalyst. I found that unsafeBitCast(surface, to: IOSurfaceRef.self) actually works on Catalyst, but it feels very wrong. Am I missing something? Why aren't the types bridgeable on Catalyst? Also, there should ideally be an init for CIImage that takes an IOSurface instead of a ref.
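A minimal Swift sketch of a stopgap, assuming the unsafeBitCast route from the post is acceptable until the bridging is fixed; the helper name is hypothetical:

import CoreImage
import IOSurface

// Hypothetical helper that keeps the workaround in one place, so it is easy to
// delete once IOSurface-to-IOSurfaceRef bridging works on Mac Catalyst again.
func makeCIImage(from surface: IOSurface) -> CIImage {
    #if targetEnvironment(macCatalyst)
    // The bridge is missing on Catalyst, so reinterpret the object reference directly.
    let surfaceRef = unsafeBitCast(surface, to: IOSurfaceRef.self)
    #else
    let surfaceRef = surface as! IOSurfaceRef // toll-free bridged on other platforms, per the post
    #endif
    return CIImage(ioSurface: surfaceRef)
}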
Replies: 2 · Boosts: 1 · Views: 100 · Activity: 1w
ProRes 4444 blocky compression artifacts
I'm creating an Objective-C command-line utility to encode RAW image sequences to ProRes 4444, but I'm encountering blocky compression artifacts in the ProRes 4444 video output. To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding, and the PNG looks perfect; I can see all detail in every part of the image's dynamic range. Here's a comparison between the 16-bit PNG (on the right) and the ProRes 4444 output (on the left). As a further test, I re-encoded the test PNG to ProRes 4444 using DaVinci Resolve, and the ProRes 4444 output video from Resolve doesn't have any blocky compression artifacts; it looks identical. In short, this is what the utility does:

- Unpacks the 12-bit raw data into 16-bit values.
- Debayers the raw data to convert it into a standard color image format (BGR) using OpenCV.
- Scales the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. Up to this point everything is fine and confirmed by saving 16-bit PNGs.
- Encodes the images to ProRes 4444 using the AVFoundation framework. The pixel buffers are created and managed using the dictionary method with kCVPixelFormatType_64RGBALE.

I need help figuring this out; I'm a real novice when it comes to AVFoundation/encoding to ProRes. See relevant parts of my encodeToProRes function:

void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) { NSError *error = nil; NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]]; AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error]; if (error) { std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl; return; } // Load the first image to get the dimensions std::cout << "Debayering the first image to get dimensions..."
<< std::endl; Mat firstImage; int width = 5320; int height = 3900; if (!debayer_image(rawPaths[0], firstImage, width, height)) { std::cerr << "Error debayering the first image" << std::endl; return; } width = firstImage.cols; height = firstImage.rows; // Save the first frame as a PNG 16-bit image for validation std::string pngFilePath = outputPath + "_frame1.png"; if (!imwrite(pngFilePath, firstImage)) { std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl; } else { std::cout << "First frame saved as PNG: " << pngFilePath << std::endl; } NSString *codecKey = nil; if (proResFlavor == "4444") { codecKey = AVVideoCodecTypeAppleProRes4444; } else if (proResFlavor == "422HQ") { codecKey = AVVideoCodecTypeAppleProRes422HQ; } else if (proResFlavor == "422") { codecKey = AVVideoCodecTypeAppleProRes422; } else if (proResFlavor == "LT") { codecKey = AVVideoCodecTypeAppleProRes422LT; } else { std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl; return; } NSDictionary *outputSettings = @{ AVVideoCodecKey: codecKey, AVVideoWidthKey: @(width), AVVideoHeightKey: @(height) }; AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings]; videoInput.expectsMediaDataInRealTime = YES; NSDictionary *pixelBufferAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE), (id)kCVPixelBufferWidthKey: @(width), (id)kCVPixelBufferHeightKey: @(height) }; AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes]; ... [assetWriter startSessionAtSourceTime:kCMTimeZero]; CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps int numFrames = static_cast<int>(rawPaths.size()); ... // Encoding thread std::thread encoderThread([&]() { int frameIndex = 0; std::vector<CVPixelBufferRef> pixelBufferBuffer; while (frameIndex < numFrames) { std::unique_lock<std::mutex> lock(queueMutex); queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; }); if (!frameQueue.empty()) { auto [index, debayeredImage] = frameQueue.front(); frameQueue.pop(); lock.unlock(); if (index == frameIndex) { cv::Mat rgbaImage; cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA); CVPixelBufferRef pixelBuffer = NULL; CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer); if (result != kCVReturnSuccess) { std::cerr << "Error: Could not create pixel buffer" << std::endl; dispatch_group_leave(dispatchGroup); return; } CVPixelBufferLockBaseAddress(pixelBuffer, 0); void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer); for (int row = 0; row < height; ++row) { memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer), rgbaImage.ptr(row), width * 8); } CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); pixelBufferBuffer.push_back(pixelBuffer); ... Thanks very much!
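Not a diagnosis, but one thing worth ruling out when the writer's output looks degraded while the source pixels are fine is whether each pixel buffer carries explicit colorimetry attachments before it is appended. A minimal Swift sketch of the idea (the post's utility is Objective-C++; the Rec. 709 values here are an assumption and should match however the footage was actually debayered):

import CoreVideo

// Sketch: tag a freshly filled 64RGBALE pixel buffer with explicit colorimetry before
// appending it via AVAssetWriterInputPixelBufferAdaptor.
func tagColorimetry(_ pixelBuffer: CVPixelBuffer) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
}

It may also be worth comparing results with expectsMediaDataInRealTime set to NO for this offline export, since real-time mode changes how the writer buffers incoming frames.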
Replies: 1 · Boosts: 0 · Views: 196 · Activity: 2w
CAMetalLayer renders HDR images with a color shift
I can't get Core Image to render an HDR image file with correct colors to a CAMetalLayer on macOS 14. I'm comparing the result with NSImageView and the SupportingHDRImagesInYourApp 'HDRDemo23' sample code, which uses CVPixelBuffer. With CAMetalLayer, the images are displayed as HDR (definitely more highlights), but they're darker, with some kind of saturation increase and color shift. Files I've tested include the sample ISO HDR files in the SupportingHDRImagesInYourApp sample code. Methods I've tried to render to CAMetalLayer include:

- Modifying the GeneratingAnAnimationWithACoreImageRenderDestination sample code's ContentView so it uses HDRDemo23/example-ISO-HDR-images/image_01.heic, loaded using CIImage(contentsOf:)
- Creating a test AppKit app that uses MTKView and CIRenderDestination the same way. I have NSImageView and the MTKView in the same window for comparison.
- Using CIRAWFilter > CIRenderDestination > IOSurface > MTKView/CAMetalLayer

All these methods produce an image with the exact same appearance: a dark HDR image with some saturation/color shift. The only clue I've found is that when using the Metal debugger on the test AppKit app, the CAMetalLayer's 'Present' shows the 'input' thumbnail as HDR without the color shift, but the 'output' thumbnail looks like what I actually see. I tried changing the color profile on the layer to various things, but nothing looked more correct. I've tried this on two Macs, an M1 Mac Studio with an LG display and a MacBook Air M2. The MacBook Air shows the same color shift, but since it has less dynamic range overall, there isn't as much difference between NSImageView and MTKView.
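For what it's worth, a hedged sketch of the CAMetalLayer/MTKView configuration that EDR Core Image rendering usually needs; the properties are real API, but treating this as the fix for the shift described above is an assumption:

import MetalKit
import CoreImage

// Sketch: configure an MTKView's backing CAMetalLayer for extended-dynamic-range output.
// Assumes rendering is done in an extended linear color space end to end.
func configureForEDR(_ view: MTKView) {
    view.colorPixelFormat = .rgba16Float
    guard let layer = view.layer as? CAMetalLayer else { return }
    layer.wantsExtendedDynamicRangeContent = true
    layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
    // When drawing through CIRenderDestination, give it the same extended linear
    // color space so Core Image and the layer agree on the encoding.
}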
Replies: 4 · Boosts: 0 · Views: 715 · Activity: 3w
Changing CIKernel sampler coord causes chaos
I feel like I'm missing something really simple. I've got the simplest possible CIKernel; it looks like this:

extern "C" float4 Simple(coreimage::sampler s) {
    float2 current = s.coord();
    float2 anotherCoord = float2(current.x + 1.0, current.y);
    float4 sample = s.sample(anotherCoord); // s.sample(current) works fine
    return sample;
}

It's (in my mind) incrementing the x position of the sampler by 1 and sampling the neighboring pixel. What I get in practice is a bunch of banded garbage (pictured below). The sampler seems to be pretty much undocumented, so I have no idea whether I'm incrementing by the right amount to advance one pixel. The weird banding is still present if I clamp anotherCoord to s.extent(), but it behaves normally if I sample s.coord() unchanged. I'm trying to write a box blur that samples and averages neighboring pixels and am completely blocked by this. What am I missing?
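If I'm reading the sampler semantics right, s.coord() is in the sampler's own coordinate space rather than working-space pixels, so current.x + 1.0 is not "one pixel to the right"; the usual pattern is to take the working-space coordinate from a coreimage::destination parameter and map it through s.transform(). A sketch under that assumption (written in Metal rather than Swift, since CI kernels are Metal; it loads the same way as the kernel above):

#include <CoreImage/CoreImage.h>
using namespace metal;

// Sample the pixel one working-space unit (one pixel) to the right of the output pixel.
extern "C" float4 SimpleNeighbor(coreimage::sampler s, coreimage::destination dest) {
    float2 shifted = dest.coord() + float2(1.0, 0.0); // working-space pixels
    return s.sample(s.transform(shifted));            // map into sampler space before sampling
}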
Replies: 2 · Boosts: 0 · Views: 330 · Activity: May ’24
IOSurface objects aren't released in ScreenCaptureKit
I use the ScreenCaptureKit, CoreVideo, CoreImage, and CoreMedia frameworks to capture screenshots on macOS 14.0 and higher. Example of creating a CGImageRef:

CVImageBufferRef cvImageBufferRef = ..;
CIImage *temporaryImage = [CIImage imageWithCVPixelBuffer:cvImageBufferRef];
CIContext *temporaryContext = [CIContext context];
CGImageRef imageRef = [temporaryContext createCGImage:temporaryImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(cvImageBufferRef), CVPixelBufferGetHeight(cvImageBufferRef))];

Profiling with Xcode Instruments (Leaks & Allocations) shows constantly increasing memory usage, but no memory leaks are detected, and there are many calls that create IOSurface objects which are never released. Most of the memory is All Anonymous VM - VM: IOSurface. The heaviest stack trace:

[RPIOSurfaceObject initWithCoder:]
[IOSurface initWithMachPort:]
IOSurfaceClientLookupFromMachPort

I don't create any IOSurface objects myself; these are low-level calls. In the allocation list I can see many allocations of IOSurface objects, but no information about them being released. Given this, how can I release them to avoid permanently increasing memory consumption?
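Not a confirmed fix, but two mitigations commonly suggested for this pattern are reusing a single CIContext instead of creating one per frame, and draining each frame's bridged objects with an autorelease pool. A small Swift sketch of the same idea (the post's code is Objective-C):

import CoreImage
import CoreVideo

// Reuse one context; a per-frame CIContext keeps its own caches (and surfaces) alive.
let sharedCIContext = CIContext()

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    return autoreleasepool {
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        return sharedCIContext.createCGImage(image, from: image.extent)
    }
}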
Replies: 2 · Boosts: 0 · Views: 442 · Activity: May ’24
Lossy option has no effect when exporting PNG to HEIF
Under Sonoma 14.4 the compression option doesn't work with PNG images. It works for JPG/HEIF. Preview can export a PNG file to HEIC with the compression option. What am I missing? Previously this worked. I am trying with 0.01 and 0.9 as the compression quality and the file size is the same for PNG. Is Preview using some trick to convert the image using ciContext.createCGImage? PS: A compression option of 1.0 was broken under the 14.4 RC and Preview created an empty file.

func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }
    var mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }
    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
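A quick diagnostic sketch, using the post's heifImageDataUsingDestination(at:compressionQuality:) function, to confirm whether the lossy option is being honored at all for a given source (the path is hypothetical):

// Encode the same source at several qualities and compare the output sizes.
let url = URL(fileURLWithPath: "/path/to/test.png") // hypothetical test file
for quality in [0.01, 0.25, 0.5, 0.9, 1.0] {
    let byteCount = heifImageDataUsingDestination(at: url, compressionQuality: CGFloat(quality))?.count ?? 0
    print("quality \(quality): \(byteCount) bytes")
}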
Replies: 4 · Boosts: 0 · Views: 593 · Activity: May ’24
iOS 17 UIImageReader has memory leaks
In my SwiftUI view, I try to load the image from data.

var body: some View {
    Group {
        if let data = model.detailImageData, let uiimage = UIImage(data: data) { // no memory issue
            Image(uiImage: uiimage)
                .resizable()
                .scaledToFit()
        }
    }
}

But I want to get the HDR style of my image, so I use:

if let data = model.detailImageData, let uiimage = UIImageReader.default.image(data: data) { // memory leaks!!!

When I change the data, the memory of the previous image is never freed, which finally caused my app to crash. You can see it in the Instruments screenshot.
Replies: 1 · Boosts: 1 · Views: 384 · Activity: Apr ’24
Memory Leak in ImageIO?
I use this code to show the image in HDR in SwiftUI:

struct HDRImageView: UIViewRepresentable {
    // Set up a common reader for all UIImage read requests.
    static let reader: UIImageReader = {
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        return UIImageReader(configuration: config)
    }()

    let data: Data?
    let enableHDR: Bool

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView()
        view.preferredImageDynamicRange = enableHDR ? .high : .standard
        update(view)
        // Set this view to fit itself to the parent view.
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        view.setContentHuggingPriority(.required, for: .horizontal)
        view.setContentHuggingPriority(.required, for: .vertical)
        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        update(view)
    }

    func update(_ view: UIImageView) {
        autoreleasepool { // not working
            if let data = data {
                view.image = nil // setting to nil first is not working
                view.image = HDRImageView.reader.image(data: data)
            } else {
                view.image = nil
            }
            view.preferredImageDynamicRange = enableHDR ? .high : .standard
        }
    }
}

But when I update the input data, it seems that the old image data cannot be freed. After several changes, the app takes too much memory and crashes. I found that VM: ImageIO_Surface_Data and VM: Image_IO take up the memory. If I change the HDRImageView into a normal Image(uiimage: UIImage(data:)), it no longer has this issue. Is it a memory leak, and how can I solve it? Update: I then tried using Image(_:cgImage), and it appears to give the same result.
Replies: 0 · Boosts: 0 · Views: 360 · Activity: Apr ’24
How to get the position of dominant colors in CGImage?
So, my app needs to find the dominant palette and the positions in the image of the k most dominant colors. I followed the very useful sample project from the vImage documentation (https://developer.apple.com/documentation/accelerate/bnns/calculating_the_dominant_colors_in_an_image) and the algorithm works fine, although I can't wrap my head around how I should go about linking those colors to a point in the image. Since the algorithm works by filling storages first, I also tried filling an array of CGPoints called locationStorage and working with that:

// filling the array
for i in 0...width {
    for j in 0...height {
        locationStorage.append(CGPoint(x: i, y: j))
    }
. . .
// working with the array
let randomIndex = Int.random(in: 0 ..< width * height)
centroids.append(Centroid(red: redStorage[randomIndex],
                          green: greenStorage[randomIndex],
                          blue: blueStorage[randomIndex],
                          position: locationStorage[randomIndex]))
}

struct Centroid {
    /// The red channel value.
    var red: Float
    /// The green channel value.
    var green: Float
    /// The blue channel value.
    var blue: Float
    /// The number of pixels assigned to this cluster center.
    var pixelCount: Int = 0
    var position: CGPoint = CGPointZero

    init(red: Float, green: Float, blue: Float, position: CGPoint) {
        self.red = red
        self.green = green
        self.blue = blue
        self.position = position
    }
}

It's not accurate, though. I also tried brute-forcing every pixel in the image to get as close as possible to each color, but I think it's too slow. What do you think my approach should be? Let me know if you need additional info. Please be kind, I'm learning Swift.
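One way to recover positions without a separate location array: the storages are row-major, so flat index i maps back to (x, y) = (i % width, i / width). A sketch under that assumption, reusing the post's Centroid struct and the sample project's channel storages:

// For each centroid, find the pixel whose color is nearest to it and convert the flat
// index back to an (x, y) position. Assumes row-major redStorage/greenStorage/blueStorage.
func nearestPixelPositions(width: Int, height: Int,
                           redStorage: [Float], greenStorage: [Float], blueStorage: [Float],
                           centroids: [Centroid]) -> [CGPoint] {
    centroids.map { centroid in
        var bestIndex = 0
        var bestDistance = Float.greatestFiniteMagnitude
        for i in 0 ..< width * height {
            let dr = redStorage[i] - centroid.red
            let dg = greenStorage[i] - centroid.green
            let db = blueStorage[i] - centroid.blue
            let distance = dr * dr + dg * dg + db * db
            if distance < bestDistance {
                bestDistance = distance
                bestIndex = i
            }
        }
        return CGPoint(x: bestIndex % width, y: bestIndex / width)
    }
}

This is still O(k · n), but it only runs once after clustering; if it is too slow, the per-channel differences can be vectorized with vDSP.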
Replies: 3 · Boosts: 0 · Views: 428 · Activity: Apr ’24
Save ARDepthData as .tiff
I would like to save the depth map from ARDepthData as .tiff, but I notice my output tiff distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the tiff:

import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)

I am reading the file like this in Python:

import tifffile
depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()

which creates this image. The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away. Notably, the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region. Even correcting for this, though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
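A diagnostic sketch that may help separate a capture problem from a save/format problem: read a Float32 value straight out of the CVPixelBuffer and compare it with what tifffile reports at the same pixel. It assumes the buffer is kCVPixelFormatType_DepthFloat32, which is what sceneDepth.depthMap normally provides:

import CoreVideo

// Print the raw Float32 depth at (x, y), bypassing Core Image entirely.
func rawDepth(in buffer: CVPixelBuffer, x: Int, y: Int) -> Float? {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    return base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)[x]
}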
Replies: 1 · Boosts: 1 · Views: 445 · Activity: Mar ’24
How to set the Portrait effect on/off in live camera view in AVFoundation in Swift?
I am using AVFoundation for live camera view. I can get my device from the current video input (of type AVCaptureDeviceInput) like: let device = videoInput.device The device's active format has a isPortraitEffectSupported. How can I set the Portrait Effect on and off in live camera view? I setup the camera like this: private var videoInput: AVCaptureDeviceInput! private let session = AVCaptureSession() private(set) var isSessionRunning = false private var renderingEnabled = true private let videoDataOutput = AVCaptureVideoDataOutput() private let photoOutput = AVCapturePhotoOutput() private(set) var cameraPosition: AVCaptureDevice.Position = .front func configureSession() { sessionQueue.async { [weak self] in guard let strongSelf = self else { return } if strongSelf.setupResult != .success { return } let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: {$0.position == strongSelf.cameraPosition}) guard let videoDevice = defaultVideoDevice else { print("Could not find any video device") strongSelf.setupResult = .configurationFailed return } do { strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice) } catch { print("Could not create video device input: \(error)") strongSelf.setupResult = .configurationFailed return } strongSelf.session.beginConfiguration() strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo // Add a video input. guard strongSelf.session.canAddInput(strongSelf.videoInput) else { print("Could not add video device input to the session") strongSelf.setupResult = .configurationFailed strongSelf.session.commitConfiguration() return } strongSelf.session.addInput(strongSelf.videoInput) // Add a video data output if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) { strongSelf.session.addOutput(strongSelf.videoDataOutput) strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)] strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue) } else { print("Could not add video data output to the session") strongSelf.setupResult = .configurationFailed strongSelf.session.commitConfiguration() return } // Add photo output if strongSelf.session.canAddOutput(strongSelf.photoOutput) { strongSelf.session.addOutput(strongSelf.photoOutput) strongSelf.photoOutput.isHighResolutionCaptureEnabled = true } else { print("Could not add photo output to the session") strongSelf.setupResult = .configurationFailed strongSelf.session.commitConfiguration() return } strongSelf.session.commitConfiguration() } } func prepareSession(completion: @escaping (SessionSetupResult) -&gt; Void) { sessionQueue.async { [weak self] in guard let strongSelf = self else { return } switch strongSelf.setupResult { case .success: strongSelf.addObservers() if strongSelf.photoOutput.isDepthDataDeliverySupported { strongSelf.photoOutput.isDepthDataDeliveryEnabled = true } if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) { if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) { unwrappedPhotoOutputConnection.videoOrientation = photoOrientation } } strongSelf.dataOutputQueue.async { strongSelf.renderingEnabled = true } strongSelf.session.startRunning() strongSelf.isSessionRunning = strongSelf.session.isRunning strongSelf.mainQueue.async { strongSelf.previewView.videoPreviewLayer.session = strongSelf.session } completion(strongSelf.setupResult) default: 
completion(strongSelf.setupResult) } } }

Then I set isPortraitEffectsMatteDeliveryEnabled like this:

func setPortraitAffectActive(_ state: Bool) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
            strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
        }
    }
}

However, I don't see any Portrait Effect in the live camera view! Any ideas why?
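As far as I can tell (worth verifying against the AVFoundation docs), the Portrait video effect is a system-wide toggle the user controls from Control Center; apps can only observe whether it is enabled for a device, and isPortraitEffectsMatteDeliveryEnabled governs matte delivery for captured photos, which is a different feature. A small observation sketch under that assumption:

import AVFoundation

var portraitObservation: NSKeyValueObservation?

// Observe whether the system-wide Portrait video effect is active on this device.
func observePortraitEffect(on device: AVCaptureDevice) {
    print("Portrait effect supported:", device.activeFormat.isPortraitEffectSupported)
    portraitObservation = device.observe(\.isPortraitEffectEnabled, options: [.initial, .new]) { device, _ in
        print("Portrait effect enabled:", device.isPortraitEffectEnabled)
    }
}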
Replies: 2 · Boosts: 0 · Views: 1.6k · Activity: Feb ’24
AVCaptureVideoDataOutput video range value exceed the range
CVPixelBuffer.h defines:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420', /* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16bits, video-range (luma=[64,940] chroma=[64,960]) */

But when I set the camera output to the formats above, I find the output pixel buffer's values exceed those ranges: I can see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange. Is this a bug, or is something wrong with the output? If it isn't, how can I choose the correct matrix to transfer the YUV data to RGB?
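A hedged sketch of one way to avoid hand-picking a matrix: the buffer's attachments describe its YCbCr matrix and transfer function, and CIImage(cvPixelBuffer:) applies the appropriate conversion for you:

import CoreVideo
import CoreImage

// Inspect the attachments that describe how this buffer should be converted to RGB.
func logColorAttachments(_ pixelBuffer: CVPixelBuffer) {
    let matrix = CVBufferCopyAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, nil)
    let transfer = CVBufferCopyAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, nil)
    print("YCbCr matrix:", String(describing: matrix))
    print("Transfer function:", String(describing: transfer))
}

// Letting Core Image do the conversion sidesteps choosing a matrix at all.
func rgbImage(from pixelBuffer: CVPixelBuffer) -> CIImage {
    CIImage(cvPixelBuffer: pixelBuffer)
}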
Replies: 1 · Boosts: 0 · Views: 1.1k · Activity: Feb ’24
What is `CIRAWFilter.linearSpaceFilter` for and when to use it?
I haven't found any really thorough documentation or guidance on the use of CIRAWFilter.linearSpaceFilter. The API documentation calls it "an optional filter you can apply to the RAW image while it's in linear space." Can someone provide insight into what this means and what the linear space filter is useful for? When would we use this linear space filter instead of a filter on the output of CIRAWFilter? Thank you.
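A small sketch of how the property is used, with the caveat that the interpretation is partly an assumption: the assigned filter runs on the demosaiced, still-linear image inside the RAW pipeline (before tone mapping and output conversion), which matters for operations that should behave linearly with light, such as blurs or exposure changes:

import CoreImage

func processRAW(at url: URL) -> CIImage? {
    guard let rawFilter = CIRAWFilter(imageURL: url) else { return nil }

    // Applied while the image is still scene-referred linear; Core Image supplies
    // the filter's inputImage. Compare with applying the same blur to outputImage.
    rawFilter.linearSpaceFilter = CIFilter(name: "CIGaussianBlur",
                                           parameters: [kCIInputRadiusKey: 4.0])

    return rawFilter.outputImage
}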
Replies: 0 · Boosts: 0 · Views: 449 · Activity: Feb ’24
[Metal] 9072 by 12096 iosurface is too large for GPU
I take a picture using the iPhone's camera. The taken resolution is 3024.0 x 4032. I then have to apply a watermark to this image. After a bunch of trial and error, the method I decided to use was taking a snapshot of a watermark UIView, and drawing that over the image, like so: // Create the watermarked photo. let result: UIImage=UIGraphicsImageRenderer(size: image.size).image(actions: { _ in   image.draw(in: .init(origin: .zero, size: image.size))   let watermark: Watermark = .init(     size: image.size,     scaleFactor: image.size.smallest / self.frame.size.smallest   )   watermark.drawHierarchy(in: .init(origin: .zero, size: image.size), afterScreenUpdates: true) }) Then with the final image — because the client wanted it to have a filename as well when viewed from within the Photos app and exported from it, and also with much trial and error — I save it to a file in a temporary directory. I then save it to the user's Photo library using that file. The difference as compared to saving the image directly vs saving it from the file is that when saved from the file the filename is used as the filename within the Photos app; and in the other case it's just a default photo name generated by Apple. The problem is that in the image saving code I'm getting the following error: [Metal] 9072 by 12096 iosurface is too large for GPU And when I view the saved photo it's basically just a completely black image. This problem only started when I changed the AVCaptureSession preset to .photo. Before then there was no errors. Now, the worst problem is that the app just completely crashes on drawing of the watermark view in the first place. When using .photo the resolution is significantly higher, so the image size is larger, so the watermark size has to be commensurately larger as well. iOS appears to be okay with the size of the watermark UIView. However, when I try to draw it over the image the app crashes with this message from Xcode: So there's that problem. But I figured that could be resolved by taking a more manual approach to the drawing of the watermark then using a UIView snapshot. So it's not the most pressing problem. What is, is that even after the drawing code is commented out, I still get the iosurface is too large error. Here's the code that saves the image to the file and then to the Photos library: extension UIImage {   /// Save us with the given name to the user's photo album.   /// - Parameters:   ///  - filename: The filename to be used for the saved photo. Behavior is undefined if the filename contain characters other than what is represented by this regular expression [A-Za-z0-9-_]. A decimal point for the file extension is permitted.   ///  - location: A GPS location to save with the photo.   fileprivate func save(_ filename: String, _ location: Optional&lt;Coordinates&gt;) throws {           // Create a path to a temporary directory. Adding filenames to the Photos app form of images is accomplished by first creating an image file on the file system, saving the photo using the URL to that file, and then deleting that file on the file system.     //   A documented way of adding filenames to photos saved to Photos was never found.     // Furthermore, we save everything to a `tmp` directory as if we just tried deleting individual photos after they were saved, and the deletion failed, it would be a little more tricky setting up logic to ensure that the undeleted files are eventually     // cleaned up. 
But by using a `tmp` directory, we can save all temporary photos to it, and delete the entire directory following each taken picture.     guard       let tmpUrl: URL=try {         guard let documentsDirUrl=NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {           throw GeneralError("Failed to create URL to documents directory.")         }         let url: Optional&lt;URL&gt; = .init(string: documentsDirUrl + "/tmp/")         return url       }()     else {       throw GeneralError("Failed to create URL to temporary directory.")     }           // A path to the image file.     let filePath: String=try {               // Reduce the likelihood of photos taken in quick succession from overwriting each other.       let collisionResistantPath: String="\(tmpUrl.path(percentEncoded: false))\(UUID())/"               // Make sure all directories required by the path exist before trying to write to it.       try FileManager.default.createDirectory(atPath: collisionResistantPath, withIntermediateDirectories: true, attributes: nil)               // Done.       return collisionResistantPath + filename     }()     // Create `CFURL` analogue of file path.     guard let cfPath: CFURL=CFURLCreateWithFileSystemPath(nil, filePath as CFString, CFURLPathStyle.cfurlposixPathStyle, false) else {       throw GeneralError("Failed to create `CFURL` analogue of file path.")     }           // Create image destination object.     //     // You can change your exif type here.     //   This is a note from original author. Not quite exactly sure what they mean by it. Link in method documentation can be used to refer back to the original context.     guard let destination=CGImageDestinationCreateWithURL(cfPath, UTType.jpeg.identifier as CFString, 1, nil) else {       throw GeneralError("Failed to create `CGImageDestination` from file url.")     }           // Metadata properties.     let properties: CFDictionary={               // Place your metadata here.       // Keep in mind that metadata follows a standard. You can not use custom property names here.       let tiffProperties: Dictionary&lt;String, Any&gt;=[:]               return [         kCGImagePropertyExifDictionary as String: tiffProperties       ] as CFDictionary     }()           // Create image file.     guard let cgImage=self.cgImage else {       throw GeneralError("Failed to retrieve `CGImage` analogue of `UIImage`.")     }     CGImageDestinationAddImage(destination, cgImage, properties)     CGImageDestinationFinalize(destination)             // Save to the photo library.     PHPhotoLibrary.shared().performChanges({       guard let creationRequest: PHAssetChangeRequest = .creationRequestForAssetFromImage(atFileURL: URL(fileURLWithPath: filePath)) else {         return       }       // Add metadata to the photo.       creationRequest.creationDate = .init()       if let location=location {         creationRequest.location = .init(latitude: location.latitude, longitude: location.longitude)       }     }, completionHandler: { _, _ in       try? FileManager.default.removeItem(atPath: tmpUrl.absoluteString)     })   } } If anyone can provide some insight as to what's causing the iosurface is too large error and what can be done to resolve it, that'd be awesome.
Replies: 3 · Boosts: 0 · Views: 1.7k · Activity: Feb ’24
CoreImage createCGImage Crash
I found that the app reported a crash from a pure virtual function call, which I could not reproduce. A third-party library is referenced: https://github.com/lincf0912/LFPhotoBrowser (it provides smearing, blurring, and mosaic processing of images). Crash code:

if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}

- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable )(CIImage *ciimage))filterHandler {
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width/midImage.extent.size.width, size.height/midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Start processing the image
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}

This line triggers the crash:

CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];

b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
Replies: 1 · Boosts: 0 · Views: 513 · Activity: Jan ’24
Use CoreImage filters on Vision Pro (visionOS) view
I have an iOS app that uses the (camera) video feed and applies Core Image filters to simulate a specific real-world effect (for educational purposes). Now I want to make a similar app for visionOS and apply the same Core Image filters to the content (live view) users see while wearing the Apple Vision Pro headset. Is there a way to do it with the current APIs, and what would you recommend? I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters somehow using that? I know visionOS is a young/fresh platform, but any help would be great! Thank you!
Replies: 1 · Boosts: 0 · Views: 842 · Activity: Jan ’24
I want to move a CoreImage task to the background...
It feels like this should be easy, but I'm having conceptual problems about how to do this. Any help would be appreciated. I have a sample app below that works exactly as expected. I'm able to use the Slider and Stepper to generate inputs to a function that uses Core Image filters to manipulate my input image. This all works as expected, but it's doing some O(n) CI work on the main thread, and I want to move it to a background thread. I'm pretty sure this can be done using Combine; here is the pseudocode I imagine would work for me:

func doStuff() {
    // subscribe to options changes
    // .receive on background thread
    // .debounce
    // .map { model.inputImage.combine(options: $0)
    // return an object on the main thread.
    // update an @Published property?
}

Below is the POC code for my project. Any guidance as to where I should use Combine to do this would be greatly appreciated. (Also, please let me know if you think Combine is not the best way to tackle this. I'd be open to alternative implementations.)

struct ContentView: View {
    @State private var options = CombineOptions.basic
    @ObservedObject var model = Model()

    var body: some View {
        VStack {
            Image(uiImage: enhancedImage)
                .resizable()
                .aspectRatio(contentMode: .fit)
            Slider(value: $options.scale)
            Stepper(value: $options.numberOfImages, label: { Text("\(options.numberOfImages)") })
        }
    }

    private var enhancedImage: UIImage {
        return model.inputImage.combine(options: options)
    }
}

class Model: ObservableObject {
    let inputImage: UIImage = UIImage.init(named: "IMG_4097")!
}

struct CombineOptions: Codable, Equatable {
    static let basic: CombineOptions = .init(scale: 0.3, numberOfImages: 10)
    var scale: Double
    var numberOfImages: Int
}
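A minimal Combine sketch of the pipeline the pseudocode describes, assuming the post's CombineOptions and UIImage.combine(options:) stay as they are; ImageModel here is a renamed variant of the post's Model that owns the pipeline:

import Combine
import UIKit

final class ImageModel: ObservableObject {
    let inputImage: UIImage = UIImage(named: "IMG_4097")! // asset name from the post

    @Published var options: CombineOptions = .basic          // bound to the Slider/Stepper
    @Published private(set) var enhancedImage: UIImage?      // rendered off the main thread

    private var cancellables = Set<AnyCancellable>()

    init() {
        let input = inputImage
        $options
            .debounce(for: .milliseconds(150), scheduler: DispatchQueue.global(qos: .userInitiated))
            .map { options in
                input.combine(options: options)   // heavy Core Image work on a background queue
            }
            .receive(on: DispatchQueue.main)      // hop back to the main thread to publish
            .sink { [weak self] image in
                self?.enhancedImage = image
            }
            .store(in: &cancellables)
    }
}

The view would then bind $model.options.scale and $model.options.numberOfImages to its controls and display model.enhancedImage instead of computing enhancedImage inline.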
Replies: 1 · Boosts: 0 · Views: 562 · Activity: Jan ’24
iPhone 15 Pro Front Camera quality issues and poor face photos
This isn't just my observation; lots of people around me say the same, and you can find tonnes of feedback on the interwebs. The processing of images taken with the front-facing camera on the 15 (and I think the 14 before it) is so over-processed that I'm aware of people jumping to other phones. And they're right; the 15 exacerbates that even more. You can turn off HDR (a viewing thing) and you can prioritise speed over processing, but you really cannot turn this off. You can take a Live Photo and then choose a different frame, and the processing is less. As a developer I look at that and think it's bonkers; it's just software, so why hasn't anyone produced a camera app that makes faces look good (not AI processing) from the front camera? I could be all enthusiastic and say I will develop one, but it seems like a simple, obvious fix for Apple. To have the settings so bad that I have friends returning their phones seems pretty bad. And as a photographer I would agree. There's a lot to love with Apple on the 15, including Log and ProRes, but a simple selfie produces such ugly results. That's an actual problem. So throwing it out there: what does everyone think? Cheers, Paul
Replies: 0 · Boosts: 0 · Views: 689 · Activity: Jan ’24