Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
Hello, I am experiencing an unexpected issue related to PhotoKit, where an occasional crash is occurring in an area it shouldn't. I've provided the crash log below for reference.
Last Exception Backtrace:
0 CoreFoundation 0x1aa050860 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x1a2363c80 objc_exception_throw + 60
2 CoreFoundation 0x1a9f68218 -[__NSDictionaryM setObject:forKeyedSubscript:] + 1184
3 Photos 0x1c11cb178 -[PHCloudIdentifierLookup _lookupCodeSpecificCloudIdentifierStrings:forIdentifierCode:] + 572
4 Photos 0x1c11cb278 -[PHCloudIdentifierLookup _lookupLocalIdentifiersForIdentifierCode:codeSpecificCloudIdentifierStrings:] + 92
5 Photos 0x1c11cbbec __69-[PHCloudIdentifierLookup lookupLocalIdentifiersForCloudIdentifiers:]_block_invoke + 88
6 CoreFoundation 0x1a9f67d70 NSDICTIONARY_IS_CALLING_OUT_TO_A_BLOCK + 24
7 CoreFoundation 0x1a9f67bec -[__NSDictionaryM enumerateKeysAndObjectsWithOptions:usingBlock:] + 288
8 Photos 0x1c11cbb40 -[PHCloudIdentifierLookup lookupLocalIdentifiersForCloudIdentifiers:] + 488
9 Photos 0x1c125e6e4 -[PHPhotoLibrary(CloudIdentifiers) localIdentifierMappingsForCloudIdentifiers:] + 96
10 Photos 0x1c1104f18 0x1c1101000 + 16152
"exceptionReason" : {"arguments":["-[__NSDictionaryM setObject:forKeyedSubscript:]"],"format_string":"*** %s: key cannot be nil","name":"NSInvalidArgumentException","type":"objc-exception","composed_message":"*** -[__NSDictionaryM setObject:forKeyedSubscript:]: key cannot be nil","class":"NSException"},
The crash can be reproduced by calling PHPhotoLibrary.localIdentifierMappings(for:) with random cloud identifiers, as demonstrated below:
let cloudIdentifiers = (0...10).map { _ in PHCloudIdentifier(stringValue: UUID().uuidString) }
let localIdsMap = PHPhotoLibrary.shared().localIdentifierMappings(for: cloudIdentifiers)
I've noticed that the method seems to crash consistently when an incorrect cloud identifier is input. This is surprising to me since the return type is [PHCloudIdentifier : Result<String, Error>], so I was anticipating an explicit error.
The issue can be reproduced on both Xcode 15.0 & iOS 17.2 and Xcode 14.3.1 & iOS 16.0.
While I'm fairly certain that only valid identifiers are used in my app, I would still like to know: is there a way to validate a cloud identifier before calling this method?
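For reference, this is how I expected the documented return type to be consumed; a minimal sketch, assuming the call returns instead of raising the Objective-C exception above, with unknown identifiers surfacing as failures (for example PHPhotosError.identifierNotFound):
import Photos

let identifiers = [PHCloudIdentifier(stringValue: "...")] // placeholder; real cloud identifier strings go here
let mappings = PHPhotoLibrary.shared().localIdentifierMappings(for: identifiers)
for (cloudIdentifier, result) in mappings {
    switch result {
    case .success(let localIdentifier):
        print("\(cloudIdentifier.stringValue) -> \(localIdentifier)")
    case .failure(let error):
        // Expected place for invalid/unknown identifiers to show up instead of a crash.
        print("No local identifier for \(cloudIdentifier.stringValue): \(error)")
    }
}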
There is a need to obtain data on the position of the TrueDepth camera matrix. I couldn't find anything in the documentation. Has anyone solved this problem? Is it possible to obtain this data at all?
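In case it helps anyone looking at the same thing: one place calibration data for the TrueDepth camera is exposed is AVCameraCalibrationData attached to depth captures. A minimal sketch, assuming an AVCaptureSession already configured with a .builtInTrueDepthCamera input and an AVCapturePhotoOutput with depth delivery enabled:
import AVFoundation

// Inside the AVCapturePhotoCaptureDelegate callback of a depth-enabled capture.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    // cameraCalibrationData may be nil unless depth delivery was enabled on both
    // the photo output and the AVCapturePhotoSettings used for this capture.
    guard let calibration = photo.depthData?.cameraCalibrationData else { return }
    let intrinsics = calibration.intrinsicMatrix   // 3x3 matrix: focal lengths and principal point, in pixels
    let extrinsics = calibration.extrinsicMatrix   // rotation and translation of the camera
    print(intrinsics, extrinsics)
}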
Hi everyone, I need your help. I am working on an application where I capture a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am getting an error on an iPhone X running iOS 16.7.4.
ERROR:
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
Here is my Code:
final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
            // Step 9
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        // Step 12
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            // Step 13
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported", message: "Your device does not support scanning a code from an item. Please use a device with a camera.", preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(error)")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("NO image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
I don't know what I am doing wrong.
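One thing I'm checking myself (a hedged sketch, not a confirmed fix): -11803 "Cannot Record" is typically reported when the capture is requested while the session or its video connection isn't ready, or with a codec the output doesn't offer on that device. A more defensive cameraButtonPressed() could look like this:
func cameraButtonPressed() {
    // Only capture while the session is running and the photo output's connection is live.
    guard captureSession?.isRunning == true,
          let connection = stillImageOutput.connection(with: .video),
          connection.isActive, connection.isEnabled else {
        print("Capture session or connection not ready")
        return
    }
    // Use a codec this output actually reports as available on the current device/OS.
    let codec: AVVideoCodecType = stillImageOutput.availablePhotoCodecTypes.contains(.jpeg) ? .jpeg : .hevc
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: codec])
    stillImageOutput.capturePhoto(with: settings, delegate: self)
}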
This isn't just my observation; lots of people around me say the same, and you can find tonnes of feedback on the web.
Images from the front-facing camera on the 15 (and I think the 14 before it) are so over-processed that I know people who have jumped to other phones, and they're right; the 15 exacerbates it even more. You can turn off HDR (a viewing thing) and you can prioritise speed over processing, but you can't really turn this off. If you take a Live Photo and then choose a different frame, the processing is less.
As a developer I look at that and think it's bonkers. It's just software, so why hasn't anyone produced a camera app that makes faces look good (without AI processing) from the front camera?
I could be all enthusiastic and say I'll develop one, but it seems like a simple, obvious fix for Apple.
Having the defaults so bad that I have friends returning their phones seems pretty poor, and as a photographer I would agree. There's a lot to love with Apple on the 15, the Log and ProRes, but a simple selfie produces such ugly results. That's an actual problem.
So throwing it out there. What does everyone think?
cheers
Paul
Hello! I'm trying to save videos asynchronously. I've already used performChanges without the completionHandler, but it didn't work. Can you give me an example? Consider that the variable with the file URL is named fileURL. What would this look like asynchronously?
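For reference, this is the direction I'm exploring; a minimal sketch, assuming iOS 15+ and that fileURL points to a video file the app can read, using the async throwing performChanges(_:) overload:
import Photos

func saveVideo(at fileURL: URL) async {
    do {
        try await PHPhotoLibrary.shared().performChanges {
            // Create the asset from the video file inside the change block.
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
        }
        print("Video saved to the photo library")
    } catch {
        print("Could not save video: \(error)")
    }
}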
My app uses camera and photo library. I found that if a user follows certain steps, they will no longer be able to change the photo permissions for my app in the Settings app.
The steps are as follows:
1. Press the camera button in the app to launch the camera.
2. Take a picture with camera permissions granted.
3. Grant ".addOnly" permission to the photo library.
4. Press the photo library button in the app to read the photo library.
5. Deny ".readWrite" permission to the photo library.
After step 5, the Settings app only shows items to switch ".addOnly" permissions, but not ".readWrite" permissions.
I am aware that in iOS 14 or later, the permission required after a photo is taken with the camera should be ".addOnly". I therefore suspect that this problem also occurs in other apps.
So far I have devised my app to deal with this problem, but is this the expected behavior of the Settings app? If so, how can I avoid this problem?
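For reference, a minimal sketch of how the two access levels from the steps above are requested (iOS 14+); the handlers and their placement are illustrative only:
import Photos

func requestPhotoAccess() {
    // Step 3: add-only access, typically prompted after saving a captured photo.
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        print("addOnly status: \(status)")
    }
    // Step 5: read/write access, prompted when the app reads the library.
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        print("readWrite status: \(status)")
    }
}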
Hello,
I came across the Object Capture for iOS example from WWDC23, which utilizes LiDAR sensor.
However, I’m interested in using the TrueDepth camera system instead.
What I have tried is to save depth photos (.HEIC) to the Images/ folder (based on modifying the example below), hoping the photogrammetry session can use them. But I haven't been successful so far in starting the 3D reconstruction.
Could there be something I’ve missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use?
Thank you for your assistance.
import AVFoundation
import UIKit
class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let captureSession = AVCaptureSession()

    override init() {
        super.init()
        setupCaptureSession()
    }

    func setupCaptureSession() {
        // Get the front camera (TrueDepth camera)
        guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
            print("Unable to access front camera!")
            return
        }
        do {
            // Create an input object from the camera
            let input = try AVCaptureDeviceInput(device: frontCamera)
            // Add the input to the capture session
            captureSession.addInput(input)
        } catch {
            print("Unable to create AVCaptureDeviceInput: \(error)")
        }
        // Add the photo output to the capture session first; depth support is only
        // reported once the output is connected to a TrueDepth input.
        captureSession.addOutput(photoOutput)
        // Check if depth data capture is supported
        if photoOutput.isDepthDataDeliverySupported {
            // Enable depth data capture
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        // Start the capture session
        captureSession.startRunning()
    }

    func captureDepthPhoto() {
        // Create a photo settings object
        let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        // Capture a photo with depth data
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    // Implement the AVCapturePhotoCaptureDelegate method
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else {
            print("Error while generating image from photo capture data.")
            return
        }
        // Get the documents directory
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        // Append the image directory and a unique image name
        let imagesDirectory = documentsDirectory.appendingPathComponent("Images", isDirectory: true)
        let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
        do {
            // Make sure the Images/ folder exists before writing into it
            try FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
            // Write the image data to the file
            try imageData.write(to: fileURL)
            print("Saved photo with depth data to \(fileURL)")
        } catch {
            print("Failed to write the image data to disk: \(error)")
        }
    }
}
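For context, this is roughly how I'm trying to start the reconstruction from that folder; a minimal sketch with placeholder URLs, and whether plain TrueDepth HEICs (without the samples an ObjectCaptureSession would produce) are acceptable input is exactly what I'm unsure about:
import RealityKit

func reconstructModel(from imagesFolder: URL, to outputURL: URL) throws {
    let configuration = PhotogrammetrySession.Configuration()
    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

    Task {
        // Observe progress, errors, and completion from the session's async output stream.
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fraction):
                print("Progress: \(fraction)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            case .processingComplete:
                print("Reconstruction finished, model at \(outputURL)")
            default:
                break
            }
        }
    }

    // Ask for a model file; .reduced is a conservative detail level.
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])
}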
Hello Apple Developer Community,
I'm excited to make my first post here and am seeking guidance for a feature I'd like to implement in my app. My objective is to enable users to select an image and crop it. Ideally, there should be a visible indicator, like a rectangle, to show the area that will be cropped. Upon clicking the save button, the image would be saved with the selected cropped area.
I'm aiming for functionality similar to the image editor in the Photos app. Is there a straightforward method or integration for this that adheres to Apple's native frameworks, without resorting to external GitLab repositories?
Thank you in advance for your assistance.
Best regards,
Nicola
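A minimal sketch of the saving step, assuming the crop rectangle (in the image's pixel coordinates) has already been chosen with the on-screen rectangle; the overlay UI itself is not shown:
import UIKit

// Crops a UIImage to the given rectangle (in pixel coordinates) and returns the result.
func crop(_ image: UIImage, to rect: CGRect) -> UIImage? {
    guard let cgImage = image.cgImage?.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}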
How can we change the image quality, size, camera, and cadence used during RoomPlan's scanning? We are getting the images from RoomCaptureSession.
Hello,
I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)".
In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":
Thread 19 Queue : com.apple.portrait.effect_init (serial)
Is it a known bug? Is there a workaround for this case?
Best regards
An external webcam is detected by AVCam, but preview and capture are systematically upside down (it may be the same for the FaceTime HD camera).
Is it a known bug? Is there a workaround for this case?
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photos? I can save the Live Photo, but I can't set it as wallpaper; the error is "Motion is not available in iOS 17". There are already applications that allow you to do this, such as VideoToLive. What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
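For what it's worth, this is how I'm saving the photo/video pair as a Live Photo asset; a minimal sketch where imageURL and videoURL are placeholders, and the two files must already carry a matching content identifier in their metadata, which I suspect is the part that's missing:
import Photos

func saveLivePhoto(imageURL: URL, videoURL: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        let request = PHAssetCreationRequest.forAsset()
        // Still image component of the Live Photo.
        request.addResource(with: .photo, fileURL: imageURL, options: nil)
        // Paired video component; pairing relies on matching metadata in both files.
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
    }
}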
On the iPhone 14 Pro Max, the 12 MP output is sharpened so severely that it's hard to look at, while 48 MP takes up too much space; the base model starts at 128 GB, and there's video to store on top of that. The biggest problem right now is that there is no 24 MP mode, which greatly affects the everyday shooting experience. People around me who use the 14 Pro series say the same: the lack of a 24 MP mode is the biggest issue at the moment.
A 24 MP mode is what would be most useful for daily photography; 48 MP is only worth it for landscapes or special shots, since it takes up so much memory. The biggest weakness of the 14 Pro Max right now is its photography: the 12 MP output has long lagged behind Android. Adding a 24 MP mode would be worth far more than an iOS update, and the experience would improve dramatically.
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS?
I know that one can get the cameraCalibrationData from a AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data.
As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture.
I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to a AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporarily closest to when the actual photo was taken.
Is there a better, simpler way for getting the camera intrinsic matrix for a single photo capture?
If not, is there a way to calculate the matrix based on the image's metadata?
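To make the AVCaptureVideoDataOutput workaround I described concrete, here is a minimal sketch (the session setup itself is assumed) of enabling intrinsic-matrix delivery on the connection and reading the matrix off a sample buffer:
import AVFoundation
import simd

// Enable intrinsic matrix delivery on the video data output's connection, if supported.
func enableIntrinsicsDelivery(on videoDataOutput: AVCaptureVideoDataOutput) {
    if let connection = videoDataOutput.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// In the AVCaptureVideoDataOutputSampleBufferDelegate callback, read the attached matrix.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}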
As the title already suggests: is it possible with the current Apple Vision simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro.
My goal is to recognize, for example, a human and attach a 3D object to it, for example a hat. Can this be done?
Hello Community,
I'm planning an app to correct a specific child's behavior.
For this to work, the app needs to run in the background and be triggered to use the front camera when a pre-defined app is on screen (YouTube, for example). Snapshots will be taken every few seconds for image processing and then deleted. When the specific behavior is detected, the app will lower the device volume (and restore it once the behavior is corrected). The user's photos/data are deleted and nothing is sent, saved, or shared.
My main concern is that the app is always in the background and uses the camera frequently.
I'm unsure whether that is possible or allowed, and if so, how stable it will be. Most importantly, I do not want this behavior to be flagged as suspicious when submitting the app to the App Store.
Hope this is clear.
I would appreciate any advice.
Thanks,
Avi
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped. Unfortunately, the 16:9 ratio does not meet my needs.
When I run the session using the AVCaptureSessionPresetPhoto preset and add AVCapturePhotoOutput, I cannot turn off image stabilization.
self.capturePhotoOutput = AVCapturePhotoOutput()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                             for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    // canAddOutput returns a Bool, so compare against true rather than nil
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print("Error configuring the capture session: \(error)")
}
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}
As a temporary solution, I added only AVCaptureVideoDataOutput to the session without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function. However, this time I cannot get a 4K image.
In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                             for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    // canAddOutput returns a Bool, so compare against true rather than nil
    if captureSession?.canAddOutput(videoDataOutput!) == true {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled again.
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print("Error configuring the capture session: \(error)")
}
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try and get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}