Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.


Expected timestamp ranges between AVCapturePhoto photos when using .builtInDualWideCamera?
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled = true, to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. Ideally, our app would have the timestamp difference between the photos in a single capture request be as short as possible, but we don't have a good idea of the theoretical or practical limits of this difference. In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approximately 0.3 ms, which is smaller than I would expect unless the frames are being synchronized incredibly well under the hood. This leaves the following questions unanswered: (1) What range of values should we expect for these timestamp differences between photos? (2) What factors influence this? (3) Is there any way to control these values to ensure they are as small as possible? (This will likely be answered by (2).)
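For context, a minimal sketch of the configuration being described, assuming the session is already wired to a .builtInDualWideCamera input (the function and parameter names are placeholders):

```swift
import AVFoundation

// Enable per-device ("constituent") photo delivery on an output attached to a
// virtual-device input, then request a photo from each physical camera.
func captureConstituentPhotos(device: AVCaptureDevice,
                              output: AVCapturePhotoOutput,
                              delegate: AVCapturePhotoCaptureDelegate) {
    if output.isVirtualDeviceConstituentPhotoDeliverySupported {
        output.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
    }
    let settings = AVCapturePhotoSettings()
    settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
    output.capturePhoto(with: settings, delegate: delegate)
}

// In the delegate, photoOutput(_:didFinishProcessingPhoto:error:) fires once
// per constituent camera; AVCapturePhoto.timestamp (a CMTime) is the value
// to difference between the two deliveries.
```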
2 replies · 1 boost · 521 views · Apr ’24
iPadOS 17.4.1: AVCaptureMetadataOutput delegate not called
After my iPad 6 upgraded from iPadOS 17.3 to 17.4, the AVCaptureMetadataOutput delegate is no longer called. The same problem is described in a Stack Overflow post: https://stackoverflow.com/questions/78128010/ipados-17-4-avcapturemetadataoutput-delegate-not-called-qrscanner An Apple support page says the QR code scanning issue is fixed in iPadOS 17.4.1 ("If your iPad is unable to scan QR codes after updating to iPadOS 17.4" - https://support.apple.com/en-lamr/118614). That part is true; I confirmed it on my iPad 6. But, unfortunately, iPadOS 17.4.1 fixes ONLY QR code scanning. It does not fix other barcode symbologies, such as PDF417. Happening on: iPad (7th Generation), iPad (6th Generation), iPad Pro 12.9-inch (2nd Generation), iPad Pro 10.5-inch.
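For reference, a minimal sketch of the metadata-output setup under discussion (session and delegate wiring assumed); it doesn't work around the OS regression, it just shows where PDF417 is requested:

```swift
import AVFoundation

func addBarcodeScanning(to session: AVCaptureSession,
                        delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let metadataOutput = AVCaptureMetadataOutput()
    guard session.canAddOutput(metadataOutput) else { return }
    session.addOutput(metadataOutput)

    metadataOutput.setMetadataObjectsDelegate(delegate, queue: .main)
    // metadataObjectTypes must be set after the output is added to the
    // session; request both symbologies explicitly.
    metadataOutput.metadataObjectTypes = [.qr, .pdf417]
}
```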
0 replies · 2 boosts · 699 views · Apr ’24
Opening front camera on Vision Pro
Hey! I'm trying to open the front camera in my demo app. From what I read in the Apple docs and forums, if you have configured your Persona you should get that image. But I'm having some issues with it. This is my code:

```swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)
                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)
                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() { }
}
```

In openFrontCamera() I tried using .devices(), .default(), and the other methods you would use on other Apple devices, but they don't work on Vision Pro, and I can't find anything that explains how to open the camera. Has anyone been able to work this out?
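As a diagnostic, one could enumerate which capture devices the OS actually exposes to the app; this is a hedged sketch, not a confirmed visionOS code path, and on visionOS the list may well come back empty for third-party apps, which would explain why .default(...) returns nil:

```swift
import AVFoundation

// Print whatever front-facing video capture devices the system exposes.
// On visionOS this may return no devices at all for third-party apps.
func listFrontCameras() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: .front
    )
    print("Discovered devices:", discovery.devices.map(\.localizedName))
}
```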
1 reply · 0 boosts · 848 views · Apr ’24
PHPhotoLibrary.requestAuthorization just returns .denied on macOS
I am trying to implement the ability to save a photo to the user's photo library on macOS. When I call PHPhotoLibrary.requestAuthorization(for: .addOnly), I just get a .denied status; the user is never prompted for access. I tried adding these entitlements: com.apple.security.personal-information.photos-library and com.apple.security.assets.pictures.read-write. I also tried turning off sandboxing entirely. I tried saving despite the denied authorization, but unsurprisingly that gives me this error: Domain=PHPhotosErrorDomain Code=3311. I can almost do what I want with NSSharingService(named: .addToIPhoto), but that has the side effect of launching Photos. Is there a trick to getting PHPhotoLibrary.requestAuthorization(for: .addOnly) to work? Thanks. John
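One commonly missed piece (an assumption here, since the post doesn't mention it) is the Info.plist usage-description key: without NSPhotoLibraryUsageDescription (and, on some OS versions, NSPhotoLibraryAddUsageDescription for add-only access), the request can be denied without ever prompting. A minimal sketch:

```swift
import Photos

// Assumes NSPhotoLibraryUsageDescription (and possibly
// NSPhotoLibraryAddUsageDescription) is present in the app's Info.plist;
// without a usage string, the system may deny without prompting.
func saveToLibrary(imageURL: URL) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        guard status == .authorized || status == .limited else {
            print("Photo library access denied: \(status.rawValue)")
            return
        }
        PHPhotoLibrary.shared().performChanges({
            PHAssetCreationRequest.forAsset()
                .addResource(with: .photo, fileURL: imageURL, options: nil)
        }) { success, error in
            print("Saved: \(success), error: \(String(describing: error))")
        }
    }
}
```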
4 replies · 1 boost · 1.1k views · Mar ’24
AVMutableCompositionTrack has wrong naturalSize after insertTimeRange with a Live Photo AVAsset
Hello, I fetch a Live Photo AVAsset using PHImageManager and PHAssetResourceManager to get the Data, and then wrap it in an AVAsset via a file URL. Everything works fine, but I also want to trim this AVAsset using AVMutableComposition. I use the insertTimeRange method of AVMutableCompositionTrack, and I don't know why, but the naturalSize of originalVideoTrack and the new videoTrack are different. This happens only with Live Photos; regular videos work fine. It seems like an AVMutableCompositionTrack bug inside AVFoundation. Please give me some info. Thanks!

```swift
PHImageManager.default().requestLivePhoto(
    for: phAsset,
    targetSize: size,
    contentMode: .aspectFill,
    options: livePhotoOptions
) { [weak self] livePhoto, info in
    guard let livePhoto else { return }
    self?.writeAVAsset(livePhoto: livePhoto, fileURL: fileURL)
}

private func writeAVAsset(livePhoto: PHLivePhoto, fileURL: URL) {
    let resources = PHAssetResource.assetResources(for: livePhoto)
    guard let videoResource = resources.first(where: { $0.type == .pairedVideo }) else { return }

    let requestOptions = PHAssetResourceRequestOptions()
    var data = Data()
    dataRequestID = PHAssetResourceManager.default().requestData(
        for: videoResource,
        options: requestOptions,
        dataReceivedHandler: { chunk in
            data.append(chunk)
        },
        completionHandler: { error in
            try? data.write(to: fileURL)
            let avAsset = AVAsset(url: fileURL)
            let composition = AVMutableComposition()
            if let originalVideoTrack = avAsset.tracks(withMediaType: .video).first,
               let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: 0) {
                // originalVideoTrack has naturalSize (width: 1744, height: 1308)
                try? videoTrack.insertTimeRange(timeRange, of: originalVideoTrack, at: .zero)
                videoTrack.preferredTransform = originalVideoTrack.preferredTransform
                // videoTrack has naturalSize (width: 1920.0, height: 1440.0)
            }
        }
    )
}
```
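Not a confirmed fix, but since AVMutableCompositionTrack exposes a settable naturalSize, one workaround worth trying is to copy the source track's geometry onto the composition track explicitly after inserting the time range (a sketch; copyGeometry is a hypothetical helper name):

```swift
import AVFoundation

// Hypothetical workaround: force the composition track to report the same
// geometry as the source track instead of whatever default it derives.
func copyGeometry(from source: AVAssetTrack, to destination: AVMutableCompositionTrack) {
    destination.preferredTransform = source.preferredTransform
    destination.naturalSize = source.naturalSize // naturalSize is settable on the mutable track

    // For display purposes, the oriented size is the transformed naturalSize:
    let transformed = source.naturalSize.applying(source.preferredTransform)
    let displaySize = CGSize(width: abs(transformed.width), height: abs(transformed.height))
    print("display size:", displaySize)
}
```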
0 replies · 0 boosts · 566 views · Mar ’24
What do I need to do to take 48MP photos and set the ISO and exposure duration successfully?
I want to take 48MP photos and get the same ISO and exposure duration that I set.

Configuration:
- Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
- Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
- If AVCaptureDevice.isExposureModeSupported(.custom), set AVCaptureDevice.exposureMode to .custom
- Call AVCaptureDevice.setExposureModeCustomWithDuration:1/20 ISO:100 completionHandler:handler

Taking a photo:
- Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)

The API discussion of setExposureModeCustomWithDuration (https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/) told me: "To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed."

But at the last step, when I set AVCapturePhotoSettings.photoQualityPrioritization to .speed, the photo resolution is (4000, 3000), only 12MP, not (8000, 6000), though the ISO and exposure duration on the photo match what I set. And when I set it to .balanced or .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained on the photo differ from what I set. What do I need to do to take 48MP photos and set the ISO and exposure duration successfully?
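For concreteness, a sketch of the configuration described above (device/output wiring and the function name are assumptions); whether .speed can ever coexist with 48 MP output is exactly the open question:

```swift
import AVFoundation

func configure48MPCustomExposure(device: AVCaptureDevice, output: AVCapturePhotoOutput) throws {
    // 1. Pick a format that advertises the 48 MP photo dimensions.
    guard let format = device.formats.first(where: { f in
        f.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    // 2. Custom exposure: 1/20 s at ISO 100.
    if device.isExposureModeSupported(.custom) {
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 20),
                                     iso: 100,
                                     completionHandler: nil)
    }
    device.unlockForConfiguration()

    // 3. Allow 48 MP on the output, then request it per capture.
    output.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .speed // documented requirement for custom ISO/duration
    // output.capturePhoto(with: settings, delegate: ...) — delegate omitted in this sketch
}
```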
1 reply · 0 boosts · 672 views · Mar ’24
iOS 17.4 breaks 48 MP capture in certain scenarios
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:

Configuration:
- Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
- Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality

Taking a photo:
- Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoSettings.photoQualityPrioritization to .quality

As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
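To help pin down where the 17.4 behavior diverges, a small diagnostic sketch (function name assumed) that logs what the session actually settled on after a manual configuration:

```swift
import AVFoundation

func log48MPState(session: AVCaptureSession, device: AVCaptureDevice, output: AVCapturePhotoOutput) {
    // Expect .inputPriority after setting activeFormat manually.
    print("session preset:", session.sessionPreset.rawValue)
    let dims = device.activeFormat.supportedMaxPhotoDimensions
    print("active format max photo dims:", dims.map { "\($0.width)x\($0.height)" })
    print("output max photo dims:",
          output.maxPhotoDimensions.width, "x", output.maxPhotoDimensions.height)
    print("output quality prioritization:", output.maxPhotoQualityPrioritization.rawValue)
}
```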
3 replies · 0 boosts · 897 views · Mar ’24
After iPadOS 17.4, AVCaptureMetadataOutput no longer detects QR codes on some devices.
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes. After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue: iPad (7th Gen), iPad (6th Gen), iPad Pro (10.5-inch), iPad Pro (12.9-inch, 2nd Gen). This issue has not occurred on any other devices I have; it may only occur on devices with model identifier "iPad7,x". I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after iPadOS 17.4? Or is there some problem on the OS side?
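Two things worth logging when the delegate goes silent (a diagnostic sketch; it assumes the output has already been added to a running session): whether the requested type is actually available on this device/OS, and whether rectOfInterest has been narrowed:

```swift
import AVFoundation

func auditMetadataOutput(_ output: AVCaptureMetadataOutput) {
    // Only meaningful after the output has been added to the session.
    print("available types:", output.availableMetadataObjectTypes)
    print("requested types:", output.metadataObjectTypes)
    // The default is the full frame (0, 0, 1, 1); a stale or incorrect
    // value silently suppresses detections.
    print("rectOfInterest:", output.rectOfInterest)
}
```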
9 replies · 1 boost · 3k views · Mar ’24
AVCaptureSession preview freezing
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then allows the user to take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can tell what is happening? Maybe there is a way to detect when it freezes and then restart it.
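AVCaptureSession does post diagnostics you can hook into; a sketch of observing the runtime-error and interruption notifications (observer tokens aren't retained here for brevity, and restart logic is left as a placeholder):

```swift
import AVFoundation

func observeSession(_ session: AVCaptureSession) {
    let center = NotificationCenter.default
    center.addObserver(forName: .AVCaptureSessionRuntimeError, object: session, queue: .main) { note in
        let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
        print("Session runtime error:", String(describing: error))
        // A restart (stopRunning/startRunning) could be attempted here.
    }
    center.addObserver(forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main) { note in
        print("Session interrupted:", note.userInfo ?? [:])
    }
    center.addObserver(forName: .AVCaptureSessionInterruptionEnded, object: session, queue: .main) { _ in
        print("Interruption ended; preview should resume")
    }
}
```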
1 reply · 0 boosts · 711 views · Mar ’24
iPhone webRTC / getUserMedia only uses headset mic with low volume. Can we change mics?
Hi, hope all are well! We've been working on a live streaming app and it's going quite well; we just got the aspect ratio locked as desired. Now the audio: its volume is extremely low. It sounds like it's using the headset mic instead of the bottom mic that's used in FaceTime or speakerphone calls. We tried flipping cameras and specifying sample rates (almost every constraint in MediaConstraints) with no luck. Is there any way to specify which mic is used? Thanks in advance!
1 reply · 0 boosts · 628 views · Mar ’24
Problem: iOS camera fails to decode QR code vCard with a space in FullName
Dear Team, I am trying to add a contact from a QR code, but it seems the built-in QR code reader of the iPhone camera isn't able to correctly decode a FullName containing a space in the last name, e.g. "Collin A. Al Miller". I have attached screenshots for reference. Here are the examples:

1. When I focus the iPhone camera on the QR code containing the full name "Collin A. Al Miller", the scan gives an empty result without the full name. (Screenshots: CameraQRNotWorking, NotWorkingQRCOde)
2. When I remove the blank space, or add a comma or hyphen to the full name, it is recognized and works perfectly. (Screenshots: CameraQRCodeWorking, workingQRCODE)
3. Both full names work perfectly in Android QR camera scanners: "Collin A. Al-Miller" or "Collin A, Al Miller". (Screenshot: AndroidQRCODE)

Hope this issue will get resolved in an upcoming release. Kindly provide feedback related to this issue.

Code to generate the vCard:

```swift
var str = "BEGIN:VCARD \n" +
    "VERSION:2.1 \n" +
    "FN:\("Collin A. Al Miller") \n" +
    "TITLE:\("") \n"

if options.showPersonalPhone {
    str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
    str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
    str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
    str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
    str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
    str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
    str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
    str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
```
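One thing that stands out in the generator (an assumption, not a confirmed root cause): the vCard uses bare \n terminators with trailing spaces before them, while the vCard format expects CRLF line endings, and the card omits the N (structured name) property that Contacts uses to derive the name. A sketch with both changes applied:

```swift
// Hypothetical fix: CRLF terminators, no trailing spaces, and an explicit
// N (structured name) property alongside FN.
let lines = [
    "BEGIN:VCARD",
    "VERSION:3.0",
    "N:Al Miller;Collin;A.;;",
    "FN:Collin A. Al Miller",
    "item1.TEL;CELL:+91987654320",
    "item1.X-ABLabel:Mobile",
    "END:VCARD"
]
let vcard = lines.joined(separator: "\r\n")
```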
2 replies · 0 boosts · 460 views · Mar ’24
How do I disable video stabilization in an AVCaptureSession with AVCapturePhotoOutput added?
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but that preset gives me a 16:9 photo with the surroundings cropped, and the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add AVCapturePhotoOutput, I cannot turn off image stabilization.

```swift
self.capturePhotoOutput = AVCapturePhotoOutput()
self.captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print(error)
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}
```

As a temporary solution, I added only AVCaptureVideoDataOutput to the session, without AVCapturePhotoOutput, and I can capture in 4:3 format via captureOutput(_:didOutput:from:). However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

```swift
self.captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)

    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if captureSession?.canAddOutput(videoDataOutput!) == true {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled again:
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print(error)
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // nothing to do with this image buffer
    }
    // Try to get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    guard let cgImage = ciContext.createCGImage(ciImage, from: rect) else { return }
    let uiImage = UIImage(cgImage: cgImage)
}
```
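One avenue that might avoid the .photo preset entirely (hedged; not verified on-device, and the 4:3 format availability varies by hardware): select a 4:3 activeFormat manually so the session runs with the .inputPriority preset, keep AVCapturePhotoOutput attached, and switch stabilization off on its connection:

```swift
import AVFoundation

func configure4by3WithoutPhotoPreset(session: AVCaptureSession,
                                     device: AVCaptureDevice,
                                     photoOutput: AVCapturePhotoOutput) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Pick a 4:3 format near 4K (e.g. 4032x3024); availability is an assumption.
    if let format = device.formats.first(where: { f in
        let d = CMVideoFormatDescriptionGetDimensions(f.formatDescription)
        return d.width * 3 == d.height * 4 && d.width >= 3000
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format // session preset becomes .inputPriority
        device.unlockForConfiguration()
    }

    if let connection = photoOutput.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
}
```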
0 replies · 0 boosts · 641 views · Mar ’24
Use front camera with background application
Hello Community, I'm planning an app to correct a specific child behavior. For this to work, the app needs to run in the background and be triggered to use the front camera when a pre-defined app (YouTube, for example) is on screen. Snapshot photos would be taken every few seconds for image processing and then deleted. When the specific behavior is detected, the app would lower the device volume (and restore it once the behavior is "fixed"). The user's photos/data are deleted, and nothing is sent, saved, or shared. My main concern is that the app is always in the background and uses the camera frequently. I'm unsure whether that is possible/allowed, and if so, how stable it would be. Most importantly, I do not want this behavior to be flagged as suspicious when uploading the app to the App Store. Hope this is clear. I would appreciate any advice. Thanks, Avi
2 replies · 0 boosts · 656 views · Feb ’24
Recognize Objects (Humans) with Apple Vision Pro Simulator
As the title suggests: is it possible with the current Apple Vision Pro Simulator to recognize objects/humans, as is currently possible on the iPhone? I'm not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
1 reply · 0 boosts · 907 views · Feb ’24
Camera intrinsic matrix for single photo capture
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e., multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified, and those photos contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture. I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. I would also somehow need to figure out which buffer was temporally closest to when the actual photo was taken. Is there a better, simpler way to get the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix from the image's metadata?
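For completeness, a sketch of the video-data-output route mentioned above (delegate wiring assumed, function names hypothetical); the matrix arrives as a CFData attachment on each sample buffer:

```swift
import AVFoundation
import simd

func enableIntrinsics(on connection: AVCaptureConnection) {
    if connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Called from captureOutput(_:didOutput:from:); returns nil if the
// attachment is missing.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
```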
0 replies · 0 boosts · 781 views · Feb ’24
My iPhone 14 Pro Max urgently needs a 24MP mode
A 24-megapixel mode would be the most widely used for daily photography; the 48-megapixel mode is only useful for landscapes or occasional photos, since the files take up too much storage. The biggest problem with the 14 Pro Max now is that its photography falls short: its 12-megapixel output has long lagged behind Android phones. Adding a 24-megapixel mode would be far more valuable than an iOS update; it would directly double the experience.
0 replies · 0 boosts · 437 views · Feb ’24
The lack of a 24-megapixel mode is the iPhone 14 Pro Max's biggest shortcoming
The iPhone 14 Pro Max's 12-megapixel photos are sharpened so severely that they are hard to look at, while 48-megapixel photos take up too much space; the phone starts at 128 GB, which doesn't leave much room, especially once you also shoot video. The biggest problem at present is the lack of a 24-megapixel mode, which greatly affects the daily shooting experience. People around me who use the 14 Pro series say the same: the missing 24-megapixel mode is the biggest problem right now.
0 replies · 0 boosts · 396 views · Feb ’24