Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.


CMIO CameraExtension with DistributedNotificationCenter and Process.run
I generated the code for a CMIO CameraExtension from the Xcode target template, and it runs with FaceTime. I assume this kind of extension has a lot of security limitations. I'd like to run a command like "netstat" inside the extension. Is it possible to call Process.run()? I keep getting an error like "The file zsh doesn’t exist". The same code with Process.run() works in a macOS app. I'd also like to use DistributedNotificationCenter to send text from the app to the CameraExtension. Is that possible? I don't receive any messages in the CameraExtension. If there is any other IPC method between a macOS app and a CameraExtension, please let me know.
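One possible alternative, offered here only as a hedged sketch: Darwin notifications posted through CFNotificationCenterGetDarwinNotifyCenter() can cross process boundaries in situations where DistributedNotificationCenter is restricted, although they carry only a notification name and no payload, and whether they are deliverable inside a CMIO extension's sandbox is an assumption to verify. The notification name below is purely illustrative.

import Foundation

// App side: post a named Darwin notification (no userInfo payload is possible).
let darwinCenter = CFNotificationCenterGetDarwinNotifyCenter()
let pingName = "com.example.camera-extension.ping" as CFString  // hypothetical name

CFNotificationCenterPostNotification(darwinCenter,
                                     CFNotificationName(rawValue: pingName),
                                     nil, nil, true)

// Extension side: observe the same name.
CFNotificationCenterAddObserver(darwinCenter, nil, { _, _, _, _, _ in
    // React to the ping; any payload has to travel through another channel
    // (a file in an app group container, shared defaults, or XPC).
    print("received ping from app")
}, pingName, nil, .deliverImmediately)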
Replies: 2 · Boosts: 0 · Views: 713 · Dec ’23
Synchronized depth and video data not being received with builtInLiDARDepthCamera
Hello, I'm faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. Sometimes I'll get both data outputs for 4-5 frames and then it just stops. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app, so I'm wondering what I could be doing wrong. Here's the code for setting up the capture input (preferredDepthResolution is 1280 in my case). I'm running this on an iPad Pro (6th gen), iOS 17.0.3 (21A360), and I encounter the issue on an iPhone 13 Pro as well, iOS 17.0 (21A329).

private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }

    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }

    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }

    // Begin the device configuration.
    try device.lockForConfiguration()

    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat

    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration

    // Finish the device configuration.
    device.unlockForConfiguration()

    self.device = device

    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")

    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)

    guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }

    // Configure audio input - always configure audio even if isAudioEnabled is false.
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)

    deviceSystemPressureStateObservation = device.observe(\.systemPressureState, options: .new) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}

Here's how I'm setting up the outputs:

private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)

    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)

    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)

    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)

    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

The top part of my delegate implementation is as follows:

func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}

As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame. The console output is posted as a follow-up comment (because of the character limit); I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some runs it works perfectly, and in other runs I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime screen mirroring to see what the app is displaying, so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either. Any help is most appreciated! Thanks.
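A hedged diagnostic idea, not part of the original post: the synchronized data objects report whether a frame was deliberately dropped and why, which can help distinguish late or discarded frames from frames that were never produced at all. Something along these lines could be added inside the existing dataOutputSynchronizer(_:didOutput:) callback (names match the code above):

// Check whether the synchronizer is reporting dropped frames rather than missing ones.
if let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
   syncedDepth.depthDataWasDropped {
    print("depth frame dropped, reason: \(syncedDepth.droppedReason.rawValue)")
}
if let syncedVideo = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData,
   syncedVideo.sampleBufferWasDropped {
    print("video frame dropped, reason: \(syncedVideo.droppedReason.rawValue)")
}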
Replies: 2 · Boosts: 0 · Views: 704 · Dec ’23
Apple ProRAW image issue when editing
I am running Sonoma 14.1.1 and having an issue with ProRAW DNG images when choosing to edit. Whether I choose to edit in the Photos app or edit with Photoshop, once Edit is chosen the image becomes less clear and slightly fuzzy, with a loss of detail. This also happens when I export the photo from Photos as a DNG file and then edit it with Photoshop. There is a drastically noticeable difference in the quality of the image. This appears to be the way the image is handled in the Photos app itself; even if I save directly from the iPhone to Files, it does the same thing once on the Mac. Attached are some screenshot examples, but the difference is still clear to see.
Replies: 0 · Boosts: 0 · Views: 388 · Nov ’23
Request for Technical Specifications of iPhone SE (Model Number MMXN3ZD/A)
Dear Sir/Madam, I am reaching out as a developer working on an academic project. I am currently working on my bachelor's thesis, where I am developing a mobile application for iOS devices. For the success of this project, it is essential to have precise information about the hardware components of specific iPhone models, especially the iPhone SE with the model number MMXN3ZD/A and iOS version 17.1.1. My primary interest lies in the exact technical specifications of the LEDs and the CCD or CMOS image sensor (depending on which type is used) installed in the iPhone SE. For my project, it is crucial to understand the spectral properties of these components: LED Specifications: I require information about the spectra of the LEDs, especially which wavelengths of light they emit. This is relevant for the functionality of my app, which relies on photometric analyses. CCD/CMOS Sensor Specifications: Furthermore, it is important for me to know the wavelengths for which the sensor built into the device is sensitive. This information is critical to accurately interpret the interaction between the sensor and the illuminated environment. The results of my research and development will not only be significant for my academic work but could also provide valuable insights for the further development of iOS applications in my field of study. I would be very grateful if you could provide me with this information or direct me to the appropriate department or resource where I can obtain these specific technical details. Thank you in advance for your support and cooperation. Best regards,
Replies: 1 · Boosts: 0 · Views: 379 · Nov ’23
Performance issues with `AVAssetWriter`
Hey all! I'm trying to build a camera app that records video and audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter. When creating the recording session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why. This is how I create my AVAssetWriter:

let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)
let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)
assetWriter.startWriting()

There are two slow parts in that code:

1. The createAudioWriter(...) function takes ages! This is how I create the audio AVAssetWriterInput:

// audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
let format = audioInput.device.activeFormat.formatDescription
let audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: settings, sourceFormatHint: format)
audioWriter.expectsMediaDataInRealTime = true

The above takes up to 3000 ms on an iPhone 11 Pro! When I remove the recommended settings and just pass nil as outputSettings:

audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioWriter.expectsMediaDataInRealTime = true

...it initializes almost instantly, something like 30 to 50 ms.

2. Starting the AVAssetWriter takes ages! Calling this method:

assetWriter.startWriting()

...takes 3000 to 5000 ms on my iPhone 11 Pro!

Does anyone have any ideas why this is so slow? Am I doing something wrong? It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds of initialization time is not acceptable. Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here. I'd appreciate any help, thanks!
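As a hedged experiment (an assumption, not a confirmed fix): constructing the audio input with an explicit AAC settings dictionary avoids the recommendedAudioSettingsForAssetWriter(writingTo:) query entirely, which makes it possible to tell whether that query, rather than AVAssetWriterInput itself, is the slow path. The sample rate, channel count, and bit rate below are illustrative values.

import AVFoundation

// Explicit AAC settings used in place of the recommended-settings query.
let manualAudioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 96_000
]

let manualAudioWriter = AVAssetWriterInput(mediaType: .audio,
                                           outputSettings: manualAudioSettings,
                                           sourceFormatHint: nil)  // format hint omitted in this sketch
manualAudioWriter.expectsMediaDataInRealTime = true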
Replies: 2 · Boosts: 1 · Views: 956 · Nov ’23
iPhone RAW photos are very dark when imported into third-party software
I use the official API to output a RAW file. When I transfer it to my Mac it is very dark, but if I shoot with the built-in iOS Camera app it is brighter. This is how I save the ProRAW file to the photo library:

let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photo, data: photo.compressedData, options: nil)

// Save the RAW (DNG) file as an alternate resource for the Photos asset.
let options = PHAssetResourceCreationOptions()
// options.shouldMoveFile = true
creationRequest.addResource(with: .alternatePhoto, fileURL: photo.rawFileURL, options: options)
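A hedged diagnostic sketch, not a confirmed explanation: developing the same DNG through Core Image's default RAW pipeline shows what a neutral rendering of the captured data looks like, which can help establish whether the darkness is in the file itself or in how the third-party software develops it. The file URL below is hypothetical.

import CoreImage

// Develop the DNG with Core Image's default RAW pipeline for comparison.
let rawURL = URL(fileURLWithPath: "/path/to/capture.dng")  // hypothetical path
if let rawFilter = CIRAWFilter(imageURL: rawURL),
   let developed = rawFilter.outputImage {
    let context = CIContext()
    if let cgImage = context.createCGImage(developed, from: developed.extent) {
        // Inspect or export cgImage and compare its brightness with the
        // third-party application's rendering of the same file.
        print("Developed RAW image: \(cgImage.width)x\(cgImage.height)")
    }
}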
Replies: 0 · Boosts: 0 · Views: 317 · Nov ’23
AVAssetWriter
As of iOS 17.x, a .mov (H.264/ALAC) recording has a problem where the track length doesn't equal the container length. So if the audio/video tracks are 10.00 seconds long, the fullVideo.mov could be 10.06 seconds long. This does not happen with any version previous to iOS 17. Is anyone else experiencing this, or does anyone have advice?
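A hedged measurement sketch (the function name and URL handling are illustrative): printing the container duration next to each track's time range makes the reported 10.00 vs. 10.06 discrepancy easy to confirm for any given file.

import AVFoundation

// Compare the container (asset) duration against each track's duration.
func logDurations(for url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let assetDuration = try await asset.load(.duration)
    print("container duration: \(assetDuration.seconds)")

    for track in try await asset.load(.tracks) {
        let range = try await track.load(.timeRange)
        print("\(track.mediaType.rawValue) track: start \(range.start.seconds), duration \(range.duration.seconds)")
    }
}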
Replies: 0 · Boosts: 0 · Views: 317 · Nov ’23
Request for Access to Apple Photos API
Hello, I'm currently investigating the possibility of accessing my photos stored on my iCloud via a dedicated API, in order to create a photo portfolio. However, after extensive research, I haven't found any documentation or public API allowing such access. I wonder if there are any future plans to make such an API available to third-party developers. I would be grateful if you could provide me with information regarding the possibility of accessing an API for Apple Photos or any other solution you might suggest. Thank you for your attention and assistance. Yours sincerely Owen
Replies: 0 · Boosts: 0 · Views: 656 · Nov ’23
Accessing "From my mac" in PhotoKit
Is it possible to access "From My Mac" photos/PHAssetCollections through PhotoKit on iOS? "From My Mac" photos/videos are media synced from a Mac when iCloud Photos is turned off on the iOS device, like we did in the old days before iCloud Photos. I have set up an iOS device with "From My Mac" albums present in Photos.app, but in my own app I don't seem to be able to access those collections/photos through PhotoKit using any of the defined PHAssetCollectionTypes. Are these directly synced photos simply not available through PhotoKit, so you would have to revert to the deprecated ALAssetsLibrary?
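A hedged enumeration sketch, not a confirmed answer: dumping every collection PhotoKit returns for the .album and .smartAlbum types with subtype .any at least shows whether the synced albums surface anywhere before concluding they are unavailable.

import Photos

// Enumerate collection types and print what PhotoKit exposes, to see whether
// the synced ("From My Mac") albums appear under any of them.
let collectionTypes: [PHAssetCollectionType] = [.album, .smartAlbum]
for type in collectionTypes {
    let collections = PHAssetCollection.fetchAssetCollections(with: type, subtype: .any, options: nil)
    collections.enumerateObjects { collection, _, _ in
        let assetCount = PHAsset.fetchAssets(in: collection, options: nil).count
        print("type \(type.rawValue), subtype \(collection.assetCollectionSubtype.rawValue): \(collection.localizedTitle ?? "untitled") - \(assetCount) assets")
    }
}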
Replies: 4 · Boosts: 0 · Views: 752 · Oct ’23
API for taking panoramas & stitching, iOS 14 style?
A few versions of iOS ago, the stitching algorithm for panoramas was updated, which produces results that in my opinion look less good for what I'm using the panoramas for. I was exploring developing a custom panorama app but couldn't find the API for taking panoramic photos, much less stitching them. Is there an API in AVFoundation or elsewhere to use for capturing a panoramic photo and stitching it?
Replies: 1 · Boosts: 0 · Views: 663 · Nov ’23
iOS 16: localIdentifier of PHAsset gets changed after saving to camera roll
Environment: iOS 16 beta 2, beta 3; iPhone 11 Pro, 12 mini.
Steps to reproduce:
1. Subscribe to photo library changes via PHPhotoLibraryChangeObserver and add some logs to track inserted/deleted objects:

func photoLibraryDidChange(_ changeInstance: PHChange) {
    if let changeDetails = changeInstance.changeDetails(for: allPhotosFetchResult) {
        for insertion in changeDetails.insertedObjects {
            print("🥶 INSERTED: ", insertion.localIdentifier)
        }
        for deletion in changeDetails.removedObjects {
            print("🥶 DELETED: ", deletion.localIdentifier)
        }
    }
}

2. Save a photo to the camera roll with PHAssetCreationRequest.
3. Go to the photo library and delete the newly saved photo.
4. Come back to the app and watch the logs:

🥶 INSERTED:  903933C3-7B83-4212-8DF1-37C2AD3A923D/L0/001
🥶 DELETED:  39F673E7-C5AC-422C-8BAA-1BF865120BBF/L0/001

Expected result: the localIdentifier of the saved and then deleted asset is the same string in both logs. In fact, it is different. So it appears that either the localIdentifier of an asset gets changed after a successful save, or this is a bug in the Photos framework in iOS 16. I've checked: in iOS 15 it works fine (the IDs in the logs match).
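A hedged comparison sketch (photoURL and the surrounding context are assumptions): logging the placeholder's localIdentifier at creation time and immediately re-fetching by that identifier shows whether the identifier later reported by the change observer really differs from the one handed out at save time.

import Photos

// Capture the identifier handed out at creation time and compare it with
// what a later fetch reports for the same asset.
var placeholderIdentifier: String?

PHPhotoLibrary.shared().performChanges({
    let request = PHAssetCreationRequest.forAsset()
    request.addResource(with: .photo, fileURL: photoURL, options: nil)  // photoURL is hypothetical
    placeholderIdentifier = request.placeholderForCreatedAsset?.localIdentifier
}, completionHandler: { success, _ in
    guard success, let identifier = placeholderIdentifier else { return }
    let fetched = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
    print("placeholder id: \(identifier), fetch found \(fetched.count) asset(s)")
})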
Replies: 2 · Boosts: 1 · Views: 1.6k · Jul ’22
iOS flashlight specification
We are making an iOS application for disaster prevention in Japan, because Japan has recently had many disasters in every season. We supplied it to local governments in Japan, the general public is using it, and it is working well. One local government has asked us to also support deaf people and help them when a disaster occurs. The local government already has a disaster-prevention broadcast, but it uses loudspeakers, so when it rains heavily the general public cannot hear it. Our application receives the disaster-prevention broadcast and forwards it to iOS smartphones, which is very helpful to the general public. We are now making a new version that delivers not only the broadcast text but also lights the iPhone flashlight. After building it, however, the flashlight only lights while our application is open, as in the link below. A deaf person using it may therefore not notice an alert. Our application already has a vibration function, but if a deaf person puts the smartphone in a bag, I think they would never notice a heavy-rain or tsunami warning.
Here is our application: https://apps.apple.com/jp/app/cosmocast/id1247774270?mt=8
Here is the rule for the flashlight: https://stackoverflow.com/questions/32136442/iphone-flashlight-not-working-while-app-is-in-background/32137434
I want to help deaf people and also senior citizens during heavy rain and tsunami with our application. We'd like to turn on the flashlight at the same time a push notification arrives. Does anyone know a good way? Thank you and best regards. Tomita.
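A hedged sketch of the torch call itself, with the caveat the Stack Overflow link describes: the camera hardware is generally only available to an app in the foreground, so a push notification arriving while the app is suspended normally cannot switch the torch on by itself.

import AVFoundation

// Turn the torch on at full strength; works only while the app can access the camera.
func turnOnTorch() {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        try device.setTorchModeOn(level: 1.0)
        device.unlockForConfiguration()
    } catch {
        print("Torch could not be enabled: \(error)")
    }
}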
Replies: 1 · Boosts: 0 · Views: 799 · Oct ’20
How to get ProRAW image output with 1:1 or 16:9 aspect ratio
Now I use the AVFoundation framework to get the photo output, but the image aspect ratio is 4:3. According to the Camera app on the iPhone 13 Pro, there are several image aspect ratios (4:3, 16:9, and 1:1) when taking a ProRAW image. So how can I get a 1:1 or 16:9 aspect ratio ProRAW image? After doing some research, I find that no matter which camera you use on the iPhone 11, 12, 13, 14, 15 or Pro models, the resulting image is always 4:3, and the 1:1 and 16:9 results come from cropping the 4:3 image. If that is true, how can I crop the ProRAW file without any data loss? My development environment: iPhone 13 Pro, iOS 16.7, Xcode 14.3.1. This is the session configuration code for the camera device:

session.beginConfiguration()

/*
 Do not create an AVCaptureMovieFileOutput when setting up the session because
 Live Photo is not supported when AVCaptureMovieFileOutput is added to the session.
*/
session.sessionPreset = .photo

// Add video input.
do {
    var defaultVideoDevice: AVCaptureDevice?
    if let backCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
        // If a rear dual camera is not available, default to the rear wide angle camera.
        defaultVideoDevice = backCameraDevice
    } else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
        // If the rear wide angle camera isn't available, default to the front wide angle camera.
        defaultVideoDevice = frontCameraDevice
    }
    guard let videoDevice = defaultVideoDevice else {
        print("Default video device is unavailable.")
        setupResult = .configurationFailed
        session.commitConfiguration()
        return
    }
    let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
    if session.canAddInput(videoDeviceInput) {
        session.addInput(videoDeviceInput)
        self.videoDeviceInput = videoDeviceInput
    } else {
        print("Couldn't add video device input to the session.")
        setupResult = .configurationFailed
        session.commitConfiguration()
        return
    }
} catch {
    print("Couldn't create video device input: \(error)")
    setupResult = .configurationFailed
    session.commitConfiguration()
    return
}

// Check the lens list.
let camerasOptions = videoDeviceDiscoverySession.devices
var availableCameras: [AVCaptureDevice.DeviceType] = []
if camerasOptions.isEmpty {
    print("no camera devices")
} else {
    for camera in camerasOptions {
        if camera.deviceType == .builtInUltraWideCamera || camera.deviceType == .builtInWideAngleCamera || camera.deviceType == .builtInTelephotoCamera {
            if !availableCameras.contains(camera.deviceType) {
                availableCameras.append(camera.deviceType)
            }
        }
    }
}
DispatchQueue.main.async {
    self.lensList = availableCameras
}

// Add the photo output.
if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
    photoOutput.isHighResolutionCaptureEnabled = true
    photoOutput.maxPhotoQualityPrioritization = .quality
    print(photoOutput.isAppleProRAWSupported)
    // Use the Apple ProRAW format when the environment supports it.
    photoOutput.isAppleProRAWEnabled = photoOutput.isAppleProRAWSupported
    DispatchQueue.main.async {
        self.isSupportAppleProRaw = self.photoOutput.isAppleProRAWSupported
    }
} else {
    print("Could not add photo output to the session")
    setupResult = .configurationFailed
    session.commitConfiguration()
    return
}

session.commitConfiguration()
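A hedged sketch under the assumption stated in the post (the sensor output is 4:3 and the other ratios are crops): one lossless-on-disk approach is to keep the full 4:3 DNG untouched and render the 16:9 or 1:1 framing only on output. The function below assumes a landscape-oriented capture; for 1:1 the same idea applies with the shorter side as the square dimension.

import CoreImage

// Render the full 4:3 ProRAW DNG and produce a 16:9 center crop on output,
// keeping the original DNG unchanged on disk.
func renderCenterCrop(dngURL: URL, targetAspect: CGFloat = 16.0 / 9.0) -> CIImage? {
    guard let rawFilter = CIRAWFilter(imageURL: dngURL),
          let fullImage = rawFilter.outputImage else { return nil }

    let extent = fullImage.extent
    let cropHeight = extent.width / targetAspect
    let cropRect = CGRect(x: extent.minX,
                          y: extent.midY - cropHeight / 2,
                          width: extent.width,
                          height: cropHeight)
    return fullImage.cropped(to: cropRect)
}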
Replies: 1 · Boosts: 0 · Views: 618 · Oct ’23
Photos framework and reusable UICollectionViewCell
I have a UICollectionView that displays photos from a device's album with the Photos framework. The photos are correctly displayed, but if I scroll fast (like when you tap at the top of the screen to jump to the top of the collection view), some photos end up at the wrong indexPath. I just need to scroll a bit to move the wrong photo off screen and everything goes back in place. I clean the cell during prepareForReuse by cancelling the current request. I presume it's a problem with the asynchronous request of PHImageManager, but I don't know how to avoid it, and I want to keep the asynchronous request so the collection view stays smooth. Here is some code.

View controller:

extension AlbumDetailViewController: UICollectionViewDataSource {
    func numberOfSectionsInCollectionView(collectionView: UICollectionView) -> Int {
        return 1
    }

    func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return photoList.count
    }

    func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCellWithReuseIdentifier("PhotoCell", forIndexPath: indexPath) as! PhotoCollectionCell
        cell.setImage(photoList.objectAtIndex(indexPath.row) as! PHAsset)
        return cell
    }
}

Custom collection view cell:

class PhotoCollectionCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!

    var requestId: PHImageRequestID!
    let manager = PHImageManager.defaultManager()

    override func awakeFromNib() {
        super.awakeFromNib()
        // Initialization code
    }

    override func prepareForReuse() {
        self.imageView.image = nil
        manager.cancelImageRequest(self.requestId)
    }

    func setImage(asset: PHAsset) {
        let option = PHImageRequestOptions()
        option.resizeMode = .Fast
        option.deliveryMode = .HighQualityFormat
        self.requestId = manager.requestImageForAsset(asset, targetSize: CGSize(width: self.frame.size.width * UIScreen.mainScreen().scale, height: self.frame.size.height * UIScreen.mainScreen().scale), contentMode: PHImageContentMode.Default, options: option, resultHandler: { (result, info) -> Void in
            self.imageView.image = result
        })
    }
}

Thank you
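A hedged sketch of the usual fix, written in modern Swift rather than the Swift 2 syntax of the post, with illustrative names: remember which asset the cell currently represents and ignore any asynchronous result that arrives for an earlier request.

import Photos
import UIKit

final class AssetThumbnailCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!

    private var representedAssetIdentifier: String?
    private var requestId: PHImageRequestID?
    private let manager = PHCachingImageManager()

    override func prepareForReuse() {
        super.prepareForReuse()
        imageView.image = nil
        if let requestId = requestId { manager.cancelImageRequest(requestId) }
        representedAssetIdentifier = nil
    }

    func setImage(_ asset: PHAsset, targetSize: CGSize) {
        representedAssetIdentifier = asset.localIdentifier
        requestId = manager.requestImage(for: asset,
                                         targetSize: targetSize,
                                         contentMode: .aspectFill,
                                         options: nil) { [weak self] image, _ in
            // Apply the image only if the cell still represents the asset it was requested for.
            guard let self = self, self.representedAssetIdentifier == asset.localIdentifier else { return }
            self.imageView.image = image
        }
    }
}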
Replies: 1 · Boosts: 0 · Views: 1.4k · Feb ’16
AVCaptureMultiCamSession and unstable nominalFrameRate of videoAssetTracks
Why, when I record and mix video from two cameras simultaneously using the AVMultiCamPiP sample app as a guide, do the nominalFrameRates of the video asset tracks I record not have a fixed value (e.g. 30) but instead float between 20 and 30, even though the active format I load on both AVCaptureDevices supports it (e.g. 'vide'/'420v' 1920x1080, { 1-30 fps }, ...) and I set the min and max frame duration to 30 fps (1, 30)?
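One hedged thing to check, based on documented AVCaptureDevice behavior rather than a confirmed diagnosis: assigning activeFormat resets the frame-duration settings to the format's defaults, so the 30 fps pin has to be applied after the format is set, inside the same configuration lock. Under multi-cam load the system can also throttle capture below the pinned rate when system pressure rises, which would show up as a lower nominal frame rate on the recorded track.

import AVFoundation

// Set the format first, then pin the frame duration in the same lock;
// setting activeFormat resets activeVideoMin/MaxFrameDuration.
func pinFrameRate(of device: AVCaptureDevice, to fps: Int32, using format: AVCaptureDevice.Format) throws {
    try device.lockForConfiguration()
    device.activeFormat = format
    let frameDuration = CMTime(value: 1, timescale: fps)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
    device.unlockForConfiguration()
}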
Replies: 0 · Boosts: 0 · Views: 380 · Oct ’23
iPhone 14 Pro blurry image issue
We are developing an image classification iOS app that uses a TensorFlow model. The prediction quality of the TensorFlow model depends on the quality of the captured image. We are facing an issue with iPhone 14 Pro models: captured images are blurry. According to this link, https://www.pcmag.com/news/apple-promises-fix-for-iphone-14-pro-camera-shake-bug, the blur issue was fixed by Apple, but we are still facing it on iPhone 14 Pro. Our iOS version is above 16.2. We also see this issue in WhatsApp (and other third-party applications as well). Is there any official documentation on how to open the camera on iPhone 14 Pro programmatically? Note: when we use the Apple Camera app this issue doesn't exist. Is this a hardware or software issue? Is it fixed on iPhone 15? Can you please guide us on this issue? Thanks, Jay
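A hedged guess at one contributing factor, not a confirmed cause: the iPhone 14 Pro's main camera has a longer minimum focus distance than earlier models, so close subjects blur unless a virtual device such as .builtInDualWideCamera is selected and allowed to switch to the ultra wide lens, which is what the built-in Camera app does automatically. The device selection and logging below are illustrative.

import AVFoundation

// Prefer a virtual multi-camera device so the system can switch lenses for close subjects;
// fall back to the plain wide-angle camera if it is unavailable.
let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

if let captureDevice = captureDevice {
    // minimumFocusDistance is reported in millimetres.
    print("Using \(captureDevice.localizedName), minimum focus distance: \(captureDevice.minimumFocusDistance) mm")
}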
Replies: 0 · Boosts: 0 · Views: 407 · Oct ’23