Hello everyone, with the release of Apple's new Final Cut Camera app, we can see that it overlays a focus peaking indicator on the camera feed, highlighting the areas that are in focus.
We have had a contrast-based autofocus system for some time via AVCaptureDevice.Format.AutoFocusSystem.contrastDetection, but I haven't found a way to actually present the high-contrast areas to the user.
Given that Apple now natively has such an algorithm for the Final Cut Camera App, I wonder if we devs now also get access to this. If not, does anybody know of implementations of focus peaking out there?
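For reference, the usual community approximation is an edge-detection pass over each frame, composited as a tinted overlay. Below is a minimal sketch using built-in Core Image filters; this is not Apple's native algorithm, and the intensity and threshold values are guesses you would need to tune:

import CoreImage
import CoreImage.CIFilterBuiltins

// Minimal focus-peaking approximation: sharp, in-focus regions
// produce strong edge responses, so we highlight them in red.
func focusPeakingOverlay(for frame: CIImage) -> CIImage {
    // 1. Edge response over the frame.
    let edges = CIFilter.edges()
    edges.inputImage = frame
    edges.intensity = 4.0            // tunable

    // 2. Keep only strong edges.
    let threshold = CIFilter.colorThreshold()
    threshold.inputImage = edges.outputImage
    threshold.threshold = 0.2        // tunable

    // 3. Blend a red highlight over the frame using the edge mask.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = CIImage(color: .red).cropped(to: frame.extent)
    blend.backgroundImage = frame
    blend.maskImage = threshold.outputImage
    return blend.outputImage ?? frame
}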
Thanks and with best regards
I have an app that uses an AVCaptureMultiCamSession whose devices are the builtInUltraWideCamera and builtInLiDARDepthCamera cameras. Occasionally, when outside, I get some frame drops due to discontinuity that end in the media services being reset:
[06-24 11:27:13][CameraSession] Capture session runtime error: related decl 'e' for AVError(_nsError: Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.})
This runtime error notification is always preceded by 4-5 frame drops:
[06-24 11:27:10][CaptureSession] Dropped frame because Discontinuity
Logging the system temperature shows:
[06-24 11:27:10][CaptureSession] Temperature is 'Fair'
I suspect that the frame discontinuity is being caused by the whiteBalanceMode of the capture session; perhaps the algorithm requires 5 recent frames to work. I had a similar problem with the LiDAR depth camera, where with filtering enabled exactly 5 frame drops would make the media services reset.
When the whiteBalanceMode is locked I do slightly better, with 10 frame drops before the media services are reset.
Is there any logging utility to determine the actual reason? All of these sample buffers come with no info attachment, only the not-so-useful "Dropped frame because Discontinuity." Any ideas for solving this would be helpful as well. Maybe tuning the camera to work better with quickly varying lighting conditions?
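For what it's worth, the only place I know to look is the attachments on the dropped buffer itself. A sketch of checking both CoreMedia drop-reason keys in the didDrop delegate callback (which, as noted above, may still come back empty):

import AVFoundation
import CoreMedia

// Inspect the drop-reason attachments on a dropped sample buffer.
func captureOutput(_ output: AVCaptureOutput,
                   didDrop sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    let reason = CMGetAttachment(sampleBuffer,
                                 key: kCMSampleBufferAttachmentKey_DroppedFrameReason,
                                 attachmentModeOut: nil)
    let info = CMGetAttachment(sampleBuffer,
                               key: kCMSampleBufferAttachmentKey_DroppedFrameReasonInfo,
                               attachmentModeOut: nil)
    print("Dropped frame, reason: \(String(describing: reason)), info: \(String(describing: info))")
}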
I have downloaded the beta update to iOS 18, but the Clean Up option for photos is not present.
I updated to the iOS 18 developer beta yesterday, and since then all of my 4K photos are displaying as 1080p when I look at them in the Photos app. I need help; this is very annoying. I've spent multiple hours trying to figure it out.
I have the following code:
extension AssetGridViewController: PHPhotoLibraryChangeObserver {
    func photoLibraryDidChange(_ changeInstance: PHChange) {
        Task { @MainActor in
            guard let changes = changeInstance.changeDetails(for: fetchResult) else { return }
            fetchResult = changes.fetchResultAfterChanges
        }
    }
}
With Swift 6, this generates a compilation error: Main actor-isolated instance method 'photoLibraryDidChange' cannot be used to satisfy nonisolated protocol requirement. The error includes two fix-it suggestions:
Adding nonisolated to the function (nonisolated func photoLibraryDidChange(_ changeInstance: PHChange))
Adding @preconcurrency to the protocol conformance (extension AssetGridViewController: @preconcurrency PHPhotoLibraryChangeObserver {)
Both options generate a runtime error: EXC_BREAKPOINT (code=1, subcode=0x105b7c400). For context, AssetGridViewController is a regular UIViewController.
Any ideas on how to fix this?
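For concreteness, here is what the first fix-it variant looks like after I apply it; this is the version that hits the EXC_BREAKPOINT at runtime (Photos delivers this callback on a private background queue, which is why I hop to the main actor inside):

extension AssetGridViewController: PHPhotoLibraryChangeObserver {
    // The protocol requirement is satisfied nonisolated; main-actor
    // state (fetchResult) is touched only inside the @MainActor Task.
    nonisolated func photoLibraryDidChange(_ changeInstance: PHChange) {
        Task { @MainActor in
            guard let changes = changeInstance.changeDetails(for: fetchResult) else { return }
            fetchResult = changes.fetchResultAfterChanges
        }
    }
}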
My app has encountered many watchdog issues on iOS 17, with stack traces as follows:
Attributed: Call stack 0:
mach_msg2_trap (in libsystem_kernel.dylib) + 7
mach_msg2_internal (in libsystem_kernel.dylib) + 79
mach_msg_overwrite (in libsystem_kernel.dylib) + 435
mach_msg (in libsystem_kernel.dylib) + 23
_dispatch_mach_send_and_wait_for_reply (in libdispatch.dylib) + 543
dispatch_mach_send_with_result_and_wait_for_reply (in libdispatch.dylib) + 59
xpc_connection_send_message_with_reply_sync (in libxpc.dylib) + 263
FigXPCConnectionSendSyncMessageCreatingReply (in CoreMedia) + 291
FigXPCRemoteClientSendSyncMessageCreatingReply (in CoreMedia) + 47
FigCaptureSessionRemoteCreate (in CMCapture) + 131
-[AVCaptureSession _createFigCaptureSession] (in AVFCapture) + 123
-[AVCaptureSession _initWithMediaEnvironment:] (in AVFCapture) + 619
-[AVCaptureSession init] (in AVFCapture) + 415
We also have many iOS 16 users, but have never encountered a watchdog issue with the AVCaptureSession init method in iOS 16. Is there any change in iOS 17 that could have caused this? How can I avoid this issue?
The complete stack trace is attached
avfoundation-watchdog.txt
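One mitigation that is often suggested (a sketch, not a confirmed fix for whatever changed in iOS 17): keep AVCaptureSession creation and startRunning() off the main thread, since both perform synchronous XPC to the capture daemon and can trip the watchdog when that daemon responds slowly. The queue label below is hypothetical:

import AVFoundation

final class CameraController {
    // Dedicated serial queue for all session work.
    private let sessionQueue = DispatchQueue(label: "com.example.camera.session")
    private var session: AVCaptureSession?

    func start() {
        sessionQueue.async {
            let session = AVCaptureSession()   // synchronous XPC under the hood
            // ... add inputs/outputs between beginConfiguration()/commitConfiguration() ...
            session.startRunning()             // blocking; also keep off the main thread
            self.session = session
        }
    }
}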
Hey, I'm building a portrait mode into my camera app, but I'm having trouble matching the quality of Apple's native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream, which works quite well, but the definition of the object in focus looks quite bad in some scenarios. See the comparison below with Apple's UI + depth data.
What I don't quite understand is how Apple is able to do such a good cutout around my hand, assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of the background, and this shows in the blur preview - but Apple gets around this.
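For reference, this is roughly the pipeline I'm describing (a sketch; the depth-to-mask conversion from AVDepthData is elided, and the radius is just a tuning value):

import CoreImage

// Blur the frame with a radius modulated by a depth-derived mask;
// whiter mask pixels receive more blur.
func portraitBlur(frame: CIImage, depthMask: CIImage, radius: Float = 12) -> CIImage {
    let blur = CIFilter(name: "CIMaskedVariableBlur")!
    blur.setValue(frame, forKey: kCIInputImageKey)
    blur.setValue(depthMask, forKey: "inputMask")
    blur.setValue(radius, forKey: kCIInputRadiusKey)
    return blur.outputImage ?? frame
}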
Does anyone have any ideas?
Thanks!
Hi,
I'm looking to update the metadata properties of a DNG image stored on disk, saving to a new file.
Using ImageIO's CGImageSource and CGImageDestination classes, I run into a problem whereby the destination doesn't support the type of the source. For example:
let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
if let cgImageSource = CGImageSourceCreateWithURL(sourceURL as CFURL, imageSourceOptions),
   let type = CGImageSourceGetType(cgImageSource) {
    guard let imageDestination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil) else {
        fatalError("Unable to create image destination")
    }
    // Code to update properties and write out to destination url
}
When this code is executed I get the following errors on the command line when trying to create the destination:
2024-06-30 11:52:25.531530+0100 ABC[7564:273101] [ABC] findWriterForTypeAndAlternateType:119: *** ERROR: unsupported output file format 'com.adobe.raw-image'
2024-06-30 11:52:25.531661+0100 ABC[7564:273101] [ABC] CGImageDestinationCreateWithURL:4429: *** ERROR: CGImageDestinationCreateWithURL: failed to create 'CGImageDestinationRef'
I don't see a way to create a destination directly from a source. The code works for, say, a JPEG file, but I want it to work with any image format that CGImageSource can read.
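For formats ImageIO can actually write (the error above suggests DNG, 'com.adobe.raw-image', is not one of them), CGImageDestinationCopyImageSource is the usual way to rewrite metadata without re-encoding pixels. A sketch, where `metadata` is assumed to be the updated CGImageMetadata you've built:

import Foundation
import ImageIO

func rewriteMetadata(sourceURL: URL, destinationURL: URL, metadata: CGImageMetadata) throws {
    guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil)
    else { throw CocoaError(.fileWriteUnknown) }

    // Copies the image data losslessly and merges in the updated metadata.
    let options = [kCGImageDestinationMetadata: metadata,
                   kCGImageDestinationMergeMetadata: true] as CFDictionary
    var error: Unmanaged<CFError>?
    guard CGImageDestinationCopyImageSource(destination, source, options, &error) else {
        throw error!.takeRetainedValue()
    }
}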
Hi everyone,
I’m encountering an issue with how iPhone displays contact information from a vCard QR code in the contact preview. When I scan the QR code with my iPhone camera, the contact preview shows the email address between the name and the contact image, instead of displaying the organization name.
Here’s the structure of the vCard I’m using:
BEGIN:VCARD
VERSION:3.0
FN:Ahmad Rana
N:Rana;Ahmad;;;
ORG:Company 3
TEL;TYPE=voice,msg:+1234567890
EMAIL:a(at the rate)gmail.com
URL:https://example.com
IMPP:facebook:fb
END:VCARD
What I Expect:
When I scan it with the camera, in the contact preview (before creating the contact) I want the organization name to appear between the name and the image, but I get the email address instead of the organization name. If only the organization is passed, it displays correctly, but when I also pass an email, the email is displayed in between.
Steps I’ve Taken:
Verified the vCard structure to ensure it follows the standard format.
Reordered the fields in the vCard to prioritize the organization name and job title.
Tested with a simplified vCard containing only the name, organization, and email.
Despite these efforts, the email address continues to be displayed in the contact preview between the name and the contact image, while the organization name is not shown as expected.
Question:
How can I ensure that the organization name is displayed correctly in the contact preview on iPhone when scanning a QR code? Are there specific rules or best practices for field prioritization in vCards that I might be missing?
I would appreciate any insights or suggestions on how to resolve this issue.
Thank you!
The following metadata should be accessible from the PHAsset object: title, caption, keywords. Right now they are available in the SQLite photo library database, but that is not a clean solution.
NSString *filePath = @"/var/mobile/Media/DCIM/100APPLE/IMG_0800.MP4";
NSURL *fileURL = [NSURL fileURLWithPath:filePath];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:fileURL options:nil];
Before iOS 18, you could access an AVAsset using the method mentioned above, but starting from the iOS 18 beta version the following error appears (the Chinese localized description reads: "The file 'IMG_0800.MP4' couldn't be opened because you don't have permission to view it."):
Error Domain=NSCocoaErrorDomain Code=257 "未能打开文件“IMG_0800.MP4”,因为你没有查看它的权限。" UserInfo={NSURL=file:///var/mobile/Media/DCIM/100APPLE/IMG_0800.MP4, AVErrorFailedDependenciesKey=(
"assetProperty_AssetType"
), NSUnderlyingError=0x30c497f60 {Error Domain=NSOSStatusErrorDomain Code=-12203 "(null)"}}
Hi,
I would like to ask your advice about our iOS app, which has a problem with the iPhone 14 Pro and iPhone 15 Pro model cameras focusing on close objects.
The game is made with Unity, and the idea is that kids can scan random QR codes and barcodes to catch monsters. On the non-Pro iPhone models the camera focuses and reads the barcodes well, but on the new models with the triple back-camera system our app does not focus up close, which makes barcode reading hard.
It seems that the iPhone Pro models do not switch to the close-focusing lens in a third-party app the way they do in the built-in Camera app.
Do you have any advice on how to solve the focusing problem?
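In case it helps the discussion, one commonly suggested approach (a sketch, not a confirmed fix) is to select a virtual device such as .builtInTripleCamera, so AVFoundation can switch to the ultra-wide lens for close focus automatically instead of staying locked to the wide-angle camera:

import AVFoundation

// Prefer a virtual device so the system can switch lenses for close focus.
func makeBackCameraForCloseFocus() -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)
    return discovery.devices.first   // ordered by deviceTypes preference
}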
I appreciate your help!
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file into a Live Photo. I am using the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to generate the Live Photo and save it to the gallery.
Here is the relevant code and the errors I am encountering:
Console prints:
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo
I have verified that the app has the necessary permissions to access the Photo Library.
The JPEG and MOV files are successfully downloaded and can be displayed in the app.
The issue seems to occur when generating the Live Photo from the downloaded files.
struct WallpaperDetailView: View {
    var wallpaper: Wallpaper
    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                    ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
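One thing worth double-checking (an assumption on my part, not something the logs prove): the video handed to LivePhoto.generate is the raw URLSession temp file (CFNetworkDownload_7rW5ny.tmp), which has no .mov extension. Moving it to a properly named file first is a cheap experiment:

private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
    URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            completion(nil)
            return
        }
        // Give the download a .mov name; some pairing code keys off the extension.
        let movURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("mov")
        do {
            try FileManager.default.moveItem(at: tempURL, to: movURL)
            completion(movURL)
        } catch {
            completion(nil)
        }
    }.resume()
}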
Has anyone implemented rich notifications recently? I tried to implement them, but it's not working. I have gone through the blogs and tutorials available on the internet, but they are not working either. Please let me know if there is someone who can help me with that.
I was taking some photos of the Milky Way while using a tripod, and the stars seemed not to be in focus. I tried plenty of times to get the stars in focus. I switched to other apps that have a manual focus and the problem was resolved; however, the results weren't as clear as the standard app would have been. I then tried again with the standard app while focusing on an outdoor camera on our house, and the camera would still not focus no matter how much I tried. I'm not sure if it is a bug in the iOS 18 developer beta. Either way, I figured it was best to post what I had noticed. Apple, please add a manual focus option.
I am building a LockedCameraCaptureExtension for my app. The example code works as expected. I am now in the process of porting my existing camera UI to the LockedCameraCaptureExtension.
Currently, my custom UI is shown for a split second when I tap the control to open the LockedCameraCaptureExtension, but it quickly exits back to the Lock Screen. I assume my ported UI is doing something wrong.
What is the recommended way to debug a LockedCameraCaptureExtension? I can't get my breakpoints to hit, and os_log does not seem to log anything to Console.
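For concreteness, even plain Logger calls with an explicit subsystem to filter on in Console.app never show up for me (the subsystem and category strings below are placeholders):

import os

// Placeholder subsystem/category; the point is that even filtered
// logging like this does not appear in Console.app from the extension.
let logger = Logger(subsystem: "com.example.myapp.capture", category: "LockedCameraCapture")

func logLaunch() {
    logger.log("LockedCameraCaptureExtension UI appeared")
}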
The system Photos app on iPhone has a feature where you long-press a photo to recognize and lift the subject. What is this feature called in Apple development, and which control should I use to get it?
Does anyone know which control is used to automatically recognize objects in a photo on right-click and then cut out (lift) the subject?
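If it's the feature I think it is, the subject-lift behavior comes from VisionKit. A sketch for iOS (these are iOS 16/17-era APIs, so treat this as a starting point rather than a definitive answer; on macOS the counterpart is ImageAnalysisOverlayView):

import UIKit
import VisionKit

// Attach VisionKit's subject-lift interaction to an image view.
@MainActor
func enableSubjectLift(on imageView: UIImageView, image: UIImage) async throws {
    let interaction = ImageAnalysisInteraction()
    imageView.addInteraction(interaction)

    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    interaction.analysis = try await analyzer.analyze(image, configuration: configuration)
    interaction.preferredInteractionTypes = [.imageSubject]   // long-press subject lift
}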