Hi Apple Engineer,
My app uses the ImageCapture framework to connect to DSLR cameras. Before iOS 18 this approach worked, but after I upgraded my iPhone and iPad I found that my app can no longer connect to a DSLR camera. When I open Settings -> Privacy & Security -> Files and Folders, my app no longer appears there, and I swear it worked before iOS 18.
I found other developers reporting the same problem:
https://forums.developer.apple.com/forums/thread/756960
https://developer.apple.com/forums/thread/765768
I also found a way to reproduce this problem on iOS 18:
Do "Reset All Settings".
Can you help me with this problem, or tell me how to use the API properly? I look forward to your reply. Thank you very much.
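For context, here is a simplified sketch of the ImageCaptureCore flow my app uses (names are illustrative, and the real code also implements ICCameraDeviceDelegate before opening the session):

import ImageCaptureCore

// Simplified connection flow: browse for cameras and open a session when one appears.
final class CameraConnector: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        guard let camera = device as? ICCameraDevice else { return }
        print("Found camera: \(camera.name ?? "unknown")")
        // Full ICCameraDeviceDelegate conformance omitted here for brevity.
        camera.requestOpenSession()   // on iOS 18 this no longer succeeds for my DSLR
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreComing: Bool) {}
}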
Hi community,
I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get an error:
AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"
If I understand error code -10868 correctly, the format is not supported. But how can I use the PCM Int16 format? Here is my method:
- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
    if (self.isRunning) {
        return;
    }

    self.audioEngine = [[AVAudioEngine alloc] init];
    self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:self.audioPlayerNode];

    AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
    self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                        sampleRate:sampleRate
                                                          channels:channelCount
                                                       interleaved:YES];

    NSLog(@"Audio Format: %@", self.audioFormat);
    NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
    NSLog(@"Audio Engine: %@", self.audioEngine);

    // Error on this line
    [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

    /**NSError *error = nil;
    if (![self.audioEngine startAndReturnError:&error]) {
        NSLog(@"Error while starting the audio engine: %@", error);
        return;
    }
    [self.audioPlayerNode play];
    self.isRunning = YES;*/
}
Also, I see that the audioEngine doesn't seem to be running:
Audio Engine:
________ GraphDescription ________
AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1
Has anyone already used this format with AVAudioFormat?
Thank you!
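PS: the only workaround I can think of so far is to keep the player-to-mixer connection in the mixer's native Float32 format and convert my incoming Int16 data with AVAudioConverter. A rough Swift sketch of that idea (untested; all names are illustrative, not my real code):

import AVFoundation

// Keep the engine connection in Float32 and convert incoming Int16 buffers before scheduling.
final class Int16Player {
    private let engine: AVAudioEngine
    private let player: AVAudioPlayerNode
    private let inputFormat: AVAudioFormat
    private let outputFormat: AVAudioFormat
    private let converter: AVAudioConverter

    init?(sampleRate: Double, channels: AVAudioChannelCount) {
        guard let int16Format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                              sampleRate: sampleRate,
                                              channels: channels,
                                              interleaved: true) else { return nil }
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        engine.attach(player)
        let mixerFormat = engine.mainMixerNode.outputFormat(forBus: 0)   // Float32, non-interleaved
        engine.connect(player, to: engine.mainMixerNode, format: mixerFormat)
        guard let converter = AVAudioConverter(from: int16Format, to: mixerFormat) else { return nil }

        self.engine = engine
        self.player = player
        self.inputFormat = int16Format
        self.outputFormat = mixerFormat
        self.converter = converter
    }

    func start() throws {
        try engine.start()
        player.play()
    }

    // Convert one incoming interleaved Int16 buffer and schedule it on the player node.
    func schedule(_ int16Buffer: AVAudioPCMBuffer) {
        let capacity = AVAudioFrameCount(Double(int16Buffer.frameLength)
            * outputFormat.sampleRate / inputFormat.sampleRate) + 1
        guard let floatBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: capacity) else { return }

        var consumed = false
        var conversionError: NSError?
        let status = converter.convert(to: floatBuffer, error: &conversionError) { _, outStatus in
            if consumed {
                outStatus.pointee = .noDataNow
                return nil
            }
            consumed = true
            outStatus.pointee = .haveData
            return int16Buffer
        }
        if status != .error, conversionError == nil {
            player.scheduleBuffer(floatBuffer, completionHandler: nil)
        }
    }
}

I would then feed each decoded Int16 buffer through schedule(_:), but I'd still prefer to connect the nodes with the Int16 format directly if that is possible.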
I feel that the iOS 18 camera filters are overcomplicated and produce lower-quality results than the iOS 17 filters. I really miss the Vivid filter.
It was perfect on iOS 17.
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines.
First, after the AVAssetWriterInput is created:
frameInput.performsMultiPassEncodingIfSupported = true
Second, after the call to multiviewWriter.startWriting():
print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")
Which prints true.
This leads me to believe that the first encoding pass should proceed as normal (even though I haven't handled the logic for the completion of the first pass, etc.).
However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:
Fatal error: Failed to append tagged buffers to multiview output
Am I missing a step? Or is the multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
We have a new photo sharing app (https://photodare.ca).
We've had no issues with photos loading in North America and the Caribbean, but so far two users (Germany, Netherlands) say they can't load photos even though they've confirmed that photo permissions are enabled.
I can't reproduce this in Canada.
Does anyone know about other permissions we need to set up for European countries, or is anyone in a GDPR country willing to try this for us?
They were on iOS 17.6.1.
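For reference, this is roughly the access check we ship (simplified, with illustrative names); I'd be curious whether the affected users end up in the limited or denied branch:

import Photos

// Simplified version of our photo-access check; the real app also logs the
// result so we can ask affected users to send it to us.
func checkPhotoAccess(completion: @escaping (String) -> Void) {
    let current = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    guard current == .notDetermined else {
        completion(describe(current))
        return
    }
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        completion(describe(status))
    }
}

func describe(_ status: PHAuthorizationStatus) -> String {
    switch status {
    case .authorized:    return "authorized (full library)"
    case .limited:       return "limited (only selected photos)"
    case .denied:        return "denied"
    case .restricted:    return "restricted"
    case .notDetermined: return "not determined"
    @unknown default:    return "unknown"
    }
}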
Thanks either way
Task {
    for await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
        switch update {
        case .initial(let urls):
            print("frank: init \(urls)")
            await MainActor.run {
                let label = UILabel(frame: CGRect(x: 100, y: 100, width: 100, height: 30))
                label.text = "frank test"
                label.textColor = .black
                UIViewController.getTop().view.addSubview(label)
            }
        case .added(let url):
            print("frank: add \(url)")
        case .removed(let url):
            print("frank: removed \(url)")
        default:
            break
        }
    }
}
Why is 'case .initial(let urls)' never executed? Can someone provide sample code?
Hi:
I am working with the ObjectCapture frameworks and sample code.
Everything works great.
We are trying to go from capturing 12 MP images, as in the sample code, to capturing 48 MP (6048 × 8064) images.
We can't seem to get it to work.
Any advice here?
This idea would be cool:
When people finish recording a video and later realize there's something else worth capturing, they can only create a second clip. But what if it were possible to reopen the first video and continue recording from where they left off? This would be a great convenience for many people.
With iOS 18.1 having call recording out of the box, is it now possible to build apps that can record calls?
I could not find anything about this in the Swift/iOS documentation yet.
I'm seeking information about the original file schema for an m4a file recorded directly on an iPhone (iPhone 5 running iOS 9.2.0).
I currently have two files from which I extracted metadata using ExifTool.
The first file was provided to me by someone who claims it was recorded on an iPhone 5 with iOS 9.2.0. I would like to verify whether this file has been edited.
File Permissions: -rwx------
Content Create Date: 2016:03:01 14:21:08+07:00
The second file was recorded by me on the same device model and iOS version.
File Permissions: -rw-r--r--
Date/Time Original: 2024:10:03 11:44:16+07:00
As you can see, the file permissions differ, and the key for the recording date also differs: one uses "Content Create Date" while the other uses "Date/Time Original." I would like to determine if the first file was edited, but I haven't been able to find any official documentation on the m4a schema or metadata structure from audio recorder apps. I reached out to support, and they directed me to this forum. Any insights or help would be appreciated.
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it.
Here's a simplified version of my tracking logic:
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device) else {
        fatalError("Camera not available")
    }

    let currentZoomFactor = device.videoZoomFactor
    let dimensions = device.activeFormat.formatDescription.dimensions
    let referenceDimensions = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
    let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))

    let deviceOrientation = UIDevice.current.orientation
    let cameraOrientation: DockAccessory.CameraOrientation = {
        switch deviceOrientation {
        case .landscapeLeft: return .landscapeLeft
        case .landscapeRight: return .landscapeRight
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        default: return .unknown
        }
    }()

    let cameraInfo = DockAccessory.CameraInformation(
        captureDevice: input.device.deviceType,
        cameraPosition: input.device.position,
        orientation: cameraOrientation,
        cameraIntrinsics: useIntrinsics ? intrinsics : nil,
        referenceDimensions: referenceDimensions
    )

    let observation = DockAccessory.Observation(
        identifier: 0,
        type: .object,
        rect: boundingBox
    )
    let observations = [observation]

    try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}

func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let width = Float(dimensions.width)
    let height = Float(dimensions.height)
    let diagonalPixels = sqrt(width * width + height * height)
    let estimatedFocalLength = diagonalPixels * 0.8

    let fx = Float(estimatedFocalLength) * Float(currentZoom)
    let fy = fx
    let cx = width / 2.0
    let cy = height / 2.0

    return matrix_float3x3(
        SIMD3<Float>(fx, 0, cx),
        SIMD3<Float>(0, fy, cy),
        SIMD3<Float>(0, 0, 1)
    )
}
I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases.
Questions:
Why might increasing the zoom factor cause this erratic movement?
I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently? (See the sketch after these questions for an alternative I'm considering.)
Are there any other factors I should consider when using DockKit with a variable zoom?
Could the frequency of calls to trackObject (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?
Any insights or suggestions would be greatly appreciated. Thanks!
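For completeness, the alternative I've been considering instead of estimating the intrinsics myself is asking AVFoundation to deliver the real intrinsic matrix with each frame. A rough sketch (this assumes an AVCaptureVideoDataOutput pipeline and skips error handling):

import AVFoundation
import CoreMedia
import simd

// Enable per-frame intrinsic matrix delivery on the video connection.
let session = AVCaptureSession()
let device = AVCaptureDevice.default(for: .video)!
let input = try! AVCaptureDeviceInput(device: device)
let output = AVCaptureVideoDataOutput()
session.addInput(input)
session.addOutput(output)

if let connection = output.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// Called from captureOutput(_:didOutput:from:) to read the delivered matrix.
func intrinsics(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    var matrix = matrix_identity_float3x3
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    return matrix
}

Whether DockKit behaves better with these delivered values than with my zoom-scaled estimate is essentially what I'm asking above.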
Hello developer community.
I recently purchased my new iPhone 16 Pro Max; it is a premium device with great quality overall.
However, I am having big trouble shooting in ProRAW Max (48 MP mode) with the native camera.
Just to be clear, the problem I will describe does not happen in third-party apps such as ProCam; it only happens with the native camera.
When I use ProRAW Max, take a photo, and view it in the gallery, the image doesn't load and render properly. When I zoom in all the way, I can see pixelated areas, artifacts, very low resolution, and excessive denoising.
For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos with ProCam (same settings and configuration) on the 16 Pro Max. I take the photo, open the gallery, and see full detail when zoomed to 100%.
I tried formatting the phone and reinstalling the software via my Mac. I also searched some forums to see whether anyone else has the same issue, but there is very little information available so far.
I'm in contact with Apple Support in my country (Portugal), and they escalated this problem to the engineers (that's what I've been told).
They ran all the tests remotely (via analytics and improvements) and told me that my phone is perfect on the hardware side. I will wait to be contacted again in the coming days.
I'm on iOS 18.0.1 (the latest software available at this time).
I tried multiple 16 Pro Max units from friends, family, and stores (roughly 10 units), and they all showed exactly the same problem.
I’m a professional photographer, so I find this frustrating and unacceptable.
I would appreciate any additional suggestion or information. Thank you!
I cannot attach photos or files because they are bigger than 5 MB.
We tested the iOS AAC-LC encoder using the AudioToolbox framework; no matter whether we set mManufacturer to kAppleHardwareAudioCodecManufacturer or kAppleSoftwareAudioCodecManufacturer, it always runs on the CPU.
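For context, this is roughly how we select the codec implementation (simplified sketch with illustrative formats; our real pipeline is more involved):

import AudioToolbox

// Request a specific AAC encoder implementation via AudioClassDescription.
var inputFormat = AudioStreamBasicDescription(
    mSampleRate: 44_100, mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
    mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)

var outputFormat = AudioStreamBasicDescription(
    mSampleRate: 44_100, mFormatID: kAudioFormatMPEG4AAC,
    mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 1024,
    mBytesPerFrame: 0, mChannelsPerFrame: 2, mBitsPerChannel: 0, mReserved: 0)

var codecDescription = AudioClassDescription(
    mType: kAudioEncoderComponentType,
    mSubType: kAudioFormatMPEG4AAC,
    mManufacturer: kAppleHardwareAudioCodecManufacturer)   // or kAppleSoftwareAudioCodecManufacturer

var converter: AudioConverterRef?
let status = AudioConverterNewSpecific(&inputFormat, &outputFormat, 1, &codecDescription, &converter)
print("AudioConverterNewSpecific status: \(status)")

Whichever manufacturer we pass, the encoding work still shows up on the CPU.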
We are working on an app for the Vision Pro that has a high polygon count and lots of high-resolution textures. Everything looks smooth and in general very good. The issue is that the moment we turn on Voice Control, even if it is not being used, the visuals at the center start to stutter left to right. Has anyone seen this? It must be a bug; is there any workaround?
Thanks,
Guillermo
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device plays audio through AirPods Pro 2, we can hear noise similar to peak clipping and distortion, especially when the source audio is loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem.
We also found two phenomena:
If we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem;
If we use an iOS 17 device to connect to the same AirPods Pro 2 for testing, there is no such noise problem;
Therefore, we suspect this is caused by a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, released on June 26, while iOS 18.0 was released on September 16; maybe they are not fully compatible. We hope subsequent firmware updates can fix this problem.
I’m working on an iOS app for a client, and I have a question regarding a specific feature we're looking to implement.
We want the app to respond to a user pressing the volume button three times while the app is in the background. The goal is to allow users to discreetly trigger a safety feature without drawing attention, particularly in situations where they may be in danger or at risk.
This feature is critical for the app and would be a valuable addition, as it could potentially help protect users in emergency situations. However, I haven’t found much information on whether iOS allows background listening for volume button presses. Therefore, I would greatly appreciate your insights on the following:
Is it possible to listen for volume button presses when the app is in the background, or are there system-level restrictions that prevent this?
If it's not directly possible, are there any special provisions, APIs, or entitlements that can be requested from Apple to enable this functionality?
In case this feature is not supported, are there alternative approaches to achieve a similar discreet activation mechanism? (See the sketch after these questions for the closest thing I've found so far.)
If this is something that requires special permission or a process, could you please guide me on how to proceed?
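For context, the closest thing I've found so far is observing the system output volume, which as far as I can tell only works while the app is in the foreground, so it doesn't actually meet the requirement (sketch; names and thresholds are illustrative):

import AVFoundation

// Foreground-only detection of three quick volume changes via KVO on outputVolume.
final class VolumeTripleTapDetector {
    private var observation: NSKeyValueObservation?
    private var recentChanges: [Date] = []

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(true)   // outputVolume only updates while the session is active
        observation = session.observe(\.outputVolume, options: [.new]) { [weak self] _, _ in
            self?.registerPress()
        }
    }

    private func registerPress() {
        let now = Date()
        recentChanges = recentChanges.filter { now.timeIntervalSince($0) < 1.5 } + [now]
        if recentChanges.count >= 3 {
            recentChanges.removeAll()
            print("Triple volume press detected (foreground only)")
            // Note: presses at maximum or minimum volume may not change outputVolume at all.
        }
    }
}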
I understand that maintaining user privacy and security is a priority for iOS, and I want to ensure that any implementation fully complies with Apple's guidelines.
Thanks in advance for your help!
Hi,
for the implementation of an audio player with signed URLs, I need to be able to set an authorization header on the requests for an AVURLAsset.
This works, but not over AirPlay when trying to stream multiple songs in a queue.
For each item I do:
let headerFields: [String: String] = ["Authorization": getIdToken()!]
super.init(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])
But only the first two songs in the queue actually get this authorization header sent along; somehow it is removed for subsequent songs.
Any ideas on how I can fix this?
thanks,
Thomas
Hello,
I apologize if the answer is obvious but I'm having a hard time figuring this one out.
Let's say the user taps an "Edit" button in my LockedCameraCaptureSession. The extension calls:
activity.userInfo = ["ActivityKey": "ID"]
try await session.openApplication(for: activity)
Can I retrieve, in my application, the data stored in activity.userInfo (let's say, a flag "open editor"), or is data passing exclusively handled via the appContext of CameraCaptureIntent?
Thank you!
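For reference, this is where I'm currently trying to read it on the app side (UIKit sketch; I'm just logging whatever arrives, since whether userInfo survives the handoff is exactly what I'm unsure about):

import UIKit

// Log any user activity handed to the app, whether it arrives at launch or while running.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        for activity in connectionOptions.userActivities {
            print("cold launch activity: \(activity.activityType), userInfo: \(String(describing: activity.userInfo))")
        }
    }

    func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
        print("continued activity: \(userActivity.activityType), userInfo: \(String(describing: userActivity.userInfo))")
    }
}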
I would like to offer functionality where the user aims the camera at a graph (including axes and scales), and the app detects the graph and replicates it from the image.
I have the whole camera setup finished with an AVCaptureSession, VNDetectContoursRequest, VNImageRequestHandler, etc.
However, I now get many, many results, so I guess I need to tell the image-processing step what I am looking for, i.e. filter the VNContoursObservations.
I 'think' I first need to detect two perpendicular lines (the two axes). How do I do that? If I do not see them, I can just ignore that input and wait for the next VNContoursObservation.
Once I have found the axes of the graph, I will need to find the curve (graph) that I want to scan. Any tips on how I can find that curve and turn it into a set of coordinates?
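For the axes specifically, the rough idea I've been sketching (untested; the thresholds are guesses) is to approximate each contour as a polygon and look for one long, nearly horizontal segment and one long, nearly vertical segment that meet near a corner:

import Vision
import simd

// Search the detected contours for two long segments that could be the x and y axes.
func findAxes(in observation: VNContoursObservation) -> (xAxis: (simd_float2, simd_float2),
                                                         yAxis: (simd_float2, simd_float2))? {
    var horizontal: [(simd_float2, simd_float2)] = []
    var vertical: [(simd_float2, simd_float2)] = []

    for contour in observation.topLevelContours {
        guard let polygon = try? contour.polygonApproximation(epsilon: 0.01) else { continue }
        let points = polygon.normalizedPoints
        for i in 0..<max(points.count - 1, 0) {
            let a = points[i], b = points[i + 1]
            guard simd_distance(a, b) > 0.3 else { continue }         // ignore short segments
            if abs(a.y - b.y) < 0.02 { horizontal.append((a, b)) }    // nearly horizontal
            if abs(a.x - b.x) < 0.02 { vertical.append((a, b)) }      // nearly vertical
        }
    }

    // Keep the first horizontal/vertical pair whose endpoints nearly touch (the origin).
    for h in horizontal {
        for v in vertical {
            let gaps = [h.0, h.1].flatMap { p in [v.0, v.1].map { simd_distance(p, $0) } }
            if (gaps.min() ?? 1) < 0.05 {
                return (xAxis: h, yAxis: v)
            }
        }
    }
    return nil
}

If no pair is found, I would just skip that frame and wait for the next VNContoursObservation, as described above.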
Thanks!
Wouter
My app is live on the App Store. Users on the iPhone 16 Pro Max are getting "Operation Stopped" while combining videos and audio; this happens specifically on the iPhone 16 Pro Max, and on every other device it works fine. When I use AVAssetExportPresetPassthrough it is able to combine the videos and audio, but it doesn't respect the encoding and the result has no audio.
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:composition];
if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {
    presetName = AVAssetExportPresetHighestQuality;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1920x1080]) {
    presetName = AVAssetExportPreset1920x1080;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1280x720]) {
    presetName = AVAssetExportPreset1280x720;
} else {
    presetName = AVAssetExportPresetPassthrough;
}
// Note: the trailing else below belongs to an enclosing if-statement that is not part of this snippet.
} else {
    presetName = AVAssetExportPreset1280x720;
}