I am trying to use the new API CAEDRMetadata.hlg(ambientViewingEnvironment:) introduced in iOS 17.0. Since the ambient viewing environment data is dynamic, I understand the edrMetadata of CAMetalLayer needs to be set on every draw call. But doing so causes CAMetalLayer to freeze and even crash.
if let pixelBuffer = image.pixelBuffer,
   let aveData = pixelBuffer.attachments.propagated[kCVImageBufferAmbientViewingEnvironmentKey as String] as? Data {
    if #available(iOS 17.0, *) {
        metalLayer.edrMetadata = CAEDRMetadata.hlg(ambientViewingEnvironment: aveData)
    } else {
        // Fallback on earlier versions
    }
}
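For what it's worth, one variation I've been experimenting with (just a sketch, not confirmed to avoid the freeze): cache the last ambient-viewing-environment blob and only rebuild the CAEDRMetadata when the attachment actually changes, so the layer is not reconfigured on every draw.

import CoreVideo
import QuartzCore

private var lastAVEData: Data?

func updateEDRMetadata(for pixelBuffer: CVPixelBuffer, on metalLayer: CAMetalLayer) {
    guard #available(iOS 17.0, *),
          let aveData = pixelBuffer.attachments.propagated[kCVImageBufferAmbientViewingEnvironmentKey as String] as? Data else {
        return
    }
    // Only touch the layer when the metadata actually changed.
    guard aveData != lastAVEData else { return }
    lastAVEData = aveData
    metalLayer.edrMetadata = CAEDRMetadata.hlg(ambientViewingEnvironment: aveData)
}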
Hi everyone,
I'm the owner of a radio station called Radio Krimi, and we have an official app on iOS. Because the technician no longer replies to our messages, we would like to update the app ourselves with a new audio stream link. I'm deeply sorry, but I really don't know how to do it; it should be easy, since it is just a new link replacing an old one.
Could someone please help us with the process? Thanks a lot! Seb
https://apps.apple.com/fr/app/radio-krimi/id1034088733
What version of draft-pantos-hls-rfc8216bis does Apple currently support?
Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (the 10-bit YCbCr 4:2:2 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:
device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()
It works for other formats such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations, or am I using the API incorrectly?
Basically, for this iPhone app I want to be able to record from either the built-in microphone or a connected USB audio device while simultaneously playing back processed audio on connected AirPods. It's a pretty simple AVAudioEngine setup that includes a couple of effects units. The category is set to .playAndRecord with the .allowBluetooth and .allowBluetoothA2DP options added. With no attempt to set the preferred input and AirPods connected, the AirPods mic is used and output also goes to the AirPods. If I call setPreferredInput with either the built-in mic or a USB audio device, I get the input I want, but output then always goes to the speaker. I don't see a good explanation for this, and overrideOutputAudioPort does not seem to have suitable options.
Testing this on iPhone 14 Pro
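For reference, the session setup looks roughly like this (a condensed sketch, not the exact project code; configureSession and the portType argument are illustrative):

import AVFoundation

func configureSession(preferring portType: AVAudioSession.Port) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)

    // Choosing e.g. .builtInMic or .usbAudio here is what sends output to the speaker.
    if let input = session.availableInputs?.first(where: { $0.portType == portType }) {
        try session.setPreferredInput(input)
    }
}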
Hello,
I'm trying to play a local playlist with AVPlayer and an AVAssetResourceLoaderDelegate over a TLS-PSK connection. I'm facing two obstacles.
While I'm able to download the m3u8 files myself, when it comes to the media chunks I can only redirect the URL, which doesn't give me much control over the connection.
It seems that URLSession does not support TLS-PSK.
Is there a way to accomplish this?
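For context, the redirect approach I mentioned looks roughly like this (a sketch; the custom-scheme handling is simplified and the class name is mine):

import AVFoundation

final class RedirectingLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url,
              var components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
            return false
        }
        components.scheme = "https" // swap the custom scheme back to the real one
        guard let realURL = components.url else { return false }

        // Hand the segment request back to AVFoundation's own loader, which is
        // exactly why I lose control over the underlying (TLS-PSK) connection.
        loadingRequest.redirect = URLRequest(url: realURL)
        loadingRequest.response = HTTPURLResponse(url: realURL, statusCode: 302,
                                                  httpVersion: nil, headerFields: nil)
        loadingRequest.finishLoading()
        return true
    }
}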
Thanks in advance
Hello,
I'm currently investigating the possibility of accessing my photos stored on my iCloud via a dedicated API, in order to create a photo portfolio.
However, after extensive research, I haven't found any documentation or public API allowing such access. I wonder if there are any future plans to make such an API available to third-party developers.
I would be grateful if you could provide me with information regarding the possibility of accessing an API for Apple Photos or any other solution you might suggest.
Thank you for your attention and assistance.
Yours sincerely
Owen
We are implementing new Live Activities in our app (we are a live shopping app). We also have Picture in Picture; however, I've noticed that when I start a Live Activity and then go into Picture in Picture, the Dynamic Island does not show the activity we've created for it. I can only see the activity in the Lock Screen widget while the audio for the videos plays. Is there any way to get the Dynamic Island to work with an app that is also in Picture in Picture? If there is and I'm doing something wrong, I can post some code. But from what I see, these don't seem to be compatible, unfortunately :(
We are currently working on a real-time, low-latency solution for video conferencing scenarios and have encountered some issues with the current implementation of the encoder. We need a feature enhancement for the VideoToolbox encoder.
In our use case, we need to control the encoding quality, which requires setting the maximum encoding QP. However, the kVTCompressionPropertyKey_MaxAllowedFrameQP only takes effect in the kVTVideoEncoderSpecification_EnableLowLatencyRateControl mode. In this mode, when the maximum QP is limited and the bitrate is insufficient, the encoder will drop frames.
Our desired scenario is for the encoder to not actively drop frames when the maximum QP is limited. Instead, when the bitrate is insufficient, the encoder should be able to encode the frame with the maximum QP, allowing the frame size to be larger. This would provide a more seamless experience for users in video conferencing situations, where maintaining consistent video quality is crucial.
It is worth noting that Android has already implemented this feature in Android 12, which demonstrates the value and feasibility of this enhancement. We kindly request that you consider adding support for external control of frame dropping in the VideoToolbox encoder to accommodate our needs. This enhancement would greatly benefit our project and others that require real-time, low-latency video encoding solutions.
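For reference, this is roughly how the two properties interact today (a sketch only; the codec and the QP value are placeholders):

import VideoToolbox

func makeLowLatencySession(width: Int32, height: Int32) -> VTCompressionSession? {
    // The QP cap is only honored when low-latency rate control is requested.
    let encoderSpec: [CFString: Any] = [
        kVTVideoEncoderSpecification_EnableLowLatencyRateControl: kCFBooleanTrue!
    ]

    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(allocator: nil,
                                            width: width,
                                            height: height,
                                            codecType: kCMVideoCodecType_H264,
                                            encoderSpecification: encoderSpec as CFDictionary,
                                            imageBufferAttributes: nil,
                                            compressedDataAllocator: nil,
                                            outputCallback: nil,
                                            refcon: nil,
                                            compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }

    // In this mode, capping the frame QP can currently lead to dropped frames when the
    // bitrate budget is too small, which is the behavior we would like to opt out of.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxAllowedFrameQP, value: 38 as CFNumber)
    return session
}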
I have an m3u8 like this:
#EXTM3U
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=190000,BANDWIDTH=240000,RESOLUTION=240x160,FRAME-RATE=24.000,CODECS="avc1.42c01e,mp4a.40.2",CLOSED-CAPTIONS=NONE
tracks-v1a1/mono.m3u8?thumbnails=10
#EXT-X-IMAGE-STREAM-INF:BANDWIDTH=10000,RESOLUTION=240x160,CODECS="jpeg",URI="images-240x160/tpl-0-60-10.m3u8?thumbnails=10"
and I get no thumbnails in Safari's native player. Could you please tell me why?
Hi there,
I am building a camera application to be able to capture an image with the wide and ultra wide cameras simultaneously (or as close as possible) with the intrinsics and extrinsics for each camera also delivered.
We are able to achieve this with an AVCaptureMultiCamSession and AVCaptureVideoDataOutput, setting up the .builtInWideAngleCamera and .builtInUltraWideCamera manually. Doing this, we are able to enable the delivery of the intrinsics via the AVCaptureConnection of the cameras. Also, geometric distortion correction is enabled for the ultra camera (by default).
However, we are exploring whether it is possible to move the application over to the .builtInDualWideCamera with AVCapturePhotoOutput and AVCaptureSession, to simplify our application and get access to depth data. We are using the isVirtualDeviceConstituentPhotoDeliveryEnabled = true property to allow simultaneous capture. Functionally, everything works fine, except that when isGeometricDistortionCorrectionEnabled is not set to false, photoOutput.isCameraCalibrationDataDeliverySupported returns false.
From this thread and the docs, it appears that we cannot get the intrinsics when isGeometricDistortionCorrectionEnabled = true (only applicable to the ultra wide), unless we use an AVCaptureVideoDataOutput.
Is there any way to get access to the intrinsics for the wide and ultra while enabling geometric distortion correction for the ultra?
guard let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
    throw InitError.error("Could not find builtInDualWideCamera")
}

self.captureDevice = captureDevice
self.videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
self.photoOutput = AVCapturePhotoOutput()
self.captureSession = AVCaptureSession()
self.captureSession.beginConfiguration()

captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
captureSession.addInput(self.videoDeviceInput)
captureSession.addOutput(self.photoOutput)

try captureDevice.lockForConfiguration()
captureDevice.isGeometricDistortionCorrectionEnabled = false // <- NB line
captureDevice.unlockForConfiguration()

/// configure photoOutput
guard self.photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported else {
    throw InitError.error("Dual photo delivery is not supported")
}
self.photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
print("isCameraCalibrationDataDeliverySupported", self.photoOutput.isCameraCalibrationDataDeliverySupported) // false when distortion correction is enabled

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer delegate", attributes: []))
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}

self.videoPreviewLayer.setSessionWithNoConnection(self.captureSession)
self.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let cameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: self.videoDeviceInput.ports.first!, videoPreviewLayer: self.videoPreviewLayer)
self.captureSession.addConnection(cameraVideoPreviewLayerConnection)

self.captureSession.commitConfiguration()
self.captureSession.startRunning()
As of iOS 17.x, a .mov (H.264/ALAC) recording has a problem where the track length doesn't equal the container length.
So if the Audio/Video tracks are 10.00 long, the fullVideo.mov could be 10.06 long.
This does not happen on any version prior to iOS 17. Is anyone else experiencing this, or does anyone have advice?
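In case it helps to reproduce: this is the sort of check used to compare the two lengths (a sketch; logDurations is my name, not app code):

import AVFoundation

func logDurations(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    // Container duration vs. per-track durations; on iOS 17 these can differ by ~0.06 s.
    let containerDuration = try await asset.load(.duration)
    print("container:", CMTimeGetSeconds(containerDuration))
    for track in try await asset.load(.tracks) {
        let timeRange = try await track.load(.timeRange)
        print(track.mediaType.rawValue, "track:", CMTimeGetSeconds(timeRange.duration))
    }
}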
Please tell me there is a way to programmatically delete playlists in Apple Music on iOS. I'll take any possible way or any hints whatsoever.
I use AVPlayer for playback and an AVAssetResourceLoaderDelegate to read the data. The following errors occasionally occur during playback:
-11819: Cannot Complete Action
-11800: The operation could not be completed
-11829: Cannot Open
-11849: Operation Stopped
-11870: This operation could not be completed
-1002: unsupported URL
-11850: Operation stopped
-1: Unknown error
-17377
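To get more context on these codes, I dump the item's error and error log when playback fails (a sketch; logFailure is my own helper):

import AVFoundation

func logFailure(of item: AVPlayerItem) {
    if let error = item.error as NSError? {
        print("item error:", error.domain, error.code, error.localizedDescription)
    }
    // The HLS error log often carries the underlying status code and a comment.
    for event in item.errorLog()?.events ?? [] {
        print("errorLog:", event.errorDomain, event.errorStatusCode, event.errorComment ?? "")
    }
}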
I use the official API to output a ProRAW file. When I transfer it to my Mac, it is very dark, but if I shoot with the built-in iOS Camera app, it is brighter.
This is how I save the ProRAW file to the photo library:
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photo,
                            data: photo.compressedData,
                            options: nil)

// Save the RAW (DNG) file as an alternate resource for the Photos asset.
let options = PHAssetResourceCreationOptions()
// options.shouldMoveFile = true
creationRequest.addResource(with: .alternatePhoto,
                            fileURL: photo.rawFileURL,
                            options: options)
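For completeness, the whole save runs inside the photo library change block it requires (shown here as a sketch; photo carries compressedData and rawFileURL as above):

PHPhotoLibrary.shared().performChanges({
    let creationRequest = PHAssetCreationRequest.forAsset()
    // Rendered (compressed) photo as the primary resource.
    creationRequest.addResource(with: .photo, data: photo.compressedData, options: nil)

    // RAW (DNG) file as an alternate resource for the same asset.
    let options = PHAssetResourceCreationOptions()
    creationRequest.addResource(with: .alternatePhoto, fileURL: photo.rawFileURL, options: options)
}, completionHandler: { success, error in
    if !success { print("Could not save the RAW photo:", error?.localizedDescription ?? "unknown error") }
})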
Hi,
I have implemented Core Haptics in a Unity app, but there are some unexpected behaviours.
I followed the documentation on GitHub and did not get any errors while implementing it.
My issue is that Core Haptics throws errors on an iPhone 11 running iOS 16.7, but works perfectly on an iPhone 13 and an iPhone 14 Pro.
This is the error I am getting in the logs:
[hapi] HapticDictionaryReader.mm:44 -[HapticDictionaryReader readAndVerifyVersion:error:]: ERROR: Unsupported version number: 10.0
2023-11-20 18:20:01.040069+0300 *** - Game599:12190] Error occurred playing JSON one-shot: Error Domain=com.apple.CoreHaptics Code=-4809 "(null)"
CHException: Exception of type 'Apple.CoreHaptics.CHException' was thrown.
at Apple.CoreHaptics.CHHapticEngine.PlayPattern (Apple.CoreHaptics.CHHapticPattern ahap) [0x00000] in <00000000000000000000000000000000>:0
at ButtonController.OpenCenterPanel (UnityEngine.Transform panel) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.Events.UnityEvent.Invoke () [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessTouchPress (UnityEngine.EventSystems.PointerEventData pointerEvent, System.Boolean pressed, System.Boolean released) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessTouchEvents () [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.Process () [0x00000] in <00000000000000000000000000000000>:0
UnityEngine.EventSystems.ExecuteEvents:Execute(GameObject, BaseEventData, EventFunction`1)
UnityEngine.EventSystems.StandaloneInputModule:ProcessTouchPress(PointerEventData, Boolean, Boolean)
UnityEngine.EventSystems.StandaloneInputModule:ProcessTouchEvents()
UnityEngine.EventSystems.StandaloneInputModule:Process()
It says unsupported version number: 10.0, and I do not know why. For context, I play haptic patterns like this:
HapticEngine.PlayPatternFromAhap(_hitAHAP);
How can I solve this problem?
Hello,
I'm having an issue where my app is in TestFlight, and some of my testers are reporting that FairPlay-protected videos are not playing back on iOS 17.
It's been working fine on iOS 16 (my app's initial target).
I can see from the debug logs that for an online stream request -
contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)
is never called.
Whereas for an offline-playback download request, the function is called.
I've used much of the sample code in "HLS Catalog With FPS" as part of the FPS developer package.
All of my m3u8 files are version 5 and contain encryption instructions like below:
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://some-uuid",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
Here's a short excerpt of the code being run:
let values = HTTPCookie.requestHeaderFields(with: cookies)
let cookieOptions = ["AVURLAssetHTTPHeaderFieldsKey": values]

assetUrl = "del\(assetUrl)"
clip!.assetUrl = AVURLAsset(url: URL(string: assetUrl)!, options: cookieOptions)
clip!.assetUrl!.resourceLoader.setDelegate(self, queue: DispatchQueue.global(qos: .default))
ContentKeyManager.shared.contentKeySession.addContentKeyRecipient(clip!.assetUrl!)

urlAssetObserver = self.observe(\.isPlayable, options: [.new, .initial]) { [weak self] (assetUrl, _) in
    guard let strongSelf = self else { return }
    strongSelf.playerItem = AVPlayerItem(asset: (self!.clip!.assetUrl)!)
    strongSelf.player.replaceCurrentItem(with: strongSelf.playerItem)
}
The error thrown is:
Task .<8> finished with error [18,446,744,073,709,550,614] Error Domain=NSURLErrorDomain Code=-1002 "unsupported URL" UserInfo={NSLocalizedDescription=unsupported URL, NSErrorFailingURLStringKey=skd://some-uuid, NSErrorFailingURLKey=skd://some-uuid, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalDataTask .<8>"
), _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask .<8>, NSUnderlyingError=0x2839a7450 {Error Domain=kCFErrorDomainCFNetwork Code=-1002 "(null)"}}
Which I believe is being thrown from AVPlayerItem.
Without the delegate, it appears to play back fine. However, I need the delegate (I think), since I'm appending some query params to each request for the segments.
I have an observer on the playerItem, per the example project, which changes the status to .failed once the -1002 error is thrown.
Please let me know if anything rings to mind to try, or if I can provide any additional info.
Thanks in advance!
Hey all!
I'm trying to build a Camera app that records Video and Audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter.
When creating the Recording Session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why.
This is how I create my AVAssetWriter:
let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)
let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)
assetWriter.startWriting()
There are two slow parts in that code:
The createAudioWriter(...) function takes ages!
This is how I create the audio AVAssetWriterInput:
// audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
let format = audioInput.device.activeFormat.formatDescription
let audioWriter = AVAssetWriterInput(mediaType: .audio,
                                     outputSettings: settings,
                                     sourceFormatHint: format)
audioWriter.expectsMediaDataInRealTime = true
The above code takes up to 3000ms on an iPhone 11 Pro!
When I remove the recommended settings and just pass nil as outputSettings:
audioWriter = AVAssetWriterInput(mediaType: .audio,
                                 outputSettings: nil)
audioWriter.expectsMediaDataInRealTime = true
...It initializes almost instantly - something like 30 to 50ms.
Starting the AVAssetWriter takes ages!
Calling this method:
assetWriter.startWriting()
...takes 3000 to 5000ms on my iPhone 11 Pro!
Does anyone have any ideas why this is so slow? Am I doing something wrong?
It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds initialization time is not acceptable.
Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here.
I'd appreciate any help, thanks!
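For what it's worth, this is the kind of timing wrapper I used to get the numbers above (a sketch; measure and the labels are mine):

import Foundation

func measure<T>(_ label: String, _ work: () throws -> T) rethrows -> T {
    let start = CFAbsoluteTimeGetCurrent()
    defer { print(label, "took", Int((CFAbsoluteTimeGetCurrent() - start) * 1000), "ms") }
    return try work()
}

// Usage (inside the recording session setup):
// let audioWriter = measure("createAudioWriter") { self.createAudioWriter(...) }
// measure("startWriting") { assetWriter.startWriting() }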
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated.
However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, though the call succeeds, camera notifications are still delivered to the listener block.
Since the header file states that this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change," I'd assume that once it has been called, no more notifications would be delivered?
Sample code:
#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>
int main(int argc, const char * argv[]) {
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    OSStatus status = -1;
    CMIOObjectID deviceID = 0;

    CMIOObjectPropertyAddress propertyStruct = {0};
    propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
    propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
    propertyStruct.mElement = kAudioObjectPropertyElementMain;

    deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];

    CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
    };

    status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
    if (noErr != status) {
        NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
        return -1;
    }

    NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
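    // Note (an observation, not a confirmed diagnosis): the listener above was added on
    // dispatch_get_global_queue(), but the remove call below passes dispatch_get_main_queue().
    // The remove is expected to receive the same queue/block pair that was registered,
    // so this mismatch may be worth checking.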
    status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
    if (noErr != status) {
        NSLog(@"ERROR: 'CMIOObjectRemovePropertyListenerBlock' failed with %d", status);
        return -1;
    }

    NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
    return 0;
}
Compiling and running this code outputs:
Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Note the last two log messages: the CMIOObjectPropertyListenerBlock is still invoked even though CMIOObjectRemovePropertyListenerBlock has successfully been called.
Am I just doing something wrong here? Or is the API broken?
Hello!
I'm trying to display AVPlayerViewController in a separate WindowGroup - my main window opens a new window where the only element is a struct that implements UIViewControllerRepresentable for AVPlayerViewController:
@MainActor public struct AVPlayerView: UIViewControllerRepresentable {
    public let assetName: String

    public init(assetName: String) {
        self.assetName = assetName
    }

    public func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer()
        return controller
    }

    public func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        Task {
            if context.coordinator.assetName != assetName {
                let url = Bundle.main.url(forResource: assetName, withExtension: ".mp4")
                guard let url else { return }
                controller.player?.replaceCurrentItem(with: AVPlayerItem(url: url))
                controller.player?.play()
                context.coordinator.assetName = assetName
            }
        }
    }

    public static func dismantleUIViewController(_ controller: AVPlayerViewController, coordinator: Coordinator) {
        controller.player?.pause()
        controller.player = nil
    }

    public func makeCoordinator() -> Coordinator {
        return Coordinator()
    }

    public class Coordinator: NSObject {
        public var assetName: String?
    }
}

WindowGroup(id: Window.videoPlayer.rawValue) {
    AVPlayerView(assetName: "wwdc")
        .onDisappear { print("DISAPPEAR") }
}
This displays the video player in non-inline mode and plays the video.
The problem appears when I try to close the video player's window using the close button. Sound from the video continues playing in the background. I've tried to clean up the state myself using the dismantleUIViewController and onDisappear methods, but they are not called by the system (it works correctly if a window doesn't contain AVPlayerView). This appears on Xcode 15.1 Beta 3 (I haven't tested other versions).
Is there something I'm doing incorrectly that is causing this issue, or is it a bug and I need to wait until it's fixed?