Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.


Post · Replies · Boosts · Views · Activity

Video player controls aren't showing for the video in immersive view
I have an AVPlayer() which loads the video and places it on the screen ModelEntity in the immersive view using a VideoMaterial. This also makes the video untappable because it is a VideoMaterial. Here's the code:

    let screenModelEntity = model.garageScreenEntity as! ModelEntity
    let modelEntityMesh = screenModelEntity.model!.mesh
    let url = Bundle.main.url(forResource: "<URL>", withExtension: "mp4")!
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)
    let player = AVPlayer()
    let material = VideoMaterial(avPlayer: player)
    screenModelEntity.components[ModelComponent.self] = .init(mesh: modelEntityMesh, materials: [material])
    player.replaceCurrentItem(with: playerItem)
    return player

I was able to load and play the video. However, I cannot figure out how to show the player controls (AVPlayerViewController) to the user, similar to the DestinationVideo sample app. How can I add the video player controls in this case?
0 replies · 0 boosts · 680 views · Apr ’24
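A hedged sketch for the question above: VideoMaterial only renders frames onto the entity and does not surface any transport controls, so one common direction (an assumption, not necessarily how the DestinationVideo sample does it) is to host AVPlayerViewController in a SwiftUI wrapper and show it in a window or attachment that drives the same AVPlayer instance. PlayerControlsView is an illustrative name.

    import SwiftUI
    import AVKit

    // Sketch: expose the system playback controls for the AVPlayer that already
    // drives the VideoMaterial, by hosting AVPlayerViewController in SwiftUI.
    struct PlayerControlsView: UIViewControllerRepresentable {
        let player: AVPlayer   // the same instance handed to VideoMaterial

        func makeUIViewController(context: Context) -> AVPlayerViewController {
            let controller = AVPlayerViewController()
            controller.player = player
            return controller
        }

        func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
            controller.player = player
        }
    }

    // Usage sketch: present PlayerControlsView(player: sharedPlayer) from a
    // RealityView attachment or a separate WindowGroup so the controls sit
    // near the screen entity while VideoMaterial keeps rendering the frames.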
Reading 12K panoramic images: system API read error — does visionOS not support viewing 12K panoramic photos?
    extension Entity {
        func addPanoramicImage(for media: WRMedia) {
            let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
                receiveCompletion: {
                    switch $0 {
                    case .finished: break
                    case .failure(let error): assertionFailure("\(error)")
                    }
                },
                receiveValue: { [weak self] texture in
                    guard let self = self else { return }
                    var material = UnlitMaterial()
                    material.color = .init(texture: .init(texture))
                    self.components.set(ModelComponent(
                        mesh: .generateSphere(radius: 1E3),
                        materials: [material]
                    ))
                    self.scale *= .init(x: -1, y: 1, z: 1)
                    self.transform.translation += SIMD3(0.0, -1, 0.0)
                }
            )
            components.set(Entity.WRSubscribeComponent(subscription: subscription))
        }

        func updateRotation(for media: WRMedia) {
            let angle = Angle.degrees(0.0)
            let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
            self.transform.rotation = rotation
        }

        struct WRSubscribeComponent: Component {
            var subscription: AnyCancellable
        }
    }

The failure branch is hit:

    case .failure(let error): assertionFailure("\(error)")

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
1 reply · 0 boosts · 610 views · Apr ’24
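For the 12K panorama above, a minimal sketch under the assumption that the MTKTextureLoader failure is size-related (a 12K image exceeds the texture dimensions many GPUs accept): downscale the source with ImageIO first, then build the RealityKit texture from the resulting CGImage. The 8192-pixel cap is an illustrative value, not a documented visionOS limit.

    import Foundation
    import ImageIO
    import RealityKit

    // Sketch: decode a very large panorama at a reduced pixel size, then create
    // a RealityKit TextureResource from the downscaled CGImage.
    func makeDownscaledTexture(from url: URL, maxPixelSize: Int = 8192) throws -> TextureResource {
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize   // assumed safe upper bound
        ]
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
            throw CocoaError(.fileReadCorruptFile)
        }
        return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    }

    // Usage sketch: swap TextureResource.loadAsync(named:) for this helper when the
    // asset is oversized, then assign the texture to the UnlitMaterial as before.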
iOS app crashes while converting CVPixelBuffer to JPG
I have been seeing some crash reports for my app on some devices (not all of them). The crash occurs while converting a CVPixelBuffer captured from video to a JPG using VTCreateCGImageFromCVPixelBuffer from VideoToolbox. I have not been able to reproduce the crash on local devices, even under adverse memory conditions (many apps running in the background). The field crash reports show that VTCreateCGImageFromCVPixelBuffer does the conversion on another thread, and that thread crashed in a call to vConvert_420Yp8_CbCr8ToARGB8888_vec. Any suggestions on how to debug this further would be helpful.
2 replies · 0 boosts · 694 views · Apr ’24
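For the crash above, the conversion happens inside VideoToolbox on a thread the app doesn't control, which makes it hard to instrument. A hedged alternative worth comparing against the VT path (not a root-cause fix) is to produce the JPEG with Core Image, keeping the conversion in code you own:

    import CoreImage
    import CoreVideo
    import Foundation

    // Sketch: encode a CVPixelBuffer to JPEG data with Core Image instead of
    // VTCreateCGImageFromCVPixelBuffer plus CGImage-based encoding.
    let sharedCIContext = CIContext()   // reuse one context; creating one per frame is expensive

    func jpegData(from pixelBuffer: CVPixelBuffer) -> Data? {
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
        // Compression quality can be supplied through CIImageRepresentationOption keys if needed.
        return sharedCIContext.jpegRepresentation(of: image, colorSpace: colorSpace, options: [:])
    }

    // If the VT path must stay, logging the pixel format, dimensions, and IOSurface
    // backing of the buffers that reach it may help narrow down which inputs end up
    // in vConvert_420Yp8_CbCr8ToARGB8888_vec on the affected devices.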
Question regarding the kVTVideoEncoderList_IsHardwareAccelerated flag
I am a bit confused about whether certain Video Toolbox (VT) encoders support hardware acceleration or not. When I query the list of VT encoders (VTCopyVideoEncoderList(nil, &encoderList)) on an iPhone 14 Pro, the kVTVideoEncoderList_IsHardwareAccelerated flag is not present for the avc1 (AVC / H.264) and hevc1 (HEVC / H.265) encoders, which, based on the documentation in VTVideoEncoderList.h, means that the encoders do not support hardware acceleration: "optional. CFBoolean. If present and set to kCFBooleanTrue, indicates that the encoder is hardware accelerated." In fact, no encoder in this list returns this flag as true, and most of them do not include the flag in their dictionaries at all. On the other hand, when I create a compression session using VTCompressionSessionCreate() and pass kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as true in the encoder specification, then query kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder, I get a CFBoolean value of true for both the H.264 and H.265 encoders. In fact, I get a true value (for both of the aforementioned encoders) even if I don't specify kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder when creating the compression session (note that this flag was introduced in iOS 17.4). So the question is: are those encoders actually hardware accelerated on my device, and if so, why isn't that reflected in the VTCopyVideoEncoderList() call?
3 replies · 1 boost · 707 views · May ’24
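A small sketch for the question above that dumps what VTCopyVideoEncoderList reports for each encoder, so the presence or absence of kVTVideoEncoderList_IsHardwareAccelerated can be compared across codec types, devices, and OS versions; it only makes the reported dictionaries visible and does not change any behavior.

    import Foundation
    import VideoToolbox

    // Sketch: list every Video Toolbox encoder and report whether the
    // kVTVideoEncoderList_IsHardwareAccelerated entry is present and true.
    func dumpEncoderList() {
        var listOut: CFArray?
        let status = VTCopyVideoEncoderList(nil, &listOut)
        guard status == noErr, let encoders = listOut as? [[String: Any]] else {
            print("VTCopyVideoEncoderList failed: \(status)")
            return
        }
        for encoder in encoders {
            let name = encoder[kVTVideoEncoderList_EncoderName as String] as? String ?? "unknown"
            let codec = encoder[kVTVideoEncoderList_CodecName as String] as? String ?? "unknown"
            // The key may be absent entirely; absence is not the same as an explicit false.
            let hardwareFlag = encoder[kVTVideoEncoderList_IsHardwareAccelerated as String] as? Bool
            print("\(name) [\(codec)] hardwareAccelerated: \(hardwareFlag.map { "\($0)" } ?? "not reported")")
        }
    }

Comparing this output with the value of kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder on a live session (as the question already does) should show whether the list entry is simply not populated on that OS version.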
FCPXML Creation issue...
I have generated this FCPXML, but I can't figure out the issue:

    <?xml version="1.0"?>
    <fcpxml version="1.11">
        <resources>
            <format id="r1" name="FFVideoFormat3840x2160p2997" frameDuration="1001/30000s" width="3840" height="2160" colorSpace="1-1-1 (Rec. 709)"/>
            <asset id="video0" name="11a(1-5).mp4" start="0s" hasVideo="1" videoSources="1" duration="6.81s">
                <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/11a(1-5).mp4"/>
            </asset>
            <asset id="video1" name="12(4)r8 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="9.94s">
                <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/12(4)r8 mute.mp4"/>
            </asset>
            <asset id="video2" name="13 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="6.51s">
                <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13 mute.mp4"/>
            </asset>
            <asset id="video3" name="13x (8,14,24,29,38).mp4" start="0s" hasVideo="1" videoSources="1" duration="45.55s">
                <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13x (8,14,24,29,38).mp4"/>
            </asset>
        </resources>
        <library>
            <event name="Untitled">
                <project name="Untitled Project" uid="28B2D4F3-05C4-44E7-8D0B-70A326135EDD" modDate="2024-04-17 15:44:26 -0400">
                    <sequence format="r1" duration="4802798/30000s" tcStart="0s" tcFormat="NDF" audioLayout="stereo" audioRate="48k">
                        <spine>
                            <asset-clip ref="video0" offset="0/10000s" name="11a(1-5).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
                            <asset-clip ref="video1" offset="12119/10000s" name="12(4)r8 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
                            <asset-clip ref="video2" offset="22784/10000s" name="13 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
                            <asset-clip ref="video3" offset="34544/10000s" name="13x (8,14,24,29,38).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
                        </spine>
                    </sequence>
                </project>
            </event>
        </library>
    </fcpxml>

Any ideas?
2 replies · 0 boosts · 506 views · May ’24
iOS to Android H.264 encoding issue
I'm trying to cast the screen from an iOS device to an Android device. I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression. While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android. Data transmission over the TCP socket seems to be functioning correctly. My question is: is there a way to ensure a common container format for H.264 compression and decompression across the iOS and Android platforms? Here's a breakdown of the iOS sender details:
- Device: iPhone 13 mini running iOS 17
- Development environment: Xcode 15 with a minimum deployment target of iOS 16
- Screen capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
- Video compression: VideoToolbox for H.264 compression
- Compression properties:
  - kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
  - kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
  - kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
  - kVTCompressionPropertyKey_RealTime: true (real-time encoding)
  - kVTCompressionPropertyKey_Quality: 1 (lowest quality)
- NAL unit handling: a custom header is added to NAL units
Android receiver details:
- Device: RedMi 7A running Android 10
- Video decoding: MediaCodec API for receiving and decoding the H.264 stream
0 replies · 0 boosts · 646 views · May ’24
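For the iOS-to-Android question above: VideoToolbox emits AVCC-style sample buffers (length-prefixed NAL units, with SPS/PPS kept in the format description), while many Android MediaCodec setups expect an Annex B elementary stream (0x00000001 start codes with SPS/PPS sent in-band). A hedged sketch of the length-prefix-to-start-code rewrite is below; it assumes a 4-byte NAL length field and deliberately leaves out extracting SPS/PPS from the CMVideoFormatDescription, which also needs to reach the receiver before the first keyframe.

    import CoreMedia
    import Foundation

    // Sketch: convert the AVCC payload of an encoded CMSampleBuffer
    // (length-prefixed NAL units) into an Annex B byte stream.
    func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
        let totalLength = CMBlockBufferGetDataLength(blockBuffer)
        guard totalLength > 0 else { return nil }

        var avcc = Data(count: totalLength)
        let copyStatus = avcc.withUnsafeMutableBytes { raw -> OSStatus in
            CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: totalLength,
                                       destination: raw.baseAddress!)
        }
        guard copyStatus == kCMBlockBufferNoErr else { return nil }

        let startCode: [UInt8] = [0, 0, 0, 1]
        var annexB = Data()
        var offset = 0
        while offset + 4 <= avcc.count {
            // Assumed 4-byte big-endian NAL unit length (the VideoToolbox default).
            let nalLength = avcc.subdata(in: offset ..< offset + 4).reduce(0) { ($0 << 8) | Int($1) }
            offset += 4
            guard offset + nalLength <= avcc.count else { return nil }
            annexB.append(contentsOf: startCode)
            annexB.append(avcc.subdata(in: offset ..< offset + nalLength))
            offset += nalLength
        }
        return annexB
    }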
How can I cache only the initial few seconds (chunks) of an HLS stream on the disk?
Hi, I am working on an app that is very similar to TikTok in terms of video experience. There is an infinite scroll feed of videos, and I am using HLS URLs as the video source. My requirement is to cache the initial few seconds of each video on the disk while the video is playing. The next time a user views the video, it should play the initial few seconds from the cache, with the subsequent chunks coming from the network. Additionally, when there is no network connection, the video should still play the initial few seconds from the cache. I was able to achieve this with MP4 using AVAssetResourceLoaderDelegate, but the same approach is not possible with HLS. What are some other ways through which I can implement this feature? Thanks.
3 replies · 0 boosts · 604 views · May ’24
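One direction sometimes evaluated for the HLS question above is AVFoundation's managed HLS download support (AVAssetDownloadURLSession), which persists segments to disk; whether a deliberately short or cancelled download of just the leading seconds satisfies the "first chunks only, then network" requirement is an assumption that would need on-device testing, and the bitrate option value below is illustrative.

    import AVFoundation

    // Sketch: start a managed HLS download so the leading segments end up on disk;
    // delegate callbacks report the on-disk (.movpkg) location for later playback.
    final class HLSPrefetcher: NSObject, AVAssetDownloadDelegate {
        private var session: AVAssetDownloadURLSession!

        override init() {
            super.init()
            let configuration = URLSessionConfiguration.background(withIdentifier: "hls-prefetch")
            session = AVAssetDownloadURLSession(configuration: configuration,
                                                assetDownloadDelegate: self,
                                                delegateQueue: .main)
        }

        func prefetch(url: URL, title: String) {
            let asset = AVURLAsset(url: url)
            let task = session.makeAssetDownloadTask(
                asset: asset,
                assetTitle: title,
                assetArtworkData: nil,
                options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: 265_000])   // illustrative floor
            task?.resume()
            // A caller could cancel the task after a few seconds to stop after the
            // leading segments; whether the partial asset then plays offline as
            // required is exactly what would need verifying.
        }

        func urlSession(_ session: URLSession, assetDownloadTask: AVAssetDownloadTask,
                        didFinishDownloadingTo location: URL) {
            print("Downloaded to \(location)")   // persist this relative path for playback
        }
    }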
On Sonoma 14.5, after upgrading CMIO CameraExtension, daemon is not running
I made a CameraExtension and installed it with OSSystemExtensionRequest, and I got the success callback. I uninstalled the old version of my CameraExtension and installed the new version. The "systemextensionsctl list" command shows "[activated enabled]" for my new version, but no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new on macOS Sonoma 14.5; I did not see it on 14.4.x.
0 replies · 0 boosts · 518 views · May ’24
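For the upgrade question above, a minimal sketch of the activation/replacement flow: the delegate's replacement decision is where an old extension build is swapped for the new one, and logging the finish result (for example .willCompleteAfterReboot) shows whether the system itself reports that a reboot is pending. The extension identifier is a placeholder.

    import Foundation
    import SystemExtensions

    // Sketch: request activation of a (new) camera extension build and log
    // whether the system says completion requires a reboot.
    final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
        func activate() {
            let request = OSSystemExtensionRequest.activationRequest(
                forExtensionWithIdentifier: "com.example.app.cameraextension",   // placeholder bundle ID
                queue: .main)
            request.delegate = self
            OSSystemExtensionManager.shared.submitRequest(request)
        }

        func request(_ request: OSSystemExtensionRequest,
                     actionForReplacingExtension existing: OSSystemExtensionProperties,
                     withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
            // Upgrading in place: replace the already-installed version.
            return .replace
        }

        func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
            print("Waiting for user approval in System Settings")
        }

        func request(_ request: OSSystemExtensionRequest,
                     didFinishWithResult result: OSSystemExtensionRequest.Result) {
            // .completed vs. .willCompleteAfterReboot distinguishes whether the
            // daemon should already be running or only after a restart.
            print("Activation finished: \(result)")
        }

        func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
            print("Activation failed: \(error)")
        }
    }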
Taking a screenshot that includes AVCaptureVideoPreviewLayer
I have an app that displays overlays on top of an AVCaptureVideoPreviewLayer (basically AR without ARKit), and my users have repeatedly requested a button that will allow them to capture a screenshot of both the video and the overlays and surrounding UI with a single tap. However, I cannot find a way to actually take such a screenshot. I have tried the usual methods of rendering views to images, such as calling drawViewHierarchyInRect on my top level view or calling renderInContext on the same view's layer. These all work perfectly to capture the overlays and the surrounding UI elements, but there is nothing but black where the video preview's contents should be. snapshotViewAfterScreenUpdates: does capture exactly what I want, but snapshot views cannot be written to an image. From what I understand that's an intentional security decision by Apple. I've considered using ReplayKit to take a very short screen recording and then using an AVAssetImageGenerator to grab a frame from that video, but I don't think that's how those APIs were intended to be used and it's an additional permission to request from the user. I would really rather not do this if there is any alternative (and I'm not even sure it would work). Is there any reasonable method to render a view hierarchy to an image in such a way as to capture the contents of any video preview layers found within that hierarchy?
0 replies · 0 boosts · 355 views · Jun ’24
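A hedged sketch for the screenshot question above: since the preview layer's contents never reach a view-hierarchy snapshot, a common workaround is to keep the most recent camera frame from an AVCaptureVideoDataOutput, draw that frame first, and then draw the overlay hierarchy on top. Names such as FrameKeeper and latestFrame are illustrative, thread synchronization is omitted, and the capture session is assumed to already have a video data output attached.

    import AVFoundation
    import UIKit

    // Sketch: keep the latest camera frame, then composite it with the overlay UI.
    final class FrameKeeper: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        private(set) var latestFrame: CIImage?

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                latestFrame = CIImage(cvPixelBuffer: pixelBuffer)
            }
        }
    }

    func screenshot(overlayView: UIView, frameKeeper: FrameKeeper) -> UIImage? {
        guard let frame = frameKeeper.latestFrame else { return nil }
        let context = CIContext()
        guard let cgFrame = context.createCGImage(frame, from: frame.extent) else { return nil }

        let renderer = UIGraphicsImageRenderer(bounds: overlayView.bounds)
        return renderer.image { _ in
            // Draw the camera frame first (aspect-fit handling omitted for brevity)...
            UIImage(cgImage: cgFrame).draw(in: overlayView.bounds)
            // ...then the overlays and surrounding UI. The preview layer itself still
            // renders black here, but the drawn frame stands in for its contents.
            _ = overlayView.drawHierarchy(in: overlayView.bounds, afterScreenUpdates: false)
        }
    }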
Add 30 frames per second with AVAssetWriter
Hello, I have converted UIImage to CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to add 30 CVPixelBuffers per second, because to work on social media the video must be 30 frames per second. The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error. The error is something like "Operation cannot be completed". Give me an example of a loop to add 30 CVPixelBuffers per second to a video that is currently being written. Example:

    while true {
        if videoInput.isReadyForMoreMediaData { break }
        if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
            adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
        }
    }

I await your response.
0 replies · 0 boosts · 499 views · Jun ’24
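A hedged sketch for the request above, assuming a writer, input, and adaptor configured elsewhere: AVAssetWriterInput's requestMediaDataWhenReady(on:using:) pulls frames only when the writer can accept them, and the presentation time advances for every appended frame (frameIndex/30), including repeats of the same CVPixelBuffer — one thing this changes relative to the loop in the question, which reuses a constant 1/30 timestamp.

    import AVFoundation

    // Sketch: append frames at 30 fps, reusing the same CVPixelBuffer when a
    // frame should stay on screen longer than 1/30 s.
    func appendFrames(videoInput: AVAssetWriterInput,
                      adaptor: AVAssetWriterInputPixelBufferAdaptor,
                      frames: [(buffer: CVPixelBuffer, repeatCount: Int)],
                      completion: @escaping () -> Void) {
        // Assumes assetWriter.startWriting() and startSession(atSourceTime: .zero)
        // have already been called.
        let queue = DispatchQueue(label: "video.append")
        var frameIndex: Int64 = 0
        var remaining = frames[...]
        var appendedForCurrent = 0

        videoInput.requestMediaDataWhenReady(on: queue) {
            while videoInput.isReadyForMoreMediaData {
                guard let current = remaining.first else {
                    videoInput.markAsFinished()
                    completion()   // caller can now call assetWriter.finishWriting
                    return
                }
                // Every appended frame gets its own, strictly increasing timestamp,
                // even when the same CVPixelBuffer is appended repeatedly.
                let time = CMTime(value: frameIndex, timescale: 30)
                guard adaptor.append(current.buffer, withPresentationTime: time) else {
                    print("append failed at frame \(frameIndex)")   // inspect assetWriter.error
                    videoInput.markAsFinished()
                    completion()
                    return
                }
                frameIndex += 1
                appendedForCurrent += 1
                if appendedForCurrent >= max(current.repeatCount, 1) {
                    remaining = remaining.dropFirst()
                    appendedForCurrent = 0
                }
            }
            // When more frames remain, simply return; the block is called again
            // the next time the input can accept data.
        }
    }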
How to set up SharePlay reliably?
Hi, I'm developing an app that uses SharePlay. Specifically, I'm using ShareLink in my SwiftUI-based app so that when two devices come close, SharePlay starts via AirDrop, just like how NameDrop works (the animation is super cool, btw). However, I've noticed that SharePlay doesn't start reliably under the following conditions:
- Do both devices need to be signed in with different Apple IDs? I wish it worked with the same Apple ID.
- When both devices are running my app, the sharing does not seem to start; maybe both of them are trying to be the host app?
- When I try to demo this NameDrop-like transaction via Zoom, it usually doesn't work; maybe because the cable is connected to the Lightning port?
- Does a Mac app (in my case, Zoom or even QuickTime) capturing the device's screen make it less likely for the SharePlay transaction to succeed?
Thanks!
0 replies · 0 boosts · 330 views · Jun ’24
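A minimal GroupActivity sketch for the SharePlay question above; the activity type and names are placeholders, and the reliability questions (same Apple ID, screen capture interference) are not something code alone settles. The parts Apple's samples rely on are: both devices listening on the async sessions() stream regardless of which one initiates, and only offering the ShareLink while GroupStateObserver reports eligibility.

    import GroupActivities

    // Placeholder activity describing what the two devices will share.
    struct WatchTogetherActivity: GroupActivity {
        static let activityIdentifier = "com.example.app.watch-together"   // placeholder

        var metadata: GroupActivityMetadata {
            var metadata = GroupActivityMetadata()
            metadata.title = "Watch Together"
            metadata.type = .watchTogether
            return metadata
        }
    }

    // Both devices should listen for incoming sessions, regardless of which one
    // ends up initiating the share (this avoids the "both want to be host" case).
    func observeSessions() async {
        for await session in WatchTogetherActivity.sessions() {
            // Configure app state for the session first, then join.
            session.join()
        }
    }

    // Only offer sharing when the device is currently eligible for SharePlay.
    let observer = GroupStateObserver()
    // e.g. ShareLink(item: WatchTogetherActivity(), preview: SharePreview("Watch Together"))
    //          .disabled(!observer.isEligibleForGroupSession)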
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
18 replies · 0 boosts · 1.4k views · Jun ’24
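For the question above about archiving highly similar PNGs: video codecs exposed by AVFoundation are lossy for this purpose, but the same "frames are similar" property can be exploited losslessly by storing the first image raw and each subsequent image as a byte-wise delta against the previous one, then running a standard lossless compressor over the deltas — near-identical frames produce long runs of zero bytes that compress extremely well. This is a generic alternative technique, not something the poster tried; it assumes all images share the same dimensions and RGBA layout (decoded pixel buffers, not the PNG files themselves).

    import Foundation

    // Sketch: delta-encode equally sized RGBA byte buffers, then compress losslessly.
    // Reconstruction is exact: frame[n] = frame[n-1] XOR delta[n].
    func encodeFrames(_ frames: [Data]) throws -> [Data] {
        var encoded: [Data] = []
        var previous: Data?
        for frame in frames {
            var payload = frame
            if let prev = previous, prev.count == frame.count {
                // XOR against the previous frame; identical pixels become zero bytes.
                payload = Data(zip(frame, prev).map { $0 ^ $1 })
            }
            encoded.append(try (payload as NSData).compressed(using: .lzfse) as Data)
            previous = frame
        }
        return encoded
    }

    func decodeFrames(_ encoded: [Data]) throws -> [Data] {
        var frames: [Data] = []
        var previous: Data?
        for blob in encoded {
            var payload = try (blob as NSData).decompressed(using: .lzfse) as Data
            if let prev = previous, prev.count == payload.count {
                payload = Data(zip(payload, prev).map { $0 ^ $1 })
            }
            frames.append(payload)
            previous = payload
        }
        return frames
    }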
AirDrop Requesting to Open Video via 3rd Party App Instead of Photo Library
I film and edit content using my iPhone and Premiere Pro. I've been doing this since 2017, so I'd like to say I know my way around an iPhone and the Premiere Pro software well enough to get a finished product to my client. Before I send content to clients, I AirDrop it to my phone to make sure the quality is up to par. On this most recent project, I've had issues AirDropping it to myself. Each time I do so, my iPhone asks which third-party app I would like to open the video in, rather than automatically opening or saving the video into my photo library. I will list the specs below:
- Filmed on iPhone 15 Plus
- iOS version 17.5.1 (at the moment this is the most up-to-date software update)
- Filmed in 4K at 60 fps
- I have ample storage space on my phone, and the video file size is 220 MB
- Premiere Pro export settings: video settings H.264, field order: progressive, bit rate encoding: CBR
I will say that I purchased a transition and burns bundle and used it for the first time on this project. All materials used are in .mp4 format and the blend mode was set to overlay. Nothing out of pocket. I figured it wouldn't be a problem since my client would just be downloading it via Dropbox, but there was an issue there as well. My client received an error message saying, "Sorry, this type of video cannot be saved to this device". The workaround was to take the saved video, plug it into a separate Premiere Pro project, and export it with the preset Match Source - Adaptive High Bitrate. I was able to AirDrop that to myself, it saved in my photo album, and my client was able to download it without receiving that error message. If there is an explanation as to why I am having this issue and how I can avoid it, I would really appreciate it, as I have never had this problem in the past.
0 replies · 0 boosts · 397 views · Jun ’24
ProRes 4444 blocky compression artifacts
I'm creating an Objective-C command-line utility to encode RAW image sequences to ProRes 4444, but I'm encountering blocky compression artifacts in the ProRes 4444 video output. To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding, and the PNG looks perfect; I can see all the detail in every part of the image's dynamic range. Here's a comparison between the 16-bit PNG (on the right) and the ProRes 4444 output (on the left). As a further test, I re-encoded the test PNG to ProRes 4444 using DaVinci Resolve, and the ProRes 4444 output video from Resolve doesn't have any blocky compression artifacts; it looks identical. In short, this is what the utility does:
- Unpacks the 12-bit raw data into 16-bit values.
- Debayers the unpacked raw data into a standard color image format (BGR) using OpenCV.
- Scales the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. (Up to this point everything is fine, confirmed by saving 16-bit PNGs.)
- Encodes the images to ProRes 4444 using the AVFoundation framework. The pixel buffers are created and managed using the dictionary method with 'kCVPixelFormatType_64RGBALE'.
I need help figuring this out; I'm a real novice when it comes to AVFoundation/encoding to ProRes. See the relevant parts of my 'encodeToProRes' function:

    void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) {
        NSError *error = nil;
        NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]];
        AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error];
        if (error) {
            std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl;
            return;
        }

        // Load the first image to get the dimensions
        std::cout << "Debayering the first image to get dimensions..." << std::endl;
        Mat firstImage;
        int width = 5320;
        int height = 3900;
        if (!debayer_image(rawPaths[0], firstImage, width, height)) {
            std::cerr << "Error debayering the first image" << std::endl;
            return;
        }
        width = firstImage.cols;
        height = firstImage.rows;

        // Save the first frame as a PNG 16-bit image for validation
        std::string pngFilePath = outputPath + "_frame1.png";
        if (!imwrite(pngFilePath, firstImage)) {
            std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl;
        } else {
            std::cout << "First frame saved as PNG: " << pngFilePath << std::endl;
        }

        NSString *codecKey = nil;
        if (proResFlavor == "4444") {
            codecKey = AVVideoCodecTypeAppleProRes4444;
        } else if (proResFlavor == "422HQ") {
            codecKey = AVVideoCodecTypeAppleProRes422HQ;
        } else if (proResFlavor == "422") {
            codecKey = AVVideoCodecTypeAppleProRes422;
        } else if (proResFlavor == "LT") {
            codecKey = AVVideoCodecTypeAppleProRes422LT;
        } else {
            std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl;
            return;
        }

        NSDictionary *outputSettings = @{
            AVVideoCodecKey: codecKey,
            AVVideoWidthKey: @(width),
            AVVideoHeightKey: @(height)
        };
        AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
        videoInput.expectsMediaDataInRealTime = YES;

        NSDictionary *pixelBufferAttributes = @{
            (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE),
            (id)kCVPixelBufferWidthKey: @(width),
            (id)kCVPixelBufferHeightKey: @(height)
        };
        AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes];
        ...
        [assetWriter startSessionAtSourceTime:kCMTimeZero];
        CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps
        int numFrames = static_cast<int>(rawPaths.size());
        ...
        // Encoding thread
        std::thread encoderThread([&]() {
            int frameIndex = 0;
            std::vector<CVPixelBufferRef> pixelBufferBuffer;
            while (frameIndex < numFrames) {
                std::unique_lock<std::mutex> lock(queueMutex);
                queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; });
                if (!frameQueue.empty()) {
                    auto [index, debayeredImage] = frameQueue.front();
                    frameQueue.pop();
                    lock.unlock();
                    if (index == frameIndex) {
                        cv::Mat rgbaImage;
                        cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA);
                        CVPixelBufferRef pixelBuffer = NULL;
                        CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer);
                        if (result != kCVReturnSuccess) {
                            std::cerr << "Error: Could not create pixel buffer" << std::endl;
                            dispatch_group_leave(dispatchGroup);
                            return;
                        }
                        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                        void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
                        for (int row = 0; row < height; ++row) {
                            memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer),
                                   rgbaImage.ptr(row),
                                   width * 8);
                        }
                        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                        pixelBufferBuffer.push_back(pixelBuffer);
                        ...

Thanks very much!
1 reply · 0 boosts · 661 views · Jun ’24
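Not a diagnosis of the artifacts above — whether the row copy, a color/range mismatch, or something else is responsible is unknown — but one thing worth ruling out is stride: both the cv::Mat and the CVPixelBuffer can have padded rows, so copying exactly width * 8 bytes from rgbaImage.ptr(row) is only safe when both sides are tightly packed. A Swift sketch of a stride-aware copy into a 64-bit RGBA pixel buffer (names are illustrative):

    import CoreVideo
    import Foundation

    // Sketch: copy one row at a time while respecting both the source's and the
    // destination's own bytes-per-row (stride) values.
    func fill(pixelBuffer: CVPixelBuffer,
              from source: UnsafeRawPointer,
              sourceBytesPerRow: Int,
              width: Int,
              height: Int) {
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        guard let destination = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        let destinationBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let payloadBytesPerRow = width * 8   // 4 channels x 16 bits for kCVPixelFormatType_64RGBALE

        for row in 0..<height {
            memcpy(destination.advanced(by: row * destinationBytesPerRow),
                   source.advanced(by: row * sourceBytesPerRow),   // honor the source stride, not width * 8
                   min(payloadBytesPerRow, destinationBytesPerRow))
        }
    }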
WWDC Lab feedback
I am writing to follow up on my lab at WWDC24. I had a 1:1 lab with Mr. Kavin; we had a good 30-minute lab, and for follow-up questions Kavin asked me to post them using feedback. Here is my question: we have screen share in our application and are trying to use CFMessagePort for passing a CVPixelBufferRef from the broadcast extension to the application. Questions:
1. How can I copy the planes of an IOSurface-backed CVPixelBufferRef onto another one without using memcpy? Is there a zero-copy method?
2. How can I get notified when the data of an IOSurface-backed CVPixelBufferRef is changed by another process?
3. How can I send an IOSurface-backed CVPixelBufferRef from the broadcast extension to the application?
4. How can I pass an unowned IOSurfaceRef from the broadcast extension to the application?
0 replies · 1 boost · 378 views · Jun ’24
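Touching questions 1 and 3 above (not the change-notification question): an IOSurface-backed CVPixelBuffer can be shared by handing over a reference to its IOSurface rather than copying plane bytes; the receiving side wraps the same surface in a new CVPixelBuffer, which is zero-copy. How the surface reference itself crosses the extension/app boundary (XPC, a mach port, etc.) is deliberately left out, and the Unmanaged handling mirrors how these CoreVideo calls are imported into Swift.

    import CoreVideo
    import IOSurface

    // Sketch: extract the IOSurface behind a pixel buffer, and wrap an IOSurface
    // received from elsewhere in a new CVPixelBuffer without copying pixel data.
    func ioSurface(of pixelBuffer: CVPixelBuffer) -> IOSurfaceRef? {
        // Non-nil only when the buffer was allocated as IOSurface-backed
        // (e.g. with kCVPixelBufferIOSurfacePropertiesKey in its attributes).
        return CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue()
    }

    func makePixelBuffer(wrapping surface: IOSurfaceRef) -> CVPixelBuffer? {
        var unmanagedBuffer: Unmanaged<CVPixelBuffer>?
        let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &unmanagedBuffer)
        guard status == kCVReturnSuccess else { return nil }
        return unmanagedBuffer?.takeRetainedValue()
    }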