Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation


FxPlug4.3 & Window
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using NSWindow, you cannot set [Window setLevel:NSFloatingWindowLevel].
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?
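For reference, the standard AppKit way to float a panel above other windows is sketched below; whether this level survives inside an FxPlug-hosted, out-of-process context is exactly what this post is asking, so treat it as the usual approach rather than a confirmed fix:

```swift
import AppKit

// A minimal sketch of a floating NSPanel. The nonactivating style keeps key
// focus with the host app, which matters when floating over Final Cut Pro.
let panel = NSPanel(
    contentRect: NSRect(x: 100, y: 100, width: 400, height: 300),
    styleMask: [.titled, .closable, .nonactivatingPanel],
    backing: .buffered,
    defer: false)
panel.level = .floating                                // floating window level
panel.setFrameOrigin(NSPoint(x: 200, y: 200))          // origin is set via setFrameOrigin(_:)
panel.orderFrontRegardless()                           // show without activating the plug-in
```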
1 reply · 0 boosts · 246 views · Aug ’24
Metal Performance Shader color issue with yCbCr buffer
I'm making an app that reads a ProRes file, processes each frame through Metal to resize and scale it, then outputs a new ProRes file. In the future the app will support other codecs, but for now just ProRes. I'm reading the ProRes 422 buffers in the kCVPixelFormatType_422YpCbCr16 pixel format, which is what Apple recommends in this video: https://developer.apple.com/wwdc20/10090?time=599.

When the MTLTexture is run through a Metal Performance Shader, the color space seems to be forced to RGB, or yCbCr textures are simply not supported: the output is all green/purple. If you look at the render code, you will see a commented-out block that just blit-copies the outputTexture; if you perform the copy instead of scaling through MPS, the output color space is fine. So the issue appears to come from Metal Performance Shaders.

Side note: I noticed that when using this format, the YpCbCr texture comes in as a single plane. I thought it was preferred to handle this as two separate planes? That said, two separate planes would make my app more complicated, as I would need to scale both planes or merge them to RGB, and I'm going for the most performance possible.

A sample project can be found here: https://www.dropbox.com/scl/fo/jsfwh9euc2ns2o3bbmyhn/AIomDYRhxCPVaWw9XH-qaN0?rlkey=sp8g0sb86af1u44p3xy9qa3b9&dl=0

Inside the supporting files there is a test movie. For ease, move it somewhere easily accessible (e.g. the Desktop), then:
1. Load and run the example project.
2. Click 'Select Video'.
3. Select the video you placed on your desktop.
4. It will output a new video next to the selected one, named "Output.mov".

The new video should just be scaled at 50%, but the color space is all wrong. Below is a photo of before and after the Metal Performance Shader.
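For context, a minimal sketch of the MPS scaling step being described, using MPSImageBilinearScale; the function and texture names are assumptions for illustration. MPS image kernels generally assume RGB-like channel semantics, which is consistent with green/purple output when they are fed a single-plane 422YpCbCr16 texture:

```swift
import Metal
import MetalPerformanceShaders

// Scale sourceTexture to half size into destTexture (names hypothetical).
// Note: the scaleTransform pointer must stay valid until encode returns,
// hence the withUnsafePointer scope around the encode call.
func scaleToHalf(device: MTLDevice,
                 commandBuffer: MTLCommandBuffer,
                 sourceTexture: MTLTexture,
                 destTexture: MTLTexture) {
    let scaleKernel = MPSImageBilinearScale(device: device)
    var transform = MPSScaleTransform(scaleX: 0.5, scaleY: 0.5,
                                      translateX: 0, translateY: 0)
    withUnsafePointer(to: &transform) { ptr in
        scaleKernel.scaleTransform = ptr
        scaleKernel.encode(commandBuffer: commandBuffer,
                           sourceTexture: sourceTexture,
                           destinationTexture: destTexture)
    }
}
```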
3 replies · 0 boosts · 451 views · Aug ’24
Write Permission to Camera Feed?
I am working on a project for a university that wants to alter the passthrough camera feed more than the standard filters (saturation/contrast/etc.) that some headsets provide. I don't have access to the headset or the Enterprise SDK yet, as I'd like to nail down whether this is feasible before we purchase the hardware.

In the API I see I can use CameraFrameProvider to access a CameraFrame and then grab a sample. The sample has a CVPixelBuffer. I have two questions regarding the pixel buffer (see the sketch below):
1. I see that the buffer itself is read-only, but can I alter the bytes within this pixel buffer? Let's say change all green pixels to red (not my actual use case, just an example).
2. Will the updated pixel buffer then be used in the passthrough screen?

If not, is there any way to have control over the video feed that is displayed as passthrough? Our ideal setup would be to access a frame, alter it however we want, and then have the frame displayed in passthrough. I realize I could take the feed, copy it into a floating window, and alter that, but that breaks the immersion we are shooting to create here. Thanks in advance!
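For orientation, here is roughly how the frame access described above looks with the visionOS Enterprise APIs; the format selection and the .left sample are assumptions taken from Apple's documented usage, and the sketch only demonstrates reading the buffer, since nothing in the public API surface suggests frames can be written back into passthrough:

```swift
import ARKit

// Sketch: read main-camera frames via CameraFrameProvider (Enterprise API).
// Requires the main-camera-access entitlement and an open immersive space.
func readFrames() async throws {
    let provider = CameraFrameProvider()
    let session = ARKitSession()
    try await session.run([provider])

    let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                          cameraPositions: [.left])
    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }

    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        let pixelBuffer = sample.pixelBuffer   // read-only source frame
        // Any per-pixel edits would have to go into your own copy of this
        // buffer, rendered into your own content, not back into passthrough.
        _ = pixelBuffer
    }
}
```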
1 reply · 0 boosts · 333 views · Aug ’24
reasonForWaitingToPlay: AVPlayerWaitingToMinimizeStallsReason
Hello,

We are currently developing a mobile game using Unreal Engine 5, and we have encountered an issue where a specific video (mp4 format) stops displaying at a particular frame during playback within the game. The code within Unreal fails at the following point, causing the issue:

```objc
CMTime OutputItemTime = [Output itemTimeForHostTime:CACurrentMediaTime()];
if (![Output hasNewPixelBufferForItemTime:OutputItemTime])
{
    return;
}
```

We have referred to the following Apple documentation: AVPlayerTimeControlStatus, reasonForWaitingToPlay. Upon logging, we observed the following:

```
[2024.08.13-05.18.35:266][429]LogTemp: AMP PlayerItem.status AVPlayerItemStatusReadyToPlay
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.timeControlStatus AVPlayerTimeControlStatusWaitingToPlayAtSpecifiedRate
[2024.08.13-05.18.35:266][429]LogTemp: AMP reasonForWaitingToPlay: AVPlayerWaitingToMinimizeStallsReason
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.rate 1.000000
[2024.08.13-05.18.35:268][430]LogTemp: avf CurrentMediaTime : 455097.836833
[2024.08.13-05.18.35:268][430]LogTemp: avf OutputItemTime: 3.868346
[2024.08.13-05.18.35:268][430]LogTemp: avf Sampler::Tick() fail hasNewPixelBufferForItemTime OutputItemTime: 3.868346
```

This issue consistently occurs with videos that have the following specifications:
- Codec: H.264
- Resolution: 1080x608
- Bitrate: 7,922,135 bits/sec
- Duration: 90.17 seconds
- Frame Rate: 30.0 fps
- Pixel Format: yuv420p
- Profile: Main

We would like to inquire about the possible reasons for the playback failure and the recommended MP4 specifications for seamless playback on Apple devices. Specifically, we need guidance on recommended resolution, FPS, profile, level, and bitrate limits. Your assistance would be greatly appreciated.
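For readers following along outside Unreal, the failing code is the standard AVPlayerItemVideoOutput polling pattern; a minimal Swift equivalent (names assumed) looks like this, returning nil on exactly the branch the Unreal code hits on every tick:

```swift
import AVFoundation
import QuartzCore

// Poll an AVPlayerItemVideoOutput once per display tick (e.g., from a
// CADisplayLink). Returns nil when no new frame is ready for the item time.
func copyLatestFrame(from output: AVPlayerItemVideoOutput) -> CVPixelBuffer? {
    let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
    guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
    return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
}
```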
0 replies · 0 boosts · 240 views · Aug ’24
Sending '$0' risks causing data races
I've had no luck compiling a sample code project provided by Apple with Xcode 16.0 beta 5: the ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos). The part that is failing is:

```swift
streamOutput.capturedFrameHandler = { continuation.yield($0) }
```

And the error message is:

```
Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses
```

Please enlighten me as to why this is an issue and how to avoid it. Thanks in advance!
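One common resolution, sketched under the assumption that the sample's frame type can be made Sendable: continuation.yield sends its argument across isolation domains, so the element type must conform to Sendable. The type and class names below are stand-ins modeled on the sample, not its exact code:

```swift
import CoreGraphics

// Stand-in for the sample's frame type; conforming to Sendable (possible
// when every stored property is itself Sendable) removes the
// "Sending '$0' risks causing data races" diagnostic.
struct CapturedFrame: Sendable {
    var contentRect: CGRect
    var contentScale: CGFloat
}

// Stand-in for the sample's stream-output object.
final class CaptureEngineStreamOutput {
    var capturedFrameHandler: ((CapturedFrame) -> Void)?
}

func frames(from streamOutput: CaptureEngineStreamOutput) -> AsyncStream<CapturedFrame> {
    AsyncStream { continuation in
        streamOutput.capturedFrameHandler = { frame in
            continuation.yield(frame)   // OK once CapturedFrame is Sendable
        }
    }
}
```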
3 replies · 2 boosts · 1.2k views · Aug ’24
Can't share Video to Facebook
I have the Facebook SDK version 17.0.2 and Xcode 15. Sharing photos and links works fine, but when I try sharing videos, I get the following error:

```
Failed to log access with error: access=<PATCCAccess 0x301d12b20> accessor:<<PAApplication 0x301d27e30 identifierType:auditToken identifier:{pid:18440, version:47210}>> identifier:A9159DCD-76B1-4C77-A01E-DA611929B50B kind:intervalEvent timestampAdjustment:0 visibilityState:0 assetIdentifierCount:0 accessCount:0 tccService:kTCCServicePhotos, error=Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 15679 named com.apple.privacyaccountingd" UserInfo={NSDebugDescription=connection to service with pid 15679 named com.apple.privacyaccountingd}
```
1 reply · 0 boosts · 360 views · Aug ’24
FxPlug4.3_NSPanel_setLevel
```objc
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // has no effect????
[panel makeKeyAndOrderFront:self];
```

Question: In FxPlug 4.3, setLevel cannot place the panel in front of Final Cut Pro and Motion? Help! I haven't found an answer anywhere!
1 reply · 0 boosts · 291 views · Aug ’24
FxPlug4.3_FxRemoteWindowAPI_window.frame.origin?
1. In the FxRemoteWindowAPI protocol in FxPlug 4.3, there is no way to set window.frame.origin.
2. If I create a custom NSWindow, [Window setLevel:NSFloatingWindowLevel] also does not take effect in FxPlug.
3. How can I keep the window in front of Final Cut Pro without affecting the operation of Final Cut Pro?
1 reply · 0 boosts · 297 views · Aug ’24
Swift 6 AVAssetImageGenerator generateCGImagesAsynchronously
I have the following piece of code that works in Swift 5:

```swift
func test() {
    let url = Bundle.main.url(forResource: "movie", withExtension: "mov")
    let videoAsset = AVURLAsset(url: url!)
    let t1 = CMTime(value: 1, timescale: 1)
    let t2 = CMTime(value: 4, timescale: 1)
    let t3 = CMTime(value: 8, timescale: 1)
    let timesArray = [NSValue(time: t1), NSValue(time: t2), NSValue(time: t3)]
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    generator.generateCGImagesAsynchronously(forTimes: timesArray) { requestedTime, image, actualTime, result, error in
        let img = UIImage(cgImage: image!)
    }
}
```

When I compile and run it in Swift 6, it gives an EXC_BREAKPOINT (code=1, subcode=0x1021c7478). I understand that Swift 6 adopts strict concurrency. My question is: if I start porting my code, what is the recommended way to change the above code?

Rgds,
James
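One possible port, sketched under the assumption that iOS 16 or later is available: the async image(at:) API replaces the completion-handler-based generateCGImagesAsynchronously, sidestepping the non-Sendable closure that trips strict concurrency:

```swift
import AVFoundation
import UIKit

// Async/await port of the code above; same asset, times, and tolerances.
func extractFrames() async throws -> [UIImage] {
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "mov") else { return [] }
    let videoAsset = AVURLAsset(url: url)
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    var images: [UIImage] = []
    for seconds in [1, 4, 8] {
        let time = CMTime(value: CMTimeValue(seconds), timescale: 1)
        let (cgImage, _) = try await generator.image(at: time)   // throws instead of crashing
        images.append(UIImage(cgImage: cgImage))
    }
    return images
}
```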
5 replies · 0 boosts · 559 views · Aug ’24
Media Extension API - How to properly vend GOP samples from a MediaFormat Extension
Hello, I am testing the new Media Extension API in macOS 15 Beta 4.

Firstly, THANK YOU FOR THIS API!!!!!! This is going to be huge for the video ecosystem on the platform. Seriously!

My understanding is that to support custom container formats you make a MEFormatReader extension, and to support a specific custom codec, you create a MEVideoDecoder for that codec. I have followed the docs, especially the inline header info, and have gotten quite far:
- A host app which hosts my Media Extension (MKV files)
- An extension bundle which exposes the UTTypes it supports to the system, and the plugin class ID, as per the docs
- Entitlements as per the docs
- I'm building debug, but I have a valid Developer ID / account associated in Teams in Xcode
- My plugin is visible to the Media Extension System preference
- My plugin is properly initialized; I get the MEByteReader and can read container-level metadata in callbacks
- I can instantiate my track readers, validate the track-level information, and provide the callbacks
- I can instantiate my sample cursors and respond to seek requests for samples for the track in question

Now, here is where I hit some issues. My format reader is leveraging FFmpeg's libavformat library, and I am testing with MKV files which host AVC1 H.264 samples, which should be decodable out of the box by VideoToolbox as I understand it (i.e., I should not need a separate MEVideoDecoder plugin to handle this format).

Here is the CMFormatDescription which I vend from my MKV parser to AVFoundation via the track reader:

```
Made Format Description: <CMVideoFormatDescription 0x11f005680 [0x1f7d62220]> {
    mediaType:'vide'
    mediaSubType:'avc1'
    mediaSpecific: {
        codecType: 'avc1'
        dimensions: 1920 x 1080
    }
    extensions: {(null)}
}
```

My MESampleCursor implementation implements all of the callbacks, including some of the 'optional' sample cursor location methods (I'm only sharing the optional ones here):

```objc
- (MESampleLocation * _Nullable) sampleLocationReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
- (MESampleCursorChunk * _Nullable) chunkDetailsReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
```

I also populate the AVSampleCursorSyncInfo and AVSampleCursorDependencyInfo structs for each AVPacket* I decode from libavformat.

Now my issue: I get these logs in my host app:

```
<<<< VRP >>>> figVideoRenderPipelineSetProperty signalled err=-12852 (kFigRenderPipelineError_InvalidParameter) (sample attachment collector not enabled) at FigStandardVideoRenderPipeline.c:2231
<<<< VideoMentor >>>> videoMentorDependencyStateCopyCursorForDecodeWalk signalled err=-12836 (kVideoMentorUnexpectedSituationErr) (Node not found for target cursor -- it should have been created during videoMentorDependencyStateAddSamplesToGraph) at VideoMentor.c:4982
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
```

which I presume is telling me I am not providing the GOP or dependency metadata correctly to the plugin.

I've included console logs from my extension and host app: LibAVExtension system logs

And my SampleCursor implementation is here: https://github.com/vade/FFMPEGMediaExtension/blob/main/LibAVExtension/LibAVSampleCursor.m

Any guidance is very helpful. Thank you!
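One hedged observation: the vended format description above has empty extensions, yet an 'avc1' sample description normally carries an avcC atom (SPS/PPS), which decoders and downstream sample machinery rely on. A sketch of building a description that includes it, assuming the parameter sets can be pulled from the MKV's codec private data via libavformat; whether this resolves the VideoMentor errors is unverified:

```swift
import CoreMedia
import Foundation

// Sketch: build an H.264 format description carrying the avcC extension
// from raw SPS/PPS byte arrays (assumed to come from libavformat extradata).
func makeAVCFormatDescription(sps: [UInt8], pps: [UInt8]) throws -> CMFormatDescription {
    var formatDescription: CMFormatDescription?
    let status = sps.withUnsafeBufferPointer { spsBuf in
        pps.withUnsafeBufferPointer { ppsBuf in
            CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: [spsBuf.baseAddress!, ppsBuf.baseAddress!],
                parameterSetSizes: [sps.count, pps.count],
                nalUnitHeaderLength: 4,          // 4-byte AVCC length prefixes
                formatDescriptionOut: &formatDescription)
        }
    }
    guard status == noErr, let description = formatDescription else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return description
}
```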
1 reply · 1 boost · 728 views · Jul ’24
FFmpeg subtitle synthesis produces garbled Chinese characters on iOS 18
Below iOS 18, the FFmpeg subtitle-synthesis log reads:

```
[Parsed_subtitles_0 @ 0x301b37650] Using font provider fontconfig
[Parsed_subtitles_0 @ 0x301b37650] fontselect: (PingFangSC-Semibold, 400, 0) -> /System/Library/Fonts/LanguageSupport/PingFang.ttc, 8, PingFangSC-Semibold
```

On iOS 18, the FFmpeg subtitle-synthesis log reads:

```
[Parsed_subtitles_0 @ 0x303e825d0] Using font provider fontconfig
[Parsed_subtitles_0 @ 0x303e825d0] fontselect: (PingFangSC-Regular, 400, 0) -> /System/Library/Fonts/Core/HiraginoKakuGothic.ttc, 10, .HiraKakuInterface-W4
[Parsed_subtitles_0 @ 0x303e825d0] Glyph 0x8FD9 not found, selecting one more font for (PingFangSC-Regular, 400, 0)
[Parsed_subtitles_0 @ 0x303e825d0] fontselect: (PingFangSC-Regular, 400, 0) -> /System/Library/Fonts/Core/LastResort.otf, 0, LastResort
```

Some otherwise-normal characters come out garbled, such as 0x8FD9 mentioned in the log, which corresponds to "这". Traditional Chinese and Simplified Chinese are affected; Russian, Korean, Japanese, French, English, and German render with the correct fonts.

In short: below iOS 18, fontselect resolves PingFangSC to /System/Library/Fonts/LanguageSupport/PingFang.ttc, while on iOS 18 it resolves to /System/Library/Fonts/Core/HiraginoKakuGothic.ttc instead of continuing to use PingFang.ttc. Has iOS changed the PingFang font directory, causing FFmpeg to be unable to use the system fonts for subtitle synthesis?
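As a quick diagnostic (a sketch, not a fix), CoreText can report which file backs a given PostScript name on the running OS, which would confirm whether PingFangSC-Regular still resolves to PingFang.ttc on iOS 18:

```swift
import CoreText
import Foundation

// Print the font file backing a PostScript name on this OS version.
func printFontFile(for postScriptName: String) {
    let font = CTFontCreateWithName(postScriptName as CFString, 17, nil)
    let resolvedName = CTFontCopyPostScriptName(font) as String
    let url = CTFontCopyAttribute(font, kCTFontURLAttribute) as? NSURL
    print("\(postScriptName) -> \(resolvedName) @ \(url?.path ?? "unknown path")")
}

printFontFile(for: "PingFangSC-Regular")
```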
2 replies · 1 boost · 503 views · Jul ’24
Cause of AVErrorMediaServicesWereReset error?
Hi! I am getting AVErrorMediaServicesWereReset (-11819) thrown as an error by AVMutableCompositionTrack.insertTimeRange(_:of:at:) when trying to insert part of an AVAssetTrack into my video track (an AVMutableCompositionTrack) in my AVMutableComposition. This is not happening every time I'm making an AVMutableComposition, but it is happening frequently. I am also getting this error thrown when trying to export using an AVAssetExportSession. Is there any insight into what can cause this error in these scenarios? Thanks!
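For context, a minimal sketch of the kind of call site being described (the asset and time range are assumptions). AVErrorMediaServicesWereReset indicates the system's media services were reset underneath the app (e.g., the media daemon restarted), and the commonly suggested mitigation is to rebuild the composition or export session and retry, rather than reuse objects created before the reset:

```swift
import AVFoundation

// Insert an asset's full video track into a mutable composition; this is
// the insertTimeRange(_:of:at:) call that intermittently throws -11819.
func addClip(from asset: AVAsset, to composition: AVMutableComposition) async throws {
    guard let sourceTrack = try await asset.loadTracks(withMediaType: .video).first,
          let videoTrack = composition.addMutableTrack(
              withMediaType: .video,
              preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
    let duration = try await asset.load(.duration)
    try videoTrack.insertTimeRange(
        CMTimeRange(start: .zero, duration: duration),
        of: sourceTrack,
        at: .zero)
}
```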
3 replies · 0 boosts · 486 views · Jul ’24
Setting the right height to correctly visualize VR180 3D video
Hi, I'm developing a simple app to visualize embedded VR180 3D video. I use a hemisphere and project the video as its material. The hemisphere sits in the scene at a fixed y value of 1.35, which is good for a seated person but not ideal for a standing person, because the stereoscopic vision is not correct. In the Apple TV+ and Kandao applications, I noticed that the translation of the video is anchored to the Apple Vision Pro. I tried using an AnchorEntity on the head with trackingMode .once, but then there is a rotation problem: the hemisphere starts with the rotation of the head. Is there a solution, for example, to anchor the hemisphere only to the translation and not to the rotation of the head?
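One possible approach, sketched: instead of AnchorEntity(.head), run a WorldTrackingProvider (which requires an immersive space) and copy only the translation column of the device transform onto the hemisphere, leaving its rotation untouched. Entity and session names are assumptions:

```swift
import ARKit
import RealityKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Follow the device's translation only; the entity keeps its own rotation.
func followTranslation(of hemisphere: Entity) async throws {
    try await session.run([worldTracking])
    while !Task.isCancelled {
        if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
            let m = device.originFromAnchorTransform
            hemisphere.position = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
        }
        try await Task.sleep(nanoseconds: 16_000_000)   // ~60 Hz update
    }
}
```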
4 replies · 0 boosts · 483 views · Jul ’24
mediafilesegmenter fails to segment a file due to a track containing more than one valid time mapping
When I try to run mediafilesegmenter with the following:

```
mediafilesegmenter -iso-fragmented -z frame_index.m3u8 \
    -f 720p -i index_720p \
    -k key.bin -stream_encrypt -P -K <url> \
    -B seg_720p -t 6 source.mp4
```

I get the following output:

```
ISO fragmented mode, forcing segments to start with I-Frame
Processing file source.mp4
track 2 of source.mp4 contains more than one valid time mapping
Unable to find any valid tracks to segment.
Segmenting failed (-15650).
```

What specifically should I be looking for in my MP4 that would be causing this?
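One way to see what the tool is complaining about, sketched with AVFoundation: each AVAssetTrackSegment on a track is one time mapping (one edit), so a track reporting more than one segment matches the error text. The file path is an assumption:

```swift
import AVFoundation

// Print the time mappings (edits) for every track in the file; the track
// with more than one segment is the one mediafilesegmenter rejects.
func printTimeMappings() async throws {
    let asset = AVURLAsset(url: URL(fileURLWithPath: "source.mp4"))
    for track in try await asset.load(.tracks) {
        let segments = try await track.load(.segments)
        print("track \(track.trackID): \(segments.count) time mapping(s)")
        for segment in segments {
            let mapping = segment.timeMapping
            print("  source \(mapping.source.start.seconds)s +\(mapping.source.duration.seconds)s",
                  "-> target \(mapping.target.start.seconds)s +\(mapping.target.duration.seconds)s")
        }
    }
}
```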
0 replies · 0 boosts · 426 views · Jul ’24
Get Progress of AVAssetDownloadTask on watchOS
I'm trying to download M3U8 media on watchOS with this code:

```swift
let configuration = URLSessionConfiguration.background(withIdentifier: "com.id")
let session = AVAssetDownloadURLSession(
    configuration: configuration,
    assetDownloadDelegate: M3U8DownloadDelegate.shared,
    delegateQueue: .main
)
let asset = AVURLAsset(url: URL(string: mediaLink)!)
let downloadTask = session.makeAssetDownloadTask(downloadConfiguration: .init(asset: asset, title: ""))
downloadTask.resume()

m3u8DownloadObservation = downloadTask.progress.observe(\.fractionCompleted) { progress, _ in
    print(progress)
}
```

But downloadTask.progress is always zero, and the observation is never called. How do I get the progress correctly?
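An alternative worth trying, sketched against the delegate the code above already wires up: AVAssetDownloadDelegate reports loaded time ranges, from which a progress fraction can be computed even when KVO on task.progress never fires for a background session:

```swift
import AVFoundation

// Delegate-based progress: sum the loaded time ranges and divide by the
// expected range's duration.
final class M3U8DownloadDelegate: NSObject, AVAssetDownloadDelegate {
    static let shared = M3U8DownloadDelegate()

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didLoad timeRange: CMTimeRange,
                    totalTimeRangesLoaded loadedTimeRanges: [NSValue],
                    timeRangeExpectedToLoad: CMTimeRange) {
        let loaded = loadedTimeRanges
            .map { $0.timeRangeValue.duration.seconds }
            .reduce(0, +)
        let expected = timeRangeExpectedToLoad.duration.seconds
        print("progress:", expected > 0 ? loaded / expected : 0)
    }
}
```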
0 replies · 0 boosts · 312 views · Jul ’24