I'm working on a streaming tvOS app, and as you know there are mostly two types of video streams: live and VOD. AVPlayerViewController handles these stream types by showing the respective playback controls.
Recently I got a task to implement synchronous VOD playback (syncVod): we need to simulate live playback while actually playing back a VOD stream.
In order to simulate live playback, the following things need to be handled:
Disabling scrubbing via the remote. (Done: playerVc.requiresLinearPlayback = true)
Disabling the info panel view with the play "From beginning" button. (Done: playerVc.playbackControlsIncludeInfoViews = false)
Disabling the play/pause button. (Done, though not ideally. In a rate-change observer: if player.rate == 0 && playbackMode == .syncVod { player.play(); return }; see the sketch after this list.) Why it's not an ideal solution: tapping the remote causes a quite short hiccup in playback, but playback resumes and no actual pause happens.
Hiding the progress bar and time labels. :(
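For reference, here is a minimal sketch of points 1-3 above (playbackMode and .syncVod are my app's own names, and the observation object has to be retained somewhere):

import AVKit

let playerVc = AVPlayerViewController()
playerVc.player = player
playerVc.requiresLinearPlayback = true              // 1. no scrubbing via the remote
playerVc.playbackControlsIncludeInfoViews = false   // 2. no info panel / "From beginning"

// 3. Undo any pause immediately so playback never actually stops.
// `playbackMode` / `.syncVod` are app-specific; keep `rateObservation` alive (e.g. as a property).
let rateObservation = player.observe(\.rate, options: [.new]) { player, _ in
    if player.rate == 0 && playbackMode == .syncVod {
        player.play()
    }
}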
Point #4 is the main problem: we can't hide the progress bar and its related UI elements (time labels) specifically; we can only hide all playback controls via playerVc.showsPlaybackControls = false. The thing is, I have custom buttons in transportBarCustomMenuItems, and hiding all playback controls is not the right option for me.
Implementing a custom playback controls panel is kind of a heavy lift, but as of now it seems to be the only proper way of implementing syncVod playback ideally.
Did anyone face a similar issue and resolve it without implementing a custom playback controls panel? Is there a way to hide only the progress bar in the tvOS AVPlayerViewController?
HELP! How can I play a spatial video in my own Vision Pro app the way the official Photos app does? I've used the AVKit API to play a spatial video in the Xcode Vision Pro simulator, following the official developer documentation. The video plays, but it looks different from what Photos shows: in Photos the edges of the video look soft and fuzzy, while in my own app the video has a hard, clear edge.
How can I play the spatial video in my own app with the same effect as in Photos?
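For context, the playback itself is a plain AVKit setup along these lines (a sketch; the file name is a placeholder and my real code may differ in details):

import AVKit
import SwiftUI

struct SpatialVideoView: View {
    // Assumption: an MV-HEVC spatial video bundled with the app; "spatial.mov" is a placeholder.
    private let player = AVPlayer(url: Bundle.main.url(forResource: "spatial", withExtension: "mov")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear { player.play() }
    }
}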
A small number of crashes are being reported on Firebase. When attempting to use the insertTimeRange:ofAsset:atTime:error: method of AVMutableComposition, a crash occurred with the error message -[__NSArrayM insertObject:atIndex:]: object cannot be nil.
Most of them appear on iOS 17.0 and above.
Here's my code:
- (AVMutableComposition *)createtrimAsset:(AVAsset *)asset
                              andStartTime:(CGFloat)startTime
                                   endTime:(CGFloat)endTime {
    NSError *error = nil;
    CGFloat timescale = 1000000;
    AVMutableComposition *mutableComposition = [AVMutableComposition composition];
    CMTime sStartTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(asset.duration) * startTime, timescale);
    CMTime eEndTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(asset.duration) * endTime, timescale);
    [mutableComposition insertTimeRange:CMTimeRangeMake(sStartTime, CMTimeSubtract(eEndTime, sStartTime))
                                ofAsset:asset
                                 atTime:kCMTimeZero
                                  error:&error];
    return mutableComposition;
}
I attempted to reproduce this crash by deliberately setting the timeRange or asset to unusual values, such as asset=nil, or asset.duration=0, or asset.duration=NAN, but all attempts failed.
What could be causing this exception? Any advice would be of great help to me.
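For reference, here is a defensive variant of the same trim, written in Swift for brevity. The guards reflect my assumption (unconfirmed) that an invalid or empty time range, or an asset without tracks, is what ends up being inserted as nil:

import AVFoundation

// Hypothetical defensive sketch; names and validation thresholds are my own.
func trimmedComposition(from asset: AVAsset, startFraction: Double, endFraction: Double) -> AVMutableComposition? {
    let duration = asset.duration
    guard duration.isValid, !duration.seconds.isNaN, duration.seconds > 0,
          !asset.tracks.isEmpty else { return nil }

    let timescale: CMTimeScale = 1_000_000
    let start = CMTime(seconds: duration.seconds * startFraction, preferredTimescale: timescale)
    let end = CMTime(seconds: duration.seconds * endFraction, preferredTimescale: timescale)
    guard end > start else { return nil }

    let composition = AVMutableComposition()
    do {
        try composition.insertTimeRange(CMTimeRange(start: start, end: end), of: asset, at: .zero)
    } catch {
        print("insertTimeRange failed: \(error)")
        return nil
    }
    return composition
}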
OS: visionOS 1.0
Xcode: 15.2
In the application under development, I do the following:
Open an ImmersiveSpace
Add a VideoPlayerComponent to an Entity (see the sketch below)
Play an 8K video
The app then crashes:
the Apple logo appears and the system returns to the Home view.
However, the problem does not occur if I build an application containing only the 8K video playback part, extracted on its own.
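Roughly, the setup from steps 2 and 3 looks like this (a sketch; "video8k" is a placeholder asset name):

import AVFoundation
import RealityKit

// Builds an entity that plays the 8K asset via VideoPlayerComponent.
func makeVideoEntity() -> Entity {
    let url = Bundle.main.url(forResource: "video8k", withExtension: "mov")!  // placeholder
    let player = AVPlayer(url: url)

    let videoEntity = Entity()
    videoEntity.components.set(VideoPlayerComponent(avPlayer: player))
    player.play()  // in the real app this happens after the entity is added to the RealityView content
    return videoEntity
}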
Error Log
apply fence tx failed (client=0x6fbf0fcc) [0xfffffecc (ipc/mig) server died]
Failed to commit transaction (client=0x58510d43) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM"
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
Failed to set override status for bind point component member.
Message from debugger: Terminated due to signal 9
I can't share the entire application, but is anyone else experiencing the same problem?
Is this a memory issue?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
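For reference, the experiment looked roughly like this (a sketch with the buffers, size, and settings simplified; quality and bitrate tuning are omitted):

import AVFoundation

// Writes pre-made CVPixelBuffers into an HEVC-with-alpha QuickTime movie.
func writeImages(_ pixelBuffers: [CVPixelBuffer], size: CGSize, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (i, buffer) in pixelBuffers.enumerated() {
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
        adaptor.append(buffer, withPresentationTime: CMTime(value: CMTimeValue(i), timescale: 30))
    }
    input.markAsFinished()
    writer.finishWriting {}
}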
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
Hi.
I know that playing videos from Apple Music was not possible with iOS 17. There is only the workaround to open the Music app.
My question is whether anybody found a solution for iOS 18 (Beta).
Thanks,
Dirk
On the M series, VideoToolbox GPU compression takes a YUV 4:2:2 input (kCVPixelFormatType_422YpCbCr8BiPlanarVideoRange), yet the compressed JPEG output format is still YUV 4:2:0. On the Intel series, GPU compression requires a YUV 4:2:0 input (kCVPixelFormatType_420YpCbCr8Planar), and the compressed JPEG output format is YUV 4:2:2. In both cases the output format after compression is not consistent with the input format. Does VideoToolbox GPU compression support outputting YUV 4:2:2 or YUV 4:4:4 JPEG images and H.264 streams?
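For context, here is a sketch of the kind of session setup in question (an illustration only; whether the encoder can be made to emit 4:2:2 or 4:4:4 output is exactly the open question):

import VideoToolbox

// Creates a VideoToolbox JPEG compression session with a 4:2:2 biplanar source format.
func makeJPEGCompressionSession(width: Int32, height: Int32) -> VTCompressionSession? {
    let sourceAttributes: [CFString: Any] = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_422YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferWidthKey: width,
        kCVPixelBufferHeightKey: height
    ]
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_JPEG,
        encoderSpecification: nil,
        imageBufferAttributes: sourceAttributes as CFDictionary,
        compressedDataAllocator: nil,
        outputCallback: nil,   // frames would then be submitted with the output-handler encode variant
        refcon: nil,
        compressionSessionOut: &session)
    return status == noErr ? session : nil
}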
Hello everyone,
Is it possible to set up AVCaptureMovieFileOutput to record the audio for my HEVC video as 44100 Hz, 16-bit, mono PCM? If yes, how does that work?
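To be concrete, this is the target format expressed as AVFoundation audio settings. The sketch below assumes an AVAssetWriter-style pipeline; whether AVCaptureMovieFileOutput itself accepts such settings is exactly what I'm asking about:

import AVFoundation

// 44.1 kHz, 16-bit, mono, interleaved little-endian integer PCM.
let pcmSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsNonInterleaved: false
]
let audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: pcmSettings)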
Thank you in advance
I am writing to follow up on my lab at WWDC24.
I had a 1:1 lab with Mr. Kavin; we had a good 30-minute session, and for follow-up questions Kavin asked me to post them using feedback.
The following is my question:
We have screen share in our application and are trying to use CFMessagePort for passing a CVPixelBufferRef from the broadcast extension to the application.
Questions:
How to copy the planes of an IOSurface-backed CVPixelBufferRef onto another one without using memcpy; is there a zero-copy method? (See the sketch after this list.)
How to get notified when the data of an IOSurface-backed CVPixelBufferRef is changed by another process.
How to send an IOSurface-backed CVPixelBufferRef from the broadcast extension to the application.
How to pass an unowned IOSurfaceRef from the broadcast extension to the application.
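As a sketch of the zero-copy direction in question 1: wrap the existing IOSurface in a new CVPixelBuffer instead of copying planes (cross-process ownership, locking, and lifetime are separate concerns not addressed here):

import CoreVideo
import IOSurface

// Wrap an IOSurface in a CVPixelBuffer without copying the pixel data.
func pixelBuffer(wrapping surface: IOSurfaceRef) -> CVPixelBuffer? {
    var buffer: Unmanaged<CVPixelBuffer>?
    let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &buffer)
    guard status == kCVReturnSuccess else { return nil }
    return buffer?.takeRetainedValue()
}

// Going the other way: the IOSurface backing an existing CVPixelBuffer.
func surface(of pixelBuffer: CVPixelBuffer) -> IOSurfaceRef? {
    return CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue()
}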
iOS 17.4, 17.5.1
AVPlayer
https://svip.yzzy23-play.com/20240606/14121_017bbbeb/index.m3u8
The audio drops out at around 8:30. I have tested in the system Safari and the same thing happens there.
Previous versions did not have this problem.
I’m creating a objective C command-line utility to encode RAW image sequences to ProRes 4444, but I’m encountering, blocky compression artifacts in the ProRes 4444 video output.
To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding to ProRes and the PNG looks perfect, I can see all detail in every part of the image dynamic range.
Here’s a comparison between the 16-bit PNG(on the right) and the ProRes 4444 output. (on the left)
As a further test, I re-encoded the ‘test PNG’ to ProRes 4444 using DaVinci Resolve, and the ProRes4444 output video from Resolve doesn’t have any blocky compression artifacts. Looks identical.
In short, this is what the utility does:
Unpacks the 12-bit raw data into 16-bit values. After unpacking, the raw data is debayered to convert it into a standard color image format (BGR) using OpenCV.
Scale the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. Up to this point everything is fine and confirmed by saving 16bit PNGs.
The images are encoded to ProRes 4444 using the AVFoundation framework.
The pixel buffers are created and managed via a pixel buffer attributes dictionary with kCVPixelFormatType_64RGBALE.
I need help figuring this out; I'm a real novice when it comes to AVFoundation/encoding to ProRes.
See relevant parts of my 'encodeToProRes' function:
void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) {
    NSError *error = nil;
    NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error];
    if (error) {
        std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl;
        return;
    }

    // Load the first image to get the dimensions
    std::cout << "Debayering the first image to get dimensions..." << std::endl;
    Mat firstImage;
    int width = 5320;
    int height = 3900;
    if (!debayer_image(rawPaths[0], firstImage, width, height)) {
        std::cerr << "Error debayering the first image" << std::endl;
        return;
    }
    width = firstImage.cols;
    height = firstImage.rows;

    // Save the first frame as a PNG 16-bit image for validation
    std::string pngFilePath = outputPath + "_frame1.png";
    if (!imwrite(pngFilePath, firstImage)) {
        std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl;
    } else {
        std::cout << "First frame saved as PNG: " << pngFilePath << std::endl;
    }

    NSString *codecKey = nil;
    if (proResFlavor == "4444") {
        codecKey = AVVideoCodecTypeAppleProRes4444;
    } else if (proResFlavor == "422HQ") {
        codecKey = AVVideoCodecTypeAppleProRes422HQ;
    } else if (proResFlavor == "422") {
        codecKey = AVVideoCodecTypeAppleProRes422;
    } else if (proResFlavor == "LT") {
        codecKey = AVVideoCodecTypeAppleProRes422LT;
    } else {
        std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl;
        return;
    }

    NSDictionary *outputSettings = @{
        AVVideoCodecKey: codecKey,
        AVVideoWidthKey: @(width),
        AVVideoHeightKey: @(height)
    };
    AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    videoInput.expectsMediaDataInRealTime = YES;

    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE),
        (id)kCVPixelBufferWidthKey: @(width),
        (id)kCVPixelBufferHeightKey: @(height)
    };
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes];
    ...

    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps
    int numFrames = static_cast<int>(rawPaths.size());
    ...

    // Encoding thread
    std::thread encoderThread([&]() {
        int frameIndex = 0;
        std::vector<CVPixelBufferRef> pixelBufferBuffer;
        while (frameIndex < numFrames) {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; });
            if (!frameQueue.empty()) {
                auto [index, debayeredImage] = frameQueue.front();
                frameQueue.pop();
                lock.unlock();
                if (index == frameIndex) {
                    cv::Mat rgbaImage;
                    cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA);
                    CVPixelBufferRef pixelBuffer = NULL;
                    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer);
                    if (result != kCVReturnSuccess) {
                        std::cerr << "Error: Could not create pixel buffer" << std::endl;
                        dispatch_group_leave(dispatchGroup);
                        return;
                    }
                    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
                    for (int row = 0; row < height; ++row) {
                        memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer),
                               rgbaImage.ptr(row),
                               width * 8);
                    }
                    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                    pixelBufferBuffer.push_back(pixelBuffer);
                    ...
Thanks very much!
Hello,
I notice that when using H.265 VideoToolbox encoding with HandBrake (latest snapshots), the resulting output is larger than the H.264 source.
Obviously this was the case for macOS 14.x and below.
Is this a known "regression"?
Thanks.
I film and edit content using my iPhone and PremierePro. I've been doing this since 2017, so I'd like to say I know my way around an iPhone and the PremierePro software enough to get a finished product to my client.
Before I send content to clients, I airdrop it to my phone to make sure that the quality of it is up to par. On this most recent project, I've had issues airdropping it to myself. Each time I do so, my iPhone prompts which 3rd party app I would like to open the video in, rather than automatically opening or saving the video into my photo library.
I will list the specs below:
Filmed on iPhone 15 Plus iOS Version 17.5.1 (at the moment this is the most up to date software update)
Filmed in 4K at 60 FPS
I have ample storage space on my phone and the video file size is 220MB
Premiere Pro Export Settings: Video Settings - H.264, Field Order: Progressive, Bit Rate Encoding: CBR
I will say that I purchased a transition-and-burns bundle and used it for the first time on this project. All materials used are in .mp4 format and the blend mode was set to overlay. Nothing out of the ordinary.
I figured it wouldn't be a problem since my client would just be downloading it via dropbox, but there was an issue there as well. My client received an error message saying, "Sorry, this type of video cannot be saved to this device".
The back road workaround was to take the saved video, plug it into a separate PremierePro project, and export it with the preset: Match Source - Adaptive High Bitrate. I was able to airdrop it to myself, it saved in my photo album and my client was able to download it without receiving that error message.
If there is an explanation as to why I am having this issue and how I can avoid it, I would really appreciate it as I have never had this problem in the past.
When an audio file's magic number is 49443302 (ID3v2.2) rather than 49443303 (ID3v2.3), AVAudioPlayer's duration property returns the wrong value.
It is actually caused by the engiTunSMPB comment, but I want to know why it happens only with ID3 version 2.2 (49443302).
Example: the only difference between the two MP3 files is the magic number,
and checking the duration returns this result.
The source code is below:
ContentView.swift
I see that Quick Look's PreviewApplication.open has the ability to show videos in an immersive view, similar to the Photos application. So I assume there should be a control/configuration for VideoPlayer/AVPlayerViewController that would allow the same.
How do you enable this immersive presentation for the VideoPlayer?
If it is not possible: FB13886809
Hi,
I’m developing an app that uses SharePlay. In specific, I’m using ShareLink in my SwiftUI-based app so that when 2 devices come close, it will start SharePlay via AirDrop, just like how Name Drop works (the animation is super cool, btw).
However, I’ve notice that SharePlay doesn’t start reliably under the following conditions:
Do both devices need to be signed in using different Apple ID? I wish it works with the same Apple ID.
When both devices are running my app, the sharing does not seem to start; maybe both of them are trying to be the host app?
When I try to demo this NameDrop-like transaction via Zoom, it usually doesn’t work; maybe because the cable is connected in Lightening port? Is some Mac app (in my case, Zoom or even QuickTime) capturing the screen of the device make it less likely to have successful SharePlay transaction?
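For context, a minimal sketch of the kind of GroupActivity involved (identifiers and titles are hypothetical; the ShareLink/Transferable plumbing is omitted since that is part of what seems unreliable):

import GroupActivities

// Hypothetical activity definition; identifier and metadata are placeholders.
struct NearbyShareActivity: GroupActivity {
    static let activityIdentifier = "com.example.nearby-share"
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Session"
        meta.type = .generic
        return meta
    }
}

// Activation path once the activity reaches the receiving copy of the app.
func startSharedActivity() async throws {
    let activity = NearbyShareActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try await activity.activate()
    default:
        break
    }
}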
Thanks!
Is AVQT capable of measuring the encoding quality of PQ or HLG based content beyond SDR? If so, how can I leverage it? If not, is there a roadmap or timeline for enabling this kind of measurement in the tool?
Hello,
I have converted UIImage to CVPixelBuffer. I am creating a video writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more.
However, I need to add 30 CVPixelBuffers per second, because for the video to work on social media it must be 30 frames per second.
The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error.
The error is something like "Operation cannot be completed".
Could you give me an example of a loop that adds 30 CVPixelBuffers per second to a video that is currently being written?
Example:
var frameIndex: Int64 = 0
while let buffer = videoProvider.getNextFrame() {
    // Wait until the writer input can accept more data before appending.
    while !videoInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01)
    }
    // Frame N is presented at N/30 seconds, i.e. 30 frames per second.
    adaptor.append(buffer, withPresentationTime: CMTime(value: frameIndex, timescale: 30))
    frameIndex += 1
}
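For comparison, here is a sketch of the same idea using requestMediaDataWhenReady instead of busy-waiting (videoInput, adaptor, and videoProvider are assumed to exist as above):

var frameCount: Int64 = 0
let writerQueue = DispatchQueue(label: "video.writer.queue")

videoInput.requestMediaDataWhenReady(on: writerQueue) {
    while videoInput.isReadyForMoreMediaData {
        guard let buffer = videoProvider.getNextFrame() else {
            videoInput.markAsFinished()
            // assetWriter.finishWriting { ... } would follow here
            return
        }
        // To keep one image on screen for 2 seconds at 30 fps, getNextFrame()
        // can simply return the same CVPixelBuffer 60 times in a row.
        adaptor.append(buffer, withPresentationTime: CMTime(value: frameCount, timescale: 30))
        frameCount += 1
    }
}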
I await your response.
I have an app that displays overlays on top of an AVCaptureVideoPreviewLayer (basically AR without ARKit), and my users have repeatedly requested a button that will allow them to capture a screenshot of both the video and the overlays and surrounding UI with a single tap. However, I cannot find a way to actually take such a screenshot.
I have tried the usual methods of rendering views to images, such as calling drawViewHierarchyInRect on my top level view or calling renderInContext on the same view's layer. These all work perfectly to capture the overlays and the surrounding UI elements, but there is nothing but black where the video preview's contents should be.
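Concretely, the view-hierarchy rendering I tried looks like this (a sketch; it captures the overlays and UI but leaves the preview area black):

import UIKit

// Renders UIKit content, but not AVCaptureVideoPreviewLayer contents.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}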
snapshotViewAfterScreenUpdates: does capture exactly what I want, but snapshot views cannot be written to an image. From what I understand that's an intentional security decision by Apple.
I've considered using ReplayKit to take a very short screen recording and then using an AVAssetImageGenerator to grab a frame from that video, but I don't think that's how those APIs were intended to be used and it's an additional permission to request from the user. I would really rather not do this if there is any alternative (and I'm not even sure it would work).
Is there any reasonable method to render a view hierarchy to an image in such a way as to capture the contents of any video preview layers found within that hierarchy?
I made a CameraExtension and installed it via OSSystemExtensionRequest.
I got the success callback. I uninstalled the old version of my CameraExtension and installed the new version.
The "systemextensionsctl list" command shows "[activated enabled]" for my new version.
But the daemon process for my CameraExtension is not running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
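For reference, the activation flow is along these lines (a sketch; the bundle identifier and delegate object are placeholders, and delegate methods beyond the success callback are omitted):

import SystemExtensions

let request = OSSystemExtensionRequest.activationRequest(
    forExtensionWithIdentifier: "com.example.MyApp.CameraExtension",  // placeholder identifier
    queue: .main
)
request.delegate = extensionRequestDelegate  // an object conforming to OSSystemExtensionRequestDelegate
OSSystemExtensionManager.shared.submitRequest(request)
// The success callback arrives in request(_:didFinishWithResult:), yet the
// extension's daemon process still does not launch until after a reboot.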