I am working on a project for a university that wants to alter the passthrough camera feed beyond the standard filters (saturation/contrast/etc.) that some of the headsets provide.
I don't have access to the headset or enterprise SDK yet, as I'd like to nail down whether this is feasible before we purchase the hardware. In the API I see that I can use CameraFrameProvider to access a CameraFrame and then grab a sample. The sample has a CVPixelBuffer. I have two questions regarding the pixel buffer:
I see that the buffer itself is read-only, but can I alter the bytes within this pixel buffer? Let's say, change all green pixels to red (not my actual use case, just an example).
Will the updated pixel buffer then be used in the passthrough screen?
If not, is there any way to control the video feed that is displayed as passthrough? Our ideal setup would be to get access to a frame, alter it however we want, and then have that frame displayed in passthrough. I realize I could take the feed and copy it into a floating window and alter that, but that breaks the immersion we are aiming to create here.
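For illustration, here is a minimal Core Video sketch of the kind of edit I mean. Since the sample's buffer is read-only, this copies it into a new writable buffer first (a sketch assuming a single-plane format such as BGRA; planar formats would need a per-plane copy):

import CoreVideo
import Foundation

// Sketch: deep-copy a read-only CVPixelBuffer into a new, writable buffer.
// Assumes a single-plane pixel format; planar formats need a per-plane copy.
func writableCopy(of source: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copyOut)
    guard let copy = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(copy, [])
    defer {
        CVPixelBufferUnlockBaseAddress(copy, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    // Copy row by row in case the two buffers have different row padding.
    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(source)
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy)
    let rowLength = min(srcBytesPerRow, dstBytesPerRow)
    if let srcBase = CVPixelBufferGetBaseAddress(source),
       let dstBase = CVPixelBufferGetBaseAddress(copy) {
        for row in 0..<CVPixelBufferGetHeight(source) {
            memcpy(dstBase + row * dstBytesPerRow,
                   srcBase + row * srcBytesPerRow,
                   rowLength)
        }
    }
    return copy
}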
Thanks in advance!
Hello,
We are currently developing a mobile game using Unreal Engine 5, and we have encountered an issue where a specific video (mp4 format) stops displaying at a particular frame during playback within the game.
The code within Unreal fails at the following point, causing the issue:
CMTime OutputItemTime = [Output itemTimeForHostTime:CACurrentMediaTime()];
if (![Output hasNewPixelBufferForItemTime:OutputItemTime])
{
    return;
}
We have referred to the following Apple documentation:
AVPlayerTimeControlStatus
reasonForWaitingToPlay
Upon logging, we observed the following:
[2024.08.13-05.18.35:266][429]LogTemp: AMP PlayerItem.status AVPlayerItemStatusReadyToPlay
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.timeControlStatus AVPlayerTimeControlStatusWaitingToPlayAtSpecifiedRate
[2024.08.13-05.18.35:266][429]LogTemp: AMP reasonForWaitingToPlay: AVPlayerWaitingToMinimizeStallsReason
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.rate 1.000000
[2024.08.13-05.18.35:268][430]LogTemp: avf CurrentMediaTime : 455097.836833
[2024.08.13-05.18.35:268][430]LogTemp: avf OutputItemTime: 3.868346
[2024.08.13-05.18.35:268][430]LogTemp: avf Sampler::Tick() fail hasNewPixelBufferForItemTime OutputItemTime: 3.868346
This issue consistently occurs with videos that have the following specifications:
Codec: H.264
Resolution: 1080x608
Bitrate: 7,922,135 bits/sec
Duration: 90.17 seconds
Frame Rate: 30.0 fps
Pixel Format: yuv420p
Profile: Main
We would like to understand the possible reasons for the playback failure, as well as the recommended MP4 specifications for smooth playback on Apple devices; specifically, we need guidance on recommended resolution, frame rate, profile, level, and bitrate limits.
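For context, the logs above show the player parked in AVPlayerWaitingToMinimizeStallsReason. A minimal Swift sketch of opting out of that waiting behavior, which we are considering as a test (our actual code path is the Objective-C snippet above inside Unreal):

import AVFoundation

// Sketch: opt out of AVPlayer's automatic stall minimization, which is what
// AVPlayerWaitingToPlayAtSpecifiedRate / AVPlayerWaitingToMinimizeStallsReason indicates.
func startPlaybackWithoutStallWaiting(_ player: AVPlayer) {
    // Either disable automatic waiting for the player entirely...
    player.automaticallyWaitsToMinimizeStalls = false
    // ...or request immediate playback for this one play request.
    player.playImmediately(atRate: 1.0)
}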
Your assistance would be greatly appreciated.
I have Facebook SDK version 17.0.2 and Xcode 15. Sharing photos and links works fine, but when I try sharing videos, I get the following error:
Failed to log access with error: access=<PATCCAccess 0x301d12b20> accessor:<<PAApplication 0x301d27e30 identifierType:auditToken identifier:{pid:18440, version:47210}>> identifier:A9159DCD-76B1-4C77-A01E-DA611929B50B kind:intervalEvent timestampAdjustment:0 visibilityState:0 assetIdentifierCount:0 accessCount:0 tccService:kTCCServicePhotos, error=Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 15679 named com.apple.privacyaccountingd" UserInfo={NSDebugDescription=connection to service with pid 15679 named com.apple.privacyaccountingd}
I have the following piece of code that works in Swift 5
import AVFoundation
import UIKit

func test() {
    let url = Bundle.main.url(forResource: "movie", withExtension: "mov")
    let videoAsset = AVURLAsset(url: url!)
    let t1 = CMTime(value: 1, timescale: 1)
    let t2 = CMTime(value: 4, timescale: 1)
    let t3 = CMTime(value: 8, timescale: 1)
    let timesArray = [
        NSValue(time: t1),
        NSValue(time: t2),
        NSValue(time: t3)
    ]
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    generator.generateCGImagesAsynchronously(forTimes: timesArray) { requestedTime, image, actualTime, result, error in
        let img = UIImage(cgImage: image!)
    }
}
When I compile and run it in Swift 6, it gives an
EXC_BREAKPOINT (code=1, subcode=0x1021c7478)
I understand that Swift 6 adopts strict concurrency. My question is: if I start porting my code, what is the recommended way to change the code above?
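For reference, here is a sketch of how I imagine the port could look using the async image-generation API added in iOS 16, which avoids the completion handler entirely (assuming an iOS 16+ deployment target):

import AVFoundation
import UIKit

// Sketch: Swift-concurrency-friendly thumbnail generation using the async
// AVAssetImageGenerator API (iOS 16+) instead of generateCGImagesAsynchronously.
func generateThumbnails() async throws -> [UIImage] {
    let url = Bundle.main.url(forResource: "movie", withExtension: "mov")!
    let generator = AVAssetImageGenerator(asset: AVURLAsset(url: url))
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    var images: [UIImage] = []
    for seconds in [1, 4, 8] {
        let time = CMTime(value: CMTimeValue(seconds), timescale: 1)
        // image(at:) is the async/throws replacement for the callback-based API.
        let (cgImage, _) = try await generator.image(at: time)
        images.append(UIImage(cgImage: cgImage))
    }
    return images
}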
Rgds,
James
Pre-iOS 18 FFmpeg subtitle synthesis log:
[Parsed_subtitles_0 @ 0x301b37650] Using font provider fontconfig
[Parsed_subtitles_0 @ 0x301b37650] fontselect: (PingFangSC-Semibold, 400, 0) -> /System/Library/Fonts/LanguageSupport/PingFang.ttc, 8, PingFangSC-Semibold
iOS 18 FFmpeg subtitle synthesis log:
[Parsed_subtitles_0 @ 0x303e825d0] Using font provider fontconfig
[Parsed_subtitles_0 @ 0x303e825d0] fontselect: (PingFangSC-Regular, 400, 0) -> /System/Library/Fonts/Core/HiraginoKakuGothic.ttc, 10, . HiraKakuInterface-W4
[Parsed_subtitles_0 @ 0x303e825d0] Glyph 0x8FD9 not found, selecting one more font for (PingFangSC-Regular, 400, 0)
[Parsed_subtitles_0 @ 0x303e825d0] fontselect: (PingFangSC-Regular, 400, 0) -> /System/Library/Fonts/Core/LastResort.otf, 0, LastResort
Among ordinary characters, some render garbled, such as the 0x8FD9 mentioned in the log, which corresponds to "这".
Garbled fonts: Traditional Chinese and Simplified Chinese.
Normal fonts: Russian, Korean, Japanese, French, English, German.
Pre-iOS 18 FFmpeg subtitle synthesis log:
fontselect: (PingFangSC-Semibold, 400, 0) -> /System/Library/Fonts/LanguageSupport/PingFang.ttc, 8, PingFangSC-Semibold
iOS 18 FFmpeg subtitle synthesis log:
fontselect: (PingFangSC-Regular, 400, 0) -> /System/Library/Fonts/Core/HiraginoKakuGothic.ttc, 10, . HiraKakuInterface-W4
In iOS 18, the selected font has changed: instead of continuing to use PingFang.ttc, it falls back to HiraginoKakuGothic.ttc.
Has iOS 18 changed the PingFang font directory, causing FFmpeg to be unable to use the system font for subtitle synthesis?
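One workaround I am considering, assuming the issue is font lookup rather than rendering: libass's subtitles filter accepts a fontsdir option, so the app could bundle its own CJK fonts instead of relying on system font paths. A hypothetical snippet building such a filter string (all paths are placeholders):

import Foundation

// Hypothetical workaround: ship the needed CJK fonts inside the app bundle and
// point libass at them, instead of relying on system font paths that appear to
// have moved in iOS 18.
let fontsDir = Bundle.main.bundleURL.appendingPathComponent("Fonts").path
let subtitlePath = NSTemporaryDirectory() + "subtitles.ass"  // placeholder input path
let filter = "subtitles=\(subtitlePath):fontsdir=\(fontsDir)"
// Pass the filter to the FFmpeg invocation, e.g. -vf "subtitles=...:fontsdir=..."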
We have developed a custom player tvOS application using AVFoundation's AVPlayer. When we issue the Siri command "What did they say?", playback skips backwards, but subtitles temporarily stop working. Could anyone please suggest a solution for this issue? :)
I want my app to allow the user to search for certain words in a video file and in the transcript of that video. I found a Transcript class, but I don't remember which framework it is in. Would someone point me in the right direction? What framework and classes should I use?
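In case it moves the discussion along, here is a sketch of one possible direction, the Speech framework, which can transcribe a local video's audio track so the text becomes searchable (this assumes speech-recognition authorization has been granted; whether this relates to the Transcript class I saw, I can't say):

import Speech

// Sketch: transcribe the audio of a local video file with SFSpeechRecognizer,
// then search the transcription for a keyword. Assumes the user has granted
// speech recognition authorization.
func searchTranscript(of videoURL: URL, for keyword: String) {
    guard let recognizer = SFSpeechRecognizer() else { return }
    let request = SFSpeechURLRecognitionRequest(url: videoURL)
    _ = recognizer.recognitionTask(with: request) { result, error in
        guard let result, result.isFinal else { return }
        let transcript = result.bestTranscription.formattedString
        if transcript.localizedCaseInsensitiveContains(keyword) {
            print("Found \"\(keyword)\" in transcript")
        }
    }
}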
Hello
I am testing the new Media Extension API in macOS 15 Beta 4.
Firstly, THANK YOU FOR THIS API!!!!!! This is going to be huge for the video ecosystem on the platform. Seriously!
My understanding is that to support custom container formats you make a MEFormatReader extension, and to support a specific custom codec, you create a MEVideoDecoder for that codec.
OK, I have followed the docs (especially the inline header info) and have gotten quite far:
A host app which hosts my Media Extension (MKV files)
An extension bundle which exposes the UTTypes it supports to the system, plus the plugin class ID, as per the docs
Entitlements as per the docs
I'm building debug, but I have a valid Developer ID / account associated via Teams in Xcode
My plugin is visible in the Media Extensions system preference
My plugin is properly initialized; I get the MEByteReader and can read container-level metadata in the callbacks
I can instantiate my track readers, validate the track-level information, and provide the callbacks
I can instantiate my sample cursors and respond to seek requests for samples for the track in question
Now, here is where I hit some issues.
My format reader is leveraging FFmpeg's libavformat library, and I am testing with MKV files which contain AVC1 H.264 samples; as I understand it, these should be decodable out of the box by VideoToolbox (i.e., I do not need a separate MEVideoDecoder plugin to handle this format).
Here is my CMFormatDescription which I vend from my MKV parser to AVFoundation via the track reader
Made Format Description: <CMVideoFormatDescription 0x11f005680 [0x1f7d62220]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1920 x 1080
}
extensions: {(null)}
}
My MESampleCursor implementation implements all of the callbacks, plus some of the 'optional' sample cursor location methods (I'm only sharing the optional ones here):
- (MESampleLocation * _Nullable) sampleLocationReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
- (MESampleCursorChunk * _Nullable) chunkDetailsReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
I also populate the AVSampleCursorSyncInfo and AVSampleCursorDependencyInfo structs for each AVPacket* I read from libavformat.
Now my issue:
I get these log messages in my host app:
<<<< VRP >>>> figVideoRenderPipelineSetProperty signalled err=-12852 (kFigRenderPipelineError_InvalidParameter) (sample attachment collector not enabled) at FigStandardVideoRenderPipeline.c:2231
<<<< VideoMentor >>>> videoMentorDependencyStateCopyCursorForDecodeWalk signalled err=-12836 (kVideoMentorUnexpectedSituationErr) (Node not found for target cursor -- it should have been created during videoMentorDependencyStateAddSamplesToGraph) at VideoMentor.c:4982
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
Which I presume is telling me I am not providing the GOP or dependency metadata correctly to the plugin.
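One detail that may matter in the format description dump above: extensions is (null). As far as I understand, for 'avc1' content VideoToolbox expects the SPS/PPS (the avcC data from the MKV track's CodecPrivate) to travel in the format description extensions. My extension is Objective-C, but here is a Swift sketch of building such a description from parameter sets (the byte arrays are placeholders):

import CoreMedia

// Sketch: build an H.264 format description that carries SPS/PPS so the 'avc1'
// samples are decodable. The parameter sets would come from the MKV track's
// CodecPrivate (avcC) box.
func makeH264FormatDescription(sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var formatDesc: CMVideoFormatDescription?
    sps.withUnsafeBufferPointer { spsBuf in
        pps.withUnsafeBufferPointer { ppsBuf in
            let pointers: [UnsafePointer<UInt8>] = [spsBuf.baseAddress!, ppsBuf.baseAddress!]
            let sizes: [Int] = [sps.count, pps.count]
            CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,  // 4-byte length prefixes, as in avcC
                formatDescriptionOut: &formatDesc)
        }
    }
    return formatDesc
}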
I've included console logs from my extension and host app:
LibAVExtension system logs
And my SampleCursor implementation is here
https://github.com/vade/FFMPEGMediaExtension/blob/main/LibAVExtension/LibAVSampleCursor.m
Any guidance is very helpful.
Thank you!
When I use ReplayKit's exportClipToURL function on iOS to capture a 15-second replay, the resulting video quality is poor, with snowy artifacts in the visuals and distorted audio.
Is it possible to put a video on loop and have it autoplay on visionOS? We used AVPlayerViewController.
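A minimal sketch of the looping setup in question, using AVQueuePlayer plus AVPlayerLooper behind AVPlayerViewController (the URL is a placeholder):

import AVKit
import AVFoundation

// Sketch: autoplay a video on a gapless loop using AVQueuePlayer + AVPlayerLooper.
func makeLoopingPlayerViewController(for url: URL) -> AVPlayerViewController {
    let item = AVPlayerItem(url: url)
    let queuePlayer = AVQueuePlayer()
    // The looper must stay alive for looping to continue; store it somewhere long-lived.
    let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
    _ = looper

    let controller = AVPlayerViewController()
    controller.player = queuePlayer
    queuePlayer.play()  // autoplay
    return controller
}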
Hi, I'm developing a simple app to visualize embedded VR180 3D video.
I used a semisphere and projected the video as its material. The semisphere sits in the environment at a fixed y value of 1.35, which is good for a seated person but not ideal for a standing person, because the stereoscopic vision is not correct. In the Apple TV+ and Kandao applications, I noticed that the translation of the video is anchored to the Apple Vision Pro. I tried using an AnchorEntity on the head with trackingMode .once, but then there is the problem of rotation: the semisphere starts with the rotation of the head.
Is there a solution, for example, to anchor the semisphere only to the translation and not to the rotation of the head?
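A sketch of the approach I have in mind, querying the device pose from ARKit's WorldTrackingProvider each update and applying only its translation to the hemisphere (assumes a running ARKitSession; videoSphere is a placeholder name):

import ARKit
import QuartzCore
import RealityKit

// Sketch: follow the head's translation but not its rotation. Assumes an
// ARKitSession is already running the provider: try await session.run([worldTracking]).
let worldTracking = WorldTrackingProvider()

func updateSpherePosition(_ videoSphere: Entity) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
    else { return }
    let transform = deviceAnchor.originFromAnchorTransform
    // Use only the translation column of the head transform; the entity's own
    // rotation stays untouched, so the hemisphere does not spin with the head.
    videoSphere.position = SIMD3<Float>(transform.columns.3.x,
                                        transform.columns.3.y,
                                        transform.columns.3.z)
}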
Hi!
I am getting AVErrorMediaServicesWereReset (-11819) thrown by AVMutableCompositionTrack.insertTimeRange(_:of:at:) when trying to insert part of an AVAssetTrack into my video track (an AVMutableCompositionTrack) in my AVMutableComposition. This does not happen every time I build an AVMutableComposition, but it happens frequently.
I am also getting this error thrown when trying to export using an AVAssetExportSession.
Is there any insight into what can cause this error in these scenarios?
Thanks!
I'm trying to download M3U8 media on watchOS with this code:
let configuration = URLSessionConfiguration.background(withIdentifier: "com.id")
let session = AVAssetDownloadURLSession(
    configuration: configuration,
    assetDownloadDelegate: M3U8DownloadDelegate.shared,
    delegateQueue: .main
)
let asset = AVURLAsset(url: URL(string: mediaLink)!)
let downloadTask = session.makeAssetDownloadTask(downloadConfiguration: .init(asset: asset, title: ""))
downloadTask.resume()

m3u8DownloadObservation = downloadTask.progress.observe(\.fractionCompleted) { progress, _ in
    print(progress)
}
But downloadTask.progress is always zero, and the observation is never called.
How do I get the progress correctly?
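For comparison, a sketch of the delegate-based progress route, deriving progress from the loaded time ranges instead of KVO on the task's progress object (delegate method as declared by AVAssetDownloadDelegate):

import AVFoundation

final class M3U8DownloadDelegate: NSObject, AVAssetDownloadDelegate {
    static let shared = M3U8DownloadDelegate()

    // Sketch: derive progress from the loaded time ranges instead of observing
    // the task's `progress` object.
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didLoad timeRange: CMTimeRange,
                    totalTimeRangesLoaded loadedTimeRanges: [NSValue],
                    timeRangeExpectedToLoad: CMTimeRange) {
        let loadedSeconds = loadedTimeRanges
            .map { $0.timeRangeValue.duration.seconds }
            .reduce(0, +)
        let expectedSeconds = timeRangeExpectedToLoad.duration.seconds
        guard expectedSeconds > 0 else { return }
        print("Progress: \(loadedSeconds / expectedSeconds)")
    }
}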
When I try to run mediafilesegmenter with the following
mediafilesegmenter -iso-fragmented -z frame_index.m3u8 \
-f 720p -i index_720p \
-k key.bin -stream_encrypt -P \
-K <url> \
-B seg_720p -t 6 source.mp4
I get the following output
ISO fragmented mode, forcing segments to start with I-Frame
Processing file source.mp4
track 2 of source.mp4 contains more than one valid time mapping
Unable to find any valid tracks to segment.
Segmenting failed (-15650).
What specifically should I be looking for in my MP4 that would be causing this?
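A sketch of how I plan to inspect this: the "more than one valid time mapping" message presumably refers to edit-list entries, which AVFoundation exposes as track segments (whether mediafilesegmenter counts them the same way is an assumption on my part):

import AVFoundation

// Sketch: print each track's edit-list entries ("time mappings") to see which
// track has more than one, matching the mediafilesegmenter complaint about track 2.
func dumpTimeMappings(for url: URL) async throws {
    let asset = AVURLAsset(url: url)
    for track in try await asset.load(.tracks) {
        let segments = try await track.load(.segments)
        print("Track \(track.trackID): \(segments.count) segment(s)")
        for segment in segments {
            print("  source \(segment.timeMapping.source.start.seconds)s -> target \(segment.timeMapping.target.start.seconds)s, duration \(segment.timeMapping.target.duration.seconds)s, empty: \(segment.isEmpty)")
        }
    }
}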
When I use the exportPresetsCompatibleWithAsset interface on iOS 17, everything works fine. However, after I upgraded my iPhone XR to iOS 18, transcoding 4K assets via exportPresetsCompatibleWithAsset still returned success, but when I tried to save the result to the album, it returned error 3302.
Is it because this interface has already been deprecated since iOS 16?
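For what it's worth, a sketch of the per-preset compatibility check that the deprecation notes point to, which might behave differently from the bulk query on iOS 18 (assuming an async context; the preset and file type here are just examples):

import AVFoundation

// Sketch: check one specific preset/asset/output-type combination instead of
// enumerating all compatible presets.
func checkExportCompatibility(for url: URL) async {
    let asset = AVURLAsset(url: url)
    let compatible = await AVAssetExportSession.compatibility(
        ofExportPreset: AVAssetExportPresetHighestQuality,
        with: asset,
        outputFileType: .mp4)
    print("HighestQuality/.mp4 compatible: \(compatible)")
}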
I am using Metal for rendering, and when I call Metal's newCommandQueue, there is some probability of getting a nil return value. However, MTLCreateSystemDefaultDevice does return a non-nil value, which means my device supports Metal. What causes newCommandQueue to return nil, and is there any way to avoid this situation? Thank you.
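For reference, the pattern I am moving to while investigating: create the device and one command queue up front and reuse them (command queues are intended to be long-lived; whether repeated queue creation explains the nil in my case is an assumption):

import Metal

// Sketch: create the Metal device and a single command queue once, at startup,
// and retain them for the app's lifetime, rather than creating queues repeatedly.
final class MetalContext {
    let device: MTLDevice
    let commandQueue: MTLCommandQueue

    init?() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue() else {
            return nil  // surface the failure once, at a known point
        }
        self.device = device
        self.commandQueue = queue
    }
}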
Which framework should I use to capture the screen of a device connected to the Mac, the way OBS or QuickTime Player do when an iOS device is connected via USB? I tried listing devices with AVFoundation and ScreenCaptureKit, but only the Mac's camera, mic, and displays are listed.
When you select New Movie Recording in QuickTime Player, you can choose a connected iPad or iPhone to record its screen. Same with OBS.
What is the way to do this in my own macOS app?
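One direction that reportedly makes these devices visible (untested on my side): opting in to screen-capture devices through CoreMediaIO before discovery. A sketch, assuming macOS 14's .external device type (earlier systems used .externalUnknown):

import AVFoundation
import CoreMediaIO

// Sketch: opt in to iOS-device screen capture devices (the same devices QuickTime's
// "New Movie Recording" offers), then discover them with AVFoundation.
func findConnectedIOSDevices() -> [AVCaptureDevice] {
    var property = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain))
    var allow: UInt32 = 1
    CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject),
                              &property, 0, nil,
                              UInt32(MemoryLayout<UInt32>.size), &allow)

    // After the opt-in, connected iOS devices show up as external muxed devices.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .muxed,
        position: .unspecified)
    return discovery.devices
}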
I recently bought an Insta360 Flow gimbal. When recording video with the instaflow app, I cannot see the location in the Apple Photos app or any other Apple app. However, I can see the location in the Windows Photos app once I download the videos onto my Windows PC. The location is also visible in an Android app once I share the video through my Google account.
With an EXIF app, I can see the location metadata in the EXIF table as well, but again it is not shown as a location.
exiftool on my PC can also see the metadata, including the location, as in the attached screenshot.
Compared to video shot with the built-in Camera app, I cannot find any difference in terms of location metadata.
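To make the comparison concrete, here is a sketch of dumping the QuickTime location metadata (the ISO 6709 entry) from both files with AVFoundation; this is the entry I would expect Photos to read, though that expectation is an assumption:

import AVFoundation

// Sketch: dump a video's QuickTime location metadata to see where (and whether)
// the GPS coordinate is actually stored.
func dumpLocationMetadata(for url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let metadata = try await asset.load(.metadata)
    let locationItems = AVMetadataItem.metadataItems(
        from: metadata,
        filteredByIdentifier: .quickTimeMetadataLocationISO6709)
    for item in locationItems {
        print("Location:", (try await item.load(.stringValue)) ?? "nil")
    }
}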
What could be wrong? I contacted Insta360 app support, but they do not seem to understand what's going on; they just ask the same basic questions over and over, like whether I have enabled GPS location access and whether I am shooting video.
I also contacted Apple support, but they just say it's a third-party issue and refuse to help further. If it's really a third-party issue, how come the location data is actually embedded as metadata, and a Windows PC and an Android device can see it? By the way, I AirDropped this video to all my Apple devices (iPhone 15 ultra, iPad Air, and a very old iPhone), and none of them can see the location.
I have an MP4 file (around 25 minutes) which I need to play, but if I open it in QuickTime Player or AVPlayer, it shows as around 55 minutes in both. I don't understand why this happens. The duration shows correctly when opening the file in a browser or with VS Code's built-in video playback.
Here is the link to the file:
https://www.dropbox.com/scl/fi/hbg59uqx8xdpiqbnx5wz8/videoplayback-1.mp4?rlkey=7o8l8m7j8dhq0o6f3zssgv9bd&st=5lt6apug&dl=0
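A sketch for comparing the container's declared duration against the individual tracks, in case that helps narrow down where the extra 30 minutes comes from:

import AVFoundation

// Sketch: compare the container's declared duration with each track's own
// duration; a 25 min vs 55 min mismatch should show up here.
func compareDurations(for url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let assetDuration = try await asset.load(.duration)
    print("Asset duration: \(assetDuration.seconds / 60) min")
    for track in try await asset.load(.tracks) {
        let range = try await track.load(.timeRange)
        print("Track \(track.trackID) duration: \(range.duration.seconds / 60) min")
    }
}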
My app needs a specific scene that plays a video when my iPhone comes close to an NFC tag. My app can read the data from NFC tags, and the data tells us which video to play.
I tried writing a URL scheme or a universal link to the NFC tags, but both approaches pop up a notification instead of launching my app and playing the video. How can I design my app?
Please give me some advice, thanks!
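For discussion, a sketch of the background tag reading route (iPhone XS and later): iOS scans NDEF tags whose records carry a universal link and hands the app the full NDEF message via NSUserActivity. As far as I know the user still has to tap the system banner once, but the app then receives the tag data directly:

import CoreNFC
import UIKit

// Sketch: receive the NDEF message delivered by background tag reading and route
// to the right video. Assumes the tag's records include a universal link
// associated with this app.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb else { return false }
        let message = userActivity.ndefMessagePayload
        for record in message.records where record.typeNameFormat == .nfcWellKnown {
            // Hypothetical routing: parse the payload to decide which video to play.
            let payload = String(data: record.payload, encoding: .utf8) ?? ""
            print("NFC payload: \(payload)")
        }
        return true
    }
}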