Hello,
I'm trying to get the video from an HDMI USB capture card and show it in a preview layer at 60 fps. The device I am using (ShadowCast 2) supports 1080p at 60 fps in "yuvs" and "420v".
This is my code to build the previewLayer, with the uninteresting parts and the error handling stripped away.
I am using the AVFrameRateRange because the capture device does not report exactly 60.00 fps, but rather <AVFrameRateRange: 0x600000875680 60.00 - 60.00 (1000000 / 60000240 - 1000000 / 60000240)>.
@Observable
final class AVFoundationService: AVService {
    // Live View
    private let session: AVCaptureSession = .init()

    var previewLayer: AVCaptureVideoPreviewLayer {
        let layer = AVCaptureVideoPreviewLayer(session: session)
        layer.videoGravity = .resizeAspect
        return layer
    }

    var activeVideoDevice: AVCaptureDevice? {
        // TODO: implement correct logic
        if let device = videoDevices.first(where: { $0.localizedName.contains("Shadow") }) {
            return device
        }
        return AVCaptureDevice.default(for: .video)
    }

    func setupStreamDemo(completion: @escaping (Error?) -> Void) {
        session.beginConfiguration()
        if let device = activeVideoDevice {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("explode")
                }

                for format in device.formats {
                    let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
                    if dimensions.width == 1920 && dimensions.height == 1080
                        && format.formatDescription.mediaSubType.description == "'yuvs'" {
                        let foundFPS = format.videoSupportedFrameRateRanges.first {
                            Int($0.minFrameRate) == 60 && Int($0.maxFrameRate) == 60
                        }
                        try device.lockForConfiguration()
                        device.activeFormat = format
                        device.activeVideoMinFrameDuration = foundFPS!.minFrameDuration
                        device.activeVideoMaxFrameDuration = foundFPS!.minFrameDuration
                        device.unlockForConfiguration()
                    }
                }
            } catch {
                return completion(error)
            }
        }
        session.commitConfiguration()
        session.startRunning()
        completion(nil)
    }
}
I am using the following code in SwiftUI to show the AVCaptureVideoPreviewLayer.
struct VideoPreviewView: NSViewRepresentable {
    private let previewLayer: AVCaptureVideoPreviewLayer

    func makeNSView(context: Context) -> NSView {
        let view = NSView()
        view.layer = self.previewLayer
        view.layer?.frame = view.bounds
        return view
    }

    func updateNSView(_ nsView: NSView, context: Context) {
        if let layer = nsView.layer as? AVCaptureVideoPreviewLayer {
            layer.session = self.previewLayer.session
        }
    }
}
When I run my app, it ignores whatever I set on device.activeVideoMinFrameDuration and/or device.activeVideoMaxFrameDuration. If I set it to 10 fps it runs at 30; if I set 60 it still runs at 30.
If I open QuickTime alongside my app and start a recording from the USB capture card there, it switches to 60 fps mode.
I am on macOS Sequoia 15.0 with Xcode 16.0.
What am I doing wrong?
Hello,
I apologize if the answer is obvious but I'm having a hard time figuring this one out.
Let's say the user taps an "Edit" button in my LockedCameraCaptureSession. The extension calls:
activity.userInfo = ["ActivityKey": "ID"]
try await session.openApplication(for: activity)
Can I retrieve, in my application, the data stored in activity.userInfo (let's say, a flag "open editor"), or is data passing exclusively handled via the appContext of CameraCaptureIntent?
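For reference, this is where I would expect to pick the activity up on the app side (the activity type string here is just a placeholder, and ContentView is assumed); whether the userInfo actually arrives through this path is exactly what I'm asking:
import SwiftUI

@main
struct MainApp: App {   // hypothetical app entry point
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onContinueUserActivity("com.example.lockedCapture.edit") { activity in
                    // Does the userInfo set in the extension ever show up here?
                    let flag = activity.userInfo?["ActivityKey"] as? String
                    print("Received flag:", flag ?? "nil")
                }
        }
    }
}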
Thank you!
Since upgrading to tvOS 18, AudioConverterFillComplexBuffer (linked below) isn't working for me when converting a stream with these formats. It does still work for decoding AAC, however.
https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer?language=objc
I pass a valid ioOutputDataPacketSize in, but it always comes out as zero.
Has anyone else observed this too?
I wonder if this is related to the issue being discussed widely about 5.1 sound being broken for many people after upgrading to tvOS 18?
https://discussions.apple.com/thread/255769102?login=true&sortBy=rank
EDIT: further information; the callback gets called once, asking for 1 packet (which is ok). I give it one packet and return noErr. However, after this, the callback is never invoked again. Must be a bug?
EDIT2: the same code continues to work correctly on macOS in decoding the same audio stream.
I want to detect when the hardware Camera Control button is pressed or otherwise interacted with on the iPhone 16.
Does Apple provide any API to detect the hardware Camera Control button on the iPhone 16?
Since the iOS 18 update, there's been a bug that always occurs when listening to music through Apple Music: when music is playing and the iPhone enters or exits standby mode, the music pauses by itself for one second. The conditions are otherwise the same as before the update, when this problem didn't exist.
When attempting to play a video using AVPlayer on iOS 18.0, I am encountering an error that does not occur on versions earlier than 18.0. Could you please advise what might be causing this issue?
Error Domain=AVFoundationErrorDomain Code=-11828
Error Domain=NSOSStatusErrorDomain Code=-12847 "(null)"
These are the error codes I see.
This is the response information for the URL, retrieved using the curl command.
HTTP/1.1 200
Content-Disposition: inline;filename="sample.mp4"
Accept-Ranges: bytes
ETag: sample.mp4
Last-Modified: Tue, 20 Jan 1970 23:52:10 GMT
Expires: Mon, 07 Oct 2024 09:15:49 GMT
Content-Range: bytes 0-987561/987561
Content-Type: application/octet-stream
Content-Length: 987561
Date: Mon, 30 Sep 2024 09:15:49 GMT
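For completeness, a minimal sketch of the kind of playback setup involved (the URL below is a placeholder for the one that returns the headers above), showing where the error surfaces on the player item:
import AVFoundation
import Foundation

// Placeholder URL standing in for the server that returns the headers above.
let url = URL(string: "https://example.com/sample.mp4")!
let item = AVPlayerItem(url: url)
let player = AVPlayer(playerItem: item)

// Observe the item status; on iOS 18.0 it moves to .failed with the errors above.
let statusObservation = item.observe(\.status, options: [.new]) { item, _ in
    if item.status == .failed {
        print("Playback failed:", item.error ?? "unknown error")
    }
}

player.play()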
I am facing an issue with the voicemail feature. When someone calls me, the call goes to voicemail only if I tap the voicemail icon. If I do not respond at all to the incoming call, the caller is not given an option to record a voicemail. I have tried switching voicemail off and on. It's working on other devices (iPhone 14 and 15) for my family members, just not for me. How can I resolve this?
Hello everyone,
I am working on an iOS app that involves capturing images automatically, and I would like to control the start/stop of the capture process remotely from a Mac app. I explored the iPhone Mirroring feature, which allows some remote control but has the limitation of only functioning when the iPhone is locked, and it doesn’t permit access to the iPhone’s camera from the Mac.
Ideally, I am looking for a solution that would allow me to:
Remotely control the camera capture process on the iOS app from the Mac app.
Ensure the iPhone’s camera remains fully operational and controllable from the Mac during the capture process.
I have considered using options like Handoff for communication between the apps, but faced some issues communicating between the iOS and Mac apps. I would like to know whether there is a better solution within Apple's ecosystem, or whether there are APIs I might have overlooked.
Any advice or guidance on how to achieve this functionality would be greatly appreciated!
Thanks in advance!
I'm attempting to use AVExternalStorageDevice.requestAccess on iOS 18 using Xcode 16.
When calling requestAccess, a dialog does appear, but the completionHandler closure is never called to indicate whether access was granted. If using the async version, the function just never returns.
Calling requestAccess also results in a mediaServicesWereReset (-11819) error without fail.
Supposedly, "the system only presents the dialog to a person the first time your app calls the method." That also doesn't appear to be the case. The dialog appears every time requestAccess is called, regardless of previous invocations and whether "Allow" or "Don't Allow" was selected.
The dialog itself says "You can change this in Privacy settings." I cannot find this permission anywhere in the Settings app, neither under Privacy & Security nor under the app-specific settings page.
Has anyone else experienced these issues? Am I missing something here? I did suspect a permissions issue and tried adding an NSRemovableVolumesUsageDescription entry to the app. This did not appear to change anything.
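For reference, a minimal sketch of both call styles I'm using (simplified from the real code):
import AVFoundation

// Completion-handler version: the dialog appears, but this closure never runs for me.
func checkAccess() {
    AVExternalStorageDevice.requestAccess { granted in
        print("granted:", granted)
    }
}

// Async version: this call simply never returns for me.
func checkAccessAsync() async {
    let granted = await AVExternalStorageDevice.requestAccess()
    print("granted:", granted)
}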
We have a VoIP calling application that releases resources at the end of a call. When the AudioOutputUnitStop API is invoked, it sometimes takes up to 700 milliseconds to return.
If we comment out that API call as a test, then the AudioUnitUninitialize API takes up to 700 ms instead.
Once the cleanup is done, the application sends a SIP BYE message as part of the call flow. In cases where the API takes more than 200 ms, the BYE message is therefore delayed by that much and gets blocked by the server's DDoS settings. (The DDoS timer starts as soon as the UDP socket is disconnected from the client and times out within 200 ms; a BYE request arriving after that time gets blocked.)
We need to understand why a delay of more than 200 ms is sometimes observed, while in other cases the call returns in less than 50 ms.
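For reference, the teardown sequence is roughly the following (simplified), with timing added to show where the delay appears:
import AudioToolbox
import Foundation

// Simplified teardown with timing, to show where the occasional ~700 ms is spent.
func tearDown(_ audioUnit: AudioUnit) {
    let t0 = Date()
    AudioOutputUnitStop(audioUnit)            // sometimes takes up to ~700 ms to return
    let t1 = Date()
    AudioUnitUninitialize(audioUnit)          // takes the ~700 ms instead if the stop call is skipped
    AudioComponentInstanceDispose(audioUnit)
    let t2 = Date()
    print("stop: \(Int(t1.timeIntervalSince(t0) * 1000)) ms,",
          "uninitialize+dispose: \(Int(t2.timeIntervalSince(t1) * 1000)) ms")
}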
How can I obtain the invoice order ID for the user's purchase order? For example, "MT2345678".
I am working on an image processing app that requires 8 bit per channel (bpc) images.
Sometimes, input images are 16 bpc (e.g. sRGB IEC61966-2.1; extended range).
The app already draws the input image in a CGContext, producing an 8 bpc CGImage. This works fine if the input is 16 bpc: I get an 8 bpc image.
I am wondering whether it would be better for image quality to convert the 16 bpc images to 8 bpc using Accelerate before that CGContext draw, or does that draw essentially do the equivalent?
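For context, the CGContext draw I mean is essentially this (a simplified sketch assuming an sRGB destination):
import CoreGraphics

// Simplified sketch: redraw an input CGImage (possibly 16 bpc) into an 8 bpc sRGB context.
func eightBitCopy(of image: CGImage) -> CGImage? {
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let context = CGContext(
              data: nil,
              width: image.width,
              height: image.height,
              bitsPerComponent: 8,          // force 8 bits per channel
              bytesPerRow: 0,               // let Core Graphics choose the stride
              space: colorSpace,
              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
          ) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return context.makeImage()
}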
When my CarPlay connects and tries to play music, it plays through the vehicle's phone speakers. If I make a phone call and then hang up, it pushes the sound back to the vehicle's stereo speakers. Does anyone have a fix for this? It is very annoying. Sometimes it will just switch to the phone speakers again and I have to repeat the process.
I use AVPlayer to play HLS video successfully on macOS Sonoma, but I encountered this error on macOS Sequoia. Please help me:
Error Domain=AVFoundationErrorDomain Code=-11833 'Cannot Decode' UserInfo={NSUnderlyingError=0x600001e57330 {Error Domain=CoreMediaErrorDomain Code=-12906 '(null)'}, NSLocalizedFailureReason=The decoder required for this media cannot be found., AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}
Thanks!
I have an app that lets the user draw a circle or a line over a live video feed. There's a view for circles and a view for lines. These are displayed in ContentView, and their position in the ZStack is altered based on the user's toolbar selection. Whichever view is on top of the stack is active and usable. That all works fine, but I want the user to be able to select any shape on screen, whether it is a circle or a line. My question is: what is best practice for managing tool selection and drawing the shapes? I'm guessing I should draw all shapes onto one view, but I would like to avoid this because I have all my logic in each shape's respective view.
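To make the question more concrete, here is a rough sketch (hypothetical names, not necessarily best practice) of one way the tool and the shapes could be modeled so that selection works across both shape types:
import Foundation
import SwiftUI
import Observation

// Hypothetical sketch: one model holding every shape and the currently active tool.
enum Tool { case circle, line }

enum DrawnShape: Identifiable {
    case circle(id: UUID, center: CGPoint, radius: CGFloat)
    case line(id: UUID, start: CGPoint, end: CGPoint)

    var id: UUID {
        switch self {
        case .circle(let id, _, _), .line(let id, _, _):
            return id
        }
    }
}

@Observable
final class DrawingModel {
    var tool: Tool = .circle
    var shapes: [DrawnShape] = []
    var selectedShapeID: UUID? = nil   // selection is independent of the active tool
}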
AVAudioEngine and AVAudioSession
Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**).
Why? I want those who run into this issue to be able to find this post through search engines!
This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details.
If you're trying to figure out how to fix a crash, you may find a common way to fix it in this post!
Is it possible to use AVAudioEngine and AVAudioSession together?
The answer is yes.
But you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do will take a lot of testing. I don't know how it is with an IDE, but with just the .app on an iPhone it will take some testing, or a lot of testing.
Something that helped me fix a crash was this: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
This example project by Apple uses both AVAudioEngine and AVAudioSession.
How can I fix AVAudioEngineImpl::Initialize(NSError**)?
I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either.
I will mention common cases that I encountered, though.
inputNode
https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode
You apparently need an inputNode. You need to access it, or else I think there won't be one. And if there isn't one, AVAudioEngine.start will most likely crash.
The audio engine creates a singleton on demand when first accessing this variable.
Doing this has prevented this common issue for me.
.prepare deallocates and can cause a crash if you restart your AudioEngine
Another issue I faced was handling .prepare wrong. You don't always need .prepare, but if you use installTap or other things, I think you do need it.
Here is a common thing to note.
If you had previously initialized inputNode, it could be gone after using .prepare.
You have to ensure you're accessing AVAudioEngine.inputNode (or whatever node you need) again before calling .start().
The voice processing project does this by creating a managing controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.
AVAudioSession's setCategory
You have to experiment with it. The crashes can be very weird. Sometimes your app will only crash once, and then only after you install it again or start it up.
You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not try to call .setActive(false) before you've stopped the audio engine, as it will fail.
Sometimes I'd run into an issue with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue or not.
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
Experiment with it. But .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording, and I can even switch the data sources and polar patterns without any issues.
Sometimes you can get away without calling .setActive at all; I'm not sure if AVAudioEngine does it automatically.
Short Summary
If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (THIS CAN BE DIFFERENT)
Only call .setActive(false) after you have used .stop. Otherwise I believe it has no chance to stop.
AVAudioSession's setCategory is important. Ensure you use .multiRoute / .mixWithOthers, or experiment with all the modes.
If you manage to solve your crash, you'll indeed be able to change the data sources, polar patterns and more!
Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I don't think try/catch will save you; you have to ensure you're not starting it twice. (A minimal sketch putting these points together follows below.)
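Here is a minimal sketch that puts the points above together. It is not a complete solution, just the order of operations that worked for me (I'm using .playAndRecord here for the example; the snippet above used .multiRoute):
import AVFoundation

final class EngineController {
    private let engine = AVAudioEngine()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default,
                                options: [.defaultToSpeaker, .mixWithOthers])
        try session.setActive(true)

        // Access inputNode first; it is created on demand the first time you touch it.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Handle captured buffers here.
        }

        engine.prepare()
        guard !engine.isRunning else { return }   // never start twice
        try engine.start()
    }

    func stop() throws {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        // Deactivate the session only after the engine has stopped.
        try AVAudioSession.sharedInstance().setActive(false)
    }
}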
I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes; I still need to figure out how to solve those. I have a lot of trouble getting my testing app onto my iPhone, so I am sorry if this guide didn't cover every detail.
A HUGE tip from me is to check the documentation. For example, when I read the documentation for inputNode, I learned why my app crashed: it's because I never accessed and initialized one.
The developer documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you try to access if you believe it causes issues. I also recommend finding example projects like the voice processing one, as there aren't any code examples in the documentation.
I had this situation recently where, suddenly, with the new beta update, the sound of my AirPods Pro 2 was all glitchy and spatial audio didn't work at all.
I've already checked my AirPods with other devices and they're completely fine.
Hope this helps!
Dear Apple Support,
I’ve noticed an issue with the Messages app on iOS 18. When I try to send Live Photos, I select the Live Photo icon, but the photo is sent as a still image instead. Despite following the correct steps, the Live Photo feature doesn’t seem to be working properly.
I would appreciate it if your team could look into this and resolve the issue in an upcoming update.
Thank you for your support and continued innovation!
Best regards,
Erfan Nateghie
After updating to 18.1 beta 5, the spatial video option no longer exists in Settings, and the icon for capturing spatial video has also disappeared from the Camera app.
Device model: iPhone 15 Pro.
Hello,
I noticed that SFSpeechRecognizer is broken on iOS 18. During a recognition task, it keeps dropping the recognized text on every pause. For example, if you say "how are you fine", it will drop the "how are you" part and only give you "fine" as the result.
Say "how are you <pause> fine"
// iOS 17 ✅ (perfect final result)
How
How are
How are you
How are you.
How are you. Fine.
// iOS 18 ❌
How
How are
How are you
How are you
Fine
(The text before the pause is dropped, and it fails to recognize the punctuation.)
Reproducing the issue:
Download the official sample project.
Run it on an iOS 18 device or simulator.
Say "how are you fine"
Only "fine" will be displayed.