Hi all, I'm working on a camera system extension where the main app is supposed to transfer a video stream to the camera extension via IOSurface memory sharing.
I have built a sample app that contains all of the logic, but without a camera extension: it uses IOSurface to render a video in one SwiftUI view and show the result in another SwiftUI view, purely for testing purposes. Everything works fine so far.
Now that I've moved the receiver code into the camera extension, I'm having problems accessing the IOSurface via its ID. I share the IOSurface ID via UserDefaults, and I know from the logs that the ID is transferred correctly.
Here is the code that uses IOSurfaceLookup to get the IOSurface. It fails with the message below. The error message prints the surface ID, and it is the correct one; I know this because the main app obtains the ID and prints it as well.
private var surfaceId: Int = -1 {
    didSet {
        logger.info("surfaceId has changed")
        if surfaceId == -1 {
            stopReceivingFrames()
            ioSurface = nil
        } else {
            guard let surface = IOSurfaceLookup(IOSurfaceID(surfaceId)) else {
                logger.error("failed to lookup IOSurface with ID: \(self.surfaceId)")
                return
            }
            self.ioSurface = surface
            logger.info("surface set, now starting receiving frames")
            startReceivingFrames()
        }
    }
}
My gut feeling is that this issue is related to a missing entitlement or to sandboxing. In general, I have a working camera extension; I'm just not able to render a video in the main app and send it over to the camera extension to overlay it on another webcam's feed.
Both the main app and the camera extension are in the same Xcode workspace and share the same App Group.
In short, my actual questions are:
Is there any entitlement required for using IOSurface between app and camera system extension?
Is using IOSurface actually possible in system extensions?
Is there any specific setting/requirement that I need to handle to make this work?
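In case it helps frame the question: since IOSurfaceLookup resolves a global surface ID and can be blocked by sandboxing, one alternative is handing the surface itself across the process boundary instead of its ID, using the IOSurface XPC helpers. A minimal sketch, assuming some XPC channel already exists between the app and the extension (setting that channel up is not shown):

import IOSurface
import XPC

// Sender (main app): wrap the IOSurface in an XPC object instead of publishing its global ID.
func makeSurfaceMessage(for surface: IOSurfaceRef) -> xpc_object_t {
    let message = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_value(message, "surface", IOSurfaceCreateXPCObject(surface))
    return message
}

// Receiver (camera extension): recover the IOSurface from the XPC message.
func surface(from message: xpc_object_t) -> IOSurfaceRef? {
    guard let surfaceObject = xpc_dictionary_get_value(message, "surface") else { return nil }
    return IOSurfaceLookupFromXPCObject(surfaceObject)
}

The IOSurface Objective-C class also conforms to NSSecureCoding, so it can be sent over an NSXPCConnection directly, which avoids the global-lookup path entirely.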
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is a continuous crackling noise during playback.
I am sending the audio data with the library https://github.com/mindAndroid/swift-rtp, creating packets and sending them.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
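For context on where crackling like this often comes from: if the buffer is serialized using its full capacity instead of its frameLength, or if the sender and receiver disagree on the sample format, the receiver ends up playing garbage samples between the valid ones. A minimal sketch of serializing only the valid frames, assuming a mono, non-interleaved Float32 tap format (the actual tap format may differ):

import AVFoundation

// Serialize only the frames that are actually valid in this buffer.
// Copying the buffer's full capacity is a common source of clicks/crackling on playback.
func payload(from buffer: AVAudioPCMBuffer) -> Data? {
    guard let channelData = buffer.floatChannelData else { return nil }
    let frameCount = Int(buffer.frameLength)
    // Assumes mono, non-interleaved Float32; adjust for the real tap format.
    return Data(bytes: channelData[0], count: frameCount * MemoryLayout<Float32>.size)
}

UDP itself can also drop or reorder packets, which produces similar artifacts, so sequence numbers or a jitter buffer on the receiving side are worth checking as well.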
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect two mono nodes as the left/right inputs of a stereo node. I was able to split the input audio into two mono nodes using AVAudioConnectionPoint and channelMap.
But I can't figure out how to connect them back into a stereo node.
I'll post the code I have so far. The use case is that I'm trying to process the left and right channels with separate audio units.
Any ideas?
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!
let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()
[leftInputMixer, rightInputMixer, leftOutputMixer,
rightOutputMixer, channelMixer].forEach { engine.attach($0) }
let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)
plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer
leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]
engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)
// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)
// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
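For reference, one possible way to recombine the two mono paths (a sketch, not working code from the project): pan each mono output mixer hard to one side and connect both to channelMixer with the mono format, replacing the last two connections above.

// Assumed here: stereoFormat is a 2-channel format at the engine's native sample rate.
let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 2)!

leftOutputMixer.pan = -1.0   // render entirely into the left channel of channelMixer
rightOutputMixer.pan = 1.0   // render entirely into the right channel of channelMixer

engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)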
We are experiencing thousands of crashes in our application when attempting to present the camera through a web view. The app crashes during this process, and the crash logs point to
WebCore::AVVideoCaptureSource::create and
WebCore::RealtimeMediaSourceCenter::getUserMediaDevices.
This issue has only been observed in iOS 18.2 beta versions (beta 1 - 22C5109p, beta 2 - 22C5125e, beta 3 - 22C5131e).
In iOS versions below 18.2 the functionality works, and we haven't identified any correlation with specific device models. The problem seems to stem from changes to the WebCore framework introduced in the 18.2 betas.
We kindly request a review and fix for this issue in upcoming beta releases to restore functionality. Let us know if there are any workarounds or adjustments we can implement in the interim.
Thank you for your attention to this matter.
Hi. I'm encountering random crashes in my camera app. After some investigation, I found that it is being terminated by the system; a crash log is generated, but the information in it is not very useful. Here is the log found via the Console app.
Termination & Crash log
"Camera not actively used; AVCaptureEventInteraction not installed":
Received termination request from [osservice<com.apple.SpringBoard>:10931] on <RBSProcessPredicate <RBSProcessInstancePredicate| [app<com.juniperphoton.PhotonCam]>> with context <RBSTerminateContext| explanation:Capture Application Requirements Unmet: "Camera not actively used; AVCaptureEventInteraction not installed" reportType:CrashLog maxTerminationResistance:Interactive>
The crash log exported from the device will have some common information like:
It's an EXC_CRASH (SIGKILL) type with no termination reason.
Exception Type: EXC_CRASH (SIGKILL)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: RUNNINGBOARD 0
It's triggered by the main thread, but it seems to be waiting for an event to process.
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x1ee165788 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1ee168e98 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x1ee168db0 mach_msg_overwrite + 424
3 libsystem_kernel.dylib 0x1ee168bfc mach_msg + 24
4 CoreFoundation 0x19cbe47f4 __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19cbe3ea0 __CFRunLoopRun + 1212
6 CoreFoundation 0x19cc36274 CFRunLoopRunSpecific + 588
7 GraphicsServices 0x1e9d6d4c0 GSEventRunModal + 164
8 UIKitCore 0x19f783480 -[UIApplication _run] + 816
9 UIKitCore 0x19f3a9410 UIApplicationMain + 340
10 UIKitCore 0x19fae4bb0 0x19f394000 + 7670704
11 PhotonCam 0x1002e7e3c 0x1002cc000 + 114236
12 dyld 0x1c2d5ade8 start + 2724
Address size fault on the main thread
Thread 0 crashed with ARM Thread State (64-bit):
...
far: 0x0000000000000000 esr: 0x56000080 Address size fault
I once tried to reproduce this issue with the debugger attached, and it reported:
Terminated due to signal 9
When the crash or termination happened:
No AVCaptureSession was running.
The app was in the foreground and the user was interacting with features like viewing or editing photos. When users leave the camera view, for example by entering the gallery or settings, the camera session is stopped.
Both the TestFlight and Debug builds have the same issue.
No third-party crash reporter is installed (I deliberately disable it in the Debug and TestFlight builds).
The app has adopted LockedCameraCapture, but it is currently running as the main app target (if it weren't, the app would show an unlock button, so I can confirm this).
Also, regarding memory consumption, there is no JetsamEvent around the crash time.
Device and app information
Additionally, some information about the tech stack and the current state of my device and my app:
iPhone 16 Pro with iOS 18.2 Beta 3.
The app is a camera-based app (PhotonCam, which you can find on the App Store); its main functionality is the camera feature, built with AVFoundation + Core Image + Metal.
It has adopted the Camera Control, AVCaptureEventInteraction and LockedCameraCapture features.
If I remember correctly, it also occurs in the iOS 18.1 Release build, but I currently have no such device to confirm. In iOS 17.x the issue never happened.
Regarding this termination, the first thing that comes to mind is the "watchdog" mechanism that terminates processes running under the LockedCameraCapture feature. However, I can confirm that the app is currently running as the main target from the Home Screen.
Has anybody encountered this kind of issue and has found some solutions? Thanks in advance.
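For anyone comparing setups: the interaction named in the termination message is normally installed on a view that is visible while the capture session runs. This is only a minimal reference sketch of that installation (iOS 17.2+), not a confirmed fix for the termination described above:

import AVKit

// Install (and keep enabled) an AVCaptureEventInteraction on the capture UI's view
// so hardware button presses are routed to the app while the camera is in use.
let captureEventInteraction = AVCaptureEventInteraction { event in
    if event.phase == .began {
        // e.g. trigger a photo capture here
    }
}
captureEventInteraction.isEnabled = true
view.addInteraction(captureEventInteraction)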
Hi,
I'm developing a MusicKit integration in my iOS app, and I want to let the user pick songs from their recently played items (that part is done). The problem is that the queue is not auto-generated, so the user has to pick another song manually if they want playback to continue.
Is there any method to ask for similar or recommended songs based on a song the user has already selected?
That would be really great :)
Also, if you know: is there a publisher for the playback time/duration, or do I need to use a timer? Thanks.
David.
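On the playback-time question: as far as I can tell there is no built-in publisher for it, so one option is to poll the player on a timer and republish the value yourself. A minimal sketch, assuming ApplicationMusicPlayer is the player in use:

import Foundation
import Combine
import MusicKit

// Polls ApplicationMusicPlayer's playbackTime twice a second and republishes it,
// since MusicKit does not expose playbackTime as a published/observable property.
final class PlaybackTimeObserver: ObservableObject {
    @Published var playbackTime: TimeInterval = 0
    private var timerCancellable: AnyCancellable?

    func start() {
        timerCancellable = Timer.publish(every: 0.5, on: .main, in: .common)
            .autoconnect()
            .sink { [weak self] _ in
                self?.playbackTime = ApplicationMusicPlayer.shared.playbackTime
            }
    }

    func stop() {
        timerCancellable = nil
    }
}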
Hi fellow iOS developers! 👋
I've written Swift code that downloads a video from a URL and converts it into a Live Photo. The conversion process seems fine, but when I try to set the generated Live Photo as a wallpaper on iOS 17+, it shows the message 'Motion not Available.'
Has anyone else experienced this issue or know why this might be happening? Could it be related to changes in iOS 17 Live Photo handling or the generated file structure? Any help or suggestions would be greatly appreciated! 🙏
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings.
I found that it sometimes returns nil; here are my test results.
hevc .mov with activeColorSpace sRGB
60FPS -> ok
120FPS -> ok
hevc .mov with activeColorSpace displayP3_HLG
60FPS -> nil
120FPS -> nil
h264 .mov
30FPS -> ok
60FPS -> nil
120FPS -> nil
So, if no recommended settings are given and there is no documentation for this case, how are developers supposed to use this API?
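One way to cope with the nil case (a sketch, not something documented) is to ask for the recommendation first and fall back to explicit settings otherwise; the fallback keys and values below are illustrative assumptions only:

import AVFoundation

func writerVideoSettings(for output: AVCaptureVideoDataOutput,
                         dimensions: CMVideoDimensions) -> [String: Any] {
    // Prefer the recommendation when AVFoundation provides one.
    if let recommended = output.recommendedVideoSettings(forVideoCodecType: .hevc,
                                                         assetWriterOutputFileType: .mov) {
        return recommended
    }
    // Fallback used when the recommendation is nil (e.g. the P3_HLG / high frame rate cases above).
    return [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: Int(dimensions.width),
        AVVideoHeightKey: Int(dimensions.height)
    ]
}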
I’ve built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall, it works great!
However, I’ve noticed some unusual logs popping up:
Domain: NSOSStatusErrorDomain
Error Codes: -16384, -16155, -16512
Error -16512 keeps happening repeatedly for one of our users, preventing them from playing any media at all.
I’ve searched around but can’t find any documentation explaining what these errors mean.
Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated!
Thanks!
Hello,
To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video.
I obtained the output from the source video in the audio and video tracks as follows:
let audioSettings = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVSampleRateKey: 44100,
AVNumberOfChannelsKey: 2
] as [String : Any]
var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!,
outputSettings: audioSettings)
// Video
let videoSettings = [
kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]
var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)
With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer.
How can I add it to the destination video?
Should I use a while loop, considering I already have the AVAssetWriter set up?
Something like this:
while let sampleBuffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}
Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the pixel buffer input of the destination video be set up?
Provide an example, something like:
let audioSettings = […] as [String: Any]
Looking forward to your response.
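For orientation, a minimal sketch of how the writer side could look, assuming the output is a .mov with AAC audio and H.264 video matching the source track's dimensions (outputURL and videoTrack come from your own code; the input/adaptor names are placeholders):

import AVFoundation

let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

// Audio input: re-encode the LPCM samples from the reader as AAC.
let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
audioInput.expectsMediaDataInRealTime = false

// Video input plus a pixel buffer adaptor for appending CVPixelBuffers.
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: Int(videoTrack!.naturalSize.width),
    AVVideoHeightKey: Int(videoTrack!.naturalSize.height)
]
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
videoInput.expectsMediaDataInRealTime = false

let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])

writer.add(videoInput)
writer.add(audioInput)
writer.startWriting()
writer.startSession(atSourceTime: .zero)

A while loop over copyNextSampleBuffer works for appending; just check isReadyForMoreMediaData (or use requestMediaDataWhenReady(on:using:)) before each append, call markAsFinished() on both inputs, and finish with finishWriting(completionHandler:).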
We develop a video playback app on Apple TV which has the two following features:
Its content browsing screen has installed a gesture recognizer for presses on the PlayPause Siri remote button in order to directly launch a playback. The gesture recognizer is attached to the content browsing UIViewController view.
It presents its own custom playback UI with an AVPlayerLayer for the video and supports MPNowPlayingSession in order to publish current playback information and respond to remote commands. It also supports switching between fullscreen and Picture in Picture playback.
Both features work fine; i.e., the playback is launched when pressing the PlayPause Siri remote button and, during playback, the playback info is properly advertised on other devices and remote commands are triggered as expected.
However, when pressing the PlayPause Siri remote button while the video is playing in PiP, the "pause" remote command is sometimes triggered instead of the .playPause gesture recognizer. The issue may not occur on the first press but on subsequent PlayPause presses. Navigating a bit in the app UI seems to help prevent the issue from occurring.
Finally, the issue only occurs if the video is playing. If the video is paused, the PlayPause Siri remote button gesture is always recognized instead of the remote command.
Please note that, before using MPNowPlayingSession (and the corresponding MPRemoteCommandCenter), the app was using the default MPRemoteCommandCenter to support remote commands and the issue did not occur.
We can't reproduce this issue with the Apple TV app, so there's probably something we are not doing right. Does anyone have a clue?
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export).
For older devices, we want to limit the resolution at which the video frames are processed for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do that.
However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
let input = request.sourceImage // <- this still has the video's original size
// ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and, especially, a lot of memory. It would be much better if AVFoundation decoded the video frames at the desired size before passing them into the composition handler.
Is there a way to tell AVFoundation to load smaller video frames?
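For illustration, the in-pipeline workaround mentioned above (scaling the source image at the top of the handler) would look roughly like this; it does not change what AVFoundation decodes, which is exactly what we are hoping to avoid:

let targetSize = CGSize(width: 1280, height: 720) // example limit for older devices
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let source = request.sourceImage // still decoded at the video's original size
    let scale = min(targetSize.width / source.extent.width,
                    targetSize.height / source.extent.height)
    let scaled = source.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // ... apply the Core Image filter chain to `scaled` ...
    request.finish(with: scaled, context: nil)
})
composition.renderSize = targetSize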
Hello,
I use AVPlayer in my project to play network movies.
Most movies play normally, but I found that the sound sometimes disappears when I play a specific 4K network video stream.
The video keeps playing, but the audio stops after the video has played for a while.
If I pause the player and then resume, the sound comes back but disappears again after several seconds.
Checking the AVPlayerItem status:
isPlaybackLikelyToKeepUp == true
isPlaybackBufferEmpty == false
player.volume > 0
According to the values above, it doesn't seem to be caused by an empty playback buffer or a volume issue. I'm quite confused by this situation.
Movie information
Video
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High L5.1
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Bit rate mode : Variable
Bit rate : 100.0 Mb/s
Width : 3 840 pixels
Height : 2 160 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 29.970 (30000/1001) FPS
Audio
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 5 min 19 s
Bit rate mode : Constant
Bit rate : 192 kb/s
Nominal bit rate : 48.0 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Does anyone know whether AVPlayer has limitations when playing high-bitrate movie streams, and are there any solutions?
Dear Apple,
I am currently working on a mental-health-related research project supported by South Korean government funding.
In addition to SensorKit access, we are working with data from the microphone. Is there any contact point, aside from the SensorKit access application, to discuss the possibility of collecting research data from a restricted sample of participants?
Hello,
I'm a deaf-blind programmer.
I'm experiencing memory issues in my app. Essentially, I'm writing a video.
In this output video, I get content from two sources.
The first source is an already recorded video of 18 seconds (just for testing). It will be shown at the beginning of the output video.
The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video.
So, in the end, the output video should be:
18-second video + array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write().
However, my app crashes as soon as I start the first process.
I'm using AVAssetWriter to write the video and AVAssetReader to read the video.
Below, I'll show the code where I get the CMSampleBuffer.
I'd like an example of how to add the 18-second video to the beginning of the output video.
It doesn't need to be a big piece of code.
Here it is:
// Variables
var audioReaderBuffers = [CMSampleBuffer]()
var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()
// Get CMSampleBuffer of a video asset
if let videoURL = videoURL {
    let videoAsset = AVAsset(url: videoURL)
    Task {
        let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
        let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
        let reader = try AVAssetReader(asset: videoAsset)
        let videoSettings = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
            kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
        ] as [String: Any]
        let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)
        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2
        ] as [String: Any]
        let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack,
                                                         outputSettings: audioSettings)
        reader.add(readerVideoOutput)
        reader.add(readerAudioOutput)
        reader.startReading()
        // Video CMSampleBuffer
        while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
            autoreleasepool {
                if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                    let pixBuf = imgBuffer as CVPixelBuffer
                    let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    videoReaderBuffers.append((frame: pixBuf, time: pTime))
                }
            }
        }
    }
}
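On the question of how to add the 18-second video at the beginning of the output: a minimal sketch, assuming an AVAssetWriter has already been set up with a video input, a pixel buffer adaptor, and an audio input (writer, videoInput, adaptor, and audioInput below are placeholder names for those):

// Append the prerecorded video's frames first; the photo frames and the
// AVSpeechSynthesizer buffers can then be appended at later presentation times.
writer.startWriting()
writer.startSession(atSourceTime: .zero)

for (frame, time) in videoReaderBuffers {
    // Wait until the input can take more data, to keep memory bounded.
    while !videoInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01)
    }
    adaptor.append(frame, withPresentationTime: time)
}

for sampleBuffer in audioReaderBuffers {
    while !audioInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01)
    }
    audioInput.append(sampleBuffer)
}

Regarding the memory issue: collecting every decoded frame of the 18-second video into videoReaderBuffers is itself very expensive; appending each frame to the writer directly inside the reading loop, instead of storing it in an array first, should keep the footprint much smaller.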
I tried configuring the preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases, AVPlayer works correctly according to the configured buffer duration. However, when the device is connected to a 5G network, the configuration value no longer works.
For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
I am using ImageCaptureCore to access and (sometimes) download media files from a digital camera connected via USB (either to a Mac or to an iOS device with Apple's Lightning to USB 3 Camera Adapter).
This works very well in general, but what puzzles me is that for the ICCameraFile's EXIF creation/modification date, it always returns nil.
I can access the ICCameraItem's creation/modification date instead, which, as it says in the documentation "usually [is] the same as its EXIF creation date", but, well not always. Generally the EXIF tags are more reliable than the file dates, especially the modification date is easily messed up when copying files.
As for my cameras, they show the stable EXIF date on their display, so for consistency I would prefer to use the same in my app. Is there a way to get it without downloading the image from the camera and reading it from the file?
Does it possibly depend on the brand of camera (I mostly have Canon) whether ICCameraFile.exifCreationDate is ever populated or always nil?
For a thumb drive with DCIM folder, which is treated just like a camera, it is also nil.
I have an iPad app, written in Objective-C and distributed through the Enterprise developer program, as it is not for public use but specific to some large companies.
The app has a local database and works offline.
For some functions of the app I need to display images (not edit or cut them, just display them).
Right now it integrates the MWPhotoBrowser viewer, which has not been maintained for almost 10 years, so in addition to compilation warnings I have to fight some historical bugs, especially with high-resolution images. https://github.com/mwaterfall/MWPhotoBrowser
Do you know of a modern, maintained offline photo viewer? I'm evaluating both free and paid options (perhaps an SDK). My needs are very basic.
I found https://github.com/TimOliver/TOCropViewController, but I would need to disable its photo-editing features, and above all I would lose the useful ability to display multiple images (MWPhotoBrowser showed a gallery for multiple images).
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz).
To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded, just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with the downsampling rather than the mic setup itself.
To make progress, I implemented a workaround: resampling each chunk delivered by installTap (every 50 ms in my case) with AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely because each chunk is processed separately instead of being downsampled continuously.
Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate or for continuously downsampling the live input without chunk-based artifacts?
This issue seems longstanding, as noted in a 2018 forum post:
https://forums.developer.apple.com/forums/thread/111726
Any guidance on configuring or processing mic input at a lower sample rate in real-time would be greatly appreciated. Thank you!
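To make question 2 concrete, a minimal sketch of one alternative: create a single AVAudioConverter up front and reuse it for every tapped buffer, so its internal resampling state carries across chunk boundaries (engine is assumed to be the AVAudioEngine; buffer sizes and formats are illustrative):

import AVFoundation

let inputFormat = engine.inputNode.outputFormat(forBus: 0)
let outputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                 sampleRate: 16_000,
                                 channels: 1,
                                 interleaved: false)!
// One converter for the whole recording, instead of one per tapped chunk.
let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

engine.inputNode.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { buffer, _ in
    let ratio = outputFormat.sampleRate / inputFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
    guard let converted = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: capacity) else { return }
    var consumed = false
    _ = converter.convert(to: converted, error: nil) { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }
    // Accumulate `converted` into the current 10-second chunk here.
}

Whether this fully removes the boundary artifacts is uncertain, but keeping one converter alive is the usual way to let the resampler's filter state span chunks.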
I’ve encountered an issue when trying to transcribe audio during a SharePlay session in VisionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference.
Here’s the relevant code snippet that throws the error:
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
let audioEngine = AVAudioEngine()
let request = SFSpeechAudioBufferRecognitionRequest()
request.shouldReportPartialResults = true
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
let inputNode = audioEngine.inputNode
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
request.append(buffer)
}
audioEngine.prepare()
try audioEngine.start()
return (audioEngine, request)
}
The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to let concurrent audio sessions coexist, it doesn't appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear way to proceed.
Has anyone else faced similar issues with AVAudioSession and SharePlay in VisionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!
The specific error is:
The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)