When I try to play video on the Apple Vision Pro simulator using a custom view with an AVPlayerLayer (see my VideoPlayerView below), nothing displays but a black screen, while the audio for the video I'm trying to play continues in the background. I've tried everything I can think of to resolve this issue, but to no avail.
import SwiftUI
import AVFoundation
import AVKit

struct VideoPlayerView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.videoGravity = .resizeAspect
        view.layer.addSublayer(playerLayer)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        if let layer = uiView.layer.sublayers?.first as? AVPlayerLayer {
            layer.frame = uiView.bounds
        }
    }
}
I have noticed, however, that if I use the default VideoPlayer (as shown below) instead of my custom VideoPlayerView, the video displays just fine, but any modifiers I apply to that VideoPlayer (like the ones in my custom struct above) cause the video to display black while the audio plays in the background.
import SwiftUI
import AVKit

struct MyView: View {
    var player: AVPlayer

    var body: some View {
        ZStack {
            VideoPlayer(player: player)
        }
    }
}
Does anyone know a solution to this problem, so that the video displays properly instead of appearing as a black screen with audio playing in the background?
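One variation I've been experimenting with, in case the problem is the sublayer never being given a non-zero frame, backs the representable with a view whose backing layer is the AVPlayerLayer itself, so the layer is always sized with the view (a sketch; PlayerContainerView and PlayerLayerView are placeholder names):

import SwiftUI
import UIKit
import AVFoundation
import AVKit

// The view's backing layer *is* an AVPlayerLayer, so it always tracks the view's bounds.
final class PlayerContainerView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }
    var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }
}

struct PlayerLayerView: UIViewRepresentable {
    let player: AVPlayer

    func makeUIView(context: Context) -> PlayerContainerView {
        let view = PlayerContainerView()
        view.playerLayer.player = player
        view.playerLayer.videoGravity = .resizeAspect
        return view
    }

    func updateUIView(_ uiView: PlayerContainerView, context: Context) {
        // Keep the layer pointing at the current player if it changes.
        uiView.playerLayer.player = player
    }
}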
I use a data cable to connect my Nikon camera to my iPhone. In my project I use the ImageCaptureCore framework. I can read the photos on the camera's memory card, but when I press the camera's shutter to take a picture, the camera does not respond, even though the connection between the camera and the phone is fine. The camera screen then shows a picture of a laptop. I don't know why that is. I hope someone can help me.
Hello,
I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)".
In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":
Thread 19 Queue : com.apple.portrait.effect_init (serial)
Is it a known bug? Is there a workaround for this case?
Best regards
An external webcam is detected by AVCam, but preview and capture are systematically upside down (the built-in FaceTime HD camera may behave the same way).
Is it a known bug? Is there a workaround for this case?
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and hit an issue that just crashes the app:
EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688

+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange
Below is the relevant code in the Recorder class.
import AVFoundation

// Properties such as `nc`, `outputFormat`, `recordedFile`, `state`, and
// `hadSetupNotification` are defined elsewhere in the class.
public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?

    private func startRecording(options recordingOptions: RecordingOptions) throws {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)

        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!

        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )

        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            // Tear down and rebuild the engine so the tap uses the new hardware format.
            self.releaseAudioEngine()
            self.setupAudioEngine()

            if self.state == "recording" {
                // We could attempt to keep recording.
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else {
                return
            }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine

        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                // Size the output buffer for the sample-rate conversion.
                let frameCapacity =
                    AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength
                    / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!

                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}
Apart from that, the recording module works normally; it is just the configuration change that it does not handle well.
I understand that when the configuration changes, I need to reinitialize the audio engine to pick up the correct input format (since the new configuration or audio device can have a different sample rate and so on). If I don't do that, the app also crashes, perhaps due to the format mismatch.
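For what it's worth, below is a sketch of the guard I'm considering adding to setupAudioEngine, assuming the crash comes from the input node briefly reporting an unusable (0 Hz / 0-channel) format right after the route change:

private func setupAudioEngine() {
    let audioEngine = AVAudioEngine()
    self.audioEngine = audioEngine

    let inputNode = audioEngine.inputNode
    let inputFormat = inputNode.inputFormat(forBus: 0)

    // Bail out instead of force-unwrapping the converter while the hardware
    // format is not usable yet (e.g. in the middle of a device transition).
    guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0,
          let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
        sendInterruptEvent()
        return
    }

    // ... install the tap exactly as before, using `converter`.
}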
AVAudioRecorder is not an option for me.
Thank you for your help.
Not sure when it happened, but I can no longer play explicit songs in my app using MusicKit v3.
I've turned off restrictions and made sure I have access to explicit content in...
My phone (Screen Time)
My computer (Screen Time)
My iPad (Screen Time)
music.apple.com (Settings)
and I still get this error in the console when I try to play a song:
CONTENT_RESTRICTED: Content restricted
at set isRestricted (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:296791)
at SerialPlaybackController._prepareQueue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:318357)
at SerialPlaybackController._prepareQueue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:359408)
at set queue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:308934)
at https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:357429
at Generator.next (<anonymous>)
at asyncGeneratorStep$j (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:351481)
at _next (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:351708)
I'm exploring enabling speech-to-command processing for a game, but would like to try a baseline of voice recognition within that to allow two people in close proximity to interact without interfering with each other's voice commands to the system.
(It's for an accessible game idea.)
My app is consistently crashing for a specific user on 14.3 (iMac (24-inch, M1, 2021)) when their library is being retrieved in full. The user says they have 36k+ songs in their library, which includes purchased music.
This is the code making the call:
var request = MusicLibraryRequest<Album>()
request.limit = 10000
let response = try await request.response()
I'm aware of a similar (?) crash, FB13094022 (https://forums.developer.apple.com/forums/thread/736717), that was claimed to be fixed in 14.2. Not sure if this is a separate issue or linked.
I’ve submitted new FB13573268 for it.
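In the meantime, I'm considering fetching the library in smaller batches, something like the sketch below (this assumes MusicItemCollection's batch paging, hasNextBatch / nextBatch(), applies to library responses the way it does to catalog responses). The crash report follows after the sketch.

import MusicKit

// Fetch the full album library in batches instead of one 10,000-item request.
func fetchAllAlbums() async throws -> [Album] {
    var request = MusicLibraryRequest<Album>()
    request.limit = 1000

    var albums: [Album] = []
    var batch: MusicItemCollection<Album>? = try await request.response().items

    while let current = batch {
        albums.append(contentsOf: current)
        // Ask for the next page only if the collection says there is one.
        batch = current.hasNextBatch ? try await current.nextBatch() : nil
    }
    return albums
}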
CrashReporter Key: 0455323d871db6008623d9288ecee16c676248c6
Hardware Model: iMac21,1
Process: Music Flow
Identifier: com.third.musicflow
Version: 1.2
Role: Foreground
OS Version: Mac OS 14.3
NSInternalInconsistencyException: No identifiers for model class: MPModelSong from source: (null)
0 CoreFoundation +0xf2530 __exceptionPreprocess
1 libobjc.A.dylib +0x19eb0 objc_exception_throw
2 Foundation +0x10f398 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:]
3 MediaPlayer +0xd59f0 -[MPBaseEntityTranslator _objectForPropertySet:source:context:]
4 MediaPlayer +0xd574c -[MPBaseEntityTranslator _objectForRelationshipKey:propertySet:source:context:]
5 MediaPlayer +0xd5cd4 __63-[MPBaseEntityTranslator _objectForPropertySet:source:context:]_block_invoke_2
6 CoreFoundation +0x40428 __NSDICTIONARY_IS_CALLING_OUT_TO_A_BLOCK__
7 CoreFoundation +0x402f0 -[__NSDictionaryI enumerateKeysAndObjectsWithOptions:usingBlock:]
8 MediaPlayer +0xd5c1c __63-[MPBaseEntityTranslator _objectForPropertySet:source:context:]_block_invoke
9 MediaPlayer +0x11296c -[MPModelObject initWithIdentifiers:block:]
10 MediaPlayer +0xd593c -[MPBaseEntityTranslator _objectForPropertySet:source:context:]
11 MediaPlayer +0xd66c4 -[MPBaseEntityTranslator objectForPropertySet:source:context:]
12 MediaPlayer +0x1a7744 __47-[MPModeliTunesLibraryRequestOperation execute]_block_invoke
13 iTunesLibrary +0x16d84 0x1b4e1cd84 (0x1b4e1cd30 + 84)
14 CoreFoundation +0x5dec0 __invoking___
15 CoreFoundation +0x5dd38 -[NSInvocation invoke]
16 Foundation +0x1e874 __NSXPCCONNECTION_IS_CALLING_OUT_TO_REPLY_BLOCK__
17 Foundation +0x1cef4 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:]
18 Foundation +0x1c850 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_3
19 libxpc.dylib +0x10020 _xpc_connection_reply_callout
20 libxpc.dylib +0xff18 _xpc_connection_call_reply_async
21 libdispatch.dylib +0x398c _dispatch_client_callout3
22 libdispatch.dylib +0x21384 _dispatch_mach_msg_async_reply_invoke
23 libdispatch.dylib +0xad24 _dispatch_lane_serial_drain
24 libdispatch.dylib +0xba04 _dispatch_lane_invoke
25 libdispatch.dylib +0x16618 _dispatch_root_queue_drain_deferred_wlh
26 libdispatch.dylib +0x15e8c _dispatch_workloop_worker_thread
27 libsystem_pthread.dylib +0x3110 _pthread_wqthread
28 libsystem_pthread.dylib +0x1e2c start_wqthread
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photos? I can save the live photo, but I can't set it as wallpaper; I get the error "Motion is not available in iOS 17". There are already applications that allow you to do this, such as VideoToLive. What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-loving, make-friends and dating app. I have gone as far as purchasing the book "Exploring MusicKit" by Rudrank Riyam, but to no avail.
I have trained a model to classify some symbols using Create ML.
In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data.
If I use a CVPixelBuffer obtained from an AVCaptureSession, the classifier runs as I would expect. If I point it at the symbols it works fairly accurately, so I know the model is trained reasonably well and works in my app.
If I try to use a CGImage obtained by cropping a section out of a larger image (from the gallery), the classifier does not work. It always seems to return the same result (although the confidence is not exactly 1.0 and varies for each image, it is within a few decimal places of it, e.g. 0.9999).
If I pause the app when I have the cropped image, use the debugger to grab it (via the little eye icon, then open in Preview), and drop the image into the Preview section of the MLModel file or into Create ML, the model correctly classifies it.
If I scale the cropped image to the same size as I get from my camera, and convert the CGImage to a CVPixelBuffer with the same size and colour space as the camera (1504 x 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options. So I know that 'something' is happening, but it's not the correct thing.
I was under the impression that passing a CGImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML this conversion is obviously being done behind the scenes, because the cropped part is detected.
What am I doing wrong?
tl;dr
my model works, as backed up by using video input directly and also dropping cropped images into preview sections
passing the cropped images directly to the VNImageRequestHandler does not work
modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.
I'd like my app to behave the same way the preview part behaves: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns a result, same as in Create ML.
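For reference, this is the kind of preprocessing I've been experimenting with: rendering the cropped CGImage into a BGRA pixel buffer myself and making the crop/scale option explicit (a sketch; the 299-point side length and the function name are placeholders):

import Vision
import CoreImage
import CoreVideo

// Classify a cropped CGImage after rendering it into a fixed-size BGRA buffer.
func classify(_ cropped: CGImage, with model: VNCoreMLModel) throws {
    let side = 299
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, side, side,
                        kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }

    // Scale the crop to the buffer's size and render it with Core Image.
    let ciImage = CIImage(cgImage: cropped)
        .transformed(by: CGAffineTransform(
            scaleX: CGFloat(side) / CGFloat(cropped.width),
            y: CGFloat(side) / CGFloat(cropped.height)))
    CIContext().render(ciImage, to: buffer)

    let request = VNCoreMLRequest(model: model)
    request.imageCropAndScaleOption = .centerCrop   // match the model's training setup

    try VNImageRequestHandler(cvPixelBuffer: buffer, options: [:]).perform([request])
    print(request.results ?? [])
}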
Hello,
Starting in iOS 17, our application started having issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session.
It also fails silently, without reporting any problems to the app. We only notice that no frames are being rendered or sent to the remote devices.
Here's what shows-up in the console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
Is anyone else seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below.
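In the meantime, we're adding observers like the following to at least surface a runtime error or interruption if one is ever posted (a sketch; `session` is our AVCaptureSession):

import AVFoundation

// Surface capture failures that otherwise seem to happen silently.
func observeCaptureFailures(for session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: .AVCaptureSessionRuntimeError,
        object: session,
        queue: .main
    ) { notification in
        let error = notification.userInfo?[AVCaptureSessionErrorKey] as? AVError
        print("Capture session runtime error: \(String(describing: error))")
    }

    NotificationCenter.default.addObserver(
        forName: .AVCaptureSessionWasInterrupted,
        object: session,
        queue: .main
    ) { notification in
        print("Capture session interrupted: \(String(describing: notification.userInfo))")
    }
}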
Thanks
I've been trying to make a native visionOS version of my iPad app, which uses AVAudioPlayer. Everything works fine on iOS and iPadOS; however, when running on visionOS, the audio sounds like it's constantly skipping (both in the simulator and on an actual device).
Does anyone know why this might be, how to fix it, or a workaround?
Every time you listen to music you are streaming it from a server powered frequently by coal or gas, and rarely by green energy.
As an iOS developer, I request that Apple provide a way to download audio files into our audio apps. The goal is not to resell the audio or violate authors' rights.
YouTube already does this.
It is time to find tips and tricks to reduce energy consumption, especially in data broadcasting and the useless streaming of the same song again and again.
Is it possible to change the API to reflect this reality?
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the CoreAudio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers for our CoreAudio server plug-in.
Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to CoreAudio?
Thanks, hagen.
When setting the mode while configuring an audio session in Swift, the previously configured categoryOptions get reset. For example, if you call setMode as shown below, you will observe that all previously set categoryOptions are cleared.
Example:
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)
If you need to change the mode while maintaining the categoryOptions, you have to call setCategory once again. I have not identified the exact reason for this behaviour, and the practical impact on the application's functionality is not yet clear. Why do you think this handling is in place?
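For completeness, the workaround we currently use is to re-assert the category, mode, and options together in a single call, for example:

// Setting everything in one call keeps the options from being dropped.
try AVAudioSession.sharedInstance().setCategory(
    .playAndRecord,
    mode: .voiceChat,
    options: [.allowBluetooth, .defaultToSpeaker]
)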
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device.
I want to use the built-in mic for the input and an A2DP device (AirPods Pro 2) for the output route.
Whenever I set the .allowBluetoothA2DP option on my AVAudioSession, the output changes to the speaker.
The mode is .default and the category is .playAndRecord.
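Concretely, the configuration looks roughly like this (a simplified sketch; the setPreferredInput step is just something I've been experimenting with):

import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])

// Explicitly prefer the built-in microphone for input, leaving output routing
// to the A2DP device.
if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
    try session.setPreferredInput(builtInMic)
}

try session.setActive(true)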
If I do the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1.
I only have this trouble when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17.
Has anyone run into this kind of issue?
Thank you in advance.
Hello,
I keep getting this kind of error (specifically 16247) when trying to download DRM-protected HLS content. Would anyone have a clue about the reason why?
Error Domain=CoreMediaErrorDomain Code=-16247 "(null)" UserInfo={_NSURLErrorRelatedURLSessionTaskErrorKey=("BackgroundAVAssetDownloadTask
Thanks
Sylvain
hdiutil bug?
When making a DMG image of the whole content of the user1 profile (that is, using srcFolder = /Users/user1) with hdiutil, the program fails, indicating:
/Users/user1/Library/VoiceTrigger: Operation not permitted
hdiutil: create failed - Operation not permitted
The complete command used was: "sudo hdiutil create -srcfolder /Users/user1 -skipunreadable -format UDZO /Volumes/testdmg/test.dmg"
And, of course, the user had local admin rights. I was using Sonoma 14.2.1 and a MacBook Pro (Intel, T2).
What I would have expected, assuming that /VoiceTrigger cannot be copied for whatever reason, is for hdiutil to skip that file or folder and continue the process, then, at the end, produce a log listing the files/folders not included and the reason for their exclusion. The fact that hdiutil just stopped immediately looks to me like a bug. Or what else could explain the problem described?
The iPhone 14 Pro Max sharpens its 12-megapixel photos so severely that they are hard to look at, while shooting at 48 megapixels takes up too much space; the phone starts at 128 GB, so there isn't much room, especially once you also shoot video. The biggest problem at present is that there is no 24-megapixel mode, which greatly affects the daily shooting experience. People around me who use the 14 Pro series say that not having a 24-megapixel option is the biggest problem at the moment.
A 24-megapixel mode is what is most widely used for daily photography; the 48-megapixel mode is only used for landscapes or occasional photos, since it takes up too much storage. The biggest problem with the 14 Pro Max now is that its photography falls short, and its 12-megapixel output has long lagged behind Android. Adding a 24-megapixel mode would be far more valuable than updating iOS, and the experience would be directly doubled.