Starting with Sonoma 14.2, it is no longer possible to connect Canon cameras to an app via USB using Canon's EDSDK framework. This worked fine up to Sonoma 14.1.
The app using the EDSDK does not crash, but the SDK no longer reports any connected cameras. The camera is connected and shows up in the system report, in e.g. gphoto2, and even in the EOS Utility software.
It seems that 14.2 introduced a breaking change to camera access from within apps.
I've tried upgrading to the newest EDSDK version and checked with and without App Sandbox. There is no way to find the camera on 14.2 any more.
Hi, my team is developing an iOS app which connects to Apple Music (currently most of it is via the API, but some tests have been done with MusicKit). In a separate development, we were using a service which competes with A.M., and when their (web) API is used, if the user doesn't interact with the app somehow (scrolling, buttons, etc.), the connection times out after 30 seconds and forces them to reconnect.
So, the questions I have are:
Is there any timeout when tracks are stopped in Apple Music, or does the connection to the A.M. app stay active?
If the connection drops on iOS as well, is there any limitation on streaming A.M. content via the API (i.e., are there licensing issues as well)?
Even with sample code from Apple, I get the same warning every time:
"Error returned from daemon: Error Domain=com.apple.accounts Code=7 "(null)""
Failed to log access with error: access=<PATCCAccess 0x28316b070> accessor:<<PAApplication 0x283166df0 identifierType:auditToken identifier:{pid:1456, version:3962}>> identifier:2EE1A54C-344A-40AB-9328-3F8E8B5E8A85 kind:intervalEvent timestampAdjustment:0 visibilityState:0 assetIdentifierCount:0 tccService:kTCCServicePhotos, error=Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 235 named com.apple.privacyaccountingd" UserInfo={NSDebugDescription=connection to service with pid 235 named com.apple.privacyaccountingd}
The entitlements have been reviewed.
Who can I contact to remove the gray frame from around the full-screen video player in my iOS mobile application? This is an iOS feature that I have no control over.
The screenshot attached below shows the full-screen view of a video when the iOS phone is held sideways. The issue is the big gray frame around the video; it takes up too much space and needs to be removed so the video can fill the screen.
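If the player is presented from the app's own code (rather than by a system web view), the usual knob is AVPlayerViewController's videoGravity. A minimal sketch, assuming you control the presentation; videoURL is a hypothetical URL to the video:
import AVKit
// Sketch: .resizeAspectFill fills the screen instead of letterboxing,
// which removes the surrounding bars at the cost of cropping the edges.
let controller = AVPlayerViewController()
controller.player = AVPlayer(url: videoURL)   // videoURL is hypothetical
controller.videoGravity = .resizeAspectFill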
I recently downloaded Apple Music for Windows 10 and started a new music library. Then I downloaded all my purchased music albums to my computer. However, the album art is not embedded in any of the m4a files; it is stored in a separate artwork folder. I have Visual Studio 2022 and was wondering how I might go about accessing the Apple Music app via an API. I want the script to go through all my purchased music, take the album art from the artwork folder, and embed it in the purchased m4a files. I found some other programs/scripts online, but they are all for iTunes or for Mac. None of the old scripts will work because they don't recognize the new Apple Music for Windows music library. I can't go back to iTunes because Apple removed all the music options from the iTunes for Windows app. Any ideas on how to get started writing the app/script would be greatly appreciated. Can I download something from Apple to make it easy to access the API in Visual Studio 2022 for Windows?
Developing for iPhone/iPad/Mac
I have an idea for a music training app, but I need to know of supporting libraries for recognizing a musical note's fundamental frequency in close to real time (about 100 ms delay). Accuracy should be within a few cents (hundredths of a semitone).
A search for "music" turned up the Core MIDI library, which is fine if I want to take input from MIDI, but I want to be open to audio input too.
And I found MusicKit, which seems to be a programmer's API for digging into Apple Music content rather than analyzing audio.
Meta questions:
Should I be using different search terms?
Where are libraries listed?
Who are the big names in third-party libraries?
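To make the question concrete, here is the kind of plumbing I mean (a sketch, not a solution): tap the microphone with AVAudioEngine and run a vDSP FFT from Accelerate. Picking the strongest bin is far too coarse for cent-level accuracy (roughly 10.8 Hz per bin at 44.1 kHz with a 4096-sample window), which is exactly why I'm asking about dedicated pitch-tracking algorithms such as autocorrelation or YIN.
import AVFoundation
import Accelerate

// Sketch only: prints the loudest FFT bin of live mic input as a rough
// frequency estimate. Real note tracking needs a proper pitch algorithm.
final class RoughPitchEstimator {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let n = 4096                                   // ~93 ms of audio at 44.1 kHz
        let log2n = vDSP_Length(log2(Float(n)))
        let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

        input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(n), format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0],
                  Int(buffer.frameLength) >= n else { return }

            var real = [Float](repeating: 0, count: n / 2)
            var imag = [Float](repeating: 0, count: n / 2)
            real.withUnsafeMutableBufferPointer { realPtr in
                imag.withUnsafeMutableBufferPointer { imagPtr in
                    var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                                imagp: imagPtr.baseAddress!)
                    // Pack the samples into split-complex form and transform in place.
                    channel.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                    }
                    vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))

                    // Squared magnitudes, then the loudest bin above DC.
                    var mags = [Float](repeating: 0, count: n / 2)
                    vDSP_zvmags(&split, 1, &mags, 1, vDSP_Length(n / 2))
                    var maxMag: Float = 0
                    var maxIndex: vDSP_Length = 0
                    vDSP_maxvi(&mags[1], 1, &maxMag, &maxIndex, vDSP_Length(n / 2 - 1))
                    let hz = Float(maxIndex + 1) * Float(format.sampleRate) / Float(n)
                    print("strongest bin ≈ \(hz) Hz")
                }
            }
        }
        try engine.start()
    }
}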
I have repeatedly confirmed that if you limit the connection speed to a host serving a video file (mp4), AVPlayer stalls; but after the high-speed connection to the host is restored, the player does not resume playback. If you check the status, there are no errors, just the empty buffer:
AVPlayer.error is nil.
AVPlayerItem.error is nil.
AVPlayerItem.isPlaybackBufferEmpty is true
AVPlayerItem.isPlaybackLikelyToKeepUp is false
Even if you wait a long time, nothing happens, and tapping the play button doesn't help either. The player is frozen forever.
Only calling "seek" or the "playImmediately" method unfreezes the player and resumes playback.
It doesn't happen every time, maybe one time in four.
It seems like AVPlayer has a bug. What do you think?
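The workaround I've settled on, as a sketch: observe isPlaybackLikelyToKeepUp and kick the player with playImmediately(atRate:) when the buffer recovers but playback does not. Here player and item stand in for whatever your player controller owns:
import AVFoundation

var keepUpObservation: NSKeyValueObservation?

func installStallRecovery(for player: AVPlayer, item: AVPlayerItem) {
    keepUpObservation = item.observe(\.isPlaybackLikelyToKeepUp, options: [.new]) { item, _ in
        // Only nudge the player when it stalled without reporting an error.
        if item.isPlaybackLikelyToKeepUp,
           player.timeControlStatus != .playing,
           player.error == nil {
            player.playImmediately(atRate: 1.0)
        }
    }
}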
After encoding with the VideoToolbox library, I send the stream over the network via NDI. A Mac receiving the NDI source successfully displays the picture, but on Windows only the NDI source name appears, with no picture.
I'd like to know whether streams encoded with VideoToolbox cannot be decoded correctly on Windows, and how this problem can be solved.
I am trying to set up HLS with MV HEVC. I have an MV HEVC MP4 converted with AVAssetWriter that plays as a "spatial video" in Photos in the simulator. I've used ffmpeg to fragment the video for HLS (sample m3u8 file below).
The HLS stream of the mp4 plays on a VideoMaterial with an AVPlayer in the simulator, but it is hard to determine whether the streamed video is stereo. Is there any guidance on confirming that the streamed mp4 video is properly being read as stereo?
Additionally, I see that REQ-VIDEO-LAYOUT is required for multivariant HLS. However, if there is ONLY stereo video in the playlist, is it still needed? Are there any other configurations needed for the device to read the stream as stereo?
Sample m3u8 playlist:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:13
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:12.512500,
sample_video0.ts
#EXTINF:8.341667,
sample_video1.ts
#EXTINF:12.512500,
sample_video2.ts
#EXTINF:8.341667,
sample_video3.ts
#EXTINF:8.341667,
sample_video4.ts
#EXTINF:12.433222,
sample_video5.ts
#EXT-X-ENDLIST
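For the REQ-VIDEO-LAYOUT question above, this is the kind of multivariant playlist I have in mind (a sketch: the REQ-VIDEO-LAYOUT attribute and the CH-STEREO value come from the HLS specification, while the BANDWIDTH and CODECS values here are placeholders):
#EXTM3U
#EXT-X-VERSION:12
#EXT-X-STREAM-INF:BANDWIDTH=20000000,CODECS="hvc1.2.4.L153.B0",REQ-VIDEO-LAYOUT="CH-STEREO"
sample_video.m3u8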
I'm running this SwiftUI sample app for photos without any modifications except for adding my developer profile, which is necessary to build it. When I tap on the thumbnail to see the photo library (after granting access to my photo library), I see that some of the thumbnails are stuck in a loading state, and when I tap on thumbnails, I only see a low-resolution image (the thumbnail), not the full-resolution image that should load.
In the console I can see this error that occurs when tapping on a thumbnail to see the full-resolution image:
CachedImageManager requestImage error: The operation couldn’t be completed. (PHPhotosErrorDomain error 3164.)
When I make a few modifications necessary to run the app as a native macOS app, all the thumbnails load immediately, and clicking on them reveals the full-resolution images.
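A likely explanation, based on the error code: 3164 appears to be PHPhotosError.networkAccessRequired, which fires when the full-resolution original lives in iCloud and the image request is not allowed to use the network. A sketch of the usual fix, assuming the sample's CachedImageManager can pass request options through:
import Photos

// Allowing network access lets PHImageManager download iCloud originals.
let options = PHImageRequestOptions()
options.deliveryMode = .highQualityFormat
options.isNetworkAccessAllowed = true
// Pass `options` to PHImageManager.requestImage(for:targetSize:contentMode:options:resultHandler:)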
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30fps video feed on a Macbook Pro M2, and everything is great, until Sonoma comes out, and I find myself consuming 40% CPU for the exact same workload.
So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps; the CPU consumption stays at 40%.
"Reactions" is nothing but a useless toy to please some WWDC keynote fanboys, I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery.
Now, how do I make it go away, forever?
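For what it's worth, the only related switches I've found are read-only class properties; the user-facing toggle lives in Control Center under "Video Effects" while the camera is in use. A sketch, assuming the macOS 14 AVFoundation additions:
import AVFoundation

// These only reflect the user's Control Center setting; apps cannot change them.
if #available(macOS 14.0, *) {
    print("Reactions enabled:", AVCaptureDevice.reactionEffectsEnabled)
    print("Gesture triggers enabled:", AVCaptureDevice.reactionEffectGesturesEnabled)
}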
Best regards
Jacob
I have created a recording application, but if the user switches off the device or kills the application, how do I save the ongoing recording at that moment?
I want to add a small play button to play a song preview on my site. I don't want to use the embedded player because I want to style it myself. If I use the MusicKit Web library, am I legally required to add Apple branding? I don't see anywhere that says it's required, and I want to confirm.
We have to play some encrypted videos from a server.
In AVAssetResourceLoaderDelegate we get the CKC data correctly and respond with it. The video then just starts playing and stops immediately. When we check the playerItem error description, we get Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}. Has anyone encountered this?
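For reference, -11819 is AVError.mediaServicesWereReset. The usual recovery is to observe the media-services-reset notification and rebuild playback; a sketch, with recreatePlayer() standing in for whatever recreates your AVPlayer, AVPlayerItem, and resource loader delegate:
import AVFoundation

// Rebuild playback from scratch after a media services reset.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: nil,
    queue: .main
) { _ in
    recreatePlayer()   // hypothetical: tear down and rebuild the player stack
}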
When I try to initialize SCContentFilter(desktopIndependentWindow: window), I get a very weird error:
Assertion failed: (did_initialize), function CGS_REQUIRE_INIT, file CGInitialization.c, line 44.
This is a brand new CLI project on Sonoma 14.2.1. Any ideas?
I'm trying to use AVCaptureSession and AVAssetWriter to convert video and audio from an iPhone's camera and microphone into a fragmented video file in AppleHLS format.
Below is part of the code.
It seems that the capture is successful, and I have confirmed that the data received in captureOutput() can be appended to videoWriterInput and audioWriterInput using append().
When executing audioWriterInput!.append(sampleBuffer), sampleBuffer has the following value, and it looks like the audio data has been passed to AssetWriter.
sampleBuffer.duration : CMTime(value: 941, timescale: 44100, flags: __C.CMTimeFlags(rawValue: 1), epoch: 0)
sampleBuffer.totalSampleSize : 1882
However, the final output init.mp4 and *.m4s do not contain Audio. (The video can be played without any problems.)
Could you point out any problems in the code, or offer hints as to why the audio is not included?
import AVFoundation
import UniformTypeIdentifiers

/// Capture Session
let captureSession = AVCaptureSession()

/// Capture Inputs
var videoDevice: AVCaptureDevice?
var audioDevice: AVCaptureDevice?

/// Configure and Start Capture Session.
/// (`defaultCamera(cameraSide:)`, `cameraSide`, and `recordingQueue` are defined elsewhere.)
func startCapture() throws {
    // Start Configuration
    captureSession.beginConfiguration()

    // Setup Input Video (the device must be locked before changing its frame duration)
    videoDevice = self.defaultCamera(cameraSide: cameraSide)
    try videoDevice!.lockForConfiguration()
    videoDevice!.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: 30)
    videoDevice!.unlockForConfiguration()
    let videoInput = try AVCaptureDeviceInput(device: videoDevice!)
    captureSession.addInput(videoInput)

    // Setup Input Audio
    audioDevice = AVCaptureDevice.default(for: .audio)
    let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
    captureSession.addInput(audioInput)

    // Setup Output Video
    let videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput.setSampleBufferDelegate(self, queue: recordingQueue)
    videoDataOutput.alwaysDiscardsLateVideoFrames = true
    videoDataOutput.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
    ]
    captureSession.addOutput(videoDataOutput)

    // Setup Output Audio
    let audioDataOutput = AVCaptureAudioDataOutput()
    audioDataOutput.setSampleBufferDelegate(self, queue: recordingQueue)
    captureSession.addOutput(audioDataOutput)

    // End Configuration
    captureSession.commitConfiguration()

    // Start Capture
    captureSession.startRunning()
}
// These must be `var`, not `let`, since they are created lazily in captureOutput().
private var assetWriter: AVAssetWriter?
private var startTimeOffset: CMTime = .zero
private var audioWriterInput: AVAssetWriterInput?
private var videoWriterInput: AVAssetWriterInput?
private var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor?
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if assetWriter == nil {
        // AssetWriter
        assetWriter = AVAssetWriter(contentType: UTType(AVFileType.mp4.rawValue)!)
        self.startTimeOffset = CMTime(value: 1, timescale: 1)

        // Setup Input of Audio.
        let audioCompressionSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
            AVEncoderBitRateKey: 128_000
        ]
        audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioCompressionSettings)
        audioWriterInput!.expectsMediaDataInRealTime = true
        assetWriter!.add(audioWriterInput!)

        // Setup Input of Video.
        let videoCompressionSettings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720,
            AVVideoCompressionPropertiesKey: [
                AVVideoAverageBitRateKey: 1_024_000,
                AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel
            ]
        ]
        videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoCompressionSettings)
        videoWriterInput!.expectsMediaDataInRealTime = true
        pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: videoWriterInput!,
            sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)])
        assetWriter!.add(videoWriterInput!)

        // Configure the asset writer for writing data in fragmented MPEG-4 format.
        assetWriter!.outputFileTypeProfile = .mpeg4AppleHLS
        assetWriter!.preferredOutputSegmentInterval = CMTime(seconds: 1.0, preferredTimescale: 1)
        assetWriter!.initialSegmentStartTime = startTimeOffset
        assetWriter!.delegate = self

        // Start AssetWriter at the first buffer's timestamp.
        let startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        assetWriter!.startWriting()
        assetWriter!.startSession(atSourceTime: startTime)
    }

    let isVideo = output is AVCaptureVideoDataOutput
    if isVideo {
        if videoWriterInput!.isReadyForMoreMediaData {
            videoWriterInput!.append(sampleBuffer)
        }
    } else {
        if audioWriterInput!.isReadyForMoreMediaData {
            audioWriterInput!.append(sampleBuffer)
        }
    }
}
func assetWriter(_ writer: AVAssetWriter, didOutputSegmentData segmentData: Data, segmentType: AVAssetSegmentType, segmentReport: AVAssetSegmentReport?) {
:
:
}
I am using the API from the official website to decode, but the callback is not triggered.
Hello Apple Community,
First, I'm asking this during the holidays, so happy holidays. I have some time off, which means some "fun coding time".
I'm new to this area and would greatly appreciate your expertise and guidance. Have mercy.
I'm attempting to develop a simple application on macOS to browse my playlists. However, I've encountered a few roadblocks that I'm struggling to navigate. I understand I need to implement two-factor authentication to access my playlists, for which an OAuth setup is required. This involves obtaining an Apple ID and a service ID and dealing with other complex elements.
One particular challenge I'm facing is with the redirect URI in the OAuth process. It seems that it needs to be a valid domain, and I'm unsure if using a local server address like https://localhost:5000 would work. My goal is to create a basic Flask application that can interact with Apple's web authentication system, but I'm uncertain about the feasibility of this approach, given the domain restrictions.
I would appreciate any advice or step-by-step guidance on the following points.
What would be the simplest way to create an application (Swift, Python, or JavaScript) that can authenticate and enable browsing through my playlists?
Any insights, tips, or examples you could share would be immensely helpful. I am eager to learn and look forward to your valuable suggestions. Anything step-by-step would be great, but I like to dream.
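To make the dream concrete, this is roughly what I'm hoping is possible. A sketch, assuming the native MusicKit framework (which presents its own sign-in UI, so no OAuth redirect URI) and an App ID with the MusicKit app service enabled; MusicLibraryRequest needs a recent OS:
import MusicKit

func browsePlaylists() async throws {
    // MusicKit handles two-factor sign-in itself; the app only requests authorization.
    let status = await MusicAuthorization.request()
    guard status == .authorized else { return }
    let request = MusicLibraryRequest<Playlist>()
    let response = try await request.response()
    for playlist in response.items {
        print(playlist.name)
    }
}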
Thank you so much for your time and assistance.
Hello,
We have a Photo Vault app. We were hiding users' photos behind a semi-functional calculator. After the rejection, we thought "decoy functionality" meant we needed to remove this fake calculator feature. We removed it and tried many other things to resolve the issue, but we can't work out what Apple wants us to change. We've been trying to contact Apple for more details, but they keep sending the same message every time.
Help is appreciated. Here is the rejection message:
Your app uses public APIs in an unapproved manner, which does not comply with guideline 2.5.1 of the App Store Review Guidelines.
Specifically, we found that your app uses a decoy functionality to hide a user’s photos, which is not an appropriate use of the Photos API.
Since there is no accurate way of predicting how an API may be modified and what effects those modifications may have, Apple does not permit unapproved uses of public APIs in App Store apps.
Next Steps
Please revise your app to ensure that documented APIs are used in the manner prescribed by Apple.
It would be appropriate to remove any features in your app that use a decoy functionality to hide a user's photos from your app.
If there are no alternatives for providing the functionality your app requires, you can use Feedback Assistant to submit an enhancement request.
I'm trying to test the MusicMarathon and MusicAlbums tutorial/demo apps for MusicKit and half the endpoints do not work.
As an example the MusicMarathon call to MusicRecentlyPlayedContainerRequest() just returns a 401.
Everything I've done seems correct. I've got a fully authorized session, and I have all the development keys successfully set up.
Also, it's not all APIs, as I can access the user's Library; just none of the recommendation and search endpoints seem to be working correctly.
I'm running iOS 17.2 and Xcode 15.1
I'm pretty certain this is easily repeatable by just running the demo applications from the MusicKit documentation.
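For reference, a minimal repro sketch of the failing call (essentially what MusicMarathon does):
import MusicKit

Task {
    let status = await MusicAuthorization.request()
    guard status == .authorized else { return }
    let request = MusicRecentlyPlayedContainerRequest()
    let response = try await request.response()   // this is where the 401 comes back
    print(response.items)
}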