Hi! I've been working on a project in Python that pulls in a bunch of my personal Apple Music playback history, library, etc.
I can't find a single good/functional example of how to pull the Music User Token via the Android method or MusicKit JS (web) - I've spent a lot of hours on this today, and no permutation of the existing examples/documentation has worked.
Any guidance would be much appreciated!! If you have a web app that pulls the music user token, I just need help understanding how to get to the token itself.
Thank you!
I have applied some filters (like applyingGaussianBlur) to a CIImage that was converted from a UIImage. The resulting image data gets corrupted only on lower-end devices. What could be the reason?
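For context, here is a minimal sketch of the kind of pipeline I mean (the sigma value and the explicit CIContext are illustrative assumptions, not my exact code):
import UIKit
import CoreImage

func blurred(_ uiImage: UIImage, sigma: Double = 10) -> UIImage? {
    // Convert UIImage -> CIImage, apply the blur, then render back to a UIImage.
    guard let ciImage = CIImage(image: uiImage) else { return nil }
    let output = ciImage.applyingGaussianBlur(sigma: sigma)
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}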
This can be reproduced easily with Xcode's generated AUv3 extension projects.
For MIDI Processor type AUv3 extensions, the contextName property is only set once during initialization when added as a MIDI FX within Logic Pro, but not after changing the track's name manually.
For Music Effect type AUv3 extensions, contextName is set initially when added as an Audio FX within Logic Pro and is also updated as expected after changing the track's name manually.
Am I missing something or is this a Logic Pro bug?
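For what it's worth, a minimal way to check when the host actually pushes a name into the extension would be something like this (MyAudioUnit is just a placeholder for the extension's AUAudioUnit subclass):
import AudioToolbox

final class MyAudioUnit: AUAudioUnit {
    // Log every time the host sets or updates the context name.
    override var contextName: String? {
        didSet {
            print("contextName is now: \(contextName ?? "nil")")
        }
    }
}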
Thanks,
Tobias
If I attach a FairPlay content key to my CMSampleBuffer using AVSampleBufferAttachContentKey, can I then render it using AVSampleBufferDisplayLayer?
I am creating an app where you can record a video while listening to music in the background. At the top of my viewDidLoad I set the AVAudioSession category to .playAndRecord:
let audioSession = AVAudioSession.sharedInstance()
AVCaptureSession().automaticallyConfiguresApplicationAudioSession = false
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
However, when I do this, the audio cuts out for a second or less when the app opens and closes. I would like the audio to continue playing without cutting out. Is there anything I can do to ensure the audio continues to play?
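For completeness, here is the same configuration as a self-contained sketch, with automaticallyConfiguresApplicationAudioSession set on the capture session instance that actually drives the recording (my assumption about where that flag belongs; captureSession is a stand-in for the session my recorder uses):
import AVFoundation

let captureSession = AVCaptureSession()
// Stop the capture session from reconfiguring the shared audio session when it starts/stops.
captureSession.automaticallyConfiguresApplicationAudioSession = false

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord,
                                 options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio: \(error)")
}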
I found that didOutputSampleBuffer is not called for a long time when the screen does not change. This sometimes confuses me, because I can't tell whether something is wrong.
In my design, it switches to another screenshot method, such as CGWindowListCreateImage, when no data has arrived for a long time. But this is not what I expected.
I set minimumFrameInterval to 1/30 second, but it seems to have no effect.
[config setMinimumFrameInterval:CMTimeMake(1, 30)];
Is there any setting that would let me get a didOutputSampleBuffer callback, even with a CMSampleBufferRef whose status is SCFrameStatusIdle, at least once per second? That would let me confirm everything is working fine without any exception.
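For reference, here is the equivalent setup in Swift plus a typical way to read the frame status off the sample buffer (a sketch; as far as I can tell minimumFrameInterval only caps how often frames arrive, it does not force delivery when nothing changes):
import ScreenCaptureKit
import CoreMedia

let config = SCStreamConfiguration()
// Ask for at most 30 fps; this caps the rate but does not guarantee idle frames.
config.minimumFrameInterval = CMTime(value: 1, timescale: 30)

func frameStatus(of sampleBuffer: CMSampleBuffer) -> SCFrameStatus? {
    // Frame metadata (including SCFrameStatus.idle) travels in the sample buffer attachments.
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]],
          let rawStatus = attachments.first?[.status] as? Int else { return nil }
    return SCFrameStatus(rawValue: rawStatus)
}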
I'm working on a custom spatial video player that uses AVSampleBufferDisplayLayer as the render layer. When I feed it CMSampleBuffers output from a VTCompressionSession using the new encoding API, it displays normally, but I don't know whether it will work on Vision Pro. Anyone have an idea?
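The feed path is roughly this (a minimal sketch; the compression-session setup and any spatial/MV-HEVC specifics are left out):
import AVFoundation

final class SampleBufferRenderer {
    let displayLayer = AVSampleBufferDisplayLayer()

    // Encoded sample buffers coming out of the compression session callback get enqueued here.
    func enqueue(_ sampleBuffer: CMSampleBuffer) {
        if displayLayer.isReadyForMoreMediaData {
            displayLayer.enqueue(sampleBuffer)
        }
    }
}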
Hi,
I am looking at displaying some spatial video content captured on iPhone 15 Pros in a side-by-side format. I've read the HEVC Stereo Video Profile provided by Apple, but I am confused about how to access the left- and right-eye video. Looking at the AVAsset track information, there is one video track, one audio track, and three metadata tracks.
Apple's document references them as layers, but I am unsure how to access them. Could anyone provide some guidance on accessing them?
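My current understanding is that the left/right layers are requested at decode time rather than appearing as separate tracks, roughly like this (a sketch based on my reading of the docs; the layer IDs [0, 1] are an assumption and should really be read from the video format description):
import AVFoundation
import VideoToolbox

func makeStereoOutput(for videoTrack: AVAssetTrack) -> AVAssetReaderTrackOutput {
    // Ask the decoder to emit both MV-HEVC layers for each sample.
    let outputSettings: [String: Any] = [
        AVVideoDecompressionPropertiesKey: [
            kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs as String: [0, 1]
        ]
    ]
    return AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)
}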
Thanks,
Will
I'm currently working on a live screen broadcasting app which allows users to record their screen and save an MP4 video. I write the video file with AVAssetWriter, and it works fine. But when there is 1GB-2GB of storage space remaining on the device, errors such as "Attempted to start an invalid broadcast session" frequently occur, and the video files cannot be played because assetWriter.finishWriting() is never called.
This occurs on these devices:
iPhone SE 3
iPhone 12 Pro Max
iPhone 13
iPad 19
iPad Air 5
I have tried setting the movieFragmentInterval of AVAssetWriter to write movie fragments, and setting shouldOptimizeForNetworkUse to true/false, but it's not working. The video cannot be played.
I want to know how to observe or catch this error. Thanks!
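One thing I'm considering (a sketch, helper names are mine) is to check the writer's status and error after each append and bail out early, instead of only discovering the failure at finishWriting():
import AVFoundation

// Hypothetical helper: append a buffer and surface any writer failure immediately.
func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput, of writer: AVAssetWriter) -> Bool {
    guard input.isReadyForMoreMediaData, input.append(sampleBuffer) else {
        if writer.status == .failed {
            print("AVAssetWriter failed: \(String(describing: writer.error))")
        }
        return false
    }
    return true
}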
Hello, I am experiencing an unexpected issue related to PhotoKit, where an occasional crash is occurring in an area it shouldn't. I've provided the crash log below for reference:
Last Exception Backtrace:
0 CoreFoundation 0x1aa050860 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x1a2363c80 objc_exception_throw + 60
2 CoreFoundation 0x1a9f68218 -[__NSDictionaryM setObject:forKeyedSubscript:] + 1184
3 Photos 0x1c11cb178 -[PHCloudIdentifierLookup _lookupCodeSpecificCloudIdentifierStrings:forIdentifierCode:] + 572
4 Photos 0x1c11cb278 -[PHCloudIdentifierLookup _lookupLocalIdentifiersForIdentifierCode:codeSpecificCloudIdentifierStrings:] + 92
5 Photos 0x1c11cbbec __69-[PHCloudIdentifierLookup lookupLocalIdentifiersForCloudIdentifiers:]_block_invoke + 88
6 CoreFoundation 0x1a9f67d70 NSDICTIONARY_IS_CALLING_OUT_TO_A_BLOCK + 24
7 CoreFoundation 0x1a9f67bec -[__NSDictionaryM enumerateKeysAndObjectsWithOptions:usingBlock:] + 288
8 Photos 0x1c11cbb40 -[PHCloudIdentifierLookup lookupLocalIdentifiersForCloudIdentifiers:] + 488
9 Photos 0x1c125e6e4 -[PHPhotoLibrary(CloudIdentifiers) localIdentifierMappingsForCloudIdentifiers:] + 96
10 Photos 0x1c1104f18 0x1c1101000 + 16152
"exceptionReason" : {"arguments":["-[__NSDictionaryM setObject:forKeyedSubscript:]"],"format_string":"*** %s: key cannot be nil","name":"NSInvalidArgumentException","type":"objc-exception","composed_message":"*** -[__NSDictionaryM setObject:forKeyedSubscript:]: key cannot be nil","class":"NSException"},
The crash can be recreated by calling PHPhotoLibrary.localIdentifierMappings(for:) with random cloud identifiers, as demonstrated below:
let cloudIdentifiers = (0...10).map { _ in PHCloudIdentifier(stringValue: UUID().uuidString) }
let localIdsMap = PHPhotoLibrary.shared().localIdentifierMappings(for: cloudIdentifiers)
I've noticed that the method seems to crash consistently when an invalid cloud identifier is passed in. This is surprising to me since the return type is [PHCloudIdentifier : Result<String, Error>], so I was anticipating an explicit error.
The issue can be reproduced on both Xcode 15.0 & iOS 17.2 and Xcode 14.3.1 & iOS 16.0.
While I'm fairly certain that only valid identifiers are used in my app, I would still like to know if there is a way to validate a cloud identifier before calling this method?
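For context, the per-identifier handling I was expecting to write looks like this (a sketch over the localIdsMap from the snippet above):
for (cloudIdentifier, result) in localIdsMap {
    switch result {
    case .success(let localIdentifier):
        print("\(cloudIdentifier.stringValue) -> \(localIdentifier)")
    case .failure(let error):
        // This is where I expected an unknown identifier to show up, rather than a crash.
        print("no mapping for \(cloudIdentifier.stringValue): \(error)")
    }
}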
I have a question about the Apple Music Preview app for Windows 11.
It has a setting called Sound Check.
Is that feature available in the Apple Music web player and the Apple Music Android app?
If not, is it a planned feature for those platforms?
I need to obtain the position data of the TrueDepth camera matrix. I couldn't find anything in the documentation. Has anyone solved this problem? Is it possible to obtain this data at all?
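If what's needed is the camera's intrinsic/extrinsic calibration, one place it can be read (a sketch, assuming calibration-data delivery is enabled on the capture request) is AVDepthData:
import AVFoundation

func logCalibration(from depthData: AVDepthData) {
    guard let calibration = depthData.cameraCalibrationData else { return }
    // 3x3 intrinsics, 4x3 extrinsics (rotation + translation), and the pixel dimensions they refer to.
    print("intrinsicMatrix: \(calibration.intrinsicMatrix)")
    print("extrinsicMatrix: \(calibration.extrinsicMatrix)")
    print("reference dimensions: \(calibration.intrinsicMatrixReferenceDimensions)")
}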
Hi everyone, I need your help. I am working on an application where I am capturing a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4.
ERROR:
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
Here is my Code:
final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
            //Step 9
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        //Step12
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            //Step 13
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported", message: "Your device does not support scanning a code from an item. Please use a device with a camera.", preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(error)")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("NO image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
I don't know what I am doing wrong.
This isn't just my observation; lots of people around me have noticed it too, and you can find tonnes of feedback on the interwebs.
The processing of images taken with the front-facing camera on the 15 (and I think the 14 before it) is so over-processed that I'm aware of people jumping to other phones. And they're right; the 15 exacerbates it even more. You can turn off HDR (a viewing thing), you can prioritise speed over processing, but you really cannot turn this off. You can take a Live Photo and then choose a different frame, and the processing is less.
As a developer I look at that and think it's bonkers. It's just software, so why hasn't anyone produced a camera app that makes faces look good (not AI processing) from the front camera?
I could be all enthusiastic and say I will develop one, but it seems like a simple, obvious fix for Apple.
Having the settings so bad that I have friends returning their phones seems pretty bad, and as a photographer I would agree. There's a lot to love with Apple on the 15, the Log and ProRes, but a simple selfie produces such ugly results. That's an actual problem.
So, throwing it out there: what does everyone think?
cheers
Paul
As the title states, I would like to use MusicKit for Web instead of the Swift integration.
Is it necessary to enroll in the Apple Developer Program to get into Apple News Publisher and obtain my Apple News API credentials? Also, I need guidance on how to publish articles on Apple News using the News API. A detailed explanation of the steps up to getting the Apple News API credentials (key, secret key, channel ID) would be much appreciated!
I have a navigation controller with two VCs. One VC is pushed onto the nav controller; the other is presented on top of the nav controller. The presented VC has a relatively complex animation involving a CAEmitterLayer -> animate birth rate down -> fade out -> remove. The pushed VC has an inputAccessoryView and can become first responder.
The expected behavior is: open the presented VC -> the emitter emits pretty pictures -> the emitter stops gracefully.
The animation works perfectly, UNLESS I open the pushed VC -> leave -> go to the presented VC. In that case, when I open the presented VC, the emitter emits pretty pictures -> and they never stop. (Please do not ask me how long it took to figure this much out 🤬😔)
The animation code in question is:
let animation = CAKeyframeAnimation(keyPath: #keyPath(CAEmitterLayer.birthRate))
animation.duration = 1
animation.timingFunction = CAMediaTimingFunction(name: .easeIn)
animation.values = [1, 0, 0]
animation.keyTimes = [0, 0.5, 1]
animation.fillMode = .forwards
animation.isRemovedOnCompletion = false
emitter.beginTime = CACurrentMediaTime()
let now = Date()
CATransaction.begin()
CATransaction.setCompletionBlock { [weak self] in
    print("fade beginning -- delta: \(Date().timeIntervalSince(now))")
    let transition = CATransition()
    transition.delegate = self
    transition.type = .fade
    transition.duration = 1
    transition.timingFunction = CAMediaTimingFunction(name: .easeOut)
    transition.setValue(emitter, forKey: kKey)
    transition.isRemovedOnCompletion = false
    emitter.add(transition, forKey: nil)
    emitter.opacity = 0
}
emitter.add(animation, forKey: nil)
CATransaction.commit()
The delegate method is:
extension PresentedVC: CAAnimationDelegate {
    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        if let emitter = anim.value(forKey: kKey) as? CALayer {
            emitter.removeAllAnimations()
            emitter.removeFromSuperlayer()
        } else {
        }
    }
}
Here is the pushed VC:
class PushedVC: UIViewController {
    override var canBecomeFirstResponder: Bool {
        return true
    }

    override var canResignFirstResponder: Bool {
        return true
    }

    override var inputAccessoryView: UIView? {
        return UIView()
    }
}
So, to reiterate: if I push PushedVC onto the navController, pop it, and then present PresentedVC, the emitters emit, but the call to emitter.add(animation, forKey: nil) is essentially ignored. The emitter just keeps emitting.
Here are some sample happy print statements from the completion block:
fade beginning -- delta: 1.016232967376709
fade beginning -- delta: 1.0033869743347168
fade beginning -- delta: 1.0054619312286377
fade beginning -- delta: 1.0080779790878296
fade beginning -- delta: 1.0088880062103271
fade beginning -- delta: 0.9923020601272583
fade beginning -- delta: 0.99943196773529
Here are my findings:
The issue presents only when the pushed VC has an inputAccessoryView AND canBecomeFirstResponder is true.
It does not matter whether the inputAccessoryView is UIKit or custom, has size, is visible, or anything else.
When I dismiss PresentedVC the animation is completed and the print statements show. Here are some unhappy print examples:
fade beginning -- delta: 5.003802061080933
fade beginning -- delta: 5.219511032104492
fade beginning -- delta: 5.73025906085968
fade beginning -- delta: 4.330522060394287
fade beginning -- delta: 4.786169052124023
CATransaction.flush() does not fix anything.
Removing the entire CATransaction block and just calling emitter.add(animation, forKey: nil) similarly does nothing - the birth rate decrease animation does not happen.
I am having trouble creating a simple demo project where the issue is reproducible (it is 100% reproducible in my code, the entirety of which I'm not going to link here), so I think getting a "solution" is unrealistic. What I would love is if anyone had suggestions on where else to look. Any ways to debug CAAnimation? I think if I can solve the last bullet - emitter.add(animation, forKey: nil) called without a CATransaction - I can crack this whole thing. Why would a CAAnimation added directly to a layer that is visible and doing stuff refuse to run?
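In case it helps anyone poke at this, here is a minimal way the animation could be instrumented (AnimationLogger and the explicit key are illustrative, not what my app actually uses):
import UIKit

final class AnimationLogger: NSObject, CAAnimationDelegate {
    func animationDidStart(_ anim: CAAnimation) {
        print("animation started: \(anim)")
    }
    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        print("animation stopped, finished: \(flag)")
    }
}

func addLoggedBirthRateAnimation(_ animation: CAKeyframeAnimation, to emitter: CAEmitterLayer, logger: AnimationLogger) {
    animation.delegate = logger
    // Using an explicit key makes it possible to query the layer for the animation later.
    emitter.add(animation, forKey: "birthRateFade")
    print("emitter animation keys: \(emitter.animationKeys() ?? [])")
}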