Hello, can anybody help me with this? I am downloading a video to the file system, and when I give that URL to the player it gives me this error. It only happens with m3u8; other formats like mp4 work fine locally. Please help!
{"error": {"code": -12865, "domain": "CoreMediaErrorDomain", "localizedDescription": "The operation couldn’t be completed. (CoreMediaErrorDomain error -12865.)", "localizedFailureReason": "", "localizedRecoverySuggestion": ""}, "target": 13367}
import MusicKit

struct AlbumDetails: Hashable {
    let artistId: String?
}

func fetchAlbumDetails(upc: String) async throws -> AlbumDetails {
    let request = MusicCatalogResourceRequest<Album>(matching: \.upc, equalTo: upc)
    let response = try await request.response()
    guard let album = response.items.first else {
        throw NSError(domain: "AlbumNotFound", code: 0, userInfo: nil)
    }
    return AlbumDetails(artistId: album.artists?.first?.id.rawValue)
}

// Call site (inside an async context):
do {
    let details = try await fetchAlbumDetails(upc: upc)
    print("Artist ID: \(details.artistId ?? "nil")")
} catch {
    print("Error fetching artist ID: \(error)")
}
With this function I can return nearly everything except the artist ID, so I know it's not a problem with the request, but there has to be a way to get the artist ID. If anyone has a solution to this I would really appreciate it.
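For comparison, a minimal sketch assuming the artists relationship simply isn't loaded by default on the catalog album, so it is requested explicitly via the extended properties before reading the ID:

import MusicKit

// Hedged sketch: load the album's artists relationship before reading the artist ID.
func fetchArtistID(upc: String) async throws -> MusicItemID? {
    let request = MusicCatalogResourceRequest<Album>(matching: \.upc, equalTo: upc)
    let response = try await request.response()
    guard let album = response.items.first else { return nil }
    // Ask MusicKit to populate the artists relationship for this album.
    let detailedAlbum = try await album.with([.artists])
    return detailedAlbum.artists?.first?.id
}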
Hello! I'm trying to save videos asynchronously. I've already used performChanges without the completionHandler, but it didn't work. Can you give me an example? Consider that the variable with the file URL is named fileURL. What would this look like asynchronously?
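A minimal sketch of the async/await form, assuming the video file at fileURL is already on disk and add-to-library permission has been granted:

import Photos

func saveVideoAsync(fileURL: URL) async {
    do {
        // The async variant of performChanges throws instead of reporting through a completion handler.
        try await PHPhotoLibrary.shared().performChanges {
            _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
        }
        print("Video saved to the photo library.")
    } catch {
        print("Saving video failed: \(error)")
    }
}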
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone.
I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
I want to capture sound data from the microphone and forward it to a live streaming API.
To my mind, from what I've read, I need AVAudioEngine to do this but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS or iPadOS but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now.
Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
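For what it's worth, a hedged sketch of one approach on macOS: AVAudioEngine's input node follows a Core Audio device, so the AVCaptureDevice's uniqueID (which for audio devices matches the Core Audio device UID) can be translated to an AudioDeviceID and assigned to the input node's underlying audio unit. The function names below are illustrative, and this is an assumption about the setup rather than a drop-in solution.

import AVFoundation
import AudioToolbox
import CoreAudio

// Translate a Core Audio device UID (e.g. an AVCaptureDevice's uniqueID) into an AudioDeviceID.
func audioDeviceID(forUID uid: String) -> AudioDeviceID? {
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyTranslateUIDToDevice,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMain)
    var deviceID = AudioDeviceID(0)
    var deviceIDSize = UInt32(MemoryLayout<AudioDeviceID>.size)
    var cfUID = uid as CFString
    let status = withUnsafeMutablePointer(to: &cfUID) { uidPointer in
        AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                   &address,
                                   UInt32(MemoryLayout<CFString>.size),
                                   uidPointer,
                                   &deviceIDSize,
                                   &deviceID)
    }
    return status == noErr ? deviceID : nil
}

// Point the engine's input node at the USB microphone and render it to the default output.
func connectMicrophone(_ captureDevice: AVCaptureDevice, to engine: AVAudioEngine) throws {
    guard let deviceID = audioDeviceID(forUID: captureDevice.uniqueID),
          let audioUnit = engine.inputNode.audioUnit else { return }
    var currentDevice = deviceID
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &currentDevice,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    guard status == noErr else { return }
    // Connecting the input node to the main mixer sends the microphone to the speakers;
    // a tap on the input node could forward the same buffers to a streaming API.
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: engine.inputNode.inputFormat(forBus: 0))
    try engine.start()
}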
My app uses camera and photo library. I found that if a user follows certain steps, they will no longer be able to change the photo permissions for my app in the Settings app.
The steps are as follows:
1. Press the camera button in the app to launch the camera.
2. Take a picture with camera permissions granted.
3. Grant ".addOnly" permission to the photo library.
4. Press the photo library button in the app to read the photo library.
5. Deny ".readWrite" permission to the photo library.
After step 5, the Settings app only shows a toggle for the ".addOnly" permission, but not for the ".readWrite" permission.
I am aware that in iOS 14 or later, the permission required after a photo is taken with the camera should be ".addOnly". Therefore, I suspect that this problem is occurring in other apps as well.
So far I have worked around this problem in my app, but is this the expected behavior of the Settings app? If so, how can I avoid this problem?
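For reference, a minimal sketch of how I'm checking the two access levels separately, since .addOnly and .readWrite are tracked independently; this only illustrates the permission model, not a fix for the Settings behavior.

import Photos

func logPhotoAuthorizationStatus() {
    // The two access levels have independent statuses.
    let addOnly = PHPhotoLibrary.authorizationStatus(for: .addOnly)
    let readWrite = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    print("addOnly status: \(addOnly.rawValue), readWrite status: \(readWrite.rawValue)")

    if readWrite == .notDetermined {
        // Prompts for full library access; .addOnly may already be granted from the camera flow.
        PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
            print("readWrite after prompt: \(status.rawValue)")
        }
    }
}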
Since iOS 17.2, the video player in Safari becomes black if I jump forward in an HLS video stream. I only hear the sound of the video. If I close full screen and reopen it, the video continues normally.
I checked if the source meets all the requirements mentioned here and it does.
Does anybody have the same issue or maybe a solution for this problem?
Per the FairPlay Streaming Programming Guide:
The SPC includes a specific TLLV to provide the state of the media content playback, and the total value length of this is 16 in decimal.
Here I'm trying to retrieve the playback state, which is in the 20-23 byte range.
byte[] mediaPlaybackStateBlock = getBlock(MEDIA_PLAYBACK_STATE).getValueData();
playbackState = Arrays.copyOfRange(mediaPlaybackStateBlock, 20, 24);
I end up with this issue: arraycopy: length -4 is negative.
I'm a bit confused about how to retrieve the playback state from the 20-23 byte range when the value length is just 16.
Kindly clarify.
Is it possible using MusicKit API's to access the About information displayed on an artist page in Apple Music?
I hoped Artist.editorialNotes would give me the information, but there is scarce information in there. Even for Taylor Swift, only editorialNotes.short displays brief info: "The genre-defying singer-songwriter is the voice of a generation."
If currently not possible, are there plans for it in the future?
Also, with the above in mind, and having never seen editorialNotes for a song, is it safe to assume editorialNotes are mainly used for albums?
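For reference, the lookup I'm using to compare the short and standard notes (a minimal sketch; it assumes catalog search is an acceptable way to resolve the artist):

import MusicKit

func printEditorialNotes(artistName: String) async throws {
    var request = MusicCatalogSearchRequest(term: artistName, types: [Artist.self])
    request.limit = 1
    let response = try await request.response()
    guard let artist = response.artists.first else { return }
    // Both fields are optional; in my tests only `short` tends to be populated for artists.
    print("short:", artist.editorialNotes?.short ?? "none")
    print("standard:", artist.editorialNotes?.standard ?? "none")
}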
Hello,
I came across the Object Capture for iOS example from WWDC23, which utilizes LiDAR sensor.
However, I’m interested in using the TrueDepth camera system instead.
What I have tried is to save depth photos (.HEIC) to the Images/ folder (based on modifying the example below), which is hopefully used by the Photogrammetry session. But I haven’t been successful so far in starting the 3D reconstruction.
Could there be something I’ve missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use?
Thank you for your assistance.
import AVFoundation
import UIKit

class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let captureSession = AVCaptureSession()

    override init() {
        super.init()
        setupCaptureSession()
    }

    func setupCaptureSession() {
        // Get the front camera (TrueDepth camera)
        guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
            print("Unable to access front camera!")
            return
        }
        do {
            // Create an input object from the camera
            let input = try AVCaptureDeviceInput(device: frontCamera)
            // Add the input to the capture session
            captureSession.addInput(input)
        } catch {
            print("Unable to create AVCaptureDeviceInput: \(error)")
        }
        // Add the photo output to the capture session
        captureSession.addOutput(photoOutput)
        // Check if depth data capture is supported.
        // Note: this must be checked after the output is added to a session with a depth-capable input,
        // otherwise it always reports false.
        if photoOutput.isDepthDataDeliverySupported {
            // Enable depth data capture
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        // Start the capture session
        captureSession.startRunning()
    }

    func captureDepthPhoto() {
        // Create a photo settings object
        let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        // Capture a photo with depth data
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    // Implement the AVCapturePhotoCaptureDelegate method
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else {
            print("Error while generating image from photo capture data.")
            return
        }
        // Get the documents directory
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        // Append the image directory and a unique image name
        let imagesDirectory = documentsDirectory.appendingPathComponent("Images")
        // Make sure the Images/ directory exists before writing into it.
        try? FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
        let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
        do {
            // Write the image data to the file
            try imageData.write(to: fileURL)
            print("Saved photo with depth data to \(fileURL)")
        } catch {
            print("Failed to write the image data to disk: \(error)")
        }
    }
}
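In case it helps, a minimal sketch of how I'm feeding the saved Images/ folder to a PhotogrammetrySession; the folder and output URLs are illustrative, and whether TrueDepth HEICs are accepted as input is exactly the open question.

import RealityKit

func reconstructModel(from imagesFolder: URL, to outputURL: URL) async throws {
    // Point the session at the folder of captured HEICs and request a model file.
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputURL.path)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}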
- (void)cameraDevice:(ICCameraDevice *)camera
  didReceiveMetadata:(NSDictionary * _Nullable)metadata
             forItem:(ICCameraItem *)item
               error:(NSError * _Nullable)error API_AVAILABLE(ios(13.0)) {
    NSLog(@"metadata = %@", metadata);
    if (item) {
        ICCameraFile *file = (ICCameraFile *)item;
        NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
        downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
        NSDictionary *downloadOptions = @{ ICDownloadsDirectoryURL: downloadsDirectoryURL,
                                           ICSaveAsFilename: item.name,
                                           ICOverwrite: @YES,
                                           ICDownloadSidecarFiles: @YES };
        [self.cameraDevice requestDownloadFile:file
                                       options:downloadOptions
                              downloadDelegate:self
                           didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:)
                                   contextInfo:nil];
    }
}

- (void)didDownloadFile:(ICCameraFile *)file
                  error:(NSError * _Nullable)error
                options:(NSDictionary<NSString *, id> *)options
            contextInfo:(void * _Nullable)contextInfo API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Download failed with error: %@", error);
    } else {
        NSLog(@"Download completed for file: %@", file);
    }
}
I don't know what's wrong, and I don't know if this is the right way to get the camera pictures. I hope someone can help me.
I found that the app reported a crash caused by a pure virtual function call, which I could not reproduce.
A third-party library is referenced:
https://github.com/lincf0912/LFPhotoBrowser
It implements smearing, blurring, and mosaic processing of images.
Crash code:
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable)(CIImage *ciimage))filterHandler
{
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Begin processing the image
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
I'm working on a very simple app where I need to display an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2 encoded image. I already have all the raw bytes saved in a Data object.
After googling for a long time, I still did not figure out the correct way. My current understanding is to first create a CVPixelBuffer to properly represent the encoding information, then convert the CVPixelBuffer to a UIImage. The following is my current implementation.
public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_422YpCbCr16,
                                     baseAddress,
                                     bytesPerRow,
                                     nil,
                                     nil,
                                     nil,
                                     &pixelBuffer)
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
        return UIImage(ciImage: ciImage)
    }
}
However, when I execute the code, I get the following error:
-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.
So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg
But the function is too complex for me to understand how to use it.
Any help is appreciated. Thank you!
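One thing I'm going to try, shared here as a hedged sketch rather than a confirmed fix: YUY2 is 8 bits per component (16 bits per pixel), so the buffer may match kCVPixelFormatType_422YpCbCr8_yuvs rather than the 16-bit-per-component 'v216' format that CIImage rejects. Whether Core Image accepts 'yuvs' buffers is an assumption worth verifying.

import CoreImage
import CoreVideo
import UIKit

func yuy2ToUIImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer -> UIImage? in
        guard let baseAddress = rawPointer.baseAddress else { return nil }
        var pixelBuffer: CVPixelBuffer?
        // 'yuvs' is 4:2:2, 8-bit, ordered Y0 Cb Y1 Cr (YUY2).
        let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                  width,
                                                  height,
                                                  kCVPixelFormatType_422YpCbCr8_yuvs,
                                                  baseAddress,
                                                  bytesPerRow,
                                                  nil, nil, nil,
                                                  &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
        // Note: CVPixelBufferCreateWithBytes wraps the bytes without copying,
        // so the image should be rendered before `data` goes out of scope.
        return UIImage(ciImage: CIImage(cvPixelBuffer: buffer))
    }
}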
Hi,
The Destinations sample code project and the related WWDC talk on spatial video seem to imply that the video player will show 3D stereoscopic videos.
However, in the Photos app there's a vignetting in the simulator (and marketing material) when viewing spatial video — a portal kind of effect.
Without access to a device I'm wondering if my spatial videos are actually being played as 3D spatial videos in the AVPlayerController, since I'm not seeing the vignetting.
I'm thinking that the vignetting is a photos specific visual effect, but wanted to double check to make sure I'm not misunderstanding something about AVPlayerController.
Does anyone know if spatial videos played through AVPlayerController will appear as stereoscopic, even if the vignetting isn't there? Has anyone tried the Destinations sample code to play spatial videos on a device to confirm?
thanks!
Hello Apple Developer Community,
I'm excited to make my first post here and am seeking guidance for a feature I'd like to implement in my app. My objective is to enable users to select an image and crop it. Ideally, there should be a visible indicator, like a rectangle, to show the area that will be cropped. Upon clicking the save button, the image would be saved with the selected cropped area.
I'm aiming for functionality similar to the image editor in the Photos app. Is there a straightforward method or integration for this that adheres to Apple's native frameworks, without resorting to external GitLab repositories?
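One first-party option I'm considering, shown here as a minimal sketch: UIImagePickerController's built-in editing UI already displays a crop rectangle and hands back the cropped image, though it is limited to the system (square) crop.

import UIKit

class CropPickerViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func presentPicker() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.allowsEditing = true // shows the crop rectangle over the selected image
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // .editedImage is the image cropped to the rectangle the user chose.
        if let cropped = info[.editedImage] as? UIImage {
            UIImageWriteToSavedPhotosAlbum(cropped, nil, nil, nil)
        }
        picker.dismiss(animated: true)
    }
}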
Thank you in advance for your assistance.
Best regards,
Nicola
What are the Mac hardware and software requirements to decode and encode MV-HEVC video with AVFoundation?
Many of the new MV-HEVC-related keys require macOS 14.0+, so I'm guessing that macOS Sonoma or later is required on the software side.
What about processor architectures? I can read an MV-HEVC source on my Apple Silicon M1. But when I run the same code on my Intel Mac mini (2018) running Sonoma 14.3, AVAssetReader's startReading() returns false.
Similarly, when I try to create an AVAssetWriterInput with MV-HEVC output settings, I receive:
-[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] Compression property MVHEVCVideoLayerIDs is not supported for video codec type hvc1'
Is this because Intel-based Macs don't support MV-HEVC? Or am I missing something else?
How can AIGC be used in the workplace to assist with work?
Hi,
My 2017 MacBook Air's speakers appear to crackle when using FaceTime; when using YouTube or playing music the speakers are fine, with no distortion. The Mac is fully up to date, running macOS Monterey 12.7.3.
Any help would be great. I have tried adjusting the input and output levels.
Hi all!
As the title states, I'm looking to integrate with Apple Music's REST API. I'm wondering if there are any OpenAPI 3.0 YAML or JSON specs available anywhere?
I'd like to avoid transcribing the types found in the developer docs manually.
Here are links to the Apple Music API docs and to the OpenAPI 3.0 spec:
https://developer.apple.com/documentation/applemusicapi
https://spec.openapis.org/oas/latest.html
Open API was previously known as "Swagger" too.
Many thanks in advance!
I am trying to retrieve the same information about a song as is available on the music.apple.com page (see screenshot).
It seems that neither MusicKit nor the Apple Music API delivers that depth of information, for example the names listed under Production and Techniques.
Am I correct?
How can we change the image quality, size, camera, and cadence used during RoomPlan's scanning? We are getting the images from RoomCaptureSession.