I found that the app reported a pure virtual function call crash, which I have not been able to reproduce.
The code references a third-party library:
https://github.com/lincf0912/LFPhotoBrowser
It provides smearing, blurring, and mosaic processing of images.
Code involved in the crash:
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable )(CIImage *ciimage))filterHandler
{
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Begin processing the image
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
I'm working on a very simple app where I need to visualize an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2-encoded image. I already have all the raw bytes saved in a Data object.
After googling for a long time, I still did not figure out the correct way. My current understanding is to first create a CVPixelBuffer that properly represents the encoding, then convert the CVPixelBuffer to a UIImage. The following is my current implementation.
public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        let tempBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: 1)
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_422YpCbCr16, // 'v216': 16-bit 4:2:2 Y'CbCr
                                     baseAddress,
                                     bytesPerRow,
                                     nil,
                                     nil,
                                     nil,
                                     tempBufferPointer)
        let ciImage = CIImage(cvPixelBuffer: tempBufferPointer.pointee!)
        return UIImage(ciImage: ciImage)
    }
}
However, when I execute the code, I get the following error:
-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.
So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB. But after a long time googling, I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg
But the function is too complex for me to understand how to use it.
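For reference, here is my rough sketch of how I think that vImage conversion might be wired up. The pixel-range values, the BT.601 matrix, and the kvImage422CbYpCrYp16 layout are all assumptions on my part and may well need adjusting:

import Accelerate
import UIKit

// Rough sketch: convert 16-bit 4:2:2 Y'CbCr data to ARGB8888 with vImage, then
// wrap the result in a CGImage/UIImage. The pixel-range values below are my
// guess at 16-bit video-range numbers and may need adjusting for the source.
func yuv422_16ToUIImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    // Describe the Y'CbCr -> RGB conversion (BT.601 matrix assumed).
    var pixelRange = vImage_YpCbCrPixelRange(
        Yp_bias: 4096, CbCr_bias: 32768,
        YpRangeMax: 60160, CbCrRangeMax: 61440,
        YpMax: 65535, YpMin: 0, CbCrMax: 65535, CbCrMin: 0)
    var conversionInfo = vImage_YpCbCrToARGB()
    guard vImageConvert_YpCbCrToARGB_GenerateConversion(
        kvImage_YpCbCrToARGBMatrix_ITU_R_601_4!,
        &pixelRange,
        &conversionInfo,
        kvImage422CbYpCrYp16,   // assumes the 16-bit Cb Y' Cr Y' layout
        kvImageARGB8888,
        vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

    var source = data
    return source.withUnsafeMutableBytes { rawBuffer -> UIImage? in
        var sourceBuffer = vImage_Buffer(
            data: rawBuffer.baseAddress,
            height: vImagePixelCount(height),
            width: vImagePixelCount(width),
            rowBytes: bytesPerRow)

        guard let destData = malloc(width * height * 4) else { return nil }
        defer { free(destData) }
        var destBuffer = vImage_Buffer(
            data: destData,
            height: vImagePixelCount(height),
            width: vImagePixelCount(width),
            rowBytes: width * 4)

        // nil permute map = keep ARGB channel order; alpha is forced to 255.
        guard vImageConvert_422CbYpCrYp16ToARGB8888(
            &sourceBuffer, &destBuffer, &conversionInfo,
            nil, 255, vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

        // Wrap the ARGB8888 buffer in a CGImage via a bitmap context (the data is copied).
        guard let context = CGContext(
            data: destData, width: width, height: height,
            bitsPerComponent: 8, bytesPerRow: width * 4,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
            let cgImage = context.makeImage() else { return nil }
        return UIImage(cgImage: cgImage)
    }
}

If that is roughly the right shape, I would still appreciate confirmation of the correct pixel range and layout constant for this data.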
Any help is appreciated. Thank you!
Hi,
In the Destinations sample code project and related WWDC talk on spatial video, it seems to imply that the video player will show 3D stereoscopic videos.
However, in the Photos app there's a vignetting in the simulator (and marketing material) when viewing spatial video — a portal kind of effect.
Without access to a device I'm wondering if my spatial videos are actually being played as 3D spatial videos in the AVPlayerController, since I'm not seeing the vignetting.
I'm thinking that the vignetting is a photos specific visual effect, but wanted to double check to make sure I'm not misunderstanding something about AVPlayerController.
Does anyone know if spatial videos played through AVPlayerController will appear as stereoscopic, even if the vignetting isn't there? Has anyone tried the Destinations sample code to play spatial videos on a device to confirm?
thanks!
Hello Apple Developer Community,
I'm excited to make my first post here and am seeking guidance for a feature I'd like to implement in my app. My objective is to enable users to select an image and crop it. Ideally, there should be a visible indicator, like a rectangle, to show the area that will be cropped. Upon clicking the save button, the image would be saved with the selected cropped area.
I'm aiming for functionality similar to the image editor in the Photos app. Is there a straightforward way to do this that adheres to Apple's native frameworks, without resorting to external GitLab repositories?
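For reference, the crop-and-save step I have in mind would be something like this minimal sketch using CGImage's cropping(to:); the draggable selection rectangle is the part I am unsure how to build natively, and the function name here is just for illustration:

import UIKit

// Crop a UIImage to a rect given in points of the displayed image.
func cropImage(_ image: UIImage, to cropRect: CGRect) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    // CGImage.cropping(to:) works in pixel coordinates, so scale the rect
    // if it was measured in points.
    let scale = image.scale
    let pixelRect = CGRect(x: cropRect.origin.x * scale,
                           y: cropRect.origin.y * scale,
                           width: cropRect.size.width * scale,
                           height: cropRect.size.height * scale)
    guard let croppedCGImage = cgImage.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: croppedCGImage,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}

Saving the result could then go through UIImageWriteToSavedPhotosAlbum or PHPhotoLibrary.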
Thank you in advance for your assistance.
Best regards,
Nicola
What are the Mac hardware and software requirements to decode and encode MV-HEVC video with AVFoundation?
Many of the new MV-HEVC-related keys require macOS 14.0+, so I'm guessing that macOS Sonoma or later is required on the software side.
What about processor architectures? I can read an MV-HEVC source on my Apple Silicon M1. But when I run the same code on my Intel Mac mini (2018) running Sonoma 14.3, AVAssetReader's startReading() returns false.
Similarly, when I try to create an AVAssetWriterInput with MV-HEVC output settings, I receive:
-[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] Compression property MVHEVCVideoLayerIDs is not supported for video codec type hvc1'
Is this because Intel-based Macs don't support MV-HEVC? Or am I missing something else?
How can AIGC be used in the workplace to assist with work?
Hi,
My 2017 MacBook Air appears to be crackling when using FaceTime; when using YouTube or playing music, the speakers are fine with no distortion. The Mac is fully up to date, on macOS Monterey 12.7.3.
Any help would be great; I have already tried adjusting the input and output levels.
Hi all!
As the title states, I'm looking to integrate with Apple Music's REST API. I'm wondering if there are any OpenAPI 3.0 YAML or JSON specs available anywhere?
I'd like to avoid transcribing the types found in the developer docs manually.
Here's a link to the Apple Music API docs and to the OpenAPI 3.0 spec:
https://developer.apple.com/documentation/applemusicapi
https://spec.openapis.org/oas/latest.html
OpenAPI was previously known as "Swagger", too.
Many thanks in advance!
I am trying to retrieve the same information about a song as is available on the music.apple.com page (see screenshot).
It seems that neither MusicKit nor the Apple Music API delivers that depth of information, for example the names listed for Production and Techniques.
Am I correct?
How can we change the image quality, size, camera, and cadence used during RoomPlan's scanning? We are getting the images from RoomCaptureSession.
When I try to play a video in the Apple Vision Pro simulator using a custom view with an AVPlayerLayer (as seen in my VideoPlayerView below), nothing displays but a black screen, while the audio for the video I'm trying to play plays in the background. I've tried everything I can think of to resolve this issue, but to no avail.
import SwiftUI
import AVFoundation
import AVKit

struct VideoPlayerView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.videoGravity = .resizeAspect
        view.layer.addSublayer(playerLayer)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        if let layer = uiView.layer.sublayers?.first as? AVPlayerLayer {
            layer.frame = uiView.bounds
        }
    }
}
I have noticed, however, that if I use the default VideoPlayer (as demonstrated below) instead of my custom VideoPlayerView, the video displays just fine, but any modifiers I apply to that VideoPlayer (like the ones in my custom struct above) cause the video to display black while the audio plays in the background.
import SwiftUI
import AVKit

struct MyView: View {
    var player: AVPlayer

    var body: some View {
        ZStack {
            VideoPlayer(player: player)
        }
    }
}
Does anyone know a solution that makes the video display properly instead of appearing as a black screen with audio playing in the background?
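One workaround I have been experimenting with is backing the representable with a UIView subclass whose backing layer is an AVPlayerLayer, so the layer always matches the view's bounds instead of being an unsized sublayer. This is only a sketch, not a confirmed fix:

import SwiftUI
import AVKit

// A UIView whose backing layer is an AVPlayerLayer, so it resizes with the view.
final class PlayerUIView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }
    var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }
}

struct LayerBackedVideoPlayerView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> PlayerUIView {
        let view = PlayerUIView(frame: .zero)
        view.playerLayer.player = player
        view.playerLayer.videoGravity = .resizeAspect
        return view
    }

    func updateUIView(_ uiView: PlayerUIView, context: Context) {
        // The backing layer tracks the view's bounds, so no manual frame updates are needed.
        uiView.playerLayer.player = player
    }
}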
I use a data cable to connect my Nikon camera to my iPhone. In my project I use the ImageCaptureCore framework. I can read the photos on the camera's memory card, but when I press the camera's shutter to take a picture, the camera does not respond, even though the connection between the camera and the phone is normal. The camera screen then shows a picture of a laptop. I don't know why that is. I hope someone can help me.
Hello,
I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)".
In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":
Thread 19 Queue : com.apple.portrait.effect_init (serial)
Is this a known bug? Is there a workaround for this case?
Best regards
An external webcam is detected by AVCam, but preview and capture are systematically upside down. (It may be the same for FaceTime HD cameras.)
Is this a known bug? Is there a workaround for this case?
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:
EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange
Below is the relevant code in the Recorder class.
public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?
    // (Other properties such as outputFormat, recordedFile, state, nc, and
    // hadSetupNotification are declared elsewhere in the module.)

    private func startRecording(options recordingOptions: RecordingOptions) throws {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)

        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!

        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )

        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            self.releaseAudioEngine()
            self.setupAudioEngine()
            if self.state == "recording" {
                // we could attempt to keep recording
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else {
                return
            }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine

        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                let frameCapacity =
                    AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength
                    / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!
                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}
Aside from that, the recording module works normally; it is just the configuration change that it does not handle well.
I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and so on). If I don't do that, the app also crashes, perhaps due to the mismatch.
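For what it's worth, this is a sketch of the defensive checks I am considering around engine setup. My guess (unconfirmed) is that the trap comes from installing a tap while the input momentarily reports a 0 Hz format during the configuration change, or from force-unwrapping AVAudioConverter:

import AVFoundation

// Sketch: bail out instead of crashing when the input is momentarily unavailable.
func makeConfiguredEngine(outputFormat: AVAudioFormat) -> (AVAudioEngine, AVAudioConverter)? {
    let engine = AVAudioEngine()
    let inputNode = engine.inputNode
    let inputFormat = inputNode.inputFormat(forBus: 0)

    // During a route/config change the input can briefly report a 0 Hz,
    // 0-channel format, and AVAudioConverter(from:to:) can return nil.
    guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0,
          let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
        return nil
    }

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        // ...convert with `converter` and write to the file, as in the code above...
    }
    return (engine, converter)
}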
AVAudioRecorder is not an option for me.
Thank you for your help.
I'm not sure when it happened, but I can no longer play explicit songs in my app using MusicKit v3.
I've turned off restrictions and made sure I have access to explicit content on...
My phone (Screen Time)
My computer (Screen Time)
My iPad (Screen Time)
music.apple.com (Settings)
and I still get this error in the console when I try to play a song:
`CONTENT_RESTRICTED: Content restricted
at set isRestricted (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:296791)
at SerialPlaybackController._prepareQueue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:318357)
at SerialPlaybackController._prepareQueue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:359408)
at set queue (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:308934)
at https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:357429
at Generator.next (<anonymous>)
at asyncGeneratorStep$j (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:351481)
at _next (https://js-cdn.music.apple.com/musickit/v3/musickit.js:28:351708)`
I'm exploring speech-to-command processing for a game, and would like to build a baseline of voice recognition within it that allows two people in close proximity to interact without interfering with each other's voice commands to the system.
(it’s for an accessible game idea)
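For the baseline recognition part, this is roughly the minimal Speech framework setup I have in mind. It only transcribes whatever the microphone hears; telling two nearby players apart is the part I have not solved, and the CommandListener type here is just for illustration:

import Speech
import AVFoundation

// Minimal sketch: stream microphone audio into SFSpeechRecognizer and hand
// each partial transcription to the game's command matcher.
final class CommandListener {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start(onCommand: @escaping (String) -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        self.request = request

        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let text = result?.bestTranscription.formattedString {
                onCommand(text)   // match against the game's command vocabulary here
            }
        }

        audioEngine.prepare()
        try audioEngine.start()
    }
}

Speech and microphone authorization (SFSpeechRecognizer.requestAuthorization plus the usage-description keys) would still be needed on top of this.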
My app is consistently crashing for a specific user on macOS 14.3 (iMac, 24-inch, M1, 2021) when their library is being retrieved in full. The user says they have 36k+ songs in their library, which includes purchased music.
This is the code making the call:
var request = MusicLibraryRequest<Album>()
request.limit = 10000
let response = try await request.response()
I'm aware of a similar (?) crash FB13094022 (https://forums.developer.apple.com/forums/thread/736717) that was claimed to be fixed in 14.2. I'm not sure if this is a separate issue or linked.
I’ve submitted new FB13573268 for it.
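As a workaround while this is investigated, I am planning to test paging the library request instead of fetching everything at once. A rough sketch, relying on MusicLibraryRequest's limit and offset (the 500-item page size is arbitrary):

import MusicKit

// Sketch: fetch the library in pages instead of one 10,000-item request.
func fetchAllLibraryAlbums() async throws -> [Album] {
    var albums: [Album] = []
    let pageSize = 500
    var offset = 0

    while true {
        var request = MusicLibraryRequest<Album>()
        request.limit = pageSize
        request.offset = offset
        let response = try await request.response()

        albums.append(contentsOf: response.items)
        if response.items.count < pageSize { break }  // last page reached
        offset += pageSize
    }
    return albums
}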
CrashReporter Key: 0455323d871db6008623d9288ecee16c676248c6
Hardware Model: iMac21,1
Process: Music Flow
Identifier: com.third.musicflow
Version: 1.2
Role: Foreground
OS Version: Mac OS 14.3
NSInternalInconsistencyException: No identifiers for model class: MPModelSong from source: (null)
0 CoreFoundation +0xf2530 __exceptionPreprocess
1 libobjc.A.dylib +0x19eb0 objc_exception_throw
2 Foundation +0x10f398 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:]
3 MediaPlayer +0xd59f0 -[MPBaseEntityTranslator _objectForPropertySet:source:context:]
4 MediaPlayer +0xd574c -[MPBaseEntityTranslator _objectForRelationshipKey:propertySet:source:context:]
5 MediaPlayer +0xd5cd4 __63-[MPBaseEntityTranslator _objectForPropertySet:source:context:]_block_invoke_2
6 CoreFoundation +0x40428 __NSDICTIONARY_IS_CALLING_OUT_TO_A_BLOCK__
7 CoreFoundation +0x402f0 -[__NSDictionaryI enumerateKeysAndObjectsWithOptions:usingBlock:]
8 MediaPlayer +0xd5c1c __63-[MPBaseEntityTranslator _objectForPropertySet:source:context:]_block_invoke
9 MediaPlayer +0x11296c -[MPModelObject initWithIdentifiers:block:]
10 MediaPlayer +0xd593c -[MPBaseEntityTranslator _objectForPropertySet:source:context:]
11 MediaPlayer +0xd66c4 -[MPBaseEntityTranslator objectForPropertySet:source:context:]
12 MediaPlayer +0x1a7744 __47-[MPModeliTunesLibraryRequestOperation execute]_block_invoke
13 iTunesLibrary +0x16d84 0x1b4e1cd84 (0x1b4e1cd30 + 84)
14 CoreFoundation +0x5dec0 __invoking___
15 CoreFoundation +0x5dd38 -[NSInvocation invoke]
16 Foundation +0x1e874 __NSXPCCONNECTION_IS_CALLING_OUT_TO_REPLY_BLOCK__
17 Foundation +0x1cef4 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:]
18 Foundation +0x1c850 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_3
19 libxpc.dylib +0x10020 _xpc_connection_reply_callout
20 libxpc.dylib +0xff18 _xpc_connection_call_reply_async
21 libdispatch.dylib +0x398c _dispatch_client_callout3
22 libdispatch.dylib +0x21384 _dispatch_mach_msg_async_reply_invoke
23 libdispatch.dylib +0xad24 _dispatch_lane_serial_drain
24 libdispatch.dylib +0xba04 _dispatch_lane_invoke
25 libdispatch.dylib +0x16618 _dispatch_root_queue_drain_deferred_wlh
26 libdispatch.dylib +0x15e8c _dispatch_workloop_worker_thread
27 libsystem_pthread.dylib +0x3110 _pthread_wqthread
28 libsystem_pthread.dylib +0x1e2c start_wqthread
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photo? I can save the Live Photo, but I can't set it as wallpaper; I get the error "Motion is not available in iOS 17". There are already applications that allow you to do this (VideoToLive and the like). What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-loving, make friends and dating app. I have gone as far as purchasing a book "Exploring MusicKit" by Rudrank Riyam but to no avail.
I have trained a model to classify some symbols using Create ML.
In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data.
If I use a CVPixelBuffer obtained from an AVCaptureSession then the classifier runs as I would expect. If I point it at the symbols it will work fairly accurately, so I know the model is trained fairly correctly and works in my app.
If I try to use a cgImage that is obtained by cropping a section out of a larger image (from the gallery), then the classifier does not work. It always seems to return the same result (although the confidence is not exactly 1.0 and varies for each image, it is within several decimal places of it, e.g. 0.9999).
If I pause the app when I have the cropped image and use the debugger to obtain the cropped image (via the little eye icon and then open in preview), then drop the image into the Preview section of the MLModel file or in Create ML, the model correctly classifies the image.
If I scale the cropped image to be the same size as I get from my camera, and convert the cgImage to a CVPixelBuffer with the same size and colour space as the camera (1504, 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options. So I know that 'something' is happening, but it's not the correct thing.
I was under the impression that passing a cgImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML this conversion is obviously being done behind the scenes because the cropped part is being detected.
What am I doing wrong?
tl;dr
my model works, as backed up by using video input directly and also dropping cropped images into preview sections
passing the cropped images directly to the VNImageRequestHandler does not work
modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.
I'd like my app to behave the same way the preview part behaves, I give it a cropped part of an image, it does some processing, it goes to the classifier, it returns a result same as in Create ML.
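For what it's worth, the next thing I plan to try is redrawing the cropped cgImage into a fresh bitmap before handing it to Vision, with an explicit imageCropAndScaleOption. This is a guess at what the Create ML preview might be doing implicitly, not a confirmed fix, and the 299 x 299 target size is just an assumption about my model's input:

import UIKit
import Vision

// Sketch: normalize a cropped CGImage (size, stride, color space) by redrawing
// it, then classify it with a VNCoreMLRequest.
func classify(cropped cgImage: CGImage, with model: VNCoreMLModel,
              targetSize: CGSize = CGSize(width: 299, height: 299),
              completion: @escaping ([VNClassificationObservation]) -> Void) {
    // Redraw into a plain bitmap so quirks of the cropped image don't leak
    // into the request.
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = 1
    let normalized = UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
        UIImage(cgImage: cgImage).draw(in: CGRect(origin: .zero, size: targetSize))
    }
    guard let normalizedCG = normalized.cgImage else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        completion(request.results as? [VNClassificationObservation] ?? [])
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: normalizedCG, orientation: .up, options: [:])
    try? handler.perform([request])
}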