After investing more than a week in converting a number of Audio Unit projects into the app + appex + framework structure, they are all now loaded in-process correctly by the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
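For reference, each appex's AudioComponents entry points at the shared framework via the AudioComponentBundle key, and the demo host instantiates the units in-process roughly like this (a sketch; the component type/subtype/manufacturer values below are placeholders for my plug-ins):

import AudioToolbox
import AVFAudio

// Placeholder component description – real values come from the appex's Info.plist.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656D6F,       // placeholder
    componentManufacturer: 0x61636D65,  // placeholder
    componentFlags: 0,
    componentFlagsMask: 0
)

// Requesting in-process loading; this succeeds in the template host.
AVAudioUnit.instantiate(with: desc, options: .loadInProcess) { avAudioUnit, error in
    if let error {
        print("In-process instantiation failed: \(error)")
    } else if let avAudioUnit {
        print("Loaded in-process: \(avAudioUnit.auAudioUnit)")
    }
}

Logic Pro presumably performs its own instantiation, so the question is whether it ever passes the in-process option for third-party units.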
I cannot mirror or extend my screen from a Mac mini M2 to a 10th-generation iPad. Whenever I click "mirror or extend screen", my Mac's external display shows "no signal" and then refreshes, while my iPad locks, and the mirroring or extending fails. However, I can mirror my iPad's screen to the Mac mini M2. Everything was working earlier; it suddenly stopped.
Is anyone developing a way to wirelessly control, from another iOS or iPadOS device, an iOS or iPadOS device that is playing Apple Music over USB to a DAC feeding an amp? I mean full control: not Accessibility, not Apple TV, not HomePods, not firmware-downgraded AirPort Expresses into a DAC, and not the other hacks suggested over the past decade that audiophiles have been asking for this "Connect"-like feature. Exclusive mode is the key capability iOS and iPadOS offer here; what is missing is full, or nearly full, Apple Music control of that device while sitting on a couch or in a wheelchair across the room.
Hi Apple Engineer,
My app uses the ImageCapture framework to connect to a DSLR camera. This worked before iOS 18, but after upgrading my iPhone and iPad the app can no longer connect to the DSLR. Under Settings -> Privacy & Security -> Files and Folders, my app no longer appears. I'm certain this worked before iOS 18.
Other developers are reporting the same problem:
https://forums.developer.apple.com/forums/thread/756960
https://developer.apple.com/forums/thread/765768
I also found a way to reproduce this problem on iOS 18: perform "Reset All Settings".
Can you help me with this problem, or tell me how to use the API properly? Looking forward to your reply. Thank you very much.
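For reference, the app discovers the camera through ImageCaptureCore's device browser, roughly like this (a simplified sketch of our setup; nothing in this code changed between iOS 17 and iOS 18):

import ImageCaptureCore

final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()  // On iOS 17 the DSLR shows up; on iOS 18 no device is ever reported.
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device: \(device.name ?? "unknown")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Removed device: \(device.name ?? "unknown")")
    }
}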
Hi community,
I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get this error:
AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"
If I understand correctly, error code -10868 (kAudioUnitErr_FormatNotSupported) means the format is not accepted. But how can I use the PCM Int16 format? Here is my method:
- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
    if (self.isRunning) {
        return;
    }

    self.audioEngine = [[AVAudioEngine alloc] init];
    self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:self.audioPlayerNode];

    AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
    self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                        sampleRate:sampleRate
                                                          channels:channelCount
                                                       interleaved:YES];

    NSLog(@"Audio Format: %@", self.audioFormat);
    NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
    NSLog(@"Audio Engine: %@", self.audioEngine);

    // Error on this line
    [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

    /**
    NSError *error = nil;
    if (![self.audioEngine startAndReturnError:&error]) {
        NSLog(@"Error initializing the audio engine: %@", error);
        return;
    }
    [self.audioPlayerNode play];
    self.isRunning = YES;
    */
}
Also, I see that the audioEngine doesn't seem to be running:
Audio Engine:
________ GraphDescription ________
AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1
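In case it clarifies what I'm trying to achieve: the incoming data is interleaved Int16 PCM. The workaround I'm currently considering is to connect the player node with the standard Float32 format and convert each Int16 buffer with AVAudioConverter before scheduling it. A rough sketch (sample rate and channel count are hard-coded here, and the schedule(_:) helper is just illustrative):

import AVFAudio

let sampleRate = 48_000.0
let channels: AVAudioChannelCount = 2

// Int16 interleaved input format and the engine's native Float32 format.
let int16Format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                sampleRate: sampleRate,
                                channels: channels,
                                interleaved: true)!
let floatFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                channels: channels)!

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: floatFormat)

let converter = AVAudioConverter(from: int16Format, to: floatFormat)!

// Convert one incoming Int16 buffer and schedule the result on the player node.
func schedule(_ int16Buffer: AVAudioPCMBuffer) {
    guard let floatBuffer = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                             frameCapacity: int16Buffer.frameLength) else { return }
    var delivered = false
    var error: NSError?
    let status = converter.convert(to: floatBuffer, error: &error) { _, outStatus in
        if delivered {
            outStatus.pointee = .noDataNow
            return nil
        }
        delivered = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    if status != .error, error == nil {
        player.scheduleBuffer(floatBuffer, completionHandler: nil)
    }
}

do {
    try engine.start()
    player.play()
} catch {
    print("Engine start failed: \(error)")
}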
Has anyone already used this format with AVAudioFormat?
Thank you!
I feel that the iOS 18 camera filters are overcomplicated and produce lower-quality results than the iOS 17 filters. I really miss the Vivid filter; it was perfect on iOS 17.
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines.
First, after the AVAssetWriterInput is created:
frameInput.performsMultiPassEncodingIfSupported = true
Second, after the call to multiviewWriter.startWriting():
print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")
Which prints true.
This leads me to believe that the first encoding pass should proceed as normal (even though I haven't yet handled the logic for completing the first pass, etc.).
However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:
Fatal error: Failed to append tagged buffers to multiview output
Am I missing a step? Or is the multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
We have a new photo sharing app (https://photodare.ca).
We've had no issues with photos loading in North America and the Caribbean, but so far two users (Germany, Netherlands) say they can't load photos, even though they've confirmed that photo permissions are enabled.
I can't reproduce this in Canada.
Does anyone know of other permissions we need to set up for European countries, or is anyone in a GDPR country willing to try this for us?
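For reference, the permission check we run before loading photos looks roughly like this (a simplified sketch), so the usual denied/limited states should already be handled:

import Photos

// Simplified sketch of the check we run before loading the user's photos.
func ensurePhotoAccess(_ completion: @escaping (Bool) -> Void) {
    switch PHPhotoLibrary.authorizationStatus(for: .readWrite) {
    case .authorized, .limited:
        completion(true)
    case .notDetermined:
        PHPhotoLibrary.requestAuthorization(for: .readWrite) { newStatus in
            completion(newStatus == .authorized || newStatus == .limited)
        }
    default:
        completion(false)
    }
}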
They were on 17.6.1.
Thanks either way
Task {
    for await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
        switch update {
        case .initial(let urls):
            print("frank: init \(urls)")
            await MainActor.run {
                let label = UILabel(frame: CGRect(x: 100, y: 100, width: 100, height: 30))
                label.text = "frank test"
                label.textColor = .black
                UIViewController.getTop().view.addSubview(label)
            }
        case .added(let url):
            print("frank: add \(url)")
        case .removed(let url):
            print("frank: removed \(url)")
        default:
            break
        }
    }
}
Why is 'case .initial(let urls)' never executed? Can someone provide sample code?
Hi:
I am working with the ObjectCapture frameworks and sample code.
Everything works great.
We are trying to go from capturing 12 MP images, as in the sample code, to capturing 48 MP (6048 × 8064) images.
We can't seem to get it to work.
Any advice here?
This idea would be cool: when people finish recording a video and later realize there's something else worth capturing, they can currently only create a second clip. What if it were possible to reopen the first video and continue recording from where they left off? That would be a great convenience for many people.
With iOS 18.1 having call recording out of the box, is it now possible to build apps that can record calls?
I could not find anything about this in the Swift/iOS documentation yet.
I'm seeking information about the original file schema for an m4a file recorded directly on an iPhone (iPhone 5 running iOS 9.2.0).
I currently have two files from which I extracted metadata using ExifTool.
The first file was provided to me by someone who claims it was recorded on an iPhone 5 with iOS 9.2.0. I would like to verify whether this file has been edited.
File Permissions: -rwx------
Content Create Date: 2016:03:01 14:21:08+07:00
The second file was recorded by me on the same device model and iOS version.
File Permissions: -rw-r--r--
Date/Time Original: 2024:10:03 11:44:16+07:00
As you can see, the file permissions differ, and the metadata key for the recording date also differs: one uses "Content Create Date" while the other uses "Date/Time Original". I would like to determine whether the first file was edited, but I haven't found any official documentation on the m4a schema or the metadata structure written by audio recorder apps. I reached out to support, and they directed me to this forum. Any insights or help would be appreciated.
Hello developer community.
I recently purchased my new iPhone 16 Pro Max; it is a premium device with great overall quality.
However, I am having big trouble shooting in ProRAW Max (48 MP mode) with the native camera.
To be clear, the problem I describe does not happen in third-party apps such as ProCam; only with the native camera.
When I shoot in ProRAW Max and then view the photo in the gallery, the image doesn't load and render properly. When I zoom in all the way, I can see pixelated areas, artifacts, very low resolution, and excessive denoising.
For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos with ProCam (same settings and configuration) on the 16 Pro Max. I take the photo, open the gallery, and see full detail when zoomed to 100%.
I have tried formatting the phone and reinstalling the software via my Mac. I also searched some forums to see whether anyone has the same issue, but the information available so far is very limited.
I'm in contact with Apple support in my country (Portugal), and they have escalated this problem to the engineers (that's what I've been told).
They ran all the tests remotely (via analytics and diagnostics) and told me that my phone's hardware is perfect. I will wait to be contacted again in the next few days.
I'm on iOS 18.0.1 (the latest software available at this time).
I tried multiple 16 Pro Max units from friends, family, and stores (roughly 10 units), and they all showed exactly the same problem.
I'm a professional photographer, so I find this frustrating and unacceptable.
I would appreciate any additional suggestion or information. Thank you!
I cannot add photos or files because they are bigger than 5 MB.
We have been testing the iOS AAC-LC encoder using the AudioToolbox framework. No matter whether we set mManufacturer to kAppleHardwareAudioCodecManufacturer or kAppleSoftwareAudioCodecManufacturer, it always runs on the CPU.
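For reference, the converter is created roughly like this (a sketch; the stream descriptions are abbreviated and the sample rate/channel layout are placeholders), with the class description's mManufacturer switched between the two constants:

import AudioToolbox

// Abbreviated LPCM source and AAC-LC destination formats (placeholder values).
var srcFormat = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
    mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)

var dstFormat = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatMPEG4AAC,
    mFormatFlags: 0,
    mBytesPerPacket: 0, mFramesPerPacket: 1024, mBytesPerFrame: 0,
    mChannelsPerFrame: 2, mBitsPerChannel: 0, mReserved: 0)

// Switching mManufacturer between kAppleHardwareAudioCodecManufacturer and
// kAppleSoftwareAudioCodecManufacturer makes no observable difference:
// encoding always runs on the CPU.
var codecDescription = AudioClassDescription(
    mType: kAudioEncoderComponentType,
    mSubType: kAudioFormatMPEG4AAC,
    mManufacturer: kAppleHardwareAudioCodecManufacturer)

var converter: AudioConverterRef?
let status = AudioConverterNewSpecific(&srcFormat, &dstFormat, 1,
                                       &codecDescription, &converter)
print("AudioConverterNewSpecific status: \(status)")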
We are working on an app for the Vision Pro that has a high polygon count and lots of high-resolution textures. Everything looks smooth and, in general, very good. The issue is that the moment we turn on Voice Control, even if it is not being used, the visuals at the center start to stutter left to right. Has anyone seen this? It must be a bug; is there any workaround?
Thanks,
Guillermo
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device plays audio through AirPods Pro 2, we can hear noise similar to peak clipping and distortion, especially when the source audio is loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem.
We also found two phenomena:
If we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem;
If we use an iOS 17 device to connect to the same AirPods Pro 2 for testing, there is no such noise problem;
Therefore, we suspect a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, released on June 26, while iOS 18.0 was released on September 16; perhaps they are not fully compatible. We hope a subsequent firmware update can fix this problem.
I’m working on an iOS app for a client, and I have a question regarding a specific feature we're looking to implement.
We want the app to respond to a user pressing the volume button three times while the app is in the background. The goal is to allow users to discreetly trigger a safety feature without drawing attention, particularly in situations where they may be in danger or at risk.
This feature is critical for the app and would be a valuable addition, as it could potentially help protect users in emergency situations. However, I haven’t found much information on whether iOS allows background listening for volume button presses. Therefore, I would greatly appreciate your insights on the following:
Is it possible to listen for volume button presses when the app is in the background, or are there system-level restrictions that prevent this?
If it's not directly possible, are there any special provisions, APIs, or entitlements that can be requested from Apple to enable this functionality?
In case this feature is not supported, are there alternative approaches to achieve a similar discreet activation mechanism?
If this is something that requires special permission or a process, could you please guide me on how to proceed?
I understand that maintaining user privacy and security is a priority for iOS, and I want to ensure that any implementation fully complies with Apple's guidelines.
Thanks in advance for your help!
Hi,
for the implementation of an audio player with signed URLs, I need to be able to set an authorization header on the request for an AVURLAsset.
This works, but not over AirPlay when trying to stream multiple songs in a queue.
For each item I do:
let headerFields: [String: String] = ["Authorization": getIdToken()!]
super.init(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])
But only the first two songs in the queue actually get this authorization header sent along; somehow it is removed for subsequent songs.
Any ideas on how I can fix this?
thanks,
Thomas
Hello,
I apologize if the answer is obvious but I'm having a hard time figuring this one out.
Let's say the user taps an "Edit" button in my LockedCameraCaptureSession. The extension calls:
activity.userInfo = ["ActivityKey": "ID"]
try await session.openApplication(for: activity)
Can I retrieve, in my application, the data stored in activity.userInfo (let's say, a flag to open the editor), or is data passing handled exclusively via the appContext of CameraCaptureIntent?
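In case it helps frame the question, here is where I would expect to receive it on the app side if the userInfo does survive the handoff (a sketch; I haven't confirmed the payload actually arrives, and I'm assuming the activity type constant as I understand it from the LockedCameraCapture documentation):

import SwiftUI
import LockedCameraCapture

struct ContentView: View {
    @State private var editorRequested = false

    var body: some View {
        Text(editorRequested ? "Editor requested" : "Waiting")
            // Activities from LockedCameraCaptureSession.openApplication(for:)
            // should arrive with this activity type.
            .onContinueUserActivity(NSUserActivityTypeLockedCameraCapture) { activity in
                // The open question: is userInfo set in the extension still present here?
                if activity.userInfo?["ActivityKey"] as? String == "ID" {
                    editorRequested = true
                }
            }
    }
}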
Thank you!