I'm the developer of a small utility for Mac called "MusicDeviceHost".
https://apps.apple.com/us/app/musicdevicehost/id1261046263?mt=12
As the name suggests, it is a host application for audio units (music device components).
See also "Using Sound Canvas VA with QMidi":
https://youtu.be/F9C4BiBR
A problem occurs while trying to authorize the "Sound Canvas VA" component: Roland Cloud Manager (v3.0.3) returns the following error:
“Authorization Error - RM Service not connected
Error Connecting to Roland Cloud Manager Service”
I suspect the error is caused by some permission being denied to the sandboxed version of the application; the non-sandboxed version of MDH works flawlessly.
I am using the following entitlements:
com.apple.security.app-sandbox
com.apple.security.network.client
So connecting to the service should work, because "com.apple.security.network.client" is enabled.
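For reference, the two entitlements above correspond to entries like the following in the app's .entitlements file (a minimal sketch). One caveat worth noting: com.apple.security.network.client only covers outgoing network sockets; if Roland's Cloud Manager service is reached through a local Mach/XPC connection rather than a socket, the sandbox would still deny the lookup, which could explain why only the sandboxed build fails.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- App Sandbox enabled -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Outgoing network connections (client) -->
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
```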
At Roland, they say:
"Cloud Manager isn't supported in a sandboxed environment."
But as far as I can see, MainStage and other sandboxed apps work fine...
So what is the right answer? Is there someone out there with the same issue? Thanks for helping :)
AudioUnit
Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.
Posts under AudioUnit tag (29 posts)
We developed an app on macOS that needs to record audio data via AudioUnit. But if the user chooses the "Voice Isolation" microphone mode, all of the high-frequency audio data is lost. We tried, but found that the system no longer delivers the original audio data. Can anybody help?
I'm developing a sandboxed application with Xcode which allows the user to open and work with Audio Unit plugins. Working with a beta tester who has a lot of AUs on his laptop running macOS 12.5.1, we encountered some weird crashes while opening certain plugins (Krotos, Flux Audio, Sound Toys, etc.). The message we got was in French; I have translated it, so the original English version could be slightly different:
Impossible to open “NSCreateObjectFileImageFromMemory-p47UEwps” because the developer cannot be verified.
After this first warning, a Fatal Error 100001 message opens and the plugin seems to have crashed (but not the host).
I easily found some music application users encountering similar issues on the web. From what I read, this error is related to new security rules introduced in macOS 12. Indeed, some of these plugins work normally when tested on an older system. I also read that some (insecure) Hardened Runtime entitlements should be able to fix this issue, especially the Allow Unsigned Executable Memory Entitlement, whose documentation says:
In rare cases, an app might need to override or patch C code, use the long-deprecated NSCreateObjectFileImageFromMemory (which is fundamentally insecure), or use the DVDPlayback framework. Add the Allow Unsigned Executable Memory Entitlement to enable these use cases. Otherwise, the app might crash or behave in unexpected ways.
Unfortunately, checking this option didn't fix the issue. So what I tried next was to add Disable Executable Memory Protection (with no more success), and finally Allow DYLD Environment Variables and Allow Execution of JIT-compiled Code: none of them solved my problem.
I really don't see what else to do, while I'm sure a solution exists, because the same plugins work perfectly in other applications (Logic Pro, Ableton Live). Any help would be greatly appreciated. Thanks!
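For reference, the hardened-runtime options discussed above map onto the entitlement keys sketched below (these are not a recommendation; each one weakens the hardened runtime). One key not mentioned in the post, but commonly required by hosts that load third-party plugins, is Disable Library Validation, which permits loading code signed by other development teams.

```xml
<!-- Allow Unsigned Executable Memory -->
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<!-- Disable Executable Memory Protection -->
<key>com.apple.security.cs.disable-executable-page-protection</key>
<true/>
<!-- Allow DYLD Environment Variables -->
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
<!-- Allow Execution of JIT-compiled Code -->
<key>com.apple.security.cs.allow-jit</key>
<true/>
<!-- Disable Library Validation: often needed by hosts that load
     plugins signed by other developers -->
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
```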
I am developing a multi-threaded instrument plugin (Audio Unit v2).
This post is about a software synthesizer that has been proven to work on Intel Macs and has been ported to run natively on Apple Silicon.
I have a problem when I use Logic Pro on Apple Silicon Macs. Steps to reproduce:
1. Insert the created software synthesizer on an instrument track.
2. Make sure no tracks other than the one you created exist.
3. Put the track in recording mode.
When the above steps are followed, the performance meter in Logic Pro shows the load concentrated on one specific core, far exceeding the total load seen when the load is distributed across cores.
This load occurs continuously, and it is resolved when another track is created and selected.
It is understandable, by design, that the load is concentrated on a particular core; however, the magnitude of the load is abnormal.
In fact, when the peak exceeds 100%, audible noise is produced.
Also, in this case, Activity Monitor (included with macOS) does not show any increase in the usage of a specific CPU core.
Also, the Time Profiler included with Xcode did not identify any location that took a large amount of time.
We examined various experimental programs and found a positive correlation between the frequency of thread switches in the multi-threaded area and the peak of this CPU spike.
A mutex is used for the thread switches.
In summary, we speculate that performance gets worse when multi-threaded processing is scheduled on a single core.
Is there any solution to this problem, either at the developer level or at the level of a Logic Pro customer?
Environment where the symptom occurs:
MacBook Pro 16-inch, 2021
CPU: Apple M1 Max
OS: macOS 12.6.3
Memory: 32 GB
Logic Pro 10.7.9
Built-in speaker
Audio buffer size: 32 samples
[Screenshot: performance meter before the symptom occurred]
[Screenshot: performance meter showing the symptom after entering the recording condition]
Hi community
I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation.
The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker).
I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same.
Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
Back at the start of January this year, we filed a bug report that still hasn't been acknowledged. Before we file a code-level support ticket, we would love to know if anyone else out there has experienced anything similar.
We have read the documentation (repeatedly) and searched the web, and have still found no solution; this issue looks like it could be a bug in the system (or in our code) rather than proper behaviour.
The app is a host for a v3 audio unit which itself is a workstation that can host other audio units. The host and audio unit are both working well in all other areas other than this issue.
Note: this is not running on Catalyst; it is a native macOS app (not iOS).
The problem is that when an AUv3 is hosted out of process (on the Mac) and then goes to fetch a list of all the other available installed audio units, the list that is returned from the system does not contain any of the other v3 audio units in the system. It only contains v2.
We see this issue if we load our AU out of process in our own bespoke host, and also when it loads into Logic Pro which gives no choice but to load out of process.
This means that, as it stands at the moment, when we release the app our users will have limited functionality in Logic Pro, and possibly by then, other hosts too.
In our own host we can load our hosting AU in-process, and then it can find and use all the available units, both v2 and v3. So no issue there. But sadly, when it is loaded into the only other host that can do anything like this (Logic Pro, at the time of posting), it won't be able to use v3 units, which is quite a serious limitation.
SUMMARY
v3 Audio Unit being hosted out of process.
Audio unit fetches the list of available audio units on the system.
v3 audio units are not provided in the list. Only v2 are presented.
EXPECTED
In some ways this seems to be the opposite of the behaviour we would expect.
We would expect to see and host ALL the other units that are installed on the system.
“Out of process” suggests the safer of the two options, so this looks like it could be related to some kind of sandboxing issue. But sadly we cannot work out a solution, hence this report.
Is Quinn “The Eskimo!” still around? Can you help us out here?
Hi,
I'm having trouble saving user presets in an Audio Unit plugin. Saving the user presets from the host works well, but I get an error when trying to save them from the plugin.
I'm not using a parameter tree, but instead using the fullState's getter and setter for saving and retrieving a dictionary with the state.
With some simplified parameters it looks something like this:
var gain: Double = 0.0
var frequency: Double = 440.0

private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}
This works perfectly well for storing user presets when they are saved from the host. When trying to save them from the plugin, to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM.
I could not find any documentation on what the missing key is or how I can get around this. Any ideas?
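One plausible cause (an assumption, not something confirmed by documentation): the dictionary returned by the base AUAudioUnit's fullState carries registration keys describing the unit, and a host may refuse a preset whose state map lacks them. Returning only ["myPresetKey": currentState] discards everything the base class provides. A sketch of the merge logic on plain dictionaries follows; in a real override, baseState would be super.fullState, and myPresetKey is the custom key from the code above.

```swift
// Sketch: merge the custom preset state into the base state instead of
// replacing it. `baseState` stands in for `super.fullState` (assumption:
// it carries keys that hosts expect to find in the preset state map).
func encodeFullState(base baseState: [String: Any],
                     gain: Double,
                     frequency: Double) -> [String: Any] {
    var state = baseState                 // keep the base keys intact
    let custom: [String: Any] = ["gain": gain, "frequency": frequency]
    state["myPresetKey"] = custom         // nest the custom values
    return state
}

func decodeFullState(_ state: [String: Any]?) -> (gain: Double, frequency: Double) {
    let custom = state?["myPresetKey"] as? [String: Any] ?? [:]
    return (custom["gain"] as? Double ?? 0.0,       // defaults from the post
            custom["frequency"] as? Double ?? 440.0)
}
```

In the AUAudioUnit subclass, the getter would then return encodeFullState(base: super.fullState ?? [:], gain: gain, frequency: frequency), and the setter would read the custom sub-dictionary while forwarding the rest of the map to super.fullState.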
Hi,
This topic is about Workgroups.
I create child processes, and I'd like to communicate an os_workgroup_t to each child process so that it can join the work group as well.
As far as I understand, the os_workgroup_t value is local to the process.
I've found that one can use os_workgroup_copy_port() and os_workgroup_create_with_port(), but I'm not familiar with Mach ports at all, and I wonder what the minimal effort to achieve this would be.
Thank you very much!
Alex
I’m developing a voice communication app for the iPad, with both playback and recording, using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation.
When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or after switching to Remote I/O and then back to VPIO, the playback volume is low.
It seems like a bug in iOS. Is there any solution or workaround? Searching the net, I only found this post, which has no solution: https://developer.apple.com/forums/thread/671836