Hi all,
I am having a mysterious problem trying to load a user LaunchAgent under Big Sur. It is the .plist for gniemetz's automount.sh (https://github.com/gniemetz/automount),
which mounts SMB shares using password access from the Keychain.
Placed the .sh into /usr/local/bin, chmod 644 and chown user:staff
Placed the LaunchAgent .plist into ~/Library/LaunchAgents (created LaunchAgents as it didn't exist), same chmod/chown.
drwxr-xr-x  3  users    96 Nov  1 22:13 LaunchAgents
~/Library/LaunchAgents
-rw-r--r--  1  users  1038 Nov  1 22:13 it.niemetz.automount.plist
/usr/local
drwxr-xr-x  4 root  wheel    128 Nov  1 21:52 bin
/usr/local/bin
-rwxr-xr-x  1 root  wheel  30310 Oct 29 21:58 automount.sh
Loading it with launchctl load -w then gives the following:
Load failed: 5: Input/output error
For the life of me, I cannot find anywhere what this means...
launchctl start ~/Library/LaunchAgents/it.niemetz.automount.plist
completes with no errors, and the plist syntax also parses OK:
/Users//Library/LaunchAgents/it.niemetz.automount.plist: OK
I have added Terminal and /bin/bash to Full Disk Access under Security...
Launching the script manually as /usr/local/bin/automount.sh works fine.
Console / system.log shows this when load -w is run:
00:27:14 mac-mini-Big-Sur com.apple.xpc.launchd[1] (com.apple.xpc.launchd.user.domain.1000002.100006.Aqua): entering bootstrap mode
Nov	3 00:27:14 mac-mini-Big-Sur com.apple.xpc.launchd[1] (com.apple.xpc.launchd.user.domain.1000002.100006.Aqua): exiting bootstrap mode
For easy reference, the .plist is pasted at the end.
Anyone seen this error before?
Thanks!
++
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>it.niemetz.automount</string>
    <key>LimitLoadToSessionType</key>
    <string>Aqua</string>
    <key>RunAtLoad</key>
    <true/>
    <key>WatchPaths</key>
    <array>
        <string>/etc/resolv.conf</string>
        <string>/Library/Preferences/SystemConfiguration/NetworkInterfaces.plist</string>
        <string>/Library/Preferences/SystemConfiguration/com.apple.airport.preferences.plist</string>
    </array>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/automount.sh</string>
        <string>--mountall</string>
    </array>
</dict>
</plist>
This week I’m handling a DTS incident from a developer who wants to escalate privileges in their app. This is a tricky problem. Over the years I’ve explained aspects of this both here on DevForums and in numerous DTS incidents. Rather than do that again, I figured I’d collect my thoughts into one place and share them here.
If you have questions or comments, please start a new thread with an appropriate tag (Service Management or XPC are the most likely candidates here) in the App & System Services > Core OS topic area.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
BSD Privilege Escalation on macOS
macOS has multiple privilege models. Some of these were inherited from its ancestor platforms. For example, Mach messaging has a capability-based privilege model. Others were introduced by Apple to address specific user scenarios. For example, macOS 10.14 and later have mandatory access control (MAC), as discussed in On File System Permissions.
One of the most important privilege models is the one inherited from BSD. This is the classic users and groups model. Many subsystems within macOS, especially those with a BSD heritage, use this model. For example, a packet tracing tool must open a BPF device, /dev/bpf*, and that requires root privileges. Specifically, the process that calls open must have an effective user ID of 0, that is, the root user. That process is said to be running as root, and escalating BSD privileges is the act of getting code to run as root.
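As a concrete illustration, here's a minimal Swift sketch of that check; the device path and error handling are simplified, and a real tool would try /dev/bpf0, /dev/bpf1, and so on until one opens.

import Darwin

// Minimal sketch: opening a BPF device only succeeds when the effective
// user ID is 0, that is, when the process is running as root.
if geteuid() == 0 {
    let fd = open("/dev/bpf0", O_RDWR)
    if fd >= 0 {
        // … configure the interface and start reading packets …
        close(fd)
    } else {
        perror("open /dev/bpf0")
    }
} else {
    print("Not running as root; effective UID is \(geteuid()).")
}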
IMPORTANT Escalating privileges does not bypass all privilege restrictions. For example, MAC applies to all processes, including those running as root. Indeed, running as root can make things harder because TCC will not display UI when a launchd daemon trips over a MAC restriction.
Escalating privileges on macOS is not straightforward. There are many different ways to do this, each with its own pros and cons. The best approach depends on your specific circumstances.
Note If you find operations where a root privilege restriction doesn’t make sense, feel free to file a bug requesting that it be lifted. This is not without precedent. For example, in macOS 10.2 (yes, back in 2002!) we made it possible to implement ICMP (ping) without root privileges. And in macOS 10.14 we removed the restriction on binding to low-number ports (r. 17427890). Nice!
Decide on One-Shot vs Ongoing Privileges
To start, decide whether you want one-shot or ongoing privileges. For one-shot privileges, the user authorises the operation, you perform it, and that’s that. For example, if you’re creating an un-installer for your product, one-shot privileges make sense because, once it’s done, your code is no longer present on the user’s system.
In contrast, for ongoing privileges the user authorises the installation of a launchd daemon. This code always runs as root and thus can perform privileged operations at any time.
Folks often ask for one-shot privileges but really need ongoing privileges. A classic example of this is a custom installer. In many cases installation isn’t a one-shot operation. Rather, the installer includes a software update mechanism that needs ongoing privileges. If that’s the case, there’s no point dealing with one-shot privileges at all. Just get ongoing privileges and treat your initial operation as a special case within that.
Keep in mind that you can convert one-shot privileges to ongoing privileges by installing a launchd daemon.
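For orientation, here's a hedged sketch of the kind of launchd property list such a daemon might use; the label, executable path, and Mach service name are hypothetical placeholders, and the file would be installed in /Library/LaunchDaemons.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>Program</key>
    <string>/Library/PrivilegedHelperTools/com.example.mydaemon</string>
    <key>MachServices</key>
    <dict>
        <key>com.example.mydaemon</key>
        <true/>
    </dict>
</dict>
</plist>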
Just Because You Can, Doesn’t Mean You Should
Ongoing privileges represent an obvious security risk. Your daemon can perform an operation, but how does it know whether it should perform that operation?
There are two common ways to authorise operations:
Authorise the user
Authorise the client
To authorise the user, use Authorization Services. For a specific example of this, look at the EvenBetterAuthorizationSample sample code.
Note This sample hasn’t been updated in a while (sorry!) and it’s ironic that one of the things it demonstrates, opening a low-number port, no longer requires root privileges. However, the core concepts demonstrated by the sample are still valid.
The packet trace example from above is a situation where authorising the user with Authorization Services makes perfect sense. By default you might want your privileged helper tool to allow any user to run a packet trace. However, your code might be running on a Mac in a managed environment, where the site admin wants to restrict this to just admin users, or just a specific group of users. A custom authorisation right gives the site admin the flexibility to configure authorisation exactly as they want.
Authorising the client is a relatively new idea. It assumes that some process is using XPC to request that the daemon perform a privileged operation. In that case, the daemon can use XPC facilities to ensure that only certain processes can make such a request.
Doing this securely is a challenge. For specific API advice, see this post.
WARNING This authorisation is based on the code signature of the process’s main executable. If the process loads plug-ins [1], the daemon can’t tell the difference between a request coming from the main executable and a request coming from a plug-in.
[1] I’m talking in-process plug-ins here. Plug-ins that run in their own process, such as those managed by ExtensionKit, aren’t a concern.
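As one possible shape for this check (a sketch, not a complete daemon), on macOS 13 and later an NSXPCListener delegate can attach a code signing requirement to each incoming connection; the protocol, class names, and requirement string below are hypothetical placeholders.

import Foundation

@objc protocol MyDaemonProtocol {
    func doPrivilegedWork(reply: @escaping (Bool) -> Void)
}

final class MyDaemon: NSObject, MyDaemonProtocol {
    func doPrivilegedWork(reply: @escaping (Bool) -> Void) { reply(true) }
}

final class DaemonListenerDelegate: NSObject, NSXPCListenerDelegate {
    func listener(_ listener: NSXPCListener,
                  shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
        // Only peers whose main executable satisfies this requirement can
        // actually send messages over the connection.
        newConnection.setCodeSigningRequirement(
            "anchor apple generic and identifier \"com.example.client\""
        )
        newConnection.exportedInterface = NSXPCInterface(with: MyDaemonProtocol.self)
        newConnection.exportedObject = MyDaemon()
        newConnection.resume()
        return true
    }
}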
Choose an Approach
There are (at least) eight different ways to run with root privileges on macOS:
A setuid-root executable
The sudo command
AppleScript’s do shell script command, passing true to the administrator privileges parameter
The AuthorizationExecuteWithPrivileges routine, deprecated since macOS 10.7
The SMJobSubmit routine targeting the kSMDomainSystemLaunchd domain, deprecated since macOS 10.10
The SMJobBless routine, deprecated since macOS 13
An installer package (.pkg)
The SMAppService class, a much-needed enhancement to the Service Management framework introduced in macOS 13
Note There’s one additional approach: The privileged file operation feature in NSWorkspace. I’ve not listed it here because it doesn’t let you run arbitrary code with root privileges. It does, however, have one critical benefit: It’s supported in sandboxed apps. See this post for a bunch of hints and tips.
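To make that note concrete, here's a minimal sketch of the privileged file operation flow, assuming macOS 10.14 or later; the file paths are placeholders and error handling is reduced to prints.

import AppKit

NSWorkspace.shared.requestAuthorization(to: .replaceFile) { authorization, error in
    guard let authorization else {
        print("Authorization failed: \(String(describing: error))")
        return
    }
    // A FileManager created from the authorization performs its operations
    // with the privileges the user granted.
    let fm = FileManager(authorization: authorization)
    do {
        _ = try fm.replaceItemAt(
            URL(fileURLWithPath: "/Library/Example/config.plist"),
            withItemAt: URL(fileURLWithPath: "/tmp/config.plist")
        )
    } catch {
        print("Replace failed: \(error)")
    }
}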
To choose between them:
Do not use a setuid-root executable. Ever. It’s that simple! Doing that is creating a security vulnerability looking for an attacker to exploit it.
If you’re working interactively on the command line, use sudo.
IMPORTANT sudo is not appropriate to use as an API. While it may be possible to make this work under some circumstances, by the time you’re done you’ll have code that’s way more complicated than the alternatives.
If you’re building an ad hoc solution to distribute to a limited audience, and you need one-shot privileges, use either AuthorizationExecuteWithPrivileges or AppleScript.
While AuthorizationExecuteWithPrivileges still works, it’s been deprecated for many years. Do not use it in a widely distributed product.
The AppleScript approach works great from AppleScript, but you can also use it from native code using NSAppleScript. See the code snippet later in this post.
If you need one-shot privileges in a widely distributed product, consider using SMJobSubmit. While this is officially deprecated, it’s used by the very popular Sparkle update framework, and thus it’s unlikely to break without warning.
If you only need escalated privileges to install your product, consider using an installer package. That’s by far the easiest solution to this problem.
Keep in mind that an installer package can install a launchd daemon and thereby gain ongoing privileges.
If you need ongoing privileges but don’t want to ship an installer package, use SMAppService. If you need to deploy to older systems, use SMJobBless.
For instructions on using SMAppService, see Updating helper executables from earlier versions of macOS.
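As a quick sketch of the registration call (macOS 13 or later), assuming the app bundle embeds a daemon property list at Contents/Library/LaunchDaemons/com.example.daemon.plist; the plist name is a placeholder.

import ServiceManagement

let service = SMAppService.daemon(plistName: "com.example.daemon.plist")
do {
    // The user may need to approve the item in System Settings > Login Items.
    try service.register()
    print("Daemon status: \(service.status)")
} catch {
    print("Failed to register daemon: \(error)")
}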
For a comprehensive example of how to use SMJobBless, see the EvenBetterAuthorizationSample sample code. For the simplest possible example, see the SMJobBless sample code. That has a Python script to help you debug your setup. Unfortunately this hasn’t been updated in a while; see this thread for more.
Hints and Tips
I’m sure I’ll think of more of these as time goes by but, for the moment, let’s start with the big one…
Do not run GUI code as root. In some cases you can make this work but it’s not supported. Moreover, it’s not safe. The GUI frameworks are huge, and thus have a huge attack surface. If you run GUI code as root, you are opening yourself up to security vulnerabilities.
Appendix: Running an AppleScript from Native Code
Below is an example of running a shell script with elevated privileges using NSAppleScript.
WARNING This is not meant to be the final word in privilege escalation. Before using this, work through the steps above to see if it’s the right option for you.
Hint It probably isn’t!
let url: URL = … file URL for the script to execute …

let script = NSAppleScript(source: """
    on open (filePath)
        if class of filePath is not text then
            error "Expected a single file path argument."
        end if
        set shellScript to "exec " & quoted form of filePath
        do shell script shellScript with administrator privileges
    end open
    """)!

// Create the Apple event.
let event = NSAppleEventDescriptor(
    eventClass: AEEventClass(kCoreEventClass),
    eventID: AEEventID(kAEOpenDocuments),
    targetDescriptor: nil,
    returnID: AEReturnID(kAutoGenerateReturnID),
    transactionID: AETransactionID(kAnyTransactionID)
)

// Set up the direct object parameter to be a single string holding the
// path to our script.
let parameters = NSAppleEventDescriptor(string: url.path)
event.setDescriptor(parameters, forKeyword: AEKeyword(keyDirectObject))

// The `as NSAppleEventDescriptor?` is required due to a bug in the
// nullability annotation on this method’s result (r. 38702068).
var error: NSDictionary? = nil
guard let result = script.executeAppleEvent(event, error: &error) as NSAppleEventDescriptor? else {
    let code = (error?[NSAppleScript.errorNumber] as? Int) ?? 1
    let message = (error?[NSAppleScript.errorMessage] as? String) ?? "-"
    throw NSError(domain: "ShellScript", code: code, userInfo: [NSLocalizedDescriptionKey: message])
}
let scriptResult = result.stringValue ?? ""
Revision History
2024-11-15 Added info about SMJobSubmit. Made other minor editorial changes.
2024-07-29 Added a reference to the NSWorkspace privileged file operation feature. Made other minor editorial changes.
2022-06-22 First posted.
XPC is the preferred inter-process communication (IPC) mechanism on Apple platforms. XPC has three APIs:
The high-level NSXPCConnection API, for Objective-C and Swift
The low-level Swift API, introduced with macOS 14
The low-level C API, which, while callable from all languages, works best with C-based languages
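For orientation, here's a minimal client-side sketch of the high-level NSXPCConnection API from the list above; the service name and protocol are hypothetical placeholders.

import Foundation

@objc protocol CalculatorProtocol {
    func add(_ a: Int, to b: Int, reply: @escaping (Int) -> Void)
}

let connection = NSXPCConnection(serviceName: "com.example.CalculatorService")
connection.remoteObjectInterface = NSXPCInterface(with: CalculatorProtocol.self)
connection.resume()

let proxy = connection.remoteObjectProxyWithErrorHandler { error in
    print("XPC error: \(error)")
} as? CalculatorProtocol

proxy?.add(2, to: 3) { sum in
    print("2 + 3 = \(sum)")
}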
General:
DevForums tag: XPC
Creating XPC services documentation
NSXPCConnection class documentation
Low-level API documentation
XPC has extensive man pages — For the low-level API, start with the xpc man page; this is the original source for the XPC C API documentation and still contains titbits that you can’t find elsewhere. Also read the xpcservice.plist man page, which documents the property list format used by XPC services.
Daemons and Services Programming Guide archived documentation
WWDC 2012 Session 241 Cocoa Interprocess Communication with XPC — This is no longer available from the Apple Developer website )-:
Technote 2083 Daemons and Agents — It hasn’t been updated in… well… decades, but it’s still remarkably relevant.
TN3113 Testing and Debugging XPC Code With an Anonymous Listener
XPC and App-to-App Communication DevForums post
Validating Signature Of XPC Process DevForums post
Related tags include:
Inter-process communication, for other IPC mechanisms
Service Management, for installing and uninstalling Service Management login items, launchd agents, and launchd daemons
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
In my TestApp I run the following code to calculate every pixel of a bitmap concurrently:
private func generate() async {
    for x in 0 ..< bitmap.width {
        for y in 0 ..< bitmap.height {
            let result = await Task.detached(priority: .userInitiated) {
                return iterate(x, y)
            }.value
            displayResult(result)
        }
    }
}
This works and does not give any warnings or runtime issues.
After watching the WWDC talk "Visualize and optimize Swift concurrency", I used Instruments to visualize the tasks:
The number of active tasks continuously rises to 2740 and stays constant at this value even after all 64000 pixels have been calculated and displayed.
What am I doing wrong?
Hi, I work on a game for iOS and the framerate decreases progressively when the debugger is attached.
Running it for 2mins, it went from 30 to 1 FPS while rendering a simple static scene.
I narrowed it down to a call to dispatch_async_f which takes longer to execute over time.
clock_t t1 = clock();
dispatch_async_f(queue, context, function);
clock_t t2 = clock();
double duration = (double)(t2 - t1) / (double)CLOCKS_PER_SEC;
Documentation says dispatch_async_f is supposed to return immediately.
So what could explain the duration increasing in debug? Am I measuring this incorrectly?
The game is written in mixed C++ and ObjC. It uses Metal as graphic API and GCD for dispatching jobs.
I have Xcode 13.4.1 and test on an iPhone 13 Pro with iOS 15.7.
Thanks.
Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
Migrating to Swift 6 documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Swift Async Algorithms package
Swift Concurrency Proposal Index DevForum post
Matt Massicotte’s blog
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
Hi,
Although the following code works great on Windows and Linux (well, there is an OS layer stripped from the code), it hangs on macOS (pseudocode first):
create non-blocking socketpair(AF_UNIX, SOCK_STREAM, ...); + a couple of fcntl(fd, ..., flags | O_NONBLOCK)
spawn 128 pairs of threads (might be as few as 32, but then it needs several iterations to reproduce). Of course, there is an errno check to ensure there are no errors other than EWOULDBLOCK / EAGAIN
readers read a byte 10000 times: for (...) { while (read(fd[1]...) < 1) select(...); r++;}
writers write a byte 10000 times: for (...) { while (write(fd[0]...) < 1) select(...); w++;}
Join writers;
Join readers;
On Linux/Windows with the iteration count really cranked up, I'm getting a socket buffer overflow, so ::write returns EWOULDBLOCK; I then wait on the socket until it's ready and continue, and after joining both sets of threads I see that bytes read equals bytes written, everything fine.
However, on macOS I quickly end up in a strange deadlock where writers are waiting on ::select(..., &write_fds, ...) and readers on the corresponding ::select(..., &read_fds, ...).
I really have no idea how that could happen unless read/write is not thread-safe; however, the POSIX docs and man pages state that it is (at least reentrant).
Could anyone point me in the right direction?
Detailed code below:
std::atomic<int> bytes_written(0);
std::atomic<int> bytes_read(0);
static constexpr int k_packets = 10000;
static constexpr int k_threads = 32;

std::vector<std::thread> writers;
std::vector<std::thread> readers;
writers.reserve(k_threads);
readers.reserve(k_threads);

for (int i = 0; i < k_threads; ++i)
{
    writers.emplace_back([fd_write = fd[1], &bytes_written]()
    {
        char data = 'x';
        for (int i = 0; i < k_packets; ++i)
        {
            while (::write(fd_write, &data, 1) < 1)
            {
                fd_set writefds;
                FD_ZERO(&writefds);
                FD_SET(fd_write, &writefds);
                assert(errno == EAGAIN || errno == EWOULDBLOCK);
                int retval = ::select(fd_write + 1, nullptr, &writefds, nullptr, nullptr);
                if (retval < 1)
                    assert(errno == EAGAIN || errno == EWOULDBLOCK);
            }
            ++bytes_written;
        }
    });

    readers.emplace_back([fd_read = fd[0], &bytes_read]()
    {
        char data;
        for (int i = 0; i < k_packets; ++i)
        {
            while (::read(fd_read, &data, 1) < 1)
            {
                fd_set readfds;
                FD_ZERO(&readfds);
                FD_SET(fd_read, &readfds);
                assert(errno == EAGAIN || errno == EWOULDBLOCK);
                int retval = ::select(fd_read + 1, &readfds, nullptr, nullptr, nullptr);
                if (retval < 1)
                    assert(errno == EAGAIN || errno == EWOULDBLOCK);
            }
            ++bytes_read;
        }
    });
}

for (auto& t : writers)
    t.join();
for (auto& t : readers)
    t.join();

assert(bytes_written == bytes_read);
Hello community,
I am in search of a tutorial that comprehensively explains the proper utilization of SwiftData for updating model data in a background thread. From my understanding, there is extensive coverage on creating a model and loading model data into a view, likely due to Apple's detailed presentation on this aspect of SwiftData during WWDC23.
Nevertheless, I am encountering difficulties in finding a complete tutorial that addresses the correct usage of SwiftData for model updates in a background thread. While searching the web, I came across a few discussions on Stack Overflow and this forum that potentially provide an approach. However, they were either incomplete or proved ineffective in practical application.
I would greatly appreciate any links to tutorials that thoroughly cover this topic.
I am currently working on planning a multi-component software system that consists of an Audio Server Plugin and an application for user interaction. I have very little experience with IPC/XPC and its performance implications, so I hope I can find a little guidance here.
The Audio Server plugin publishes a number of multi-channel output devices on which it should perform computations and pass the result on to a different Core Audio device. My concerns here are:
Can the plugin directly access other CoreAudio devices for audio output or is this prohibited by the sandboxing? If it cannot, would relaying the audio data via XPC be a good idea in terms of low latency stability?
Can I use metal compute from within the Audio Server plugin? I have not found any information about metal related sandboxing entitlements. I am also concerned about performance implications as above.
Regarding the user interface application, I would like to know:
If a process that has not been started by launchd can communicate with the Audio Server plugin using XPC. If not, would a user agent instead of an app be a better choice? Or are there other communication channels that would work with sandboxing?
Thank you very much!
Andreas
CIFormat static vars such as RGBA16 give concurrency warnings:
Reference to static property 'RGBA16' is not concurrency-safe because it involves shared mutable state; this is an error in Swift 6
Should all these formats be static let to suppress the warnings (future errors)?
I run the following code in an actor:
func aaa() async throws -> Data {
    async let result = Task(
        operation: {
            // … decompressing data through try (data as NSData).decompressed(using: .lzfse) as Data
        }
    ).result
    switch await result {
    case .success(let value): return value
    case .failure(let error): throw error
    }
}
I do it this way because I do not want to block the actor with the decompression, and there is no state change in the actor afterwards. I would say that the actor plays no significant role here. The important point is that many (14) such tasks run in parallel, though NOT on the same data. It runs fine for a while (dozens/hundreds of buffers decompressed), and then the following happens:
Activity Monitor (the macOS GUI tool) shows almost no User CPU time and approx. 75% System CPU time; the rest is idle. (When it runs fine, User CPU time is 95+%.)
When I pause the run in Xcode (the release config behaves the same), all threads are in mach_msg2_trap:
#0 0x0000000180ac21f4 in mach_msg2_trap ()
#1 0x0000000180ad4b24 in mach_msg2_internal ()
#2 0x0000000180ac52fc in vm_copy ()
#3 0x0000000180916b78 in szone_realloc ()
#4 0x000000018093cfb0 in _malloc_zone_realloc ()
#5 0x000000018093d7e8 in _realloc ()
#6 0x0000000180bb8a10 in __CFSafelyReallocate ()
#7 0x0000000181d00e30 in _NSMutableDataGrowBytes ()
#8 0x0000000181ce2630 in -[NSConcreteMutableData appendBytes:length:] ()
#9 0x00000001823c30d8 in -[_NSDataCompressor processBytes:size:flags:] ()
#10 0x00000001823c32c4 in -[NSData(NSDataCompression) _produceDataWithCompressionOperation:algorithm:handler:] ()
#11 0x00000001823c3598 in -[NSData(NSDataCompression) _decompressedDataUsingCompressionAlgorithm:error:] ()
It looks like something is wrong with the safe reallocation; however, if this were a bug, then all of macOS would be affected.
Any idea, please?
Hello,
I'm currently rewriting my entire networking code to use Swift concurrency. I have a Swift package which wraps NWConnection with my custom framing protocol. Since the Network framework does not itself support concurrency, I built an API around it. To receive messages I use an AsyncThrowingStream, and it works like this:
let connection = MyNetworkFramework(host: "example.org")

Task {
    await connection.start()
    for try await result in connection.receive() {
        // do something with result
    }
}
That's pretty neat and I like it a lot, but now things got tricky. In my application I have up to 10 different TCP streams that I open to handle connection work. With my API change, every TCP connection runs in its own Task like above, and I have no idea how to handle the possible errors from the .receive() func inside those Tasks.
My first idea was to use a ThrowingTaskGroup, and I think that would work, but the biggest problem is that I initially start with, say, 4 TCP connections and need the ability to add more later if I need them, and it seems it's not possible to add a Task to a ThrowingTaskGroup afterwards.
So what's a good way to handle a case like that?
I have an actor which handles everything in its isolated context, and basically I just need the start func to throw if any of the Tasks I open up throws. Here is a basic sample of how it's structured.
Thanks Vinz
internal actor MultiConnector {
    internal var count: Int { connections.count }
    private var connections: [ConnectionsModel] = []
    private let host: String
    private let port: UInt16
    private let parameters: NWParameters

    internal init(host: String, port: UInt16, parameters: NWParameters) {
        self.host = host
        self.port = port
        self.parameters = parameters
    }

    internal func start(count: Int) async throws -> Void {
        guard connections.isEmpty else { return }
        guard count > .zero else { return }
        try await sockets(from: count)
    }

    internal func cancel() -> Void {
        guard !connections.isEmpty else { return }
        for connection in connections { connection.connection.cancel() }
        connections.removeAll()
    }

    internal func sockets(from count: Int) async throws -> Void {
        while connections.count < count { try await connect() }
    }
}
// MARK: - Private API -
private extension MultiConnector {
    private func connect() async throws -> Void {
        let uuid = UUID(), connection = MyNetworkFramework(host: host, port: port, parameters: parameters)
        connections.append(.init(id: uuid, connection: connection))
        let task = Task { [weak self] in guard let self else { return }; try await stream(connection: connection, id: uuid) }
        try await connection.start(); await connection.send(message: "Sample Message")
        // try await task.value <-- this does not work because the stream runs indefinitely until I cancel it (that's expected and intended, but I need to handle the case where the stream throws an error)
    }

    private func stream(connection: MyNetworkFramework, id: UUID) async throws -> Void {
        for try await result in connection.receive() {
            if case .message(_) = result { await connection.send(message: "Sample Message") }
            // ... more to handle
        }
    }
}
I'm working on a macOS application that deals with a few external dependencies that can only be compiled for intel (x86_64) but I want the app to run natively on both arm and x86_64.
One idea I have been playing with is to move the x86_64 dependencies into an XPC service compiled only for x86_64 and use that service for the Intel-only code. However, I can't figure out how to set up my project to compile everything at once...
Any ideas? Is this even possible? If not, I'm open to suggestions...
Thanks
I'm trying to reproduce a case when there are more dispatch queues than there are threads serving them. Is that a possible scenario?
I've been experimenting with the new low-level Swift API for XPC (XPCSession and XPCListener). The ability to send and receive Codable messages is an appealing alternative to making an @objc protocol in order to use NSXPCConnection from Swift — I can easily create an enum type whose cases map onto the protocol's methods.
But our current XPC code validates the incoming connection using techniques similar to those described in Quinn's "Apple Recommended" response to the "Validating Signature Of XPC Process" thread. I haven't been able to determine how to do this with XPCListener; neither the documentation nor the Swift interface have yielded any insight.
The Creating XPC Services article suggests using Xcode's XPC Service template, which contains this code:
let listener = try XPCListener(service: serviceName) { request in
    request.accept { message in
        performCalculation(with: message)
    }
}
The apparent intent is to inspect the incoming request and decide whether to accept it or reject it, but there aren't any properties on IncomingSessionRequest that would allow the service to make that decision. Ideally, there would be a way to evaluate a code signing requirement, or at least obtain the audit token of the requesting process.
(I did notice that a function xpc_listener_set_peer_code_signing_requirement was added in macOS 14.4, but it takes an xpc_listener_t argument and I can't tell whether XPCListener is bridged to that type.)
Am I missing something obvious, or is there a gap in the functionality of XPCListener and IncomingSessionRequest?
I have an application with a main process and some child processes. As we want those child processes to have their own minimal sandbox privileges, not inherited from the parent process, we plan to use an XPC service which uses NSTask to launch the child processes, so each child process can have its own sandbox privileges.
We plan to deliver the application to the Mac App Store, so the process model is: the sandboxed main process connects to the unsandboxed XPC service, and the unsandboxed XPC service launches the sandboxed child processes.
Can this process model pass Mac App Store review? I see there is a rule that all processes must be sandboxed, including XPC services. But testing locally, applications downloaded from the Mac App Store, like OneDrive, also launch unsandboxed XPC services.
Do you have any suggestions for my application scenario: sandboxed child processes having their own privileges, not inherited from the parent?
OSAllocatedUnfairLock has two different methods for executing a block of code with the lock held: withLock() and withLockUnchecked(). The only difference between the two is that the former has the closure and return type marked as Sendable. withLockUnchecked() has a comment (which, for reasons I do not understand, is not visible in the documentation) saying:
/// This method does not enforce sendability requirement
/// on closure body and its return type.
/// The caller of this method is responsible for ensuring references
/// to non-sendables from closure uphold the Sendability contract.
What I do not understand here is why Sendable conformance would be needed. These functions should call the block synchronously in the same context, and return the return value through a normal function return. None of this should require the types to be Sendable. This seems to be supported by this paragraph of the documentation for OSAllocatedUnfairLock:
/// This lock must be unlocked from the same thread that locked it. As such, it
/// is unsafe to use `lock()` / `unlock()` across an `await` suspension point.
/// Instead, use `withLock` to enforce that the lock is only held within
/// a synchronous scope.
So, why does this Sendable requirement exist, and in practice, if I did want to use withLockUnchecked, how would I "uphold the Sendability contract"?
To summarise the question in a more concise way: Is there an example where using withLockUnchecked() would actually cause a problem, and using withLock() instead would catch that problem?
Hello Apple Developer Community,
I'm encountering an issue with my macOS application where I'm receiving the following error message:
Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.FxPlugTestXPC was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.FxPlugTestXPC was invalidated: failed at lookup with error 159 - Sandbox restriction.}
This error occurs when my application tries to establish a connection to an XPC service named com.FxPlugTestXPC. It appears to be related to a sandbox restriction, but I'm unsure how to resolve it.
I've checked the sandboxing entitlements and ensured that the necessary permissions are in place. However, the issue persists.
Has anyone encountered a similar error before? If so, could you please provide guidance on how to troubleshoot and resolve this issue?
Any help or insights would be greatly appreciated.
Thank you.
Here are some photos of my entitlements:
I noticed a problem while writing a program using XPC on macOS.
When I write it in the form of a closure that receives the result of an XPC call, the reply never arrives.
I added an XPC target in Xcode, and its sample code uses the closure-passing style, so can't I use closure passing with XPC?
My Environment:
Xcode 15.3
macOS 14.4.1
caller (closure version)
struct ContentView: View {
    @State var callbackResult: String = "Waiting…"
    var body: some View {
        Form {
            Section("Run XPC Call with no argument and no return value using callback") {
                Button("Run…") {
                    callbackResult = "Running…"
                    let service = NSXPCConnection(serviceName: "net.mtgto.example-nsxpc-throws-error.ExampleXpc")
                    service.remoteObjectInterface = NSXPCInterface(with: ExampleXpcProtocol.self)
                    service.activate()
                    guard let proxy = service.remoteObjectProxy as? any ExampleXpcProtocol else { return }
                    defer {
                        service.invalidate()
                    }
                    proxy.performCallback {
                        callbackResult = "Done"
                    }
                }
                Text(callbackResult)
                ...
            }
        }
    }
}
callee (closure version)
@objc protocol ExampleXpcProtocol {
    func performCallback(with reply: @escaping () -> Void)
}

class ExampleXpc: NSObject, ExampleXpcProtocol {
    @objc func performCallback(with reply: @escaping () -> Void) {
        reply()
    }
}
I found that this problem can be solved by receiving the reply asynchronously using Swift concurrency.
caller (async version)
struct ContentView: View {
    @State var simpleAsyncResult: String = "Waiting…"
    var body: some View {
        Form {
            Section("Run XPC Call with no argument and no return value using callback") {
                Button("Run…") {
                    simpleAsyncResult = "Running…"
                    Task {
                        let service = NSXPCConnection(serviceName: "net.mtgto.example-nsxpc-throws-error.ExampleXpc")
                        service.remoteObjectInterface = NSXPCInterface(with: ExampleXpcProtocol.self)
                        service.activate()
                        guard let proxy = service.remoteObjectProxy as? any ExampleXpcProtocol else { return }
                        defer {
                            service.invalidate()
                        }
                        await proxy.performNothingAsync()
                        simpleAsyncResult = "DONE"
                    }
                }
                Text(simpleAsyncResult)
                ...
            }
        }
    }
}
callee (async version)
@objc protocol ExampleXpcProtocol {
    func performNothingAsync() async
}

class ExampleXpc: NSObject, ExampleXpcProtocol {
    @objc func performNothingAsync() async {}
}
To simplify matters, I wrote sample code that omits the arguments and return value, but even then the reply is not received when using the callback style.
All sample code is available at
https://github.com/mtgto/example-nsxpc-throws-error
Hi, I'm just trying to learn how to work with MainActor. I need to analyze user data with an API service in the background. Whenever the user saves a post into SwiftData, I need to analyze that post asynchronously. Here is my current code, which by the way works, but I am getting a warning:
actor DatabaseInteractor {
    let networkInteractor: any NetworkInteractor = NetworkInteractorImpl()

    func loadUserProfile() async -> String {
        do {
            let objects = try await modelContainer.mainContext.fetch(FetchDescriptor<ProfileSwiftData>())
            if let profileTest = objects.first?.profile {
                return profileTest
            }
        } catch {
        }
        return ""
    }
}
I get a warning on the let objects line:
Warning: Non-sendable type 'ModelContext' in implicitly asynchronous access to main actor-isolated property 'mainContext' cannot cross actor boundary