Prioritize user privacy and data security in your app. Discuss best practices for data handling, user consent, and security measures to protect user information.

Post

Replies

Boosts

Views

Activity

Unclear behaviour of the Local Network Privacy feature on macOS Sequoia
Starting with the macOS Sequoia beta, a new "Local Network Privacy" feature was introduced, which had previously been present in iOS. Unfortunately, there is little information about this topic on the Apple developer website, especially for the macOS platform. I conducted some experiments to understand the new feature, but was confused by the results. Firstly, I noticed that the type of application accessing the local network matters on macOS: bundled versus command-line (CLI) applications. The TCC subsystem does not restrict local network access for CLI applications at all, regardless of how they are launched, whether as a launchd daemon with root privileges or through a terminal with standard user privileges. At the same time, local network access for bundled applications is controlled by the TCC subsystem in most cases. Upon the first request, the system displays an alert to the user explaining the purpose of using the local network. Communication with local network devices is then allowed or restricted depending on whether consent has been granted or revoked. Secondly, it's worth noting that if a bundled application supports CLI mode (launched through the terminal without a GUI), it can access the local network in that mode regardless of the "Local Network Access" consent state, provided consent has been granted at least once. However, if the same application runs in GUI mode, its access to the local network is limited by the current consent. Is this behaviour correct and likely to remain the same in future releases of macOS Sequoia? Or is something described here incorrect that will be fixed in upcoming betas? I have also filed FB14581221 on this topic with the results of my experiments.
4
0
1k
Jul ’24
How does homomorphic encryption usage affect privacy labels?
If I encrypt user data with Apple's newly released homomorphic encryption package and send it to servers I control for analysis, how would that affect the privacy label for that app? E.g. If my app collected usage data plus identifiers, then sent it for collection and analysis, would I be allowed to say that we don't collect information linked to the user? Does it also automatically exclude the relevant fields from the "Data used to track you" section? Is it possible to make even things that were once considered inextricably tied to a user identity (e.g. purchases in an in-app marketplace) something not linked, according to Apple's rules? How would I prove to Apple that the relevant information is indeed homomorphically encrypted?
0
0
332
Jul ’24
Can developers know if App Lock (Require Passcode) has been enabled for my app in iOS 18?
My app already has its own app lock system, which includes text and biometric combinations. iOS 18 has now introduced a passcode lock for every app. If a user has enabled the iOS-provided app lock, we (the developer) want to inform them of that and ask whether they also want to enable the app-specific lock. For this, developers need a way to know whether the iOS-provided app lock is enabled for their app. -Rajdurai
1
0
280
Jul ’24
[iOS 18 Beta] isCaptured returns false in iOS 18 Beta
The UIWindowScene.screen.isCaptured property returned false when a screen recording was already in progress and the app was then launched, on iOS 18 beta 4. I also tried UIScreen.main.isCaptured (which also returned false) and the sceneCaptureState trait on UITraitCollection (which returned .inactive). The flag returned true if the app was already open in the foreground, or was put into the background, when the screen recording started. Is this a bug in iOS 18 beta, or is there another API I should use on iOS 18?
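For reference, a minimal sketch of reading and observing capture state with the iOS 17+ trait API alongside the older isCaptured flag; the class and method names here are illustrative, and this does not change the beta behaviour described above.

import UIKit

final class CaptureAwareViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Check the current state once, e.g. right after launch.
        logCaptureState()

        // Observe changes to the scene-capture trait (iOS 17+).
        registerForTraitChanges([UITraitSceneCaptureState.self]) { (self: Self, _: UITraitCollection) in
            self.logCaptureState()
        }
    }

    private func logCaptureState() {
        let traitState = traitCollection.sceneCaptureState            // .active / .inactive
        let legacyFlag = view.window?.windowScene?.screen.isCaptured ?? false
        print("sceneCaptureState: \(traitState), isCaptured: \(legacyFlag)")
    }
}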
7
0
525
Jul ’24
handleNewFlow of NEAppProxyProvider subclass isn't called
Hi! I am experimenting with NEAppProxyProvider (I just want to see the differences between this and NETransparentProxyProvider in action). I have subclassed it in my system extension, and it seems to reach the startProxy point because I see the corresponding logs. I didn't forget to call the completion handler. However, I do not see logs about flow handling. Can you suggest why? Posting the extension source code just in case.

import Foundation
import NetworkExtension
import OSLog

class AppProxyProvider: NEAppProxyProvider {

    override func startProxy(options: [String : Any]? = nil, completionHandler: @escaping (Error?) -> Void) {
        Logger.appProxyProviderSystExt.warning("Starting NEAppProxy")
        setTunnelNetworkSettings(configureProxy()) { error in
            if let error {
                Logger.appProxyProviderSystExt.warning("\(#function) Unable to set settings for NEAppProxy syst ext")
                completionHandler(error)
                return
            }
            completionHandler(nil)
        }
    }

    override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
        Logger.appProxyProviderSystExt.warning("Handling flow")
        return false
    }

    override func stopProxy(with reason: NEProviderStopReason, completionHandler: @escaping () -> Void) {
        Logger.appProxyProviderSystExt.warning("Stopping NEAppProxy")
        completionHandler()
    }

    override func handleAppMessage(_ messageData: Data, completionHandler: ((Data?) -> Void)? = nil) {
        completionHandler?(nil)
    }

    private func configureProxy() -> NETunnelNetworkSettings {
        let settings = NETunnelNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
        return settings
    }
}

Am I missing something in configuration?
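As an aside for the comparison with NETransparentProxyProvider: that provider only receives flows in handleNewFlow that match the includedNetworkRules set on its NETransparentProxyNetworkSettings, so when comparing the two providers it may help to set those rules explicitly. A minimal sketch with illustrative values:

import NetworkExtension

// Inside a transparent proxy's startProxy: claim all outbound TCP flows.
// The address and rule values here are illustrative; adjust them to the
// traffic you actually want delivered to handleNewFlow.
let transparentSettings = NETransparentProxyNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
let allOutboundTCP = NENetworkRule(remoteNetwork: nil,
                                   remotePrefix: 0,
                                   localNetwork: nil,
                                   localPrefix: 0,
                                   protocol: .TCP,
                                   direction: .outbound)
transparentSettings.includedNetworkRules = [allOutboundTCP]
// setTunnelNetworkSettings(transparentSettings) { error in ... }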
1
0
351
Jul ’24
Anti Virus for macOS Sequoia
I am currently running the beta version of macOS Sequoia on my MacBook Pro. Is there any approved or recommended antivirus software I can install on this MacBook? I would greatly appreciate it if anyone could point me towards some resources for this. Thanks!
1
0
356
Jul ’24
Changing the ACL for a private key item in the System keychain
Hello, I am having trouble changing the ACL for a private key item my app saves to the system keychain. I want to restrict access to the key so that only my app can use the private key, not all applications; applications that try to access it should be prompted for an administrator password. When I save the item as a private key, I get: [screenshot] What I want: [screenshot] (Note: in the screenshot I put a random binary, but obviously this should be my app.) I am using Rust bindings to the Security framework, but an answer in Swift would suffice. I am really stuck, so any help would be greatly appreciated.

let key_options = GenerateKeyOptions::default()
    .set_key_type(KeyType::ec())
    .set_token(Token::Software)
    .to_dictionary();
let key = SecKey::generate(key_options)
    .map_err(|e| anyhow!("Could not generate a private key: {}", e))?;
let sys_keychain = mac::system_keychain()?;
let value = ItemAddValue::Ref(AddRef::Key(key.clone()));
let options = ItemAddOptions::new(value)
    .set_label(format!("{}.{}", SERVICE, label))
    .set_location(Location::FileKeychain(sys_keychain))
    .set_access_group(ACCESS_GROUP)
    .to_dictionary();
item::add_item(options)
    .map_err(|e| anyhow!("Failed to add key item to keychain: {}", e))?;
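As one possible direction in Swift, here is a minimal sketch using the legacy file-keychain ACL API (SecAccess, deprecated but still what governs ACLs in the system keychain): passing nil as the trusted-application list creates an access object that trusts only the app creating the item. The label is illustrative, and this sketch does not implement the administrator-password prompt for other apps.

import Foundation
import Security

// Create an access object whose ACL trusts only the calling application.
func makeAppOnlyAccess(label: String) throws -> SecAccess {
    var access: SecAccess?
    let status = SecAccessCreate(label as CFString, nil, &access)
    guard status == errSecSuccess, let access else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return access
}

// When adding the key item (file-based keychains only), attach the access object:
// var attributes = itemAddAttributes            // the add dictionary built as above
// attributes[kSecAttrAccess as String] = try makeAppOnlyAccess(label: "com.example.mykey")
// let status = SecItemAdd(attributes as CFDictionary, nil)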
1
0
185
Jul ’24
Security Audit Thoughts
I regularly see questions, both here on DevForums and via DTS code-level support requests, from developers who are working with a security auditor. This is a tricky topic, and I’m using this post to collect my thoughts on it. If you have questions or comments, please start a new thread. Put it in Privacy & Security > General and tag it with Security; that way I’m more likely to see it.

Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

Security Audit Thoughts

DTS is a technical support organisation, not a security auditing service. Moreover, we don’t work with security auditors directly. However, we regularly get questions from developers who are working with a security auditor, and those often land in my queue. Given that, I’ve created this post to collect my ideas on this topic.

I see two types of security audits:
static analysis — This looks at the built code but doesn’t run it.
dynamic analysis — This runs the code and looks at its run-time behaviour.

While both techniques are valid, it’s critical that you interpret the resulting issues correctly. Without that, you run the risk of wasting a lot of time investigating issues that are not a problem in practice. In some cases it’s simply impossible to resolve an issue. And even if it is possible to resolve an issue, it might be a better use of your time to focus on other, more important work. A good security auditor should understand the behaviour of the platform you’re targeting and help you prioritise issues based on that. My experience is that many security auditors are not that good )-:

Static Analysis

The most common issue I see relates to static analysis. The security auditor runs their auditing tool over your built product, it highlights an issue, and they report that to you. These issues are usually reported with logic like this:
Routine f could be insecure.
Your program imports routine f.
Therefore your program is insecure.
This is logically unsound. The problem is with step 1: just because a routine might be insecure doesn’t mean that your use of that function is insecure.

Now, there are routines that are fundamentally insecure (I’m looking at you, gets!). Your security auditor is right to highlight those. However, there are many routines that are secure as long as you call them correctly. Your security auditor should understand the difference.

The canonical example of this is malloc. Calling malloc is not a fundamentally insecure operation. Sure, the world would be a better place if everyone used memory-safe languages [1], but that’s not the world we live in. If your security auditor highlights such a routine, you have two options:
Rewrite your code to avoid that routine.
Audit your use of that routine to ensure that it’s correct.
This is something that you’ll have to negotiate with your security auditor.

[1] Or would it? (-: The act of rewriting all that code is likely to produce its own crop of security bugs.

Tracking Down the Call Site

In most cases it’s easy to find the call site of a specific routine. Let’s say your security auditor notices that you’re calling gets and you agree that this is something you really should fix. To find the call site, just search your source code for gets. In some cases it’s not that simple. The call site might be within a framework, a static library, or even inserted by the compiler.
I have a couple of posts that explain how to track down such elusive call sites:
Using a Link Map to Track Down a Symbol’s Origin
Determining Why a Symbol is Referenced
The first is short and simple; the second is longer but comprehensive.

Apple Call Sites

In some cases the call site might be within Apple code. You most commonly see this when the Apple code is inserted into your product by the toolchain, that is, programs like the compiler and linker that are used to build your product. There are two ways you can audit such call sites:
Disassemble the code and audit the assembly language.
Locate the source of the code and audit that.
The latter only works when the toolchain code is open source. That’s commonly true, but not universally. If you’re unable to track down the source for an Apple call site, please start a thread here on DevForums with the details and we’ll try to help out.

If your analysis of the Apple call site indicates that it uses a routine incorrectly, you should absolutely file a bug about that.

Note Don’t file a bug that says “The Swift compiler inserted a call to malloc and that’s insecure.” That just wastes everyone’s time. Only file a bug if you can show that the code uses malloc incorrectly.

Dynamic Analysis

The vast majority of security audit questions come from static analysis, but every now and again I’ll see one based on dynamic analysis. However, that doesn’t change the fundamentals: dynamic analysis is not immune from faulty logic. For example, the following sequence suffers from exactly the same logic problem that I highlighted for static analysis:
Routine f could be insecure.
Something in your process calls routine f.
Therefore your program is insecure.
However, there are two additional wrinkles.

The first is that you might not have any control over the code in question. Let’s say you’re using some Apple framework and it calls gets [1]. What can you do about that? The obvious thing is to not use that framework. But what if that framework is absolutely critical to your product? Again, this is something you need to negotiate with your security auditor.

The second wrinkle is the misidentification of code. Your program might use some open source library, and an Apple framework might have its own copy of that open source library [2]. Are you sure your security auditor is looking at the right one?

[1] Gosh, I hope that’s never the case. But if you do see such an obvious security problem in Apple’s code, you know what to do.

[2] That library’s symbols might even be exported, a situation that’s not ambiguous because of the two-level namespace used by the dynamic linker on Apple platforms. See An Apple Library Primer for more on that.
0
0
158
Jul ’24
Transferring App ownership and maintaining access to the old keychain values
Hi Apple, Recently, we transferred an app from one Team ID to another (keeping the same App ID) and noticed that the app distributed with the new Team ID has lost access to the keychain values created by the old Team ID. We found an article on the Apple Developer Forums (https://developer.apple.com/forums/thread/706128) that explains this issue, but we want to confirm if this is still true.
1
0
238
Jul ’24
The customer requested a pen-test for this app, and they reported some issues related to buffer overflow and weak randomness functions
The customer requested a pen-test for this app, and they reported some issues related to buffer overflow and weak randomness functions. I reviewed the identified methods, but I couldn't find them in the code or in third-party SDKs. We would like to know if you can review these methods to see if there is a possible solution, or if you can guarantee that these functions are safe. They say they applied a reverse-engineering tool, and it showed that our app was compiled using these C/C++ functions that are considered unsafe. The tool used is Ghidra (https://ghidra-sre.org/). These are the methods reported by the cybersecurity team: Related to buffer overflow: Related to weak randomness functions:
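For context on the weak-randomness category: on Apple platforms the usual remediation is to derive security-sensitive values from the system CSPRNG rather than functions like rand() or random(); a minimal Swift sketch:

import Foundation
import Security

// Fill a buffer with cryptographically secure random bytes from the system CSPRNG.
func secureRandomBytes(count: Int) throws -> [UInt8] {
    var bytes = [UInt8](repeating: 0, count: count)
    let status = SecRandomCopyBytes(kSecRandomDefault, count, &bytes)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return bytes
}

// Swift's default random APIs are also backed by a secure system generator:
let nonce = UInt64.random(in: .min ... .max)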
2
0
439
Jul ’24
How is an iOS app on the App Store able to detect other apps?!?!
A client asked why we can't detect other apps installed on a device without an MDM profile; we explained this isn't possible due to privacy and security restrictions on iOS. A regular app cannot find other apps that are installed unless they are part of the same group. The client then told us to download SpyBuster (on the App Store), which is somehow collecting a list of bundle IDs or names of all installed apps. We were skeptical, but sure enough, the app showed us a list of apps we had installed. How is it doing this?!?! No MDM profile associated with the app. No special permissions requested. No access to anything shown in Privacy & Security in Settings. Is there a special entitlement we're not aware of? It just seems like they must be using a private API call to get this info, but that would of course mean it should be pulled from the App Store. We'd love to have this capability in our apps if it's legit and accepted by App Store review. Thanks!
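For reference, one documented, App Store-compatible way an app can infer whether specific other apps are installed is to probe URL schemes it has declared under LSApplicationQueriesSchemes in its Info.plist; whether SpyBuster relies on this is only a guess. A minimal sketch with illustrative scheme names:

import UIKit

// Each scheme probed here must also be listed in Info.plist under
// LSApplicationQueriesSchemes, otherwise canOpenURL(_:) always reports false.
// The scheme names are illustrative.
let probedSchemes = ["whatsapp", "fb", "spotify"]

func installedSchemes() -> [String] {
    probedSchemes.filter { scheme in
        guard let url = URL(string: "\(scheme)://") else { return false }
        return UIApplication.shared.canOpenURL(url)
    }
}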
3
0
450
Jul ’24
Transferring your apps and users to another team
Dear Apple: Due to a change in our company information, we need to migrate our live App from the current developer account to the new company's developer account. We have some questions about the migration process; please help us answer them. 1. After the new developer account accepts the transferred App, when the old live version of the App uses Sign in with Apple authorization, does the unique identifier it obtains still correspond to the old developer account, or will it become a new identifier corresponding to the new developer account? 2. Does the Sign in with Apple user migration API (the userMigrationInfo endpoint) have any restrictions on call volume or call frequency? (Our Sign in with Apple user base is currently about 5 million.) Thank you
3
0
419
Jul ’24
Can the Endpoint Security Extension communicate with a regular app?
I'm developing a system that uses an ES extension to control user file openings on Mac. When a user tries to open a file, the ES extension can either allow or deny the user from opening it. However, the policy for allowing/denying users to open files is managed by my normal Mac app. Therefore, the ES extension needs to proactively communicate with the normal app. Initially, I wanted to create an XPC service in my regular app, but according to the documentation, XPC services are managed by launchd and cannot be created by regular apps. So if I want my ES extension to communicate with the regular app proactively, what IPC method can I use?
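For context, one common pattern is for the system extension to publish a named XPC listener and for the app to connect to it; once the connection is up, the extension can call back into the app over that same connection. A minimal sketch, assuming a hypothetical Mach service name and protocol (the exact service naming and entitlement requirements for system extensions need to be checked separately):

import Foundation

// Hypothetical protocol the app implements so the extension can push policy queries to it.
@objc protocol PolicyProviderProtocol {
    func decideOpen(ofFile path: String, reply: @escaping (Bool) -> Void)
}

// In the system extension: publish a named XPC listener (service name is an assumption).
final class ExtensionXPCServer: NSObject, NSXPCListenerDelegate {
    private let listener = NSXPCListener(machServiceName: "com.example.myapp.espolicy")
    private(set) var appConnection: NSXPCConnection?

    func start() {
        listener.delegate = self
        listener.resume()
    }

    func listener(_ listener: NSXPCListener,
                  shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
        // The app exports PolicyProviderProtocol; the extension keeps the connection
        // so it can call the app whenever an ES event needs a decision.
        newConnection.remoteObjectInterface = NSXPCInterface(with: PolicyProviderProtocol.self)
        newConnection.resume()
        appConnection = newConnection
        return true
    }

    func askApp(aboutFile path: String, completion: @escaping (Bool) -> Void) {
        let proxy = appConnection?.remoteObjectProxyWithErrorHandler { _ in completion(false) }
        (proxy as? PolicyProviderProtocol)?.decideOpen(ofFile: path, reply: completion)
    }
}

// In the app: connect to the extension's listener and export the policy object.
// Connecting to a listener in the system domain may additionally require the
// .privileged connection option.
// let connection = NSXPCConnection(machServiceName: "com.example.myapp.espolicy", options: [])
// connection.exportedInterface = NSXPCInterface(with: PolicyProviderProtocol.self)
// connection.exportedObject = myPolicyProvider
// connection.resume()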
9
0
548
Jul ’24
SWIFT: server certificate does NOT include an ID which matches the server name
I'm working on a MacBook Pro running macOS Ventura 13.5.2, with Swift 5 (Xcode 15.2). I have a local httpd configured with vhosts, and I create my local certificates using mkcert. When I visit the https://example site with Chrome, the certificate is perfectly valid and there are no issues. When I try to contact the same site using a DataCallBack function to the URL, I get the error "server certificate does NOT include an ID which matches the server name". In the log: Connection 1: default TLS Trust evaluation failed(-9807) Connection 1: TLS Trust encountered error 3:-9807 Connection 1: encountered error(3:-9807)
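One way to see exactly why the trust evaluation fails, and, strictly for a known local development host, to proceed anyway, is to handle the server-trust challenge on the URLSession delegate and evaluate the trust yourself; a minimal sketch, with the host name as an illustrative placeholder:

import Foundation
import Security

final class LocalDevSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Evaluate the server trust and log the underlying error (for example, a
        // certificate whose subjectAltName does not include the requested host).
        var error: CFError?
        let trusted = SecTrustEvaluateWithError(trust, &error)
        print("host: \(challenge.protectionSpace.host), trusted: \(trusted), error: \(String(describing: error))")
        // Only ever bypass a failed evaluation for a known local development host.
        if !trusted && challenge.protectionSpace.host == "example.test" {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.performDefaultHandling, nil)
        }
    }
}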
25
1
715
Jul ’24
In macOS, is it possible to have a hardware-bound key in the system context?
We are interested in using a hardware-bound key to sign requests made to a server prior to a user being logged in. We would do this using a launch daemon. The goal here is to have a high degree of assurance that a request came from a particular device. The normal way to do this would be with a private key in the Secure Enclave. Based on these threads: https://forums.developer.apple.com/forums/thread/719342 https://forums.developer.apple.com/forums/thread/115833 and the write-up about the Data Protection Keychain, it doesn't appear possible with the SE. Rather, it seems that we must wait until we have a logged-in user context before we can use the SE. My questions are: am I correct in that the SE is not usable in the system context prior to login? is there any other way on macOS to sign a request in such a way that we know it comes from a specific device? Thanks.
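For reference, the "normal way" mentioned above looks roughly like this: an ECDSA P-256 key created in the Secure Enclave and used to sign a payload via SecKeyCreateSignature. This sketch assumes a logged-in user context (whether it can run from a pre-login launch daemon is exactly the open question), the application tag is illustrative, and a real implementation would create the key once and look it up for later signatures.

import Foundation
import Security

// Generate a Secure Enclave ECDSA P-256 key and sign a payload with it.
func signWithSecureEnclaveKey(_ payload: Data) throws -> Data {
    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
        kSecAttrKeySizeInBits as String: 256,
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: Data("com.example.devicekey".utf8), // illustrative tag
        ],
    ]
    var error: Unmanaged<CFError>?
    guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error),
          let signature = SecKeyCreateSignature(privateKey,
                                                .ecdsaSignatureMessageX962SHA256,
                                                payload as CFData,
                                                &error) else {
        throw error!.takeRetainedValue() as Error
    }
    return signature as Data
}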
4
0
379
Jul ’24
Custom Authorization Plugin for macOS: Issue with MCXSecurityAgent.invoke Error
Hi everyone, I'm currently developing a custom authorization plugin for macOS and have encountered an issue that I need help with. I've modified the auth DB to use my custom plugin instead of the default login window. Although I'm able to set both the name and password as context values, the login process is failing, and I'm seeing the error shown below in the security agent log. My mechanism list is:
<string>builtin:prelogin</string>
<string>builtin:policy-banner</string>
<string>MyPlugin:login</string>
<string>MyPlugin:value</string>
<string>builtin:login-begin</string>
<string>builtin:reset-password,privileged</string>
<string>loginwindow:FDESupport,privileged</string>
<string>builtin:forward-login,privileged</string>
<string>builtin:auto-login,privileged</string>
<string>builtin:authenticate,privileged</string>
<string>PKINITMechanism:auth,privileged</string>
<string>builtin:login-success</string>
<string>loginwindow:success</string>
<string>HomeDirMechanism:login,privileged</string>
<string>HomeDirMechanism:status</string>
<string>MCXMechanism:login</string>
<string>CryptoTokenKit:login</string>
<string>PSSOAuthPlugin:login-auth</string>
<string>loginwindow:done</string>
I am setting the name and password in MyPlugin:login and can also see them in the MyPlugin:value mechanism:
2024-07-25 06:53:30.813047-0700 0x2e3b Info 0x0 822 0 SecurityAgentHelper-x86_64: (MyPlugin) *****The name and password is test and test1234****
But:
2024-07-25 02:33:00.777530-0700 0x8772 Debug 0x0 1527 0 SecurityAgent: (MCXMechanism) [com.apple.ManagedClient:MCXSecurityPlugin] MCXSecurityAgent.invoke kAuthorizationEnvironmentName is NULL
2024-07-25 02:33:00.777530-0700 0x8772 Debug 0x0 1527 0 SecurityAgent: (MCXMechanism) [com.apple.ManagedClient:MCXSecurityPlugin] MCXSecurityAgent.invoke - user logging in is '(null)'
Has anyone encountered this issue before, or have any insights into what might be causing the kAuthorizationEnvironmentName is NULL error and why the user logging in is shown as '(null)'? Any guidance or suggestions on how to resolve this would be greatly appreciated.
3
0
297
Jul ’24