In my app I have the option to enable a help screen. This is a new view that simply shows a .html file.
It works fine up to and including tvOS 16.1, but in tvOS 17.0 the screen is blank.
Any ideas?
This is how it looks in tvOS 16.1
This is tvOS 17.0
textView.backgroundColor = .white
textView.textColor = .black
textView.clipsToBounds = true
textView.layer.cornerRadius = 20.0
textView.isUserInteractionEnabled = true
textView.isScrollEnabled = true
textView.showsVerticalScrollIndicator = true
textView.bounces = true
// Allow scrolling with the Siri Remote (indirect touches).
textView.panGestureRecognizer.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.indirect.rawValue)]

// Load the localized HTML manual from the bundle and render it as attributed text.
if let htmlPath = Bundle.main.url(forResource: NSLocalizedString("manual", tableName: nil, comment: ""), withExtension: "html") {
    do {
        let attributedStringWithHtml: NSAttributedString = try NSAttributedString(
            url: htmlPath,
            options: [.documentType: NSAttributedString.DocumentType.html],
            documentAttributes: nil
        )
        self.textView.attributedText = attributedStringWithHtml
    } catch {
        print("Error loading text: \(error)")
    }
}
When attempting to load an .mlmodel and run it on the CPU/GPU, you pass the ComputeUnit you'd like to use when creating the model:
model = ct.models.MLModel('mymodel.mlmodel', compute_units=ct.ComputeUnit.CPU_ONLY)
Documentation for coremltools v7.0 says:
compute_units: coremltools.ComputeUnit
coremltools.ComputeUnit.ALL: Use all compute units available, including the neural engine.
coremltools.ComputeUnit.CPU_ONLY: Limit the model to only use the CPU.
coremltools.ComputeUnit.CPU_AND_GPU: Use both the CPU and GPU, but not the neural engine.
coremltools.ComputeUnit.CPU_AND_NE: Use both the CPU and neural engine, but not the GPU. Available only for macOS >= 13.0.
coremltools 7.0 (and the previous versions I've tried) seems to ignore that hint and only runs my models on the ANE. The same model, when loaded into Xcode and given a performance test with CPU Only selected in the Xcode performance tool, runs happily on the CPU.
Is there a way in Python to get our models to run on different compute units?
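For comparison, this is roughly how compute units are pinned on the native Core ML side in Swift (a minimal sketch; the model URL is a placeholder):

import CoreML

let modelURL = URL(fileURLWithPath: "mymodel.mlmodelc") // hypothetical path to the compiled model
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly // .all, .cpuAndGPU, .cpuAndNeuralEngine are the other options
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    _ = model
} catch {
    print("Failed to load model: \(error)")
}

This is the behaviour I would expect the compute_units hint in coremltools to mirror.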
I know it's uncommon for Mac apps to have uninstallers. But in our use case, we need to have a separate uninstaller app for various purposes, including:
to prompt the user for some information before deleting the app
to stop the app (which is kept running via a launch agent with KeepAlive = TRUE), so technically this means unloading or booting out the launch agent (and deleting its plist from ~/Library/LaunchAgents/)
to remove the main app from the /Applications folder (and of course removing its extensions, but I assume that is automatic)
to remove the artifacts that the app has created outside its container
Please note that the main app is sandboxed, signed with a Developer ID, and contains system extensions.
I've taken a look at a couple of other third-party apps (e.g. AWS VPN Client) that also have a separate uninstaller app. When run, it asks for admin credentials before proceeding, and then cleans things up as expected.
The question is:
What is the 'right' way (best practices, APIs, pitfalls, etc.) to build such an uninstaller app that can do all of the above?
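For reference, the naive version of the agent/file cleanup I have in mind looks roughly like this (a sketch only; the label, paths, and lack of privilege escalation are assumptions):

import Foundation

// Hypothetical agent plist installed by the main app.
let agentPlist = FileManager.default.homeDirectoryForCurrentUser
    .appendingPathComponent("Library/LaunchAgents/com.example.mainapp.agent.plist")

// 1. Boot out the launch agent so KeepAlive stops relaunching the app.
let bootout = Process()
bootout.executableURL = URL(fileURLWithPath: "/bin/launchctl")
bootout.arguments = ["bootout", "gui/\(getuid())", agentPlist.path]
try? bootout.run()
bootout.waitUntilExit()

// 2. Remove the plist and the app bundle (removing items from /Applications may
//    require admin rights or user consent, which this sketch does not handle).
try? FileManager.default.removeItem(at: agentPlist)
try? FileManager.default.removeItem(atPath: "/Applications/MainApp.app")

I'm mainly asking whether something along these lines is the sanctioned approach, and how the sandbox and system extensions change the picture.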
I'm trying to debug a problem that's affecting customers who have upgraded to watchOS 10, and I'm unable to get any console output from the watch when I debug the watch app in Xcode, or from the Console app on my Mac.
The other weird thing is that my watch shows up twice in the device list in the Console app.
Is this a known issue?
I'm creating an immersive experience with RealityView (think of it as a Fruit Ninja-like experience). Say I have some randomly generated fruits that are created according to certain criteria in a System's update function, and I want to interact with these generated fruits using hand gestures.
Well, it simply doesn't work: the gesture's onChanged closure doesn't fire as I expected. I put both an InputTargetComponent and a CollisionComponent on them to make them detectable in the immersive view. It works fine if I set up these fruits in the scene with Reality Composer Pro before the app runs.
Here is what I did
First I load the fruit template:
let tempScene = try await Entity(named: "fruitPrefab.usda", in: realityKitContentBundle)
fruitTemplate = tempScene.findEntity(named: "fruitPrefab")
Then I clone it during the System's update(context:) function. parent is an invisible entity placed at .zero in my loaded immersive scene:
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale
parent.addChild(fruitClone)
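For completeness, this is roughly how I make each clone hittable, mirroring the components mentioned above (the collision shape and size are placeholders):

// Components set on the runtime clone so gestures can target it.
fruitClone.components.set(InputTargetComponent())
fruitClone.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))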
I attach my gesture to the RealityView with:
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            print("dragging")
        }
        .onEnded { tapEnd in
            print("dragging ends")
        }
)
I was wondering whether the runtime-generated entity isn't tracked by RealityView, but since I added it as a child of a placeholder entity in the scene, it should be fine... right?
Or do I just need to put a new AnchorEntity there?
Thanks for any advice in advance. I've been trying to figure this out all day.
Xcode 15 and iOS 17.0.2 are causing debugging issues when running from Xcode over a cable. Since I updated to Xcode 15 and my device to iOS 17.0.2, there is a delay of 1 to 3 minutes before the app launches on the real device. It is also really slow after launching: every step over or step into takes almost a minute.
I can see the following warning in the console: "warning: libobjc.A.dylib is being read from process memory. This indicates that LLDB could not find the on-disk shared cache for this device. This will likely reduce debugging performance."
I tried the suggested fix of clearing the device support files with:
rm -r ~/Library/Developer/Xcode/iOS\ DeviceSupport
But even after that, I'm facing the same issue.
Please help me fix this issue.
Is there a limit to the size of the object that you want to capture with Object Capture? For example, could it capture a horse or another similarly sized animal?
If the user deletes the app, must something be done to ensure that the launchd agent is removed (or is that something that does not need to be worried about in general)? For more context, I am creating a launchd agent using the SMAppService.
I am only an amateur so apologies if this is a bit of a stupid question.
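For context, the registration side is just the standard SMAppService flow, roughly as below (the plist name is a placeholder):

import ServiceManagement

// The agent plist ships inside the app bundle (Contents/Library/LaunchAgents).
let agent = SMAppService.agent(plistName: "com.example.myagent.plist")
do {
    try agent.register()      // what the app does today
    // try agent.unregister() // presumably what an uninstall/cleanup step would call
} catch {
    print("SMAppService error: \(error)")
}

So the question is whether anything beyond this needs to happen when the user simply drags the app to the Trash.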
As of iOS 17, SFSpeechRecognizer.isAvailable returns true even when recognition tasks cannot be fulfilled and immediately fail with the error "Siri and Dictation are disabled".
The same speech recognition code works as expected in iOS 16.
In iOS 16, neither Siri nor Dictation needed to be enabled for speech recognition to be available, and it worked as expected. In the past, once permission was given, only an active network connection was required for speech recognition to function.
There seem to be two issues in play:
In iOS 17, SFSpeechRecognizer.isAvailable incorrectly returns true when it can't fulfil requests.
In iOS 17, Dictation or Siri must be enabled to handle speech recognition tasks, while in iOS 16 this wasn't the case.
If issue 2 is expected behaviour (I surely hope not), there is no way to query whether Siri or Dictation is enabled in order to handle those cases in code and inform the user why speech recognition doesn't work.
Expected behaviour:
Speech recognition is available when Siri and Dictation are disabled.
SFSpeechRecognizer.isAvailable correctly returns false when no speech recognition requests can be handled.
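A minimal sketch of the kind of check involved (the locale and audio file URL are placeholders, and the surrounding usage is an assumption based on the description above):

import Speech

SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else {
        return // on iOS 17 this guard still passes even when tasks will fail
    }
    let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/path/to/audio.m4a"))
    _ = recognizer.recognitionTask(with: request) { result, error in
        // On iOS 17 with Siri/Dictation disabled this fails immediately.
        if let error = error { print("Recognition failed: \(error)") }
    }
}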
iOS Version 17.0 (21A329)
Xcode Version 15.0 (15A240d)
Anyone else experiencing the same issues or have a solution?
Reported this to Apple as well -> FB13235751
Our application, which isn't a sandboxed app, tries to access (copy) the file from the URL given by the file provider into the application's cache path, but fails with an 'operation not permitted' error. This happens in two cases: 1. file creation and 2. file modification.
Also, on checking, the path is something like "/Library/Application Support/FileProvider/{RandID}/wharf/wharf/propagate".
We also tried to access the folder using a Python script run from Terminal, but that failed with the same error.
But when we enable the 'Full Disk Access' option in the 'Privacy & Security' pane of System Settings for the application (our main app/Terminal), the files can be accessed.
Our application doesn't need Full Disk Access; instead, it needs permission to access the file provider extension's cache path where the temp files are stored.
How can we get permission for that folder and access the files (e.g. by setting entitlement keys or some other way)? Or else,
is there any way to tell the system to use our application's cache path as the file provider's cache path?
Any help would be appreciated.
Hi all:
I am a retired engineer who spent many years developing software for various non-real-time applications. I am interested in continuing to develop software in a macOS/iOS environment, primarily to support some of my hobbies, but I don't know the best approach for doing so.
At least initially, I would like to develop a few small applications that could run on a Mac or iPad for personal use (i.e., I am not interested in releasing the applications on the App Store; I only want to install them on my own devices). Does Xcode make sense as a development environment for me? Would I need to become a member of the Apple Developer Program?
I'm working on a new application that uses the UserNotifications framework. I distributed the application to a few testers. One of them reports a crash at launch, occurring in Dispatch queue: com.apple.usernotifications.UNUserNotificationServiceConnection.call-out. All other users report the application working correctly and have reported no crash. The user is on an iMac (Retina 5K, 27-inch, 2017) running macOS 13.6 (22G120). The strange part is that he sent me a screenshot showing that my application does not appear under Notifications in System Settings, which might explain the crash. The relevant thread that crashes is:
Thread 12 Crashed:: Dispatch queue: com.apple.usernotifications.UNUserNotificationServiceConnection.call-out
0 libsystem_kernel.dylib 0x7ff810ecd1e2 __pthread_kill + 10
1 libsystem_pthread.dylib 0x7ff810f04ee6 pthread_kill + 263
2 libsystem_c.dylib 0x7ff810e2bb45 abort + 123
3 libc++abi.dylib 0x7ff810ebf282 abort_message + 241
4 libc++abi.dylib 0x7ff810eb13fb demangling_terminate_handler() + 267
5 libobjc.A.dylib 0x7ff810b857ca _objc_terminate() + 96
6 libc++abi.dylib 0x7ff810ebe6db std::__terminate(void (*)()) + 6
7 libc++abi.dylib 0x7ff810ebe696 std::terminate() + 54
8 libdispatch.dylib 0x7ff810d64047 _dispatch_client_callout + 28
9 libdispatch.dylib 0x7ff810d6a200 _dispatch_lane_serial_drain + 769
10 libdispatch.dylib 0x7ff810d6ad6c _dispatch_lane_invoke + 417
11 libdispatch.dylib 0x7ff810d753fc _dispatch_workloop_worker_thread + 765
12 libsystem_pthread.dylib 0x7ff810f01c55 _pthread_wqthread + 327
13 libsystem_pthread.dylib 0x7ff810f00bbf start_wqthread + 15
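For context, the app's notification setup is roughly the following (a sketch; the options and call site are assumptions):

import UserNotifications

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
    // The completion handler may run on a background queue; an uncaught exception
    // raised around this call-out would be consistent with the crashing queue above.
    if let error = error {
        print("Notification authorization failed: \(error)")
    }
}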
Does anyone have any idea of what's happening?
Hello everyone!
I'm currently working on an iOS app developed in Swift that involves connecting to a specific BLE (Bluetooth Low Energy) device and exchanging data even when the app is terminated or running in the background.
I'm trying to figure out a way to wake up my application when a specific Bluetooth device (whose UUID is known) is visible, and then connect to it and exchange data.
Is this functionality achievable?
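Roughly what I have in mind is Core Bluetooth state restoration, something like the sketch below (the restore identifier and service UUID are placeholders, and it assumes the bluetooth-central background mode is enabled):

import CoreBluetooth

final class BLEManager: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var targetPeripheral: CBPeripheral?
    private let serviceUUID = CBUUID(string: "0000FFE0-0000-1000-8000-00805F9B34FB") // placeholder

    override init() {
        super.init()
        // The restore identifier lets the system relaunch the app for pending BLE events.
        central = CBCentralManager(
            delegate: self,
            queue: nil,
            options: [CBCentralManagerOptionRestoreIdentifierKey: "com.example.ble.restore"]
        )
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Background scanning requires scanning for specific service UUIDs.
        central.scanForPeripherals(withServices: [serviceUUID])
    }

    func centralManager(_ central: CBCentralManager, willRestoreState dict: [String: Any]) {
        // Called when the system relaunches the app and hands back pending peripherals.
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        targetPeripheral = peripheral // keep a strong reference so the connection isn't dropped
        central.connect(peripheral)   // pending connections are preserved by state restoration
    }
}

But I'm not sure whether this covers the case where the app has been terminated.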
Thank you in advance for your help!
I have been reporting through Feedback Assistant almost daily that our Mac Studio panics at shutdown, ever since the 14.1 betas started. If the Mac is shut down at the login screen or immediately after logging in, it executes a normal shutdown. If it is used for any length of time beyond that, it panics at shutdown and reboots.
I'd like to be able to associate some data with each CapturedRoom scan and maintain those associations when CapturedRooms are combined in a CapturedStructure.
For example, in the delegate method captureView(didPresent:error:), I'd like to associate external data with the CapturedRoom. That's easy enough to do with a Swift dictionary, using the CapturedRoom's identifier as the key to the associated data.
However, when I assemble a list of CapturedRooms into a CapturedStructure using StructureBuilder.init(from:), the rooms in the output CapturedStructure have different identifiers so their associations to the external data are lost.
Is there any way to track or identify CapturedRoom objects that are input into a StructureBuilder to the rooms in the CapturedStructure output? I looked for something like a "userdata" property on a CapturedRoom that might be preserved, but couldn't find one. And since the room identifiers change when they are built into a CapturedStructure, I don't see an obvious way to do this.
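To make the association concrete, this is roughly what I'm doing (MyRoomInfo is a hypothetical type):

// Keyed by the room identifier obtained in captureView(didPresent:error:).
struct MyRoomInfo { var name: String }
var roomInfo: [UUID: MyRoomInfo] = [:]

// Inside captureView(didPresent:error:):
// roomInfo[processedResult.identifier] = MyRoomInfo(name: "Kitchen")

// After the StructureBuilder produces the CapturedStructure, its rooms carry new
// identifiers, so roomInfo[builtRoom.identifier] returns nil for every built room.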
I'm working on a game that uses HDR display output for a much brighter range.
One of the features of the game is the ability to export in-game photos. The only appropriate format I found for this is OpenEXR.
The embedded Photos app is capable of showing HDR photos on an HDR display.
However, if I drop an EXR file with a large range into Photos, it isn't properly displayed in HDR mode with the full range. At the same time, pressing Edit on the file makes it HDR-displayable, and it remains displayable if I save the edit with any change, even a tiny one.
Moreover, if the EXR file is placed next to a 'true' HDR one (or an EXR 'fixed' as described above), then while scrolling between the files, the broken EXR magically fixes itself at the exact moment the other HDR image comes onto the screen.
I tested different files with various internal formats; it seems to be a common problem for all of them.
Tested on the latest iOS 17.0.3.
Thank you in advance.
I am new to the Security framework.
I want to access items only in the dynamic keychain for smart cards, and just the user keychain in other scenarios.
But SecKeychainOpen, SecKeychainGetPath and SecKeychainCopyDomainSearchList are deprecated. How do I make sure SecItemCopyMatching only looks for items in a specific type of keychain?
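One direction I've looked at is constraining the search by access group, e.g. limiting it to token-backed (smart card) items; whether that corresponds to the 'dynamic' keychain case is an assumption on my part:

import Security

// Search only identities provided by tokens (smart cards) via the token access group.
let query: [String: Any] = [
    kSecClass as String: kSecClassIdentity,
    kSecAttrAccessGroup as String: kSecAttrAccessGroupToken,
    kSecMatchLimit as String: kSecMatchLimitAll,
    kSecReturnRef as String: true
]
var result: CFTypeRef?
let status = SecItemCopyMatching(query as CFDictionary, &result)
print("status: \(status)")

Is this the supported way, or is there a better attribute to scope the search?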
In the context of a system utility that reports OS stats periodically, the security type for a connected WiFi network could be obtained with Core WLAN via CWInterface.security.
This used to work up to Ventura; however, after upgrading to Sonoma, cwInterface.security now returns kCWSecurityUnknown.
In other posts, I have read about changes in how Core WLAN works which are related to Sonoma.
How can I determine the security type for a connected WiFi network on Sonoma? It would be preferable if the suggested approach would also work on previous macOS versions as well.
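For reference, this is roughly how the value is read today (a sketch of the Core WLAN calls as I use them):

import CoreWLAN

// Query the default Wi-Fi interface and read its security type.
if let interface = CWWiFiClient.shared().interface() {
    let security = interface.security()
    print("Security: \(security.rawValue)") // comes back as kCWSecurityUnknown on Sonoma in my case
}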
Many thanks in advance! :-)
Hi all!
So SMJobBless is deprecated, and I want my app to do some privileged things, e.g. move a file into a root-owned folder after showing a permission dialog. Simple, right?
But how can I do that simple thing? I found an example with an agent, but an agent doesn't have root permission to write a file into root's folder.
Any help?
I built my first app on my iPhone 14 running iOS 16, including all the updated versions of iOS 16 since June. It is an OBD-II app that reads vehicle data through the BLE adapter on my vehicle. I have run the app for 20+ hours of testing on iOS 16 with zero errors, warnings, or crashes in Xcode, plus more on my drives to and from school with just my phone.
I updated my phone to iOS 17.0.3 four days ago and it has been crashing ever since. I open the app, and as soon as I try to connect to my Bluetooth adapter via a button press, it crashes. It will do this countless times and then randomly work once. I was hoping today's iOS 17.1 update would fix the issue, but it hasn't.
It shows this error, highlighted in red on the "struct DataManagementApp: App {" line of my code, right after the @main:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x10)
When it randomly decides to work, it also prints this warning each time I move between views. It is highlighted in yellow in the console window:
Snapshotting a view (0x105879710, _UIButtonBarStackView) that is not in a visible window requires afterScreenUpdates:YES.
Lastly, as I am new to iOS development, I have been trying to figure out how to look at debugging and logs. Somewhere along the way, this was printed to the console, highlighted in green:
warning: Module "/Users/bs/Library/Developer/Xcode/iOS DeviceSupport/iPhone15,2 17.1 (21B74)/Symbols/usr/lib/system/libsystem_kernel.dylib" uses triple "arm64e-apple-ios17.1.0", which is not compatible with the target triple "arm64-apple-ios17.0.0". Enabling per-module Swift scratch context.
I never had any of these pop up, nor did my app crash, throughout development or testing. It has only started happening since I updated my phone to iOS 17. I'm not sure how to find the problem or what it might be. I figured it was an issue with the software updates, either iOS or Xcode. To make it even more confusing, the app still works on my iPad after updating it to iPadOS 17.0.3.
Any help would be much appreciated.