We have observed for a few months that the Instruments tool in Xcode does not show correct memory allocations for the PacketTunnelProvider process on iOS 17: the reported allocation never exceeds 6-7 MB, which is not the case on iOS 16 or 15. Additionally, Instruments crashes the PacketTunnelProvider process after profiling for a few minutes.
Please note that I am not running the Xcode debugger on the PacketTunnelProvider process alongside Instruments, as that is a known issue that causes the PacketTunnelProvider to be killed when both Instruments and the Xcode debugger are attached.
Is anyone else facing this issue, and does anyone have a workaround?
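In the meantime, a workaround we've used for the measurement half of the problem (a minimal sketch, not an official substitute for Instruments) is to have the extension log its own physical footprint via task_info with the TASK_VM_INFO flavor; phys_footprint is, as far as I know, the number the system's memory limit is enforced against. The subsystem name and logging approach below are placeholders:

import Foundation
import os

let memLog = Logger(subsystem: "com.example.tunnel", category: "memory") // placeholder subsystem

func logCurrentFootprint() {
    // Query this process's VM statistics from the kernel.
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { intPtr in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), intPtr, &count)
        }
    }
    guard kr == KERN_SUCCESS else { return }
    memLog.info("phys_footprint: \(info.phys_footprint / 1_048_576) MB")
}

// Call this on a timer inside the provider and read the values back with
// Console.app or `log stream`, so no Instruments attachment is needed.

This won't explain the crashes, but it at least gives trustworthy numbers while profiling is broken.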
Instruments is a performance-analysis and testing tool for iOS, iPadOS, watchOS, tvOS, and macOS apps.
Posts under the Instruments tag:
The Downloads button in Safari keeps flickering and cannot be pressed no matter what.
Error when trying to generate a Core ML performance report; the message says:
The data couldn't be written because it isn't in the correct format.
Here is the code to replicate the issue:
import numpy as np
import coremltools as ct
from coremltools.converters.mil import Builder as mb
import coremltools.converters.mil as mil
w = np.random.normal(size=(256, 128, 1))
wemb = np.random.normal(size=(1, 32000, 128)) # .astype(np.float16)
rope_emb = np.random.normal(size=(1, 2048, 128))
shapes = [(1, seqlen) for seqlen in (32, 64)]
enum_shape = mil.input_types.EnumeratedShapes(shapes=shapes)
fixed_shape = (1, 128)
max_length = 2048
dtype = np.float32
@mb.program(
    input_specs=[
        # mb.TensorSpec(enum_shape.symbolic_shape, dtype=mil.input_types.types.int32),
        mb.TensorSpec(enum_shape.symbolic_shape, dtype=mil.input_types.types.int32),
    ],
    opset_version=mil.builder.AvailableTarget.iOS17,
)
def flex_like(input_ids):
    indices = mb.fill_like(ref_tensor=input_ids, value=np.array(1, dtype=np.int32))
    causal_mask = np.expand_dims(
        np.triu(np.full((max_length, max_length), -np.inf, dtype=dtype), 1),
        axis=0,
    )
    mask = mb.gather(
        x=causal_mask,
        indices=indices,
        axis=2,
        batch_dims=1,
        name="mask_gather_0",
    )
    # mask = mb.gather(
    #     x=mask, indices=indices, axis=1, batch_dims=1, name="mask_gather_1"
    # )
    rope = mb.gather(x=rope_emb.astype(dtype), indices=indices, axis=1, batch_dims=1, name="rope")
    hidden_states = mb.gather(x=wemb.astype(dtype), indices=input_ids, axis=1, batch_dims=1, name="embedding")
    return (
        hidden_states,
        mask,
        rope,
    )

cml_flex_like = ct.convert(
    flex_like,
    compute_units=ct.ComputeUnit.ALL,
    compute_precision=ct.precision.FLOAT32,
    minimum_deployment_target=ct.target.iOS17,
    inputs=[
        ct.TensorType(name="input_ids", shape=enum_shape),
    ],
)
cml_flex_like.save("flex_like_32")
If I remove hidden_states from the return it does work, and it also works if I keep hidden_states but remove both mask and rope; i.e., the report is generated for programs with either of these returns:
    return (
        # hidden_states,
        mask,
        rope,
    )
and
    return (
        hidden_states,
        # mask,
        # rope,
    )
It also works if I use a static shape instead of an EnumeratedShapes input.
I'm using macOS 15.0 and Xcode 16.0.
Edit 1:
Forgot to mention that although the performance report fails, the model is still able to make predictions.
I've been seeing warnings like the following with my Apple Watch app:
=== AttributeGraph: cycle detected through attribute 140952 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
=== AttributeGraph: cycle detected through attribute 131640 ===
I've done some debugging with Instruments and Xcode, and I've determined that it's caused by adding a
.toolbar {
    ToolbarItemGroup(placement: .topBarTrailing) {
        Label("Stop", systemImage: "xmark")
    }
}
to an item within my NavigationSplitView list destination view.
To aid reproduction of this issue, I've verified that the same warning occurs with the Apple sample code from the WWDC23 Backyard Birds app:
https://developer.apple.com/documentation/swiftui/backyard-birds-sample?changes=_9
If you take the Apple sample code, open it in Xcode 15.4, run it in the watchOS 10.5 Apple Watch simulator, and then select the Bird Springs item, all is well.
If you then add the above toolbar code to the BackyardSummaryTab file at the bottom of the VStack, you'll see the same AttributeGraph issues I'm seeing in my app.
Is this a bug? I can't understand why adding a static toolbar item in watchOS is causing this.
I've seen this issue in the simulator and also on device.
Is there a way to identify the process that is handling a global keyboard shortcut? Let's say I press some Cmd-Opt-L in any application: a file shows up on disk, and there is no visual indication of where it's coming from. How do I find the offender?
We are getting crashes in our WKWebView-based Cordova application due to memory issues. To debug them, we are trying to use the JavaScript Allocations instrument in Instruments' memory profiler, but while using it the profiler automatically closes within a few seconds, so we are not able to debug the underlying memory issue in WKWebView.
I saved a trace from the Leaks tool. However, when I try to open the saved file, I get an error that says, "The document 'Generations.trace' could not be opened. Trace is malformed - run data is missing." This happens every time I try to save from Instruments.
I'm using Instruments 15.3. Any ideas?
We're planning to integrate a performance-profiling tool into our own application.
The standard tool is Xcode Instruments, which is great but requires Xcode to be installed.
I have a couple of queries related to that:
Is there a way to use Instruments without clients needing to install Xcode?
If not, can we include the Xcode installation as part of our application's installation on the customer's machine?
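On the second question: as far as I know, Instruments is only distributed as part of Xcode, and I don't believe the license permits redistributing it inside your own installer. One alternative is to build signpost instrumentation into the application itself, so customer machines need nothing extra and any machine that does have Xcode can record a trace later. A minimal sketch; the subsystem string and interval name are placeholders:

import os.signpost

let perfLog = OSLog(subsystem: "com.example.app", category: .pointsOfInterest) // placeholder subsystem

func processOrder() {
    // Marks a named interval that Instruments can display later.
    let spid = OSSignpostID(log: perfLog)
    os_signpost(.begin, log: perfLog, name: "ProcessOrder", signpostID: spid)
    defer { os_signpost(.end, log: perfLog, name: "ProcessOrder", signpostID: spid) }
    // ... the work being measured ...
}

The intervals then show up in the os_signpost and Points of Interest instruments whenever a trace is eventually recorded on a development machine.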
I'm currently supporting an app where one of the dependencies we use had an update. The app is an iPad-only app that is meant to run as a kiosk (Guided Access, Single App Mode). After the dependency update for the casing of the tablet/kiosk, we noticed an uptick in errors that may or may not be associated with the recent third-party software update.
I wanted to verify whether the update had an impact on our production kiosks. To get a good idea of whether the update to the casework vendor's library had a negative impact, though, I would need to run Instruments on a tablet while it's in the case. There's only one power cable to the iPad, and the cable that would connect directly to a Mac running Xcode is the same cable that connects to the casework.
Is it possible to run Instruments, and then retrieve data, from an app that's not directly connected to Xcode? Or is it possible for Xcode to somehow connect to a running instance of an app remotely?
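Not Instruments, but if the goal is performance data from kiosks in the field, MetricKit might fit: it aggregates metrics on-device with no Mac attached and delivers payloads (roughly daily) that the app can upload anywhere. A minimal sketch; the class name and the upload step are placeholders:

import Foundation
import MetricKit

final class MetricsSubscriber: NSObject, MXMetricManagerSubscriber {
    func start() {
        // Register to receive metric and diagnostic payloads.
        MXMetricManager.shared.add(self)
    }

    // Daily metric payloads (CPU, memory, launch times, cellular data, ...).
    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            let json = payload.jsonRepresentation()
            // Placeholder: upload `json` to your own backend for analysis.
            print("metric payload: \(json.count) bytes")
        }
    }

    // Crash and hang diagnostics (iOS 14+).
    func didReceive(_ payloads: [MXDiagnosticPayload]) {
        // Placeholder: same idea, upload for offline inspection.
    }
}

For Instruments itself, I believe the device still needs a wired or Wi-Fi connection to a Mac while recording.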
I have the Vision Pro developer strap. Do I need to do anything to make Instruments transfer data over it rather than over Wi-Fi, or will it do that automatically?
It seems incredibly slow for transferring and then analyzing data.
I can see the Vision Pro recognized in Apple Configurator, so I assume the strap is working.
Otherwise: any tips for speeding up Instruments? Capturing 5 minutes of gameplay (high-frequency sampling) then takes 30-40+ minutes to appear in Instruments on an M2 Max with 32 GB.
Thanks!
When I run my app in debug mode, whenever a time-consuming task runs (like a Core Data fetch request), a time indication appears on top of the view. I am unable to find out what it means or what it is called.
It looks like:
"appName NNNN ms"
(it is not caught by the screenshot)
Can someone tell me if it is possible to get rid of it, and how? Thanks for your help!
I ran it (Leaks) on a process for about 2 hours. It collected 68 GB of data. It cannot open the folder: it can't find a file (which is there, as a .zip archive), or if I expand it, I just get an error about a missing index.
Filing a bug about this is difficult, since it's 68 GB of data.
I'm on the visionOS 1.2 beta, and Instruments will capture everything but RealityKit information.
RealityKit Frames and RealityKit Metrics capture no data. This used to work, though I'm not sure in which version. Unbelievably frustrating.
I was watching https://developer.apple.com/videos/play/wwdc2023/10248/ . In this video it is advised to make the shared property below async to benefit from concurrency (at 40:04 in the video, exact time). Do you know how to do it?
class ColorizingService {
    static let shared = ColorizingService()

    func colorize(_ grayscaleImage: CGImage) async throws -> CGImage
    // [...]
}

struct ImageTile: View {
    // [...]
    // implicit @MainActor
    var body: some View {
        mainContent
            .task { // inherits @MainActor isolation
                // [...]
                result = try await ColorizingService.shared.colorize(image)
            }
    }
}
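In case it's useful: one reading of the session's advice (a hedged sketch of mine, not code from the video) is to make the service an actor. The await in .task then hops off the main actor for the duration of the call instead of blocking the UI:

import CoreGraphics

actor ColorizingService {
    static let shared = ColorizingService()

    func colorize(_ grayscaleImage: CGImage) async throws -> CGImage {
        // Actor-isolated: runs on the actor's executor, not the main actor.
        // (Real colorizing work elided; returning the input as a stub.)
        return grayscaleImage
    }
}

The call site in ImageTile stays exactly as written; only the isolation of colorize(_:) changes.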
Hello, technical friends. I am developing a custom keyboard extension and have run into a technical difficulty that neither AI nor Google has solved for me.
Problem description:
When my app is first installed, I open Settings from the app, enable Full Access, and the app crashes when I return to it. On the second run, I open Settings from the app and update the Full Access permission; returning to the app automatically relaunches it, and from the third, fourth, and nth times onward it no longer crashes.
It seems like iOS kills the host app of a custom keyboard extension after Full Access is updated?
I want the app, on its first install, to survive returning from updating Full Access without crashing, but I don't know how to fix this problem. I look forward to experts in Apple development providing relevant techniques and ideas so I can solve it. Thank you very much!
When debugging in Xcode, after the app is forced to exit, the console prints:
Message from debugger: Terminated due to signal 9
When I was monitoring CPU and memory usage, both were very low and I didn't see anything unusual.
Users have reported unusually high data usage with my app.
To investigate, I profiled it in Instruments. My app, as expected, is using minimal data. However, in Instruments I see an "Unknown" process, which sends around 1 MB of data every 2 seconds.
Can anyone explain what the Unknown process is? Sorry my question is vague, but I'm at the beginning of understanding Instruments' output, so your help is very much appreciated.
Hi! I have already run product page optimization tests in the past, but now I want to test different versions of my icon.
But I can't see the option to change the icons. How can I do that?
Hi,
I am using xctrace to launch an app and check whether the app launched successfully or crashed by reading the content of the .trace file. However, if multiple devices are connected, xctrace complains "No devices found matching the identifier specified." Please let me know if there are any limitations on the devices that can be connected.
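For what it's worth, when more than one device is attached I've had to pass xctrace an explicit --device argument (a device name or UDID) so it doesn't have to guess. A sketch of driving it from Swift; the UDID, app name, and template are placeholders, and the exact launch syntax may differ on your xctrace version:

import Foundation

// Launches `xcrun xctrace record` against one specific device so that
// multiple connected devices don't trip the "No devices found matching
// the identifier specified" error. All argument values are placeholders.
let xctrace = Process()
xctrace.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
xctrace.arguments = [
    "xctrace", "record",
    "--template", "App Launch",
    "--device", "00008110-0123456789ABCDEF", // hypothetical UDID
    "--output", "launch.trace",
    "--launch", "--", "MyApp",               // hypothetical app name
]
try xctrace.run()
xctrace.waitUntilExit()
print("xctrace exited with status \(xctrace.terminationStatus)")

Running `xcrun xctrace list devices` first shows the identifiers it will accept.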
・Xcode 15.1
・The app also supports Apple Watch.
In the privacy manifest, we set NSPrivacyTracking to YES and NSPrivacyTrackingDomains to specific domains.
Furthermore, to avoid warnings when uploading to TestFlight, we have implemented privacy manifest files in the app with the following configuration:
・The .xcprivacy files for the app itself and the Watch extension are placed under their respective target directories.
・The settings related to tracking domains are listed in the app's own .xcprivacy.
・The Watch extension's .xcprivacy only describes the NSPrivacyAccessedAPIType reason for UserDefaults.
However, these implementations do not block network connections, and "Fault" still occurs in the Points of Interest instrument.
Is there something wrong with my implementation?
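For comparison, this is roughly what I'd expect the tracking section of the app target's PrivacyInfo.xcprivacy to look like (a sketch; the domain is a placeholder). Note that, as I understand it, NSPrivacyTrackingDomains only causes the system to block those domains while the user has not granted App Tracking Transparency permission; traffic to domains not listed, or after tracking is authorized, would still go through:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Declares that the app engages in tracking. -->
    <key>NSPrivacyTracking</key>
    <true/>
    <!-- Domains the system blocks until ATT permission is granted. -->
    <key>NSPrivacyTrackingDomains</key>
    <array>
        <string>tracking.example.com</string> <!-- placeholder domain -->
    </array>
</dict>
</plist>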
Hi!
I watched the WWDC 2019 Optimizing App Launch video and can't see the lifecycle phases when I profile my app.
I'm using Xcode 15.2 with Instruments 15.2, and SwiftUI as the UI framework.
Here is a screenshot of what I get.
Is there another tool or another way to get this information?
Thanks!
Alfonso.