iPhone and iPad accelerometer specifications

Hi all, I'm looking for the specifications for the accelerometers used in current iPhone and iPad devices, including the sensitivity and the update rates possible.

Despite the documentation saying it's 100Hz, I've not yet found it to be better than 50Hz on any of iPhone SE, 8 or 11, nor iPad mini or Pros.

Any help would be much appreciated!

Answered by DTS Engineer in 803795022

Hi all, I'm looking for the specifications for the accelerometers used in current iPhone and iPad devices, including the sensitivity and the update rates possible.

We've never published any formal documentation about the exact capabilities of the hardware.

However, I have a word of caution around this:

Despite the documentation saying it's 100Hz, I've not yet found it to be better than 50Hz on any of iPhone SE, 8 or 11, nor iPad mini or Pros.

Having worked with many developers using CoreMotion, I've seen the same pattern happen MANY times:

  1. A basic motion analysis engine is implemented, typically using the real-time API with a target queue. The main thread or another "busy" thread ends up entangled with that algorithm, either because the code is running directly on that thread or because of implicit relationships that aren't as visible.

  2. Testing begins and the results are somewhat positive; however, there are issues with "random" data glitches or other oddities in the data.

  3. The assumption is made that the issue is caused by the sample rate being too low, so the sample rate is increased. Things improve, but the issues don't entirely disappear. Further testing then shows that some devices seem much worse or more inconsistent than others.

  4. The assumption is then made that the underlying problem is caused by differences in the collection hardware, at which point the developer starts asking questions about exactly what our hardware is capable of, etc.

The problem here is that the entire analysis is based on a misdiagnosis of what the problem actually is. The actual problem is that issues with how the data is being processed and analyzed mean that the app isn't getting a consistent stream of data at a standard interval; it's actually getting "clumps" of data (delivered when the thread is available) and then missing data (when the thread is blocked). Most apps I've worked with haven't even tried to account for that, so they're actually analyzing this irregular, "lumpy" data as if it were a steady stream.
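As a rough diagnostic (a minimal sketch, not production code), you can log the gap between consecutive CMDeviceMotion timestamps: at a nominal 100Hz you'd expect roughly 0.01s per sample, and "clumping" shows up as bursts of near-zero gaps followed by much larger ones.

```swift
import CoreMotion

// Diagnostic sketch: log the interval between consecutive samples so
// irregular ("lumpy") delivery is visible directly in the output.
let motionManager = CMMotionManager()
let deliveryQueue = OperationQueue()
var lastTimestamp: TimeInterval?

motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
motionManager.startDeviceMotionUpdates(to: deliveryQueue) { motion, _ in
    guard let motion else { return }
    if let last = lastTimestamp {
        // At a nominal 100 Hz, ~0.01 s is expected. Much larger gaps,
        // or bursts of near-zero gaps, point at the delivery thread,
        // not at the sensor hardware.
        print(String(format: "interval: %.4f s", motion.timestamp - last))
    }
    lastTimestamp = motion.timestamp
}
```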

Note that this is just one example of the various "flavors" this problem can take. For example, many apps receive each event on one thread/queue and then immediately send it to a different thread/queue, which at a high sample rate generates an ENORMOUS amount of wasted scheduler activity.

With all that background, I first want to talk about motion controls. If you're implementing something like motion controls, most apps are better off avoiding the queue delivery system entirely and simply retrieving the most recent sample through CMMotionManager.deviceMotion when they render each frame. Frankly, it's easier to implement and, in my experience, just works "better" than any other approach I've seen.
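A minimal sketch of that polling approach (the class here and where you call it from are illustrative; the only API assumed is CMMotionManager itself):

```swift
import CoreMotion

// Polling sketch: start updates with no handler, then read the latest
// sample from the render loop each frame.
final class MotionControls {
    private let motionManager = CMMotionManager()

    func start() {
        // Without a handler, the manager simply keeps its `deviceMotion`
        // property updated with the most recent sample.
        motionManager.startDeviceMotionUpdates()
    }

    // Call from your per-frame update, e.g. SceneKit's
    // renderer(_:updateAtTime:) or SpriteKit's update(_:).
    func currentAttitude() -> CMAttitude? {
        motionManager.deviceMotion?.attitude
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```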

Moving over to activity analysis (which is where most of these issues come up), the place to start is by understanding what you ACTUALLY need, both in terms of:

  1. The actual samples coming from the device.

  2. How frequently you actually need to PROCESS those samples in order to provide a good user experience.

What many developers miss here is that #2, not #1, is what actually defines how your app should be processing data. As my favorite example of this, a watchOS app that's analyzing swim strokes may need good samples at a high collection rate, but it very likely DOESN'T need to analyze them all that often. The act of looking at your watch while actively swimming is so awkward and disruptive that many users are going to start the tracking and not look at their watch again until the ENTIRE swim session is over. Even an "active" checker simply can't look at their watch every few seconds while still doing something you'd call "swimming".

Now, it's entirely possible that other features (for example, using haptics) might mean that you still need to do relatively "real time" analysis, but that typically means running an analysis 5-20/s, NOT 50-100/s. People simply don't move that "fast", nor are they capable of processing input that quickly.

Understanding this matters because it generally leads to a very different (and simpler) analysis architecture. If you're using CMMotionManager, it typically means that most events are simply added to a collection that accumulates samples, with those samples then being offloaded to a secondary thread for final processing every 20+ samples.
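Here's a rough sketch of that accumulate-then-offload pattern; the batch size of 20 and the analyze(batch:) method are placeholders, not part of any API:

```swift
import CoreMotion

// Sketch of "accumulate on the delivery queue, process in batches elsewhere".
final class MotionAccumulator {
    private let motionManager = CMMotionManager()
    private let deliveryQueue = OperationQueue()                 // serial delivery queue
    private let analysisQueue = DispatchQueue(label: "motion.analysis")
    private var pending: [CMDeviceMotion] = []

    func start() {
        deliveryQueue.maxConcurrentOperationCount = 1
        motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
        motionManager.startDeviceMotionUpdates(to: deliveryQueue) { [weak self] motion, _ in
            guard let self, let motion else { return }
            // The handler does almost nothing: just append the sample.
            self.pending.append(motion)
            if self.pending.count >= 20 {
                let batch = self.pending
                self.pending.removeAll(keepingCapacity: true)
                // Hand the whole batch across threads in one hop,
                // instead of crossing threads once per sample.
                self.analysisQueue.async { self.analyze(batch: batch) }
            }
        }
    }

    private func analyze(batch: [CMDeviceMotion]) {
        // Placeholder for the app's actual analysis.
    }
}
```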

However, my final recommendation here is that most motion analysis apps are better off using "CMSensorRecorder", not any of our "real-time" APIs. That's because:

  • It removes all of the timing/data collection issues I've mentioned above.

  • If there are any issues with your app (like a crash), all the data is still being collected and remains available to your app.

  • Any "real time" functionality you need can be implemented by retrieving sample from CMSensorRecorder at whatever fixed interval you need.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

I have the same question.

see https://phyphox.org/sensordb/

