Haptic to Audio (synthesize audio file from haptic patterns?)

Apple often represents haptics as audio for presentation purposes in videos and learning resources.

I am curious if:

  1. ...Apple has released, or is willing to release, any tools that may have been used to synthesize audio representing haptic patterns (such as in their WWDC19 Audio-Haptic presentation)?
  2. ...there are any tools currently available that take haptic instructions as input (such as an AHAP file) and output an audio file?
  3. ...there is some low-level access to the signal that drives the Taptic Engine, so that it can be repurposed as an audio stream?
  4. ...you have any other suggestions!

I can imagine some crude solutions: hacking together preexisting synthesizers, fudging a process to convert AHAP to MIDI instructions, and dialing in some synth settings to mimic the behaviour of an actuator, but I'm not too interested in going down that rabbit hole just yet.

Any thoughts? I'm very curious what the process was for the WWDC videos and the audio examples in their documentation...

Thank you!

Hello @slufter, you can create a haptic event from an audio file by registering an audio resource with registerAudioResource(_:options:), then using this audio resource to initialize a CHHapticEvent with init(audioResourceID:parameters:relativeTime:).
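For example, a minimal sketch of that direction, assuming a CHHapticEngine you have already created and started, and a URL to an audio file of your choosing:

import CoreHaptics

// Sketch: wrap an audio file in a haptic event so it can play as part of a pattern.
func makeAudioEvent(engine: CHHapticEngine, audioURL: URL) throws -> CHHapticEvent {
    // Register the audio file with the engine to obtain a resource ID.
    let resourceID = try engine.registerAudioResource(audioURL)

    // Use the registered resource to build an event that plays at the start of the pattern.
    return CHHapticEvent(audioResourceID: resourceID,
                         parameters: [],
                         relativeTime: 0)
}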

Going from a haptic to an audio representation is arguably a more nuanced problem in the sense that a haptic representation can yield an arbitrary number of audio representations, just like the same rhythmic pattern can be present in completely different melodies.

You mentioned going from the Apple Haptic and Audio Pattern (AHAP) file format to MIDI, then using the MIDI messages to drive synthesizers. The missing piece here is that the MIDI representation would still lack a concept of pitch.

Take, for example, the two haptic events below, taken from the Playing a Custom Haptic Pattern from a File sample code project:

{
    "Event": {
        "Time": 0.0,
        "EventType": "HapticTransient",
        "EventParameters": [
            { "ParameterID": "HapticIntensity", "ParameterValue": 0.8 },
            { "ParameterID": "HapticSharpness", "ParameterValue": 0.4 }
        ]
    }
},
{
    "Event": {
        "Time": 0.015,
        "EventType": "HapticContinuous",
        "EventDuration": 0.25,
        "EventParameters": [
            { "ParameterID": "HapticIntensity", "ParameterValue": 0.8 },
            { "ParameterID": "HapticSharpness", "ParameterValue": 0.4 }
        ]
    }
}

Since AHAP is a JSON-like file format, you can use JSONDecoder or JSONSerialization to read an AHAP file. You can then use the time, event type, duration and intensity information for each event, for example, to create instances of AVMIDINoteEvent, which you can then add to an AVMusicTrack using addEvent(_:at:).
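As a rough sketch, a simplified Codable model for the excerpt above could look like the following. The type and property names are my own, and a complete AHAP file contains additional keys (version, metadata, parameter curves, audio events) that this model ignores:

import Foundation

// Simplified model covering only the fields discussed above.
struct AHAPFile: Decodable {
    let Pattern: [AHAPEntry]
}

struct AHAPEntry: Decodable {
    let Event: AHAPEvent?          // entries can also hold parameter curves, ignored here
}

struct AHAPEvent: Decodable {
    let Time: Double
    let EventType: String
    let EventDuration: Double?     // only present for continuous events
    let EventParameters: [AHAPParameter]
}

struct AHAPParameter: Decodable {
    let ParameterID: String
    let ParameterValue: Double
}

// Decode an AHAP file and keep only the event entries.
func loadEvents(from url: URL) throws -> [AHAPEvent] {
    let data = try Data(contentsOf: url)
    return try JSONDecoder().decode(AHAPFile.self, from: data).Pattern.compactMap { $0.Event }
}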

The initializer of AVMIDINoteEvent expects a MIDI key number between 0 and 127. This is something you cannot gather from the AHAP file, but that your implementation might want to provide. You are not restricted to a single MIDI event either, as one haptic event can map to multiple musical events.
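To illustrate that implementation detail, one arbitrary mapping could derive the note velocity from HapticIntensity and the key number from HapticSharpness, reusing the AHAPEvent type sketched above:

import AVFoundation

// One possible haptic-event-to-MIDI mapping; the key and velocity choices are arbitrary.
func midiNote(for event: AHAPEvent) -> AVMIDINoteEvent {
    let intensity = event.EventParameters.first { $0.ParameterID == "HapticIntensity" }?.ParameterValue ?? 0.5
    let sharpness = event.EventParameters.first { $0.ParameterID == "HapticSharpness" }?.ParameterValue ?? 0.5

    let velocity = UInt32((intensity * 127).rounded())
    let key = UInt32((36 + sharpness * 48).rounded())   // map sharpness 0...1 onto keys 36...84
    // AVMusicTimeStamp is in beats; treating seconds as beats only lines up at 60 BPM.
    let duration = AVMusicTimeStamp(event.EventDuration ?? 0.05)

    return AVMIDINoteEvent(channel: 0, key: key, velocity: velocity, duration: duration)
}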

To answer your original question, there is no mechanism in Core Haptics to go from a haptic to an audio or MIDI representation. If you have any suggestions for improving Core Haptics, including requests for sample code or documentation, please use Feedback Assistant to submit an enhancement request. From an API standpoint, you can drive a synthesizer using an AHAP file by converting the JSON-like information in the file to AVMIDINoteEvents, for example, but the MIDI key number you choose for each event is an implementation detail.
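Putting the sketches above together with AVAudioSequencer's track-editing API (added in iOS 16 and macOS 13), the wiring could look something like this, again treating the AHAP times in seconds as beats:

import AVFoundation

// Build a MIDI track from an AHAP file using the helpers sketched above.
func buildTrack(from ahapURL: URL, in sequencer: AVAudioSequencer) throws {
    let track = sequencer.createAndAppendTrack()
    for event in try loadEvents(from: ahapURL) {
        track.addEvent(midiNote(for: event), at: AVMusicTimeStamp(event.Time))
    }
}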

Thank you so much, erm.. Engineer!

Thanks for your thoughts on the missing pieces around MIDI notes. I don't want to spend too much time on this topic, but the long story short is that I would likely use MIDI less for choosing particular notes and more for animating changes to an oscillator's frequency and amplitude envelopes.

Do you happen to have any insight on the process used by Apple themselves for the presentational haptic audio examples? (For example, on this page: https://developer.apple.com/design/human-interface-guidelines/playing-haptics)

Side note: currently, I am physically recording haptic sounds with a contact microphone just to get something.
