AHAPpy now ships with a Swift Package that turns audio into Core Haptics patterns at runtime—no .ahap files required. Drop the repository into Xcode via File ▸ Add Packages… and point it at the project root, or reference the Git URL. The package exposes HapticManager, a lightweight helper that analyses WAV files and plays synchronized sound and haptics.
```swift
import AHAPpy

let manager = HapticManager.shared
try? manager.playAudioNamed("Wakeup", withExtension: "wav", mode: .music)

// Custom tuning
var music = HapticManager.Options.musicDefaults
music.intensityMultiplier = 1.2
try manager.playAudio(at: url, mode: .music, options: music)
```

- In Xcode, choose File ▸ Add Packages… and supply the repo URL (https://github.com/samroman3/AHAPpy), or select Add Local… and point to the folder if you have it checked out locally.
- Add the `AHAPpy` product to the target you want to enrich with audio-driven haptics.
- Bundle a WAV (or other Core Audio readable) asset with your app, then call `HapticManager` to play it. The manager will analyse the audio on the fly, create a matching `CHHapticPattern`, and start both audio and haptics together.
- Tune `HapticManager.Options` if you want different window sizes, thresholds, or intensity multipliers per clip or mode. You can provide custom options per call or configure defaults at initialization.
HapticManager exposes a simple API surface:
- `playAudioNamed(_:withExtension:mode:options:)` to play bundle resources.
- `playAudio(at:mode:options:)` for URLs (e.g., user-imported audio).
- `prepareHaptics(for:mode:options:)` if you need the `CHHapticPattern` without triggering playback.
- `stopAll()` to halt both audio and haptics immediately.
The sample iOS app (AHAPpySample) is already wired to the local package. Build it on a haptics-capable device to try:
- Preloaded demos for layered, sequential, and music-driven haptics.
- An audio importer that copies a user-selected file into the sandbox, then runs it through `HapticManager`.
- Playback controls that ensure only one audio/haptic stream runs at a time.
AHAPpy also includes a Python script that converts WAV files to AHAP (Apple Haptic Audio Pattern) format, which can be used to synchronize audio and haptic effects in games, virtual reality experiences, and interactive multimedia. A simple GUI, AHAPpyUI.py, streamlines the conversion process, and a sample iOS app is available with an example implementation.
This project was inspired by Lofelt's NiceVibrations, which offers advanced haptic feedback solutions for various applications. I felt a need for an even simpler implementation of haptic-audio synchronization that focused specifically on iOS development, leading to the creation of this AHAP converter.
You can install the dependencies using pip:

```
pip install numpy librosa pydub
```

(`tkinter`, used by the GUI, ships with most Python installations rather than being installed from pip.)

Or install from the requirements file:

```
pip install -r requirements.txt
```

Run the script with the following command:

```
python generate_ahap.py input_wav [--output_dir OUTPUT_DIR] [--mode MODE] [--split SPLIT]
```

- `input_wav`: Path to the input WAV file.
- `--output_dir`: Directory where the output AHAP files will be saved. Defaults to the same directory as the input WAV file.
- `--mode`: Mode for processing the WAV file. Can be `sfx` (sound effects) or `music`. Defaults to `music`.
- `--split`: Split mode for processing. Options are `none`, `all`, `vocal`, `drums`, `bass`, and `other`. Only applicable when mode is `music`. Defaults to `none`.
Example:
```
python generate_ahap.py example.wav --mode music --split vocal
```

New AHAP files will be saved in the specified output directory, or the same directory as the WAV file if no output directory is specified.
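The command-line interface above maps naturally onto Python's `argparse`. As a sketch only, here is how the documented flags might be declared (names come from the usage string, defaults from the flag descriptions; the actual `generate_ahap.py` may be structured differently):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented CLI: positional input file, optional output
    # directory, processing mode, and stem split (music mode only).
    parser = argparse.ArgumentParser(description="Convert a WAV file to AHAP.")
    parser.add_argument("input_wav", help="Path to the input WAV file.")
    parser.add_argument("--output_dir", default=None,
                        help="Output directory (defaults to the input file's directory).")
    parser.add_argument("--mode", choices=["sfx", "music"], default="music",
                        help="Processing mode.")
    parser.add_argument("--split",
                        choices=["none", "all", "vocal", "drums", "bass", "other"],
                        default="none",
                        help="Stem split; only used when --mode is music.")
    return parser

args = build_parser().parse_args(["example.wav", "--mode", "music", "--split", "vocal"])
print(args.input_wav, args.mode, args.split)
```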
Alternatively, you can use the AHAPpyUI.py file, which provides a minimal GUI for the conversion process.
```
python AHAPpyUI.py
```
- Transient and Continuous Events: The AHAP file contains both transient and continuous haptic events, synchronized with the audio content.
- Dynamic Parameter Calculation: Haptic parameters such as intensity and sharpness are calculated dynamically based on audio features.
- Customizable Parameters: You can adjust thresholds and window sizes to customize the conversion process according to your requirements.
- Music Mode and Splits: In music mode, you can specify different splits such as 'vocal', 'drums', 'bass', and 'other' to process specific components of the audio.
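For reference, an AHAP file is JSON with a `Version` number and a `Pattern` array of events. A minimal sketch of the two event types described above, following Apple's AHAP schema (the intensity and sharpness values here are illustrative, not what the converter emits):

```python
import json

def transient_event(time, intensity, sharpness):
    """A single tap-like haptic event at `time` seconds."""
    return {"Event": {
        "Time": time,
        "EventType": "HapticTransient",
        "EventParameters": [
            {"ParameterID": "HapticIntensity", "ParameterValue": intensity},
            {"ParameterID": "HapticSharpness", "ParameterValue": sharpness},
        ],
    }}

def continuous_event(time, duration, intensity, sharpness):
    """A sustained vibration lasting `duration` seconds."""
    event = transient_event(time, intensity, sharpness)
    event["Event"]["EventType"] = "HapticContinuous"
    event["Event"]["EventDuration"] = duration
    return event

ahap = {"Version": 1.0, "Pattern": [
    transient_event(0.0, 0.8, 0.5),
    continuous_event(0.1, 0.4, 0.6, 0.3),
]}
print(json.dumps(ahap, indent=2))
```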
The conversion process involves the following steps:
- Load Audio: The input WAV file is loaded using PyDub and converted to a NumPy array.
- Feature Extraction: Audio features such as onsets and spectral centroid are extracted using Librosa.
- Event Generation: Transient and continuous haptic events are generated based on the audio features.
- Parameter Calculation: Haptic parameters such as intensity and sharpness are calculated dynamically.
- Output AHAP: The AHAP content is written in JSON format and saved to the same directory as the input file (or the specified output directory).
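The steps above can be sketched end to end. This toy version uses only NumPy and a simple RMS-energy threshold in place of Librosa's onset detector and spectral features; the function name, window size, and threshold are illustrative, not the script's actual implementation:

```python
import json
import numpy as np

def wav_to_ahap_events(samples, sr, window=1024, threshold=2.0):
    """Detect energy bursts in `samples` and emit AHAP transient events.

    A frame whose RMS energy exceeds `threshold` times the clip's mean
    frame energy is treated as an onset; intensity is scaled from the
    frame's energy relative to the loudest frame.
    """
    n_frames = len(samples) // window
    frames = samples[: n_frames * window].reshape(n_frames, window)
    rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1))
    mean_rms = rms.mean() + 1e-9
    events = []
    for i, energy in enumerate(rms):
        if energy > threshold * mean_rms:
            events.append({"Event": {
                "Time": i * window / sr,
                "EventType": "HapticTransient",
                "EventParameters": [
                    {"ParameterID": "HapticIntensity",
                     "ParameterValue": float(min(1.0, energy / rms.max()))},
                ],
            }})
    return {"Version": 1.0, "Pattern": events}

# Synthetic clip: silence with two loud bursts near 0.5 s and 1.0 s.
sr = 8000
signal = np.zeros(2 * sr)
signal[sr // 2 : sr // 2 + 400] = 1.0
signal[sr : sr + 400] = 1.0
ahap = wav_to_ahap_events(signal, sr)
print(json.dumps(ahap)[:60], "... events:", len(ahap["Pattern"]))
```

The real script additionally derives sharpness from the spectral centroid and emits continuous events for sustained energy, but the shape of the pipeline is the same: frame the audio, measure features, map them to timed events, serialize to JSON.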
If AHAPpy has helped you in your own projects I would love to hear about it! If you have any questions, feedback, or suggestions for improvement, feel free to reach out.