Clone the repo
git clone https://github.com/Emerge-Lab/PufferDrive.git
cd PufferDrive
Make a venv
uv venv
Activate the venv
source .venv/bin/activate
Install inih
wget https://github.com/benhoyt/inih/archive/r62.tar.gz
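The wget step only downloads the archive; as an assumption, you still need to extract it (where the extracted sources should live depends on how the build looks for inih):
tar -xzf r62.tar.gz  # extract the inih sources downloaded above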
Inside the venv, install the dependencies
uv pip install -e .
Compile the C code
python setup.py build_ext --inplace --force
To test your setup, you can run
puffer train puffer_drive
Alternative options for working with PufferDrive are found at https://puffer.ai/docs.html
Start a training run
puffer train puffer_drive
To train with PufferDrive, you need to convert JSON files to map binaries. Run the following command with the path to your data folder:
python pufferlib/ocean/drive/drive.py
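The exact argument name for the data path is not shown here; the invocation below is only a sketch with a hypothetical flag, so check drive.py for the argument it actually expects:
python pufferlib/ocean/drive/drive.py --data-dir /path/to/your/json/data  # --data-dir is a placeholder flag, not confirmed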
You can download the WOMD data from Hugging Face in two versions:
- Mini Dataset: GPUDrive_mini contains 1,000 training files and 300 test/validation files
- Full Dataset: GPUDrive contains 100,000 unique scenes
Note: Replace 'GPUDrive_mini' with 'GPUDrive' in your download commands if you want to use the full dataset.
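For example, the mini dataset can be fetched with the Hugging Face CLI. The repo id below assumes the data is published under the EMERGE-lab organization; adjust it to match the actual dataset page:
# repo id is an assumption; point it at the dataset's actual Hugging Face page
huggingface-cli download EMERGE-lab/GPUDrive_mini --repo-type dataset --local-dir data/GPUDrive_mini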
For more training data compatible with PufferDrive, see ScenarioMax. The GPUDrive data format is fully compatible with PufferDrive.
Run the Raylib visualizer on a headless server and export as GIF.
sudo apt update
sudo apt install ffmpeg xvfb
On HPC systems without root privileges, install the packages into your conda environment instead:
conda install -c conda-forge xorg-x11-server-xvfb-cos6-x86_64
conda install -c conda-forge ffmpeg
- ffmpeg: Video processing and conversion
- xvfb: Virtual display for headless environments
- Build the application:
bash scripts/build_ocean.sh drive local
- Run with virtual display:
xvfb-run -s "-screen 0 1280x720x24" ./drive
The -s flag sets up a virtual screen at 1280x720 resolution with 24-bit color depth.
The visualizer will automatically generate a GIF file from the rendered frames.
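If you need a video instead of a GIF, the ffmpeg install from above can convert it; the filename here is only a placeholder for whatever the visualizer writes out:
# output.gif is a placeholder name for the generated GIF
ffmpeg -i output.gif -movflags faststart -pix_fmt yuv420p output.mp4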