S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks (Event-based Optical Flow experiments)
This repository contains the official implementation of the event-based optical flow experiments reported in the paper S-TLLR: STDP-Inspired Temporal Local Learning Rule for Spiking Neural Networks, published in Transactions on Machine Learning Research (TMLR). All other experiments can be found at this repository.
Spiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge, particularly for sequential learning tasks. However, training SNNs poses significant challenges due to the need for precise temporal and spatial credit assignment. The backpropagation through time (BPTT) algorithm, while the most widely used method for addressing these issues, incurs a high computational cost due to its temporal dependency. In this work, we propose S-TLLR, a novel three-factor temporal local learning rule inspired by the Spike-Timing Dependent Plasticity (STDP) mechanism, aimed at training deep SNNs on event-based learning tasks. Furthermore, S-TLLR is designed to have low memory and time complexities, independent of the number of time steps, making it suitable for online learning on low-power edge devices. To demonstrate the scalability of our proposed method, we conducted extensive evaluations on event-based datasets spanning a wide range of applications, such as image and gesture recognition, audio classification, and optical flow estimation. S-TLLR achieves comparable accuracy to BPTT (within
Clone this repository using:
git clone https://github.com/mapolinario94/S-TLLR-OpticalFlow.git
Create a conda environment using the environment.yml file:
conda env create -f environment.yml
Activate the conda environment:
conda activate testenv
The data for the outdoor_day and indoor_flying sequences can be found here.
The ground-truth flow used in the paper can also be downloaded here.
Download the *_data.hdf5 and *_gt.hdf5 files from the above link into their respective folders inside /datasets.
Example: download the indoor_flying1_data.hdf5 and indoor_flying1_gt.hdf5 files into the /datasets/indoor_flying1 folder.
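For illustration, after downloading a couple of sequences the datasets folder might look like the layout below (the exact set of sequences is up to you; outdoor_day1 is shown only as an example):
datasets/
    indoor_flying1/
        indoor_flying1_data.hdf5
        indoor_flying1_gt.hdf5
    outdoor_day1/
        outdoor_day1_data.hdf5
        outdoor_day1_gt.hdf5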
Convert the hdf5 files into encoded format using /encoding/split_coding.py.
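A minimal invocation sketch is shown below; this assumes the sequence paths are configured inside the script itself, so check /encoding/split_coding.py for any dataset-path variables or arguments before running it:
python3 encoding/split_coding.py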
The basic syntax is:
For S-TLLR:
python3 main_optical_flow_estimation.py --arch spiking_sfn_stllr --training-mode stllr
For BPTT:
python3 main_optical_flow_estimation.py --arch spiking_sfn --training-mode bptt
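As a fuller illustration, an S-TLLR training run that also sets the general command-line arguments described further below might look like this (the folder names and values are illustrative, not defaults from the paper):
python3 main_optical_flow_estimation.py --arch spiking_sfn_stllr --training-mode stllr --data ./datasets --savedir ./results --workers 8 --evaluate-interval 5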
The basic syntax is:
python3 main_optical_flow_estimation.py --arch spiking_sfn_stllr --training-mode stllr --evaluate --pretrained='checkpoint_path'
--data: specifies the dataset folder /datasets
--savedir: folder for saving training results
--workers: number of workers to use
--render: render flow outputs while evaluating
--evaluate-interval: number of training epochs between evaluations
--pretrained: path to a pretrained model
Other command-line arguments for hyperparameter tuning can be found in the main_optical_flow_estimation.py file.
If you use this code in your research, please cite our paper:
@article{
apolinario2025stllr,
title={S-{TLLR}: {STDP}-inspired Temporal Local Learning Rule for Spiking Neural Networks},
author={Marco Paul E. Apolinario and Kaushik Roy},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2025},
url={https://openreview.net/forum?id=CNaiJRcX84},
note={}
}
This repository is based on Spike-FlowNet by Chankyu Lee. The main modifications include the introduction of a Fully-Spiking FlowNet (FSFN) model with two variants, one supporting temporal local training (S-TLLR) and the other non-local training (BPTT).