In this repo, we analyze the open-source human action recognition dataset (UTD-MHAD) from UT Dallas.
- Download and install Anaconda
- Clone this repo
- Set up the conda environment

```sh
conda env create -f environment.yml
```

- Activate the environment

```sh
source activate mmha
```

- Download the data from here and save it under a folder named `data` at the project root
- Change directory to `notebooks` and launch a Jupyter notebook

```sh
cd notebooks
jupyter notebook
```
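Before running the notebooks, a quick sanity check from a notebook cell might look like the sketch below; it assumes `notebooks/` sits one level below the project root, so the data folder is reachable at `../data`:

```python
from pathlib import Path

# Assumes notebooks/ is one level below the project root, so ../data
# is the folder created in the setup step above.
data_dir = Path("..") / "data"
assert data_dir.exists(), "download the dataset into a 'data' folder first"

# Peek at a few file names to confirm the download.
print(sorted(p.name for p in data_dir.iterdir())[:5])
```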
The naming convention of a file is `ai_sj_tk_modality`, where `ai` stands for action number i, `sj` for subject number j, `tk` for trial k, and `modality` is one of the four data modalities (color, depth, skeleton, inertial).
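For illustration, a minimal sketch of parsing that convention in Python (the `parse_name` helper is hypothetical, not part of the repo):

```python
import re

# Matches file stems such as "a1_s2_t3_depth".
NAME_RE = re.compile(r"a(\d+)_s(\d+)_t(\d+)_(color|depth|skeleton|inertial)")

def parse_name(stem: str) -> dict:
    """Split a UTD-MHAD file stem into action, subject, trial, modality."""
    m = NAME_RE.fullmatch(stem)
    if m is None:
        raise ValueError(f"unexpected file name: {stem!r}")
    action, subject, trial, modality = m.groups()
    return {
        "action": int(action),
        "subject": int(subject),
        "trial": int(trial),
        "modality": modality,
    }

print(parse_name("a1_s2_t3_depth"))
# {'action': 1, 'subject': 2, 'trial': 3, 'modality': 'depth'}
```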
The depth data is a (240, 320, num_frames) matrix (Height × Width × Frames, e.g. 55 frames); each frame is a single depth image.
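A minimal loading sketch, assuming the depth `.mat` file stores its array under the key `d_depth` as in the UTD-MHAD distribution (the file name is illustrative):

```python
import scipy.io

# Load one depth recording; "d_depth" is the MATLAB variable name
# used in the UTD-MHAD .mat files.
mat = scipy.io.loadmat("../data/a1_s1_t1_depth.mat")
depth = mat["d_depth"]  # shape (height, width, num_frames), e.g. (240, 320, 55)
print(depth.shape, depth.dtype)

first_frame = depth[:, :, 0]  # a single 240 x 320 depth image
```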
Each skeleton file is a 20 × 3 × num_frames matrix; each row of a skeleton frame holds the three spatial coordinates of one joint (see the sketch after the joint list below).
The skeleton joint order in the UTD-MHAD dataset:
- head
- shoulder_center
- spine
- hip_center
- left_shoulder
- left_elbow
- left_wrist
- left_hand
- right_shoulder
- right_elbow
- right_wrist
- right_hand
- left_hip
- left_knee
- left_ankle
- left_foot
- right_hip
- right_knee
- right_ankle
- right_foot
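Putting that order to use, a minimal sketch that indexes a joint trajectory; it assumes the skeleton `.mat` file stores its array under the key `d_skel`, as in the UTD-MHAD distribution, and the file name is illustrative:

```python
import scipy.io

# Joint order copied from the list above; indices here are 0-based.
JOINTS = [
    "head", "shoulder_center", "spine", "hip_center",
    "left_shoulder", "left_elbow", "left_wrist", "left_hand",
    "right_shoulder", "right_elbow", "right_wrist", "right_hand",
    "left_hip", "left_knee", "left_ankle", "left_foot",
    "right_hip", "right_knee", "right_ankle", "right_foot",
]

# "d_skel" is the MATLAB variable name used in the UTD-MHAD skeleton files.
skel = scipy.io.loadmat("../data/a1_s1_t1_skeleton.mat")["d_skel"]
print(skel.shape)  # (20, 3, num_frames)

# Trajectory of the right hand: (x, y, z) per frame.
right_hand = skel[JOINTS.index("right_hand"), :, :]  # shape (3, num_frames)
```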