Clone the repository:

```bash
git clone https://github.com/pedro-varela1/Automated-Source-Classification-of-Marmoset-Vocalizations.git
cd Automated-Source-Classification-of-Marmoset-Vocalizations
```
Build the container, replacing `<tag>`, `<container_name>`, and `<version>` with the tag, container name, and version you want:

```bash
docker build -t <tag>/<container_name>:<version> .
```
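For example, assuming a Docker Hub username `myuser`, the container name `marmoset-classifier`, and the version `1.0` (all hypothetical values), the build command could look like this:

```bash
# Hypothetical values: substitute your own tag, container name, and version.
docker build -t myuser/marmoset-classifier:1.0 .
```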
Finally, run the app:

```bash
docker container run -p 5000:5000 <tag>/<container_name>:<version>
```
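Continuing the hypothetical example above, the run command publishes the container's port 5000 on port 5000 of the host:

```bash
# Uses the same hypothetical image name built in the previous step.
docker container run -p 5000:5000 myuser/marmoset-classifier:1.0
```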
In your browser, go to http://127.0.0.1:5000/.
The required inputs are:

- Audio in WAV format containing the calls.
- CSV file containing at least three columns (see the example below):
  - `label`: label indicating what type of call it is (only calls with the label `p` will be used);
  - `onset_s`: start time of the call in seconds, relative to the audio;
  - `offset_s`: end time of the call in seconds, relative to the audio.
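A minimal sketch of what such a CSV might look like; the times and the non-`p` label in the last row are hypothetical, and that row is shown only to illustrate an entry that would not be classified:

```csv
label,onset_s,offset_s
p,1.250,1.980
p,3.410,4.120
other,5.000,5.600
```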
The output is a ZIP file containing an image of each call that served as input to the marmoset classification model, as well as a CSV file with the model's predictions added to each row.
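As a quick sketch, assuming the downloaded archive is named `results.zip` (a hypothetical file name), it can be inspected and extracted like this:

```bash
# List the archive contents (file name is hypothetical).
unzip -l results.zip
# Extract the images and the predictions CSV into a local folder.
unzip results.zip -d results/
```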