Before developing your custom NN plugin, we recommend reading the following tutorials:
- Agent explanation
- How to create plugin
- General detailed guide: Integrate any custom neural network
- Easy guide: Integrate a custom PyTorch segmentation neural network
Here is the principal scheme that describes what the agent is and how it works. The main idea behind basic Supervisely plugins is the following: a plugin works directly with the hard drive. It reads input data and settings from an input directory, writes its results to another directory, and prints all logs to STDOUT. This allows you to develop, run, and use any plugin in isolation from the Supervisely ecosystem. The agent, in turn, downloads all the data the plugin needs, parses and submits the plugin's STDOUT, and uploads the plugin's results to the main Supervisely server.
NOTICE: There are also advanced plugins that communicate with the Supervisely instance directly via the API, without the agent, but they are not covered by this tutorial.
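To make this directory-in/directory-out contract concrete, here is a minimal sketch of what a plugin entry point could look like. The exact layout under `/sly_task_data` and the config file name are illustrative assumptions, not the real UNet sources:

```python
# Minimal sketch of the plugin contract described above.
# The paths below are assumptions for illustration.
import json
import os

INPUT_DIR = '/sly_task_data/data'                # the agent downloads input data here
RESULTS_DIR = '/sly_task_data/results'           # the plugin writes its results here
CONFIG_PATH = '/sly_task_data/task_config.json'  # settings passed from the UI

def main():
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    print(f'loaded config: {config}')            # all logs go to STDOUT
    os.makedirs(RESULTS_DIR, exist_ok=True)
    for name in sorted(os.listdir(INPUT_DIR)):
        # ... process a single input item and save the result to RESULTS_DIR ...
        print(f'processed: {name}')              # the agent parses STDOUT for progress

if __name__ == '__main__':
    main()
```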
Let's take a look at an example: the UNet plugin. Its directory structure is the following:
```
<plugin directory>
├── Dockerfile
├── plugin_info.json
├── predefined_run_configs.json
├── README.md
├── src
│   ├── <some sources here, you can use any directory structure you want>
│   ├── deploy.py
│   ├── inference.py
│   └── train.py
└── VERSION
```
Let's look into every file:
- `plugin_info.json` contains a short description of the plugin. This information is packed into a special label of the docker image (the image-building step is described later). After you build the docker image and attach it to the Supervisely platform, this information is shown in the UI. The file contains the following JSON (example):
  ```json
  {
      "title": "<plugin name>",
      "description": "<short description>",
      "type": "architecture"
  }
  ```
  The `type` field defines the plugin type; it is used in the UI. Supervisely supports several plugin types: "dtl", "architecture" (for NN plugins), "custom", "import", and "general_plugin" (for advanced usage).
- `README.md` contains useful information about the plugin that you would like to share with users, for example: how to run the NN, explanations of the configs, prediction examples, etc. This information is packed into a special label of the docker image.
- `Dockerfile` is used to build the docker image (example).
- `predefined_run_configs.json` (example). These configs are available on the "Run plugin" page, for example when you start training or inference. A config is a way to pass settings from the UI to the plugin, e.g. whether to run the NN in full-image or sliding-window manner. You can pass any settings you want, as long as you implement the usage of the fields. This information is packed into a special label of the docker image.
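  As a purely hypothetical sketch (the title and config fields below are illustrative assumptions, not the real UNet configs; see the linked example for those), such a file could look like:

  ```json
  [
    {
      "title": "Sliding window inference",
      "type": "inference",
      "config": {
        "mode": "sliding_window",
        "window_size": 512,
        "stride": 256
      }
    }
  ]
  ```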
- `VERSION` file contains the docker image name and version (example). It is used to name the docker image and define its tag. Advanced users can use it in CI. The format is:

  ```
  <image_name>:<image tag>
  ```
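  For example, a VERSION file matching the image used later in this guide could contain (the exact tag is up to you):

  ```
  supervisely/nn-unet-v2:latest
  ```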
- `train.py`, `inference.py`, and `deploy.py` implement the corresponding running modes. For example, if the file `deploy.py` is missing, the deploy mode will be unavailable in the UI. The list of available modes is packed into docker image labels during the build.
Read how to build a docker image for a plugin and how to add it to the platform here.
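As a quick sketch (assuming the standard docker CLI; the linked guide is the authoritative procedure), you could build the image from the plugin root and tag it with the contents of VERSION:

```bash
# Build the plugin image, tagging it with the <image_name>:<image tag> from VERSION.
docker build -t "$(cat VERSION)" .
```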
Once you have a directory with input data, you can start the development/debugging process. This section explains the structure of the input directories for the training, inference, and deploy modes of the UNet plugin. You can take the examples and use them as a quick start.
NOTICE: these directories can be obtained if you run the agent with the environment variable `DELETE_TASK_DIR_ON_FINISH` set to `false` (the default value is `true`). Then you can go to `<agent directory>/tasks/<task-id>`.
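For example, a sketch of how the variable could be passed when starting the agent container (the image name and the rest of the command are assumptions here; copy the real command, with your token and server address, from the Supervisely UI):

```bash
# Hypothetical sketch: only the -e flag is the point; take the real run command
# from the Supervisely UI and add this flag to it.
docker run -e DELETE_TASK_DIR_ON_FINISH=false supervisely/agent
```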
Let's consider the example with the UNet plugin. Here is the bash command that mounts the necessary directories. In this example we will run PyCharm CE right inside the container and use its GUI for development/debugging. You can change the script to modify this behaviour.
```bash
nvidia-docker run \
    --rm \
    -ti \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v ~/.Xauthority:/root/.Xauthority \
    --entrypoint="" \
    --shm-size='1G' \
    -e PYTHONUNBUFFERED='1' \
    -v ${PWD}/src:/workdir/src \
    -v ${PWD}/../../../supervisely_lib:/workdir/supervisely_lib \
    -v /home/ds/work/examples_sly_task_data/inference/:/sly_task_data \
    -v /home/ds/soft/pycharm:/pycharm \
    -v /home/ds/pycharm-settings/unet_v2:/root/.PyCharmCE2018.2 \
    -v /home/ds/pycharm-settings/unet_v2__idea:/workdir/.idea \
    supervisely/nn-unet-v2:latest \
    bash
```
Just create a script `./run_plugin_dev.sh` with this command and put it in the root directory of your plugin. Before you run the script, execute the command `xhost +` (X11 will not work otherwise; this has to be done only once until you restart the computer).
Let's slice and dice the run command above:
- X11 support (to be able to run a GUI app inside the container; in our case it will be PyCharm CE):

  ```bash
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v ~/.Xauthority:/root/.Xauthority \
  ```
- Override the default entrypoint:

  ```bash
  --entrypoint="" \
  ```
- Mount two directories: the plugin sources and supervisely_lib. This allows you to change files inside the container and have the changes automatically appear on your host machine (because inside the container you are working directly with files from the host). If you do not do this, all changes to the sources will be lost when you exit the container.

  ```bash
  -v ${PWD}/src:/workdir/src \
  -v ${PWD}/../../../supervisely_lib:/workdir/supervisely_lib \
  ```
- Mount the directory with input data:

  ```bash
  -v /home/ds/work/examples_sly_task_data/inference/:/sly_task_data \
  ```
- Mount the directory with the PyCharm app:

  ```bash
  -v /home/ds/soft/pycharm:/pycharm \
  ```
- Mount directories to store the python interpreter index (optional). This speeds up python indexing: you can start and kill the container as many times as you want, the python cache will not be affected, and all available packages will be indexed in seconds (instead of minutes).

  ```bash
  -v /home/ds/pycharm-settings/unet_v2:/root/.PyCharmCE2018.2 \
  -v /home/ds/pycharm-settings/unet_v2__idea:/workdir/.idea \
  ```

  Do not forget to change `PyCharmCE2018.2` to your version.
- Docker image name:

  ```bash
  supervisely/nn-unet-v2:latest \
  ```
- Command to run:

  ```bash
  bash
  ```
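Putting it together, a typical launch from the plugin root looks like this (assuming you saved the command above as `run_plugin_dev.sh` and made it executable):

```bash
cd <plugin directory>
xhost +               # once until reboot, to allow X11 connections from the container
./run_plugin_dev.sh
```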
Once you modify the script and execute it, you will be inside the container's bash.
NOTICE: most of the steps below (98%) have to be done only once, at first launch.
- Run the command to start the PyCharm CE we mounted earlier:

  ```bash
  /pycharm/bin/pycharm.sh
  ```
- The IDE starts. Scroll through the license agreement and press the "Accept" button.
- During the first launch, press "Open" in the welcome window.
- Choose the directory `/workdir` and press "OK".
- Now you see an opened PyCharm project
- Define the correct interpreter for your project:
  - Go to "File" -> "Settings..." -> "Project: workdir".
  - Click the "Project Interpreter" link.
  - Click the settings icon in the top right corner, then the "Add" button. The "Add Python Interpreter" window appears.
  - Choose the "System Interpreter" page on the left, press the "three dots" button on the right, select the path `/usr/local/bin/python3.6`, and press the "OK" button.
- Once you choose the correct interpreter, you will see the list of available packages. Press the "OK" button and wait until interpreter indexing finishes (again, it is a one-time procedure).
- Now you can choose `src/train.py` (or `inference.py`, or `deploy.py`) and start debugging!
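If you prefer the terminal to the IDE, you can also launch a mode directly inside the container (a sketch using the interpreter selected above):

```bash
python3.6 /workdir/src/train.py
```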
If you have any questions about implementing custom NN plugins, or find missing or unclear parts in this guide, please contact tech support and we will help.