The following four files showcase how to tune PyTorch models using the `@metaflow_ray` decorator with `@kubernetes`.
- `gpu_profile.py` contains the `@gpu_profile` decorator, and is available here. It is used in the file `flow_gpu.py`.
- `utils.py` contains helper functions to train and test a custom PyTorch model (an illustrative sketch of such helpers appears after this list).
- `flow_cpu.py` contains a flow that uses `@metaflow_ray` with `@kubernetes` to tune the PyTorch model; a hedged sketch of this pattern appears after this list.
  - This can be run using:
    `python examples/tune_pytorch/flow_cpu.py --no-pylint --environment=pypi run`
- `flow_gpu.py` contains a flow that uses `@metaflow_ray` with `@kubernetes` to tune the PyTorch model. It also passes in the `gpu` requirement to `@kubernetes` and the `run` function from `utils.py`.
  - This can be run using:
    `python examples/tune_pytorch/flow_gpu.py --no-pylint --environment=pypi run`
  - If you are on the Outerbounds platform, you can leverage `fast-bakery` for blazingly fast Docker image builds. This can be used by:
    `python examples/tune_pytorch/flow_gpu.py --no-pylint --environment=fast-bakery run`
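
The helpers in `utils.py` are specific to this example, but purely for illustration, a minimal train/evaluate pair for a PyTorch classifier could look like the sketch below. The names (`train_one_epoch`, `evaluate`) and signatures are hypothetical and are not taken from the actual file.

```python
# Hypothetical stand-ins for the kind of train/test helpers utils.py provides.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def train_one_epoch(model: nn.Module, loader: DataLoader, optimizer, device) -> float:
    """Run one training epoch and return the mean loss."""
    model.train()
    criterion = nn.CrossEntropyLoss()
    total_loss = 0.0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / max(len(loader), 1)


@torch.no_grad()
def evaluate(model: nn.Module, loader: DataLoader, device) -> float:
    """Return classification accuracy on the given loader."""
    model.eval()
    correct, total = 0, 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        preds = model(inputs).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
    return correct / max(total, 1)
```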
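
To give a rough idea of how these flows fit together, here is a hedged sketch of a flow that stacks `@kubernetes` and `@metaflow_ray` on a step and runs a Ray Tune search over a dummy training function. The decorator order, the `num_parallel` transition, the import location of `metaflow_ray`, and the `@pypi` package pins are assumptions, not the code in this repo; see `flow_cpu.py` and `flow_gpu.py` for the actual implementations.

```python
from metaflow import FlowSpec, step, kubernetes, pypi, metaflow_ray


class PytorchTuneSketchFlow(FlowSpec):
    """Illustrative only -- not the code in flow_cpu.py / flow_gpu.py."""

    @step
    def start(self):
        # num_parallel controls how many tasks join the Ray cluster that
        # @metaflow_ray assembles for the next step (assumed behavior).
        self.next(self.tune, num_parallel=2)

    # flow_gpu.py would additionally apply @gpu_profile here and request a
    # GPU from @kubernetes, e.g. @kubernetes(..., gpu=1).
    @pypi(packages={"ray[tune]": "2.9.3", "torch": "2.2.0"})  # placeholder pins
    @kubernetes(cpu=8, memory=16000)
    @metaflow_ray
    @step
    def tune(self):
        from ray import tune

        # @metaflow_ray is assumed to have started / joined a Ray cluster for
        # this step; Ray Tune attaches to it when fit() runs.
        def trainable(config):
            # The real example would call the train/test helpers from
            # utils.py; a fake score keeps this sketch self-contained.
            loss = (config["lr"] - 1e-2) ** 2
            return {"loss": loss}

        tuner = tune.Tuner(
            trainable,
            param_space={"lr": tune.loguniform(1e-4, 1e-1)},
            tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=8),
        )
        results = tuner.fit()
        print("best config:", results.get_best_result().config)
        self.next(self.join)

    @step
    def join(self, inputs):
        # A num_parallel step must be followed by a join step.
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == "__main__":
    PytorchTuneSketchFlow()
```

In the real flows, the trainable would invoke the training and testing code from `utils.py` instead of computing a dummy loss, and `flow_gpu.py` would additionally pass the `gpu` requirement to `@kubernetes` and wrap the step with `@gpu_profile`.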