Commit c7985a6

committed
Update version to 0.16.0
1 parent 6a9c84e commit c7985a6
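
This commit makes the same mechanical substitutions across 24 files: `CORTEX_VERSION=master` and `:master` image tags become `0.16.0`, and `master`/`0.15` minor-version URLs become `0.16`. As a sketch of how such a bump could be automated (an illustrative helper, not the project's actual release tooling):

```python
import re

OLD_MINOR, NEW_MINOR = "0.15", "0.16"
NEW_FULL = "0.16.0"

def bump(text: str) -> str:
    """Apply this release's rewrite rules to one file's contents (a sketch)."""
    # 1. pin build scripts and docker image tags that tracked master
    text = re.sub(r"\bCORTEX_VERSION=master\b", f"CORTEX_VERSION={NEW_FULL}", text)
    text = re.sub(r"(cortexlabs/[\w-]+):master\b", rf"\g<1>:{NEW_FULL}", text)
    # 2. re-pin minor-version URLs (GitHub tree/blob links, versioned docs links)
    for prefix in ("tree", "blob", "v"):
        text = text.replace(f"/{prefix}/master/", f"/{prefix}/{NEW_MINOR}/")
        text = text.replace(f"/{prefix}/{OLD_MINOR}/", f"/{prefix}/{NEW_MINOR}/")
    return text
```

Running `bump` over each tracked file would reproduce most of the hunks below; a few cases (e.g. `git clone -b master`, the raw `get-cli.sh` URL) would need extra rules.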

File tree: 24 files changed (+66, -69 lines)

README.md

Lines changed: 4 additions & 8 deletions
@@ -2,10 +2,6 @@
 
 <br>
 
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://cortex.dev/install)[docs](https://cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.15/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
-
 <!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
 ![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)
 
@@ -29,7 +25,7 @@
 
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"
 ```
 
 ### Implement your predictor
@@ -149,6 +145,6 @@ Cortex is an open source alternative to serving models with SageMaker or buildin
 ## Examples
 
 <!-- CORTEX_VERSION_README_MINOR x3 -->
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.16/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.

build/build-image.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.16.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do

build/cli.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.16.0
 
 arg1=${1:-""}
 upload="false"

build/push-image.sh

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.16.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do

docs/cluster-management/config.md

Lines changed: 14 additions & 14 deletions
@@ -64,7 +64,7 @@ operator_load_balancer_scheme: internet-facing # must be "internet-facing" or "
 log_group: cortex
 
 # whether to use spot instances in the cluster (default: false)
-# see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://cortex.dev/v/0.16/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 ```
 
@@ -75,17 +75,17 @@ The docker images used by the Cortex cluster can also be overriden, although thi
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
 # docker image paths
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_operator: cortexlabs/operator:0.16.0
+image_manager: cortexlabs/manager:0.16.0
+image_downloader: cortexlabs/downloader:0.16.0
+image_request_monitor: cortexlabs/request-monitor:0.16.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.16.0
+image_metrics_server: cortexlabs/metrics-server:0.16.0
+image_nvidia: cortexlabs/nvidia:0.16.0
+image_fluentd: cortexlabs/fluentd:0.16.0
+image_statsd: cortexlabs/statsd:0.16.0
+image_istio_proxy: cortexlabs/istio-proxy:0.16.0
+image_istio_pilot: cortexlabs/istio-pilot:0.16.0
+image_istio_citadel: cortexlabs/istio-citadel:0.16.0
+image_istio_galley: cortexlabs/istio-galley:0.16.0
 ```
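
All thirteen `image_*` overrides move from the `master` tag to `0.16.0` in lockstep. One way to sanity-check a cluster config after a bump is to confirm every image line carries the same tag; the helper below is hypothetical, not something shipped with Cortex:

```python
def image_tags(config_text: str) -> set[str]:
    """Collect the tag of every image_* override in a cluster config."""
    tags = set()
    for raw in config_text.splitlines():
        line = raw.strip()
        if line.startswith("image_") and ":" in line:
            image = line.split(":", 1)[1].strip()  # e.g. "cortexlabs/operator:0.16.0"
            if ":" in image:
                tags.add(image.rsplit(":", 1)[1])
    return tags
```

A release check can then assert that `image_tags(cfg)` contains exactly one tag, equal to the release version.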

docs/cluster-management/install.md

Lines changed: 4 additions & 3 deletions
@@ -8,7 +8,7 @@
 
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"
 ```
 
 ## Running at scale on AWS
@@ -24,17 +24,18 @@ To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU S
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"
 
 # provision infrastructure on AWS and spin up a cluster
 $ cortex cluster up
 ```
 
 ## Deploy an example
 
+<!-- CORTEX_VERSION_MINOR -->
 ```bash
 # clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.16 https://github.com/cortexlabs/cortex.git
 
 # navigate to the TensorFlow iris classification example
 cd cortex/examples/tensorflow/iris-classifier

docs/cluster-management/update.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ cortex cluster update
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"
 
 # confirm version
 cortex version

docs/deployments/deployment.md

Lines changed: 1 addition & 1 deletion
@@ -63,4 +63,4 @@ deleting my-api
 <!-- CORTEX_VERSION_MINOR -->
 * [Tutorial](../../examples/sklearn/iris-classifier/README.md) provides a step-by-step walkthough of deploying an iris classifier API
 * [CLI documentation](../miscellaneous/cli.md) lists all CLI commands
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples) demonstrate how to deploy models from common ML libraries
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.16/examples) demonstrate how to deploy models from common ML libraries

docs/deployments/exporting.md

Lines changed: 7 additions & 7 deletions
@@ -11,7 +11,7 @@ Here are examples for some common ML libraries:
 The recommended approach is export your PyTorch model with [torch.save()](https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save). Here is PyTorch's documentation on [saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) exports its trained model like this:
+[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.16/examples/pytorch/iris-classifier) exports its trained model like this:
 
 ```python
 torch.save(model.state_dict(), "weights.pth")
@@ -22,7 +22,7 @@ torch.save(model.state_dict(), "weights.pth")
 It may also be possible to export your PyTorch model into the ONNX format using [torch.onnx.export()](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
+For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.16/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
 
 ```python
 placeholder = torch.randn(1, 4)
@@ -50,7 +50,7 @@ A TensorFlow `SavedModel` directory should have this structure:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer):
+Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/0.16/examples/tensorflow/sentiment-analyzer):
 
 ```python
 import tensorflow as tf
@@ -88,14 +88,14 @@ aws s3 cp bert.zip s3://my-bucket/bert.zip
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/iris-classifier) also use the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
+[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.16/examples/tensorflow/iris-classifier) also use the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
 
 ### Other model formats
 
 There are other ways to export Keras or TensorFlow models, and as long as they can be loaded and used to make predictions in Python, they will be supported by Cortex.
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/license-plate-reader) uses this approach.
+For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/0.16/examples/tensorflow/license-plate-reader) uses this approach.
 
 ## Scikit-learn
 
@@ -104,7 +104,7 @@ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https:
 Scikit-learn models are typically exported using `pickle`. Here is [Scikit-learn's documentation](https://scikit-learn.org/stable/modules/model_persistence.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
+[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.16/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
 
 ```python
 pickle.dump(model, open("model.pkl", "wb"))
@@ -157,7 +157,7 @@ model.save_model("model.bin")
 It is also possible to export an XGBoost model to the ONNX format using [onnxmltools](https://github.com/onnx/onnxmltools).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.16/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
 
 ```python
 from onnxmltools.convert import convert_xgboost

docs/deployments/predictors.md

Lines changed: 10 additions & 10 deletions
@@ -70,10 +70,10 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-Many of the [examples](https://github.com/cortexlabs/cortex/tree/master/examples) use the Python Predictor, including all of the PyTorch examples.
+Many of the [examples](https://github.com/cortexlabs/cortex/tree/0.16/examples) use the Python Predictor, including all of the PyTorch examples.
 
 <!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/iris-classifier):
+Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/iris-classifier):
 
 ```python
 import re
@@ -151,7 +151,7 @@ xgboost==1.0.2
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.16/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.16/images/python-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
 
@@ -184,17 +184,17 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.16/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` paramater for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
 
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow) use the TensorFlow Predictor.
+Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/0.16/examples/tensorflow) use the TensorFlow Predictor.
 
 <!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier):
+Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.16/examples/tensorflow/iris-classifier):
 
 ```python
 labels = ["setosa", "versicolor", "virginica"]
@@ -226,7 +226,7 @@ tensorflow==2.1.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.16/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
 
@@ -259,14 +259,14 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.16/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` paramater for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
 
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) uses the ONNX Predictor:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.16/examples/xgboost/iris-classifier) uses the ONNX Predictor:
 
 ```python
 labels = ["setosa", "versicolor", "virginica"]
@@ -303,7 +303,7 @@ requests==2.23.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.16/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.16/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
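
After a bump like this, no pinned reference on the release branch should still say `master`. A post-release check could grep for the three patterns this commit rewrites; the function below is a sketch under that assumption, not part of the repository:

```python
import re

UNPINNED = [
    re.compile(r"CORTEX_VERSION=master"),                   # build scripts
    re.compile(r"cortexlabs/[\w-]+:master"),                # docker image tags
    re.compile(r"cortexlabs/cortex/(?:tree|blob)/master"),  # GitHub links
]

def residual_master_refs(text: str) -> list[str]:
    """Return every 'master' reference that should have been re-pinned."""
    hits: list[str] = []
    for pattern in UNPINNED:
        hits.extend(pattern.findall(text))
    return hits
```

Running this over each changed file and asserting an empty result would catch any reference the release script missed.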

0 commit comments