Commit bc9d589: Update version to 0.21.0

1 parent 345cf44 commit bc9d589

38 files changed, +96 -109 lines

README.md
Lines changed: 2 additions & 15 deletions

````diff
@@ -1,15 +1,3 @@
-<!-- Delete on release branches -->
-<img src='https://s3-us-west-2.amazonaws.com/cortex-public/logo.png' height='42'>
-
-<br>
-
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-
-[install](https://docs.cortex.dev/install)[documentation](https://docs.cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.20/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)
-
-<br>
-
 # Model serving at scale
 
 ### Deploy
@@ -142,10 +130,9 @@ cortex is ready!
 
 ## Get started
 
-<!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"
+pip install cortex
 ```
 
 <!-- CORTEX_VERSION_README_MINOR -->
-See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.20/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
+See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.21/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
````

build/build-image.sh
Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.21.0
 
 image=$1
 dir="${ROOT}/images/${image/-slim}"
```
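The `dir=` line above relies on bash pattern substitution: `${image/-slim}` removes the first occurrence of `-slim` from the image name, so a slim image variant builds from the same `images/` directory as its full counterpart. A quick standalone illustration (the image names here are only examples):

```shell
image="python-predictor-cpu-slim"
echo "${image/-slim}"   # prints python-predictor-cpu

# names without the suffix pass through unchanged
image="operator"
echo "${image/-slim}"   # prints operator
```

Note that this is a bash extension, not POSIX `sh`; the scripts' `bash` shebang and `set -euo pipefail` prologue assume bash anyway.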

build/cli.sh
Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.21.0
 
 arg1=${1:-""}
 upload="false"
```

build/push-image.sh
Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.21.0
 
 image=$1
 
```

cli/cluster/errors.go
Lines changed: 1 addition & 1 deletion

```diff
@@ -55,7 +55,7 @@ func ErrorFailedToConnectOperator(originalError error, envName string, operatorU
 	msg += "\nif you have a cluster running:\n"
 	msg += fmt.Sprintf("    → run `cortex cluster info --env %s` to update your environment (include `--config <cluster.yaml>` if you have a cluster configuration file)\n", envName)
 	// CORTEX_VERSION_MINOR
-	msg += "    → if you set `operator_load_balancer_scheme: internal` in your cluster configuration file, your CLI must run from within a VPC that has access to your cluster's VPC (see https://docs.cortex.dev/v/master/guides/vpc-peering)\n"
+	msg += "    → if you set `operator_load_balancer_scheme: internal` in your cluster configuration file, your CLI must run from within a VPC that has access to your cluster's VPC (see https://docs.cortex.dev/v/0.21/guides/vpc-peering)\n"
 
 	return errors.WithStack(&errors.Error{
 		Kind: ErrFailedToConnectOperator,
```

docs/cluster-management/config.md
Lines changed: 19 additions & 19 deletions

````diff
@@ -39,7 +39,7 @@ instance_volume_type: gp2
 
 # whether the subnets used for EC2 instances should be public or private (default: "public")
 # if "public", instances will be assigned public IP addresses; if "private", instances won't have public IPs and a NAT gateway will be created to allow outgoing network requests
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-cluster for more information
 subnet_visibility: public  # must be "public" or "private"
 
 # whether to include a NAT gateway with the cluster (a NAT gateway is necessary when using private subnets)
@@ -48,12 +48,12 @@ nat_gateway: none  # must be "none", "single", or "highly_available" (highly_ava
 
 # whether the API load balancer should be internet-facing or internal (default: "internet-facing")
 # note: if using "internal", APIs will still be accessible via the public API Gateway endpoint unless you also disable API Gateway in your API's configuration (if you do that, you must configure VPC Peering to connect to your APIs)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-cluster for more information
 api_load_balancer_scheme: internet-facing  # must be "internet-facing" or "internal"
 
 # whether the operator load balancer should be internet-facing or internal (default: "internet-facing")
-# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/master/guides/vpc-peering)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-operator for more information
+# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/0.21/guides/vpc-peering)
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-operator for more information
 operator_load_balancer_scheme: internet-facing  # must be "internet-facing" or "internal"
 
 # whether to disable API gateway cluster-wide
@@ -65,10 +65,10 @@ api_gateway: public  # must be "public" or "none"
 tags:  # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://docs.cortex.dev/v/0.21/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://docs.cortex.dev/v/master/guides/custom-domain for instructions on how to set up a custom domain
+# see https://docs.cortex.dev/v/0.21/guides/custom-domain for instructions on how to set up a custom domain
 ssl_certificate_arn:
 
 # primary CIDR block for the cluster's VPC (default: 192.168.0.0/16)
@@ -82,17 +82,17 @@ The docker images used by the Cortex cluster can also be overridden, although th
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
 # docker image paths
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_inferentia: cortexlabs/inferentia:master
-image_neuron_rtd: cortexlabs/neuron-rtd:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
+image_operator: cortexlabs/operator:0.21.0
+image_manager: cortexlabs/manager:0.21.0
+image_downloader: cortexlabs/downloader:0.21.0
+image_request_monitor: cortexlabs/request-monitor:0.21.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.21.0
+image_metrics_server: cortexlabs/metrics-server:0.21.0
+image_inferentia: cortexlabs/inferentia:0.21.0
+image_neuron_rtd: cortexlabs/neuron-rtd:0.21.0
+image_nvidia: cortexlabs/nvidia:0.21.0
+image_fluentd: cortexlabs/fluentd:0.21.0
+image_statsd: cortexlabs/statsd:0.21.0
+image_istio_proxy: cortexlabs/istio-proxy:0.21.0
+image_istio_pilot: cortexlabs/istio-pilot:0.21.0
 ```
````

docs/cluster-management/install.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -16,7 +16,7 @@ See [here](../miscellaneous/cli.md#install-cortex-cli-without-python-client) to
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.21 https://github.com/cortexlabs/cortex.git
 
 # navigate to the Pytorch text generator example
 cd cortex/examples/pytorch/text-generator
@@ -87,6 +87,6 @@ You can now run the same commands shown above to deploy the text generator to AW
 
 <!-- CORTEX_VERSION_MINOR -->
 * Try the [tutorial](../../examples/pytorch/text-generator/README.md) to learn more about how to use Cortex.
-* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/master/examples).
+* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.21/examples).
 * See our [exporting guide](../guides/exporting.md) for how to export your model to use in an API.
 * See [uninstall](uninstall.md) if you'd like to spin down your cluster.
````

docs/cluster-management/update.md
Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@ cortex cluster configure
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.21/get-cli.sh)"
 
 # confirm version
 cortex version
```

docs/deployments/batch-api/deployment.md
Lines changed: 1 addition & 1 deletion

```diff
@@ -122,4 +122,4 @@ deleting my-api
 <!-- CORTEX_VERSION_MINOR -->
 * [Tutorial](../../../examples/batch/image-classifier/README.md) provides a step-by-step walkthrough of deploying an image classification batch API
 * [CLI documentation](../../miscellaneous/cli.md) lists all CLI commands
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples/batch) demonstrate how to deploy models from common ML libraries
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch) demonstrate how to deploy models from common ML libraries
```

docs/deployments/batch-api/predictors.md
Lines changed: 8 additions & 8 deletions

````diff
@@ -95,7 +95,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier).
+You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/image-classifier).
 
 ### Pre-installed packages
 
@@ -166,7 +166,7 @@ torchvision==0.6.1
 ```
 
 <!-- CORTEX_VERSION_MINOR x3 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-inf/Dockerfile) (for Inferentia).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-inf/Dockerfile) (for Inferentia).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -223,7 +223,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.21/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
````
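The call pattern described in the changed docs above can be sketched with a stub standing in for the client that Cortex injects at runtime. Only the `predict(payload, model_name)` signature and the save-as-instance-variable convention come from the docs; the stub class and model name are hypothetical:

```python
class StubTensorFlowClient:
    """Hypothetical stand-in for the TensorFlowClient Cortex passes to the constructor."""

    def predict(self, payload, model_name=None):
        # a real client forwards the payload to TensorFlow Serving;
        # this stub just echoes which model would have been used
        return {"model": model_name or "default", "input": payload}


class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable, as the docs recommend
        self.client = tensorflow_client
        self.config = config

    def predict(self, payload):
        # with multiple models defined, pass the model name as the second argument
        return self.client.predict(payload, "text-generator")


predictor = TensorFlowPredictor(StubTensorFlowClient(), config={})
print(predictor.predict({"text": "hello"}))
```

The ONNXPredictor changes below follow the same shape, with `onnx_client.predict(model_input, model_name)` in place of the TensorFlow client.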
````diff
@@ -232,7 +232,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow).
+You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/tensorflow).
 
 ### Pre-installed packages
 
@@ -253,7 +253,7 @@ tensorflow==2.3.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -310,7 +310,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.21/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
@@ -319,7 +319,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx).
+You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/onnx).
 
 ### Pre-installed packages
 
@@ -337,6 +337,6 @@ requests==2.24.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
````
