
Commit 11408d1: Update version to 0.20.0 (parent: 77a36de)

File tree

37 files changed: +113 -112 lines

README.md

Lines changed: 3 additions & 12 deletions
@@ -1,24 +1,15 @@
-<!-- Delete on release branches -->
-<img src='https://s3-us-west-2.amazonaws.com/cortex-public/logo.png' height='42'>
-
-<br>
-
 # Build machine learning APIs
 
 Cortex makes deploying, scaling, and managing machine learning systems in production simple. We believe that developers in any organization should be able to add natural language processing, computer vision, and other machine learning capabilities to their applications without having to worry about infrastructure.
 
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://docs.cortex.dev/install)[documentation](https://docs.cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.19/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)
-
 <br>
 
 # Key features
 
 ### Deploy
 
 * Run Cortex locally or as a production cluster on your AWS account.
-* Deploy TensorFlow, PyTorch, scikit-learn, and other models as realtime APIs or batch APIs.
+* Deploy TensorFlow, PyTorch, Keras, ONNX, XGBoost, scikit-learn, and other models as realtime APIs or batch APIs.
 * Define preprocessing and postprocessing steps in Python.
 
 ### Manage
@@ -52,11 +43,11 @@ Here's how to deploy GPT-2 as a scalable text generation API:
 
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.19/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"
 ```
 
 <!-- CORTEX_VERSION_README_MINOR -->
-See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.19/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
+See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.20/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
 
 ### Learn more
 
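For context, the quickstart this README points at boils down to a short CLI session. A hedged sketch, assuming the GPT-2 example's API is named `text-generator` as in the repo's examples directory:

```bash
# install the 0.20 CLI (same command as in the diff above)
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"

# clone the release branch and deploy the GPT-2 example
git clone -b 0.20 https://github.com/cortexlabs/cortex.git
cd cortex/examples/pytorch/text-generator
cortex deploy

# check the API's status and endpoint
cortex get text-generator
```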

build/build-image.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.20.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do

build/cli.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.20.0
 
 arg1=${1:-""}
 upload="false"

build/lint.sh

Lines changed: 15 additions & 7 deletions
@@ -70,7 +70,8 @@ fi
 # Check for missing license
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./examples/*" \
 ! -path "./dev/config/*" \
 ! -path "./bin/*" \
@@ -94,7 +95,8 @@ if [ "$is_release_branch" = "true" ]; then
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./build/lint.sh" \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./docs/contributing/development.md" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
@@ -112,7 +114,8 @@ if [ "$is_release_branch" = "true" ]; then
 ! -path "./build/lint.sh" \
 ! -path "./dev/update_version_comments.sh" \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
 ! -name ".*" \
@@ -156,6 +159,7 @@ else
 output=$(cd "$ROOT/examples" && find . -type f \
 ! -path "./README.md" \
 ! -path "./utils/*" \
+! -path "**/__pycache__/*" \
 ! -name "*.json" \
 ! -name "*.txt" \
 ! -name ".*" \
@@ -170,7 +174,8 @@ fi
 # Check for trailing whitespace
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
 ! -name ".*" \
@@ -184,7 +189,8 @@ fi
 # Check for missing new line at end of file
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
 ! -name ".*" \
@@ -198,7 +204,8 @@ fi
 # Check for multiple new lines at end of file
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
 ! -name ".*" \
@@ -212,7 +219,8 @@ fi
 # Check for new line(s) at beginning of file
 output=$(cd "$ROOT" && find . -type f \
 ! -path "./vendor/*" \
-! -path "./.vscode/*" \
+! -path "**/.vscode/*" \
+! -path "**/__pycache__/*" \
 ! -path "./bin/*" \
 ! -path "./.git/*" \
 ! -name ".*" \

build/push-image.sh

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.20.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do

docs/cluster-management/config.md

Lines changed: 21 additions & 21 deletions
@@ -39,7 +39,7 @@ instance_volume_type: gp2
 
 # whether the subnets used for EC2 instances should be public or private (default: "public")
 # if "public", instances will be assigned public IP addresses; if "private", instances won't have public IPs and a NAT gateway will be created to allow outgoing network requests
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.20/miscellaneous/security#private-cluster for more information
 subnet_visibility: public # must be "public" or "private"
 
 # whether to include a NAT gateway with the cluster (a NAT gateway is necessary when using private subnets)
@@ -48,12 +48,12 @@ nat_gateway: none # must be "none", "single", or "highly_available" (highly_ava
 
 # whether the API load balancer should be internet-facing or internal (default: "internet-facing")
 # note: if using "internal", APIs will still be accessible via the public API Gateway endpoint unless you also disable API Gateway in your API's configuration (if you do that, you must configure VPC Peering to connect to your APIs)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.20/miscellaneous/security#private-cluster for more information
 api_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether the operator load balancer should be internet-facing or internal (default: "internet-facing")
-# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/master/guides/vpc-peering)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/0.20/guides/vpc-peering)
+# see https://docs.cortex.dev/v/0.20/miscellaneous/security#private-cluster for more information
 operator_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether to disable API gateway cluster-wide
@@ -68,10 +68,10 @@ log_group: cortex
 tags: # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://docs.cortex.dev/v/0.20/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://docs.cortex.dev/v/master/guides/custom-domain for instructions on how to set up a custom domain
+# see https://docs.cortex.dev/v/0.20/guides/custom-domain for instructions on how to set up a custom domain
 ssl_certificate_arn:
 
 # primary CIDR block for the cluster's VPC (default: 192.168.0.0/16)
@@ -85,19 +85,19 @@ The docker images used by the Cortex cluster can also be overridden, although th
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
 # docker image paths
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_inferentia: cortexlabs/inferentia:master
-image_neuron_rtd: cortexlabs/neuron-rtd:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_operator: cortexlabs/operator:0.20.0
+image_manager: cortexlabs/manager:0.20.0
+image_downloader: cortexlabs/downloader:0.20.0
+image_request_monitor: cortexlabs/request-monitor:0.20.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.20.0
+image_metrics_server: cortexlabs/metrics-server:0.20.0
+image_inferentia: cortexlabs/inferentia:0.20.0
+image_neuron_rtd: cortexlabs/neuron-rtd:0.20.0
+image_nvidia: cortexlabs/nvidia:0.20.0
+image_fluentd: cortexlabs/fluentd:0.20.0
+image_statsd: cortexlabs/statsd:0.20.0
+image_istio_proxy: cortexlabs/istio-proxy:0.20.0
+image_istio_pilot: cortexlabs/istio-pilot:0.20.0
+image_istio_citadel: cortexlabs/istio-citadel:0.20.0
+image_istio_galley: cortexlabs/istio-galley:0.20.0
 ```
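These settings, including the image overrides above, live in a cluster configuration file that is passed to the cluster commands. A hedged usage sketch, assuming the standard `--config` flag (the file name is arbitrary):

```bash
# cluster.yaml holds overrides such as subnet_visibility, spot, and the image_* paths
cortex cluster up --config cluster.yaml

# later, apply configuration changes to the running cluster
cortex cluster configure --config cluster.yaml
```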

docs/cluster-management/install.md

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@
 
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"
 ```
 
 You must have [Docker](https://docs.docker.com/install) installed to run Cortex locally or to create a cluster on AWS.
@@ -14,7 +14,7 @@ You must have [Docker](https://docs.docker.com/install) installed to run Cortex
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.20 https://github.com/cortexlabs/cortex.git
 
 # navigate to the Pytorch text generator example
 cd cortex/examples/pytorch/text-generator
@@ -60,6 +60,6 @@ You can now run the same commands shown above to deploy the text generator to AW
 
 <!-- CORTEX_VERSION_MINOR -->
 * Try the [tutorial](../../examples/pytorch/text-generator/README.md) to learn more about how to use Cortex.
-* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/master/examples).
+* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.20/examples).
 * See our [exporting guide](../guides/exporting.md) for how to export your model to use in an API.
 * See [uninstall](uninstall.md) if you'd like to spin down your cluster.

docs/cluster-management/update.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ cortex cluster configure
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"
 
 # confirm version
 cortex version
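Read together with its context, this hunk sits inside the version-upgrade path. A sketch of the full sequence; the final `cortex cluster up` is an assumed step, not shown in this hunk:

```bash
cortex cluster down    # tear down the cluster running the old version

# update your CLI to 0.20 (the command changed in this hunk)
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"

cortex version         # confirm the CLI now reports 0.20
cortex cluster up      # recreate the cluster on the new version (assumed step)
```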

docs/deployments/batch-api/deployment.md

Lines changed: 1 addition & 1 deletion
@@ -122,4 +122,4 @@ deleting my-api
 <!-- CORTEX_VERSION_MINOR -->
 * [Tutorial](../../../examples/batch/image-classifier/README.md) provides a step-by-step walkthrough of deploying an image classification batch API
 * [CLI documentation](../../miscellaneous/cli.md) lists all CLI commands
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples/batch) demonstrate how to deploy models from common ML libraries
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.20/examples/batch) demonstrate how to deploy models from common ML libraries

docs/deployments/batch-api/predictors.md

Lines changed: 8 additions & 8 deletions
@@ -95,7 +95,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier).
+You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/0.20/examples/batch/image-classifier).
 
 ### Pre-installed packages
 
@@ -166,7 +166,7 @@ torchvision==0.6.1
 ```
 
 <!-- CORTEX_VERSION_MINOR x3 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-inf/Dockerfile) (for Inferentia).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/python-predictor-inf/Dockerfile) (for Inferentia).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -223,7 +223,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.20/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
@@ -232,7 +232,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow).
+You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/0.20/examples/batch/tensorflow).
 
 ### Pre-installed packages
 
@@ -253,7 +253,7 @@ tensorflow==2.3.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -310,7 +310,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.20/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
@@ -319,7 +319,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx).
+You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/0.20/examples/batch/onnx).
 
 ### Pre-installed packages
 
@@ -337,6 +337,6 @@ requests==2.24.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.20/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
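To ground the `tensorflow_client` prose above, here is a minimal sketch of a Batch API TensorFlowPredictor. The constructor and `predict()` signatures are assumptions based on the interface described in these docs, and the payload fields and `"text-generator"` model name are illustrative:

```python
# a sketch, not the documented interface verbatim; signatures are assumed
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable so predict() can reach TF Serving
        self.client = tensorflow_client

    def predict(self, payload):
        # preprocess the JSON payload for the model
        model_input = {"text": [item["text"] for item in payload]}
        # with multiple models in the `models` field, pass the model name too
        return self.client.predict(model_input, "text-generator")
```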
