
Commit 990a161

Update version to 0.19.0
1 parent 7d45c8e commit 990a161

33 files changed: +97 additions, -104 deletions


README.md

Lines changed: 5 additions & 14 deletions
@@ -1,31 +1,21 @@
-<!-- Delete on release branches -->
-<img src='https://s3-us-west-2.amazonaws.com/cortex-public/logo.png' height='42'>
-
-<br>
-
 # Build machine learning APIs
 
 Cortex makes deploying, scaling, and managing machine learning systems in production simple. We believe that developers in any organization should be able to add natural language processing, computer vision, and other machine learning capabilities to their applications without having to worry about infrastructure.
 
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://docs.cortex.dev/install)[documentation](https://docs.cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.18/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)
-
-<br>
-
 # Key features
 
 ### Deploy
 
 * Run Cortex locally or as a production cluster on your AWS account.
-* Deploy TensorFlow, PyTorch, scikit-learn, and other models as web APIs.
+* Deploy TensorFlow, PyTorch, scikit-learn, and other models as realtime APIs or batch APIs.
 * Define preprocessing and postprocessing steps in Python.
 
 ### Manage
 
 * Update APIs with no downtime.
 * Stream logs from your APIs to your CLI.
 * Monitor API performance and track predictions.
+* Run A/B tests.
 
 ### Scale
 
@@ -51,11 +41,12 @@ Here's how to deploy GPT-2 as a scalable text generation API:
 
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.18/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.19/get-cli.sh)"
 ```
 
 <!-- CORTEX_VERSION_README_MINOR -->
-See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.18/examples) or bring your own models to build [custom APIs](https://docs.cortex.dev/guides/exporting).
+See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.19/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
+
 
 ### Learn more
 

build/build-image.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.19.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do

build/cli.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.19.0
 
 arg1=${1:-""}
 upload="false"

build/push-image.sh

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.19.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do
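The same one-line change (`CORTEX_VERSION=master` → `CORTEX_VERSION=0.19.0`) is repeated by hand across all three build scripts. A bump like this can be scripted; the snippet below is a hypothetical sketch (the helper and the demo file are not part of the repo) of doing it with `sed`:

```shell
#!/usr/bin/env bash
set -euo pipefail

NEW_VERSION="0.19.0"

# Demo setup: a stand-in build script carrying the pre-release value.
workdir="$(mktemp -d)"
printf 'set -euo pipefail\nCORTEX_VERSION=master\nslim="false"\n' > "$workdir/build-image.sh"

# Rewrite only the CORTEX_VERSION assignment line in each script,
# keeping a .bak copy (the -i.bak form works on both GNU and BSD sed).
for script in "$workdir"/*.sh; do
  sed -i.bak "s/^CORTEX_VERSION=.*/CORTEX_VERSION=${NEW_VERSION}/" "$script"
done

grep '^CORTEX_VERSION=' "$workdir/build-image.sh"
```

Anchoring the pattern with `^CORTEX_VERSION=` keeps the substitution from touching unrelated mentions of `master` elsewhere in a script.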

docs/cluster-management/config.md

Lines changed: 21 additions & 21 deletions
@@ -47,7 +47,7 @@ instance_volume_type: gp2
 
 # whether the subnets used for EC2 instances should be public or private (default: "public")
 # if "public", instances will be assigned public IP addresses; if "private", instances won't have public IPs and a NAT gateway will be created to allow outgoing network requests
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.19/miscellaneous/security#private-cluster for more information
 subnet_visibility: public # must be "public" or "private"
 
 # whether to include a NAT gateway with the cluster (a NAT gateway is necessary when using private subnets)
@@ -56,12 +56,12 @@ nat_gateway: none # must be "none", "single", or "highly_available" (highly_ava
 
 # whether the API load balancer should be internet-facing or internal (default: "internet-facing")
 # note: if using "internal", APIs will still be accessible via the public API Gateway endpoint unless you also disable API Gateway in your API's configuration (if you do that, you must configure VPC Peering to connect to your APIs)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.19/miscellaneous/security#private-cluster for more information
 api_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether the operator load balancer should be internet-facing or internal (default: "internet-facing")
-# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/master/guides/vpc-peering)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/0.19/guides/vpc-peering)
+# see https://docs.cortex.dev/v/0.19/miscellaneous/security#private-cluster for more information
 operator_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether to disable API gateway cluster-wide
@@ -76,10 +76,10 @@ log_group: cortex
 tags: # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://docs.cortex.dev/v/0.19/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://docs.cortex.dev/v/master/guides/custom-domain for instructions on how to set up a custom domain
+# see https://docs.cortex.dev/v/0.19/guides/custom-domain for instructions on how to set up a custom domain
 ssl_certificate_arn:
 ```
 
@@ -90,19 +90,19 @@ The docker images used by the Cortex cluster can also be overridden, although th
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
 # docker image paths
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_inferentia: cortexlabs/inferentia:master
-image_neuron_rtd: cortexlabs/neuron-rtd:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_operator: cortexlabs/operator:0.19.0
+image_manager: cortexlabs/manager:0.19.0
+image_downloader: cortexlabs/downloader:0.19.0
+image_request_monitor: cortexlabs/request-monitor:0.19.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.19.0
+image_metrics_server: cortexlabs/metrics-server:0.19.0
+image_inferentia: cortexlabs/inferentia:0.19.0
+image_neuron_rtd: cortexlabs/neuron-rtd:0.19.0
+image_nvidia: cortexlabs/nvidia:0.19.0
+image_fluentd: cortexlabs/fluentd:0.19.0
+image_statsd: cortexlabs/statsd:0.19.0
+image_istio_proxy: cortexlabs/istio-proxy:0.19.0
+image_istio_pilot: cortexlabs/istio-pilot:0.19.0
+image_istio_citadel: cortexlabs/istio-citadel:0.19.0
+image_istio_galley: cortexlabs/istio-galley:0.19.0
 ```
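Beyond the image tags, every `docs.cortex.dev/v/master/...` link in the config comments is repointed to `/v/0.19/`. A hedged sketch of automating that substitution over a docs tree (the temp directory and demo file are invented for illustration, not from the repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo setup: a stand-in docs file with a master-pinned link.
docs="$(mktemp -d)"
echo '# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration' > "$docs/config.md"

# Repoint every versioned docs URL from the master branch to the 0.19 release.
# Using '#' as the sed delimiter avoids escaping the slashes in the URL.
find "$docs" -name '*.md' -exec \
  sed -i.bak 's#docs.cortex.dev/v/master/#docs.cortex.dev/v/0.19/#g' {} +

grep 'v/0.19/' "$docs/config.md"
```

Scoping the pattern to `docs.cortex.dev/v/master/` leaves other uses of the word `master` (for example, links into third-party repos) untouched.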

docs/cluster-management/install.md

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@
 
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.19/get-cli.sh)"
 ```
 
 You must have [Docker](https://docs.docker.com/install) installed to run Cortex locally or to create a cluster on AWS.
@@ -14,7 +14,7 @@ You must have [Docker](https://docs.docker.com/install) installed to run Cortex
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.19 https://github.com/cortexlabs/cortex.git
 
 # navigate to the Pytorch text generator example
 cd cortex/examples/pytorch/text-generator
@@ -60,6 +60,6 @@ You can now run the same commands shown above to deploy the text generator to AW
 
 <!-- CORTEX_VERSION_MINOR -->
 * Try the [tutorial](../../examples/pytorch/text-generator/README.md) to learn more about how to use Cortex.
-* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/master/examples).
+* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.19/examples).
 * See our [exporting guide](../guides/exporting.md) for how to export your model to use in an API.
 * See [uninstall](uninstall.md) if you'd like to spin down your cluster.

docs/cluster-management/update.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ cortex cluster configure
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.19/get-cli.sh)"
 
 # confirm version
 cortex version

docs/deployments/batch-api/deployment.md

Lines changed: 1 addition & 1 deletion
@@ -122,4 +122,4 @@ deleting my-api
 <!-- CORTEX_VERSION_MINOR -->
 * [Tutorial](../../../examples/batch/image-classifier/README.md) provides a step-by-step walkthrough of deploying an image classification batch API
 * [CLI documentation](../../miscellaneous/cli.md) lists all CLI commands
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples/batch) demonstrate how to deploy models from common ML libraries
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.19/examples/batch) demonstrate how to deploy models from common ML libraries

docs/deployments/batch-api/predictors.md

Lines changed: 8 additions & 8 deletions
@@ -79,7 +79,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier).
+You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/0.19/examples/batch/image-classifier).
 
 ### Pre-installed packages
 
@@ -148,7 +148,7 @@ torchvision==0.4.2
 ```
 
 <!-- CORTEX_VERSION_MINOR x3 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-inf/Dockerfile) (for Inferentia).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/python-predictor-inf/Dockerfile) (for Inferentia).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -187,7 +187,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.19/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
@@ -196,7 +196,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow).
+You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/0.19/examples/batch/tensorflow).
 
 ### Pre-installed packages
 
@@ -217,7 +217,7 @@ tensorflow==2.1.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
@@ -256,7 +256,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.19/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
@@ -265,7 +265,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx).
+You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/0.19/examples/batch/onnx).
 
 ### Pre-installed packages
 
@@ -283,6 +283,6 @@ requests==2.23.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.19/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).

docs/deployments/inferentia.md

Lines changed: 2 additions & 2 deletions
@@ -67,8 +67,8 @@ The versions of `tensorflow-neuron` and `torch-neuron` that are used by Cortex a
 See AWS's [TensorFlow](https://github.com/aws/aws-neuron-sdk/blob/master/docs/tensorflow-neuron/tutorial-compile-infer.md#step-3-compile-on-compilation-instance) and [PyTorch](https://github.com/aws/aws-neuron-sdk/blob/master/docs/pytorch-neuron/tutorial-compile-infer.md#step-3-compile-on-compilation-instance) guides on how to compile models for Inferentia. Here are 2 examples implemented with Cortex:
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-1. [ResNet50 in TensorFlow](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/image-classifier-resnet50)
-1. [ResNet50 in PyTorch](https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/image-classifier-resnet50)
+1. [ResNet50 in TensorFlow](https://github.com/cortexlabs/cortex/tree/0.19/examples/tensorflow/image-classifier-resnet50)
+1. [ResNet50 in PyTorch](https://github.com/cortexlabs/cortex/tree/0.19/examples/pytorch/image-classifier-resnet50)
 
 ### Improving performance
 
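Across the 33 files, the bump is mechanical: `master` becomes `0.19` or `0.19.0` in image tags, branch names, and documentation links. A release pass like this is often finished with a guard that no master-pinned references slipped through; the following is a hypothetical sketch (the file layout is invented for the demo, and note it deliberately ignores external links such as the `aws-neuron-sdk` ones, which legitimately stay on `master`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo setup: one correctly pinned file and one file with a leftover master link.
tree="$(mktemp -d)"
echo 'image_operator: cortexlabs/operator:0.19.0' > "$tree/config.md"
echo '[example](https://github.com/cortexlabs/cortex/tree/master/examples)' > "$tree/leftover.md"

# List files that still pin this repo's master branch; a real release check
# would exit non-zero when this finds anything.
leftovers="$(grep -rlE 'cortexlabs/cortex/tree/master|docs.cortex.dev/v/master/' "$tree" || true)"

if [ -n "$leftovers" ]; then
  echo "master-pinned references remain:"
  echo "$leftovers"
fi
```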
