Commit f978586

Update version to 0.14.0
1 parent 1539585 commit f978586

23 files changed: 53 additions & 58 deletions


README.md

Lines changed: 6 additions & 10 deletions

````diff
@@ -4,10 +4,6 @@ Cortex is an open source platform for deploying machine learning models as produ
 
 <br>
 
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://cortex.dev/install)[tutorial](https://cortex.dev/iris-classifier)[docs](https://cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.13/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[email us](mailto:[email protected])[chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
-
 <!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
 ![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)
 
@@ -33,7 +29,7 @@ Cortex is designed to be self-hosted on any AWS account. You can spin up a clust
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
 # install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.13/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"
 
 # provision infrastructure on AWS and spin up a cluster
 $ cortex cluster up
@@ -140,8 +136,8 @@ The CLI sends configuration and code to the cluster every time you run `cortex d
 ## Examples of Cortex deployments
 
 <!-- CORTEX_VERSION_README_MINOR x5 -->
-* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
-* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
+* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
````
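The `<!-- CORTEX_VERSION_README_MINOR -->` markers in the hunks above flag lines whose embedded minor version must change on every release. A bump like this one can be scripted; below is a minimal sketch (the helper itself is hypothetical and not part of this commit, and it builds its own sample README so it can run anywhere):

```shell
#!/usr/bin/env sh
# Hypothetical release helper (not part of the Cortex repo): rewrite the
# minor-version links that the CORTEX_VERSION_README_MINOR markers flag.
set -eu
old_minor="0.13"
new_minor="0.14"

# Work on a sample README so the sketch is self-contained.
cd "$(mktemp -d)"
printf '%s\n' \
  '<!-- CORTEX_VERSION_README_MINOR -->' \
  '* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/sklearn/iris-classifier)' \
  > README.md

# Rewrite .../cortex/tree/0.13/... to .../cortex/tree/0.14/... in place.
sed -i.bak "s|cortexlabs/cortex/tree/${old_minor}/|cortexlabs/cortex/tree/${new_minor}/|g" README.md
grep 'tree/' README.md
```

In the real repository a tool would also need to handle the `master`-to-`0.14` rewrites seen in the docs diffs below; this sketch covers only the minor-version link pattern.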

build/build-image.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0
 
 dir=$1
 image=$2
```

build/cli.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0
 
 arg1=${1:-""}
 upload="false"
```

build/push-image.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0
 
 image=$1
```
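Each build script pins `CORTEX_VERSION` so that images are tagged with the release number rather than `master`. A sketch of how the pinned variable becomes a registry tag (the `operator` image name here is only an illustration; the real scripts take the image name as an argument):

```shell
#!/usr/bin/env sh
# Sketch: how a pinned CORTEX_VERSION turns into a registry tag.
set -eu
CORTEX_VERSION=0.14.0
image=operator
tag="cortexlabs/${image}:${CORTEX_VERSION}"
echo "$tag"    # prints cortexlabs/operator:0.14.0
```

Pinning the variable per release branch means the same scripts produce `:master` tags on the development branch and `:0.14.0` tags here, with no other changes.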

docs/cluster-management/config.md

Lines changed: 21 additions & 21 deletions

````diff
@@ -43,28 +43,28 @@ instance_volume_size: 50
 log_group: cortex
 
 # whether to use spot instances in the cluster (default: false)
-# see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://cortex.dev/v/0.14/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
 # docker image paths
-image_python_serve: cortexlabs/python-serve:master
-image_python_serve_gpu: cortexlabs/python-serve-gpu:master
-image_tf_serve: cortexlabs/tf-serve:master
-image_tf_serve_gpu: cortexlabs/tf-serve-gpu:master
-image_tf_api: cortexlabs/tf-api:master
-image_onnx_serve: cortexlabs/onnx-serve:master
-image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:master
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_python_serve: cortexlabs/python-serve:0.14.0
+image_python_serve_gpu: cortexlabs/python-serve-gpu:0.14.0
+image_tf_serve: cortexlabs/tf-serve:0.14.0
+image_tf_serve_gpu: cortexlabs/tf-serve-gpu:0.14.0
+image_tf_api: cortexlabs/tf-api:0.14.0
+image_onnx_serve: cortexlabs/onnx-serve:0.14.0
+image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:0.14.0
+image_operator: cortexlabs/operator:0.14.0
+image_manager: cortexlabs/manager:0.14.0
+image_downloader: cortexlabs/downloader:0.14.0
+image_request_monitor: cortexlabs/request-monitor:0.14.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.14.0
+image_metrics_server: cortexlabs/metrics-server:0.14.0
+image_nvidia: cortexlabs/nvidia:0.14.0
+image_fluentd: cortexlabs/fluentd:0.14.0
+image_statsd: cortexlabs/statsd:0.14.0
+image_istio_proxy: cortexlabs/istio-proxy:0.14.0
+image_istio_pilot: cortexlabs/istio-pilot:0.14.0
+image_istio_citadel: cortexlabs/istio-citadel:0.14.0
+image_istio_galley: cortexlabs/istio-galley:0.14.0
 ```
````
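All of the `image_*` entries above must move to the same tag in lockstep, which is easy to get wrong when editing by hand. A sketch of a consistency check, run here against a sample config (`cluster.yaml` and its entries are assumptions made for the sake of a runnable example):

```shell
#!/usr/bin/env sh
# Sketch: verify that every image_* entry in a cluster config pins the same tag.
set -eu
cd "$(mktemp -d)"

# Sample config standing in for the real file.
cat > cluster.yaml <<'EOF'
image_operator: cortexlabs/operator:0.14.0
image_manager: cortexlabs/manager:0.14.0
image_fluentd: cortexlabs/fluentd:0.14.0
EOF

# Strip everything up to the last ':' to isolate each tag, then deduplicate.
tags="$(grep '^image_' cluster.yaml | sed 's/.*://' | sort -u)"
nl='
'
case "$tags" in
  *"$nl"*) echo "tag mismatch:"; echo "$tags"; exit 1 ;;
  *) echo "all image_* entries pinned to ${tags}" ;;
esac
```

With the sample config this prints `all image_* entries pinned to 0.14.0`; a mixed config would list the conflicting tags and exit non-zero.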

docs/cluster-management/install.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -12,7 +12,7 @@ See [cluster configuration](config.md) to learn how you can customize your clust
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"
 
 # provision infrastructure on AWS and spin up a cluster
 $ cortex cluster up
@@ -38,7 +38,7 @@ your cluster is ready!
 
 ```bash
 # clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.14 https://github.com/cortexlabs/cortex.git
 
 # navigate to the TensorFlow iris classification example
 cd cortex/examples/tensorflow/iris-classifier
````

docs/cluster-management/update.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -22,7 +22,7 @@ cortex cluster update
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"
 
 # confirm version
 cortex version
```

docs/deployments/onnx.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -67,7 +67,7 @@ You can log information about each request by adding a `?debug=true` parameter t
 An ONNX Predictor is a Python class that describes how to serve your ONNX model to make predictions.
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 ## Implementation
 
@@ -133,6 +133,6 @@ requests==2.22.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
````

docs/deployments/python.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -171,6 +171,6 @@ xgboost==0.90
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
````

docs/deployments/tensorflow.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -68,7 +68,7 @@ You can log information about each request by adding a `?debug=true` parameter t
 A TensorFlow Predictor is a Python class that describes how to serve your TensorFlow model to make predictions.
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 ## Implementation
 
@@ -128,6 +128,6 @@ tensorflow==2.1.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tf-api/Dockerfile).
+The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/tf-api/Dockerfile).
 
 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
````
