Commit c63b2ce

Merge pull request #959 from azavea/lf/docs

More docs

2 parents 006937b + 235fa77 commit c63b2ce

File tree

12 files changed: +229 −174 lines changed

README.md

+81 −113
```diff
@@ -10,12 +10,12 @@
 [![Documentation Status](https://readthedocs.org/projects/raster-vision/badge/?version=latest)](https://docs.rastervision.io/en/latest/?badge=latest)
 
 Raster Vision is an open source Python framework for building computer vision models on satellite, aerial, and other large imagery sets (including oblique drone imagery).
-* It allows users (who don't need to be experts in deep learning!) to quickly and repeatably configure experiments that execute a machine learning workflow including: analyzing training data, creating training chips, training models, creating predictions, evaluating models, and bundling the model files and configuration for easy deployment.
+* It allows users (who don't need to be experts in deep learning!) to quickly and repeatably configure experiments that execute a machine learning pipeline including: analyzing training data, creating training chips, training models, creating predictions, evaluating models, and bundling the model files and configuration for easy deployment.
 ![Overview of Raster Vision workflow](docs/img/rv-pipeline-overview.png)
-* There is built-in support for chip classification, object detection, and semantic segmentation with backends using PyTorch and Tensorflow.
+* There is built-in support for chip classification, object detection, and semantic segmentation with backends using PyTorch.
 ![Examples of chip classification, object detection and semantic segmentation](docs/img/cv-tasks.png)
 * Experiments can be executed on CPUs and GPUs with built-in support for running in the cloud using [AWS Batch](https://github.com/azavea/raster-vision-aws).
-* The framework is extensible to new data sources, tasks (eg. object detection), backends (eg. TF Object Detection API), and cloud providers.
+* The framework is extensible to new data sources, tasks (eg. instance segmentation), backends (eg. Detectron2), and cloud providers.
 
 See the [documentation](https://docs.rastervision.io) for more details.
```

```diff
@@ -24,13 +24,11 @@ See the [documentation](https://docs.rastervision.io) for more details.
 There are several ways to setup Raster Vision:
 * To build Docker images from scratch, after cloning this repo, run `docker/build`, and run the container using `docker/run`.
 * Docker images are published to [quay.io](https://quay.io/repository/azavea/raster-vision). The tag for the `raster-vision` image determines what type of image it is:
-    - The `tf-cpu-*` tags are for running the Tensorflow CPU containers.
-    - The `tf-gpu-*` tags are for running the Tensorflow GPU containers.
     - The `pytorch-*` tags are for running the PyTorch containers.
     - We publish a new tag per merge into `master`, which is tagged with the first 7 characters of the commit hash. To use the latest version, pull the `latest` suffix, e.g. `raster-vision:pytorch-latest`. Git tags are also published, with the Github tag name as the Docker tag suffix.
 * Raster Vision can be installed directly using `pip install rastervision`. However, some of its dependencies will have to be installed manually.
 
-For more detailed instructions, see the [Setup docs](https://docs.rastervision.io/en/0.11/setup.html).
+For more detailed instructions, see the [Setup docs](https://docs.rastervision.io/en/0.12/setup.html).
 
 ### Example
 
```
````diff
@@ -39,124 +37,94 @@ The best way to get a feel for what Raster Vision enables is to look at an examp
 ```python
 # tiny_spacenet.py
 
-import rastervision as rv
-
-class TinySpacenetExperimentSet(rv.ExperimentSet):
-    def exp_main(self):
-        base_uri = ('https://s3.amazonaws.com/azavea-research-public-data/'
-                    'raster-vision/examples/spacenet')
-        train_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img205.tif'.format(base_uri)
-        train_label_uri = '{}/buildings_AOI_2_Vegas_img205.geojson'.format(base_uri)
-        val_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img25.tif'.format(base_uri)
-        val_label_uri = '{}/buildings_AOI_2_Vegas_img25.geojson'.format(base_uri)
-        channel_order = [0, 1, 2]
-        background_class_id = 2
-
-        # ------------- TASK -------------
-
-        task = rv.TaskConfig.builder(rv.SEMANTIC_SEGMENTATION) \
-            .with_chip_size(300) \
-            .with_chip_options(chips_per_scene=50) \
-            .with_classes({
-                'building': (1, 'red'),
-                'background': (2, 'black')
-            }) \
-            .build()
-
-        # ------------- BACKEND -------------
-
-        backend = rv.BackendConfig.builder(rv.PYTORCH_SEMANTIC_SEGMENTATION) \
-            .with_task(task) \
-            .with_train_options(
-                batch_size=2,
-                num_epochs=1,
-                debug=True) \
-            .build()
-
-        # ------------- TRAINING -------------
-
-        train_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIO_SOURCE) \
-            .with_uri(train_image_uri) \
-            .with_channel_order(channel_order) \
-            .with_stats_transformer() \
-            .build()
-
-        train_label_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIZED_SOURCE) \
-            .with_vector_source(train_label_uri) \
-            .with_rasterizer_options(background_class_id) \
-            .build()
-        train_label_source = rv.LabelSourceConfig.builder(rv.SEMANTIC_SEGMENTATION) \
-            .with_raster_source(train_label_raster_source) \
-            .build()
-
-        train_scene = rv.SceneConfig.builder() \
-            .with_task(task) \
-            .with_id('train_scene') \
-            .with_raster_source(train_raster_source) \
-            .with_label_source(train_label_source) \
-            .build()
-
-        # ------------- VALIDATION -------------
-
-        val_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIO_SOURCE) \
-            .with_uri(val_image_uri) \
-            .with_channel_order(channel_order) \
-            .with_stats_transformer() \
-            .build()
-
-        val_label_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIZED_SOURCE) \
-            .with_vector_source(val_label_uri) \
-            .with_rasterizer_options(background_class_id) \
-            .build()
-        val_label_source = rv.LabelSourceConfig.builder(rv.SEMANTIC_SEGMENTATION) \
-            .with_raster_source(val_label_raster_source) \
-            .build()
-
-        val_scene = rv.SceneConfig.builder() \
-            .with_task(task) \
-            .with_id('val_scene') \
-            .with_raster_source(val_raster_source) \
-            .with_label_source(val_label_source) \
-            .build()
-
-        # ------------- DATASET -------------
-
-        dataset = rv.DatasetConfig.builder() \
-            .with_train_scene(train_scene) \
-            .with_validation_scene(val_scene) \
-            .build()
-
-        # ------------- EXPERIMENT -------------
-
-        experiment = rv.ExperimentConfig.builder() \
-            .with_id('tiny-spacenet-experiment') \
-            .with_root_uri('/opt/data/rv') \
-            .with_task(task) \
-            .with_backend(backend) \
-            .with_dataset(dataset) \
-            .with_stats_analyzer() \
-            .build()
-
-        return experiment
-
-
-if __name__ == '__main__':
-    rv.main()
+from os.path import join
+
+from rastervision.core.rv_pipeline import *
+from rastervision.core.backend import *
+from rastervision.core.data import *
+from rastervision.pytorch_backend import *
+from rastervision.pytorch_learner import *
+
+
+def get_config(runner):
+    root_uri = '/opt/data/output/'
+    base_uri = ('https://s3.amazonaws.com/azavea-research-public-data/'
+                'raster-vision/examples/spacenet')
+    train_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img205.tif'.format(
+        base_uri)
+    train_label_uri = '{}/buildings_AOI_2_Vegas_img205.geojson'.format(
+        base_uri)
+    val_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img25.tif'.format(base_uri)
+    val_label_uri = '{}/buildings_AOI_2_Vegas_img25.geojson'.format(base_uri)
+    channel_order = [0, 1, 2]
+    class_config = ClassConfig(
+        names=['building', 'background'], colors=['red', 'black'])
+
+    def make_scene(scene_id, image_uri, label_uri):
+        """
+        - StatsTransformer is used to convert uint16 values to uint8.
+        - The GeoJSON does not have a class_id property for each geom,
+          so it is inferred as 0 (ie. building) because the default_class_id
+          is set to 0.
+        - The labels are in the form of GeoJSON which needs to be rasterized
+          to use as labels for semantic segmentation, so we use a RasterizedSource.
+        - The rasterizer sets the background (as opposed to foreground) pixels
+          to 1 because background_class_id is set to 1.
+        """
+        raster_source = RasterioSourceConfig(
+            uris=[image_uri],
+            channel_order=channel_order,
+            transformers=[StatsTransformerConfig()])
+        vector_source = GeoJSONVectorSourceConfig(
+            uri=label_uri, default_class_id=0, ignore_crs_field=True)
+        label_source = SemanticSegmentationLabelSourceConfig(
+            raster_source=RasterizedSourceConfig(
+                vector_source=vector_source,
+                rasterizer_config=RasterizerConfig(background_class_id=1)))
+        return SceneConfig(
+            id=scene_id,
+            raster_source=raster_source,
+            label_source=label_source)
+
+    dataset = DatasetConfig(
+        class_config=class_config,
+        train_scenes=[
+            make_scene('scene_205', train_image_uri, train_label_uri)
+        ],
+        validation_scenes=[
+            make_scene('scene_25', val_image_uri, val_label_uri)
+        ])
+
+    # Use the PyTorch backend for the SemanticSegmentation pipeline.
+    chip_sz = 300
+    backend = PyTorchSemanticSegmentationConfig(
+        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
+        solver=SolverConfig(lr=1e-4, num_epochs=1, batch_sz=2))
+    chip_options = SemanticSegmentationChipOptions(
+        window_method=SemanticSegmentationWindowMethod.random_sample,
+        chips_per_scene=10)
+
+    return SemanticSegmentationConfig(
+        root_uri=root_uri,
+        dataset=dataset,
+        backend=backend,
+        train_chip_sz=chip_sz,
+        predict_chip_sz=chip_sz,
+        chip_options=chip_options)
 ```
 
 Raster Vision uses a unittest-like method for executing experiments. For instance, if the above was defined in `tiny_spacenet.py`, with the proper setup you could run the experiment using:
 
 ```bash
-> rastervision run local -p tiny_spacenet.py
+> rastervision run local tiny_spacenet.py
 ```
 
-See the [Quickstart](https://docs.rastervision.io/en/0.11/quickstart.html) for a more complete description of running this example.
+See the [Quickstart](https://docs.rastervision.io/en/0.12/quickstart.html) for a more complete description of running this example.
 
 ### Resources
 
 * [Raster Vision Documentation](https://docs.rastervision.io)
-* [raster-vision-examples](https://github.com/azavea/raster-vision-examples): A repository of examples of running RV on open datasets
-* [raster-vision-aws](https://github.com/azavea/raster-vision-aws): Deployment code for setting up AWS Batch with GPUs
 
 ### Contact and Support
````

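The core of this README change is the move from the 0.11 builder-style API (`rv.TaskConfig.builder(...).with_...().build()`) to the 0.12 declarative style, where a module-level `get_config` returns a tree of config objects. The pattern can be sketched in isolation with plain dataclasses — the classes below are hypothetical stand-ins for illustration, not the actual Raster Vision 0.12 classes, which add validation and serialization on top:

```python
from dataclasses import dataclass, field
from typing import List


# Hypothetical stand-ins for the declarative config pattern; names mirror the
# example above but none of this is the real rastervision API.
@dataclass
class ClassConfig:
    names: List[str]
    colors: List[str]

    def __post_init__(self):
        # Validation happens at construction time, not at a later .build() call.
        assert len(self.names) == len(self.colors), 'one color per class'


@dataclass
class SceneConfig:
    id: str
    image_uri: str
    label_uri: str


@dataclass
class DatasetConfig:
    class_config: ClassConfig
    train_scenes: List[SceneConfig] = field(default_factory=list)
    validation_scenes: List[SceneConfig] = field(default_factory=list)


def get_config(runner):
    """As in the 0.12 style, the entry point simply returns a config tree."""
    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])
    return DatasetConfig(
        class_config=class_config,
        train_scenes=[SceneConfig('scene_205', 'img205.tif', 'img205.geojson')],
        validation_scenes=[SceneConfig('scene_25', 'img25.tif', 'img25.geojson')])


cfg = get_config('local')
print(cfg.class_config.names)  # → ['building', 'background']
```

The design win over the builder style is that the whole configuration is an ordinary object tree: it can be constructed, inspected, and validated eagerly, with no `.build()` step to forget.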
docs/api.rst

+5

```diff
@@ -96,6 +96,11 @@ RasterioSourceConfig
 
 .. autoclass:: rastervision.core.data.raster_source.RasterioSourceConfig
 
+RasterizerConfig
+~~~~~~~~~~~~~~~~~
+
+.. autoclass:: rastervision.core.data.raster_source.RasterizerConfig
+
 .. _api RasterizedSourceConfig:
 
 RasterizedSourceConfig
```
docs/architecture.rst

+5 −5

```diff
@@ -5,7 +5,7 @@ Architecture and Customization
 
 .. _codebase overview:
 
-Codebase overview
+Codebase Overview
 -------------------
 
 The Raster Vision codebase is designed with modularity and flexibility in mind.
@@ -32,7 +32,7 @@ Writing pipelines and plugins
 
 In this section, we explain the most important aspects of the ``rastervision.pipeline`` package through a series of examples which incrementally build on one another. These examples show how to write custom pipelines and configuration schemas, how to customize an existing pipeline, and how to package the code as a plugin.
 
-The full source code for Examples 1 and 2 is in `rastervision.pipeline_example_plugin1 <https://github.com/azavea/raster-vision/tree/master/rastervision_pipeline/rastervision/pipeline_example_plugin1>`_ and Example 3 is in `rastervision.pipeline_example_plugin2 <https://github.com/azavea/raster-vision/tree/master/rastervision_pipeline/rastervision/pipeline_example_plugin2>`_ and they can be run from inside the RV Docker image. However, **note that new plugins are typically created in a separate repo and Docker image**, and :ref:`bootstrap` shows how to do this.
+The full source code for Examples 1 and 2 is in `rastervision.pipeline_example_plugin1 <https://github.com/azavea/raster-vision/tree/0.12/rastervision_pipeline/rastervision/pipeline_example_plugin1>`_ and Example 3 is in `rastervision.pipeline_example_plugin2 <https://github.com/azavea/raster-vision/tree/0.12/rastervision_pipeline/rastervision/pipeline_example_plugin2>`_ and they can be run from inside the RV Docker image. However, **note that new plugins are typically created in a separate repo and Docker image**, and :ref:`bootstrap` shows how to do this.
 
 .. _example 1:
 
@@ -59,7 +59,7 @@ Finally, in order to package this code as a plugin, and make it usable within th
 
 We can invoke the Raster Vision CLI to run the pipeline using:
 
-.. code-block:: shell
+.. code-block:: terminal
 
    > rastervision run inprocess rastervision.pipeline_example_plugin1.config1 -a root_uri /opt/data/pipeline-example/1/ -s 2
 
@@ -94,7 +94,7 @@ We can configure the pipeline using:
 
 The pipeline can then be run with the above configuration using:
 
-.. code-block:: shell
+.. code-block:: terminal
 
    > rastervision run inprocess rastervision.pipeline_example_plugin1.config2 -a root_uri /opt/data/pipeline-example/2/ -s 2
 
@@ -129,7 +129,7 @@ The code to implement the new configuration and behavior, and a sample configura
 
 We can run the pipeline as follows:
 
-.. code-block:: shell
+.. code-block:: terminal
 
    > rastervision run inprocess rastervision.pipeline_example_plugin2.config3 -a root_uri /opt/data/pipeline-example/3/ -s 2
```

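The architecture docs above describe packaging pipelines as plugins that register themselves with Raster Vision and are then resolved by name from the CLI. A minimal, self-contained sketch of such a registry pattern — all class and method names here are illustrative, not the actual `rastervision.pipeline` API:

```python
# Sketch of the plugin-registry idea: a plugin module registers a pipeline
# class under a name at import time, and a runner later looks it up by that
# name and executes its commands in order. Names are hypothetical.


class Registry:
    def __init__(self):
        self._pipelines = {}

    def register_pipeline(self, name, cls):
        self._pipelines[name] = cls

    def get_pipeline(self, name):
        return self._pipelines[name]


registry = Registry()


# A "plugin" is just a module whose import side effect is registration.
class HelloPipeline:
    commands = ['greet']

    def __init__(self, root_uri):
        self.root_uri = root_uri

    def greet(self):
        return 'hello from {}'.format(self.root_uri)


registry.register_pipeline('hello', HelloPipeline)

# A runner resolves the class from config and runs each command.
pipeline = registry.get_pipeline('hello')(root_uri='/opt/data/pipeline-example/1/')
results = [getattr(pipeline, cmd)() for cmd in pipeline.commands]
print(results)  # → ['hello from /opt/data/pipeline-example/1/']
```

This is the same shape as the `rastervision run inprocess <module>` invocations above: the module path names the config, and the runner dispatches the pipeline's commands.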
docs/bootstrap.rst

+2 −2

```diff
@@ -3,9 +3,9 @@
 Bootstrap new projects with a template
 =======================================
 
-When using Raster Vision on a new project, the best practice is to create a new repo with its own Docker image based on the Raster Vision image. This involves a fair amount of boilerplate code which has a few things that vary between projects. To facilitate bootstrapping new projects, there is a `cookiecutter <https://cookiecutter.readthedocs.io/>`_ template. Assuming that you cloned the Raster Vision repo and ran ``pip install cookiecutter==1.7.0``, you can instantiate the template as follows (after adjusting paths appropriately for your particular setup).
+When using Raster Vision on a new project, the best practice is to create a new repo with its own Docker image based on the Raster Vision image. This involves a fair amount of boilerplate code which has a few things that vary between projects. To facilitate bootstrapping new projects, there is a `cookiecutter <https://cookiecutter.readthedocs.io/>`_ `template <https://github.com/azavea/raster-vision/tree/0.12/cookiecutter_template>`_. Assuming that you cloned the Raster Vision repo and ran ``pip install cookiecutter==1.7.0``, you can instantiate the template as follows (after adjusting paths appropriately for your particular setup).
 
-.. code-block:: console
+.. code-block:: terminal
 
    [lfishgold@monoshone ~/projects]
    $ cookiecutter raster-vision/cookiecutter_template/
```

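The cookiecutter template referenced in this hunk fills per-project values into boilerplate files. As a toy illustration of that substitution step — using the stdlib `string.Template` rather than cookiecutter's Jinja2 engine, and a made-up two-line Dockerfile template:

```python
from string import Template

# Toy stand-in for what cookiecutter does: render boilerplate with per-project
# values. cookiecutter itself uses Jinja2 ({{ cookiecutter.project_name }}
# placeholders); string.Template is used here only to keep the sketch
# dependency-free. The template content below is made up for illustration.

dockerfile_template = Template(
    'FROM quay.io/azavea/raster-vision:pytorch-$rv_version\n'
    'COPY ./$project_name /opt/src/$project_name\n')

context = {'project_name': 'my_rv_project', 'rv_version': '0.12'}
rendered = dockerfile_template.substitute(context)
print(rendered)
```

The real template does this across a whole directory tree (Dockerfile, scripts, Python package skeleton), which is why a generator beats copying boilerplate by hand.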