diff --git a/docs/architecture.md b/docs/architecture.md
index 921f10ea32..1d8d5322d3 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -2,25 +2,7 @@
!!! Note
Work in progress.
-## File structure
-
-An agent is structured in a directory with a configuration file, a directory with skills, a directory with protocols, a directory with connections and a main logic file that is used when running aea run.
-agentName/ | The root of the agent
----------------------------------------------- | -----------------------------------------------------------------
-agent.yml | YAML configuration of the agent
-connections/ | Directory containing all the supported connections
- connection1/ | Connection 1
- ... | ...
- connectionN/ | Connection N
-protocols/ | Directory containing all supported protocols
- protocol1/ | Protocol 1
- ... | ...
- protocolK/ | Protocol K
-skills/ | Directory containing all the skill components
- skill1/ | Skill 1
- ... | ...
- skillN/ | Skill L
## Core components
@@ -47,4 +29,25 @@ A connection allows the AEA to connect to an external service which has a Python
A skill can encapsulate any code and ideally delivers economic value to the AEA. Each skill has at most a single Handler and potentially multiple Behaviours and Tasks. The Handler is responsible for dealing with messages of the protocol type for which this skill is registered; as such, it encapsulates `reactions`. A Behaviour encapsulates `actions`, that is, sequences of interactions with other agents initiated by the AEA. Finally, a Task encapsulates background work which is internal to the AEA.
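+
+A conceptual sketch of these three components follows (class and method names are illustrative, not the framework's exact API):
+
+``` python
+class EchoHandler:
+    """Reaction: deals with incoming messages of the protocol this skill registers for."""
+
+    def handle(self, message) -> None:
+        print(f"Received: {message}")
+
+
+class GreetingBehaviour:
+    """Action: a sequence of interactions with other agents, initiated by the AEA."""
+
+    def act(self) -> None:
+        pass  # e.g. proactively send a greeting to a known agent
+
+
+class HousekeepingTask:
+    """Background work internal to the AEA."""
+
+    def execute(self) -> None:
+        pass  # e.g. periodic state clean-up, with no interaction with other agents
+```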
+## File structure
+
+An agent is structured as a directory containing a configuration file, a directory of connections, a directory of protocols, a directory of skills, and the main logic file used when running `aea run`.
+
+``` bash
+agentName/
+ agent.yml YAML configuration of the agent
+ connections/ Directory containing all the supported connections
+ connection1/ First connection
+ ... ...
+ connectionN/ nth connection
+ protocols/ Directory containing all supported protocols
+ protocol1/ First protocol
+ ... ...
+ protocolK/ kth protocol
+ skills/ Directory containing all the skill components
+ skill1/ First skill
+ ... ...
+ skillN/ nth skill
+```
+
diff --git a/docs/assets/echo.png b/docs/assets/echo.png
new file mode 100644
index 0000000000..9d23fe9756
Binary files /dev/null and b/docs/assets/echo.png differ
diff --git a/docs/assets/full-scaffold.png b/docs/assets/full-scaffold.png
new file mode 100644
index 0000000000..14e902d74f
Binary files /dev/null and b/docs/assets/full-scaffold.png differ
diff --git a/docs/cli_overview.md b/docs/cli_overview.md
index 8b4aa62490..0aa94dc0a0 100644
--- a/docs/cli_overview.md
+++ b/docs/cli_overview.md
@@ -1,4 +1,4 @@
-# Commands
+# CLI commands
Command | Description
diff --git a/docs/css/my-styles.css b/docs/css/my-styles.css
new file mode 100644
index 0000000000..40aba59612
--- /dev/null
+++ b/docs/css/my-styles.css
@@ -0,0 +1,27 @@
+pre {
+ background-color: #f8f8f7;
+}
+
+code {
+ background-color: #0083fb;
+}
+
+/* this doesn't work now
+.md-nav__link {
+ text-transform: uppercase;
+ color: #0083fb;
+}
+*/
+
+/* Katharine's css additions */
+.md-header, .md-tabs, .md-footer-meta, .md-footer-nav, .md-footer-nav__inner {
+ background-color: #172b6e;
+}
+
+.md-nav__title {
+ color: #172b6e;
+}
+
+.md-icon {
+  /* assumed intent: the original block contained only a bare file path */
+  background-image: url("./assets/images/favicon.ico");
+}
\ No newline at end of file
diff --git a/docs/ex_rl.md b/docs/ex_rl.md
deleted file mode 100644
index 5c6bdd606d..0000000000
--- a/docs/ex_rl.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Reinforcement Learning and the AEA Framework
-
-We provide two examples to demonstrate the utility of our framework to RL developers.
-
-## Gym Example
-
-The `train.py` file [here](https://github.com/fetchai/agents-aea/tree/master/examples/gym_ex/train.py) shows that all the RL developer needs to do is add one line of code `(proxy_env = ...)` to introduce our agent as a proxy layer between an OpenAI `gym.Env` and a standard RL agent. The `gym_ex` just serves as a demonstration and helps on-boarding, there is no immediate use case for it as you can train your RL agent without our proxy layer just fine (and faster). However, it decouples the RL agent from the `gym.Env` allowing the two do run in separate environments, potentially owned by different entities.
-
-## Gym Skill
-
-The `gym_skill` [here](https://github.com/fetchai/agents-aea/tree/master/examples/gym_skill) lets an RL developer embed their RL agent inside an AEA as a skill.
diff --git a/docs/gym-plugin.md b/docs/gym-plugin.md
new file mode 100644
index 0000000000..2269840d14
--- /dev/null
+++ b/docs/gym-plugin.md
@@ -0,0 +1,58 @@
+The `gym_ex` example demonstrates the AEA framework's flexibility to Reinforcement Learning developers.
+
+There is no immediate use case for this example as you can train an RL agent without the AEA proxy layer just fine (and faster).
+
+However, the example decouples the RL agent from the `gym.Env` allowing them to run in separate environments, potentially owned by different entities.
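+
+To make this decoupling concrete, here is a minimal conceptual sketch of the proxy pattern (illustrative only; this is not the repository's `ProxyEnv` implementation):
+
+``` python
+import gym
+
+
+class MinimalProxyEnv(gym.Env):
+    """Presents the familiar gym.Env interface to the RL agent, while the
+    wrapped environment could in principle run elsewhere, owned by another entity."""
+
+    def __init__(self, wrapped_env: gym.Env):
+        self.wrapped_env = wrapped_env
+
+    def step(self, action):
+        # In the actual example the action is routed through the AEA;
+        # this sketch simply forwards the call.
+        return self.wrapped_env.step(action)
+
+    def reset(self):
+        return self.wrapped_env.reset()
+```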
+
+
+## Quick start
+
+### Dependencies
+
+``` bash
+pip install numpy gym
+```
+
+### Files
+
+You will have already downloaded the `examples` directory during the AEA quick start demo.
+
+``` bash
+cd examples/gym_ex
+```
+
+### Run the example
+
+``` bash
+python train.py
+```
+
+Notice the usual RL setup: the `fit` method of the RL agent has the typical signature and a familiar implementation.
+
+Note how `train.py` demonstrates the ease of using an AEA as a proxy layer between an OpenAI `gym.Env` and a standard RL agent.
+
+It is just one line of code!
+
+``` python
+from gyms.env import BanditNArmedRandom
+from proxy.env import ProxyEnv
+from rl.agent import RLAgent
+
+
+if __name__ == "__main__":
+ NB_GOODS = 10
+ NB_PRICES_PER_GOOD = 100
+ NB_STEPS = 4000
+
+ # Use any gym.Env compatible environment:
+ gym_env = BanditNArmedRandom(nb_bandits=NB_GOODS, nb_prices_per_bandit=NB_PRICES_PER_GOOD)
+
+ # Pass the gym environment to a proxy environment:
+ proxy_env = ProxyEnv(gym_env)
+
+ # Use any RL agent compatible with the gym environment and call the fit method:
+ rl_agent = RLAgent(nb_goods=NB_GOODS)
+ rl_agent.fit(env=proxy_env, nb_steps=NB_STEPS)
+```
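+
+For orientation, here is a hypothetical skeleton of such a `fit` method (a sketch only; see the repository for the actual `RLAgent`):
+
+``` python
+import random
+
+
+class SketchRLAgent:
+    """Illustrative agent exposing the typical fit signature."""
+
+    def __init__(self, nb_goods: int):
+        self.nb_goods = nb_goods
+
+    def fit(self, env, nb_steps: int) -> None:
+        env.reset()
+        for _ in range(nb_steps):
+            action = random.randrange(self.nb_goods)  # placeholder policy
+            observation, reward, done, info = env.step(action)
+            # a real agent would update its policy from the reward here
+```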
+
+
diff --git a/docs/gym-skill.md b/docs/gym-skill.md
new file mode 100644
index 0000000000..1972b8379a
--- /dev/null
+++ b/docs/gym-skill.md
@@ -0,0 +1,85 @@
+The AEA gym skill demonstrates how a custom Reinforcement Learning agent may be embedded into an Autonomous Economic Agent.
+
+
+## Demo instructions
+
+Follow the Preliminaries and Installation instructions here.
+
+Create and launch a virtual environment.
+
+``` bash
+pipenv --python 3.7 && pipenv shell
+```
+
+Install the gym library.
+
+``` bash
+pip install gym
+```
+
+Then, download the `examples` directory.
+``` bash
+svn export https://github.com/fetchai/agents-aea.git/trunk/examples
+```
+
+
+
+
+### Create the agent
+In the root directory, create the gym agent.
+``` bash
+aea create my_gym_agent
+```
+
+
+### Add the gym skill
+``` bash
+cd my_gym_agent
+aea add skill gym
+```
+
+
+### Copy the gym environment to the agent directory
+``` bash
+mkdir gyms
+cp -a ../examples/gym_ex/gyms/. gyms/
+```
+
+
+### Add a gym connection
+``` bash
+aea add connection gym
+```
+
+
+### Update the connection config
+``` bash
+nano connections/gym/connection.yaml
+```
+
+Point the `env` field at the copied environment.
+
+``` yaml
+env: gyms.env.BanditNArmedRandom
+```
+
+
+
+### Run the agent with the gym connection
+
+``` bash
+aea run --connection gym
+```
+
+
+
+
+### Delete the agent
+
+When you're done, you can delete the agent.
+
+``` bash
+aea delete my_gym_agent
+```
+
+
+
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index 33bc018f10..87f5c4dd12 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,17 +1,36 @@
The AEA is a framework for autonomous economic agent (AEA) development. It gives developers a quick and efficient way to build autonomous economic agents.
-The framework is super modular, easily extensible, and highly composable.
+The framework is super modular, easily extensible, and highly composable. It is ideal for Reinforcement Learning scenarios.
-An autonomous economic agent (AEA) is an intelligent agent whose goal is to generate economic value for its owner.
-AEAs achieve their goals with the help of the OEF and the Fetch.AI Ledger. Third party systems, such as Ethereum, may also allow AEA integration.
+## Our vision
+
+Fetch.AI intends the AEA framework to fill two focused commercial roles.
+
+### Open source company
+
+We want to build infrastructure on which external parties can build their own solutions.
+
+### Platform for start-ups
+
+By operating as a platform for start-ups, we hope to solve the chicken-or-egg problem through incentive schemes.
+
+
+
+## Agents
+
+An autonomous economic agent (AEA) is an intelligent agent whose goal is to generate economic value for its owner. Its super power lies in its ability to autonomously acquire new skills.
+
+AEAs achieve their goals with the help of the OEF and the Fetch.AI Ledger.
+
+Third-party systems, such as Ethereum, may also allow AEA integration.
+
-Their super power lies in the ability to autonomously acquire new skills.
!!! Note
Work in progress.
-
+
diff --git a/docs/integration.md b/docs/integration.md
new file mode 100644
index 0000000000..e2dfe4a7f9
--- /dev/null
+++ b/docs/integration.md
@@ -0,0 +1,22 @@
+In this section, we show you how to integrate the AEA with third-party ledgers.
+
+
+## Fetch.AI Ledger
+
+!!! Note
+ Coming soon.
+
+
+## Ethereum
+
+!!! Note
+ Coming soon.
+
+
+## Other ledgers
+
+!!! Note
+ Coming soon.
+
+
+
\ No newline at end of file
diff --git a/docs/protocol.md b/docs/protocol.md
new file mode 100644
index 0000000000..cb1504869d
--- /dev/null
+++ b/docs/protocol.md
@@ -0,0 +1,8 @@
+## Envelope
+
+!!! Todo
+
+
+## Sending messages
+
+!!! Todo
\ No newline at end of file
diff --git a/docs/quickstart.md b/docs/quickstart.md
index 58583f6e69..7057121b6a 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -1,4 +1,28 @@
-## Setup
+## Preliminaries
+
+Create and cd into a new working directory.
+
+``` bash
+mkdir aea/
+cd aea/
+```
+
+Check you have `pipenv`.
+
+``` bash
+which pipenv
+```
+
+If you don't have it, install it. Instructions are here.
+
+Once installed, create a new virtual environment and launch a shell inside it.
+
+``` bash
+pipenv --python 3.7 && pipenv shell
+```
+
+
+## Installation
Install the Autonomous Economic Agent framework.
@@ -19,27 +43,6 @@ pip install aea[cli]
```
-
-
-
## Echo Agent demo
### Download the examples and scripts directories.
``` bash
@@ -56,13 +59,13 @@ aea create my_first_agent
``` bash
cd my_first_agent
-aea add skill echo_skill ../examples/echo_skill
+aea add skill echo
```
### Launch the OEF
-Open a new terminal at the repo root and launch the OEF.
+Open a new terminal and launch the OEF.
``` bash
python scripts/oef/launch.py -c ./scripts/oef/launch_config.json
@@ -76,6 +79,18 @@ Go back to the other terminal and run the agent.
aea run
```
+You will see the echo task running in the terminal window.
+
+