
Commit fec6553

kencochrane authored and Misty Stanley-Jones committed

Added updates and Release notes for Docker for AWS Beta 18 (docker#1782)

* Added updates and Release notes for Docker for AWS Beta 18

Signed-off-by: Ken Cochrane <[email protected]>

1 parent 5c8a98a commit fec6553

File tree

7 files changed: +215 -8 lines changed

- _data/toc.yaml
- docker-for-aws/faqs.md
- docker-for-aws/iam-permissions.md
- docker-for-aws/index.md
- docker-for-aws/persistent-data-volumes.md
- docker-for-aws/release-notes.md
- docker-for-azure/persistent-data-volumes.md

_data/toc.yaml

Lines changed: 4 additions & 0 deletions

@@ -60,6 +60,8 @@ guides:
      title: Upgrading
    - path: /docker-for-aws/deploy/
      title: Deploy your app
+   - path: /docker-for-aws/persistent-data-volumes/
+     title: Persistent data volumes
    - path: /docker-for-aws/faqs/
      title: FAQs
    - path: /docker-for-aws/opensource/
@@ -76,6 +78,8 @@ guides:
      title: Upgrading
    - path: /docker-for-azure/deploy/
      title: Deploy your app
+   - path: /docker-for-azure/persistent-data-volumes/
+     title: Persistent data volumes
    - path: /docker-for-azure/faqs/
      title: FAQs
    - path: /docker-for-azure/opensource/

docker-for-aws/faqs.md

Lines changed: 45 additions & 5 deletions

@@ -46,9 +46,10 @@ This AWS documentation page will describe how you can tell if you have EC2-Class
### Possible fixes to the EC2-Classic region issue:
There are a few workarounds that you can try to get Docker for AWS up and running.

-1. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`. So try another region, `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions will more then likely be setup with **EC2-VPC** and you will not longer have this issue.
-2. Create an new AWS account, all new accounts will be setup using **EC2-VPC** and will not have this problem.
-3. You can try and contact AWS support to convert your **EC2-Classic** account to a **EC2-VPC** account. For more information checkout the following answer for **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** on https://aws.amazon.com/vpc/faqs/#Default_VPCs
+1. Create your own VPC, then [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc).
+2. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`, so try another region: `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions are more than likely set up with **EC2-VPC**, and you will no longer have this issue.
+3. Create a new AWS account; all new accounts are set up using **EC2-VPC** and will not have this problem.
+4. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, check out the answer to **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** at https://aws.amazon.com/vpc/faqs/#Default_VPCs


### Helpful links:
- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
@@ -60,15 +61,54 @@ There are a few workarounds that you can try to get Docker for AWS up and runni

## Can I use my existing VPC?

-Not at this time, but it is on our roadmap for future releases.
+Yes, see [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc) for more info.
+
+## Recommended VPC and subnet setup
+
+#### VPC
+
+* **CIDR:** 172.31.0.0/16
+* **DNS hostnames:** yes
+* **DNS resolution:** yes
+* **DHCP option set:** the DHCP option set below
+
+#### Internet gateway
+* **VPC:** the VPC above
+
+#### DHCP option set
+
+* **domain-name:** ec2.internal
+* **domain-name-servers:** AmazonProvidedDNS
+
+#### Subnet1
+* **CIDR:** 172.31.16.0/20
+* **Auto-assign public IP:** yes
+* **Availability Zone:** A
+
+#### Subnet2
+* **CIDR:** 172.31.32.0/20
+* **Auto-assign public IP:** yes
+* **Availability Zone:** B
+
+#### Subnet3
+* **CIDR:** 172.31.0.0/20
+* **Auto-assign public IP:** yes
+* **Availability Zone:** C
+
+#### Route table
+* **Destination CIDR block:** 0.0.0.0/0
+* **Subnets:** Subnet1, Subnet2, Subnet3
+
+##### Subnet note:
+If your VPC uses the `10.0.0.0/16` CIDR, make sure that any Docker network you create (using the `docker network create --subnet` option) uses a subnet that doesn't conflict with the `10.0.0.0` network.

## Which AWS regions will this work with?

Docker for AWS should work with all regions except for AWS China, which is a little different than the other regions.

## How many Availability Zones does Docker for AWS use?

-All of Amazons regions have at least 2 AZ's, and some have more. To make sure Docker for AWS works in all regions, only 2 AZ's are used even if more are available.
+Docker for AWS determines the correct number of Availability Zones to use based on the region. In regions that support it, we use 3 Availability Zones, and 2 in the rest. We recommend running production workloads only in regions that have at least 3 Availability Zones.

## What do I do if I get `KeyPair error` on AWS?
As part of the prerequisites, you need to have an SSH key uploaded to the AWS region you are trying to deploy to.
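If you'd rather script the recommended VPC and subnet setup above than click through the console, a minimal AWS CLI sketch follows. This is an illustration under assumptions, not part of the committed docs: the region and Availability Zones (`us-east-1a/b/c`), the `mynet` network name, and the overlay subnet are placeholders, and the boolean flag forms may need the `Value` JSON form on older CLI versions.

```bash
# Create the VPC with the recommended CIDR and enable DNS support/hostnames.
VPC_ID=$(aws ec2 create-vpc --cidr-block 172.31.0.0/16 \
  --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames

# Attach an internet gateway to the VPC.
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# Subnet1 in zone A with auto-assigned public IPs; repeat with
# 172.31.32.0/20 (zone B) and 172.31.0.0/20 (zone C) for Subnet2/Subnet3.
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 172.31.16.0/20 --availability-zone us-east-1a \
  --query 'Subnet.SubnetId' --output text)
aws ec2 modify-subnet-attribute --subnet-id "$SUBNET_ID" --map-public-ip-on-launch

# Route 0.0.0.0/0 through the internet gateway and associate the route
# table with each subnet.
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"

# Per the subnet note: on a 10.0.0.0/16 VPC, give Docker networks a
# non-conflicting subnet explicitly ("mynet" is an arbitrary name).
docker network create --driver overlay --subnet 192.168.100.0/24 mynet
```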

docker-for-aws/iam-permissions.md

Lines changed: 12 additions & 1 deletion

@@ -6,7 +6,7 @@ title: Docker for AWS IAM permissions

The following IAM permissions are required to use Docker for AWS.

-Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly. 
+Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly.
If you create and use an IAM role with these permissions for creating the stack, CloudFormation will use the role's permissions instead of your own, using the AWS CloudFormation Service Role feature.

This feature is called [AWS CloudFormation Service Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html?icmpid=docs_cfn_console)

@@ -114,6 +114,7 @@ follow the link for more information.
        "ec2:DisassociateRouteTable",
        "ec2:GetConsoleOutput",
        "ec2:GetConsoleScreenshot",
+       "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:ModifyVpcAttribute",
        "ec2:RebootInstances",
        "ec2:ReleaseAddress",

@@ -309,6 +310,16 @@ follow the link for more information.
      "Resource": [
        "*"
      ]
+    },
+    {
+      "Sid": "Stmt1487169681000",
+      "Effect": "Allow",
+      "Action": [
+        "elasticfilesystem:*"
+      ],
+      "Resource": [
+        "*"
+      ]
    }
  ]
}
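If you go the service-role route described above, the role is passed at stack-creation time. A hedged sketch with the standard AWS CLI (the stack name, template URL, and role ARN are placeholders):

```bash
# Launch the stack under a service role carrying the permissions above,
# so CloudFormation uses the role's permissions instead of the caller's.
aws cloudformation create-stack \
  --stack-name docker-for-aws \
  --template-url https://example.com/docker-for-aws.template \
  --role-arn arn:aws:iam::123456789012:role/docker-for-aws-service-role \
  --capabilities CAPABILITY_IAM
```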

docker-for-aws/index.md

Lines changed: 27 additions & 2 deletions

@@ -12,7 +12,7 @@ redirect_from:
## Quickstart

If your account [has the proper
-permissions](https://docs.docker.com/docker-for-aws/iam-permissions/), you can
+permissions](/docker-for-aws/iam-permissions.md), you can
use the blue button from the stable or beta channel to bootstrap Docker for AWS
using CloudFormation. For more about stable and beta channels, see the
[FAQs](/docker-for-aws/faqs.md#stable-and-beta-channels).

@@ -45,6 +45,31 @@ using CloudFormation. For more about stable and beta channels, see the
  </tr>
</table>

+## Deployment options
+
+There are two ways to deploy Docker for AWS:
+
+- With a pre-existing VPC
+- With a new VPC created by Docker
+
+We recommend allowing Docker for AWS to create the VPC, since this allows Docker to optimize the environment. Installing in an existing VPC requires more work.
+
+### Create a new VPC
+This approach creates a new VPC, subnets, gateways, and everything else needed to run Docker for AWS. It is the easiest way to get started and requires the least amount of work.
+
+All you need to do is run the CloudFormation template, answer some questions, and you are good to go.
+
+### Install with an Existing VPC
+If you need to install Docker for AWS with an existing VPC, you need to do a few preliminary steps. See [recommended VPC and subnet setup](faqs.md#recommended-vpc-and-subnet-setup) for more details.
+
+1. Pick a VPC in a region you want to use.
+
+2. Make sure the selected VPC is set up with an internet gateway, subnets, and route tables (a quick CLI check is sketched after this diff).
+
+3. You need three different subnets, ideally each in its own Availability Zone. If you are running in a region with only two Availability Zones, you will need to put more than one subnet into one of them. For production deployments we recommend only deploying to regions that have three or more Availability Zones.
+
+4. When you launch the Docker for AWS CloudFormation stack, make sure you use the template for existing VPCs. That template prompts you for the VPC and subnets to use for Docker for AWS.
+
## Prerequisites

- Access to an AWS account with permissions to use CloudFormation and to create the following objects. [Full set of required permissions](iam-permissions.md).
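Before launching the existing-VPC template (step 4 above), a quick sanity check of steps 2 and 3 from the AWS CLI might look like this (the VPC ID is a placeholder):

```bash
VPC_ID=vpc-0123456789abcdef0   # placeholder: your VPC's ID

# Step 2: confirm an internet gateway is attached to the VPC.
aws ec2 describe-internet-gateways \
  --filters Name=attachment.vpc-id,Values="$VPC_ID"

# Steps 2-3: list the VPC's subnets with their Availability Zones;
# you want three subnets, ideally spread across three zones.
aws ec2 describe-subnets --filters Name=vpc-id,Values="$VPC_ID" \
  --query 'Subnets[].[SubnetId,AvailabilityZone,CidrBlock]' --output table

# Step 2: confirm a route table sends 0.0.0.0/0 to the internet gateway.
aws ec2 describe-route-tables --filters Name=vpc-id,Values="$VPC_ID"
```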
@@ -140,7 +165,7 @@ Elastic Load Balancers (ELBs) are set up to help with routing traffic to your sw

Docker for AWS automatically configures logging to CloudWatch for containers you run on Docker for AWS. A Log Group is created for each Docker for AWS install, and a log stream for each container.

-`docker logs` and `docker service logs` are not supported on Docker for AWS. Instead, you should check container in CloudWatch.
+The `docker logs` and `docker service logs` commands are not supported on Docker for AWS when using CloudWatch for logs. Instead, check container logs in CloudWatch.

## System containers
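A side note on the logging change above: you can read those CloudWatch logs from the command line with the standard `aws logs` commands. A sketch, where the group and stream names are placeholders (the real names come from your install, one group per install and one stream per container):

```bash
# Find the Log Group created for this Docker for AWS install.
aws logs describe-log-groups

# List the per-container log streams in that group.
aws logs describe-log-streams --log-group-name my-docker-for-aws-group

# Fetch a container's log events from its stream.
aws logs get-log-events \
  --log-group-name my-docker-for-aws-group \
  --log-stream-name my-container-stream
```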

docker-for-aws/persistent-data-volumes.md (new file)

Lines changed: 61 additions & 0 deletions

@@ -0,0 +1,61 @@
---
description: Persistent data volumes
keywords: aws persistent data volumes
title: Docker for AWS persistent data volumes
---

## What is Cloudstor?

Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for AWS. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to the swarm tasks no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by AWS to allow swarm tasks to create/mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements.

## Use Cloudstor

After creating a swarm on Docker for AWS and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group:

```bash
$ docker plugin ls
ID                  NAME                                    DESCRIPTION                       ENABLED
f416c95c0dcc        docker4x/cloudstor:aws-v1.13.1-beta18   cloud storage plugin for Docker   true
```

**Note**: Make note of the plugin tag name; it changes between versions, and yours may be different from the one listed here.

The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver.

### Share the same volume between tasks:

```bash
docker service create --replicas 5 --name ping1 \
  --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \
  alpine ping docker.com
```

Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1`, mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which can cause corruption.

With the above example, you can make sure that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (in the same node or a different node).
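A minimal sketch of that check (the container names are placeholders; find real ones with `docker ps` on each node):

```bash
# On the first node: write through the shared volume.
docker exec ping1-container-a sh -c 'echo hello > /shareddata/test.txt'

# On another node (or the same one): read the same file back
# through the same Cloudstor volume.
docker exec ping1-container-b cat /shareddata/test.txt
```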
### Use a unique volume per task:

```bash
docker service create --replicas 5 --name ping2 \
  --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
  alpine ping docker.com
```

Here the templatized notation is used to indicate to Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the initial creation of the volumes corresponding to the tasks they are attached to (on the nodes the tasks are scheduled on), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the node the task got scheduled on. It's highly recommended that you use the `.Task.Slot` template to make sure task N always gets access to volume N, no matter which node it is executing on or scheduled to.

In the above example, each task has its own volume mounted at `/mydata/`, and the files under there are unique to the task mounting the volume.

### List or remove volumes created by Cloudstor

You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts off on one node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node will continue to list the Cloudstor volume that was created for the task, even though the task no longer executes there and the volume is mounted elsewhere. Do NOT prune or remove the volumes enumerated on a node without any associated tasks, since doing so results in data loss if the same volume is mounted on another node (i.e., the volume shows up in the `docker volume ls` output on another node in the swarm). We can try to detect this and block/handle it post-Beta.
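For example, to see only the Cloudstor-backed volumes on a node before deciding what is safe to remove (a sketch; match the driver tag to your installed plugin version):

```bash
# List only volumes backed by the Cloudstor plugin on this node.
docker volume ls --filter driver=docker4x/cloudstor:aws-v1.13.1-beta18

# Inspect a volume before considering `docker volume rm`.
docker volume inspect sharedvol1
```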
### Configure IO performance

If you want a higher level of IO performance, like the `maxIO` mode for EFS, a `perfmode` parameter can be specified as a `volume-opt`:

```bash
docker service create --replicas 5 --name ping3 \
  --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \
  alpine ping docker.com
```

docker-for-aws/release-notes.md

Lines changed: 15 additions & 0 deletions

@@ -27,6 +27,21 @@ Release date: 01/18/2017

## Beta Channel

+### 1.13.1-beta18
+Release date: 02/16/2017
+
+**New**
+
+- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md)
+- Added a second CloudFormation template that allows you to [install Docker for AWS into a pre-existing VPC](index.md#install-with-an-existing-vpc).
+- Added Swarm-wide support for [persistent storage volumes](persistent-data-volumes.md)
+- Added the following engine labels, usable in scheduling constraints (see the sketch after this diff):
+  - **os** (linux)
+  - **region** (us-east-1, etc)
+  - **availability_zone** (us-east-1a, etc)
+  - **instance_type** (t2.micro, etc)
+  - **node_type** (worker, manager)
+
### 1.13.1-rc2-beta17
Release date: 02/07/2017
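The engine labels added in beta18 can be used as scheduling constraints under the `engine.labels` namespace. For example, a hedged sketch pinning a service to one Availability Zone (the zone value and service name are placeholders):

```bash
# Run tasks only on engines whose availability_zone label matches.
docker service create --name zone-pinned \
  --constraint 'engine.labels.availability_zone == us-east-1a' \
  alpine ping docker.com
```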

docker-for-azure/persistent-data-volumes.md (new file)

Lines changed: 51 additions & 0 deletions

@@ -0,0 +1,51 @@
---
description: Persistent data volumes
keywords: azure persistent data volumes
title: Docker for Azure persistent data volumes
---

## What is Cloudstor?

Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for Azure. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to the swarm tasks no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by Azure to allow swarm tasks to create/mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements.

## Use Cloudstor

After creating a swarm on Docker for Azure and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group:

```bash
$ docker plugin ls
ID                  NAME                                      DESCRIPTION                       ENABLED
f416c95c0dcc        docker4x/cloudstor:azure-v1.13.1-beta18   cloud storage plugin for Docker   true
```

**Note**: Make note of the plugin tag name; it changes between versions, and yours may be different from the one listed here.

The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver.

### Share the same volume between tasks:

```bash
docker service create --replicas 5 --name ping1 \
  --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \
  alpine ping docker.com
```

Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1`, mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which can cause corruption.

With the above example, you can make sure that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (in the same node or a different node).

### Use a unique volume per task:

```bash
docker service create --replicas 5 --name ping2 \
  --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
  alpine ping docker.com
```

Here the templatized notation is used to indicate to Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the initial creation of the volumes corresponding to the tasks they are attached to (on the nodes the tasks are scheduled on), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the node the task got scheduled on. It's highly recommended that you use the `.Task.Slot` template to make sure task N always gets access to volume N, no matter which node it is executing on or scheduled to.

In the above example, each task has its own volume mounted at `/mydata/`, and the files under there are unique to the task mounting the volume.

### List or remove volumes created by Cloudstor

You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts off on one node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node will continue to list the Cloudstor volume that was created for the task, even though the task no longer executes there and the volume is mounted elsewhere. Do NOT prune or remove the volumes enumerated on a node without any associated tasks, since doing so results in data loss if the same volume is mounted on another node (i.e., the volume shows up in the `docker volume ls` output on another node in the swarm). We can try to detect this and block/handle it post-Beta.
