Reorder first part for newer AWS console #21

**Open**: wants to merge 21 commits into `master`.
21 commits (all by sebastian-correa):

- `86332f4` Recommend usage of password manager (Dec 2, 2021)
- `5ada904` Fix typos in set-up-users (Dec 2, 2021)
- `1fc8611` Mention AWS CLI profiles in setup instructions (Dec 2, 2021)
- `52e2361` Fix typos and add headings (Dec 2, 2021)
- `12f8304` Add notice that frontend is incomplete (Dec 2, 2021)
- `f193a3e` Add console and repository sections to intro (Dec 2, 2021)
- `d503163` Add tag/value recommendation for services (Dec 2, 2021)
- `7dfc748` Reorder section 01 (Dec 2, 2021)
- `9779e7e` Change parameter names in section 01 (Dec 2, 2021)
- `0f6addf` Reorder section 02 (Dec 2, 2021)
- `3815378` Change parameter names in section 02 (Dec 2, 2021)
- `78b400f` Reorder section 03 (Dec 2, 2021)
- `73c45c1` Change parameter names in section 03 (Dec 2, 2021)
- `43a162b` Reorder section 04 (Dec 2, 2021)
- `a2ff091` Change parameter names in section 04 (Dec 2, 2021)
- `92a6c9b` Add re deploying instructions to section 04 (Dec 2, 2021)
- `4273df6` Reorder section 05 (Dec 2, 2021)
- `d7224b3` Change parameter names in section 05 (Dec 2, 2021)
- `666d550` Add instructions for seeing deployment logs. (Dec 2, 2021)
- `70ea0cb` Add indication to make changes in other parts too (Dec 3, 2021)
- `5434062` Change API_URL parameter name in rest of files (Dec 3, 2021)
2 changes: 1 addition & 1 deletion workshop/beanstalk/03-finish-integration.md
@@ -12,7 +12,7 @@ Now we need to paste the API URL in the Parameter Store read for the frontend.

1. Go to **EC2** under **Compute**.
2. Click on **Parameter Store** under **SYSTEMS MANAGER SHARED RESOURCES**.
3. Select the parameter **/prod/frontend/API_URL**.
3. Select the parameter **/<your-name>/prod/frontend/API_URL**.
4. Click **Actions**, **Edit Parameter**.
5. In the value field, paste the URL for the API. You may need to remove the trailing `/` so the URL ends in `elasticbeanstalk.com`. If you leave the trailing path separator, all API calls will fail.

2 changes: 1 addition & 1 deletion workshop/elb-auto-scaling-group/03-finishing-up.md
@@ -11,7 +11,7 @@ Finally, we need to re-run CodeBuild so the new bundle on S3 points to the DNS o
2. On left menu select **Load Balancer** under **LOAD BALANCING**.
3. Copy the DNS name of your load balancer that appears under **Description**.
4. On left menu, select **Parameter Store**.
5. Click on `/prod/frontend/API_URL` and on **Actions** select **Edit Parameter**.
5. Click on `/<your-name>/prod/frontend/API_URL` and on **Actions** select **Edit Parameter**.
6. As Value, put `http://` followed by the DNS name you copied in step 3.
7. Click **Save Parameter**.

121 changes: 77 additions & 44 deletions workshop/s3-web-ec2-api-rds/01-serve-website-from-s3.md
@@ -4,32 +4,56 @@

First we need to create a bucket from where we are going to serve the website.

1. On your AWS Console, go to **S3** under **Storage section** and click on Create bucket.
2. Enter the name of the bucket. Remember, bucket names must be unique across all existing accounts and regions in AWS. You cannot rename a bucket after it is created, so chose the name wisely. Amazon suggests using DNS-compliant bucket names. You should read more about this [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules).
3. Pick a region for the S3 bucket. You can chose any region you like, but beware that Amazon has [different pricing](https://aws.amazon.com/s3/pricing/) for storage in different regions. In this case (though it won't matter too much) we will pick `US East (N. Virginia)`.
4. Click Create. We will configure the properties later.
5. Once created, click on the name of your bucket, go to properties, click **Static website hosting** check the option **Use this bucket to host a website**
6. As index and error document put: `index.html`. Later, we will go to the **endpoint url** specified at the top to access our website.
7. Click Save.
8. Go to **Permissions** tab.
9. On the **Block public access** section, click **Edit** , uncheck **Block all public access**, save and confirm.
9. Then go to **Bucket Policy** section and add the following policy to make every object readable:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<your-bucket-name>/*"
}
]
}
```

10. Click Save
1. On your AWS Console, go to **S3** under **Storage section**.
2. Click on Create bucket.
3. Enter the name of the bucket.

Remember, bucket names must be unique across all existing accounts and regions in AWS. You cannot rename a bucket after it is created, so choose the name wisely. Amazon suggests using DNS-compliant bucket names. You should read more about this [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules).

A good bucket name is `<your_name>-workshop`.

4. Pick a region for the S3 bucket.

You can choose any region you like, but beware that Amazon has [different pricing](https://aws.amazon.com/s3/pricing/) for storage in different regions. In this case (though it won't matter too much) we will pick `US East (N. Virginia)`.
5. Click Create. We will configure the properties later.

## Enable static website hosting
Once created, enable static website hosting for this bucket by
1. Clicking on the name of your bucket.
2. Going to Properties.
3. Scrolling down to **Static website hosting**.
4. Clicking the _Edit_ button.
5. Checking the **Enable** option under **Static website hosting**.
6. Checking the **Host static website** option under **Hosting Type**.
6. Putting `index.html` as both the index and error document.
7. Clicking Save.

Note the URL under **Bucket website endpoint** in the **Static website hosting** section. Later, we will use this endpoint URL to access our website.

## Enable and configure public access
Enable public access by going to the **Permissions** tab (you might need to scroll back up from where you are) and:
1. Click **Edit** on the **Block public access** section.
2. Uncheck **Block all public access**.
3. Save and confirm.

Now, still in the **Permissions** tab, make every object readable by
1. Clicking **Edit** on the **Bucket Policy** section.
2. Adding the following policy to make every object readable:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<your-bucket-name>/*"
}
]
}
```
3. Saving.
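The policy above can also be built and applied from code instead of the console. A minimal sketch, assuming boto3-style usage and a placeholder bucket name `alice-workshop` (neither is in the workshop; substitute your own):

```python
import json

def public_read_policy(bucket_name):
    """Build the public-read bucket policy shown in the steps above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# Write the policy to a file; it could then be applied with, for example:
#   aws s3api put-bucket-policy --bucket alice-workshop --policy file://policy.json
with open("policy.json", "w") as f:
    json.dump(public_read_policy("alice-workshop"), f, indent=2)
```

Scripting the policy avoids copy-paste mistakes in the `Resource` ARN when you repeat the workshop with a different bucket name.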


## Add `WEBSITE_BUCKET_NAME` to the Parameters Store
Expand All @@ -38,17 +62,20 @@ Every application needs to have some configurations that inherently will vary be

[AWS Parameters Store](http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html) is a service designed for just this, and we will use it to store variables of our system. This will enable us to store constants and later use them during other steps of the deployment. We will start by storing the bucket name.

1. Go to **S3** under **Storage** **section**.
2. See details of the bucket you just created and copy its name.
3. Go to AWS console **Systems Manager** under **Management & Governance**.
1. Get your bucket's name (the one you created before).

If you don't remember it, you can find all your buckets by going to **S3** under the **Storage** section.
3. In the search bar up top, search for **Systems Manager** (it's under **Management & Governance**).
4. On the left menu select **Parameter Store**.
5. Click **Create Parameter**.
6. Enter `/prod/codebuild/WEBSITE_BUCKET_NAME` as name and a meaningful description of what the parameter means (ie. "name of the website bucket").
6. Enter `/<your-name>/prod/codebuild/WEBSITE_BUCKET_NAME` as the name and a meaningful description of what the parameter means (e.g. "name of the website bucket").
7. Enter `s3://<your-bucket-name>` as value.
8. Click create parameter.

Now we can retrieve the bucket name with `aws ssm get-parameter` like we did [here](/buildspec.frontend.yml). Also, we can use [AWS SSM Agent](http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) to manage our instances' configuration from the AWS web console.

You should _now_ go to [buildspec.frontend.yml](/buildspec.frontend.yml) and change the `BUCKET_PARAMETER_NAME` to `/<your-name>/prod/codebuild/WEBSITE_BUCKET_NAME`. This is necessary for the app to work correctly. Push this to your branch.
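The per-user namespacing of parameter names can be sketched in code. This is an illustration only; the helper below and the name `alice` are hypothetical, not part of the workshop:

```python
def parameter_name(user, env, component, name):
    """Build a namespaced Parameter Store name like the one in step 6."""
    return f"/{user}/{env}/{component}/{name}"

bucket_param = parameter_name("alice", "prod", "codebuild", "WEBSITE_BUCKET_NAME")
print(bucket_param)  # → /alice/prod/codebuild/WEBSITE_BUCKET_NAME

# The same parameter could be created from the CLI instead of the console, e.g.:
#   aws ssm put-parameter --name "/alice/prod/codebuild/WEBSITE_BUCKET_NAME" \
#       --type String --value "s3://alice-workshop"
```

Prefixing every parameter with your name keeps workshop participants sharing one AWS account from overwriting each other's values.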


## Create a policy to get full access to the S3 website bucket

@@ -61,10 +88,12 @@ With [AWS Policies](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_polic
5. Search and select `AmazonS3FullAccess` (this is a premade policy, but you can also build your own).
6. Click the **JSON** tab and change the `Resource` value to `["arn:aws:s3:::<your-bucket-name>", "arn:aws:s3:::<your-bucket-name>/*"]` in the JSON content.
7. Click **Review policy**
8. Choose a name for the policy (eg. S3WebsiteFullAccess) and click in Create Policy.
8. Choose a name for the policy (e.g. `<YourName>S3WebsiteFullAccess`) and click Create Policy.

Now we have a policy that allows full access (list, write, update, delete, etc.) to our website bucket. Let's see how we can use it in the following section.

Don't fret, only a particular _role_ will have this policy attached to it. It's not like _everyone_ will have full access to your S3 bucket (that would be dangerous). More on this later.
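Narrowing the `Resource` of the premade policy, as in step 6, amounts to building a document like the one below. A sketch under the assumption that `s3:*` is the action granted (the real `AmazonS3FullAccess` policy may include additional actions); the bucket name is a placeholder:

```python
import json

def s3_full_access_policy(bucket_name):
    """S3 full access scoped to a single bucket and its objects, as in step 6."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",        # the bucket itself (list, etc.)
                    f"arn:aws:s3:::{bucket_name}/*",      # every object in it
                ],
            }
        ],
    }

print(json.dumps(s3_full_access_policy("alice-workshop"), indent=2))
```

Note that both ARNs are needed: the bare bucket ARN covers bucket-level actions like listing, while the `/*` ARN covers object-level actions like reading and writing files.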


## Create a project in CodeBuild to build and deploy the frontend

@@ -74,21 +103,24 @@ Follow these steps to get it ready:

1. Go to **CodeBuild** under the **Developer Tools** section.
2. Click on Get Started (or Create Project if you have other projects).
3. Choose a project name and write a description (optional).
3. Choose a project name and write a description (optional). A good name is `<your-name>-workshop`.
4. On the Source section:
1. Choose **Github** as the source provider.
2. Select an option for the repository.
3. Connect Github with AWS if neccesary.
4. Fill the repository URL or choose one repository from your Github account.
1. Choose **GitHub** as the source provider.
2. Select an option for the repository (probably _Public repository_).
3. Connect GitHub with AWS if necessary.
4. Fill in the repository URL or choose a repository from your GitHub account.
5. Write your branch's name under **Source version**.
5. On the Environment section:
1. Choose Ubuntu as the OS and Standard as the Runtime.
2. Select `aws/codebuild/standard:1.0` as the Image and latest Image Version.
1. Choose Ubuntu as the OS and Standard as the Runtime.
2. Select `aws/codebuild/standard:5.0` as the Image and latest Image Version.
6. In the Service Role section:
1. Select New service role.
2. Choose a name for the Role and name it `codebuild-aws-workshop-service-role`.
7. In the BuildSpec section choose `Use a Buildspec file` and below name to `buildspec.frontend.yml` (our yaml file with the steps to follow).
1. Select New service role.
1. Name it `<your-name>-codebuild-aws-workshop-service-role`.
7. In the BuildSpec section:
1. Choose `Use a Buildspec file`.
2. Set the name to `buildspec.frontend.yml` (our YAML file with the build steps).
8. In the Artifacts section select _No artifacts_.
9. Click on Continue.
9. Click on Create Build Project.
10. Click on Save.
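The console settings above roughly map onto a boto3 `create_project` payload. The sketch below only builds the payload; the names, repository URL, branch, and role ARN are all placeholders, and the exact set of fields AWS requires may differ:

```python
def codebuild_project(user, repo_url, branch, role_arn):
    """Sketch of the console choices above as a CodeBuild project payload."""
    return {
        "name": f"{user}-workshop",
        "source": {
            "type": "GITHUB",
            "location": repo_url,
            "buildspec": "buildspec.frontend.yml",  # our build steps file
        },
        "sourceVersion": branch,                    # your branch name
        "artifacts": {"type": "NO_ARTIFACTS"},
        "environment": {
            "type": "LINUX_CONTAINER",              # Ubuntu / Standard runtime
            "image": "aws/codebuild/standard:5.0",
            "computeType": "BUILD_GENERAL1_SMALL",
        },
        "serviceRole": role_arn,
    }

project = codebuild_project(
    "alice",
    "https://github.com/alice/aws-workshop",
    "alice-branch",
    "arn:aws:iam::123456789012:role/alice-codebuild-aws-workshop-service-role",
)
# boto3.client("codebuild").create_project(**project) would create it (not run here).
```

Seeing the settings as one payload makes it easier to spot which fields (branch, buildspec name, image version) you changed from the defaults.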

Now, we have created a CodeBuild application. We won’t be able to run it though, because we don’t have permissions to add files to our S3 bucket. That is why earlier we created a policy and also something called a "role". For everything to work, we need to attach the policy to the role.
@@ -104,8 +136,9 @@ Earlier, we created a policy to allow full access to our S3 bucket and assigned
1. Go to IAM under Security, Identity & Compliance.
2. Click in Roles.
3. You should see the role created in the CodeBuild project creation, select it.
4. Click Attach Policy.
5. Search for the Policy for full access to the S3 website bucket, select it and then click Attach Policy.
4. Click Attach Policies.
5. Search for the Policy for full access to the S3 website bucket (`<YourName>S3WebsiteFullAccess`) and select it.
6. Click Attach Policy.
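Attaching the policy can also be done outside the console. The account ID, role name, and policy name below are placeholders; the ARN format for customer-managed policies is the only fixed part:

```python
def customer_policy_arn(account_id, policy_name):
    """ARN of a customer-managed IAM policy like <YourName>S3WebsiteFullAccess."""
    return f"arn:aws:iam::{account_id}:policy/{policy_name}"

arn = customer_policy_arn("123456789012", "AliceS3WebsiteFullAccess")
print(arn)  # → arn:aws:iam::123456789012:policy/AliceS3WebsiteFullAccess

# With the ARN in hand, the attachment could be done from the CLI, e.g.:
#   aws iam attach-role-policy \
#       --role-name alice-codebuild-aws-workshop-service-role \
#       --policy-arn arn:aws:iam::123456789012:policy/AliceS3WebsiteFullAccess
```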

**SSM read access**

84 changes: 49 additions & 35 deletions workshop/s3-web-ec2-api-rds/02-EC2-instances.md
@@ -6,64 +6,78 @@ First we will create a role to allow our EC2 instances access to SSM:

1. Go to **IAM** under **Security, Identity & Compliance**.
2. Go to Role section and click Create Role.
3. In 'Select type of trusted entity' select **AWS Service**, then **EC2** and click next.
3. In 'Select type of trusted entity' select **AWS Service**, then **EC2** and click _Next: Permissions_.
4. Search for `AmazonSSMReadOnlyAccess`, select it and click next.
5. Lets call it `ApiRole`. Click create Role.
5. Let's call it `<YourName>ApiRole`.
6. Click create Role.

We have already created entries in the Parameter Store. In the future we will need encrypted variables, like the password for our database. For this, we will create an encryption key to encrypt and decrypt those values. That encryption key will be attached to our admin user and to the role we just created, so only services that are set up to assume the role can get access to the decrypted values. You can read more about SSM and secure data [here](https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/).

1. Go to **Key Management Service (KMS)** under **Security, Identity & Compliance**.
2. Select **Create key**.
3. Select symmetric and click next.
3. Enter `workshopkey` as alias and a meaningful description like "this is the encryption key for the AWS workshop".
3. Enter `<your-name>-workshopkey` as alias and a meaningful description like "this is the encryption key for the AWS workshop".
4. Click next step.
5. Select both your AWS CLI and console users and click next.
6. Select your EC2 Role and click next.
7. Click Finish.
5. Select both your AWS CLI and console users as key administrators. If you are using your Tryo Playground account, it's just 1 user that can do both.
6. Click next.
7. Select your EC2 Role (`<YourName>ApiRole`) and click next.
8. Click Finish.

In the future, if an EC2 instance with our new role wants to access an encrypted parameter, AWS will automatically decrypt it!
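An encrypted parameter created with this key would look roughly like the payload below. The user, parameter path, value, and alias are all placeholders; only the `SecureString` type and `alias/` prefix are fixed AWS conventions:

```python
def secure_parameter(user, name, value, key_alias):
    """Sketch of an encrypted Parameter Store entry using our KMS key."""
    return {
        "Name": f"/{user}/prod/api/{name}",   # hypothetical path for API secrets
        "Value": value,
        "Type": "SecureString",               # stored encrypted, not plain text
        "KeyId": f"alias/{key_alias}",        # the key we just created
    }

param = secure_parameter("alice", "DB_PASSWORD", "s3cret", "alice-workshopkey")
# boto3.client("ssm").put_parameter(**param) would store it encrypted; an EC2
# instance running under the role gets it decrypted transparently.
```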

## Launch your first EC2 instance

We are ready to launch our first EC2 instance. We will create a standard EC2 instance, add a [startup script](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) (which will run automatically when the instance boots) and finally create a [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) that will control the outbound and inbound in our EC2 instances.

1. Go to the **EC2** under **Compute section**, and in the top right corner, you can pick the region we are going to use. In this case, we will be using the same region that we used for the S3 bucket setup earlier, that is, `US East (N. Virginia)`.
1. Go to **EC2** under **Compute section**.

In the top right corner, you can pick the region we are going to use. In this case, we will be using the same region that we used for the S3 bucket setup earlier, that is, `US East (N. Virginia)`.
2. Click on _Instances_ in the left panel.
2. Click on Launch Instance.
3. Look for Ubuntu Server (make sure it is Free tier eligible) and click Select.
4. Select `t2.micro` and then click on Next: Configure Instance Details.
5. Select our `ApiRole` on **IAM role**.
6. On Advanced Details, select "As text" in User data and then paste the following bash script:
```
#!/bin/bash
export LC_ALL=C.UTF-8
apt update
apt -y install ruby
cd /home/ubuntu
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
```

Be careful, if you leave spaces at the beginning of the script it will not work. So NO SPACES!
If you are using another region, the bucket name in the `wget` line needs to be modified (see [here](https://docs.aws.amazon.com/codedeploy/latest/userguide/resource-kit.html#resource-kit-bucket-names)).

7. Click Next: Add Storage.
8. Leave the default settings and click Next: Add Tags.
9. Click Add Tag.
10. Fill Key with `service` and in Value with `api`.
11. Add another tag with Key `environment` and Value `prod`. These keys will help us identify our EC2 instances running the API later.
12. Click on Next: Configure Security Group.
13. Make sure the _Create a new security group_ option is selected and write a descriptive name on the _Security group name:_ field. You cannot rename it later so choose the name wisely.
14. Click Add Rule.
15. In port range put `9000` and in Source `0.0.0.0/0`, and add a meaningful description. This will enable incoming traffic on port 9000 from every IP, so you can "contact" your instance from the outside. If you pay attention, by default we also get a rule allowing inbound traffic on port 22, which we will use for SSH'ing to the instance. Also by default, outbound traffic (that is, traffic originating from your instance) will be allowed to any destination and port, but you can restrict that later by editing the outbound rules for the security group.
4. Select `t2.micro`.
5. Click on Next: Configure Instance Details. Configure the following:

1. Select our `<YourName>ApiRole` on **IAM role**.
2. On Advanced Details, select "As text" in User data and then paste the following bash script:
```
#!/bin/bash
export LC_ALL=C.UTF-8
apt update
apt -y install ruby
cd /home/ubuntu
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
```

Be careful, if you leave spaces at the beginning of the script it will not work. So NO SPACES!
If you are using another region, the bucket name in the `wget` line needs to be modified (see [here](https://docs.aws.amazon.com/codedeploy/latest/userguide/resource-kit.html#resource-kit-bucket-names)).
6. Click Next: Add Storage.
1. Leave the default settings.
7. Click Next: Add Tags. These keys will help us identify our EC2 instances running the API later.
1. Click Add Tag.
2. Fill Key with `service` and Value with `api`.
3. Add another tag with Key `environment` and Value `prod`.
8. Click on Next: Configure Security Group.
1. Make sure the _Create a new security group_ option is selected.
2. Write a descriptive name on the _Security group name:_ field. You cannot rename it later so choose the name wisely. A good name is `<your-name>-workshop-ec2-security-group`.
3. Click Add Rule.
4. In port range put `9000` and in Source `0.0.0.0/0`, and add a meaningful description.

This will enable incoming traffic on port 9000 from every IP, so you can "contact" your instance from the outside.

If you pay attention, by default we also get a rule allowing inbound traffic on port 22, which we will use for SSH'ing to the instance.

Also by default, outbound traffic (that is, traffic originating from your instance) will be allowed to any destination and port, but you can restrict that later by editing the outbound rules for the security group.
16. Click Review and Launch.
17. Click Launch.
18. When asked to select an existing key pair, choose `create a new key pair`, name it `aws_workshop` and click download. Store it in a secure place (`~/.ssh` is good, but make sure you `chmod 400` the PEM file so only your user can read it), we will use it to SSH into the instances during the whole workshop.
18. When asked to select an existing key pair, choose `create a new key pair`, name it `<your_name>_aws_workshop` and click download. Store it in a secure place (`~/.ssh` is good, but make sure you `chmod 400` the PEM file so only your user can read it); we will use it to SSH into the instances throughout the workshop.
19. Click Launch Instances.
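The "NO SPACES" warning about the user-data script is easy to trip over when the script is embedded in other code. A small sketch of one way to keep it safe: write the script indented for readability and strip the indentation before use (`textwrap.dedent` is a standard-library helper, not part of the workshop):

```python
import textwrap

# The user-data script from step 5.2, indented inside this file for readability;
# dedent removes the common leading whitespace so EC2 receives it flush-left.
user_data = textwrap.dedent("""\
    #!/bin/bash
    export LC_ALL=C.UTF-8
    apt update
    apt -y install ruby
    cd /home/ubuntu
    wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto
    """)

# Sanity check: no line may start with a space, or the script will not run.
assert all(not line.startswith(" ") for line in user_data.splitlines())
print(user_data.splitlines()[0])  # → #!/bin/bash
```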

## Add Security Group inbound rule
1. Go to **Security Groups** under **Network & Security** (still on EC2 service).
2. Open the Security Group you created when launching the EC2 (step 13).
2. Open the Security Group you created when launching the EC2 (`<your-name>-workshop-ec2-security-group`).
3. Click **Edit inbound rules**.
4. Add a new rule with type `PostgreSQL` (port `5432` should be set automatically). As source select the security group itself (start typing the name and select the one suggested). Note that this rule could not be added on the previous step because the security group didn't exist at that point.
5. Click **Save rules**.
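The self-referencing PostgreSQL rule from step 4 has a specific shape when expressed as an API payload: the source is a `UserIdGroupPairs` entry pointing at the group's own ID rather than a CIDR block. A sketch with a placeholder group ID:

```python
def postgres_self_rule(security_group_id):
    """Ingress rule allowing PostgreSQL (5432) from the security group itself."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        # Source is the group itself, so only members of this group can connect.
        "UserIdGroupPairs": [{"GroupId": security_group_id}],
    }

rule = postgres_self_rule("sg-0123456789abcdef0")
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[rule]) would apply it.
```

Because the source is the group rather than an IP range, the rule automatically covers every instance in the group, including ones launched later.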