AWS Kubernetes is a Kubernetes cluster deployed using the kubeadm tool. It provides full integration with AWS and is able to handle ELB load balancers, EBS disks, Route53 domains, etc.
- 22.8.2017: Update Kubernetes and Kubeadm to 1.7.4
- 30.8.2017: New addon - Fluentd + ElasticSearch + Kibana
- AWS Kubernetes deploys into an existing VPC / public subnet. If you don't have a VPC / subnet yet, you can use this configuration to create one.
- To deploy AWS Kubernetes, the only local dependency is Terraform. Kubeadm is used only on the EC2 hosts and doesn't have to be installed locally.
The configuration is done through Terraform variables. An example tfvars file named `example.tfvars` is part of this repo. Change the variables to match your environment / requirements before running `terraform apply ...`.
Option | Explanation | Example |
---|---|---|
`aws_region` | AWS region which should be used | `eu-central-1` |
`cluster_name` | Name of the Kubernetes cluster (also used to name different AWS resources) | `my-aws-kubernetes` |
`master_instance_type` | AWS EC2 instance type for master | `t2.medium` |
`worker_instance_type` | AWS EC2 instance type for worker | `t2.medium` |
`ssh_public_key` | SSH key to connect to the remote machine | `~/.ssh/id_rsa.pub` |
`master_subnet_id` | Subnet ID where master should run | `subnet-8d3407e5` |
`worker_subnet_ids` | List of subnet IDs where workers should run | `[ "subnet-8d3407e5" ]` |
`min_worker_count` | Minimal number of worker nodes | `3` |
`max_worker_count` | Maximal number of worker nodes | `6` |
`hosted_zone` | DNS zone which should be used | `my-domain.com` |
`hosted_zone_private` | Is the DNS zone public or private | `false` |
`addons` | List of addons which should be installed | `[ "https://..." ]` |
`tags` | Tags which should be applied to all resources | see `example.tfvars` file |
`tags2` | Tags in second format which should be applied to AS groups | see `example.tfvars` file |
`ssh_access_cidr` | List of CIDRs from which SSH access is allowed | `[ "0.0.0.0/0" ]` |
`api_access_cidr` | List of CIDRs from which API access is allowed | `[ "0.0.0.0/0" ]` |
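For illustration, a minimal tfvars file built from the values above might look like the sketch below. All values are the illustrative examples from the table (placeholders for your own region, subnets, domain and key); `addons`, `tags` and `tags2` are omitted here, so see `example.tfvars` for their exact format.

```hcl
# Minimal sketch – values are the illustrative examples from the table above.
aws_region           = "eu-central-1"
cluster_name         = "my-aws-kubernetes"
master_instance_type = "t2.medium"
worker_instance_type = "t2.medium"
ssh_public_key       = "~/.ssh/id_rsa.pub"
master_subnet_id     = "subnet-8d3407e5"
worker_subnet_ids    = ["subnet-8d3407e5"]
min_worker_count     = 3
max_worker_count     = 6
hosted_zone          = "my-domain.com"
hosted_zone_private  = false
ssh_access_cidr      = ["0.0.0.0/0"]
api_access_cidr      = ["0.0.0.0/0"]
# addons, tags and tags2 omitted – see example.tfvars for their format
```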
If you want to run workers in additional / different subnet(s) than the master, you have to tag these subnets with `kubernetes.io/cluster/{cluster_name}=shared`, for example `kubernetes.io/cluster/my-aws-kubernetes=shared`. This tagging is not done by this script! The subnets should belong to the same VPC as the master subnet. Running workers in different subnets across several AWS availability zones gives you a more resilient setup, which might help your applications survive issues affecting a single AWS AZ. Keep in mind that the master is still a single node.
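Since the script does not tag the subnets for you, one way to add the tag to an existing subnet is with the AWS CLI; the subnet ID below is just the illustrative ID from the table above.

```bash
# Tag an additional worker subnet so the cluster is allowed to use it
# (subnet ID is illustrative – replace it with your own).
aws ec2 create-tags \
  --resources subnet-8d3407e5 \
  --tags Key=kubernetes.io/cluster/my-aws-kubernetes,Value=shared
```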
To create the AWS Kubernetes cluster:
- Export your AWS credentials into the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- Apply the Terraform configuration: `terraform apply --var-file example.tfvars`
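Put together, a create run might look like the sketch below; the credential values are placeholders, and the same exports are needed before `terraform destroy`.

```bash
# Placeholder credentials – use your own IAM access key pair.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

# Create / update the cluster from the example variable file
terraform apply --var-file example.tfvars
```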
To delete the AWS Kubernetes cluster:
- Export your AWS credentials into the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- Destroy the Terraform configuration: `terraform destroy --var-file example.tfvars`
Currently, the following addons are supported:
- Kubernetes dashboard
- Heapster for resource monitoring
- Storage class for automatic provisioning of persistent volumes
- External DNS (Replaces Route53 mapper)
- Ingress
- Autoscaler
- Logging with Fluentd + ElasticSearch + Kibana
The addons will be installed automatically based on the Terraform variables.
Custom addons can be added if needed. For every URL in the `addons` list, the initialization scripts will automatically call `kubectl apply -f <Addon URL>` to deploy it. The cluster uses RBAC, so the custom addons have to be RBAC-ready.
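For example, a custom addon could be appended to the `addons` variable in your tfvars file; in the sketch below the existing entries are left truncated as in the table above, and the last URL is a purely hypothetical custom manifest.

```hcl
addons = [
  "https://...",  # addon URLs shipped in example.tfvars (truncated here)
  "https://raw.githubusercontent.com/my-org/my-addons/master/my-addon.yaml",  # hypothetical custom addon (must be RBAC-ready)
]
```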
If you need to tag resources created by your Kubernetes cluster (EBS volumes, ELB load balancers, etc.), check this AWS Lambda function, which can do the tagging.