The following README will guide you on how to use the provided Terraform plan to deploy an Amazon Elastic Kubernetes Service (EKS) cluster and connect it as an Azure Arc cluster resource.
- Clone this repo
git clone https://github.com/microsoft/azure_arc.git
- Install the wget package (required for the EKS module)
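The guide does not prescribe an install method; as an illustration, on common platforms wget can be installed through the system package manager:
sudo apt-get update && sudo apt-get install -y wget    # Debian/Ubuntu
sudo yum install -y wget                               # RHEL/CentOS/Amazon Linux
brew install wget                                      # macOS (Homebrew)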
- Install or update Azure CLI. Azure CLI should be running version 2.7 or later. Use
az --version
to check your currently installed version.
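If your installed version is older, one option is the CLI's in-place upgrade command (available in Azure CLI 2.11.0 and later; shown here as an alternative to reinstalling):
az upgrade    # upgrades Azure CLI in place, requires Azure CLI 2.11.0+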
- Create Azure Service Principal (SP)
To connect a Kubernetes cluster to Azure Arc, an Azure Service Principal assigned the "Contributor" role is required. To create it, log in to your Azure account and run the below commands (this can also be done in Azure Cloud Shell).
az login
az ad sp create-for-rbac -n "<Unique SP Name>" --role contributor
For example:
az ad sp create-for-rbac -n "http://AzureArcK8s" --role contributor
Output should look like this:
{ "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX", "displayName": "AzureArcK8s", "name": "http://AzureArcK8s", "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX", "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX" }
Note: It is optional but highly recommended to scope the SP to a specific Azure subscription and Resource Group
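As an illustration of such scoping (the subscription ID and Resource Group name below are placeholders), the scope can be passed with the --scopes parameter:
az ad sp create-for-rbac -n "http://AzureArcK8s" --role contributor --scopes "/subscriptions/<Subscription ID>/resourceGroups/<Resource Group Name>"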
- Enable your subscription with the two resource providers for Azure Arc enabled Kubernetes
Registration is an asynchronous process, and registration may take approximately 10 minutes.
az provider register --namespace Microsoft.Kubernetes
Registering is still on-going. You can monitor using 'az provider show -n Microsoft.Kubernetes'
az provider register --namespace Microsoft.KubernetesConfiguration
Registering is still on-going. You can monitor using 'az provider show -n Microsoft.KubernetesConfiguration'
You can monitor the registration process with the following commands:
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
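If you would rather have the CLI block until registration finishes, the register command also accepts an optional --wait flag (shown here as an alternative, not part of the original steps):
az provider register --namespace Microsoft.Kubernetes --wait                  # returns only once registration completes
az provider register --namespace Microsoft.KubernetesConfiguration --wait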
- Install the Azure Arc for Kubernetes CLI extensions connectedk8s and k8sconfiguration:
az extension add --name connectedk8s
az extension add --name k8sconfiguration
Note: If you already used this guide before and/or have the extensions installed, use the following commands instead:
az extension update --name connectedk8s
az extension update --name k8sconfiguration
- Create AWS User IAM Key
An access key grants programmatic access to your resources. To create an AWS Access Key for a user:
- Navigate to the IAM Access page.
- Select Users from the side menu.
- Select the User you want to create the access key for.
- Select the Security credentials tab of the selected User.
- Under Access Keys, select Create Access Keys; this lets you download the generated credentials as a .csv file.
- The popup window will show you the Access key ID and Secret access key. Save both of these values to configure the AWS CLI later.
- Set your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, representing your AWS Access Key and AWS Secret Key.
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_DEFAULT_REGION="us-west-2"
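As an optional sanity check (not part of the original steps), you can confirm the exported credentials are valid with the AWS CLI before running Terraform:
aws sts get-caller-identity    # prints the account and user ARN for the active credentials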
- Navigate to the folder that has the EKS Terraform plan files.
cd azure_arc_k8s_jumpstart/eks/terraform
- Initialize Terraform
Run the terraform init command, which will initialize Terraform, creating the state file to track our work:
terraform init
- Deploy EKS
Run the terraform apply --auto-approve command and wait for the plan to finish.
- Setting Up kubectl
You will need the configuration output from Terraform in order to use kubectl to interact with your new cluster.
Create your kube configuration directory, and output the configuration from Terraform into the config file using the Terraform output command:
mkdir ~/.kube/
terraform output kubeconfig > ~/.kube/config
Check to see if the cluster is discoverable by kubectl by running:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
- Configure EKS Nodes to communicate with the EKS Control Plane
Now let's add the ConfigMap to the cluster from Terraform as well. The ConfigMap is a Kubernetes configuration, in this case for granting access to our EKS cluster. This ConfigMap allows our EC2 instances in the cluster to communicate with the EKS master, as well as allowing our user account access to run commands against the cluster. You'll run the Terraform output command to a file, and the kubectl apply command to apply that file:
terraform output config_map_aws_auth > configmap.yml
kubectl apply -f configmap.yml
Once this is complete, you should see your nodes from your autoscaling group either starting to join or joined to the cluster. Once the second column reads Ready, the node can have deployments pushed to it. Again, your output may vary here:
kubectl get nodes -o wide
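If the nodes have not reached Ready yet, you can keep watching them until they do (an optional convenience, not part of the original flow):
kubectl get nodes -o wide --watch    # streams node status updates; press Ctrl+C to stop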
- Finished Deploying EKS
Once done, you will have a ready EKS cluster under the Elastic Kubernetes Service section in your AWS console.
Now that you have a running EKS cluster, let's connect the EKS cluster to Azure Arc by:
- Login to the previously created Service Principal
az login --service-principal -u mySpnClientId -p mySpnClientSecret --tenant myTenantID
- Create a resource group
az group create --name arceksdemo -l EastUS -o table
Note: Azure Arc enabled Kubernetes is currently supported in East US and West Europe
- Deploy Arc binaries using Azure CLI:
az connectedk8s connect --name arceksdemo --resource-group arceksdemo --location 'eastus' --tags 'Project=jumpstart_azure_arc_k8s'
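Once the command completes, you can optionally confirm the connection from the CLI as well (an extra verification step, not in the original guide):
az connectedk8s list --resource-group arceksdemo --output table    # lists Arc connected clusters in the resource group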
Upon completion, you will have your EKS cluster connected as a new Azure Arc Kubernetes cluster resource in a new Resource Group.
To delete the deployment in Azure, the most straightforward way is to delete the cluster or the Resource Group via the Azure Portal or through the CLI.
az group delete --name arceksdemo
In your AWS console, select the cluster and delete it. Alternatively, you can use the terraform destroy --auto-approve command.