This repo documents a process for creating an OpenShift cluster in AWS on a private VPC while minimizing the permissions required to create the cluster. The result is an operational cluster; however, it will NOT have many of the AWS integrations that are available with the standard IPI install process. This is also a very manual process that requires editing multiple files to complete, so be sure to have a good editor handy.
Software

You will need the following software to follow this install process:
- "openshift-install" - https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.6/openshift-install-linux.tar.gz
- "oc" - https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.6/openshift-client-linux.tar.gz
- "aws cli" - https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html
You will need the following information in order to proceed with this install process:
- OpenShift Pull Secret - https://cloud.redhat.com/openshift/install/pull-secret
- SSH Public Key
- RHCOS AMI - https://docs.openshift.com/container-platform/4.6/installing/installing_aws/installing-aws-user-infra.html#installation-aws-user-infra-rhcos-ami_installing-aws-user-infra
- AWS VPC ID - this will look something like "vpc-026c4e9f12adc018d"
- AWS Private Subnet ID(s) - this will look something like "subnet-0655610aa6217f120". You may have multiple of these which will help with HA.
- EBS Encryption Key - This is used to encrypt the boot disk at build time. This will look something like "alias/encryptionkeyname"
- AWS Tags - Review the template json and yaml for AWS resource tags that may apply.
- Base Domain name - This is something like example.com
- Cluster Name - This would be the cluster name you are building and will be used to create the fully qualified domain name eg. cfbuild.example.com
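It can help to collect these values as shell variables up front so later edits stay consistent; everything below is a placeholder, not a real value:

    # Placeholders only; substitute your own values
    export BASE_DOMAIN="example.com"
    export CLUSTER_NAME="cfbuild"
    export VPC_ID="vpc-026c4e9f12adc018d"
    export PRIVATE_SUBNETS="subnet-0655610aa6217f120"
    export EBS_KMS_KEY="alias/encryptionkeyname"
    export RHCOS_AMI="ami-xxxxxxxxxxxxxxxxx"   # region-specific, see the RHCOS AMI link above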
Leveraging the install-config.yaml template in the root directory of this repo, update the following fields:
- baseDomain
- metadata.name
- pullSecret
- sshKey
Review the "networking" section and ensure that the IP address ranges do not conflict in your network. You will also need to update networking.machineNetwork.cidr to match the network you are deploying your machines on.
Proxy Config

If you need to use a proxy to reach outside resources, add a proxy section to the install-config.yaml file as shown below:
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port>
      httpsProxy: http://<username>:<pswd>@<ip>:<port>
      noProxy: example.com
    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <MY_TRUSTED_CA_CERT>
      -----END CERTIFICATE-----
To get the CA trust bundle from your proxy, run:

    openssl s_client -connect <proxy_ip>:<port> -showcerts

Copy the signing certificate into the additionalTrustBundle above.
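A hedged one-liner for isolating just the PEM blocks from that output (the host and port are placeholders):

    # Print only the certificate blocks from the proxy's chain
    openssl s_client -connect <proxy_ip>:<port> -showcerts </dev/null 2>/dev/null \
      | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'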
- mkdir install && cp install-config.yaml install
- openshift-install create manifests --dir=install
- vi install/manifests/cluster-scheduler-02-config.yml
- set spec.mastersSchedulable to false
- edit both install/manifests/cluster-ingress-02-config.yml and install/manifests/cluster-ingress-default-ingresscontroller.yaml
- update endpointPublishingStrategy to be "HostNetwork" in both files. BE SURE NOT TO MODIFY YOUR "domain:" entry:

    spec:
      domain: apps.cfbuild.example.com
      endpointPublishingStrategy:
        type: HostNetwork
- openshift-install create ignition-configs --dir=install
- jq -r .infraID install/metadata.json
- update the following files with the new cluster name (the infraID from the previous step): bootstrap.json (x2), control-plane.json, nw_lb.json, sg_roles.json, worker.json
- update steps 16 to 18 below (the S3 bucket creation steps) with the new cluster name
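Since the infraID (e.g. "cfbuild-dtxlg" in the examples below) shows up in several file edits and in the S3 bucket name, it can help to capture it once; the variable name is just a convention:

    # Capture the generated infrastructure ID for reuse in later steps
    INFRA_ID=$(jq -r .infraID install/metadata.json)
    echo "${INFRA_ID}"    # e.g. cfbuild-dtxlg

The S3 bucket created below (cfbuild-dtxlg-infra) follows the pattern ${INFRA_ID}-infra.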
- validate settings in nw_lb.json
  - this should not change for RH testing
- cd cf
- export AWS_DEFAULT_OUTPUT="text"
- aws cloudformation create-stack --stack-name cfbuildint-nwlb --template-body file://nw_lb.yaml --parameters file://nw_lb.json --capabilities CAPABILITY_NAMED_IAM
- aws cloudformation describe-stacks --stack-name cfbuildint-nwlb
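To grab the load balancer DNS names needed for the DNS steps below, you can query the stack outputs; the exact output keys depend on nw_lb.yaml, so inspect the table rather than assuming key names:

    # List every output of the stack (key names depend on the template)
    aws cloudformation describe-stacks --stack-name cfbuildint-nwlb \
      --query 'Stacks[0].Outputs' --output table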
-------- External DNS Steps ---------

If you are using an external DNS server, you will need to create CNAMEs that point to the AWS load balancers that were created:
- point both api and api-int at the "int" LB
- point *.apps. at the "ingress" LB
- aws cloudformation create-stack --stack-name cfbuildint-sgroles --template-body file://sg_roles.yaml --parameters file://sg_roles.json
- aws cloudformation describe-stacks --stack-name cfbuildint-sgroles
- cd ../install
- aws s3 mb s3://cfbuild-dtxlg-infra
- aws s3 cp bootstrap.ign s3://cfbuild-dtxlg-infra/bootstrap.ign --acl public-read
- aws s3 ls s3://cfbuild-dtxlg-infra
- update bootstrap.json with new S3 bucket
- cd ../cf
- update bootstrap.json with the updated ARNs (x3)
- update bootstrap.json with the updated SecurityGroups
- aws cloudformation create-stack --stack-name cfbuildint-bootstrap --template-body file://bootstrap.yaml --parameters file://bootstrap.json
- update control-plane.json with the updated ARNs (x3)
- update control-plane.json with the updated SecurityGroup
- update control-plane.json with the updated IAM Profile
- get the certificate authority from the master.ign file
- update control-plane.json with the updated certificate authority
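A sketch of extracting the certificate authority with jq; the path follows the Ignition v3 schema, so verify it matches your file before pasting the value into control-plane.json:

    # The CA is embedded as a base64 data: URL in the ignition security section
    jq -r '.ignition.security.tls.certificateAuthorities[0].source' install/master.ign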
- get a copy of the master.ign from https://api-int.cfbuild.example.com:22623/config/master
- update master.ign version to 3.1.0
- upload to s3
- aws s3 cp master.ign s3://cfbuild-dtxlg-infra/master.ign --acl public-read
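A hedged fetch-check-upload sequence for this step; the Accept header asks the machine-config server for a spec-3.1.0 payload, and -k is needed because the endpoint uses the cluster-internal CA:

    # Fetch the rendered master config from the machine-config server
    curl -k -H 'Accept: application/vnd.coreos.ignition+json; version=3.1.0' \
      -o master.ign https://api-int.cfbuild.example.com:22623/config/master
    jq -r .ignition.version master.ign    # confirm it reads 3.1.0
    aws s3 cp master.ign s3://cfbuild-dtxlg-infra/master.ign --acl public-read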
- aws cloudformation create-stack --stack-name cfbuildint-controlplane --template-body file://control-plane.yaml --parameters file://control-plane.json
- cd ..
- openshift-install wait-for bootstrap-complete --dir=install
- aws cloudformation delete-stack --stack-name cfbuildint-bootstrap
- get a copy of the worker.ign from https://api-int.cfbuild.example.com:22623/config/worker
- update worker.ign version to 3.1.0
- upload to s3
- aws s3 cp worker.ign s3://cfbuild-dtxlg-infra/worker.ign --acl public-read
- update worker.json with the new SecurityGroup ID
- update worker.json with the new IAM Profile
- update worker.json with the new CertificateAuthority entry
- cd cf

NOTE: If you want your workers on multiple subnets/AZs, be sure to create multiple "worker.json" files and update the subnets for each, as sketched below.
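A sketch of that per-subnet variant, assuming you have copied worker.json to worker0.json, worker1.json, and worker2.json with a different subnet in each (those file names are an assumption, not files shipped in this repo):

    # One stack per worker, each pointing at its own parameter file
    for i in 0 1 2; do
      aws cloudformation create-stack --stack-name "cfbuildint-worker${i}" \
        --template-body file://worker.yaml \
        --parameters "file://worker${i}.json"
    done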
- aws cloudformation create-stack --stack-name cfbuildint-worker0 --template-body file://worker.yaml --parameters file://worker.json
- aws cloudformation create-stack --stack-name cfbuildint-worker1 --template-body file://worker.yaml --parameters file://worker.json
- aws cloudformation create-stack --stack-name cfbuildint-worker2 --template-body file://worker.yaml --parameters file://worker.json
- cd ..
- export KUBECONFIG=$(pwd)/install/auth/kubeconfig
- oc get nodes
- oc get csr
- oc adm certificate approve <csr_name>
- repeat the oc get csr and oc adm certificate approve steps (you will need to approve certs 2x for each worker node)
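If several CSRs are pending at once they can be approved in a batch; this go-template filter (selecting CSRs with no status yet) is a common idiom rather than anything specific to this repo:

    # Approve every CSR that has not been acted on; run again when the
    # second round of node-client CSRs appears
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve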
- openshift-install wait-for install-complete --dir=install
Access the console from: https://console-openshift-console.apps.cfbuild.example.com
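The installer writes the initial credentials next to the kubeconfig:

    # Initial console login is kubeadmin plus this generated password
    cat install/auth/kubeadmin-password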
It is possible to build using cached copies of the boot info by pointing the IgnitionLocation parameter at the S3 copy:

    { "ParameterKey": "IgnitionLocation", "ParameterValue": "s3://cfbuild-dtxlg-infra/worker.ign" },
If DNS does not work and you need to replace the default DNS servers supplied by AWS, it may be possible to change this at the very beginning: at step 4 above, review the files in install/manifests, with particular interest in the cluster-dns file, and see https://docs.openshift.com/container-platform/4.6/networking/dns-operator.html. THIS IS UNTESTED; DO NOT USE.
To tear the cluster down, delete the stacks:
- aws cloudformation delete-stack --stack-name cfbuildint-worker0
- aws cloudformation delete-stack --stack-name cfbuildint-worker1
- aws cloudformation delete-stack --stack-name cfbuildint-worker2
- aws cloudformation delete-stack --stack-name cfbuildint-controlplane
- aws cloudformation delete-stack --stack-name cfbuildint-bootstrap
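Optionally block until each deletion finishes (stack-delete-complete is a standard CloudFormation waiter; repeat per stack):

    # Wait for a stack to be fully deleted before proceeding
    aws cloudformation wait stack-delete-complete --stack-name cfbuildint-worker0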