rh-telco-tigers/ocpcfbuild
CloudFormation Templates for Building OCP

Table of Contents

  • Introduction
  • Prerequisites
  • Create Install Config file
  • Install Steps
  • Additional notes
  • Cleanup

Introduction

This repo documents a process for creating an OpenShift cluster in AWS on a private VPC while minimizing the permissions required to create the cluster. This will create an operational cluster; however, it will NOT have many of the AWS integrations that are available with the standard IPI install process. This is also a very manual process and will require editing multiple files to complete. Be sure to have a good editor handy.

Prerequisites

Software

You will need the following software to follow this install process:

You will also need the following information in order to proceed with this install process:

Create Install Config file

Using the install-config.yaml template in the root directory of this repo, update the following fields:

  • baseDomain
  • metadata.name
  • pullSecret
  • sshKey

Review the "networking" section and ensure that the IP address ranges do not conflict in your network. You will also need to update networking.machineNetwork.cidr to match the network you are deploying your machines on.
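For example, the edited portions of install-config.yaml might look like the following. All values here are placeholders for your environment, and the pull secret and SSH key are truncated:

```yaml
baseDomain: example.com              # your base DNS domain
metadata:
  name: cfbuild                      # cluster name
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16                # must match the network your machines are deployed on
pullSecret: '{"auths":{...}}'        # from your Red Hat account
sshKey: 'ssh-ed25519 AAAA... user@host'
```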

Proxy Config

If you need to use a proxy to access outside resources, add a proxy section to the install-config.yaml file as shown below:

proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 
  noProxy: example.com 
additionalTrustBundle: | 
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

To get the CA trust bundle, run the following against your proxy (substitute your proxy host and port):

openssl s_client -connect <proxy_host>:<proxy_port> -showcerts

Copy the signing certificate into the additionalTrustBundle section above.
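The signing certificate is normally the last one printed by -showcerts. A sketch of filtering it out with awk; the openssl invocation in the comment uses a placeholder host/port, so the filter is demonstrated here on canned output:

```shell
# In practice you would pipe real output, e.g.:
#   openssl s_client -connect <proxy_host>:<proxy_port> -showcerts </dev/null 2>/dev/null
# (host/port are placeholders for your proxy endpoint)
extract_last_cert() {
  awk '/-----BEGIN CERTIFICATE-----/{buf=""}
       {buf=buf $0 "\n"}
       /-----END CERTIFICATE-----/{last=buf}
       END{printf "%s", last}'
}

# Canned sample standing in for real s_client output: leaf cert, then CA cert.
sample='-----BEGIN CERTIFICATE-----
LEAFCERTDATA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
CACERTDATA
-----END CERTIFICATE-----'

printf '%s\n' "$sample" | extract_last_cert
```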

Install Steps

  1. mkdir install && cp install-config.yaml install
  2. openshift-install create manifests --dir=install
  3. vi install/manifests/cluster-scheduler-02-config.yml
    1. update mastersSchedulable to false
  4. edit both install/manifests/cluster-ingress-02-config.yml and cluster-ingress-default-ingresscontroller.yaml
    1. update endpointPublishingStrategy to be "HostNetwork". BE SURE NOT TO MODIFY your "domain:" entry:
spec:
  domain: apps.cfbuild.example.com
  endpointPublishingStrategy:
    type: HostNetwork
  5. openshift-install create ignition-configs --dir=install
  6. jq -r .infraID install/metadata.json
    1. update the following files with the infrastructure ID (infraID): bootstrap.json(x2), control-plane.json, nw_lb.json, sg_roles.json, worker.json
    2. update the S3 bucket creation and upload steps below with the new cluster name
  7. validate settings in nw_lb.json
    1. this should not change for RH testing
  8. cd cf
  9. export AWS_DEFAULT_OUTPUT="text"
  10. aws cloudformation create-stack --stack-name cfbuildint-nwlb \
    --template-body file://nw_lb.yaml \
    --parameters file://nw_lb.json \
    --capabilities CAPABILITY_NAMED_IAM
  11. aws cloudformation describe-stacks --stack-name cfbuildint-nwlb
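The infraID lookup above (jq -r .infraID install/metadata.json) and the follow-on file edits can be scripted. A sketch using canned sample data; the real metadata.json is produced by openshift-install, and the placeholder parameter key/value below is hypothetical (the repo's json files are normally edited by hand):

```shell
# Canned sample standing in for install/metadata.json.
mkdir -p /tmp/ocpcf-demo
cat > /tmp/ocpcf-demo/metadata.json <<'EOF'
{"clusterName": "cfbuild", "infraID": "cfbuild-dtxlg"}
EOF

# Equivalent to `jq -r .infraID install/metadata.json`, without requiring jq.
INFRA_ID=$(sed -n 's/.*"infraID": *"\([^"]*\)".*/\1/p' /tmp/ocpcf-demo/metadata.json)
echo "Infrastructure ID: ${INFRA_ID}"

# Substitute it into a parameter file that uses a placeholder value
# (INFRA_ID_PLACEHOLDER is a hypothetical marker, not part of the repo).
cat > /tmp/ocpcf-demo/worker.json <<'EOF'
[{ "ParameterKey": "InfrastructureName", "ParameterValue": "INFRA_ID_PLACEHOLDER" }]
EOF
sed -i "s/INFRA_ID_PLACEHOLDER/${INFRA_ID}/" /tmp/ocpcf-demo/worker.json
```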

-------- External DNS Steps --------

If you are using an external DNS server, you will need to create CNAMEs that point to the AWS load balancer instances that were created:

  • use the "int" LB and point both api and api-int to it
  • use the "ingress" LB and point *.apps. to it
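With a BIND-style zone file, for example, the records might look like the following. The load balancer DNS names are placeholders; copy the real names from the aws cloudformation describe-stacks output:

```
api.cfbuild.example.com.      IN  CNAME  <int-lb-dns-name>.elb.amazonaws.com.
api-int.cfbuild.example.com.  IN  CNAME  <int-lb-dns-name>.elb.amazonaws.com.
*.apps.cfbuild.example.com.   IN  CNAME  <ingress-lb-dns-name>.elb.amazonaws.com.
```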

  1. aws cloudformation create-stack --stack-name cfbuildint-sgroles \
    --template-body file://sg_roles.yaml \
    --parameters file://sg_roles.json
  2. aws cloudformation describe-stacks --stack-name cfbuildint-sgroles
  3. cd ../install
  4. aws s3 mb s3://cfbuild-dtxlg-infra
  5. aws s3 cp bootstrap.ign s3://cfbuild-dtxlg-infra/bootstrap.ign --acl public-read
  6. aws s3 ls s3://cfbuild-dtxlg-infra
  7. update bootstrap.json with new S3 bucket
  8. cd ../cf
  9. update bootstrap.json with the updated ARNs (x3)
  10. update bootstrap.json with the updated SecurityGroups
  11. aws cloudformation create-stack --stack-name cfbuildint-bootstrap \
    --template-body file://bootstrap.yaml \
    --parameters file://bootstrap.json
  12. update control-plane.json with the updated ARNs (x3)
  13. update control-plane.json with the updated SecurityGroup
  14. update control-plane.json with the updated IAM Profile
  15. get the certificate authority from the master.ign file
  16. update control-plane.json with the updated certificate authority
  17. get a copy of the master.ign from https://api-int.cfbuild.example.com:22623/config/master
  18. update master.ign version to 3.1.0
  19. upload to s3
  20. aws s3 cp master.ign s3://cfbuild-dtxlg-infra/master.ign --acl public-read
  21. aws cloudformation create-stack --stack-name cfbuildint-controlplane \
    --template-body file://control-plane.yaml \
    --parameters file://control-plane.json
  22. cd ..
  23. openshift-install wait-for bootstrap-complete --dir=install
  24. aws cloudformation delete-stack --stack-name cfbuildint-bootstrap
  25. get a copy of the worker.ign from https://api-int.cfbuild.example.com:22623/config/worker
  26. update worker.ign version to 3.1.0
  27. upload to s3
  28. aws s3 cp worker.ign s3://cfbuild-dtxlg-infra/worker.ign --acl public-read
  29. update worker.json with the new SecurityGroup ID
  30. update worker.json with new IAM Profile
  31. update CertificateAuthority entry
  32. cd cf NOTE: If you want to have your workers on multiple subnets/AZs, be sure to create multiple "worker.json" files and update the subnets for each.
  33. aws cloudformation create-stack --stack-name cfbuildint-worker0 \
    --template-body file://worker.yaml \
    --parameters file://worker.json
  34. aws cloudformation create-stack --stack-name cfbuildint-worker1 \
    --template-body file://worker.yaml \
    --parameters file://worker.json
  35. aws cloudformation create-stack --stack-name cfbuildint-worker2 \
    --template-body file://worker.yaml \
    --parameters file://worker.json
  36. cd ..
  37. export KUBECONFIG=$(pwd)/install/auth/kubeconfig
  38. oc get nodes
  39. oc get csr
  40. oc adm certificate approve <csr_name>
  41. repeat steps 39 and 40 (you will need to approve certs 2x for each worker node)
  42. openshift-install wait-for install-complete --dir=install
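The ignition version bump in steps 18 and 26 only touches the top-level version field. A sketch using a canned worker.ign; the real file comes from the machine config server and its exact layout may differ, so check the result by eye:

```shell
# Canned sample standing in for the worker.ign downloaded from
# https://api-int.cfbuild.example.com:22623/config/worker
cat > /tmp/worker.ign <<'EOF'
{"ignition":{"version":"3.2.0","config":{"merge":[{"source":"https://api-int.cfbuild.example.com:22623/config/worker"}]}}}
EOF

# Rewrite only the ignition spec version to 3.1.0 (assumes compact JSON with a
# single "version" field, as in the sample above).
sed -i 's/"version":"[0-9][0-9.]*"/"version":"3.1.0"/' /tmp/worker.ign
```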

Access the console at: https://console-openshift-console.apps.cfbuild.example.com

Additional notes

It is possible to build using cached copies of the boot info:

{ "ParameterKey": "IgnitionLocation", "ParameterValue": "s3://cfbuild-dtxlg-infra/worker.ign" },

If DNS does not work and you need to use DNS servers other than the default supplied by AWS, it may be possible to change this at the very beginning: at step 4 above, look at the files in the manifests directory, with particular interest in the cluster-dns file, and see https://docs.openshift.com/container-platform/4.6/networking/dns-operator.html. THIS IS UNTESTED. DO NOT USE.

Cleanup

aws cloudformation delete-stack --stack-name cfbuildint-worker0
aws cloudformation delete-stack --stack-name cfbuildint-worker1
aws cloudformation delete-stack --stack-name cfbuildint-worker2
aws cloudformation delete-stack --stack-name cfbuildint-controlplane
aws cloudformation delete-stack --stack-name cfbuildint-bootstrap
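The same cleanup as a loop; the echo makes it a dry run (remove it to actually delete the stacks, which requires configured AWS credentials):

```shell
# Dry run: print the delete command for each stack created above.
for suffix in worker0 worker1 worker2 controlplane bootstrap; do
  echo aws cloudformation delete-stack --stack-name "cfbuildint-${suffix}"
done
```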
