Add Tekton tasks to install and scale Karpenter #538
Conversation
tests/tekton-resources/tasks/generators/karpenter/kubectl-cluster-wait.yaml
- name: create-role
  image: alpine/k8s:1.23.7
  script: |
    aws iam create-instance-profile --instance-profile-name "KarpenterNodeInstanceProfile-$(params.cluster-name)"
Let's create it only if it doesn't already exist, please.
This raises a larger question I had around idempotency and error handling. I was planning to tear down and re-create the instance profile on every run (in case the instance profile details change). If teardown starts failing, won't we want subsequent runs to fail?
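One way to satisfy both concerns is to make the create step explicitly conditional on existence. A minimal sketch (the profile name mirrors the task's `KarpenterNodeInstanceProfile-$(params.cluster-name)` convention; the function name is hypothetical):

```shell
ensure_instance_profile() {
  # Create the IAM instance profile only if it does not already exist.
  if aws iam get-instance-profile --instance-profile-name "$1" >/dev/null 2>&1; then
    echo "instance profile $1 already exists, skipping create"
  else
    aws iam create-instance-profile --instance-profile-name "$1"
  fi
}
```

The Tekton step script would then call `ensure_instance_profile "KarpenterNodeInstanceProfile-$(params.cluster-name)"`; a failure from the actual create call still surfaces, so a broken teardown in a previous run is not silently masked.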
description: The name of the cluster
steps:
  - name: create-role
    image: alpine/k8s:1.23.7
Please update the image to the most recent version.
- name: create-role
  image: alpine/k8s:1.23.7
  script: |
    aws iam create-role --role-name "KarpenterNodeRole-$(params.cluster-name)" \
We should create them only if they don't already exist, or delete them before recreating them, to ensure resources from a previous run are cleaned up.
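The delete-before-recreate variant suggested above can be sketched as a small guard (the function name and trust-policy path argument are hypothetical):

```shell
ensure_role() {
  # Delete any role left over from a previous run, then create it fresh
  # so stale role details do not linger.
  if aws iam get-role --role-name "$1" >/dev/null 2>&1; then
    aws iam delete-role --role-name "$1"
  fi
  aws iam create-role --role-name "$1" \
    --assume-role-policy-document "file://$2"
}
```

Note that `aws iam delete-role` fails while the role still has attached policies or instance profiles, so a real task would need to detach those first.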
# kubectl taint nodes -l dedicated=karpenter dedicated=karpenter:NoSchedule

helm upgrade --install karpenter oci://$(params.karpenter-ecr-repo)/karpenter/karpenter --version $(params.karpenter-version) \
Are these charts public? If not, and they are hosted internally, does the underlying task have permission to pull them?
We can discuss this offline
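For reference, if the chart ends up hosted in a private ECR repository, the task would also need to authenticate Helm against the registry before `helm upgrade --install` can pull the OCI artifact. A minimal sketch (region and registry host are placeholders):

```shell
ecr_helm_login() {
  # $1 = AWS region, $2 = registry host, e.g. <account>.dkr.ecr.<region>.amazonaws.com
  aws ecr get-login-password --region "$1" \
    | helm registry login --username AWS --password-stdin "$2"
}
```

Public repositories such as public.ecr.aws need no login, so whether this step is required depends on where the chart is hosted.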
- name: create-role
  image: alpine/k8s:1.23.7
  script: |
    aws iam delete-instance-profile --instance-profile-name "KarpenterNodeInstanceProfile-$(params.cluster-name)"
Please add a check to ensure it deletes only if it exists.
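A guarded teardown along the lines the comment asks for (the function name is hypothetical):

```shell
delete_instance_profile_if_exists() {
  # Delete the instance profile only when it actually exists, so a clean
  # environment does not fail the teardown task.
  if aws iam get-instance-profile --instance-profile-name "$1" >/dev/null 2>&1; then
    aws iam delete-instance-profile --instance-profile-name "$1"
  else
    echo "instance profile $1 not found, nothing to delete"
  fi
}
```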
- name: replicas
  description: Number of replicas to scale to
- name: nodepool
  description: Name of the nodepool to drift
This parameter is used generically, so you may want to remove "drift" here.
Issue #, if available:
Description of changes:
This change introduces the tasks necessary to install and leverage Karpenter to scale a cluster. This was tested in a dev cluster.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.