Commit 1496768

updated readme linking to examples and webinars
Parent: 575384e

5 files changed (+129 lines, -16 lines)

README.md

Lines changed: 46 additions & 15 deletions
@@ -6,31 +6,62 @@ This project aims to provide a general-purpose, Kubernetes-native upgrade controller
 It introduces a new CRD, the **Plan**, for defining any and all of your upgrade policies/requirements.
 For up-to-date details on defining a plan please review [v1/types.go](pkg/apis/upgrade.cattle.io/v1/types.go).
 
-The Controller manages Plans by selecting Nodes to run upgrade Jobs on.
-A Plan defines which Nodes are eligible by specifying label selector.
-When a Job has run to completion successfully the Controller will label the Node on which it ran
-according to the Plan that was applied by the Job.
+![diagram](doc/architecture.png "The Controller manages Plans by selecting Nodes to run upgrade Jobs on.
+A Plan defines which Nodes are eligible for upgrade by specifying a label selector.
+When a Job has run to completion successfully the Controller will label the Node
+on which it ran according to the Plan that was applied by the Job.")
+
+### Presentations and Recordings
+
+#### April 14, 2020
+[CNCF Member Webinar: Declarative Host Upgrades From Within Kubernetes](https://www.cncf.io/webinars/declarative-host-upgrades-from-within-k8s/)
+- [Slides](https://www.cncf.io/wp-content/uploads/2020/04/CNCF-Webinar-System-Upgrade-Controller-1.pdf)
+- [Video](https://www.youtube.com/watch?v=uHF6C0GKjlA)
+
+#### March 4, 2020
+[Rancher Online Meetup: Automating K3s Cluster Upgrades](https://info.rancher.com/online-meetup-automating-k3s-cluster-upgrades)
+- [Video](https://www.youtube.com/watch?v=UsPV8cZX8BY)
 
 ### Considerations
 
 Purporting to support general-purpose node upgrades (essentially, arbitrary mutations) this controller attempts
-minimal imposition of opinion. Our design constraints, such as they are, follow:
+minimal imposition of opinion. Our design constraints, such as they are:
 
-- Content delivery via container image a.k.a. container command pattern
-- Operator-overridable command(s)
-- A very privileged job/pod/container:
-  - Host IPC, NET, and PID
+- content delivery via container image a.k.a. container command pattern
+- operator-overridable command(s)
+- a very privileged job/pod/container:
+  - host IPC, NET, and PID
   - CAP_SYS_BOOT
-  - Host root mounted at `/host` (read/write)
-- Optional opt-in/opt-out via node labels
-- Optional cordon/drain a la `kubectl`
+  - host root file-system mounted at `/host` (read/write)
+- optional opt-in/opt-out via node labels
+- optional cordon/drain a la `kubectl`
 
 _Additionally, one should take care when defining upgrades by ensuring that such are idempotent--**there be dragons**._
 
-### Example Upgrade Plans
+## Deploying
+
+The most up-to-date manifest is always [manifests/system-upgrade-controller.yaml](manifests/system-upgrade-controller.yaml),
+but since release v0.4.0 a manifest specific to each release has been created and uploaded to the release artifacts page.
+See [releases/download/v0.4.0/system-upgrade-controller.yaml](https://github.com/rancher/system-upgrade-controller/releases/download/v0.4.0/system-upgrade-controller.yaml).
+
+But in the time-honored tradition of `curl ${script} | sudo sh -`, here is a nice one-liner:
+
+```shell script
+# Y.O.L.O.
+kustomize build https://github.com/rancher/system-upgrade-controller | kubectl apply -f -
+```
+
+### Example Plans
 
 - [examples/k3s-upgrade.yaml](examples/k3s-upgrade.yaml)
+  - Demonstrates upgrading k3s itself.
 - [examples/ubuntu/bionic.yaml](examples/ubuntu/bionic.yaml)
+  - Demonstrates upgrading, apt-get style, arbitrary packages at pinned versions.
+- [examples/ubuntu/bionic/linux-kernel-aws.yaml](examples/ubuntu/bionic/linux-kernel-aws.yaml)
+  - Demonstrates upgrading the kernel on Ubuntu 18.04 EC2 instances on AWS.
+- [examples/ubuntu/bionic/linux-kernel-virtual-hwe-18.04.yaml](examples/ubuntu/bionic/linux-kernel-virtual-hwe-18.04.yaml)
+  - Demonstrates upgrading the kernel on Ubuntu 18.04 (to the HWE version) on generic virtual machines.
 
 Below is an example Plan developed for [k3OS](https://github.com/rancher/k3os) that implements something like an
 `rsync` of content from the container image to the host, preceded by a remount if necessary, immediately followed by a reboot.
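
A quick way to confirm the `## Deploying` one-liner took effect is to poke at the resulting resources. This is a hedged sketch, not part of the diff; the deployment name and the `system-upgrade` namespace are assumptions based on the bundled manifest:

```shell script
# Verify the Plan CRD was registered and the controller is running.
# Names below (deployment, namespace) are assumptions from the stock manifest.
kubectl get crd plans.upgrade.cattle.io
kubectl -n system-upgrade get deployment system-upgrade-controller
kubectl -n system-upgrade get plans
```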
@@ -118,15 +149,15 @@ spec:
 make
 ```
 
-### Local Execution
+## Running
 
 Use `./bin/system-upgrade-controller`.
 
 Also see [`manifests/system-upgrade-controller.yaml`](manifests/system-upgrade-controller.yaml) that spells out what a
 "typical" deployment might look like with default environment variables that parameterize various operational aspects
 of the controller and the resources spawned by it.
 
-### End-to-End Testing
+## Testing
 
 Integration tests are bundled as a [Sonobuoy plugin](https://sonobuoy.io/docs/v0.17.2/plugins/) that expects to be run within a pod.
 To verify locally:
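
The diff view cuts off before the verification commands, so here is only a rough, hypothetical sketch of how such a Sonobuoy plugin is typically exercised; the plugin path is a placeholder, not taken from this repository:

```shell script
# Hypothetical local run of a Sonobuoy plugin; ./e2e/plugin.yaml is a placeholder path.
sonobuoy run --plugin ./e2e/plugin.yaml --wait
results=$(sonobuoy retrieve)   # downloads the results tarball and prints its path
sonobuoy results "$results"    # summarize pass/fail per test
```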

doc/architecture.png

140 KB
(binary image)

examples/k3s-upgrade.yaml

Lines changed: 5 additions & 1 deletion
@@ -1,3 +1,5 @@
+# These plans are adapted from work by Dax McDonald (https://github.com/daxmc99) and Hussein Galal (https://github.com/galal-hussein)
+# in support of Rancher v2 managed k3s upgrades. See also: https://rancher.com/docs/k3s/latest/en/upgrades/automated/
 ---
 apiVersion: upgrade.cattle.io/v1
 kind: Plan
@@ -42,7 +44,9 @@ spec:
       - {key: node-role.kubernetes.io/master, operator: NotIn, values: ["true"]}
   serviceAccountName: system-upgrade
   prepare:
-    image: rancher/k3s-upgrade:v1.17.4-k3s1
+    # Since v0.5.0-m1, SUC will use the resolved version of the plan for the tag on the prepare container.
+    # image: rancher/k3s-upgrade:v1.17.4-k3s1
+    image: rancher/k3s-upgrade
     args: ["prepare", "k3s-server"]
   drain:
     force: true
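
The change above drops the pinned tag because, per the new comment, the controller now stamps the plan's resolved version onto the `prepare` image. One hypothetical way to inspect what was resolved (the plan name `k3s-server` is assumed from the `args` above):

```shell script
# Hypothetical check of the version the controller resolved for a plan.
kubectl -n system-upgrade get plan k3s-server -o jsonpath='{.status.latestVersion}'
```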
examples/ubuntu/bionic/linux-kernel-virtual-hwe-18.04.yaml

Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: kernel
+  namespace: system-upgrade
+type: Opaque
+stringData:
+  version: 5.3.0.46.102
+  upgrade.sh: |
+    #!/bin/sh
+    set -e
+    export DEBIAN_FRONTEND=noninteractive
+    secrets=$(dirname $0)
+    apt-get update
+    apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install -yq \
+      linux-virtual-hwe-18.04=$(cat $secrets/version) \
+      linux-headers-virtual-hwe-18.04=$(cat $secrets/version) \
+      linux-image-virtual-hwe-18.04=$(cat $secrets/version)
+    if [ -f /run/reboot-required ]; then
+      cat /run/reboot-required
+      reboot
+    fi
+---
+apiVersion: upgrade.cattle.io/v1
+kind: Plan
+metadata:
+  name: kernel
+  namespace: system-upgrade
+spec:
+  # The maximum number of nodes to apply this update on concurrently.
+  concurrency: 2
+
+  # Select which nodes this plan can be applied to.
+  nodeSelector:
+    matchExpressions:
+      - {key: kernel-upgrade, operator: Exists}
+      - {key: kernel-upgrade, operator: NotIn, values: ["disabled", "false"]}
+
+  # The service account for the pod to use. As with normal pods, if not specified the `default` service account from the namespace will be assigned.
+  # See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+  serviceAccountName: system-upgrade
+
+  # Link to secrets that will be mounted into all containers in job pods.
+  # Secrets are calculated as part of the `.status.latestHash`. This means that changing
+  # the secrets profile for a plan or changing the secrets that a plan refers to will
+  # trigger application.
+  secrets:
+    - name: kernel
+      # optional: default=/run/system-upgrade/secrets/{name}
+      path: /host/run/system-upgrade/secrets/kernel
+
+  # The value for `channel` is assumed to be a URL that returns HTTP 302 with the last path element of the value
+  # returned in the Location header assumed to be an image tag.
+  #channel: https://canonical.example.com/ubuntu/lts
+
+  # Providing a value for `version` will prevent polling/resolution of the `channel` if specified.
+  version: bionic
+
+  # The prepare init container is run before cordon/drain, which is run before the upgrade container.
+  # It shares the same format as the `upgrade` container.
+  #prepare:
+  #  image: alpine:3.11
+  #  command: [sh, -c]
+  #  args: [" echo '### ENV ###'; env | sort; echo '### RUN ###'; find /run/system-upgrade | sort"]
+
+  # See https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#use-kubectl-drain-to-remove-a-node-from-service
+  drain:
+    # deleteLocalData: true
+    # ignoreDaemonSets: true
+    force: true
+
+  # A very limited container spec.
+  upgrade:
+    # The tag portion of the image will be overridden with the value from `.status.latestVersion`, a.k.a. the resolved version.
+    image: ubuntu
+    command: ["chroot", "/host"]
+    args: ["sh", "/run/system-upgrade/secrets/kernel/upgrade.sh"]
