Commit b2eb153 (1 parent: 914f90f)

add systemd services for configuration after start

The services perform the various tasks needed to set up the OCP or MicroShift cluster. These systemd units run small shell scripts, based on https://github.com/crc-org/crc-cloud/blob/main/pkg/bundle/setup/clustersetup.sh, which do the following:

- create crc-specific configuration for dnsmasq
- set a new UUID as the cluster ID
- create the pod for routes-controller
- try to grow the disk and filesystem
- check that the cluster operators are ready
- add the pull secret to the cluster
- set the kubeadmin and developer user passwords
- set a custom CA for authentication
- set a custom nip.io cluster domain

25 files changed: +421 −0 lines

createdisk-library.sh

+21

@@ -216,6 +216,7 @@ function prepare_hyperV() {
     echo 'CONST{virt}=="microsoft", RUN{builtin}+="kmod load hv_sock"' > /etc/udev/rules.d/90-crc-vsock.rules
 EOF
 }
+
 function prepare_qemu_guest_agent() {
     local vm_ip=$1

@@ -400,3 +401,23 @@ function remove_pull_secret_from_disk() {
     esac
 }
+
+function copy_systemd_units() {
+    ${SSH} core@${VM_IP} -- 'mkdir -p /home/core/systemd-units && mkdir -p /home/core/systemd-scripts'
+    ${SCP} systemd/crc-*.service core@${VM_IP}:/home/core/systemd-units/
+    ${SCP} systemd/crc-*.path core@${VM_IP}:/home/core/systemd-units/
+    ${SCP} systemd/crc-*.sh core@${VM_IP}:/home/core/systemd-scripts/
+
+    case "${BUNDLE_TYPE}" in
+        "snc"|"okd")
+            ${SCP} systemd/ocp-*.service core@${VM_IP}:/home/core/systemd-units/
+            ${SCP} systemd/ocp-*.path core@${VM_IP}:/home/core/systemd-units/
+            ${SCP} systemd/ocp-*.sh core@${VM_IP}:/home/core/systemd-scripts/
+            ;;
+    esac
+
+    ${SSH} core@${VM_IP} -- 'sudo cp /home/core/systemd-units/* /etc/systemd/system/ && sudo cp /home/core/systemd-scripts/* /usr/local/bin/'
+    ${SSH} core@${VM_IP} -- 'ls /home/core/systemd-scripts/ | xargs -t -I % sudo chmod +x /usr/local/bin/%'
+    ${SSH} core@${VM_IP} -- 'sudo restorecon -rv /usr/local/bin'
+    ${SSH} core@${VM_IP} -- 'ls /home/core/systemd-units/ | xargs sudo systemctl enable'
+    ${SSH} core@${VM_IP} -- 'rm -rf /home/core/systemd-units /home/core/systemd-scripts'
+}

createdisk.sh

+2

@@ -130,6 +130,8 @@ if [ "${ARCH}" == "aarch64" ] && [ ${BUNDLE_TYPE} != "okd" ]; then
     ${SSH} core@${VM_IP} -- "sudo rpm-ostree install https://kojipkgs.fedoraproject.org//packages/qemu/8.2.6/3.fc40/aarch64/qemu-user-static-x86-8.2.6-3.fc40.aarch64.rpm"
 fi

+copy_systemd_units
+
 cleanup_vm_image ${VM_NAME} ${VM_IP}

 # Delete all the pods and leases from the etcd db so that when this bundle is used for cluster provisioning, everything comes up in a clean state.

docs/self-sufficient-bundle.md

+34

# Self-sufficient bundles

Since release 4.19.0 of OpenShift Local, the bundles generated by `snc` contain additional systemd services that provision the cluster, removing the need for an outside entity to do so. An outside process still needs to create some files at pre-defined locations inside the VM for the systemd services to do their work.

## Systemd services and the input files they need

The following table lists the systemd services and the locations of the files they need to provision the cluster; users of SNC need to create those files.

| Systemd unit                    | Runs for (ocp, MicroShift, both) | Input files location                 | Marker env variables |
| :-----------------------------: | :------------------------------: | :----------------------------------: | :------------------: |
| `crc-cluster-status.service`    | both                             | none                                 | none                 |
| `crc-pullsecret.service`        | both                             | /opt/crc/pull-secret                 | none                 |
| `crc-dnsmasq.service`           | both                             | none                                 | none                 |
| `crc-routes-controller.service` | both                             | none                                 | none                 |
| `ocp-cluster-ca.service`        | ocp                              | /opt/crc/custom-ca.crt               | CRC_CLOUD=1          |
| `ocp-clusterid.service`         | ocp                              | none                                 | none                 |
| `ocp-custom-domain.service`     | ocp                              | none                                 | CRC_CLOUD=1          |
| `ocp-growfs.service`            | ocp                              | none                                 | none                 |
| `ocp-userpasswords.service`     | ocp                              | /opt/crc/pass_{kubeadmin, developer} | none                 |

In addition to the above services, `ocp-cluster-ca.path`, `crc-pullsecret.path` and `ocp-userpasswords.path` monitor the filesystem paths related to their `*.service` counterparts and start the corresponding service when the path becomes available.

> [!NOTE]
> A "marker env variable" is set using an env file; if the required variable is not set, the unit is skipped.
> Units marked with CRC_CLOUD=1 run only when that variable is set; they are needed only when using the bundles with crc-cloud.

The systemd services are heavily based on the [`clustersetup.sh`](https://github.com/crc-org/crc-cloud/blob/main/pkg/bundle/setup/clustersetup.sh) script found in the `crc-cloud` project.

## Naming convention for the systemd unit files

Systemd units needed for both OpenShift and MicroShift are named `crc-*.service`, units needed only for OpenShift are named `ocp-*.service`, and when we add units that are only needed for MicroShift they should be named `ucp-*.service`.
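As a sketch of what an outside process must do, the input files from the table can be staged like this (everything here is illustrative: the scratch directory stands in for the VM root, and all file contents are placeholders):

```shell
# Sketch only: stage the input files from the table above under a scratch
# root that stands in for the VM filesystem; all contents are placeholders.
root=$(mktemp -d)
mkdir -p "$root/opt/crc"
printf '%s' '{"auths":{}}'           > "$root/opt/crc/pull-secret"     # real pull secret goes here
printf '%s\n' 'kubeadmin-password'   > "$root/opt/crc/pass_kubeadmin"
printf '%s\n' 'developer-password'   > "$root/opt/crc/pass_developer"
printf 'CRC_CLOUD=1\n'               > "$root/opt/crc/crc-cloud"       # marker env file for crc-cloud units
ls "$root/opt/crc"
```

On a real VM the same files would be written under `/opt/crc` (for example over ssh) before the units run.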

systemd/crc-cluster-status.service

+11

[Unit]
Description=CRC Unit checking if cluster is ready
After=kubelet.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/crc-cluster-status.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

systemd/crc-cluster-status.sh

+43

#!/bin/bash

set -x

export KUBECONFIG=/opt/kubeconfig

function check_cluster_healthy() {
    WAIT="authentication|console|etcd|ingress|openshift-apiserver"

    until oc get co > /dev/null 2>&1
    do
        sleep 2
    done

    # column 3 of `oc get co` is AVAILABLE; any "False" means not ready yet
    for i in $(oc get co | grep -P "$WAIT" | awk '{ print $3 }')
    do
        if [[ $i == "False" ]]
        then
            return 1
        fi
    done
    return 0
}

# rm -rf /tmp/.crc-cluster-ready

COUNTER=0
CLUSTER_HEALTH_SLEEP=8
CLUSTER_HEALTH_RETRIES=500

while ! check_cluster_healthy
do
    sleep $CLUSTER_HEALTH_SLEEP
    # `return` is only valid inside a function; use exit at script scope
    if [[ $COUNTER == $CLUSTER_HEALTH_RETRIES ]]
    then
        exit 1
    fi
    ((COUNTER++))
done

# need to set a marker to let `crc` know the cluster is ready
# touch /tmp/.crc-cluster-ready
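The operator check can be exercised without a cluster by shadowing `oc` with a shell function that prints a fake `oc get co` table; the stub and its operator names/versions below are invented for illustration:

```shell
# Stub-driven sketch of the health check above; the operator table is fake.
function check_cluster_healthy() {
    WAIT="authentication|console|etcd|ingress|openshift-apiserver"
    until oc get co > /dev/null 2>&1; do sleep 2; done
    # column 3 of `oc get co` is AVAILABLE; any "False" means not ready yet
    for i in $(oc get co | grep -P "$WAIT" | awk '{ print $3 }'); do
        if [[ $i == "False" ]]; then return 1; fi
    done
    return 0
}

function oc() {   # stub standing in for the real oc binary
    printf 'NAME            VERSION   AVAILABLE\n'
    printf 'etcd            4.19.0    True\n'
    printf 'console         4.19.0    False\n'
}

check_cluster_healthy && echo "healthy" || echo "not healthy yet"
```

Because `console` reports `False`, the function returns non-zero, which is exactly what keeps the outer retry loop spinning.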

systemd/crc-dnsmasq.service

+14
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,14 @@
1+
[Unit]
2+
Description=CRC Unit for configuring dnsmasq
3+
Requires=sys-class-net-tap0.device
4+
After=sys-class-net-tap0.device
5+
After=network-online.target ovs-configuration.service gvisor-tap-vsock.service
6+
7+
[Service]
8+
Type=oneshot
9+
ExecCondition=/usr/bin/bash -c "/usr/sbin/ip link show dev tap0 && exit 1 || exit 0"
10+
ExecStart=/usr/local/bin/crc-dnsmasq.sh
11+
ExecStartPost=/usr/bin/systemctl start dnsmasq.service
12+
13+
[Install]
14+
WantedBy=multi-user.target

systemd/crc-dnsmasq.sh

+19

#!/bin/bash

set -x

hostName=$(hostname)
hostIp=$(hostname --all-ip-addresses | awk '{print $1}')

cat << EOF > /etc/dnsmasq.d/crc-dnsmasq.conf
interface=br-ex
expand-hosts
log-queries
local=/crc.testing/
domain=crc.testing
address=/apps-crc.testing/192.168.130.11
address=/api.crc.testing/192.168.130.11
address=/api-int.crc.testing/192.168.130.11
address=/$hostName.crc.testing/$hostIp
EOF
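To see what lands in `/etc/dnsmasq.d/crc-dnsmasq.conf`, the heredoc can be rendered with placeholder values instead of the VM's real hostname and IP (both placeholders below are invented):

```shell
# Render the config into a variable with placeholder values instead of
# writing to /etc/dnsmasq.d; only the last address line varies per VM.
hostName=crc-sample        # placeholder for $(hostname)
hostIp=192.168.126.11      # placeholder for the VM's first IP address
conf=$(cat << EOF
interface=br-ex
expand-hosts
log-queries
local=/crc.testing/
domain=crc.testing
address=/apps-crc.testing/192.168.130.11
address=/api.crc.testing/192.168.130.11
address=/api-int.crc.testing/192.168.130.11
address=/$hostName.crc.testing/$hostIp
EOF
)
echo "$conf"
```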

systemd/crc-pullsecret.path

+11

[Unit]
Description=CRC Unit for monitoring the pull secret path
After=kubelet.service

[Path]
PathExists=/opt/crc/pull-secret
TriggerLimitIntervalSec=1min
TriggerLimitBurst=0

[Install]
WantedBy=multi-user.target

systemd/crc-pullsecret.service

+10

[Unit]
Description=CRC Unit for adding pull secret to cluster
After=kubelet.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/crc-pullsecret.sh

[Install]
WantedBy=multi-user.target

systemd/crc-pullsecret.sh

+21

#!/bin/bash

set -x

source /usr/local/bin/crc-systemd-common.sh
export KUBECONFIG="/opt/kubeconfig"

wait_for_resource secret

# check if the existing pull-secret is valid; if not, add the one from /opt/crc/pull-secret
existingPsB64=$(oc get secret pull-secret -n openshift-config -o jsonpath="{['data']['\.dockerconfigjson']}")
existingPs=$(echo "${existingPsB64}" | base64 -d)

echo "${existingPs}" | jq -e '.auths'

if [[ $? != 0 ]]; then
    pullSecretB64=$(cat /opt/crc/pull-secret | base64 -w0)
    oc patch secret pull-secret -n openshift-config --type merge -p "{\"data\":{\".dockerconfigjson\":\"${pullSecretB64}\"}}"
    rm -f /opt/crc/pull-secret
fi
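The encoding step can be sanity-checked locally; a sketch using a fake pull secret (the registry name and auth value are made up):

```shell
# Sketch with a fake secret: the file content must be base64 encoded without
# line wraps (-w0) so it fits in a single JSON string in the patch payload.
ps='{"auths":{"registry.example.test":{"auth":"dXNlcjpwYXNz"}}}'   # fake, for illustration
b64=$(printf '%s' "$ps" | base64 -w0)
printf '{"data":{".dockerconfigjson":"%s"}}\n' "$b64"
```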

systemd/crc-routes-controller.service

+13

[Unit]
Description=CRC Unit starting routes controller
Requires=sys-class-net-tap0.device
After=sys-class-net-tap0.device
After=network-online.target kubelet.service gvisor-tap-vsock.service crc-dnsmasq.service

[Service]
Type=oneshot
ExecCondition=/usr/bin/bash -c "/usr/sbin/ip link show dev tap0"
ExecStart=/usr/local/bin/crc-routes-controller.sh

[Install]
WantedBy=multi-user.target

systemd/crc-routes-controller.sh

+11

#!/bin/bash

set -x

source /usr/local/bin/crc-systemd-common.sh
export KUBECONFIG=/opt/kubeconfig

wait_for_resource pods

oc apply -f /opt/crc/routes-controller.yaml

systemd/crc-systemd-common.sh

+12

# $1 is the resource to check
# $2 is an optional maximum retry count; default 20
function wait_for_resource() {
    local retry=0
    local max_retry=${2:-20}
    until oc get "$1" > /dev/null 2>&1
    do
        [ $retry == $max_retry ] && exit 1
        sleep 5
        ((retry++))
    done
}
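The retry loop can be exercised locally by shadowing `oc` and `sleep` with stub functions; both stubs are invented for this sketch (the real script talks to a cluster):

```shell
# Sketch: a stub oc fails twice, then succeeds, showing the loop return once
# the resource appears; sleep is stubbed out so the sketch runs instantly.
function wait_for_resource() {
    local retry=0
    local max_retry=${2:-20}
    until oc get "$1" > /dev/null 2>&1
    do
        [ $retry == $max_retry ] && exit 1
        sleep 5
        ((retry++))
    done
}

calls=0
function oc()    { calls=$((calls + 1)); [ "$calls" -ge 3 ]; }  # fail, fail, succeed
function sleep() { :; }                                         # no real waiting

wait_for_resource secret 5
echo "oc was called $calls times"
```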

systemd/ocp-cluster-ca.path

+10

[Unit]
Description=CRC Unit monitoring custom-ca.crt file path

[Path]
PathExists=/opt/crc/custom-ca.crt
TriggerLimitIntervalSec=1min
TriggerLimitBurst=0

[Install]
WantedBy=multi-user.target

systemd/ocp-cluster-ca.service

+10

[Unit]
Description=CRC Unit setting custom cluster ca
After=kubelet.service ocp-clusterid.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ocp-cluster-ca.sh

[Install]
WantedBy=multi-user.target

systemd/ocp-cluster-ca.sh

+26

#!/bin/bash

set -x

source /usr/local/bin/crc-systemd-common.sh
export KUBECONFIG="/opt/kubeconfig"

wait_for_resource configmap

custom_ca_path=/opt/crc/custom-ca.crt

# retry=0
# max_retry=20
# until ls ${custom_ca_path} > /dev/null 2>&1
# do
#     [ $retry == $max_retry ] && exit 1
#     sleep 5
#     ((retry++))
# done

oc create configmap client-ca-custom -n openshift-config --from-file=ca-bundle.crt=${custom_ca_path}
oc patch apiserver cluster --type=merge -p '{"spec": {"clientCA": {"name": "client-ca-custom"}}}'
oc create configmap admin-kubeconfig-client-ca -n openshift-config --from-file=ca-bundle.crt=${custom_ca_path} \
    --dry-run=client -o yaml | oc replace -f -

rm -f /opt/crc/custom-ca.crt

systemd/ocp-clusterid.service

+11

[Unit]
Description=CRC Unit setting random cluster ID
After=kubelet.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ocp-clusterid.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemd/ocp-clusterid.sh

+11

#!/bin/bash

set -x

source /usr/local/bin/crc-systemd-common.sh
export KUBECONFIG="/opt/kubeconfig"
uuid=$(uuidgen)

wait_for_resource clusterversion

oc patch clusterversion version -p "{\"spec\":{\"clusterID\":\"${uuid}\"}}" --type merge

systemd/ocp-custom-domain.service

+11

[Unit]
Description=CRC Unit setting nip.io domain for cluster
After=kubelet.service ocp-clusterid.service ocp-cluster-ca.service

[Service]
Type=oneshot
EnvironmentFile=/opt/crc/crc-cloud
ExecStart=/usr/local/bin/ocp-custom-domain.sh

[Install]
WantedBy=multi-user.target

systemd/ocp-custom-domain.sh

+47

#!/bin/bash

set -x

if [ -z "$CRC_CLOUD" ]; then
    echo "Not running in crc-cloud mode"
    exit 0
fi

source /usr/local/bin/crc-systemd-common.sh
export KUBECONFIG="/opt/kubeconfig"
export EIP=$(hostname -i)

STEPS_SLEEP_TIME=30

wait_for_resource secret

# create cert and add as secret
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout nip.key -out nip.crt -subj "/CN=$EIP.nip.io" -addext "subjectAltName=DNS:apps.$EIP.nip.io,DNS:*.apps.$EIP.nip.io,DNS:api.$EIP.nip.io"
oc create secret tls nip-secret --cert=nip.crt --key=nip.key -n openshift-config
sleep $STEPS_SLEEP_TIME

# patch ingress
cat <<EOF > ingress-patch.yaml
spec:
  appsDomain: apps.$EIP.nip.io
  componentRoutes:
  - hostname: console-openshift-console.apps.$EIP.nip.io
    name: console
    namespace: openshift-console
    servingCertKeyPairSecret:
      name: nip-secret
  - hostname: oauth-openshift.apps.$EIP.nip.io
    name: oauth-openshift
    namespace: openshift-authentication
    servingCertKeyPairSecret:
      name: nip-secret
EOF
oc patch ingresses.config.openshift.io cluster --type=merge --patch-file=ingress-patch.yaml

# patch API server to use new CA secret
oc patch apiserver cluster --type=merge -p '{"spec":{"servingCerts": {"namedCertificates":[{"names":["api.'$EIP'.nip.io"],"servingCertificate": {"name": "nip-secret"}}]}}}'

# patch image registry route
oc patch -p '{"spec": {"host": "default-route-openshift-image-registry.'$EIP'.nip.io"}}' route default-route -n openshift-image-registry --type=merge

#wait_cluster_become_healthy "authentication|console|etcd|ingress|openshift-apiserver"
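The certificate step can be tried in isolation; a sketch with a placeholder IP, writing to a temp directory instead of the working directory (requires OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
# Sketch: generate the same self-signed cert as above with a placeholder IP;
# the SANs must cover the apps wildcard and the API endpoint of the nip.io
# domain, otherwise the patched routes would serve an untrusted certificate.
EIP=192.0.2.10            # placeholder; the real script uses $(hostname -i)
dir=$(mktemp -d)
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 \
    -keyout "$dir/nip.key" -out "$dir/nip.crt" \
    -subj "/CN=$EIP.nip.io" \
    -addext "subjectAltName=DNS:apps.$EIP.nip.io,DNS:*.apps.$EIP.nip.io,DNS:api.$EIP.nip.io"
openssl x509 -in "$dir/nip.crt" -noout -ext subjectAltName
```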

systemd/ocp-growfs.service

+9

[Unit]
Description=CRC Unit to grow the root filesystem

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ocp-growfs.sh

[Install]
WantedBy=multi-user.target
