Commit d4d136b
K8s 1.30 provider slim s390x (kubevirt#1252)
* docs: Updated K8S.md and KUBEVIRTCI_LOCAL_TESTING.md for s390x changes
* feat: Updated the Centos9 Dockerfile and its CMD script to support the s390x arch
  - Added conditional execution in the Dockerfile based on arch
  - Updated the Dockerfile CMD script (vm.sh) to use qemu-system-s390x with the arguments supported on s390x
* feat: Added s390x default kernel args
  The defaults are copied from those set in the CentOS 9 generic cloud image for s390x; they can be read from the zipl command output under 'kernel parmline'. They are kept in this file mainly for parity with the x86-based kernel args file, which exists for configuring kernel args. The 'root' key is set to the actual device path rather than its UUID.
* feat: gocli and its Dockerfile changes to support s390x
  - In the gocli codebase, replaced the hardcoded vagrant username with a function call that returns the username based on arch.
  - Use the virtio-net-ccw device in the case of the s390x architecture, as virtio-net-pci isn't supported.
  - Skip provisioning sound cards for s390x, as they are not supported.
  - Updated the method signature of waitForVMToBeUp() to take the docker client as an argument, so that it can be reused outside run.go.
  - Bumped the docker-registry container image version from 2.7 to 2.8.2, as 2.7 did not support s390x.
* feat: K8S 1.30 provider provisioning script changes to support s390x
  - Updated the username to be obtained conditionally based on arch.
  - Removed the CPU-manager-related kubelet arguments, as they aren't supported on s390x.
  - Install openvswitch for s390x from kojipkgs.fedoraproject.org, as it isn't available for s390x from the default CentOS repos.
  - Explicitly install the gettext rpm to make the envsubst command available in s390x environments.
* feat: Publish multiarch manifests for provider-related container images
  Multi-arch images are enabled for gocli, centos9, and the k8s providers.
* feat: Updated cluster-up/check.sh for s390x-specific checks
  On an IBM Z system (s390x), huge-page backing storage and nested virtualization cannot be used at the same time.
  Ref: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/creating-nested-virtual-machines_configuring-and-managing-virtualization
* feat: Updated check-cluster-up.sh and the scripts beneath it to work for s390x and slim mode
  The s390x provider currently supports only slim mode, which does not enable optional components such as CNAO, ISTIO, PROMETHEUS, and CDI, and does not use extra-pre-pull-images, so all of those components are skipped in slim mode. Arch-specific manifest/binary files are used as well.
* bug: Updated the default ARTIFACTS env var to work for local runs
  The cp command copying to the ARTIFACTS dir was failing in local runs with an error that the source and destination are the same file, so the value of ARTIFACTS is now changed when it is not defined (the local-run case), unlike in prow jobs where it is always defined.
* feat: Updated the multi-arch centos9 base image with changes from this PR
  Once the PR changes are merged, the k8s 1.30 provider build needs the latest centos9 base image, which is multi-arch, so that the provider image can also be built for multiple arches (x86, s390x).

Signed-off-by: chandramerla <[email protected]>
1 parent 6c4fa8f · commit d4d136b

24 files changed: +407 −138 lines

K8S.md (+3 −2)

@@ -48,13 +48,14 @@ make cluster-up
 # Attach to node01 console
 docker exec -it ${KUBEVIRT_PROVIDER}-node01 screen /dev/pts/0
 ```
-Use `vagrant:vagrant` to login.
+Use `vagrant:vagrant` for x86 and `root:root` for s390x to log in.
 Note: it is sometimes `/dev/pts/1` or `/dev/pts/2`, try them in case you don't get a prompt.
 
 Make sure you don't leave open screens, else the next screen will be messed up.
 `screen -ls` shows the open screens.
-`screen -XS <ID> quit` closes an open session.
+`screen -XS <session-id> quit` closes an open session.
 Close all zombies and shutdown screen gracefully if you plan to open a new one instead.
+Ctrl+A then Ctrl+D detaches your screen session, and `screen -r <session-id>` reattaches a detached session.
 
 ## Container image cache
 In order to have a local cache of container images:
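For quick reference, the console workflow these K8S.md lines describe, end to end (session IDs come from `screen -ls`):

```bash
# Attach to the node01 console (login vagrant:vagrant on x86, root:root on s390x)
docker exec -it ${KUBEVIRT_PROVIDER}-node01 screen /dev/pts/0
# Detach with Ctrl+A then Ctrl+D, then later:
screen -ls                      # list open sessions
screen -r <session-id>          # reattach to a detached session
screen -XS <session-id> quit    # close a session
```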

KUBEVIRTCI_LOCAL_TESTING.md (+10 −6)

@@ -21,7 +21,7 @@ cd $KUBEVIRTCI_DIR
 
 ```bash
 # Build a provider. This includes starting it with cluster-up for verification and shutting it down for cleanup.
-(cd cluster-provision/k8s/1.27; ../provision.sh)
+(cd cluster-provision/k8s/1.30; ../provision.sh)
 ```
 
 Note:
@@ -34,7 +34,7 @@ please use `export BYPASS_PMAN_CHANGE_CHECK=true` to bypass provision-manager ch
 # set local provision test flag (mandatory)
 export KUBEVIRTCI_PROVISION_CHECK=1
 ```
-
+This ensures the container registry is set to quay.io and the container suffix to `:latest`.
 If `KUBEVIRTCI_PROVISION_CHECK` is not used,
 you can set `KUBEVIRTCI_CONTAINER_REGISTRY` (default: `quay.io`), `KUBEVIRTCI_CONTAINER_ORG` (default: `kubevirtci`) and `KUBEVIRTCI_CONTAINER_SUFFIX` (default: according to the gocli tag),
 in order to use a custom image.
@@ -134,12 +134,16 @@ For that we have phased mode.
 Usage: export the required mode, i.e. `export PHASES=linux` or `export PHASES=k8s`,
 and then run the provision. The full flow will be:
 
-`export PHASES=linux; (cd cluster-provision/k8s/1.21; ../provision.sh)`
-`export PHASES=k8s; (cd cluster-provision/k8s/1.21; ../provision.sh)`
+`export PHASES=linux; (cd cluster-provision/k8s/1.30; ../provision.sh)`
+`export PHASES=k8s; (cd cluster-provision/k8s/1.30; ../provision.sh)`
 Run the `k8s` step as much as needed. It reuses the intermediate image that was created
-by the `linux` phase.
+by the `linux` phase.
+Note:
+1. By default, when you run the `k8s` phase alone, it uses the centos9 image specified in cluster-provision/k8s/base-image, not the one built locally in the `linux` phase. To make the `k8s` phase use the locally built centos9 image, update cluster-provision/k8s/base-image with the locally built image name and tag (default: quay.io/kubevirtci/centos9:latest).
+2. Also note that if you run both `linux,k8s` phases together, the intermediate container image generated after the linux phase is not saved. So, to produce the centos9 image required for the `k8s` stage, you have to run the `linux` phase alone.
+
 Once you are done, either check the cluster manually, or use:
-`export PHASES=k8s; export CHECK_CLUSTER=true; (cd cluster-provision/k8s/1.21; ../provision.sh)`
+`export PHASES=k8s; export CHECK_CLUSTER=true; (cd cluster-provision/k8s/1.30; ../provision.sh)`
 
 ### provision without pre-pulling images
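Pulling the phased-mode notes above together, a local end-to-end flow might look like this (a sketch; the default local tag quay.io/kubevirtci/centos9:latest is taken from note 1):

```bash
# mandatory for local provision testing
export KUBEVIRTCI_PROVISION_CHECK=1

# run the linux phase alone so the intermediate centos9 image is kept
export PHASES=linux; (cd cluster-provision/k8s/1.30; ../provision.sh)

# point the k8s phase at the locally built base image
echo "quay.io/kubevirtci/centos9:latest" > cluster-provision/k8s/base-image

# iterate on the k8s phase as often as needed, then verify the cluster
export PHASES=k8s; (cd cluster-provision/k8s/1.30; ../provision.sh)
export PHASES=k8s; export CHECK_CLUSTER=true; (cd cluster-provision/k8s/1.30; ../provision.sh)
```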

cluster-provision/centos9/Dockerfile (+41 −18)

@@ -1,33 +1,56 @@
+FROM quay.io/fedora/fedora:39 AS base
 
-FROM quay.io/kubevirtci/fedora@sha256:e3a6087f62f288571db14defb7e0e10ad7fe6f973f567b0488d3aac5e927035a
+RUN dnf -y install jq iptables iproute dnsmasq qemu socat openssh-clients screen bind-utils tcpdump iputils libguestfs-tools-c && dnf clean all
 
-ARG centos_version
 
-RUN dnf -y install jq iptables iproute dnsmasq qemu openssh-clients screen bind-utils tcpdump iputils && dnf clean all
+FROM base AS imageartifactdownload
+
+ARG BUILDARCH
+
+ARG centos_version
 
 WORKDIR /
 
-COPY vagrant.key /vagrant.key
+RUN echo "Centos9 version $centos_version"
 
-RUN chmod 700 vagrant.key
+COPY scripts/download_box.sh /
 
-ENV DOCKERIZE_VERSION v0.6.1
+RUN if test "$BUILDARCH" != "s390x"; then \
+      /download_box.sh https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-$centos_version.x86_64.vagrant-libvirt.box && \
+      curl -L -o /initramfs-amd64.img http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/initrd.img && \
+      curl -L -o /vmlinuz-amd64 http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/vmlinuz; \
+    else \
+      /download_box.sh https://cloud.centos.org/centos/9-stream/s390x/images/CentOS-Stream-GenericCloud-9-$centos_version.s390x.qcow2 && \
+      # Access virtual machine disk images directly by using LIBGUESTFS_BACKEND=direct, instead of libvirt
+      export LIBGUESTFS_BACKEND=direct && \
+      guestfish --ro --add box.qcow2 --mount /dev/sda1:/ ls /boot/ | grep -E '^vmlinuz-|^initramfs-' | xargs -I {} guestfish --ro --add box.qcow2 -i copy-out /boot/{} / ; \
+    fi
 
-RUN curl -LO https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
-    && tar -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
-    && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
-    && chmod u+x dockerize \
-    && mv dockerize /usr/local/bin/
 
-COPY scripts/download_box.sh /
+FROM base AS nodecontainer
 
-RUN echo "Centos9 version $centos_version"
+ARG BUILDARCH
+
+WORKDIR /
+
+COPY vagrant.key /vagrant.key
+
+RUN chmod 700 vagrant.key
 
-ENV CENTOS_URL https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-$centos_version.x86_64.vagrant-libvirt.box
+ENV DOCKERIZE_VERSION=v0.8.0
 
-RUN /download_box.sh ${CENTOS_URL}
+RUN if test "$BUILDARCH" != "s390x"; then \
+      curl -L -o dockerize-linux-$BUILDARCH.tar.gz https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz; \
+    else \
+      curl -L -o dockerize-linux-$BUILDARCH.tar.gz https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-s390x-$DOCKERIZE_VERSION.tar.gz; \
+    fi && \
+    tar -xzvf dockerize-linux-$BUILDARCH.tar.gz && \
+    rm dockerize-linux-$BUILDARCH.tar.gz && \
+    chmod u+x dockerize && \
+    mv dockerize /usr/local/bin/
 
-RUN curl -L -o /initrd.img http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/initrd.img
-RUN curl -L -o /vmlinuz http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/pxeboot/vmlinuz
+COPY --from=imageartifactdownload /box.qcow2 box.qcow2
+COPY --from=imageartifactdownload /vmlinuz-* /vmlinuz
+COPY --from=imageartifactdownload /initramfs-* /initrd.img
 
-COPY scripts/* /
+COPY scripts/* /
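Since the Dockerfile is now multi-stage, the artifact-download stage can be built and inspected on its own, which helps when debugging the s390x guestfish extraction. A sketch (the tag name and `<version>` placeholder are illustrative):

```bash
# Build only the imageartifactdownload stage
docker build --target imageartifactdownload \
    --build-arg BUILDARCH=$(uname -m) \
    --build-arg centos_version=<version> \
    -t centos9-artifacts .
# Check that the disk image and kernel artifacts were extracted
docker run --rm centos9-artifacts sh -c 'ls -l /box.qcow2 /vmlinuz-* /initramfs-*'
```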

cluster-provision/centos9/build.sh (+1 −1)

@@ -4,4 +4,4 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 
 centos_version="$(cat $DIR/version | tr -d '\n')"
 
-docker build --build-arg centos_version=$centos_version . -t quay.io/kubevirtci/centos9
+docker build --build-arg BUILDARCH=$(uname -m) --build-arg centos_version=$centos_version . -t quay.io/kubevirtci/centos9
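Worth noting as a design point: `uname -m` reports `x86_64` on Intel hosts rather than Docker's usual `amd64` spelling, which is harmless here because the Dockerfile only ever compares `$BUILDARCH` against `s390x`.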

cluster-provision/centos9/scripts/download_box.sh (+10 −3)

@@ -3,6 +3,13 @@
 set -e
 set -o pipefail
 
-curl -L $1 | tar -zxvf - box.img
-qemu-img convert -O qcow2 box.img box.qcow2
-rm box.img
+ARCH=$(uname -m)
+
+# For the s390x architecture, the generic cloud (qcow2) image is used directly instead of a vagrant box image.
+if [ "$ARCH" == "s390x" ]; then
+  curl -L $1 -o box.qcow2
+else
+  curl -L $1 | tar -zxvf - box.img
+  qemu-img convert -O qcow2 box.img box.qcow2
+  rm box.img
+fi
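Either way, the script's contract stays the same: given a URL, it leaves a `box.qcow2` in the working directory. Usage with the image URLs from the Dockerfile above (`<version>` stands in for the contents of the version file):

```bash
# x86_64: vagrant-libvirt box, extracted and converted to qcow2
./download_box.sh https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-Vagrant-9-<version>.x86_64.vagrant-libvirt.box

# s390x: generic cloud image, saved as box.qcow2 as-is
./download_box.sh https://cloud.centos.org/centos/9-stream/s390x/images/CentOS-Stream-GenericCloud-9-<version>.s390x.qcow2
```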
cluster-provision/centos9/scripts/kernel.s390x.args (+1, new file; the name is inferred from the /kernel.s390x.args reference in vm.sh below)

@@ -0,0 +1 @@
+root=/dev/vda1 ro no_timer_check console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
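Per the commit message, these defaults mirror what the CentOS 9 generic cloud image configures for s390x (readable on a running guest via the zipl 'kernel parmline' output), with `root=` pinned to the device path `/dev/vda1` rather than a UUID, for parity with the x86 kernel args file.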

cluster-provision/centos9/scripts/vm.sh (+87 −13)
@@ -7,17 +7,21 @@ MEMORY=3096M
 CPU=2
 NUMA=1
 QEMU_ARGS=""
+QEMU_MONITOR_ARGS=""
 KERNEL_ARGS=""
 NEXT_DISK=""
 BLOCK_DEV=""
 BLOCK_DEV_SIZE=""
+VM_USER=$( [ "$(uname -m)" = "s390x" ] && echo "cloud-user" || echo "vagrant" )
+VM_USER_SSH_KEY="vagrant.key"
 
 while true; do
   case "$1" in
   -m | --memory ) MEMORY="$2"; shift 2 ;;
   -a | --numa ) NUMA="$2"; shift 2 ;;
   -c | --cpu ) CPU="$2"; shift 2 ;;
   -q | --qemu-args ) QEMU_ARGS="${2}"; shift 2 ;;
+  -qm | --qemu-monitor-args ) QEMU_MONITOR_ARGS="${2}"; shift 2 ;;
   -k | --additional-kernel-args ) KERNEL_ARGS="${2}"; shift 2 ;;
   -n | --next-disk ) NEXT_DISK="$2"; shift 2 ;;
   -b | --block-device ) BLOCK_DEV="$2"; shift 2 ;;
@@ -39,6 +43,12 @@ function calc_next_disk {
   if [ -n "$NEXT_DISK" ]; then next=${NEXT_DISK}; fi
   if [ "$last" = "00" ]; then
     last="box.qcow2"
+    # Customize qcow2 image using virt-sysprep (with KVM accelerator)
+    if [ "$(uname -m)" = "s390x" ]; then
+      export LIBGUESTFS_BACKEND=direct
+      export LIBGUESTFS_BACKEND_SETTINGS=force_kvm
+      virt-sysprep -a box.qcow2 --run-command 'useradd -m cloud-user' --append '/etc/cloud/cloud.cfg:runcmd:' --append '/etc/cloud/cloud.cfg: - hostnamectl set-hostname ""' --root-password password:root --ssh-inject cloud-user:string:"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
+    fi
   else
     last=$(printf "/disk%02d.qcow2" $last)
   fi
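The single-line virt-sysprep invocation above packs several guest customizations together. Spelled out one flag per line (same flags as in the diff; the long SSH key is elided here as `<vagrant-insecure-pubkey>` for readability):

```bash
# LIBGUESTFS_BACKEND=direct runs the libguestfs appliance directly (no libvirt);
# LIBGUESTFS_BACKEND_SETTINGS=force_kvm insists on the KVM accelerator.
virt-sysprep -a box.qcow2 \
  --run-command 'useradd -m cloud-user' \
  --append '/etc/cloud/cloud.cfg:runcmd:' \
  --append '/etc/cloud/cloud.cfg: - hostnamectl set-hostname ""' \
  --root-password password:root \
  --ssh-inject 'cloud-user:string:<vagrant-insecure-pubkey>'
# --run-command  : create the cloud-user account that ssh.sh/gocli log in as
# --append (x2)  : seed a cloud-init runcmd that clears the baked-in hostname
# --root-password: sets the root:root console login documented in K8S.md
# --ssh-inject   : authorizes the well-known vagrant public key for cloud-user
```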
@@ -51,7 +61,7 @@ cat >/usr/local/bin/ssh.sh <<EOL
 #!/bin/bash
 set -e
 dockerize -wait tcp://192.168.66.1${n}:22 -timeout 300s &>/dev/null
-ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@192.168.66.1${n} -i vagrant.key -p 22 -q \$@
+ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${VM_USER}@192.168.66.1${n} -i ${VM_USER_SSH_KEY} -p 22 -q \$@
 EOL
 chmod u+x /usr/local/bin/ssh.sh
 echo "done" >/ssh_ready
@@ -195,15 +205,79 @@ if [ "${NUMA}" -gt 1 ]; then
   done
 fi
 
-exec qemu-system-x86_64 -enable-kvm -drive format=qcow2,file=${next},if=virtio,cache=unsafe ${block_dev_arg} \
-  -device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:${n} \
-  -netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
-  -device virtio-rng-pci \
-  -initrd /initrd.img \
-  -kernel /vmlinuz \
-  -append "$(cat /kernel.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}" \
-  -vnc :${n} -cpu host,migratable=no,+invtsc -m ${MEMORY} -smp ${CPU} ${numa_arg} \
-  -serial pty -M q35,accel=kvm,kernel_irqchip=split \
-  -device intel-iommu,intremap=on,caching-mode=on -device intel-hda -device hda-duplex -device AC97 \
-  -uuid $(cat /proc/sys/kernel/random/uuid) \
-  ${QEMU_ARGS}
+if [ "$(uname -m)" == "s390x" ]; then
+  # As per https://www.qemu.org/docs/master/system/s390x/bootdevices.html#booting-without-bootindex-parameter
+  # -drive if=virtio can't be specified with bootindex for s390x
+  qemu_system_cmd="qemu-system-s390x \
+    -enable-kvm \
+    -drive format=qcow2,file=${next},if=none,cache=unsafe,id=drive1 ${block_dev_arg} \
+    -device virtio-blk,drive=drive1,bootindex=1 \
+    -device virtio-net-ccw,netdev=network0,mac=52:55:00:d1:55:${n} \
+    -netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
+    -device virtio-rng \
+    -initrd /initrd.img \
+    -kernel /vmlinuz \
+    -append \"$(cat /kernel.s390x.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}\" \
+    -vnc :${n} \
+    -cpu host \
+    -m ${MEMORY} \
+    -smp ${CPU} ${numa_arg} \
+    -serial pty \
+    -machine s390-ccw-virtio,accel=kvm \
+    -uuid $(cat /proc/sys/kernel/random/uuid) \
+    -monitor unix:/tmp/qemu-monitor.sock,server,nowait \
+    ${QEMU_ARGS}"
+else
+  # Docs: https://www.qemu.org/docs/master/system/invocation.html
+  qemu_system_cmd="qemu-system-x86_64 \
+    -enable-kvm \
+    -drive format=qcow2,file=${next},if=virtio,cache=unsafe ${block_dev_arg} \
+    -device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:${n} \
+    -netdev tap,id=network0,ifname=tap${n},script=no,downscript=no \
+    -device virtio-rng-pci \
+    -initrd /initrd.img \
+    -kernel /vmlinuz \
+    -append \"$(cat /kernel.args) $(cat /additional.kernel.args) ${KERNEL_ARGS}\" \
+    -vnc :${n} \
+    -cpu host,migratable=no,+invtsc \
+    -m ${MEMORY} \
+    -smp ${CPU} ${numa_arg} \
+    -serial pty \
+    -machine q35,accel=kvm,kernel_irqchip=split \
+    -device intel-iommu,intremap=on,caching-mode=on \
+    -device intel-hda \
+    -device hda-duplex \
+    -device AC97 \
+    -uuid $(cat /proc/sys/kernel/random/uuid) \
+    -monitor unix:/tmp/qemu-monitor.sock,server,nowait \
+    ${QEMU_ARGS}"
+fi
+
+PID=0
+eval "nohup $qemu_system_cmd &"
+PID=$!
+
+# Function to check if the QEMU monitor socket is ready
+is_qemu_monitor_ready() {
+  socat - UNIX-CONNECT:/tmp/qemu-monitor.sock < /dev/null > /dev/null 2>&1
+}
+
+# Wait for the QEMU monitor socket to be ready
+elapsed=0
+while ! is_qemu_monitor_ready; do
+  if [ $elapsed -ge 60 ]; then
+    echo "QEMU monitor socket did not become available within 60 seconds."
+    exit 1
+  fi
+  sleep 1
+  elapsed=$((elapsed + 1))
+done
+echo "QEMU monitor socket is ready."
+
+if [[ -n "$QEMU_MONITOR_ARGS" ]]; then
+  IFS=';' read -ra ADDR <<< "$QEMU_MONITOR_ARGS"
+  for QEMU_MONITOR_CMD in "${ADDR[@]}"; do
+    echo "$QEMU_MONITOR_CMD" | socat - UNIX-CONNECT:/tmp/qemu-monitor.sock
+  done
+fi
+
+wait $PID
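With the monitor socket in place, the new `-qm/--qemu-monitor-args` flag takes a semicolon-separated list of QEMU human-monitor commands and replays them once the socket is up. A hypothetical invocation (the monitor commands shown are illustrative; `info status` and `info network` are standard HMP commands):

```bash
# Boot the node VM, then query VM state and NIC wiring over the monitor socket
./vm.sh --memory 4096M --cpu 4 --qemu-monitor-args 'info status;info network'
```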

cluster-provision/gocli/Makefile (+2 −1)

@@ -2,6 +2,7 @@ SHELL := /bin/bash
 
 IMAGES_FILE ?= images.json
 KUBEVIRTCI_IMAGE_REPO ?= quay.io/kubevirtci
+GOARCH ?= $$(uname -m | grep -q s390x && echo s390x || echo amd64)
 
 export GO111MODULE=on
 export GOPROXY=direct
@@ -23,7 +24,7 @@ test-gocli:
 
 .PHONY: gocli
 cli:
-	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GO) build -ldflags "-X 'kubevirt.io/kubevirtci/cluster-provision/gocli/images.SUFFIX=:$(KUBEVIRTCI_TAG)'" -o $(BIN_DIR)/cli ./cmd/cli
+	CGO_ENABLED=0 GOOS=linux GOARCH=${GOARCH} $(GO) build -ldflags "-X 'kubevirt.io/kubevirtci/cluster-provision/gocli/images.SUFFIX=:$(KUBEVIRTCI_TAG)'" -o $(BIN_DIR)/cli ./cmd/cli
 .PHONY: fmt
 fmt:
 	$(GO) fmt ./cmd/...
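Because `GOARCH` is assigned with `?=`, the auto-detection can be overridden from the command line, e.g. for a cross-build (usage sketch):

```bash
make cli                 # GOARCH auto-detected from uname -m (amd64 or s390x)
make cli GOARCH=s390x    # explicit cross-build for s390x
```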

cluster-provision/gocli/cmd/provision.go (+9 −6)

@@ -6,6 +6,7 @@ import (
 	"os"
 	"os/signal"
 	"path/filepath"
+	"runtime"
 	"strconv"
 	"strings"
 
@@ -23,6 +24,7 @@ import (
 	containers2 "kubevirt.io/kubevirtci/cluster-provision/gocli/containers"
 
 	"kubevirt.io/kubevirtci/cluster-provision/gocli/cmd/utils"
+	"kubevirt.io/kubevirtci/cluster-provision/gocli/pkg/libssh"
 	"kubevirt.io/kubevirtci/cluster-provision/gocli/docker"
 )
 
@@ -51,6 +53,7 @@ func NewProvisionCommand() *cobra.Command {
 
 func provisionCluster(cmd *cobra.Command, args []string) (retErr error) {
 	var base string
+	sshUser := libssh.GetUserByArchitecture(runtime.GOARCH)
 	packagePath := args[0]
 	versionBytes, err := os.ReadFile(filepath.Join(packagePath, "version"))
 	if err != nil {
@@ -228,13 +231,13 @@ func provisionCluster(cmd *cobra.Command, args []string) (retErr error) {
 	}
 
 	// Wait for ssh.sh script to exist
+	logrus.Info("Wait for ssh.sh script to exist")
 	err = _cmd(cli, nodeContainer(prefix, nodeName), "while [ ! -f /ssh_ready ] ; do sleep 1; done", "checking for ssh.sh script")
 	if err != nil {
 		return err
 	}
 
-	// Wait for the VM to be up
-	err = _cmd(cli, nodeContainer(prefix, nodeName), "ssh.sh echo VM is up", "waiting for node to come up")
+	err = waitForVMToBeUp(cli, prefix, nodeName)
 	if err != nil {
 		return err
 	}
@@ -252,21 +255,21 @@ func provisionCluster(cmd *cobra.Command, args []string) (retErr error) {
 	if err != nil {
 		return err
 	}
-	err = _cmd(cli, nodeContainer(prefix, nodeName), "if [ -f /scripts/extra-pre-pull-images ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/extra-pre-pull-images vagrant@192.168.66.101:/tmp/extra-pre-pull-images; fi", "copying /scripts/extra-pre-pull-images if existing")
+	err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("if [ -f /scripts/extra-pre-pull-images ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/extra-pre-pull-images %s@192.168.66.101:/tmp/extra-pre-pull-images; fi", sshUser), "copying /scripts/extra-pre-pull-images if existing")
 	if err != nil {
 		return err
 	}
-	err = _cmd(cli, nodeContainer(prefix, nodeName), "if [ -f /scripts/fetch-images.sh ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/fetch-images.sh vagrant@192.168.66.101:/tmp/fetch-images.sh; fi", "copying /scripts/fetch-images.sh if existing")
+	err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("if [ -f /scripts/fetch-images.sh ]; then scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/fetch-images.sh %s@192.168.66.101:/tmp/fetch-images.sh; fi", sshUser), "copying /scripts/fetch-images.sh if existing")
 	if err != nil {
 		return err
 	}
 
-	err = _cmd(cli, nodeContainer(prefix, nodeName), "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key vagrant@192.168.66.101 'mkdir -p /tmp/ceph /tmp/cnao /tmp/nfs-csi /tmp/nodeports /tmp/prometheus /tmp/whereabouts /tmp/kwok'", "Create required manifest directories before copy")
+	err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key %s@192.168.66.101 'mkdir -p /tmp/ceph /tmp/cnao /tmp/nfs-csi /tmp/nodeports /tmp/prometheus /tmp/whereabouts /tmp/kwok'", sshUser), "Create required manifest directories before copy")
 	if err != nil {
 		return err
 	}
 	// Copy manifests to the VM
-	err = _cmd(cli, nodeContainer(prefix, nodeName), "scp -r -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/manifests/* vagrant@192.168.66.101:/tmp", "copying manifests to the VM")
+	err = _cmd(cli, nodeContainer(prefix, nodeName), fmt.Sprintf("scp -r -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i vagrant.key -P 22 /scripts/manifests/* %s@192.168.66.101:/tmp", sshUser), "copying manifests to the VM")
 	if err != nil {
 		return err
 	}
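The hunk calls `libssh.GetUserByArchitecture`, whose body is not part of this diff. Going by the commit message (vagrant user on x86, cloud-user on s390x), a plausible sketch of that helper:

```go
package libssh

// GetUserByArchitecture returns the default SSH user for a node VM:
// s390x nodes boot the CentOS generic cloud image (cloud-user), while
// all other architectures boot the vagrant box image (vagrant).
// Sketch inferred from the commit message, not the actual implementation.
func GetUserByArchitecture(arch string) string {
	if arch == "s390x" {
		return "cloud-user"
	}
	return "vagrant"
}
```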
