
Commit d737b1b

Remove non inclusive terms (kubevirt#709)
* Remove non inclusive terms: replace the term master with control-plane when it refers to nodes, and replace the term master with main where possible when it refers to a branch.

  Signed-off-by: fossedihelm <[email protected]>

* Use correct node selector when using k8s version < 1.20

  Signed-off-by: fossedihelm <[email protected]>
1 parent 0f9ee07 commit d737b1b

File tree

14 files changed (+31 -26 lines)


CONTRIBUTING.md (+1 -1)

@@ -2,7 +2,7 @@
 
 Welcome! As stated in the [README](README.md) this repository contains code for the virtualized clusters used in testing KubeVirt.
 
-See [the KubeVirt contribution guide](https://github.com/kubevirt/kubevirt/blob/master/CONTRIBUTING.md) for general information about how to contribute.
+See [the KubeVirt contribution guide](https://github.com/kubevirt/kubevirt/blob/main/CONTRIBUTING.md) for general information about how to contribute.
 
 ## Getting started with gocli
 

KUBEVIRTCI_LOCAL_TESTING.md (+1 -1)

@@ -57,7 +57,7 @@ export KUBEVIRT_DEPLOY_GRAFANA=true
 
 After making changes to a kubevirtci provider, it's recommended to test it locally including kubevirt e2e tests before publishing it.
 
-With the changes in place you can execute locally [`make functest`](https://github.com/kubevirt/kubevirt/blob/master/docs/getting-started.md#testing) against a cluster with kubevirt that was provisioned using `kubevirtci`.
+With the changes in place you can execute locally [`make functest`](https://github.com/kubevirt/kubevirt/blob/main/docs/getting-started.md#testing) against a cluster with kubevirt that was provisioned using `kubevirtci`.
 
 `$KUBEVIRT_DIR` is assumed to be your kubevirt path.
 

cluster-provision/README.md (+1 -1)

@@ -15,7 +15,7 @@ gocli help
 
 ### Start the cluster
 
-Start a k8s cluster which contains of one master and two nodes:
+Start a k8s cluster which contains of one control-plane and two nodes:
 
 ```bash
 gocli run --random-ports --nodes 3 --background kubevirtci/k8s-1.13.3

cluster-provision/gocli/cmd/scp.go (+1 -1)

@@ -51,7 +51,7 @@ func NewSCPCommand() *cobra.Command {
 
 	ssh := &cobra.Command{
 		Use:   "scp SRC DST",
-		Short: "scp copies files from master node to the local host",
+		Short: "scp copies files from control-plane node to the local host",
 		RunE:  scp,
 		Args:  cobra.MinimumNArgs(2),
 	}

cluster-provision/gocli/cmd/utils/ports.go (+2 -2)

@@ -8,7 +8,7 @@ import (
 )
 
 const (
-	// PortSSH contains SSH port for the master node
+	// PortSSH contains SSH port for the control-plane node
 	PortSSH = 2201
 	// PortSSHWorker contains SSH port for the worker node
 	PortSSHWorker = 2202
@@ -33,7 +33,7 @@ const (
 	//PortUploadProxy contains CDI UploadProxy port
 	PortUploadProxy = 31001
 
-	// PortNameSSH contains master node SSH port name
+	// PortNameSSH contains control-plane node SSH port name
 	PortNameSSH = "ssh"
 	// PortNameSSHWorker contains worker node SSH port name
 	PortNameSSHWorker = "ssh-worker"

cluster-provision/tools/check-image-pull-policies/testdata/cni.yaml (+2 -2)

@@ -474,7 +474,7 @@ subjects:
 # Source: calico/templates/calico-node.yaml
 # This manifest installs the calico-node container, as well
 # as the CNI plugins and network config on
-# each master and worker node in a Kubernetes cluster.
+# each control-plane and worker node in a Kubernetes cluster.
 kind: DaemonSet
 apiVersion: apps/v1
 metadata:
@@ -761,7 +761,7 @@ spec:
         # Mark the pod as a critical add-on for rescheduling.
         - key: CriticalAddonsOnly
           operator: Exists
-        - key: node-role.kubernetes.io/master
+        - key: node-role.kubernetes.io/control-plane
           effect: NoSchedule
       serviceAccountName: calico-kube-controllers
       priorityClassName: system-cluster-critical

cluster-up/cluster/K8S.md (+4 -4)

@@ -19,17 +19,17 @@ export KUBEVIRT_PROVIDER=k8s-1.21 # choose kubevirtci provider version by subd
 ## Bringing the cluster up
 
 ```bash
-export KUBEVIRT_NUM_NODES=2 # master + one node
+export KUBEVIRT_NUM_NODES=2 # control-plane + one node
 make cluster-up
 ```
 
 The cluster can be accessed as usual:
 
 ```bash
 $ cluster/kubectl.sh get nodes
-NAME     STATUS     ROLES    AGE   VERSION
-node01   NotReady   master   31s   v1.21.1
-node02   NotReady   <none>   5s    v1.21.1
+NAME     STATUS     ROLES           AGE   VERSION
+node01   NotReady   control-plane   31s   v1.21.1
+node02   NotReady   <none>          5s    v1.21.1
 ```
 
 Note: for further configuration environment variables please see [cluster-up/hack/common.sh](../hack/common.sh)

cluster-up/cluster/K8S_DEV_GUIDE.md (+2 -2)

@@ -49,8 +49,8 @@ CONTAINER ID IMAGE COMMAND CREATED
 Nodes:
 ```
 [root@modi01 kubevirtci]# oc get nodes
-NAME     STATUS   ROLES    AGE   VERSION
-node01   Ready    master   83m   v1.21.0
+NAME     STATUS   ROLES           AGE   VERSION
+node01   Ready    control-plane   83m   v1.21.0
 ```
 
 # Inner look of a deployed cluster

cluster-up/cluster/k8s-provider-common.sh (+4 -4)

@@ -104,12 +104,12 @@ function up() {
 
     kubectl="${_cli} --prefix $provider_prefix ssh node01 -- sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf"
 
-    # For multinode cluster Label all the non master nodes as workers,
-    # for one node cluster label master with 'master,worker' roles
+    # For multinode cluster Label all the non control-plane nodes as workers,
+    # for one node cluster label control-plane with 'control-plane,worker' roles
     if [ "$KUBEVIRT_NUM_NODES" -gt 1 ]; then
-        label="!node-role.kubernetes.io/master"
+        label="!node-role.kubernetes.io/control-plane"
     else
-        label="node-role.kubernetes.io/master"
+        label="node-role.kubernetes.io/control-plane"
     fi
     $kubectl label node -l $label node-role.kubernetes.io/worker=''
 
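The labeling rule in this hunk can be read in isolation: with several nodes, every non-control-plane node becomes a worker; with a single node, the control-plane node itself gets the worker role. A minimal standalone sketch (the function name is ours, not the script's):

```shell
#!/bin/bash
# Sketch of the selector choice from cluster-up/cluster/k8s-provider-common.sh.
# A leading '!' in a kubectl label selector matches nodes WITHOUT that label.
worker_label_selector() {
    local num_nodes="$1"
    if [ "$num_nodes" -gt 1 ]; then
        # multinode: label everything that is not a control-plane node
        echo '!node-role.kubernetes.io/control-plane'
    else
        # single node: the control-plane node doubles as the worker
        echo 'node-role.kubernetes.io/control-plane'
    fi
}

worker_label_selector 3
worker_label_selector 1
```

The selector string would then be passed to `kubectl label node -l "$selector" node-role.kubernetes.io/worker=''`, as the script does.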

cluster-up/cluster/kind-1.22-sriov/sriov-components/manifests/sriov-cni-daemonset.yaml (+1 -1)

@@ -22,7 +22,7 @@ spec:
       nodeSelector:
         beta.kubernetes.io/arch: amd64
       tolerations:
-      - key: node-role.kubernetes.io/master
+      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
       containers:

cluster-up/cluster/kind-1.22-sriov/sriov-components/manifests/sriovdp-daemonset.yaml (+3 -3)

@@ -29,7 +29,7 @@ spec:
       nodeSelector:
         beta.kubernetes.io/arch: amd64
       tolerations:
-      - key: node-role.kubernetes.io/master
+      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
       serviceAccountName: sriov-device-plugin
@@ -94,7 +94,7 @@ spec:
       nodeSelector:
         beta.kubernetes.io/arch: ppc64le
       tolerations:
-      - key: node-role.kubernetes.io/master
+      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
       serviceAccountName: sriov-device-plugin
@@ -158,7 +158,7 @@ spec:
       nodeSelector:
         beta.kubernetes.io/arch: arm64
       tolerations:
-      - key: node-role.kubernetes.io/master
+      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
       serviceAccountName: sriov-device-plugin
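These toleration hunks replace the taint key outright, which assumes the target cluster only carries the new key. During the Kubernetes migration away from the `master` role, clusters could carry either taint depending on version, so a manifest that must schedule on both old and new clusters can tolerate both keys. A sketch, not part of this commit:

```yaml
# Sketch (assumption: compatibility variant, not taken from this commit):
# tolerate both the legacy and the current control-plane taint so the same
# DaemonSet schedules on clusters before and after the role rename.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

Since tolerations are additive, keeping the obsolete key is harmless on clusters that no longer apply it.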

cluster-up/cluster/kind-k8s-1.19/provider.sh (+1 -1)

@@ -33,7 +33,7 @@ function up() {
     nodes=$(_kubectl get nodes -o=custom-columns=:.metadata.name | awk NF)
     for node in $nodes; do
         # Create local-volume directories, which, on other providers, are pre-provisioned.
-        # For more info, check https://github.com/kubevirt/kubevirtci/blob/master/cluster-provision/STORAGE.md
+        # For more info, check https://github.com/kubevirt/kubevirtci/blob/main/cluster-provision/STORAGE.md
         for i in {1..10}; do
             mount_disk $node $i
         done

cluster-up/cluster/kind/common.sh (+7 -2)

@@ -39,7 +39,12 @@ ETCD_IN_MEMORY_DATA_DIR="/tmp/kind-cluster-etcd"
 
 function _wait_kind_up {
     echo "Waiting for kind to be ready ..."
-    while [ -z "$(docker exec --privileged ${CLUSTER_NAME}-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/master -o=jsonpath='{.items..status.conditions[-1:].status}' | grep True)" ]; do
+    if [[ $KUBEVIRT_PROVIDER =~ kind-.*1\.1.* ]]; then
+        selector="master"
+    else
+        selector="control-plane"
+    fi
+    while [ -z "$(docker exec --privileged ${CLUSTER_NAME}-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes --selector=node-role.kubernetes.io/${selector} -o=jsonpath='{.items..status.conditions[-1:].status}' | grep True)" ]; do
         echo "Waiting for kind to be ready ..."
         sleep 10
     done
@@ -150,7 +155,7 @@ function _get_pods() {
 function _fix_node_labels() {
     # Due to inconsistent labels and taints state in multi-nodes clusters,
     # it is nessecery to remove taint NoSchedule and set role labels manualy:
-    # Master nodes might lack 'scheduable=true' label and have NoScheduable taint.
+    # Control-plane nodes might lack 'scheduable=true' label and have NoScheduable taint.
     # Worker nodes might lack worker role label.
     master_nodes=$(_get_nodes | grep -i $MASTER_NODES_PATTERN | awk '{print $1}')
     for node in ${master_nodes[@]}; do
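This is the "correct node selector for k8s version < 1.20" fix from the commit message: kind providers based on Kubernetes 1.1x still label control-plane nodes `node-role.kubernetes.io/master`, so the readiness poll must pick its selector by provider version. The check can be pulled out as a standalone helper to see the rule it encodes; a sketch (the helper name is ours):

```shell
#!/bin/bash
# Sketch of the provider-version check added to _wait_kind_up in
# cluster-up/cluster/kind/common.sh: providers whose name contains a
# 1.1x Kubernetes version keep the legacy "master" node-role label.
node_role_selector() {
    local provider="$1"
    if [[ $provider =~ kind-.*1\.1.* ]]; then
        echo "node-role.kubernetes.io/master"
    else
        echo "node-role.kubernetes.io/control-plane"
    fi
}

node_role_selector "kind-k8s-1.19"
node_role_selector "kind-1.22-sriov"
```

Note the regex matches the provider name, not the running cluster's version, so it relies on the kubevirtci naming convention for kind providers.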

cluster-up/cluster/local/README.md (+1 -1)

@@ -1,4 +1,4 @@
-# Local Kubernets Provider
+# Local Kubernetes Provider
 
 This provider allows developing against bleeding-edge Kubernetes code. The
 k8s sources will be compiled and a single-node cluster will be started.
